UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Broccoli AI at its best 🥦 | We discussed “🥦 Broccoli AI” a couple weeks ago, which is the kind of AI that is actually good/healthy for a real world business. Bengsoon Chuah, a data scientist working in the energy sector, joins us to discuss developing and deploying NLP pipelines in that environment. We talk about good/healthy ways of introducing AI in a company that uses on-prem infrastructure, has few data science professionals, and operates in high risk environments.
Leave us a comment (https://changelog.com/practicalai/280/discuss)
Changelog++ (https://changelog.com/++) members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Intel Innovation 2024 (https://intel.com/innovation?regcode=CMCCHL&utm_campaign=Changelog) – Early bird registration is now open for Intel Innovation 2024 in San Jose, CA! Learn more (https://intel.com/innovation?regcode=CMCCHL&utm_campaign=Changelog) OR register (https://reg.oneventseries.intel.com/flow/intel/innv2024/InnovationReg?regcode=CMCCHL&utm_campaign=Changelog)
• Motific (https://www.motific.ai/) – Accelerate your GenAI adoption journey. Rapidly deliver trustworthy GenAI assistants. Learn more at motific.ai (https://www.motific.ai/)
Featuring:
• Bengsoon Chuah – Twitter (https://twitter.com/bengsoon) , GitHub (https://github.com/bengsoon) , LinkedIn (https://www.linkedin.com/in/bengsoon)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• MLFlow (https://mlflow.org/)
• Prefect (https://www.prefect.io)
• DuckDB (https://duckdb.org/)
• Argilla (https://argilla.io/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-280.md)

Transcript:

[Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

[Music] What's up, friends? Intel Innovation 2024 is right around the corner. Accelerate the future: registration is now open, and it takes place September 24th and 25th in San Jose, California. This event is all about you, the developer, the community, and the critical role you play in tackling the toughest challenges across the industry. Ignite your passion for AI and beyond, grow your skills to maximize your impact, and network with your peers as they unleash the next wave of advancements in technology. Here's what you can expect: understand the emerging innovations and trends in dev tools, languages, frameworks, and technologies in AI and beyond, to empower you and the solutions you're building. Get in-depth technical experience in hands-on workshops, labs, meetups, and hackathons to collaborate and solve problems in real time. You can explore featured partner and Intel solutions; they have partners there, startups, customers, and Intel is showcasing the latest in products, services, and solutions across keynotes, tech sessions, and the show floor to help you meet your development needs. Collaborate with experts, learn, and have fun. Engage in interactive sessions to connect, get certified, gain unique ideas and perspectives, build long-lasting networks, and of course have fun and get inspired. Hear from leading industry experts, technologists, startup entrepreneurs, and fellow developers, along with Intel leadership, CEO Pat Gelsinger and CTO Greg Lavender, as they take you
through the latest advancements in technology. Don't miss this chance to be at the forefront of innovation. Take advantage of early bird pricing, available now until August 2nd. Register using the link in our show notes, or to learn more, go to intel.com/innovation. Once more, that's intel.com/innovation, or go to the show notes and click that [Music] link.

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am founder and CEO at Prediction Guard, and this is a pretty special and fun episode for me, because I get to kick back with an old friend of mine. We went to the same university, for those that haven't heard of it, the Colorado School of Mines in Golden, Colorado. Of course, a shout-out to all the Orediggers out there who are listening. But yeah, we have with us today Bengsoon Chuah, who's a data scientist now working in the energy sector. I was really fascinated, talking over the years with Bengsoon, about all the things he's doing, in particular his approach and learnings around active learning and NLP models, and I wanted to invite him on the show to talk through some of that and learn a little bit from him. So welcome to the show. How are you doing?

Hi Daniel, thanks for having me. Yeah, it's been a while since the days at the Colorado School of Mines.

And now you're working as a data scientist in the energy sector, and also working in Asia, which is super cool. I'm wondering if you could give us a little bit of a sense of some of the unique things about doing data science and machine learning in the context of the energy sector, in the context of an actual enterprise, real-world kind of situation. Recently we've been talking a lot about all of these GenAI models and APIs and such, and that is super cool, but there's also a lot of on-the-ground work going on in data
science that maybe looks quite a bit different than that.

Yeah, thanks. So I work in the energy sector, and it's pretty much a traditional type of sector. A lot of the companies, as you go around, at least in Asia, or at least over here where I'm at, do not actually even have things like cloud services or subscriptions and stuff like that, due to different reasons. But at the same time, there's an appetite for machine learning, AI, data, and all of those things. You see people talk about GenAI as well, but the things that I've noticed, at least for me personally, that have really brought a lot of value are the kind of thing you guys were talking about in the previous episode: broccoli AI.

Broccoli AI. I love it.

Yeah, not so sexy, but still really important, and it really brings value. Particularly, I guess what we're going to talk about is active learning in the context of NLP, natural language processing. I think that's a pretty exciting place to be in. It kind of translates into GenAI to a certain degree, but at the same time, at least in the context that I'm in, in the sector that I'm in, we do not have those cloud services ready for us, right? So you have to figure out ways to bring that about in an on-prem server VM. How do you work around that, and how do you actually bring cloud-native, modern technologies into a traditional kind of structure?

Yeah, and at least my impression is that, even though there's not, whether it be for security reasons or legacy reasons or just connectivity, there's not the type of connection to cloud services that others might be working with, at the same time this sort of sector, and maybe other related verticals, have been sort of
data-driven to some degree for some time. I don't know if you could speak to that: the types of data that people have been processing or storing, or that are available in those contexts.

Yeah, that's a good point, because a lot of these traditional industries, at least in the energy sector that we see, have sensors that are constantly flowing in data all the time. The data is there; we are collecting data. Then, going a little bit deeper, you find that a lot of us have been collecting a lot of unstructured data too. In my experience, at least, what was happening was that when I came in, I was pretty much the only data scientist, and so I had to make my existence justified, in a sense. I knew that I had to do something to bring value within this organization, so that I'd be able to prove that, hey, look, data science actually does work and does bring value. But the quickest, I guess the lowest-hanging fruit that I found, at least in the context of where I'm at, is the whole unstructured-data problem. We have been collecting thousands and thousands, if not hundreds of thousands, of unstructured records, but there has never been a way to really analyze that at scale. People have been analyzing it, they've been able to do some sort of human analysis on it, but there's never been someone who could say, hey, what's been happening in the past 10 years that we've been collecting all this data? What's it telling you? Nobody has really been able to do that. So I thought, okay, maybe that could be one thing that we could actually bring in: you have all this data that's ready for you, that has all the insight, all the information, locked up and waiting to be unearthed, pretty much waiting there for us to just extract it or mine it. So I did a quick
PoC for one of the departments in the company, and said, hey guys, you've been doing the Tableau and Power BI kind of thing. You've been able to do stuff with your structured data, right? You've been able to plot it on beautiful graphs and analyze it in that sense. What about all the unstructured data that you've been collecting over the years? And they said, there's no way we could do it. We've got thousands of records; there's no possible way to put that in Power BI. So I said, well, maybe we could explore ways to actually use machine learning to help you scale that analysis, from an NLP standpoint. Thankfully, they bought it, and we took off from there. It was pretty cool. It was a journey, for sure, a journey of learning.

Yeah, so when you say unstructured data, give people a sense of the kinds of files, not the specifics, but, you know, I'm imagining a file store with some type of files in it that contain something. Give people a sense of that.

Yeah, for sure. Unstructured data, typically we think about it as text, right? It could be text, or it could be something that's just out of structure, like Microsoft Word docs. We have been storing all of that data in Microsoft SharePoint. What I've seen is that this unstructured data usually comes in a tabulated form: you have a table that is collecting all of the structured data, but alongside it there's always that comment, that additional field you need to actually tell the story of what you're collecting. Those are the ones that I think always get left out. It's very typical in any kind of industry: you have tabulated data that collects sensor data and everything, or reports of what's been
happening, and those are structured, but then there's also a column that brings in some sort of remarks, you know, observations, or actions taken, whatever it is.

Comments.

Comments, yeah. But more often than not, we just kind of skim through them and don't really pay too much attention to them. At least within the data set we were looking at, we found that there was a lot more insight in that data than in the structured data they'd been collecting. I'm talking about safety data. We've been collecting safety reports every single day, a couple of hundred sometimes, or tens of them, every day, over the years. This data holds some sort of insight, right? It brings insight into the safety conditions of our operations. People usually look at the categories, what the reports are being filed under, but when you look at the categories, you realize that sometimes they don't really jive with the stories being told in the unstructured data. That's where we felt it was going to bring additional insight, the unstructured data that we see.

Yeah, and so we've talked a little bit about the data that was there, the potential insights in it around safety, and maybe other insights, as you came into this industry and were getting an understanding. Let's say that you built the coolest data science app out there, with a cool model in it to do some analysis. What is the reality of how that would have to run? What does "in production" mean in your context?

Yeah, so many things. Just starting from the data: we don't even have labeled data to start with. Say you want a classification app or model; you need to have some sort of label, right? And we don't have that. So you have to figure out ways to bootstrap
your labeling process to start off, and then all the way down to, of course, training the model, which is pretty easy these days. But then you have to think about things like, hey, what does it look like to put it on the server? Most of our servers are running Windows Server, and I've had experience putting some apps into production on Windows Server, and that was painful. So we had to figure out ways to work with IT, and say, hey, can you deploy a Linux server for us instead, and just work it out and set it up from there. That being said, that's just the overall picture. Then you get into details like, how do you actually store your model? You've got to have some sort of infrastructure to hold that. In our case, we felt like MLflow is a pretty good model registry and experiment tracker, to keep track of what types of models I'm using, and stuff like that. And then so many things, honestly, gosh. And then, how do you actually put it on some kind of orchestration service? You could use cron jobs, but they may not be so flexible, so you kind of need to work something out; you have to get some sort of orchestrator to spin it up and make that a service for your infrastructure as well.

I love this discussion, because I think it fits the theme of the show so well, around being practical, and the fact that I'm sure there are actually a good number of listeners out there who really want to do machine learning, AI, and data science types of things, and who are in a similar situation in their company. I think the majority of companies are probably in this sort of situation, rather than being infrastructure-wise extremely modern and just cranking everything out on Kubernetes in the cloud, that sort of thing. So yeah, I love this.
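For a concrete picture of the cron-based scheduling he mentions outgrowing: a minimal sketch, assuming a hypothetical pipeline script and paths (none of these names come from the episode), is a single crontab entry.

```shell
# Hypothetical crontab entry: run the daily classification pipeline at 06:00,
# appending stdout/stderr to a log file (all paths are illustrative)
0 6 * * * /opt/pipelines/venv/bin/python /opt/pipelines/daily_classify.py >> /var/log/daily_classify.log 2>&1
```

This works until you need retries, dependencies between steps, or visibility into failures, which is exactly the flexibility gap that pushes teams toward a dedicated orchestrator.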
you were, talking a little bit about the sort of, problem of so you have have all this, tabular data with extra comments and and, unstructured data and you know certain, things you want to do like extract, insights or classify maybe some of the, unstructured data but then also nothing, has been labeled over time it's just, unstructured data so talk a little bit, about that bootstrapping problem and and, how you've thought about that in terms, of I've got all this stuff I want to, create a model but I have no no starting, point when we were work going through, that whole labeling process or data, preparation process was pretty, interesting because we really didn't, have anything no labelers or anything, and um I didn't have a budget to get an, external labeler and for me know just, hire hire thousands of of people online, right I know I could I could just do, that maybe but then again even that like, I've thought of it and um our data, sensitive to start with but at the same, time it's so Nu on and it's it's so, Nuance to the context of our company and, so a lot of data that we that every, company has is just so new ones to their, own context you know and um at the same, time to one of the new ones is is is the, way that like um these texts are being, written and so a lot of quote switching, happening you know um which means quote, switching which means like um in Asia a, lot of times we speak in English but we, will kind of put in some you know native, languages that we know or that we grew, up with and so it's just kind of like, you go back and forth back and forth and, it's just kind kind of a common thing in, especially in Southeast Asia and um so, you can't just hire somebody online and, and just kind of label it for you, because you just can I don't even know, how to bring in that context of these, guys right and um but thankfully I mean, I would say that um the part that really, helped was I have to have a really good, sponsor for the project and these, 
sponsors were super on board. They were not technical; they were SMEs in their own departments, and they knew their stuff. But they knew enough about machine learning, data, and AI to understand that a model that predicts is not necessarily going to be 100% accurate from the get-go. They understood those nuances, in a sense, so they supported it and understood the kinds of things we have to go through as practitioners. That was really helpful for me, because having a good sponsor means you get really good support for the project. It also meant that, because the app I was working on is for their people, their subordinates, the people reporting to them, they said to them, hey guys, this is your app; I want you to help Bengsoon out with building it. Having the users themselves on board from the foundational bootstrap level really helped us, because we didn't have any labels, and they had the people to actually do the labeling for us. The users themselves were the labelers. Honestly, I was really blessed that that worked out. I think it worked very much in my favor.

Yeah, and in that labeling process, how did you develop your set of instructions, like how you explained the problem to them, or helped them define the problem and the categories, for example, in a classification model? How was it for you? Because I've also had the experience personally, and been burned a couple of times, where I'm like, oh, this problem makes sense to me, I set up the labeling project, I release a bunch of labelers into it, and either the instructions don't make sense, or I've biased it in
some way, likely because, like you mentioned, I wasn't super close to the users in that situation. So yeah, any learnings from that experience?

I think just a lot of iteration with them. So many times I would travel to see them. They work in our operations, so I literally traveled there to see them in person, and we would just go through it: hey, these are the labels that we want to apply. We had a general idea of what they were, but when you get into the weeds of it, into the details, you would think that in this situation the label should be number one, but then no, somebody else says it's number two. So the way I worked it out: there will always be contention. I've noticed that no matter how tightly knit your labelers or your team are, there will always be contention, and you've just got to work around it. At least in my experience, I just had to work around it, and the way I did that was to set up a voting system.

So, the technical side of this: I could have just given them Excel sheets, and they could have labeled in an Excel sheet, but I figured, you know, they were doing me a favor, so I wanted them to have a really good user experience instead of just going through Excel sheets. So I used Argilla, back in the day when they started, pre-Hugging Face days.

Yeah, exactly. And I noticed that Argilla is amazing in the sense that it allows you to set up different users. I've used other kinds of interfaces, I've used Label Studio as well, but with Argilla I could use the API and set up each user, and then for each user I would sample the same data. So for the first round, I had everyone go through the same set of
data to label, say 500 items, I think I remember, and they would all label them within two weeks. At the end of those two weeks, I'd collect them and find which ones were the most contentious, the ones where the majority label had the lowest percentage. I would pull those up and say, hey guys, what do you think about this? This is contentious for you. Why is it contentious? And you work it out from there, because chances are you're going to see the same kind of label again, the same kind of data again, and if you talk it out, hopefully when you see a similar thing, you'll say, hey, we already talked about this; we all agreed that we're going to go with this. So in the first round, everyone had the same data set to label. It's a bit inefficient, to a certain degree, but I think it's important to get to that place, so you understand the contentions of each person. Then in the second round, I had everyone label at scale, pretty much, and we collected it from there.

Interesting. Yeah, so in this process you did an initial, sort of offline bootstrapping of labels. Scale-wise, so here we're talking about an NLP problem, creating a classification model on some labels for this unstructured data, and of course this would vary by domain, but what scale of labels did you shoot for in that initial round, trying to get to the place where you could start up and train your first model?

We did just about 1,800 to 2,000 labels, or rows of data, basically, and then we started training our first model.

That probably means you're not training a 400-billion-parameter model with 2,000 samples. You don't even have enough infrastructure to train that.

Yeah, no, we don't.
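The voting scheme he describes, collecting the same labels from every annotator and then surfacing the records where the majority is weakest for group discussion, can be sketched in a few lines of plain Python. The label names and records here are invented for illustration; the real data lived in Argilla.

```python
from collections import Counter

def contention_score(votes):
    """Fraction of annotators who disagree with the majority label.
    0.0 means unanimous; higher means more contentious."""
    counts = Counter(votes)
    majority = counts.most_common(1)[0][1]
    return 1.0 - majority / len(votes)

# Hypothetical example: four annotators labeling the same three safety reports
annotations = {
    "report-001": ["unsafe_act", "unsafe_act", "unsafe_act", "unsafe_act"],
    "report-002": ["unsafe_act", "near_miss", "near_miss", "unsafe_condition"],
    "report-003": ["near_miss", "near_miss", "unsafe_act", "unsafe_act"],
}

# Sort records so the most contentious come first for the discussion round
ranked = sorted(annotations, key=lambda r: contention_score(annotations[r]),
                reverse=True)
```

Here `report-002` ranks first (only half the annotators agree on any one label), while the unanimous `report-001` ranks last, matching the "lowest percentage for the majority" criterion he used.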
So what does a broccoli AI model look like?

I mean, these are all texts, right? So we went with the simplest thing you could find on Hugging Face at that time. At the time, Sentence Transformers were really making it big, for different reasons, whether topic modeling or classification. And at the same time, I remember they came up with the SetFit model, which fine-tunes a sentence transformer, and that was honestly revolutionary for me.

Yeah, amazing.

I thought it was amazing that you could take something that was meant for similarity and fine-tune it for classification, with pretty good performance. It's supposed to be a few-shot classification model, a few-shot kind of fine-tuning, so I thought 2,000 should be enough for me to start somewhere. In fact, when I trained it, I tried some other models, but I think Sentence Transformers were the ones that gave the best performance of all. It still wasn't that good, you know; we're talking about 60-something to 70% in terms of F1 score. But when I talked to my sponsors about this, I said, hey guys, are you okay with me deploying this at 60-70%? And they said, actually, that's fine. Because the objective was, number one, to bring visibility of these reports to the users. One of the pain points they mentioned was that, to know what people had been reporting in the last 24 hours, they had to get on SharePoint and jump through different hoops to find out, filtering and so on. To be able to get that sent out in an email with the classification was already a win. So I thought, okay, let's do that, but let's not stop there. We should actually create a pipeline, and that's where the active learning comes in.
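As a reference point for the 60-70% F1 he quotes: for a multi-class classifier, a common choice is the macro-averaged F1, the unweighted mean of each class's F1. A minimal, dependency-free version (this is illustrative, not the project's actual evaluation code) looks like this:

```python
def f1_per_class(y_true, y_pred, label):
    """F1 for a single class, treating it as the positive class."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over the classes in y_true."""
    labels = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, lab) for lab in labels) / len(labels)
```

In practice you would reach for `sklearn.metrics.f1_score(..., average="macro")`, but the hand-rolled version makes it clear what a "60-70% F1" model is actually being measured on.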
It really helped, because I'm glad that I used Argilla to start with for the bootstrapping of our data set, which means our users were already used to the interface and already had accounts. So I was able to hack around with the Argilla Python API, and basically I created a loop where, every day, the pipeline brings in the new data that people have reported in the last 24 hours, the model makes predictions on it, at about that 60-70% F1 score, whatever it is, and then sends it out to the users. The users see it, and at the end of that email they can say, hey, I don't think this is that signal; it should be this signal; I want to give feedback. They can click on a link that brings them to their profile in Argilla, which lets them give feedback on that particular day's data. Over time, now that it's in production, from time to time I'll get people giving their feedback, and we've gotten close to 4,000 records labeled now from this active learning. So we retrain the model periodically. I could have automated it, but I didn't really feel the need to put it on automation yet. But at the same time, you're just collecting it in, and we're just retraining from time to time, basically.
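The daily feedback loop he describes, pull the last 24 hours of reports, classify them, and email predictions with a per-record correction link, can be sketched with stand-in functions. Everything here (function names, the URL, the labels) is hypothetical; the real pipeline talks to SharePoint, a fine-tuned classifier, and Argilla's API, which these stubs merely stand in for.

```python
from dataclasses import dataclass

@dataclass
class Report:
    report_id: str
    text: str
    predicted_label: str = ""

def fetch_last_24h():
    """Stand-in for pulling new reports out of SharePoint."""
    return [Report("r1", "worker noticed missing guard rail"),
            Report("r2", "spill contained near pump station")]

def classify(report):
    """Stand-in for the fine-tuned classifier's prediction."""
    return "unsafe_condition" if "guard rail" in report.text else "near_miss"

def feedback_link(report):
    """Stand-in for a per-record Argilla URL users click to correct labels."""
    return f"https://argilla.internal/datasets/safety/records/{report.report_id}"

def run_daily_pipeline():
    """Classify the day's reports and build the notification email body."""
    reports = fetch_last_24h()
    for r in reports:
        r.predicted_label = classify(r)
    # In production this would be rendered and sent via SMTP; here we just
    # return the body so the shape of the loop is visible.
    lines = [f"{r.report_id}: {r.predicted_label} (correct me: {feedback_link(r)})"
             for r in reports]
    return "\n".join(lines)
```

The corrections users submit through the link flow back into the labeled dataset, which is what makes this an active-learning loop rather than a one-shot deployment.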
Hey friends, Outshift is Cisco's incubation engine, merging innovation with the art of the possible, a launchpad for transformative emerging tech. Outshift blends startup agility with corporate strength to develop next-gen technologies from the ground up, in AI, quantum technologies, cloud native, and more. Their newest AI innovation, Motific, addresses a critical challenge in the rapidly advancing world of GenAI: bridging the gap between concept and deployment. This model- and vendor-agnostic solution supports the entire GenAI journey. From assessment and experimentation, Motific accelerates deployment from months to days, while safeguarding against GenAI security, trust, compliance, and cost risks, all while empowering business functions and IT teams to rapidly configure end-user assistants powered by organizational data. Motific provides advanced, customizable policy controls to prevent unauthorized access to sensitive data, and helps ensure compliance throughout the entire process. With deep visibility into operational and business metrics, Motific enables you to track ROI, optimize costs, and make informed decisions. By offering a centralized view, Motific deters shadow AI usage and empowers teams to innovate responsibly. So move beyond the traditional constraints of AI implementation, with AI deployment that is both responsible and revolutionary, ensuring your projects are not just quickly launched, but built on a foundation of trust and efficiency. Visit motific.ai. That's m-o-t-i-f-i-c dot ai.

[Music]

So Bengsoon, it's super interesting to hear how you were able to engage the users of the application through this reporting process, essentially, where they had some of the right incentives in place to respond and give you updated labels. You mentioned also the model repository, saving models and getting them out with MLflow. In the context of deploying your model on-prem, updating the model, you just mentioned retraining the model: what does that look like for you right now, in terms of that cycle of when you would want to push out a new model after gathering this data, how you would judge that to be worthwhile or useful, and any sort of testing relevant to that cycle of getting in new labels, retraining, and evaluating? What does that look like for you, and how have you thought about the right metrics to understand when to update
the model?

At this point, honestly, we keep it simple. We just do it periodically, at a cadence, every couple of months or so. But I did think about what it would look like to actually measure the drift of the data, and of the model predictions; that could be one of the ways we could do it too. But what I'm seeing is that the model is doing its job fairly well, well enough to actually solve the business problem, so we don't see a need to implement more sophisticated monitoring unless we need to. That's where we're at with it.

Yeah, and when you push your model out, when you update it, you mentioned the model repository: how are you shipping your model out to the application? Because, like you mentioned, you only have so many resources. I think there are also a lot of people out there in your situation. I think it was Kristian Lum, on a previous episode, who talked about that data scientist who is maybe one of very few, or the only data scientist, in a potentially large organization, and who has to do all of these things: they're not an MLOps person, they're not a model trainer, they're not an observability person, but they're doing all of that, right? So there are limitations to how much sophistication you can put in place, and I think some people go way too far; they're like, oh, I'm going to implement all of this stuff, and it actually makes their life as a practitioner less happy than otherwise. So yeah, how have you found that balance, and what does it look like for you to do these cycles in terms of tooling? You mentioned you thought at some point maybe it would be relevant to implement some of this observability stuff, but maybe not yet, or there are other priorities. So what does that look like for you, in terms of how you decide
what level of sophistication is right, and how you push things out?

I'm 100% with that, honestly, because it is a matter of priority. My customers are happy, I'm happy, and I'm not going to change what's good; I don't want to break what's been working, so to speak. That being said, when it comes to all of that, I think I have a general idea of what the minimum thing would be. Now I'm working on some other things as well, like anomaly detection and stuff like that, which need to be deployed. Having gone through that kind of set up a sort of baseline infrastructure for me, so I know what kind of infrastructure I'm going to be looking for in whatever else I'm working on. At the bare minimum, I think a model registry is super important: being able to call the different versions that you've been training, being able to track that, and being able to call it through an API, through a function. MLflow has this great Python connection, and being able to do that is just amazing. It keeps my life sane, right? I don't have to figure out where I store my model, pretty much. So I would be doing exactly the same things with whatever I'm working on next. I've since moved on from that project; I'm just maintaining it, that project is now in maintenance, and I have to move on to a different project to solve a different part of the business. But that project, like I said, set the foundation, knowing what kinds of things need to be done. Sorry, I'm getting ahead of myself. So one is MLflow being the most important thing for me in this sort of scenario. The other one is the orchestrator; having a really robust orchestrator is also really important. For me, Prefect was perfect, and I was able to do
different things and stuff.

Yeah, what types of things are you orchestrating? You could do it in real time with Prefect; at the same time you could also be listening for events, but you could also just be running on a schedule, calling different functions and sub-functions and things like that. So that was really cool to have. That's pretty much what we do right now; we're not really going into real-time monitoring yet. Once we do, then I'll have to figure out something else more sophisticated.

Yeah, and are you just shipping your models as part of a Docker container or something like that?

Pretty much, yeah. We do use Docker containers, just so that we can keep it contained, in that sense.

That's awesome. I think you had mentioned in one of our conversations something about DuckDB. Where does that fit into some of this?

So the raw data that we get is from SharePoint, and anyone who has any experience with SharePoint in terms of wrangling data knows it's so painful. So I thought it would be good to actually have some sort of middle layer, a mini lakehouse or data lake kind of thing, and I don't want to bother my IT guys too much, so I thought DuckDB was a great fit. I don't need a VM for it, and you have an embedded SQL database that you can use. So that's being pulled every day, pulling that data into DuckDB, and DuckDB is the one that actually cleans up the data, preparing the data to send it to the model. That becomes a pipeline for me to work around the whole complexity of SharePoint, really, honestly.

Yeah, I have personally found a lot of use for DuckDB, even in the past year, even on the more GenAI stuff where you're doing sort of text-to-SQL or queries and that sort of thing, and every company
we're working with has different crazy sets of data or different configurations of this or that. Having a kind of unified analytics layer that's easy to pull into Python, easy to spin up, easy to test with locally, and then deploy with: that's been really useful.

I remember you talked about LanceDB, you know, for RAG and things like that, and yeah, it's the same thing. I love embedded databases; I think it just works well, and it's kind of scalable, eventually. I really like that.

I think there was one blog post I've always referred back to, because I also went through it, you know, you and I were at mines at the same time, and then there was data science, and then there was the big data period where everybody was into Hadoop and Spark and all this stuff, which I know a good number of people still use Spark. But there's a blog post by the MotherDuck company; I think the title is "Big Data is Dead" or something. It basically goes through some of the discussion around, hey, we all thought we had big data, but the actual query problem, the types of queries we need to run, these aren't big data problems; what's needed is different. So shout out to whoever wrote that blog post, because it was really, really good. If you ever want to come on the show and talk about it, that would be awesome.

Yeah. Well, as you look back on this process and some of the things you've learned, what are you looking forward to in terms of the future of your own work, or the things you're learning? As you go into this next phase, it sounds like you're working on some new things; you'll want to reuse some of the tooling and process you have used, but what's different, or what are you
excited about for this next phase, in light of what you've learned over the past years?

Generally speaking, I think MLOps is just so nuanced in different contexts. Everyone has a say about what should be done, and if I learned something from this, it was that nobody really knows everything. You kind of have to figure it out from there, and you take a risk on certain things that you decide in terms of your system design. What I'm excited for is actually being able to take this and see what it looks like for other things, in other applications, whether it's anomaly detection or whatever it is. In a broader sense, I'm excited to see things like embedded databases getting more and more mainstream, especially in the context of LLMs and GenAI; I'd love to see that getting more and more mainstream as well. One of the things I'm always thinking about is scale: a lot of the applications we talk about today, especially in the context of GenAI, we always talk about bigger compute and bigger scale. I would love to see that getting smaller, which is happening now, getting more accessible on different devices, being able to do more cool stuff on-device. I'm excited for that too.

Yeah, I think there are a lot of people excited for that, and for this new phase of AI where people talk about AI everywhere, which in reality, you know, there's been machine learning and data science sort of everywhere for some time. But that wave of a newer generation of models being runnable in more practical scenarios is exciting. But yeah, thanks for joining, Bengsoon, to talk about a little bit of your broccoli AI. It's been fun.

Love it, thanks for indulging me.

Yeah, you and I can hype the
broccoli AI, and I'm sure we can get Demetrios to help us hype it too.

Yeah, I don't know if we trademarked that term; he's got it in his hype cycle now.

So yeah, thanks so much for joining, and hope to talk to you again soon.

Thanks, thanks.

[Music]

All right, that is Practical AI for this week. Subscribe now if you haven't already, head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat freak in residence, Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Hyperventilating over the Gartner AI Hype Cycle | This week Daniel & Chris hang with repeat guest and good friend Demetrios Brinkmann of the MLOps Community. Together they review, debate, and poke fun at the 2024 Gartner Hype Cycle chart for Artificial Intelligence. You are invited to join them in this light-hearted fun conversation about the state of hype in artificial intelligence.
Leave us a comment (https://changelog.com/practicalai/279/discuss)
Changelog++ (https://changelog.com/++) members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Intel Innovation 2024 (https://intel.com/innovation?regcode=CMCCHL&utm_campaign=Changelog) – Early bird registration is now open for Intel Innovation 2024 in San Jose, CA! Learn more (https://intel.com/innovation?regcode=CMCCHL&utm_campaign=Changelog) OR register (https://reg.oneventseries.intel.com/flow/intel/innv2024/InnovationReg?regcode=CMCCHL&utm_campaign=Changelog)
• Motific (https://www.motific.ai/) – Accelerate your GenAI adoption journey. Rapidly deliver trustworthy GenAI assistants. Learn more at motific.ai (https://www.motific.ai/)
Featuring:
• Demetrios Brinkmann – Twitter (https://twitter.com/Dpbrinkm)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• MLOps Community (https://mlops.community)
• MLOps Community Podcast (https://tr.ee/2aUfMm9AIb)
• Gartner Hype Cycle for Artificial Intelligence, 2024 (https://www.gartner.com/en/documents/5505695)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-279.md) | 488 | 5 | 1 | [Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io. Fly transforms containers into micro VMs that run on their hardware in 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

What's up, friends. Intel Innovation 2024 is right around the corner. Accelerate the future: registration is now open, and it takes place September 24th and 25th in San Jose, California. This event is all about you, the developer, the community, and the critical role you play in tackling the toughest challenges across the industry. Ignite your passion for AI and beyond, grow your skills to maximize your impact, and network with your peers as they unleash the next wave of advancements in technology. Here's what you can expect: understand the emerging innovations and trends in dev tools, languages, frameworks, and technologies in AI and beyond, to empower you and the solutions you're building. Get in-depth technical experience: join hands-on workshops, labs, meetups, and hackathons to collaborate and solve problems in real time. You can explore featured partner and Intel solutions; they have partners there, startups there, customers there, and Intel is showcasing the latest in products, services, and solutions across keynotes, tech sessions, and the show floor to help you meet your development needs. Collaborate with experts, learn, and have fun: engage in interactive sessions to connect, get certified, gain unique ideas and perspectives, build long-lasting networks, and of course have fun and get inspired. Hear from leading industry experts, technologists, startup entrepreneurs, and fellow developers, along with Intel leadership, CEO Pat Gelsinger and CTO Greg
Lavender, as they take you through the latest advancements in technology. Don't miss this chance to be at the forefront of innovation. Take advantage of early bird pricing right now until August 2nd. Register using the link in our show notes, or to learn more go to intel.com/innovation. Once more, that's intel.com/innovation, or go to the show notes and click that link.

[Music]

Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I am founder and CEO at Prediction Guard, and I'm joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

I'm doing fine. We've got a fun one today, Daniel, just to give you a good one.

Yes, of course. It was wonderful, not that long ago, to be in the great city of San Francisco and run into our friend Demetrios from the MLOps Community, and I figured I'd just bring him along for another conversation. So, Demetrios, how are you doing?

I'm great, man. We're back, and I've got some bad news to break to you right now. I wanted to do it on air.

Go for it.

Just to get your reaction.

Oh boy.

You can be vulnerable; this is how we build community.

Yeah, I'm nervous.

Yeah. So, Prediction Guard: awesome, congratulations on all the success that you've had. We're doing a data engineering for ML and AI virtual conference, and one of your colleagues, Daniel, filled out the CFP. I haven't gotten back to him yet, but I can't accept him. I just am way too full, way over my head, and as much as I want to, I'm going to have to divert him to doing his own special event. Basically, we're going to take what may have been a bad thing and turn it into a good thing.

That sounds great; I'm looking forward to learning more.

There we go. You know, I've got to make sure that you get all the love and shine you deserve, because I'm super stoked at what you're doing.

Yeah, we appreciate that. It was great to see you, and you had your own event in SF. How was that?

I do not
recommend doing live events, even to my greatest enemies. If anyone out there is contemplating organizing an AI conference, you can do it, but I don't recommend it. You're gonna hurt.

Yeah, painful, man.

But it was a big success. It was just a lot of work leading up to it, as you can imagine, and we had fun. On the day of it, I think over 750 people showed up: a lot of great conversations, a lot of fun, spontaneous, sporadic meetings with people, and that's the stuff you get at in-person conferences that's really hard to replicate virtually.

Yeah, you know what the secret is? The secret is: it's AI, and it needs a lot of hype. It really needs a lot of hype. There's one thing we don't have enough of in AI: we don't have enough hype. If you had hyped it more, it would have worked.

You know, I do a fair amount of hyping, and so for those out there that are sick of the hype, like myself, I've only got myself to blame on this.

Well, Chris, you sent me a very interesting-looking, hype-filled chart the other day. You want to go into what that was?

Well, I will, and I'm actually blaming it all on Demetrios. He was making fun of the Gartner hype cycle, and gosh, I hope they're not a sponsor, because we're making fun of them today. He was going through it, and it was funny, and I said, dude, we need to do an episode where we all analyze the Gartner hype cycle for artificial intelligence in 2024, and we break it down and assess it and decide what we think of those things. We're not doing this in our normal, extremely serious manner; we are doing this the fun way. And unless you don't know Demetrios out there, which I can't imagine, because he's a regular guest on the show: in addition to being a brilliant guy in this field, he's also the funniest man in all of artificial intelligence. So this is going to be good, and we're going to dive into the Gartner hype cycle today and
break it down for you. We're going to start with the real one, and then we're going to maybe make some adjustments to it.

You know, Chris, you say "making fun," but Gartner seems to have fulfilled their mission. I mean, we're talking about the hype cycle, we're going into it, so maybe their mission was fulfilled, and, you know, we are their fulfillment.

Yeah, oh my gosh, yeah, we're hyping it up right now. Very true. Okay, and we're gonna have fun doing it.

I just have to say: please, if anyone knows how I can get a job doing this kind of stuff, just making up words and putting them onto a wave graph, let me know, because I would love this as a job. It just seems like it's too much fun.

Well, let's see. I think surf's up; let's hop on the wave and start talking our way through. Demetrios, do you want to lead off with some of your ideas there?

So, I think the most surprising thing to me out of this whole graph... and for anybody that's not familiar with the hype cycle, you've got the big upward side, and then it goes down and kind of crashes, and then it starts to climb back up; it's the traditional curve.

And the two-second version of that... I did a longer version in a previous episode, when we were looking at some specific things on it, but the two-second version is: new technology comes out, everyone's super excited about it, they think it's going to be the greatest thing since sliced bread. It doesn't live up to the hype, they get frustrated, they go "this thing sucks," and it falls down on the hype and popularity side. Then cooler heads prevail, and they kind of go, "okay, well, maybe it can do something," and then it's into a reasonable sense of productivity. So that's Gartner in a nutshell.

So the biggest surprise for me is at the bottom of the slope. After it's gone all the way up the hype cycle, it's come down and crashed, and it is at the absolute
bottom, the trough of disillusionment.

Exactly. There is "cloud AI services."

Yes. And for me, that is the biggest misnomer, because if anybody is making any money out of any of this... I guess maybe hype and actual money are detached, and they're very decoupled here, but for me that was like, wait, what? There's no hype in cloud AI services? So Bedrock: out of there, hype is killed, it's at the trough of disillusionment. Any type of SageMaker, if you're using that, or Vertex: no, out of there, it's the lowest of the low. And so when I saw that, it was instantly like, dude, why are you even doing it? I did not believe a thing that I read afterwards. But that was my thing; any big surprises from you guys?

I think you're spot on. If there's anyone making a killer amount of money on this, it's Microsoft, it's Amazon, it's Google.

Uh-huh.

Part of my struggle here is that some of these terms I could interpret one way or another, right? Like SageMaker, for example, which for those that don't know is kind of like a model deployment service within AWS, with various conveniences around it and that sort of thing. That's been around for quite a while now, a very long time, even before the hyped GenAI stuff. So is that a cloud AI service? That's been around for a huge amount of time. Or are we just talking about hosted model APIs, right? They don't say. Which, also, to
they're used or I I, think it's an emotional thing you know, the hype side is you know how much, people are talk so maybe it's accurate, in this context there is nothing sexy, about AI services in Cloud providers and, maybe that's what they're getting at is, like yes we're paying an arm and a leg, we're giving them all of our money but, there is nothing sexy but productivity, wise it's definitely productive I I, would think so yeah it's very pragmatic, too especially for those people just, starting I don't know any easier way, than to just grab an API from like, Amazon Bedrock is just hosted model hit, that API like you would hit an open AI, API but now you have a suite of models, right so that seems to me like a a near, Miss but then at the top of the peak is, the other one that was a huge surprise, to me because because I've noticed this, trend I don't know if you guys have, noticed it but people who were formerly, ml Engineers we've all converted into, being AI engineers and an AI engineer is, so misleading because you don't know is, that somebody that is coming from like a, front-end development world and now they, do a little prompt engineering they use, a few Frameworks and they can chain, together some prompts to make a bit of a, demo on Twitter and now they're an AI, engineer or is it somebody that was deep, deep in the ml platform weeds and, because AI is now the new rage they call, themselves an AI engineer so I I don't, know about that but it's at the top I, think it's the same yeah I think it's, all I I think people use AI ML and, before it really fell out of Vogue deep, learning, interchangeably yeah so exactly I don't, know if it's also maybe connected to the, fact like Chris and I talked about this, I believe it was maybe last week the, fact that some of the disillusionment, around AI is sort of the realization, that turns out AI is integrated in, software and you still have to do, engineering to like build software and, it doesn't just sort of like 
having a, model is a solution doesn't really like, play out in reality you mean I can't, just buy an AI model and stick it out, there and magic things happen yeah I I, mean one would think I'm so, disillusioned yeah it's it's funny you, guys mention that too because I've seen, a few people talking about how llms are, not a product you have to build on top, of llms your product or whatever it is, your service that needs to be there so, you can't look at an llm as a product, per se and then I've also seen or I've, been thinking deeply about something, that is like the companies that are, really getting a ton of value out of, this AI movement uh I'm thinking about, one of my friend's companies who does, like a support software and now he's, leveraging Ai and llms for creating like, multi-agents and helping answer feedback, or answer questions and queries for, support and he's using AI that's awesome, he's able to sell that support product, two companies really well what I haven't, seen is companies that say Hey I am, fraud detection as a service and I'm, going to sell you this whatever, traditional ml product as a service, whereas you can create regular business, unit products as a service that leverage, AI but you can't quite or at least I, haven't seen anybody crack the nut, create some kind of a traditional ml, service type of product I don't know if, you guys have seen that and I also don't, know if I'm making much sense right now, because it's something that's relatively, fresh in my mind I'm gonna turn that one, over to, Daniel so no I wasn't making much sense, I guess is what the nice way of saying, it is um I mean so you've got like what, I would say is the things that I have, seen most are either what you were, talking about so, utilizing generative AI embedded in the, functionality of sort of domain specific, applications like the customer service, you're talking about or financial, services or whatever or access to models, over some API infrastructure 
right, there's maybe less like General I I, guess maybe the biggest one I've seen is, sort of just general like fine-tuning as, a service if you look at something like, you know open pipe or or something like, that but that's still fairly general, purpose it's not specific to any sort of, use case that you might use maybe to, some degree you know certain rag, Services would fit into that like we, talking to Pine Cone about their recent, like they have more kind of pre-built, things to have you do kind of like load, in all your documents and have rag set, up and all all that stuff so um I don't, know that's maybe the closest that I've, seen to to that sort of, scenario yeah well also the big question, is everybody wants to and this kind of, ties back into the hype cycle everybody, wants to be doing Rag and wants to have, all these great use cases with their Rag, and so like you were talking about with, pine cone they make it really easy for, you to do your rag but then at the end, of the day is that a viable business or, is that actually super useful as opposed, to somebody's got this support software, that they can come in and really cut, down the burden for your customer, success Engineers or your customer, success people and that is fascinating, to me because it's it's a booming, business right now the rag business, maybe yeah that's great maybe there's, some interest there is it a booming, business I don't know I haven't seen, numbers but I think the really, fascinating part to me is if you try to, juxtapose that with like a fraud, detection as a service type of product I, just haven't seen that anywhere because, I think a you're not able to really like, give away everything as freely and be, what works for one fraud detection use, case doesn't necessar it's not like you, can productize that and then go out and, sell it as a service and my opinion so, so this is a little bit of a tangent I, know but uh but that all that to say is, we're at Peak hype for AI, 
Engineers Peak hype yes so I'm gonna, draw us back over to the hype cycle just, for a moment and I want to read I'm, going to do something boring for a, moment I'm going to read off the things, where they are uh for our listeners, because the three of us have the benefit, obviously of seeing the graph in front, of us and for listeners who aren't so, I'm going to take a a moment and then we, can go back and start hitting them there, very quickly heading up the curve, initially The Innovation trigger we have, autonomic systems we have Quantum AI we, have first principles AI we have, embodied AI multi-agent systems AI, simulation causal ai ai ready data, decision intelligence neuros symbolic AI, composite AI artificial general, intelligence otherwise known as AGI and, then we're hitting the peak of inflated, expectations at the top of that hype, cycle we have Sovereign ai ai trism, prompt engineering responsible Ai and at, the very Peak AI engineering and then, starting to slide down we have EDI, Foundation models synthetic data model, Ops and generative Ai and just going, into the trough of disillusionment is, neuromorphic Computing smart robots, followed at the bottom by Cloud AI, services and then we slide up the slope, of enlightenment to autonomous vehicles, knowledge cfts intelligent applications, and finally the singular one on the, plateau of productivity which is where, you want to end up is computer vision, which is basically yeah we can do that, it's boring and no one talks about it, anymore but hey we're making money so if, the listeners out there are not confused, oh there's a whole bunch I don't have, any idea what they, are that's it I was going to say which, ones do you actually know what they are, because what the hell is embodied AI oh, I I learned what that is after I put out, the post so someone said oh yeah, embodied AI is when you use AI in robots, it is so yeah but there's also smart, robots on the on the cycle and I used at, a former employer I 
was specifically, doing AI systems in robots and I've, never heard of it you never called it, embodied AI well it's been a few years I, I'll give you that it was so but no we, weren't calling it, I mean so I think I'm at like a 30% hit, rate on these and I really would love to, know what first principles AI is because, that feels like buzzword Bingo to the, fullest I don't know um let's see first, yeah Daniel's going AI he's going to, models to find out he's um the card AI, generated card in my Google search says, when applied to AI first principles AI, suggests developing AI systems and, algorithms by understanding the, foundational principles of machine, learning neural networks and data, science from the ground up don't we do, that anyway when we're isn't that kind, of inherent in training new models and, stuff like oh but no no we're really, going back we're going back to the very, first ones you're at the second or third, principal we're beating you yeah no cuz, all you guys that are out there that, aren't using first principles you know, that's lower down on the hype cycle okay, this is, yeah so the other pieces I I mean were, there any other surprises for you guys, because I have so many other pieces on, here that I'm like what I think for me, like some of these things are themselves, correlated and yet in different places, on the chart right so it's like if you, look at generative AI Foundation models, Edge ai ai engineering prompt, engineering probably some others on, there all of those like sort of fit into, the sameish bucket and yet are on, different sides of the hump so yeah I I, don't know like some of these it's also, a matter of where do you draw the, boundaries where's the boundary between, generative Ai and Foundation models or, generative Ai and prompt engineering, I'll give you one you know as where at, the very bottom on the Innovation, trigger is quantum Ai and I've okay so, that's not gonna happen anytime soon and, and I will note that they 
have it only, greater than 10 years but I would, suggest it's probably greater than, greater than 10 years but isn't that uh, I mean one of the things that's, interesting about this whole cycle is, there's that one uh maybe you all can, tell me or I can look it up there's a, one law it's like a general law that, people talk about where you, underestimate short-term Innovation and, overestimate longterm Innovation or or, something, vers yeah yeah sorry I said that, backwards yeah so it seems like some of, like it's hard to especially the time, angle of this it's hard to because, things just pop up and you like really, didn't see certain things coming and, others that you thought would would come, don't so yeah it's extremely difficult, 100% one thing that I am just to tag on, what you're talking about Daniel with, the bucketing these, please tell me what the difference is, between an AI engineer and a prompt, engineer what like a prompt engineer is, someone that only does prompts I guess, and that's all that matters so they're, just so I can see how how it's like, where's the line here when prompt, engineering came out Daniel you might, remember I kind of made fun of that I, was like the whole that you talk about, like because people were saying they're, new jobs for prompt engineers and stuff, and I'm like that is a passing fact bad, like that will be just so ingrained in, what everybody does all the time that, the notion of there being someone who, that's their entire job all the time for, years is not gonna happen yeah I also um, didn't know so like I've never heard, anyone use the word or if it's a word, it's an acronym AI, trism do people go around saying that, yeah what is that what is it so it's I I, looked it up and you know what's what's, funny because this is exactly the area, that I'm working in every day it's AI, trism is tackling trust risk and, Security in AI models ah okay you've, never heard that used have you and I've, never heard that but now I 
feel like I should put it on our website because it's hyped yeah you definitely need there that's right the funny part is it's almost as hyped as prompt engineering which you is basically all you hear about is prompt engineering right and yeah they're right there together AI TRiSM you never hear about yeah there you go uh but the TRiSM it's it's out there it is we hear about you know the the components that make that up all the time sure but just never the I've never heard them put it together that way and I'm sure there are people that are out there that that you know their focus is in the that area and they're like of course it's TRiSM how do you but yet guess what most of us don't know that no not at all I don't even know if I go and I just look at this I don't know what causal AI is I don't know what the AI simulation is the multi-agent I do understand but then like even when you say quantum AI I don't know what that is the one that I I would say is probably in the wrong spot is synthetic data it feels like that should be still going up on the hype train because we're just discovering what we can do with synthetic data and every week I feel like we unlock new use cases and synthetic data is just a it's the gift that keeps on giving in my eyes I think that's the difference in you who actually does it and somebody at Gartner who you know who was tasked to go put the chart together and doesn't actually do the thing in real life I've terribly offended somebody out there well we're glad that it's out there let's just say that we are very happy that this exists so we can have a whole episode dedicated to breaking it down yes it's a conversation starter that's what I mean like achievement unlocked yeah unlocked so one thing that I noticed isn't there at all which really surprises me uh given how much it's bantered about is ethical AI it's not on the chart and that does doesn't go in the TRiSM that's not one maybe
it does maybe this is where I you know is ethical AI now transformed from a labeling standpoint into TRiSM is that is that where we're going I don't know or what is the overlap between responsible AI TRiSM and ethical AI okay well yeah I don't and there isn't really anything on here about GPUs or hardware so yeah I think that's because they made their own hype cycle for GPUs right if I'm not mistaken I I feel like I've seen that somewhere on the internet you'd be cannibalizing your other chart exactly so you can't put any GPU hardware anything on the AI one you got to refer people to the GPU hype cycle and maybe it's like that with ethical AI like they made a whole other ethical AI chart that is the hype cycle for ethical AI maybe so I'm not familiar with it how many charts can you make that's if you're Gartner I guess think they have I mean we have just the artificial intelligence hype cycle here but they probably have I think I've seen multiple you know subdivisions and stuff out there so that's why it's a great business to be in Gartner selling all these different hype cycles well speaking of what to hype what uh what's not on the hype cycle but should be all right if I could have talked to somebody at Gartner before they were making this I would have advised and so this is my basically this is my video job interview right now I'm busy typing an invoice up for you to send to them okay just for exactly I would have advised AI gateway that is very popular that's climbing the hype cycle right now because people really like to have the option to hit an AI gateway and if it is not that complex of a query you don't need to hit GPT-4 you don't need the most expensive model if you have some kind of open source model that is cheap then let the simple query go to that 7B model and so I've been hearing people call it an AI gateway others I think have called it like an LLM proxy maybe or router yeah that's another one so we would
have to agree on the actual name but that's gaining hype for sure yeah agreed yeah it's uh I've definitely seen the router language whatever it is like the languages overlap with networking um which is basically like you're just routing API calls so I guess that makes sense yeah any any that you guys would have liked to have seen on here and where I had the ethical I'm still wondering what composite AI is did we ever get that answered or if I just am I having a senior moment what is it yeah what is it um what uh the one that really stands out to me unless I'm just like there's a lot of words on this page so maybe I'm totally missing it somewhere but where is multimodal AI yeah oh good catch there it's not on here is it no who cares about multimodal that's so weird that should be in the peak of inflated expectations this is like the thing of 2024 like multimodal AI that's so even multimodal RAG should be on here like climbing the innovation trigger multimodal models should be on the peak of inflated expectations that that is such a good catch I know tons of people who who say multimodal and have no idea what it means well what what does it mean Chris yeah what quiz time well let's having different modalities of of input there so that you can combine different inputs to get a a rich output you know in a very general sense I have no idea yeah so voice I know when I photos yeah photos videos all the things all the things exactly which is what we want I want to throw a bunch of stuff that I have and and have a fantastic just have it sorted out and give me the best answer uh and even with today's multimodal models that doesn't happen very well there's I I'm I'm often I'm often frustrated and disappointed with uh with those outputs so yeah it's I I'm expecting better yeah and along those lines I have two that I would like to have seen one is just transformers in general whereas that where are they on this hype cycle because
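The "AI gateway" / LLM router idea described in the episode — let cheap simple queries go to a small open model and reserve the expensive frontier model for hard ones — can be sketched in a few lines. This is a hypothetical illustration only: the model names and the length/keyword complexity heuristic are made up for the example and are not taken from any real gateway product.

```python
# Hypothetical sketch of an "AI gateway" / LLM router: route each query to a
# model tier based on a naive complexity heuristic. Model names are stand-ins.

def pick_model(query: str, length_threshold: int = 120) -> str:
    """Choose a model tier for a query using a toy heuristic."""
    hard_markers = ("explain", "prove", "compare", "step by step")
    looks_hard = (
        len(query) > length_threshold
        or any(marker in query.lower() for marker in hard_markers)
    )
    return "big-frontier-model" if looks_hard else "cheap-7b-model"

def gateway(query: str) -> str:
    """Pretend to dispatch the query; a real gateway would call the model's API."""
    model = pick_model(query)
    return f"[{model}] would answer: {query!r}"
```

For example, `pick_model("What is 2 plus 2")` routes to the cheap 7B tier, while `pick_model("Explain the transformer architecture step by step")` routes to the frontier tier. A production router would of course use something smarter than string length — a learned classifier, cost budgets, or confidence-based escalation.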
that also feels like are they climbing or are they going down I don't know it would be trough of disillusionment heading downward because that it's kind of we're we're we're past that and people are now talking about post-transformer models you know quite often so it's kind of like yeah yesterday so there needs to be another dot for post-transformer models going that's definitely going up and that's right speaking of which it feels like okay we've got small language models where are they because that is all the rage it is that was like and maybe it's all the rage for every vendor who is not OpenAI because they can't compete on GPT-4 and so what do they do they say well you can just host your own small language model and fine-tune it and get better performance than GPT-4 and so I think small language models are probably they should be in that innovation trigger maybe the peak of inflated expectations because anyone who's ever used a 7B model might not want to use it if they have the choice well maybe maybe it's is it are you sure that's going up or could it possibly be sliding into that disillusionment that you just referred to potentially that's true because maybe it is going into the trough of disillusionment uh you know hypo just hypothetically because I do think that when it gets to the plateau of productivity small models will be this the just the workhorse you know you'll have them out on the edge everywhere every freaking device you've ever imagined or seen is going to have small models in it that are inferencing we won't ever have anything that doesn't have them it'll be just the norm of course we have our small models in our watch which leads me to the next one that I'm like where is this why do they not have wearable AI that is a perfect buzzword that should be on here and if you look at like what Meta is doing with the glasses or if you see any of those necklaces that you can wear and it records
everything yeah that's wearable AI right there I just I may have just made that up or I may have seen that before but that one should be on here it should be there I [Music] agree hey friends Outshift Cisco's incubation engine merges innovation with the art of possible a launchpad for transformative emerging tech Outshift blends startup agility with corporate strength to develop next-gen technologies from the ground up in AI quantum technologies cloud native and more their newest AI innovation Motific addresses a critical challenge in the rapidly advancing world of gen AI bridging the gap between concept and deployment this model and vendor agnostic solution supports the entire gen AI journey from assessment and experimentation Motific accelerates deployment from months to days while safeguarding against gen AI security trust compliance and cost risks all while empowering business function and IT teams to rapidly configure end-user assistants powered by organizational data Motific provides advanced customizable policy controls to prevent unauthorized access to sensitive data and helps ensure compliance throughout the entire process with deep visibility into operational and business metrics Motific enables you to track ROI optimize costs and make informed decisions by offering a centralized view Motific deters shadow AI usage and empowers teams to innovate responsibly so move beyond the traditional constraints of AI implementation utilizing AI deployment that is both responsible and is revolutionary ensuring your projects are not just quickly launched but built on a foundation of trust and efficiency visit motific.
ai that is m-o-t-i-f-i-c dot [Music] AI maybe this fits into kind of the agentic stuff that is represented in certain ways on there but this whole idea of whatever you know like tool function calling slash like text-to-SQL like interacting with structured databases APIs whatever that is I don't know like the maybe the general name for that other than tool and function calling or text to SQL but um certainly that's like sliding into a zone where people are definitely doing some of those things in production and there's products released around it so like the the Hex Magic stuff and and all that that other where is it on the chart though before I go on oh where is it on the chart um I mean it's got to be somewhere somewhere around AI engineering so it's at the peak of ex uh maybe maybe yeah I don't know I don't know it might be maybe it's going down cuz people are like ah agents aren't reliable I think that's right I think it's heading down into the trough of disillusionment that's where I would guess yeah yep and if you compare that to where they have it multi-agent systems it's got a long way to go up it is at the very bottom of this hype cycle so yeah I think we instinctively are like no please no more agents and Gartner's like oh we're just getting started baby well and and they're like no please more agents together multi-agents it Gartner's going to create their own agent hype cycle next that's going to be the next one that they can create and so we'll you know take a commission for giving you that idea Gartner no problem there and one thing can we call out the elephant in the room because where is retrieval augmented generation on yeah how is that not on here really yeah RAG what's that cuz I was thinking about it I was thinking about it and I was like oh you know what they missed is graph RAG that is all the hype these days and that's probably right around where sovereign AI is where it's maybe like at the border the yeah
it's going up nearing the peak of exp of inflated expectations you're right more hype than the TRiSM yep more hype than the TRiSM but I would argue RAG is is heading to the trough of disillusionment anyone want to disagree with that no no I think so too I think it's over the hump yeah I do too I mean it's and people are kind of hitting the the challenges and and and you know and actually Daniel advanced RAG you know which we've talked about several times you know kind of kind of trailer well we don't just have RAG now we have advanced RAG and and advanced as things are starting to head over that peak of inflated expectations with RAG well guess what we can juice some more we have advanced RAG but I think I think the whole thing is starting to go over the side you know people are like okay well we've kind of done at least the easy stuff uh to the advanced RAG point there are people that are that are doing it better than others but nonetheless you know it's you know what's next so what what I'm just curious to second deviation we've talked about you know fine-tuning we've talked about RAG what's coming next in that in that sphere what what are they missing there yeah a new model yeah I think you mentioned that you might have had some of these Demetrios uh what are AI hyped items that are your own that you've come up with a name for oh that other people will have to interpret to to figure out their definition you wanna you want to guess yes on this one all right here we go I am going to start you off with a with a pretty simple one this one is free-range AI free-range is that is that open access LLMs close close what do you got Chris grain-fed I I can't get off the free-range thing I'm an animal guy I can't even I can't even get into the AI headspace on this one it's AI that was trained without guardrails okay gotcha okay I like that well we we already talked about about one here um that that you alluded to
Demetrios but my name for it was trinket AI wearables yeah yes trinket AI TR yeah imagine it's it's in your fidget spinner that sounds a lot that's a much better name than wearable AI yeah trinket AI it is every little thing you have on your body has a freaking model inferencing in it you know you're and it doesn't bring you any extra value if we're going to follow the AI trend you just don't have to think anymore you can click that button and take a picture Demetrios no it just gives you some verbose answer to a question that you didn't really ask so your shirt is you're like hey have I been sweating and then it tells you the origin of sweat in a three-page PDF that you have to go download well do I get senior moment AI that that would be good for me you know I there's a huge market for that every everybody over the age of you know 50 is going to buy senior moment AI to to you know like what what what oh and it oh there we go and you know I can I can continue instead of pausing for the next three minutes to try to figure out what it was I was about to do or I was thinking that that's a how seniors interface with AI so they don't get left behind it's like this is the product that will make sure you stay up to date you're ahead of the curve okay sounds good all right I got another one for you all this one is EQ AI oh empathetic AI yeah so it's also been known as empathetic AI yeah you may hear other people out there in on the streets calling it empathetic AI uh this one is a type of AI that has high emotional intelligence and it feels empathy for you when you get frustrated that it's not giving you the right answer and your prompts aren't working but it doesn't actually make your prompts work it just feels bad for you okay I that minus the AI bit that happened to me yesterday I was on Comcast on their stupid text support for four hours texting they passed me off and every everyone was so empathetic but they
accomplished nothing if you put that in AI I'm quitting AI if you put that into any AI that does that I'm just done I'm I'm walking away from the whole field are you sure it wasn't already AI that you were talking it could have been I mean it was just text it was only text but it was horrible we've already passed the Turing test so it's like they're there I'm getting a response of I'm so sorry I'm just very sorry we're here to help you and I'm like I'm gonna freaking kill you you know yeah yes that's what four hours texting support will do but don't do yeah I just I'm if you bring that to AI it'll ruin the whole thing for me well this one funny enough is actually on the uptick when you look at the slope the EQ AI has got a lot of runway left yep um so my my next one is AI either AI nepotism or AI anti-nepotism oh don't I'm trying to make fighting fighting AI nepotism fighting AI nepotism oh okay you're gonna have to you're gonna have to go into that one for me that's I've stumped you yeah yeah yeah this is exciting it's basically using AI against like the government using AI or what no no so uh uh foundation model related maybe yeah so this would be like multimodel AI in that you are not preferential to one language model family and only using that family but you are now multimodel and you know as such not practicing nepotism but are you multimodal multimodel this is also you know I knew it by its other name uh which is polygamy AI yes oh gosh where are we going oh why no or or some in San Francisco call it polyamorous AI as it tends to be so the the next one that I've got for you oh where is this nepotism AI on the hype cycle by the way uh I think it's still a bit on the rise I saw a16z in their in their post one of the things they called out was multimodel future oh yeah there's a future for this one that is for sure so I've got one that is called broccoli AI okay this one's on this one's going down is it related
to some sort of graph thing no but that could be nice yeah branching is it synonymous with healthy AI yeah yeah exactly maybe you've heard it termed healthy AI efficient yeah it's sustainable no so oh that's another one that I've got though but we'll get to that in a minute which which reminds me like it does feel like like sustainable AI should have been on the real hype cycle like that's an actual term isn't it yes it is and it's not and it's not on there the other one that should have been on there that I was like why isn't it on there is ensemble AI that feels like or ensemble models that feels like it should have been on there see one of the ones that I looked up was composite AI yeah that's the one I didn't know I think well I don't know it's slightly different than ensemble but I think that they like composite was combining multiple multiple AIs together um in some way or another for one inference like you have multiple you know models inferencing but you have one inference back out to the uh the user yeah something like that I don't know although ensemble could very yeah very much mean for a single inference getting a majority vote or something like that okay so it would be where composite AI is on the chart if they're assuming they're correct yeah and where where before we leave it sustainable AI where is it on the chart that's very much like it's got a lot of hype to go because mid level mid level on the on the curve up yeah okay just think about how many people are talking about the energy that is wasted training the foundational models true and how we need to build out all these data centers and they need to be sustainable etc etc so yeah sustainable AI for sure has some room to grow back to broccoli AI AKA healthy AI this is AI and this is very much on the down slope again it has passed its peak people are a little disillusioned with it because it's AI that doesn't taste good for the organization but it's
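The single-inference ensemble idea mentioned in the episode — several models each answer, and one majority-vote answer goes back to the user — can be sketched like this. The three "model" functions here are stand-ins for real model API calls, used only to make the example self-contained:

```python
# Minimal sketch of a single-inference ensemble with majority voting.
# The three model functions below are stand-ins, not real model APIs.
from collections import Counter

def model_a(query):
    return "Paris"

def model_b(query):
    return "Paris"

def model_c(query):
    return "Lyon"

def ensemble_answer(query, models):
    """Ask every model, then return the answer the majority agrees on."""
    answers = [model(query) for model in models]
    # most_common(1) yields [(answer, count)] for the top-voted answer
    return Counter(answers).most_common(1)[0][0]

print(ensemble_answer("Capital of France?", [model_a, model_b, model_c]))  # → Paris
```

Composite AI, as the hosts describe it, sounds like a generalization of the same shape: multiple models inference, one combined result returns to the user, though the combination step need not be a simple vote.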
needed and so you can imagine the cybersecurity folks they love this kind of AI is this like a linear regression model or what would you consider good for an organization I think you use the word good yeah healthy it's it's healthy for the we could go to healthy for the organization what could that be I I mean I actually didn't get to do enough market research in this section to to figure that part out you know I was just throwing spaghetti at the wall but I if I were to think about what's healthy yeah it would probably be the traditional ML going back to the what I was talking about before like fraud detection is one of those where it's not really AI some people might know it as its former term ML so I'm telling you they're all the same from a marketing standpoint exactly well yeah the waters are too muddied for them to make any actual difference that's right so what else you got what else you got okay so I've got unsustainable AI which is way different than sustainable AI just so we're clear an inverse but it's not even it's a whole different uh sector of the universe that we're talking about it's not like oh it's just the opposite of sustainable AI unsustainable AI is it's got it's at peak hype right now let's be honest if I could swap it out with the AI engineer it is at peak hype because this is AI that was built for a product demo but not for scale that is unsustainable AI happens all the time yeah so anything that you see um basically we can hopefully none of these guys are your sponsors but let's just cue Devin or Rabbit or Humane all those unsustainable AI the trinkets the trinkets yeah that's true so it's sort of analogous to doing like prototyping software where you're you're never intending to to grow into production exactly so so that's all of mine that I I could think of well I think that was a pretty good list I did realize I don't know maybe maybe related to some of the discussion we had earlier but I
don't see neighborly AI on here that's kind of creepy when you think about it I I wasn't creeped out until you said that but I had this image of Mr Rogers' Neighborhood you know but instead of Mr Rogers it's the AI hi girls and boys maybe they can help you clean up a few things with their RAGs it clean no oh boy oh well I was thinking it was like Nextdoor where it was almost like the voting system the ensemble but it was for local LLMs gotcha yeah I realized there's nothing about vectors or embeddings on the chart I was just thinking about that actually there's yeah there's no vector stores on here or even just general embeddings yeah I wouldn't that be plateau of productivity now we've had those for so long that they're just I don't know lexicon no no emotion left in them yeah what I was thinking is they probably aren't on there because Gartner also has one of their best products ever the Magic Quadrant and that'll be the next episode that I come and drop in on we can remake the Magic Quadrant for the different sectors and I imagine that they have a Magic Quadrant for vector databases yes that sounds delightful yeah well it it it has been delightful to uh to have you on Demetrios um I'm glad you brought your various new AI terms to the hype cycle and uh now I have have some work to to do on my broccoli AI so incorporate that into your product for sure it's it's right around there prism it would be a good AI logo just like a broccoli floret yeah the broccoli or the I saw a great paper that was all about leeks it was all about data leakage when you send API calls to OpenAI and the paper started with an emoji of a leek that's awesome like the leeks you eat right and it was saying here's and it was basically showing how you send your data to OpenAI but but a lot of other people are going to get it too if you're not careful yeah which is one one thing that we haven't really touched on but that seems like it's got some
hype around it is what data leakage AI data leakage data poisoning data poison I know in my in in my day job that's a common conversation yeah prompt injection should be there injection yes uh I guess this all fits under TRiSM yeah this is a we're going over TRiSMs right now TRiSMs and trinkets on that note that very profound note uh it has been great to discuss the all the TRiSMs with you uh Demetrios I've had a blast as always please please come back uh as as usual give your uh your own hype about the the upcoming event before we close out and where people can find out more about it yeah I I always feel bad I come on here and just shill my stuff so this time no shilling I just had a blast doing this with you guys okay so if anybody wants to find out about the next virtual conference or the in-person conference they can just Google MLOps Community and I'm sure it'll pop up cool all right hey much appreciated we'll talk to you soon Demetrios thanks man yeah thanks [Music] guys all right that is Practical AI for this week subscribe now if you haven't already head to practicalai.fm for all the ways and join our free Slack team where you can hang out with Daniel Chris and the entire Changelog community sign up today at practicalai.fm/community thanks again to our partners at fly.io to our beat freak in residence Breakmaster Cylinder and to you for listening we appreciate you spending time with us that's all for now we'll talk to you again next [Music] time
The first real-time voice assistant

In the midst of the demos & discussion about OpenAI's GPT-4o voice assistant, Kyutai (https://kyutai.org/) swooped in to release the first real-time AI voice assistant model and a pretty slick demo (Moshi). Chris & Daniel discuss what this more open approach to a voice assistant might catalyze. They also discuss recent changes to Gartner's ranking of GenAI on their hype cycle.
Leave us a comment (https://changelog.com/practicalai/278/discuss)
Changelog++ (https://changelog.com/++) members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Plumb (https://useplumb.com/) – Low-code AI pipeline builder that helps you build complex AI pipelines fast. Easily create AI pipelines using their node-based editor. Iterate and deploy faster and more reliably than coding by hand, without sacrificing control.
• Motific (https://www.motific.ai/) – Accelerate your GenAI adoption journey. Rapidly deliver trustworthy GenAI assistants. Learn more at motific.ai (https://www.motific.ai/)
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Kyutai (https://kyutai.org/)
• Kyutai keynote video (https://www.youtube.com/live/hm2IJSKcYvo)
• Gartner Hype Cycle for AI (https://www.gartner.com/en/documents/5505695)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-278.md)

[Music] welcome to Practical AI if you work in artificial intelligence aspire to or are curious how AI related tech is changing the world this is the show for you thank you to our partners at fly.io the home of changelog.com 30 plus regions on six continents so you can launch your app near your users learn more at [Music] fly.io welcome to another fully connected episode of the Practical AI podcast in these fully connected episodes we try to connect you with various things happening in the AI space and connect you with maybe some learning resources or talk about some subjects that will level up your machine learning game my name is Daniel Whitenack I am founder and CEO at Prediction Guard where we're enabling AI accuracy at scale and I'm joined as always by Chris Benson who is a principal AI research engineer at Lockheed Martin how you doing Chris doing great today it's uh dog days of summer here in the US and uh it is it is really hot and humid yeah super humid and nasty I'm looking forward to AI control you know like weather control from AI and it will keep all of us at just the right temperature right I can't see anything possibly going wrong with that of course not only only positives there and that is in regardless that is in the distant distant future of 2025 I'm sure yeah exactly exactly let's let's focus on the next two weeks for now uh which is important I think one of the things that caught me off guard this last few weeks which you and I try to stay plugged into various things and you know maybe people think and listen to this podcast that we're you know keeping plugged in with every single thing happening in the AI space but I was a little bit surprised when I saw the release um I guess I just hadn't really been following along with what the company or research lab
Kyutai was doing so um this is an open research lab that researches AI and um they have funding and some support in terms of infrastructure and all of that but they're a nonprofit research lab in my understanding and um they actually so we talked on a previous show we we kind of got fooled a little bit or or maybe it was you know a little bit of a fumbling in terms of of marketing but it seemed like when OpenAI GPT-4o came out you know people were hyped because a lot of the demos were voice-based but at least you know at the time of that recording I'm not sure all of what everyone has access to in the the paid and unpaid and enterprise version but the actual voice assistant for OpenAI was not out and at least as far as the release date of Kyutai's voice assistant which is called Moshi they were the first to actually release a version of their voice assistant which it's similar to in my understanding what GPT-4o is is on the multimodal side and that it is a multimodal model so it's a real-time multimodal model that supports a a voice assistant and this research lab I think it's like eight people or something like that of course they have resources that are supporting them right like this I think it was a thousand GPUs or something they have resources obviously but they were able to beat you know what is now the the Goliath of the AI space beat them to market with this real-time voice assistant which I think took a lot of people maybe by surprise or maybe some people were following it closer and expecting it but I think um in this sort of six-month or whatever time period it was when they were working to get this out and beat the the kind of Goliath of what is OpenAI which I think in and of itself is is pretty interesting it is I mean you know so many uh try uh and some of the other goliaths you know the the second tier Goliath if you will are continually trying to compete and they they may touch it they may fall short I
always love hearing when a smaller group especially if they're focusing on open solutions comes out and uh and is able to do well so and they got a cool name by the way yeah yeah and it is interesting because this does run so when you see the demo and and we can pull it up here in a second and maybe ask a few questions but when you see the demo or the prototype it obviously still has some rough edges so I think you you have some rough edges that aren't fully kind of productized version like maybe what is what you get with the OpenAI voice assistant um in the forms that it's in but it is very impressive also because this is a model that I believe it's models that are of a size that you or I could run them on even a single GPU and they're going to open source these models I don't I don't know what the time frame is on that what exactly that will look like what licensing all of those things they do have a few talks online so if any of the listeners know that information and and I just haven't run across it then then they can maybe update us but yeah they they will be open sourcing this which I think will drive a lot more experimentation and of course as we saw with the first open LLMs that were released with Llama and other things there was of course a huge explosion of of innovation and and experimentation going along with the release of the open versions of those things and so I expect that there'll be a similar thing with you know these models and what I assume will be other versions or other families of these types of models moving forward yeah I noticed you know going back to your point about being able to to run it locally potentially on a single GPU they talk about in their press release uh they just say compact uh Moshi can also be installed locally and therefore run safely uh on an unconnected device to extend that a little bit I think that you know there there are a lot of larger organizations that are
worried about IP, concerns these are topics that we've, covered quite a bit on the show in days, past so Moshi may very well find a home, in uh in corporate environments first of, all where they they don't want to send, information out and they want to get the, the advantage of that uh because it can, probably be run on a single GPU a lot of, edge devices make it possible so great, thinking there in terms of what's, possible and then finally uh thinking of, my own industry in the defense space um, since it can be run in an unconnected uh, or disconnected environment there's all, sorts of things from a from a uh a, government standpoint that they may be, willing to do so it's a great, strategy I I love I love hearing these, small companies that might be able to to, have a big impact uh in industry by, accommodating those those concerns well, Chris I find one piece of this whole Kyutai, Moshi thing very interesting which is, almost like it feels a little bit like, deja vu because we back in whenever it, was I forget you know what what year, OpenAI came about is like there's these, big players in the AI space and they, were doing you know certain pre-trained, models and all of this stuff and robotic, things and and all of that and then OpenAI came along and said oh we need an open, transparent nonprofit driven research, lab to really promote innovation going, forward and of course as we have moved, forward through that we've seen OpenAI, kind of get away from that sort of pure, nonprofit status with the a little bit, more of a complicated corporate, structure right um which we've talked, about on on different shows but then, also you know just their release of, their work and their research and their, models and their data and those sorts of, things of course has has become very not, open and they of course have their own, reasoning behind that which at least, publicly they would they would say is, related to Microsoft sort of yeah well, at least publicly they would
say is, related to safety of the use of these, models um of course you know there's, various people that might guess certain, other motivations Microsoft, yeah but um but yeah I I do find this, whole thing sort of like deja vu I don't, I don't know if if you're having the, same feeling here you and I both have, have a long history uh in the more than, six years now that we've been doing the, podcast of supporting open engagement uh, from different organizations uh whether, they're corporate entities or nonprofits, or whatever uh and we've seen that from, others I mean famously Yann LeCun talks, about uh he works for Meta you know, which is Facebook's parent and talks, about nonetheless uh having open models, and all that and so we're we tend to, shine a spotlight on those organizations, that do that uh wherever possible we, certainly went through that because, we've been doing about just after we, started the podcast uh which uh was back, in 2019 or 2018 I believe actually 2018, um and about a year later OpenAI closed, up so we actually covered that in the, early shows you know it is what it is uh, they've they've done that they remain an, amazing corporate leader in the space, but yeah they did close all up and uh, and we tend to turn more spotlights, toward others like this so I'm pretty, excited to see uh what Kyutai is is doing, and is able to do going forward here and, I hope this uh I hope they're able to, viably play against that top tier uh, competition I think that would be, wonderful to have multiple yeah do you, think that there's any chance for this, sort of like open research in the AI, space or in the technology space to, survive as a sort of bulwark of open, transparent research and open source uh, within the pressures that come of course, when you release this sort of technology, and you're a leader in the space and, there are actual dollar signs and, corporate concerns and certainly like, partnerships that are necessary so you, know partnering with
companies to do, this work is almost a reality I think in, the space because uh we talked about, this a little bit with the Stanford AI, index where they found that you know the, bulk of AI research is still happening, from the industry so I don't know what, are your thoughts is do they stand a, chance at staying staying the course, with this or I think there's certainly a, chance at it and I would argue uh it's, the same argument I've made in previous, shows where we talked on similar topics, is that we're seeing is the AI industry, is has been maturing these years uh at, an incredibly rapid Pace but we're still, seeing many of the things occurring that, we saw when when the software world was, really maturing over several decades and, the place where open source has really, really worked are in common touch points, where all organizations or many, organizations need a common thing and, they might build something, differentiated on top of that you know, for their revenue uh to drive, profitability but there's so much that, is underneath that point of, differentiation that they and many other, organizations can get the benefit out of, a lot of uh effort a lot of work a lot, of times they'll pay have paid employees, do it so there's a point where working, together and doing open stuff uh makes, sense for business and it drives, profitability it may not be your single, point of differentiation but if it's, anything under that why not you know why, not share the costs and uh pull, expertise for the best possible, foundations and so what I'm hoping is, that we continue to see that play out in, the AI space uh we're seeing you know if, you look at hugging face we've already, talked about the fact that uh a couple, of months ago they announced that they, were hosting a million models those are, all open source really really impressive, and so I think that there is a good, chance that a vibrant uh open Community, around AI can and will continue and it, will have a lot of 
corporate players, involved in it so I'm very optimistic in, that way, [Music], hey friends this episode of practical AI, is brought to you by our new friends, over at Plumb Plumb is a low code AI, pipeline builder that helps you to build, complex AI pipelines super fast you can, easily create AI pipelines using their, node-based editor iterate and deploy, faster and more reliably than coding by, hand without sacrificing control, deployment is easy pipelines are live, API endpoints eliminate the need for, constant code redeployment and debugging, by deploying complex AI pipelines as API, endpoints team collaboration is easy too, Plumb's declarative node-based editor, enables you to build quickly while, empowering non-technical roles to, iterate on what you've done without, breaking it you can build advanced AI, features get structured output every, time transform data and leverage, validated JSON schema to create reliable, high-quality structured output so Plumb is, built for builders early stage product, teams are using Plumb to go from idea to, validation in record time to get started, go to useplumb.com that's Plumb with a b, as in plumber to request access today, that's useplumb.com again useplumb.com, [Music], [Music], alrighty so as we uh as we change gears, just a little bit I had noticed uh a, couple of interesting things so I I, spend a lot of time talking to different, folks in the kind of in the Fortune 500, Fortune 100 world you know I I work at a, big company but I have a lot of friends, and former colleagues at other companies, and we chitchat about these things so, something has really come up a in a, whole bunch of conversations lately for, me and I thought wow if I'm talking, about it this much with with different, friends of mine it probably is uh is a, good topic to talk about on the show and, that's it's an interesting observation, and that is for those of you who are, familiar with the organization Gartner, uh and that organization does a lot of, um
prediction and kind of identifying, different technologies and things where, businesses uh can use them effectively, and famously they put out the Gartner, uh hype cycle and what that is is it is, a life cycle for technologies and they, basically across all technologies that, they track uh which is many they put, them on this hype cycle and track where, they are in their life cycle and the, short version of what that is it's a it, has a a steep upward curve that looks, like an ocean wave sort of that plunges, down into a trough behind it and then it, kind of comes up uh without so much, steepness midway to kind of a, sustainable plateau and so what they, would argue is that for any given, technology there is an innovation, trigger which is this you know rocketing, up on amount of hype associated with the, technology uh and that it gets to a peak, which they refer to as the peak of, inflated expectations where it's really, high everyone's talking about it but, maybe not a lot of productive work has, happened yet super cool uh you can, probably already recognize how AI might, fit into this how we've you know with, all the things we've talked about over, time but then those expectations have, not been met and people become, frustrated with the technology and it, plunges down into what they call the, trough of, disillusionment and and that's where, they kind of go wow I thought that thing, was so great but boy it really didn't, pan out we wasted a lot of money on it, and uh it's just not it's just not, really worked out well for us but then, calmer minds come along and they say, well wait a minute this technology has, some really good uses we just need to be, a little bit more practical pragmatic, about it and uh and and not lose our, heads over the hype and that's called, the slope of enlightenment uh and that, reaches a point that's called the, plateau of productivity where where, basically for the long term a technology, lives out the rest of its life cycle, being
a productive technology but, without all the craziness in the early, hype days so now that I've introduced, everyone to that to that life cycle I, going back to the conversation that that, I've been having uh repeatedly with, multiple people that I had noticed that, so many organizations especially large, organizations are just plowing money, into generative AI with mixed results, some are getting some decent results uh, with then you know within the context of, of it being early days in the corporate, sense but I noticed that after peaking and, holding a peak on the hype cycle for, quite a long time generative AI is now, beginning to plunge down into the trough, of disillusionment and what that would, imply according to Gartner is that, people are beginning to get a bit, frustrated and I would say that's, panning out because I've noticed many, articles and social media posts over the, last few weeks that people have been, kind of going this isn't going to lead, to generative AI this isn't quite as, good as we thought it's not magic it's, all the things that you see with people, being a bit frustrated with it and those, are increasing in the number that I've, seen so it got us talking um about what, does that mean in a corporate sense, especially when you have a technology, plunging down into the trough of, disillusionment and not only that but it's a, technology that has received the lion's, share of funding relative to other, technologies that go through the hype, cycle it's the coolest of the AI you, know over the last couple years the, coolest of the AI uh tools in the, toolbox and uh with corporations always, lagging they're now plowing money into, it and yet expectations are falling and, so not getting to the point of it will, obviously find that slope of, enlightenment and that uh plateau of, productivity eventually what does it, mean over the next few months as we're, looking at organizations that are still, plowing money into uh generative AI but, maybe not in the most
productive sense, or not as productive as they could given, the dollar value that they're putting in, so I've asked a lot of people what they, think of this Daniel what what are your, impressions of that you know it's an, interesting place to be if you're in, corporate America or corporate anywhere, these days I do think it's interesting, and I think that in some ways some of, these feelings are are healthy in, particular what I mean is I noticed, earlier on so maybe in 2023 or you know, last fall still talking to a lot of, people with a misconception that oh we, have somehow what's going to happen is, we're going to get access to a large, language model or we're going to get, access to a foundation model in our, company and somehow that kind of equates, to a solution to them like this this, will now be a thing that solves problems, and I think that of course is a bunch of, baloney because basically a model does, nothing it's it's how you implement it, how you integrate it how you use it that, actually makes it a solution and so I, don't know how else to describe that, other than people thinking that AI would, provide a different type of solution, than other technologies which are, softwares that people deploy within, their companies right and so some of, this I think is really healthy in that, people are realizing oh wait a minute, there's still a need to think about how, we integrate a call to a large language, model in the context of a larger, engineering project and actually there, is engineering around the edges of the, integration of AI in some ways different, than traditional software engineering, and in a lot of ways the same whether, that be hosting services or testing and, evaluating outputs or versioning the way, that we call these models or other, things there's a lot of those best, practices that are still really valid, from the software world and so to me, it's not so much and maybe this is just, just because I I of course have a vested, interest in the
technology because I'm, building with it every day but I think, it's not so much a disillusionment about, AI functionality in the context of what, people are building over the next year, but disillusionment around how that, integration happens whereas before it, was sort of this fuzzy thing that we're, going to bring AI in and somehow that's, going to like solve a bunch of problems, without really an understanding of how, you would actually see return around, that now people are saying well yes, we're going to bring in AI we're going, to bring in llms but that's going to, live still in a software stack that we, have Engineers developing and we're, going to develop that on some life cycle, and yeah there's still going to be if, anything maybe increased engineering, spend because there needs to be extra, engineering around these models and so, it is enabling efficiencies it is, enabling net new kind of features or net, new products but these are still, products driven by software that, requires engineering and so that, realization I think is a really healthy, one and so maybe that thing that has the, disillusionment wasn't really ever a, real thing that could have been gained I, guess I think that's a fantastic Insight, I think in a perfect world if we can, help people along kind of get through, their own trough of disillusionment very, quickly to climb back up onto the slope, of Enlightenment by following that, guidance is is essentially what I'm, getting at before diving into it I I, know that over time as I've talked to, people it reminded me of uh Amplified, beyond what I've heard before but of, previous technologies that were supposed, to solve everything you know blockchain, was going to solve the world if you if, you recall blockchain was amazing we, were going to have it everywhere it was, going to be everything and by you know, having since reach that plateau of, productivity at the end of the life, cycle blockchain has a fantastic place, in the technology 
world and a vibrant, community but it of course doesn't solve, all things and I think people need to, realize that uh the same with these, kinds of models is that they they can do, that so I know that one of the things, that I'm trying to get people to do is, to get through their own trough of, disillusionment quickly and start, recognizing in a really productive sense, how to fit it in with larger systems, we've always talked about it's really, the software system around these models, that makes it all work that makes the, value for the user and even extending, that if you're not in the cloud it's the, hardware if you're out on the edge it's, all about what do you have on the, hardware and how does it integrate and, how does it integrate with the systems, you already have in place and what, special value are you expecting, generative AI to bring to bear that you, haven't already been uh trying to design, and solve for and so I think as people, really stopped and they kind of got out, of their their New Year's Eve party, moment and uh they said okay I'm an, engineer I need to start being an, engineer again and thinking about it and, they thought well maybe it doesn't solve, everything like I thought but I can, identify some pretty cool things that it, would help on value and I'm hoping that, people will start focusing on that and, bring engineering to your point back to, bear on this and solving it but solving, it in that larger ecosystem that, includes the overall stack that you're, in the software uh and since we're, moving, ever more out onto the edge into all the, devices that we use out there beyond, just our cell phones that were always uh, ever present that we can find some good, uses so maybe this is a chance for a bit, of a resurgence yes of engineering but, also I like this triggers all sorts of, like data sciency things in my mind, because as a data scientist operating, like in the in that industry for however, long it was the thing right it it was, about
sort of choosing the right sets of, data tools and models to come up with a, solution or at least that's how I think, a lot of people viewed it and that may, have been a gradient boosting machine, plus a SQL database plus a some sort of, data pipeline and you know connecting, that into infrastructure and in, eventually into products that get out, into the world now some people might, view data science differently and and, have different views because it's sort, of an ambiguous term in and of itself, but I see one interesting thing on the, hype cycle that you were mentioning, there's a shorter time period that they, talk about this like composite AI, reaching the plateau than quote, generative AI which is interesting to me, and that I actually had to look up this, term because I have no idea what that, term means there's actually a number of, terms on the Gartner hype cycle that I, have no idea what they mean which I, wonder where they come from and I'm, right there with you so neither one of us, knows what composite AI is so I'm sure that, there are a few people out there that, are very familiar with it and are, snickering at us and we welcome your, education and feedback on such keep, going though yeah but I looked up the, term and this appears to just be like, almost a term describing data science, which is just like using different types, of AI or machine learning together to, solve issues or create solutions which, is sort of just is descriptive of data, science and kind of what it was for many, years so I don't know that it'll be, called data science maybe it's called AI, engineering I don't know but I do think, that we'll see kind of a return to this, idea of composite solutions and a, multifaceted way of looking at at doing, these things not just with GenAI but that, plugged in as an option into the, solution mix I couldn't agree more, despite being an AI podcast I know you, and I are always a little bit eye rolly, when it comes to all the hype around it, we uh
we try for our listeners to cut, through the hype uh and talk about it so, yeah a return to engineering and taking, advantage of some of these capabilities, in a holistic system uh to that is, highly productive and gives your end, users what they need is the way to the, future, [Music], [Music], hey friends Outshift Cisco's incubation, engine merges innovation with the art of, possible a launchpad for transformative, emerging tech Outshift blends startup, agility with corporate strength to, develop next-gen technologies from the, ground up in AI quantum technologies, cloud native and more their newest AI, innovation Motific addresses a critical, challenge in the rapidly advancing world, of GenAI bridging the gap between, concept and deployment this model and, vendor agnostic solution supports the, entire GenAI journey from assessment and, experimentation Motific accelerates, deployment from months to days while, safeguarding against GenAI security, trust compliance and cost risks all, while empowering business function and, IT teams to rapidly configure end-user, assistants powered by organizational data, Motific provides advanced customizable, policy controls to prevent unauthorized, access to sensitive data and helps, ensure compliance throughout the entire, process with deep visibility into, operational and business metrics Motific, enables you to track ROI optimize costs, and make informed decisions by offering, a centralized view Motific deters shadow AI, usage and empowers teams to innovate, responsibly so move beyond the, traditional constraints of AI, implementation utilizing AI deployment, that is both responsible and is, revolutionary ensuring your projects are, not just quickly launched but built on a, foundation of trust and efficiency visit, motific.ai that is M-O-T-I-F-I-C dot A-I, [Music], [Music], well Chris I have um maybe a related, question for you which it's not exactly, related to the hype cycle but I was I, was at my local co-working space for uh, for a fundraiser on Friday night and, shout out to Matchbox co-working if, anyone's listening but yeah so that was, fun but I got into a number of AI, related conversations as I usually do, and one of the things that one of the, guys I was talking to mentioned was you, know there's a lot of people talking, about how this sort of wave of, generative AI this wave of AI in what, people are referring to AI now is being, compared to kind of like the surge of, it's like the new internet right like, when the internet was brought about and, the type of change that that created and, his point was sort of well that it, definitely created a a kind of new, market the this space that was and is, the web and it wasn't just about, creating efficiencies and his point was, it seems like most people are using AI, to create efficiencies in in the, enterprise whether that's you know, helping reports or automate certain, functionalities that interns were doing, before or analyze a bunch of documents, summarize those answer questions get a, quick access to information and from, from his standpoint these are all kind, of efficiency gains and not necessarily, creating any sort of new market that, would be comparable to the huge shift, that happened when the when the web came, came about so I I was curious on on your, take of that may maybe it's slightly, related to the hype cycle stuff but uh, yeah I I think so there are two, different qualitative things there are a, lot of common traits between them but, because I'm slightly on the older side I, was an adult when the internet became, you know not when it was invented that, was actually was invented the same year, that I was born or a year before but um, at the point where it hit the general, public in a in a slight way I was in, college and by the time it
became the, thing I was well into the workplace and, so you know that qualitatively the, advent of the internet brought about a, brand new ecosystem upon which people, could do all sorts of new things I would, say it was like putting up it's like if, you're in a classroom it was like, putting up a chalkboard on the wall that, people can then go draw they can draw, mathematical equations they can doodle, they can do whatever they want but it, gave them a new medium upon which to, communicate and do stuff and interact, together and so it was that baseline AI, is a bit different AI is it has a, similar revolutionary quality obviously, but it's expanding on that connectivity, and saying how can we get you what you, need faster and more, intelligently you know with aid along, the way and so it's it's apples and, oranges but they're they're both in the, same fruit bowl a little bit of a, strange analogy there, yeah it's interesting that you bring up, the element of sort of creativity and, communication so there's probably some, parallels in the sense that I would say, there are many people treating these, sort of AI models and what they're, building with it as as a very creative, new I don't know if you'd call it a new, canvas on which they're painting but, definitely they're they're trying things, that are new and interesting maybe that, haven't been done before and are very, generative and but some of those things, even if you think about something very, much on the creative side like the Udio, type of thing that is the music, generator that we talked about a while, back I think you could make an argument, well that is an efficiency builder, because you could make a bunch of music, really quick for your YouTube videos or, a bunch of music really quick for your, ads or or whatever you're running online, but I think also some people are using, it as a creative element in and of, itself and doing maybe new and different, things or mixing things in ways that, people hadn't
done in the past and maybe, there's thing other examples that are, that are better in my mind where kind of, it's almost a both and type of situation, so I'm I'm always maybe a sucker for the, third option where it's not like, clear-cut on one side or the other side, but this third option of yes it is about, efficiency gains but I think there is an, element of net new things that will come, out of the the AI space with these, models that maybe we are hard to predict, right now like they would have been hard, to predict in the rise of the web right, it was probably hard to predict what an, Amazon would become agreed when people, were kind of goofing around and making, websites to do this or that maybe it, would have maybe it wouldn't have but, like the level at which that sort of, company has shaped culture at large not, even just like Commerce but culture at, large you know maybe we just don't know, yet is one way to put it I I have a, couple of thoughts on that uh first of, all there is creativity in these AI, models and some people argue against, that even today like they may see but, they'll say they don't invent wholly new, Notions and stuff they they take things, that are already out there and they they, combine them and stuff like that and, there may or may not be Merit to that, but what I can do is I can compare it to, myself and other humans that I know um, and I'm an extremely creative human but, I'm creative in I have strengths in, certain areas of creativity and big, weaknesses in others and I've spent a, lot of time trying to compare myself to, these tools that I'm using uh in that, way and so I am very good at creating, out of nothing a software system in my, head and understanding all the right, things to put in place to do it even if, it's a a fairly new way of doing things, and that's a strength that I have I'm, terrible at drawing a beautiful picture, or painting and getting that out even if, I can Envision it in my head I can't do, that and what 
it's made me realize, seeing these tools that I'm using that, is that are producing these these, capabilities that we're all using all of, us listening to this are using every day, these days is it's made me really, question the sanctity of creativity and, I think uh at the end of the day I'm a, big believer that everything is, mathematical whether you agree or, disagree with it that you know we're, we're a biology we're based on chemistry, which is based on physics which is based, on math and that kind of science stack, that I tend to think of us as whether, something is silicon and producing stuff, from its capability or or is biological, in nature I spend a lot of time going, how special is what we as humans create, so maybe we just kind of acknowledge, that we're bringing things to bear and, these new tools that we're all using, every day brings things to bear and we, can be more productive and capable by, combining our talents and doing stuff so, I don't tend to be in either Camp I, don't tend to be in the uh this is, amazing new uh imagination from, computers my God what's the world coming, to and I don't tend to be in the ba, humbug this is just more of the same, I've seen this before and there's, nothing uh Magic about it I'm a little, bit in the middle and maybe a little bit, more nuanced than that I it's a long way, around to an answer I apologize yeah I I, definitely understand that and I think, from my own worldview and and even my, faith perspective I would think of a, sort of different special way in which, humans uh exist but at the same time the, we have created a lot of creativity with, the tools that we create and, technologies that we create and I think, there is something beautiful about the, fact that we are acting out as creators, creating things that are creative in and, of themselves right and so we're we're, kind of acting out the I don't know how, philosophical we've got on this show um, up to this this point but um but yeah we, can afford 
a moment here at the end, exactly this is the end of the show yeah, I would say it's it's kind of a, beautiful thing that we as human beings, are creative and we create things that, in and of themselves could be conceived, to be creative also and we co-create, with those things I think that's that's, really cool and I think that's an, element of what we've done with, technology over time and so yeah I think, my perspective is maybe we just haven't, seen what we are to co-create with this, technology um moving into the future and, how that will shape culture I think, that's going to be a longer time period, than maybe the one- or two-year Gartner hype, cycle time period uh that we actually, see yeah this is shaping culture because, people know about it now but I I think, there's like a deeper way like people, knew about the internet at the sort of, hype of the internet coming out but, really how the internet would shape, culture and shape you know things like, what social media and other things have, done took a long time to realize so yeah, I I think that we have to wait a little, bit for that from my perspective great, perspective you have there and I would, encourage our listeners you know we had, a we had a little bit of moment of, finishing the show up with kind of, sharing our views on this but I think, this is important because we're all, going to see an increasing amount of AI, capabilities coming into our lives for, forever going forward at this point our, children and grandchildren the world is, changing faster now than it ever has so, these are thoughts that I hope you're, having as well evaluating how you see, yourself in this world sharing a world, with these technologies that are, increasing and if you haven't already I, hope you will join our Slack community, um where you can engage Daniel and, myself directly uh and share some of, your thoughts on how all this might work, going forward with creativity and with, these other topics we're having because, we'd
love to hear your thoughts uh and, for what it's worth we build these shows, off of a lot of those conversations that, happen in the Slack community where, people are showing interest uh so please, engage us there share your thoughts uh, including the philosophical ones don't, be shy and I'm looking forward to, hearing what uh what some of you out, there are thinking yourselves cool well, thanks for having the discussion Chris, and hope you can uh uh have a good week, as you enter into more uh more fun AI, work sounds good Daniel uh stay cool in, the in the hot summer weather since we, don't have that AI climate control quite, yet yeah I'll see you next week all, [Music], right all right that is practical AI for, this week subscribe now if you haven't, already head to practicalai.fm for all, the ways and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community sign, up today at practicalai.fm/community, thanks again to our partners at fly.io, to our beat freakin' residents, Breakmaster Cylinder and to you for, listening we appreciate you spending, time with us that's all for now we'll, talk to you again next time, [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Vectoring in on Pinecone | Daniel & Chris explore the advantages of vector databases with Roie Schwaber-Cohen of Pinecone. Roie starts with a very lucid explanation of why you need a vector database in your machine learning pipeline, and then goes on to discuss Pinecone’s vector database, designed to facilitate efficient storage, retrieval, and management of vector data.
Leave us a comment (https://changelog.com/practicalai/277/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Plumb (https://useplumb.com/) – Low-code AI pipeline builder that helps you build complex AI pipelines fast. Easily create AI pipelines using their node-based editor. Iterate and deploy faster and more reliably than coding by hand, without sacrificing control.
Featuring:
• Roie Schwaber-Cohen – Twitter (https://twitter.com/roieschwabco) , GitHub (https://github.com/rschwabco) , LinkedIn (https://www.linkedin.com/in/roiecohen)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Pinecone (https://www.pinecone.io)
• Pinecone | Blog (https://www.pinecone.io/blog)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-277.md) | 352 | 4 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I am the CEO and founder at Prediction Guard, where we're enabling AI accuracy at scale, and I am joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How you doing, Chris? Doing great today, Daniel. How's it going? I know we're recording leading into a holiday weekend. Here we are, and so many exciting things. Last week I got the chance to briefly attend the AI Engineer World's Fair, which is sort of prompted in certain ways by our friends over at the Latent Space podcast, and that was awesome to see. And of course, a big topic there was all things having to do with vector databases, RAG, all sorts of retrieval and search sorts of topics. And to dig into a little bit of that with us today, we have Roie Schwaber-Cohen, who is a developer advocate at Pinecone. Welcome. Hi guys, thanks for having me today. Really excited to be on the show. Yeah, well, I mean, we were talking a little bit before the show. Pinecone is, from my perspective, one of the OGs out there in terms of coming to the vector search, semantic search, embeddings type of stuff. Not that that concept wasn't there before Pinecone, but certainly when I started hearing about vector search and retrieval and these sorts of things, Pinecone was already a name that people were saying. So could you give us a little bit of background on Pinecone and kind
of how it came about, and what it is, position-wise, in terms of the AI stack? So Pinecone was started about four years ago, give or take, and our founder, Edo Liberty, was one of the people who were instrumental in founding SageMaker over at Amazon, and had a lot of experience in his work at Yahoo. And I think that one of the fundamental insights that he had was that the future of pulling insight out of data was going to be found, not exclusively but predominantly, in our capability to construct vectors out of that data, and that the representation that was produced by neural networks was very, very useful and was going to be useful moving forward. I think he had that insight way before tools like ChatGPT became popular, and so that really gave Pinecone a great edge at being kind of the first mover in this space, and we've seen the repercussions of that ever since. You know, with the rise of LLMs, I think people very quickly came to recognize the limitations that LLMs may have, and it was clear that there needed to be a layer that sort of bridged the gap between the semantic world and the structured world, in a way that would allow LLMs to rely on structured data but also leverage their capabilities as they are, and that is one of the places where vector databases play a very strong role. You know, vector databases are distinct from vector indices in the sense that they are databases and not indices, right? So an index basically is limited by the memory capacity that the machine that it's running on allows it to have, whereas vector databases behave in the way that traditional databases behave, and in the way that they scale. Of course, there's a completely different set of challenges, algorithmic challenges, that come with the territory of dealing with vectors and high-dimension vectors, that don't exist in the world of just simple
textual indexing and columnar data, and that's where the secret sauce of Pinecone lies: its ability to handle vector data at scale, but maintain the speed, maintainability, and resiliency of a database. As you're kind of comparing vector databases to indices, and then bringing that comparison in, one of the things that I run across still a lot is people, you know, vector databases are really incredibly helpful now, but there's still a lot of people out there who don't really understand how they fit in. They don't really get it versus NoSQL versus relational databases, or fine-tuning. And so they hear you say it does vectors and stuff like that. Could you take a moment, since we have you as an expert in this thing, and kind of lay out the groundwork a little bit before we dive deeper into the conversation, about what's different about a vector database that is storing vectors, versus storing the same vectors in something else? Like, why go that way, for somebody who just hasn't really ramped up on that yet? So the basic premise is, you want to use the right tool for the job, and the basic difference between a relational database, a graph database, and a vector database, or a document database for that matter, is the type of content that they are optimized to index. Meaning, a relational database is meant to index a specific column and create an index that would be easily traversable, and at scale it would be able to traverse it across different machines and do it effectively. A graph database does the same thing, only its world is nodes and edges, and it's supposed to be able to build an optimized representation of the graph such that it could do traversals on the graph efficiently. Vector databases are meant to deal with vectors, which are essentially
long, high-dimensional sets of numbers. Meaning, you can think of an array with a lot of real numbers inside of that array, and you can think of this collection of vectors as being points in a high-dimensional space. And the vector database is building effective representations to find similarities, or geometric similarities, between those vectors in high-dimensional space. And that means that, basically, it would be very effective at, given a vector, finding a vector that is very close, quote-unquote, to that vector in a very large space. So to do that, you need to use a very specific set of algorithms that index the data in the first place, and then query that data to retrieve that similar set of vectors to the query vector in a small amount of time, and also be able to update or make modifications to that high-dimensional vector space in a way that is not cost-prohibitive or time-prohibitive. And that's the crux of the difference between a vector database and other types of databases. Just to draw that out a little bit more, from your perspective, if you were to kind of explain to someone, hey, here I've got one piece of text and I'm wanting to match to some close piece of text in this vector space, what might be advantageous about using this vector-based search approach and these embeddings, in terms of what they mean and what they represent, versus doing, like, you know, TF-IDF has been around for a long time, I can search based on keywords, I can do a full-text search. There's lots of ways to search text; that concept isn't new, but this vector search seems to be powerful in a certain way. From your perspective, how would you describe that? Yeah, I think that the linchpin here is the word "embedding". The vector search capability itself is a pretty straightforward mathematical operation that, in and of itself, doesn't necessarily have
value, right? Basically, like other mathematical operations, it's a tool. The question is, where does the value come from? And I would argue that the value comes from the embeddings, and we'll talk about what exactly they are. We'll just plant a flag and say embeddings are represented as vectors, which is why the vector database is so critical in this scenario. But why are embeddings helpful in the first place? So embeddings come from a very, very wide set of neural networks that have been trained on textual data, and they create within them representations of different terms, different surface forms, sentences, paragraphs, etc., that map onto a certain location in vector space. The cool thing about embeddings is that it just so happens, and we can talk about why it just so happens, that terms that have semantic similarity have a closeness in vector space. And that means that if I search for the word "queen", and I have the word "king" embedded as well in my vector database, and I also have the word "dog", then because the word "king" is more semantically similar to the word "queen", I will get that as a result, and not the word "dog". And that allows me to basically leverage the quote-unquote understanding of the world that machine learning models, and specifically neural networks, large language models, have, in a way that I can't quite leverage from other modalities like TF-IDF and BM25, etc., that look at a more lexical kind of perspective on the world. And so when we talk about practical use cases, RAG comes up very, very frequently, and the reason for that is because we are in semantic space. A user interacts with the system in semantic space, so that means they ask the system a question in natural language. We can take that natural language and basically, again, quote-unquote understand the user's intent, and map it
again into our high-dimensional vector space, and find content that we've embedded that has some similarity to that intent. And so we're not looking for an exact lexical match; we're actually able to take a step back and look at the more ambiguous intention and meaning of the query itself, and match to it things that are semantically similar, if that makes sense. Would it be fair to say that, because the output of those structures being embeddings, and those are vectors, you're essentially storing it and operating on it in a closer representation to how they naturally would be? And so you're not doing a bunch of translation just to fit it into a storage medium and to operate on it, and therefore it's going to be quite a bit faster? Is that a fair way of thinking about it? Perhaps. In a way, we're compressing the representation into something very small, in a sense. So you can think of an image, for example. An image could be a megabyte big, right? We can get a representation that, in terms of its actual size as a vector, is orders of magnitude smaller, and we can use that representation instead of using the entire image to do our search. Now, it just so happens that, again, when we're doing embeddings for images, we get that same quality, where we're not looking at an exact match, like pixel-to-pixel matching with images that we have. We can actually look at the semantic layer, meaning what is actually in that picture. So if it's a picture of a cat, we would get as a result other pictures of cats that we've embedded and saved in the database, and that will come out of the representation itself of the embeddings that were a result of, say, a CLIP model that we used to embed our image. So I don't know if it necessarily means that
it simplifies things; in a lot of ways, it actually adds a lot more oomph to the representation, right? So you can actually match on things that you wouldn't necessarily expect, and that's kind of the beauty of semantic search in that sense: users can write something and then get back results that don't even contain anything remotely similar, in terms of the surface form, to their query, but semantically it would be relevant. [Music] Hey friends, this episode of Practical AI is brought to you by our new friends over at Plumb. Plumb is a low-code AI pipeline builder that helps you to build complex AI pipelines super fast. You can easily create AI pipelines using their node-based editor. Iterate and deploy faster and more reliably than coding by hand, without sacrificing control. Deployment is easy: pipelines are live API endpoints. Eliminate the need for constant code redeployment and debugging by deploying complex AI pipelines as API endpoints. Team collaboration is easy too. Plumb's declarative node-based editor enables you to build quickly while empowering non-technical roles to iterate on what you've done without breaking it. You can build advanced AI features, get structured output every time, transform data, and leverage validated JSON schema to create reliable, high-quality structured output. So Plumb is built for builders. Early-stage product teams are using Plumb to go from idea to validation in record time. To get started, go to useplumb.com, that's Plumb with a "b" as in plumber, to request access today. That's useplumb.com. Again, useplumb.com. [Music] Well, Roie, I really appreciate the statement about adding oomph to your representations. I think there's some type of good t-shirt that could be derived out of that. Yeah, for listeners who are just listening on audio, Roie's wearing a shirt that says "love thy nearest neighbor", which is
definitely applicable to today's conversation. Well, this is great. So we've kind of got a baseline, in a sense, from your perspective, of what a vector database is and why it's useful, in terms of what it represents in these embeddings and allows you to search through. You mentioned RAG. We've talked a lot about RAG on the show over time, but maybe for listeners this is the first episode that they've listened to. What would be the kind of 30-second, or some type of quick, "remember, RAG is X" from you, Roie? Right, so I love quoting Andrej Karpathy, with his observation on LLMs and hallucinations. Usually when people talk about RAG, they say, oh, LLMs sometimes hallucinate, and that's really bad. And Andrej Karpathy says, actually, no, they always hallucinate; they do nothing but hallucinate. And that's really true, because LLMs don't have any kind of tethering to real knowledge in a way that we can trust. We don't have a way to say, hey, I can prove to you that what the LLM said is correct or incorrect based on the LLM itself. We need to go out and look and search, and RAG, to me, is that opportunity where we can take the user's intent, we can tie it, using, for example, a semantic similarity search, to structured data that we can point to and say, this is the data that is actually trusted, and then feed that back to the LLM to produce a more reliable and truthful answer. Now, that's not to say that RAG is going to solve all of your problems, but it's definitely going to give you at least a handle on what's real and what's not, what's trusted and what's not, and where the data is coming from, where those responses are coming from. And it shifts the role of the LLM from being your source of truth to basically being a thin natural-language wrapper that takes the response and makes it palatable and easy to consume for a human being. Great. Yeah, I think a lot of
people, have done a sort of maybe they've done, even their own demo with sort of a naive, rag um maybe pulling in a a chunk from a, document that they've loaded into some, Vector database they inject it into a, prompt and they get some useful output, one of the things that I think we, haven't really talked about a lot on, this show we we've talked about Advanced, rag methods to one degree or another but, I know pine cone um along with with, other Vector database providers you know, offer more than a simple just like, search that's the only F function you, can do there's a lot more to it that can, make things useful in particular like, having uh you know you mentioned pine, cone mentions kind of name spaces that, can be used metadata filters sort of, hybridized ways of doing these searches, could you kind of help our listeners, understand a little bit so they may, understand here's my user statement I, can search that against the database and, get maybe a matched document but for an, actual application like an application, in my company that I'm building on top, of this what are some of these other key, pieces of functionality that may be, needed for for a Enterprise application, or for a production application that go, beyond just the sort of naive search, functionality in a vector database yeah, for sure so we can take this one by one, so metadata is definitely like one of, those capabilities that uh Vector, databases have that are above and beyond, what a vector index would provide you, right and basically what they are is the, again the ability to perform a filtering, operation after your vector search is, completed and so you could basically, limit the result set to things that are, applicable in the application context, right so you can imagine different, controls and and selection Boxes Etc, that come from the application that are, more uh set in stone so to speak they're, not just like natural language they're, categorical data for example um and you, can 
use those to limit the the result, set right so that you hit only only what, you want that is something that is uh, very common to see in a lot of different, production scenario, and could you give maybe an example of, that like uh in a particular use case, that kind of you've run across like what, what might be those categories or what, just to give people something Concrete, in their mind yeah for example like you, can imagine a case where I'm not going, to name the the customer but like you, can imagine that the case where uh you, want to perform a rag operation but you, want to do it on a corpus of documents, but not on the entire Corpus but rather, on a particular project within that, Corpus so imagine that you have multiple, projects that your product is handling, um like finance and uh you know HR and, whatever uh engineering right and you, want to perform that search and then, limit it only to a particular project, and in that case right you would use the, categorical data that is associated with, the vectors that you've embedded and, saved in Pine Cone to only get the the, data for that particular project right, that is like a kind of super simple, example um but it can go beyond that, right and move into like the logic of, your application so like you can imagine, a case where you know you're looking at, um a movie a movie uh data set right, like and you want to search through, different uh plot lines uh of movies but, you want to limit the results only to a, particular genre right that's another, case right like we could just use, leverage U metadata you can think of um, wanting to limit the results that to a, time span right a start and end date, right things of that sort that kind of, like have to do more with the nature of, when and and how in what category the, vector belongs into and not specifically, the contents of the vector right so, that's one thing name spaces are another, feature that we've seen as being like, incredibly important for 
multi-tenant, kind of situation and multi-tenant rag, has become kind of like a very strong, use case for us and that's where you, know you you see a customer and that, customer has customers of their own and, not one or two but many many many and in, that case you definitely don't want want, to have all of the documents that these, that all of the sub customers uh have to, be collocated in one index and in that, case you basically uh break them apart, right so they're still in one index so, management of the index overall is, maintained Under One Roof but the actual, content and the the vectors themselves, are separated out physically from one, another in namespaces they're sort of, sub indexes to that super index and, that's another feature that we've seen, um as being super important to our uh, Enterprise customers as you're looking, at these Enterprise customers and with, maybe most Enterprises uh you know, getting into rag at this point at some, level and trying to find use cases for, their business to do that I know you, know my company and lots of other, companies are doing this what are some, of the ways that they should be thinking, about these different use cases when, we're talking about rag um and semantic, search and multimodal things that that, pine cone does what are good entry, Pathways for them to be thinking about, how to do this because you know they may, have come up with their own their kind, of own internal platform it might have, some open source it might have some, products already in in play but maybe, they don't have a vector database in, play yet and so you know how do they, think about where they're at when you, guys are talking to them and you're, saying let me you know we've C we've, been talking in the show so far about, kind of the value of the vector database, and the kind these use cases but not, necessarily kind of the an easy pathway, so how do you on board uh Enterprise, people to take advantage of the goodness, on this yeah 
that's an excellent, question and in fact it's like a quite, quite a big of a challenge because it, ends up being you know a straightforward, pipelining challenge that has existed, from the beginning of you know the Big, Data era right like is how do I how do I, leverage all the Insight that is locked, in my data in a beneficial way right and, the sad part about this story is that it, always depends on the specific use case, and it's hard to give a silver bullet a, sort of light at the end of the tunnel, is that we've recently published a tool, called the rag planner and its purpose, is to basically help you figure out what, do you need to do to get from where you, are to an actual rag application and, follow through all of the different, steps that are required in between right, and sort of like understand like from an, understanding of like where your data is, stored how frequently it updates like, what the scale of your data is it's etc, etc to the point where it could give you, some recommendation as to like what are, like the steps that you have to do like, in terms of do you build a batch, pipeline do you build a streaming, pipeline pipeline what tools should you, be using to do those things what kind of, data cleaning are you going to need to, do what uh embedding models are you, going to want to use to do this right, like how are you going to evaluate the, result of your R pipeline so all all of, these questions are pretty complex so, what I would say is as a general rule of, thumb first of all like you have to, evaluate whether or not rag is for you, right so for example there are a lot of, situations where you know rag may be the, wrong choice right because the data that, you have right and the actual capability, of answering end user questions based on, that data does not match up right and, that's how you get to see you know cases, where you know chat Bots sort of spit, out uh results that may seem ridiculous, but nobody catches it um and companies, get 
into a lot of hot water water, because of it right there are a lot of, scenarios where it's much easier to, start that journey and to sort of, develop the muscle memory that's, required in order to set these things up, in a lot of these use cases you see like, a lot more internal processes definitely, in bigger companies right where like, there's a a very big team that just, needs access to its internal knowledge, base um in an efficient way but um it's, not a system that is going to be Mission, critical right in any way so like if if, a person gets a wrong answer it's not, going to be the end of the world, nobody's going to get sued right and so, what I would say is there's definitely a, learning curve here for big, organizations for sure um it's usually, recommended to develop again that that, internal knowledge of what the, expectation versus the realities on the, ground is going to be to have like a, really good idea of how you assess risk, in those situations and most importantly, how to evaluate the results that are, produced by those systems right because, a lot of people are like okay you build, the rag system great and now produces, answers I'm done right like we're we're, everybody's happy that's farthest from, the truth that you could possibly be, right like these systems need to be, continuously monitored and feedback, needs to be continuously collected to, the point where you can understand right, like how changes in your data and the, way that you're interacting with it, changes in large language models that, you're implying are actually affecting, the end result right um are going to be, and how your users are actually, interacting with the system overall, right how all of these things kind of, coexist and happen together and are they, working in the way that you want them to, and of course you want to do that you, know in a quantitative and not, qualitative way right so like there's a, lot of instrumentation that has to go, into it I'm curious is a 
is a little, follow-up to that and obviously leaving, specific customers out of it are you, tending to see more internal use cases, of rag deployment to internal you know, groups of employees and stuff maybe from, a risk reduction are you seeing more of, an external I'm going to get this right, out to my customer and try to beat my, competition to it like where do you, think the balance is as of today I think, that there's a widespread and I think, that it's a journey right like I think, that like the more Tech native companies, that we see that are more I would say, forward-looking or you know, technologically uh adapt to kind of do, these things quickly are more ready to, not only take risks but take educated, risks in this space with the evaluation, that comes with it right so like these, are not just like let's set forget but, they actually know what they're doing um, in those cases you see them going out to, production with very big uh deployments, that is our bread and butter I would say, at the moment right with uh companies, that are more traditional um that have, been like that are not necessarily, getting Tech native you see a more uh, cautious sort of progression which is, only to be expected right like I think, that's kind of like natural to see well, Roe I have uh something that I I saw on, your website which it was new to my, knowledge which I think is also really, interesting one one of the things that, I've really liked in experimenting with, Vector database rag type of systems as, an AI developer is having the ability to, run something without a lot of compute, infrastructure maybe in an embedded way, or an Onis index something that I can, spin up quickly something that I'm don't, have to deploy a kubernetes cluster or, something to to uh or or set up a bunch, of kind a client server architecture to, to set up and test out maybe a prototype, that I'm doing and I and I see pine cone, is talking about pine cone serverless, now which is really really 
intriguing to, me just based on my experience in in, working with people um these sort of, serverless sort of implementations of, this Vector search I think can be really, powerful so could you tell us a little, bit about that and how that kind of, evolved and and what it is what's the, current state and and uh how pine cone, thinks about the serverless side of this, so serverless came about after we've, realized that tying uh compute and uh, storage together is going to um limit, the growth factor that our bigger, customers are are expecting to see and, it basically makes uh growth kind of, prohibitive in the space right and so we, had to find a way to break apart these, two considerations while maintaining uh, you know the performance characteristics, that our customers are are expecting and, are used to having from our previous um, architecture so essentially like, serverless has been a pretty big, undertaking on our side to ensure that, you know the quality of the database is, maintained but at the same time we can, reduce cost dramatically for customers, to just give you like an idea where like, um for the same cost of uh storing about, I don't know around 500,000, vectors before um you can now store 10, million right and that's a a humongous, difference right like it's an order of, magnitude difference I think that like, did accomplish that right like there was, like a lot of very clever engineering, that had to happen because again now, having compute and storage separated, apart means that storage can become very, cheap but on the other hand it requires, you to handle the storage strategy and, retrieval in a lot cleverer way we have, a lot of content on the website that, kind of delves deeper into how exactly, technically that was achieved and we, won't be able to cover that given the, time that we have but like the basic, premise is that you can now grow uh your, vector's index to theoretically Infinity, but practically to tens of billions and, hundreds of 
billions of vectors without, the cost of the expense becoming uh, prohibitive um which is the main drive, for us with our bigger customers and, also with smaller customers like um you, can start experimenting we have like an, incredibly generous free tier that, allows you to start you know like you, said right like if I'm just a developer, on my own testing things and trying to, understand how Vector database Works in, my world it's very unlikely that I'll be, able to tap the entire free toer plan, even several months in with many many, vectors uh stored right um and it will, work the same way that our Pro, serverless tiers work in terms of its, performance so it's not like a reduced, capacity performance in any way um so, you get get to feel exactly what it, would feel like and the effort that's, required to stand it up is minimal to, negligible right you just set up an, account and the SDK is super super easy, to use yeah and and am I understand sort, of representing things right like in, terms of the you know massive so there's, a massive engineering effort I'm sure as, you mentioned to achieve this because, it's not a trivial thing but in terms of, the user perspective like if people use, pine cone before and they're using pine, cone now you already mentioned the, performance is the is the interaction, similar it's just this sort of scaling, and sort of from the user perspective, scaling and pricing and maybe also you, could touch on so pine cone is people, might be searching for different options, out there and some of them would require, you to have your own infrastructure or, um some of them are hosted Solutions, pine cone at least in in its kind of, most uh typical form would be hosted by, you and yeah could you just talk a, little bit about the user experience pre, poost serverless and then also kind of, the infrastructure side like what do, people need to know and what are the, options around that in terms of what, happened pre and post so before um, 
serverless there was like a lot of, possible configuration choices that you, could do right like so like there was in, fact a lot of confusion with our users, you know like what exactly is the best, uh configuration for me should I use, like this performance kind of uh, configuration should I use like the, throughput optimized configuration what, exactly am I supposed to use and like, the pricing mechanism was a little bit, convoluted and I think that like with serverless, the attempt there was to simplify as, much as possible uh and to make it, really really dead simple right for, people to start and use but also grow, with uh with us right so again like I, said like the bottom line is you know, the external view into what pine cone, offers may have looked pretty similar, right so like if you're just a user you, may say like hey like I got like a, cheaper pine cone bill this month and, you know like I can store a lot more uh, always a good thing right always a good, thing right but not super amazing right, like but the end result is the question, is like what happens when you know you, can actually store a lot more vectors, right what what does that unlock for you, and and again I think that like at the end, of the day right like the way that that, we see pine cone and this this may help, us uh kind of talk about like what's, next for pine cone is a place where your, knowledge lives right and and allows you, to build knowledgeable AI applications, right and having more knowledge is, always net positive right in that, context right so the assumption is that, like you know as AI applications grow, they accumulate more and more and more, knowledge and they become that much more, powerful with any additional knowledge, that you can stuff into them and so, there's like actual value beyond the, fact that you can store more right like, and and it's cool right your application, actually becomes more powerful because, it can handle more types of of use cases, it has a better ability to be
more, accurate and respond truthfully to a, user when they are interacting with it, and so I think that like in general like, there's like this uh blatant kind of, value that is only going to be apparent, once people really experience what it, means to have you know a million, documents uh that are stored in Pine, Cone versus 10 million documents that, are stored in Pine Cone and that effect, is going to be very powerful I think, that's the majority of the of the, benefit that I see maybe that gets to, the next thing which I was going to ask, about which I also see the announcement, around pine cone assistants and I'd love, to hear more about that like what is, from of course um sometimes maybe that, can be loaded language also for people, in the in the AI space but in terms of, of this assistant functionality from, Pine Cone what are you trying to enable, and and where do you see it headed so, that has to do with the question that, Chris had before which is like what is, the journey right for customers right, and I think that like as a general, purpose that we had around assistant was, to reduce the friction between me having, a bunch of documents that I want to, interact with with an LLM or an AI in some, form and capacity to the point where, that actually works right there are a, bunch of ways of going about it right I, think pine cone wants to bring you know, on top of our very robust Vector, database a very smooth experience that, lets users uh really do very little and, get all the value out of pine cone, without having to think too much about, it so for that purpose we don't only, have the ability to take your documents, and then embed them and uh you know do, the end-to-end process of creating that, completion endpoint for you right we're, also the ones providing the actual, inference uh layers as well right and so, it's not going to be again but if if you, asked like me like this questions like, how do you build a rag pipeline right, like a year ago even right
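(The year-ago, do-it-yourself pipeline he goes on to describe — extract text, chunk it, embed the chunks, retrieve the best matches, build a prompt — can be sketched minimally. Everything below is invented for illustration: a crude word-overlap scorer stands in for a real embedding model, and the final LLM completion call is omitted.)

```python
def chunk(text, size=12):
    # Naive fixed-size chunking by word count; real pipelines split on
    # structure (pages, headings, sentences) and overlap the chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, question, top_k=1):
    # Crude lexical-overlap scoring standing in for embedding similarity.
    q_words = set(question.lower().split())
    scored = [(sum(w in q_words for w in c.lower().split()), c) for c in chunks]
    return [c for _, c in sorted(scored, key=lambda pair: pair[0], reverse=True)[:top_k]]

def build_prompt(context_chunks, question):
    # The retrieved chunks become grounding context; the actual LLM
    # completion call is omitted here.
    return ("Answer using only this context:\n"
            + "\n".join(context_chunks)
            + f"\n\nQuestion: {question}")

docs = ("Serverless indexes separate storage from compute so storage stays "
        "cheap. Billing is based on what you store and query rather than "
        "provisioned capacity.")
question = "how does serverless billing work"
chunks = chunk(docs)
prompt = build_prompt(retrieve(chunks, question), question)
print(prompt)
```

A managed knowledge assistant collapses all of these steps behind one endpoint, which is exactly the friction reduction being described.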
I'd have to, tell you hey you have to go to like some, embedding provider you have to find, someone who would do like your uh you, know PDF extraction or you know take, the data and chunk it and do all this, stuff right no more right like the, reality here is you can take a set of, documents throw them at this knowledge, assistant and the rest is kind of quote, unquote magic right it just happens for, you behind the scenes while maintaining, the quality that you want to get and at, the scale that pine cone can deliver, right which is again another, differentiator so like I said before, Pine Cone is built to withstand hundreds, of billions of documents right that of, vectors that you would store with us um, and still be able to produce responses, in a in a reasonable amount of time and, that's true for a knowledge assistant, because uh assistant sits on top of the, vector database so it sounds like that, may be uh a really good way especially, for small organizations you know we, talked about Enterprise and they have a, certain infrastructure and teams to go, with that but there's so many more small, organizations out there that have very, little in terms of of trained people, necessarily uh to do that and they have, you know they don't have all the, infrastructure in place and they're, looking you know with assistants and, serverless they're looking for simple, ways to onboard and and get utility out, of it uh would you say that the, combination of serverless and assistants, and then maybe whatever they might have, in AWS or whatever platform that they're, using is kind of just made to gel easily, for them so they can get to something, working pretty quick yeah I mean at the, end of the day like if you think about, it right like like the process shouldn't, be as complicated as it is right it's, just that there are many parts to it and, nobody picked up the gauntlet of saying, like hey we'll just do it all you know, what I mean uh because all of it is, quite
complicated to do right right and, so um yeah like I think that like, initially we'll see smaller, organizations kind of you know picking, that up because they don't have the, resources But as time moves along you, know you're going to have to ask, yourself even as a bigger organization, do I want to own this pipeline right is, that something that I need to own right, and what value am I getting from, actually owning all this right and so um, yeah like it would be interesting to see, so like this is a very very new product, still in public beta and it will be, interesting to see how the the market, kind of reacts to it and and and sort of, experiments with it but my bet is that, um as time progresses and uh knowledge, assistants themselves become more, capable of doing things maybe Beyond rag or, Beyond Simple rag quote unquote um that, you know like more more and more uh, sophisticated organizations might want, to actually give it a try and that, really brings us maybe to a good way, that that we like to to end episodes, which is asking our guests to sort of, look into the future a little bit, and not necessarily predict it because, that's always hard but to look into the, future and and kind of uh what what are, you excited about um it could be related, to Vector databases specifically or pine, cone specifically but maybe it's more, generally in terms of how the AI, industry is developing the sorts of, things that you're seeing customers do, that are encouraging whatever that is, what sort of keeps you um excited about, where things are headed going into the, the rest of this year I'm excited about, the fact that we're seeing sort of like, a Resurgence of what you would call, traditional AI kind of come back into, the fold um in the form of for example, graph rag I think that like the the, notion here is that you know for the, longest time and I think it's been since, like you know GPT-3.5 um you basically saw like this like, uh I think over indexing on llms
right, um for good reasons right like they're, super exciting they're they're very, powerful right and they can do really, really cool things right but um with, that said um it's as if every every, other technology that has ever existed, before just like dropped off the face of, the Earth and nobody nobody has ever, like talked about like okay wait so what, can we do with those things and llms, right like and where do llms fit in the, bigger picture I think that Vector, databases kind of like put llms in their, place a little bit in the sense that you, know what I mean like you're not, thinking of the llm as being the end all, be all like this is the only tool that, we need um I'm very excited to think of, llms as these operators or agents um, that can tap into the capabilities that, exist in other systems and I think that, what we're going to see more and more, and more is that people are going to, figure out like in what subset of the, ecosystem does each tool belong so what, set of problems does each tool solve um, for example like a vector database, solves like the problem of bridging the, gap between the semantic world and the, structured World a graph database can, solve problems like reason like formal, reasoning over well structured data uh, relational databases can solve a whole, set of different problems that they used, to be solving like aggregation etc etc, and then uh you can imagine that llms, and agents can sit as sort of like an, orchestrating mechanism and a natural, language interface mechanism on top of, all those things together um and that's, what I'm excited to see like it's it's, kind of when like the the community as a, whole is going to like wake up from its, like llm fever dream and sort of realize, it like there's other things out there, um and and and realize that it has so, many more powers that it could, wield um to make really exciting um, applications that's awesome well thanks, for painting that uh picture for us uh, Ro and and
for taking time to dig into, so many um amazing insights about about, Vector databases and embeddings and and, knowledge management in general so uh, yeah appreciate what you all are doing, at Pine Cone and um hope to have you on, the show again to to update us on on all, those things thank you so much thanks, for having, [Music], me all right that is practical AI for, this week subscribe now if you haven't, already head to practical AI FM for all, the ways and join our free slack team, where you can hang out with Daniel Chris, and the entire change log Community sign, up today at practical ai. fm/ Community, thanks again to our partners at fly.io, to our beat freaking residents, breakmaster cylinder and to you for, listening we appreciate you spending, time with us that's all for now now, we'll talk to you again next time, [Music] |
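(The closing picture from that conversation — LLMs as an orchestrating, natural-language layer over specialized stores: vector for semantic search, graph for formal reasoning, relational for aggregation — amounts to a routing problem. A deliberately oversimplified sketch; the keyword rules and tool names below are invented for illustration, and a real agent would use the LLM itself to choose the tool.)

```python
def route(question):
    # Toy intent routing by keyword; stands in for an LLM-based tool chooser.
    q = question.lower()
    if any(w in q for w in ("similar", "about", "like")):
        return "vector_db"      # semantic search over embeddings
    if any(w in q for w in ("connected", "related", "path")):
        return "graph_db"       # formal reasoning over relationships
    if any(w in q for w in ("total", "average", "count")):
        return "relational_db"  # aggregation over structured rows
    return "llm_only"           # fall back to the model alone

print(route("find reports about pump failures"))      # vector_db
print(route("how are these two assets connected"))    # graph_db
print(route("what is the total downtime this year"))  # relational_db
```

The point of the sketch is the division of labor, not the routing heuristic: each backend answers the class of question it is built for, and the LLM sits on top as the interface.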
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Stanford's AI Index Report 2024 | We’ve had representatives from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) on the show in the past, but we were super excited to talk through their 2024 AI Index Report after such a crazy year in AI! Nestor from HAI joins us in this episode to talk about some of the main takeaways including how AI makes workers more productive, the US is increasing regulations sharply, and industry continues to dominate frontier AI research.
Leave us a comment (https://changelog.com/practicalai/276/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Plumb (https://useplumb.com/) – Low-code AI pipeline builder that helps you build complex AI pipelines fast. Easily create AI pipelines using their node-based editor. Iterate and deploy faster and more reliably than coding by hand, without sacrificing control.
Featuring:
• Nestor Maslej – Twitter (https://twitter.com/nmaslej) , LinkedIn (https://www.linkedin.com/in/nestor-maslej-b565b779)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Stanford HAI (https://hai.stanford.edu/)
• 2024 AI Index Report (https://aiindex.stanford.edu/report/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-276.md) | 774 | 14 | 0 | [Music], welcome to practical AI if you work in, artificial intelligence aspire to or are, curious how AI related Tech is changing, the world this is the show for you thank, you to our partners at fly.io the home, of, changelog.com 30 plus regions on six, continents so you can launch your app, near your users learn more at, [Music], fly.io welcome to another episode of, practical AI this is Daniel Whitenack I am, the founder and CEO at prediction guard, and I am joined today to talk about a, very interesting uh report that we've, we've talked about on the show today, the AI Index Report from 2024 from, Stanford University's, Human-Centered AI Institute um I'm joined, by Nestor Maslej who is the research, manager at the Stanford Institute for, human centered AI welcome Nestor I'm super, excited to be here it's great to have, you back um as I mentioned we've we've, talked on the show before about the AI, index report in previous years but for, those that haven't had that background, or listened to those episodes could you, just give a little bit of a sound bite, about what the AI index report is sure, so the AI index is an annual report that, currently is in its seventh edition that, aims to tell the story of really what's, going on in AI from a diversity of, perspectives we look at Trends in, technical performance so what can the, technology do now that it wasn't, necessarily able to do five years ago we, look at Trends in the economy how are, businesses integrating this, tool how much are investors investing in, this tool we look at Trends in, policymaking how are policy makers, responding to what's going on in the, space and we don't just look at those, things we study research and development, ethics public opinion diversity and I, think really we aim to be kind of an
encyclopedia, of what's happened with AI in the last, year that policy makers Business, Leaders or really anybody that needs to, know and understand what's going on in, the space can turn to when they have, questions about artificial intelligence, interesting yeah and could you tell me a, little bit about the The Institute for, human centered AI kind of why the, Institute is undertaking this and how it, kind of fits into maybe the The Wider, set of things that the Institute does, yeah of course I mean funnily enough, it's actually very recently we, celebrated the fifth anniversary of HAI, so congratulations founded in 2019 and, we're kind of five years into this, wonderful journey and I think the, Institute really exists to try to, advance AI research education and policy, in a way that will fundamentally improve, The Human Condition the creators of The, Institute I think came together really, because they felt that AI could be an, incredibly groundbreaking technology it, could be something that could really, Elevate the potential of humans but in, order to do that we'd have to think very, carefully about how we actually want, to develop some of these tools and, that's what we spend our time thinking, about at the Institute realistically, when it comes to AI the ways in which, this tool is going to develop is not, only going to depend on computer, scientists and how hard they're working, but it's going to depend as well on, policy makers on Business Leaders and on, the regular public so those individuals, as well need to be given a tool that, allows them to identify and understand, how the space is evolving and developing, that's what we aim to do at the index, the AI index is the report that we feel, can give these individuals the capacity, to make the decisions that they need to, be making about this technology yeah and, the index has been published for like, you say a number of years and of course, this I'm imagining that this last year
last couple years have, been kind of interesting in that on this, show in our conversations in, conversations across the industry, industry generative AI has has dominated, a lot of those conversations but doesn't, necessarily mean that that is kind of, the quote unquote AI that is impacting, humans in their everyday life yeah, there's more to life than a generative, AI there's more AIS yeah exactly so how, did you come at the report this kind of, time around with the acknowledgement, that of course this is transformative, what we're seeing in many ways but also, it's not it's not the only like you know, when you talk about a report about AI, it's not necessarily just a report about, large language models I assume yeah I, think that was important for us I mean, we certainly added some new data points, this year on generative AI because we, felt it it had kind of come to the, surface and it was something that we, needed to chat about but in a lot of the, chapters we we I think for us it's, incredibly important to exactly as you, said draw that distinction between, Foundation models generative Ai and, non-generative AI systems and for, example in the research and development, section we have information not only on, the number of foundation models that, different countries are producing but, also how many notable machine learning, models these countries produce the kind, of idea being that it is possible for a, machine learning model to be notable and, not necessarily be of the generative, type similarly when it comes to the, economy section we track total AI, investment not just generative AI, investment and we even included a, completely new chapter this year on some, of the ways in which AI is interfacing, and engaging with science and that, touches on a lot of these kinds of, developments that relate to some of the, ways in which AI is used in, non-generative ways but is still really, moving us forward and leading to a lot, of really exciting advancements yeah, 
that's um that's interesting and to give, a sense of how how the index is put, together um you know I'm sure people, have seen different surveys that are out, there um different takes on on AI and, where it's at in terms of the approach, of the of the Institute what is the kind, of main research mechanism that goes, into uh the report and like how is it, how is it developed what's involved, Who's involved that sort of thing yeah, great question it's kind of a, two-pronged effort in that we collect, data ourselves for particular questions, that we feel there isn't already good, data for so I think we try to be, strategic and you know we find that if, there's someone else in the research, community that is already collecting, data that we find to be relevant and, interesting we don't try to necessarily, duplicate their efforts and as such we, partner with a lot of data vendors like, Accenture GitHub McKinsey LinkedIn Studyportals, they all collect data that we, find to be interesting and then we work, to then include it into the AI index, report but again for certain topics, where we feel that there isn't enough, data for example we felt that there, wasn't a lot of good data on the number, of AI policies and legislations that, were being released on National levels, we endeavored to collect some of that, data ourselves and in terms of the, research agenda it's set by the AI index, steering committee so we're very, privileged to be advised by a diverse, Committee of AI thought leaders people, that were really are very influential in, commenting on what kinds of things are, going on in the AI space people like, Jack Clark who's one of the co-founders, of Anthropic people like Erik Brynjolfsson who, is arguably one of the world's leading, economists on AI and people like James, Manyika who leads AI research efforts at, Google we work with these individuals to, discuss and identify what kind of topics, we want to track and we figure out as, part of the process where each report,
should be going and what kinds of things, we need to be chatting about awesome, yeah so it's it's not just AI, researchers um you know, that are that are pouring, into uh this it's it's actual AI people, from industry academics but also spread, across other other areas like economics, and yeah definitely a lot of diverse, perspectives and I think also, perspectives from a lot of people that, have been in AI for a long time because, we see this with a lot of Technologies, where AI announced itself in 2022 ChatGPT, came out and then all of a sudden, everybody and their mother became an, expert on AI I think some of the people, that we really work with have been in, the space for a very long time and have, kind of seen it ebb and flow so can, especially offer a lot of very valuable, perspective on where we are in this, moment and kind of contextualize and, situate that uh in a very nice way yeah, well I I do want to get into to some of, the specific points um raised in the in, the report but before I do that I I'd, love to ask especially with these sorts, of things um as you you've been, thinking about this deeply looking at, all the data and and all that was there, any one particular thing that stood out, as kind of surprising or, counterintuitive or just stood out, generally um in the kind of, year-over-year progress of the report um, looking at at this year's anything that, stood out as surprising to you um in, particular I think thinking about the, cost of some of these Frontier models, one of the things we did in this year's, report is we partnered with Epoch AI, which is a great AI Research Institute, to do some estimates on how much it, costs to train some of these Frontier AI, systems and I think we kind of all knew, in the back of our minds that these, systems were going to be very expensive, and we eventually crunched the numbers, and you know it's one thing to kind of, anticipate that and you actually see the, numbers that you
know GPT-4 is costing, close to $80 million Gemini is costing, close to 190 million and you kind of see, this trend line which is just kind of, going up and you know almost, exponentially so it really kind of puts, into perspective how far we've come and, I think it also poses a lot of, interesting questions about the future, how much further can we go I think a lot, of these companies are betting very, heavily that the scaling laws are going, to hold that they can continue pumping, more and more data into these systems, and that these systems are going to, respond with improved capabilities and, even new, capabilities and I think they're all, making interesting bets and it's going, to be very interesting to see if that's, going to hold and you know what the, future may look like is is something, that I'm going to really be watching, with a lot of open eyes and an, interested perspective yeah yeah it, seems like it's it's getting to the, point where and this was something that, was also highlighted that I saw in the, report about industry dominating, Frontier AI research and I guess that's, not like that's connected in some ways, to what you were just talking about, because the I'm thinking back to my own, PhD and like the five of us in our grad, student office right it's no group like, that that's going to go about kind of, the training of one of these new models, at this scale certainly there have been, efforts I guess from efforts like Bloom, and others that have brought together, researchers from around the world to, work on models in almost like a, collaborative way like a CERN type, effort or something like that which, requires a huge budget but at least as, far as I can tell from the the data and, what you're talking about is you know, there's those budgets are increasing, these models are expensive both to, create and to run at scale and so you I, I assume that that is connected then to, that observation of Industry kind of, leading the front on the
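(Taking the rounded figures quoted above at face value — GPT-4 close to $80 million, Gemini close to $190 million — and extrapolating that multiple forward is purely illustrative, not an estimate from the report, but it shows how quickly an "almost exponential" trend reaches billion-dollar training runs.)

```python
def years_to_reach(cost_now, target, annual_multiplier):
    # Compound growth: how many years of compounding until the
    # training budget crosses the target.
    years = 0
    while cost_now < target:
        cost_now *= annual_multiplier
        years += 1
    return years

gpt4_cost = 80e6     # "close to $80 million" (rounded figure from the conversation)
gemini_cost = 190e6  # "close to 190 million"

# Hypothetical: assume budgets keep multiplying at the GPT-4 -> Gemini
# ratio each year (a pure extrapolation, not a claim from the report).
multiplier = gemini_cost / gpt4_cost
print(round(multiplier, 2))                          # 2.38
print(years_to_reach(gemini_cost, 1e9, multiplier))  # 2
```

On that invented growth rate, the billion-dollar threshold discussed next arrives within a couple of model generations.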
research side, is that fair yeah I think pretty, directly connected yeah I think it's I, think a lot of these companies know that, feeding more data into the system leads, to better performance or at least it has, so far and I think these companies are, betting that that trend is going to, continue so they're pouring money in and, kind of hoping that we're going to see, improved capabilities and I think part, of the reason these kinds of estimates, were so eye popping to me is because, even coming into the report you know I, was kind of aware that you know as you, said no grad student could build an, llm you know train it on their laptop in, a way that would kind of compete with, some of the the language models that, these big industrial players release but, we're kind of getting into territory, soon where you know you'd even have to, kind of wonder of the kind of industrial, Giants you know who can really afford to, be building these things right if we're, going to get into territories where, they're costing a billion if not more I, mean 100 million is a lot of money but, it's a lot less money than a billion, dollars right and the scale of who can, kind of get involved at that level is a, lot different and varies fairly, substantially and that has a lot of, important implications for how AI, research is being done and what kind of, topics people focus on I mean I think, industry actors do a lot of really, valuable research and they do contribute, a lot of interesting insights but I mean, at some point you need to pay off the, investors and you need to make a product, that is commercially viable and that, could kind of shift incentives in ways, that again coming back to this Mission, element that we talked about with HAI, might not necessarily align with, building AI systems that do the most to, really further, [Music], Humanity hey friends this episode of, practical AI is brought to you by our, new friends over at Plumb Plumb is a low, code AI pipeline Builder
that helps you, to build complex AI pipelines super fast, you can easily create AI pipelines using, their node-based editor iterate and, deploy faster and more reliably than, coding by hand without sacrificing, control deployment is easy pipelines are, live API end points eliminate the need, for constant code redeployment and, debugging by deploying complex AI, pipelines as API endpoints team, collaboration is easy too Plumb's declarative, node-based editor enables you to build, quickly while empowering non-technical, roles to iterate on what you've done, without breaking it you can build, Advanced AI features get structured, output every time transform data and, leverage validated Json schema to create, reliable highquality structured output, so Plumb is built for Builders early, stage product teams are using Plumb to go, from idea to validation in record time, to get started go to useplumb.com that's, Plumb with a b as in plumber to request, access today that's, useplumb.com again useplumb.com, [Music], well um Nestor some of the things that, you mention in the in the report are, related to generative AI some of them, are not one of the things that I think, is interesting about this particular, index is talking about some of the, things beyond the technology that might, impact you know not only the the, direction of of development but actually, does potentially impact practitioners, like like some of those listening to, this podcast and one of those things, being the kind of sharp increase in, regulation in particular in the United, States um as related to AI I'm wondering, from your perspective especially as this, index has has looked at things over a, number of years kind of every year we're, talking on this show about like oh when, when and how are the regulations coming, down what from your perspective can we, discern from from this kind of sharp, increase in regulation in the in the US, and and the maybe the practicality of, how that will actually influence,
developers yeah so I would say a couple, of things I think first big takeaway for, me is that we're probably going to see a, lot more action on the state level and, we already are than on the federal level, I mean if you look at the total number, of proposed AI related bills in the, United States, on the federal level and the state level, there's been a pretty massive increase, in the last seven eight years in both, counts but there are substantially more, state level bills that have been put, into law I think looking at my notes, Here Close to 40 on the state level in, 2023 compared to just one on the federal, level in 2023 and I think this again is, not surprising I think it takes usually, a little bit longer for there to be, consensus on the federal level but it, might mean that we can usefully look, to States as kind of being one of the, first barometers of what's going on with, the regulation and what we might see, down the pipe in the federal level I, think a second point is that when you, actually look at the Regulatory Agencies, they're all passing more AI related, legislation and AI related regulation, but the regulation itself is coming from, a diversity of bodies so it's not just, like the copyright office is kind of you, know in all the attention and passing 30, AI related regulations the Executive, Office of the President is getting in on, the fun the Department of Commerce the, Department of Health and um Human, Services there are so many of these, regulatory agencies that are thinking, very deeply about this tool and I think, this reflects the fact that AI is a, general purpose technology it doesn't surprise me, but I think it's also a note that you, know if you think to yourself that oh, I'm not a computer scientist therefore I, probably don't need to worry about how, this is going to affect my life I would, kind of you know urge you to maybe walk, back that assumption because Regulators, across different spaces are starting to, become much more actively
involved in, passing AI related regulation and I, think a lot of us are going to be, affected much sooner than uh than we, would like to believe and I mean coming, back to the state level Point as well I, think you're seeing now a lot of very, kind of hot and contentious debates, especially in California around SB 1047, I think you're going to see that in, similar states where you know I think, four or five years ago the regulation, that we did see was quite kind of, expansive and let's say a lot lower, Stakes it was very often what we would, call more expansive regulation that was, saying let's explore how we could use AI, or let's kind of Empower AI researchers, whereas now the regulation is starting, to become a lot more restrictive it's, putting in rules about how these, Technologies can develop, and how people can use these, Technologies and obviously when you go, with that approach there tends to be I, think much more debate about getting it, right and that leads to a lot of kind of, fiery opinions on either sides of any, particular regulation I guess there was, this sharp increase in that in this past, year that you that you saw there's, probably some people out there as you, mentioned there's a variety of mixed, opinions about this of whether that's, actually kind of drives more, market share for these closed providers, who have already built up some of that, market and maybe have some of that, influence and Open Access um or open, source community that maybe is doesn't, have the ability to or or the the, willingness to to put in a lot of effort, into this compliance side was there any, was there any thought amongst the group, around how not only this uh would impact, so regulations would come down they, would be maybe enforced potentially um, in impactful ways but how that would, kind of shape the development of this, technology moving forward yeah I mean I, think that's an open question we don't, do a lot of I would say predictive work, in the index I 
think we try to mostly, look at what we know to be true so far, but I mean certainly I think at least if, you look in California with the debate, going on with the bill that I had, previously mentioned SB, 1047 I think yeah it's this kind of, question of open versus closed source, I think that particular bill wants to, put certain requirements on models that, are above a certain compute threshold, and some of these requirements are, so stringent that you know conceivably a, lot of Open Source developers wouldn't, necessarily be able to kind of meet all, of them and you know whether or not, these regulations are backed for, commercial reasons as you kind of, mentioned some of these industrial, players could conceivably have, commercial incentives to support, regulation that maybe you know, compromises the open source players, but I mean I do think as well there's a, lot of people in the kind of AI safety, community that very ideologically, believe that AI poses a serious, existential risk and that we should, really be hyper cautious about how we, scale and I don't think it's necessarily, the position of the index to kind of say, we support one position versus another, but I do think what I would say is that, we're now getting into, a moment where again policy makers are, starting to think about this Business, Leaders are starting to think about this, and we really need to think carefully, about what we do and what we put into, law because you know once it's law it's, going to shape incentives and it's going, to shape how the community develops and, I think there are compelling arguments, made by people on both sides um but it's, difficult to know what the future holds, and I think, it's hard to know whether this is the, right move or that is the right move but, I think we always have to try to hold, ourselves to some kind of standard of, intellectual accountability and kind of, acknowledge that you know we can't, always get it
right but we try to we, need to try to have as thoughtful and as, well balanced of a perspective as, possible yeah I think that's a really, good perspective you mentioned in when, you were answering that about kind of, some of these regulations being tied to, kind of how, AI will develop moving forward and in, particular as tied to the size and scale, of models being developed I was really, interested that you know there is a, portion of the the report that um is, just titled will models run out of data, which I think is is really interesting, um we talked a little bit about these, models becoming more and more expensive, but of course there's an element of this, which also is the scaling of the data, required to train particularly these, generative models but other models as, well maybe computer vision models or or, other models and that is something that, you know I I definitely get the question, every once in a while from from people, that have heard something like oh you, know these models have already sucked in, all of the internet of of data right, like what's what's left to train on um, and then you've got another perspective, that I hear sometimes which is well now, we're just going to fill up the internet, with generated data which Cycles back, into training data sets so yeah I I'd, love for you to bring any perspectives, you have on this question of will models, run out of data I think it's a a, question that that's coming up, frequently for for people yeah it's hard, to know I mean uh that's that's a, problem you want to kind of go on these, podcasts and give kind of clear concise, answers but in a lot of cases it's quite, nuanced I think there's reasons to be, optimistic there's reasons to be, pessimistic I think when you kind of, look at optimistic reasons for why we, might not run out of data or why data, might not be the bottleneck that we, anticipated to be I think there are some, papers that seem to suggest that, synthetic data could meaningfully 
aid in, training AI systems I think as well if, you think about it these language models, are substantially less efficient than, the human brain they see millions of times, more text than any human would in their, entire lives and in some cases can, perform better than humans at certain, tasks but it's clear that there is kind, of an architecture of the mind that, being our own architecture that gets the, job done with a lot less energy and, plausibly there's going to be more, research being done on algorithmic, efficiency that can maybe make it easier, for models to perform at higher levels, with less data and I think as well you, know we're always creating more data uh, we're kind of in an era in which more, and more data is being kind of, manifested and put out into the world, some of that of course is going to be, generative and created by AI systems but, there's also new models like Meta's, Segment Anything which make it a lot, easier to get segmentation masks so AI, could be used to also take more data, from the world in a way that could help, these systems on the other hand it does, seem that if you kind of just look at, the amount of text stock that we have, now it seems like we might potentially, run out of that relatively soon so I, think we cited estimates from Epoch, they've since updated those they, predict that kind of in four years we, might potentially run out in terms of if, we're going to kind of continue scaling, models in the way that we're scaling the, models there's also other papers that, are not necessarily as bullish on the, potential of synthetic data kind of uh, plugging this hole and filling in the, Gap I guess for me the kind of really, interesting thing is you know I'm really, going to be curious to see what it's, going to be like when GPT-5 comes out, because I mean all these models I, think by now are good but they're, kind of incrementally improving their, capabilities at various tasks but, it still seems to me like
they struggle, on some tasks like planning they, struggle on some tasks like reasoning, they're still somewhat prone to, hallucination and there are still kind, of limits to what they can do and I, guess I would wonder is scaling a, Transformer going to be sufficient to, resolve some of those problems or do we, potentially need a new architecture a, new way of building AI systems that, could resolve some of those difficulties, and resolve some of those challenges and, it's it's hard to know but there's a, variety of perspectives here I'd be, curious to get your perspective on it as, well as someone that kind of thinks, about this as a lot too yeah yeah I, appreciate that um from my perspective, there's similar to yours it's a I think, you can look at it like a mixed bag I, think the thing that I would maybe draw, in here is that at least right now it, seems like a promising route forward is, not necessarily relying on these models, to you know Foundation models to have, all the inbuilt knowledge that that is, needed to for example answer every, question or have every fact or you know, be able to process any type of input, necessarily I think though certain, things like I I've seen this in the, progression of like function calling for, example there was sort of a generation, of these models where people figured out, that oh we can like use these to, generate calls to apis or to functions, but none of those prompts were in these, sort of fine-tuning data sets and so, like they kind of worked for that but, kind of not and now like the ones that, are coming out now that have that, pre-built into the fine-tuning data sets, they do that much easier and can extend, to a whole variety of of function, calling so I think that these sorts of, not uh like specific facts or specific, knowledge about particular apis or or, these sorts of things but the ability to, figure out what those what those more, General kind of building blocks of of, robust AI systems are and building 
the, prompt data sets around those my view is, that there's going to be a lot more, curation of that type of thing versus, like just hoping that increasing data, set size will solve a lot of those, problems yeah it's a question of, efficiency as well right because you, know for a lot of businesses it's also, not super efficient to be kind of, running these kind of very large very, computationally taxing and expensive, models so it's not only a, question of maximizing performance but, what's that kind of sweet spot where you, get good enough performance and the, efficiencies kind of where it needs to, be well speaking of performance and, maybe uh utility of these models we, were talking a little bit about that, but maybe more generally um one of the, things that's highlighted in the, report is standardized evaluations for, large language models or maybe, generative models we also talked a, little bit about this in relation to, there was an MLOps community survey about, AI quality and evaluation and I think, the results of that showed people, still are having some issues figuring, out the right evaluation standardized, evaluations figuring out ROI yeah, anything to highlight from your, perspective in terms of this evaluation, front and where the state of that is now, or how that's changed yeah I mean I, think there's two things that I would, say so the first is kind of when it, comes more to General, capabilities I think one of the things, that I've seen at the index is I don't, necessarily know if the benchmarks that, we have now which I think are mostly, academic are sufficient for dealing with, the realities of AI that we now face, which are industrial and what I mean by, this is that a lot of these benchmarks, and for the listeners that might be, unfamiliar the way I conceptualize it is, when you know a model developer launches, a new model like Anthropic recently, released Claude, 3.5 they'll test it on a variety of,
benchmarks these are tests of what AI, systems can do like a test of grade 8, math problems and they'll say you know, our model gets a 96% on this Benchmark, better than Gemini or these other models, therefore we have the smartest and most, capable model now I think the reason the, community does this is because 10 years, ago AI was strictly an academic problem, it was something that University, researchers were thinking about and I, think they wanted to know on an, intellectual level how could AI think, and these kind of benchmarks were useful, that's how you got things like ImageNet, and it was useful then to kind of see, how much better we could actually get at, these systems but businesses you know, they're not solving grade-8 math problems, or doing competition level math as is, tested on some of these benchmarks, they're using these AIs for wildly, different purposes and they behave, wildly differently depending on the, context I saw this firsthand I was, editing the report using different AI, tools I would use GPT and I would use, Claude and for whatever reason I really, preferred Claude over GPT-4 I thought that, it was a much better copy editor GPT-4 was, sometimes suggesting to me words that I, thought were kind of off and I would, tell it like don't use this word and, then it would use it again in two prompts, so I just kind of got a bit frustrated, but the point I'm making more broadly, here is that we have evaluations for, these models that test them on things, that businesses aren't really doing and, I think there is an opportunity there, for someone to kind of really identify, how could we use these models from a, productivity level and which ones are, perhaps um the best ones on that front, the second point that I would make about, benchmarks and standardization we talked, about this in the responsible AI section, that was co-written by Anka Reuel who's, uh one of the PhD collaborators at, Stanford with us and was kind of looking at, how these foundation
model developers, benchmark their models juxtaposing, general capabilities benchmarks with, responsible AI benchmarks and what you, see when it comes to general, capabilities is that a lot of these, developers they're all benchmarking on, MMLU which is a benchmark of general, language understanding they're all, benchmarking on Codex which is a, benchmark of coding capabilities they're, all benchmarking on GSM8K a benchmark, of math when it comes to responsible AI, benchmarks there really is no consensus, some of them are testing on TruthfulQA, others are testing on RealToxicityPrompts, others still are testing on BOLD, but really across the map there is no, consistency and it's not clear to us, that this lack of consistency reflects, either a genuine belief that these, developers have that okay certain, responsible AI benchmarks are better, than others or if these developers are, merely doing this as a means of kind of, juicing their model performance and they, just choose the particular benchmarks, that best suit them but I mean it is, consequential because we're kind of now, in an era in which AI is being widely, used and when something is being widely, deployed you need some kind of, standardized evaluations or standardized, comparisons of how these different, things function and it seems at least, when it comes to the responsible AI, world we're not really kind of at a, stage where that's occurring yeah and, one of the things I see highlighted, under that section is also extreme AI, risks are difficult to analyze um could, you dig into that a little bit because, there's one element of this, which is practically in the industry, setting as you were just mentioning like, evaluating performance is maybe, complicated and not standardized and, relies on these benchmarks which might, not be applicable in all cases but then, the other side there's like these risks, liabilities harms that are, promoted across sort of academic and, industry settings
which inform kind of, the general discussion about AI and, safety but also for a practitioner um, you know maybe they're thinking like, what do I focus on here is this risk, that these people are talking about a, reality that I need, to be considering or is that sort of, just something that should inform, long-term development or something I, should consider today yeah talk a little, bit about that and like what you were, seeing around the maybe safety and risk, side of responsible AI yeah I think what, we kind of meant here is that at least, when you kind of talk about these AI, risks there seems to be kind of two, categories the more let's say short-term, risks which are kind of the bad, things that AI can do now that we should, be paying attention to like its, potential to be biased its potential to, be unfair or to violate privacy and more, kind of long-term existential risks, which I guess you know pardon my French, refer to the possibility of it killing, us all at some point now I think the, kind of challenge here is that with the, short-term risks we already see this, manifesting in the present and with the, long-term risks I mean some of the, arguments these people make are, theoretically plausible like you could, imagine this happening but it's hard to, know how plausible it is there are some, people that feel very confident that AI, at some point is going to become smarter, than us self-improve and want to take, control there are others that are not as, convinced and think that's just kind of, lunacy but it's kind of challenging with, these long-term risks because they're so, theoretical and so in the future and I, mean even if you could show that these, models are sometimes deceptive that they, have the potential of you know being, Machiavellian it's still hard to kind of, get to these longer term arguments, because they depend a lot on these kinds, of theoretical argumentative claims that, are difficult to actually
ground in, reality in the future and I think you, see this kind of manifesting very, tangibly with this bill SB, 1047 where it seems to me that a lot of, the people that are in favor of this, bill which again would impose some, fairly stringent safety requirements on, models above a certain compute threshold, they seem to be of the belief that AI, could pose really serious safety risks, and there's a desire that oh if we scale, up these models even further we want to, be able to kind of shut them down and, ensure that they kind of don't pose a, threat now if in fact it is the case, that these models do have these safety, risks then that seems like a plausible, argument but if they don't then you, might in the process pass a law that, could really curtail the ability of, open-source developers to create models, and the open-source development, community is very important for the, startup ecosystem and The Innovation, ecosystem so that's kind of what we mean, when we talk about these risks are, difficult to analyze you know there's a, theoretical argument which is somewhat, plausible but How likely is that, theoretical argument that's tough, to know yeah along with the sort of risk, discussion and evaluation discussion I, guess some of what's focused on in the, report is also the perception of AI, within maybe um various demographics but, across various segments and if I, understand right there's a sort of, General pessimism about AI in terms of, like its impact on people's jobs but if, I also understand it right AI does seem, to be making a positive impact in terms, of people's quality of work and, efficiency of work could you help us, parse through some of those things a, little bit in terms of maybe the more, kind of general public impact and, perception of this technology yeah I, think there's two things that you're, speaking to I think first is the fact, that if you look at public opinion data, at least in a lot of countries like the, United States
Canada France people are, very bearish about AI when kind of asked, do you feel that products and services, using AI have more benefits than, drawbacks in those three countries and, also in a lot of other Western countries, the overwhelming majority of respondents, seem to disagree they don't think that, AI is more beneficial than, disadvantageous yet in a lot of other, let's say developing countries like, Indonesia Thailand and Mexico there is, much more bullishness people are very, excited and very hopeful we don't, necessarily know why that's the case, that's obviously something that's very, important to kind of unpack and continue, thinking about as this technology, develops and as it rolls out even, further now you mentioned this kind of, point about economic Advantage I think, what we're referencing here is that, there's now a lot of new studies coming, out in really different Industrial, sectors whether it's kind of looking at, call centers whether it's looking at, legal work whether it is looking at, computer science work that shows that, workers that use AI tend to be more, productive than those that don't at the, minimum they're able to do tasks faster, and at the maximum they're not only able, to do tasks faster but they're able to, submit work of higher quality so what, kind of explains the disconnect well I, mean I still think we're kind of in the, early days of AI integration these kinds, of studies that have looked at AI's, positive impact were quite micro-scale I, don't necessarily think businesses are, kind of using this technology en masse, yet and I think second is the fact that, it's hard to necessarily know where we, might go even if AI is productive right, because you know if you're working 40, hours a week and all of the sudden you, need to be working 20 hours a week, because of AI you know you could either, use that 20 hours to do perhaps, different projects that will lead to, more money and more value for your, employer or maybe your employer
decides, hey we just don't need you for those 20, hours and kind of scales back what they, ask of you so I think the kind of jury, is still out on whether the integration, of AI Technologies is going to lead to, widespread automation or augmentation, and I think when you kind of look at, narratives of fear or if you at least, try to understand why different people, in different countries are as frightened, by this technology as they are I think a, lot of it comes down to this, element of just kind of uncertainty, about what it's going to do to their, jobs and their livelihoods yeah, thanks for that uh that perspective um, as we kind of near the end of our, conversation here I'm wondering after, seeing the results of the index in, kind of this round and also working on, it for some time what do you look, forward to um kind of in this next season, of AI development and adoption and, integration as you were just talking, about what do you see as exciting or, positive in terms of how things are, moving forward and what's on your mind, kind of looking towards the next uh you, mentioned not necessarily doing, projections which is fair but what are, you curious about kind of in terms of, how things will develop going into, this next season yeah I'd probably say, three big things I think the first one, is scaling and whether it holds so we talked, about this already but I think a lot of, these companies are making these bets, that they'll feed more data into these, systems they're going to get a lot, better and I don't dispute that they're, going to improve I think that the, improvements are going to continue I do, wonder by how much and if in fact the, improvements aren't as great as we, anticipate them to be what might that, mean for the economics of AI I think, number two looking at how are businesses, actually integrating this technology if, you look at literature on a lot of other, technological, Transformations a lot of economists, argue
that it typically takes decades, from the launch of a technology to the, point at which it actually registers, positive productivity impacts largely, because very often you don't have, infrastructure that's necessary to, leverage a technology when it kind of, appears out of the box and I could, imagine a similar thing with AI now I, don't think it's going to take decades, for AI to kind of have its productivity, impact being widely felt but it will be, curious to me to see if businesses start, thinking a little bit more critically, about how they want to use this tool and, how they want to integrate this tool and, I think Third Kind of keeping an eye on, what happens in the domain of policy you, talked about like what am I you know, what is something that I'm kind of, encouraged by or kind of watching out, for I think it's yeah what is the policy, making response going to be and I'm, encouraged because I know that I think, looking back at the example of social, media you know I think it took us close, to a decade from when these tools were, launched to when we started really kind, of thinking about them on a political, level in terms of kind of regulating, them and ensuring that we built them, with the right kind of incentives and in, the right kinds of ways and if you kind, of point to 2022 as this moment where AI, was kind of officially launched I mean, we had pretty Landmark legislation in, 2023 with the EU AI act as well as, Biden's executive order so it's, encouraging to me that policy makers are, thinking about these things I guess an, open question for me is going to be, where does the tone of policymaking go, and what comparative priorities do, policy makers in different parts of the, worlds have when it comes to launching, these AI tools and models awesome well, um look forward to hearing maybe some of, those uh at least some data that's, indicative of some of those Trends in in, next year's um index yeah you'll have to, you you'll have to play some clips 
of, some things that I said about the future, here next year and we can kind of, revisit how accurate or inaccurate I may have, been how well did we do well regardless, I definitely recommend people to of, course we'll link the index report in, the show notes so I encourage people to, take a look at that they've made it, really easy to navigate you can go by, chapter and dig into particular sections, and look at the key takeaways so yeah, definitely check it out uh it is an amazing, um work that's been put together so, thank you Nestor for putting in, your work into that and for all the, institute is doing um to help keep us, informed appreciate it very much no, thank you guys for having me so much it, was a great conversation and hoping we, can do this again next year take care, sounds great yeah, [Music], bye all right that is Practical AI for, this week subscribe now if you haven't, already head to practicalai.fm for all, the ways, and join our free Slack team where you, can hang out with Daniel Chris and the, entire Changelog community sign up, today at practicalai.fm/community, thanks again to our partners at fly.io, to our beat freak in residence, Breakmaster Cylinder and to you for, listening we appreciate you spending, time with us that's all for now we'll, talk to you again next time, [Music]
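The benchmark comparisons discussed in this episode (a developer reporting, say, a 96% score on a grade-8 math benchmark like GSM8K and claiming the most capable model) come down to exact-match accuracy over a fixed question set. A minimal sketch of that scoring loop, where `ask_model` is a hypothetical stub standing in for a real LLM API call and the two questions are illustrative GSM8K-style problems, not items from the actual benchmark:

```python
# Sketch of benchmark-style evaluation: exact-match accuracy over a
# fixed test set. `ask_model` is a hypothetical stand-in for a model API.

BENCHMARK = [
    {"question": "Natalia sold 48 clips in April and half as many in May. "
                 "How many clips did she sell in total?",
     "answer": "72"},
    {"question": "A robe takes 2 bolts of blue fiber and half that much "
                 "white fiber. How many bolts in total?",
     "answer": "3"},
]

def ask_model(question: str) -> str:
    # Stand-in for a real model call; an actual harness would prompt an
    # LLM and parse the final numeric answer out of its free-text reply.
    canned = {"Natalia": "72", "robe": "3"}
    for key, answer in canned.items():
        if key in question:
            return answer
    return ""

def evaluate(benchmark) -> float:
    # Fraction of questions where the model's answer exactly matches.
    correct = sum(ask_model(item["question"]) == item["answer"]
                  for item in benchmark)
    return correct / len(benchmark)

print(f"accuracy: {evaluate(BENCHMARK):.0%}")  # prints: accuracy: 100%
```

Responsible AI benchmarks such as TruthfulQA are scored in essentially the same way, which is why the inconsistent benchmark choices the episode describes make cross-model comparisons so hard: two models reporting high scores on different test sets are not directly comparable.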
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Apple Intelligence & Advanced RAG | Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent Apple Intelligence announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.
Leave us a comment (https://changelog.com/practicalai/275/discuss)
Changelog++ (https://changelog.com/++) members save 6 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://graphstuff.fm/episodes/2023-finale-llms-and-knowledge-graphs-throughout-the-year?&utm_campaign=UCGenAI&utm_content=AMS-SrDev-ToFuDev-UCGenAI-Audio-None-GenAI1-GenAI-NonABM&utm_medium=Audio&utm_source=PracticalAI&utm_justglobal=) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.
• Plumb (https://useplumb.com/) – Low-code AI pipeline builder that helps you build complex AI pipelines fast. Easily create AI pipelines using their node-based editor. Iterate and deploy faster and more reliably than coding by hand, without sacrificing control.
• Backblaze (https://www.backblaze.com/cloud-backup/personal/landing/podcast/practicalai) – Unlimited cloud backup for Macs, PCs, and businesses for just $99/year. Easily protect business data through a centrally managed admin. Protect all the data on your machines automatically. Easy to deploy across multiple workstations with various deployment options.
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Apple Intelligence (https://www.apple.com/apple-intelligence)
• Introducing Apple Intelligence, the personal intelligence system that puts powerful generative models at the core of iPhone, iPad, and Mac (https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac)
• The top AI features Apple announced at WWDC 2024 (https://techcrunch.com/2024/06/11/the-top-ai-features-apple-announced-at-wwdc-2024)
• Hybrid Search: Combining BM25 and Semantic Search for Better Results with Langchain (https://blog.lancedb.com/hybrid-search-combining-bm25-and-semantic-search-for-better-results-with-lan-1358038fe7e6)
• Advanced RAG: Precise Zero-Shot Dense Retrieval with HyDE (https://blog.lancedb.com/advanced-rag-precise-zero-shot-dense-retrieval-with-hyde-0946c54dfdcb)
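The hybrid search idea from the LanceDB post linked above combines a lexical BM25 score with a dense (embedding) similarity score and fuses the two rankings. A self-contained sketch of that pattern (not the post's actual code): it uses a hand-rolled BM25 and toy bag-of-words "embeddings" in place of a real embedding model, with min-max normalization and a weighted sum as the fusion step.

```python
import math
from collections import Counter

DOCS = [
    "semantic search uses dense vector embeddings",
    "bm25 is a classic lexical ranking function",
    "hybrid search combines bm25 with vector search",
]

def tokenize(text):
    return text.lower().split()

def bm25_scores(query, docs, k1=1.5, b=0.75):
    # Classic BM25: rewards rare query terms, dampens long documents.
    toks = [tokenize(d) for d in docs]
    avgdl = sum(len(t) for t in toks) / len(toks)
    n = len(docs)
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for term in tokenize(query):
            df = sum(term in dt for dt in toks)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

def embed(text):
    # Toy "embedding": bag-of-words counts. A real system would call an
    # embedding model here and use its dense vectors instead.
    return Counter(tokenize(text))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    # Min-max normalize each score list, then blend: alpha weights the
    # lexical (BM25) side, (1 - alpha) the semantic side.
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    lex = norm(bm25_scores(query, docs))
    qv = embed(query)
    sem = norm([cosine(qv, embed(d)) for d in docs])
    fused = [alpha * l + (1 - alpha) * s for l, s in zip(lex, sem)]
    return sorted(zip(fused, docs), reverse=True)

for score, doc in hybrid_rank("bm25 vector search", DOCS):
    print(f"{score:.2f}  {doc}")
```

Production systems typically swap the toy pieces for a real BM25 index and embedding model, and often fuse with reciprocal rank fusion instead of a weighted score sum, which avoids having to calibrate the two score scales against each other.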
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-275.md) | 548 | 5 | 0 | [Music], welcome to practical AI if you work in, artificial intelligence aspire to or are, curious how AI related Tech is changing, the world this is the show for you thank, you to our partners at fly.io the home, of, changelog.com 30 plus regions on six, continents so you can launch your app, near your users learn more at, [Music], fly.io hey friends do you remember the, day that ChatGPT launched I do well not, the exact day but the time frame it felt, like the LLM was this magical tool out, of the box however the more you use it, the more you realize that's just not the, case and as AI developers yourself you, know the technology is brilliant but, it's prone to issues like hallucination, and that's not good but there's still, hope feed the LLM reliable current data, ground it in the right data and context, then it can make the right connections, and give the right answers the team at, Neo4j has been exploring how to get, results by pairing LLMs with knowledge, graphs and vector search and to hear all, about this check out the podcast episode, about LLMs and knowledge graphs at, graphstuff.fm they share their tips on, retrieval methods prompt engineering and, more do not miss it you'll also find a, link in the show notes again, graphstuff.fm that's g-r-a-p-h-s-t-u-f-f dot fm,
[Music], welcome to another fully connected, episode of the Practical AI podcast in, these fully connected episodes Chris and, I keep you connected to a bunch of, different things that are happening in, the AI community and try to plug you in, with some learning resources to help you, level up your machine learning game I'm, Daniel Whitenack I'm founder and CEO at, Prediction Guard where we're, safeguarding private AI models I'm, joined as always by my co-host Chris, Benson who is a principal AI research, engineer at Lockheed Martin how you doing, Chris doing great Daniel so many things, happening in the news and uh just was, looking forward to a chance for us to, finally we've hit specific topics a lot, lately and I'm hoping we have a chance, to jump in and just talk about all the, stuff where's your mind at these days in, relation to AI we haven't done a sort of, General check-in on both of us we, probably I think know each other well, enough to know we're both fairly, hopeful about things looking forward and, seeing many good things but yeah, generally how has this year looked for, you in relation to how you thought it, might look and or in your own work with, this technology how's your view of how, the technology is shaping up changed or, stayed the same in the large I think, some of the things that we have kind of, predicted you know are happening a lot, of the developments are somewhat, predictable like if you do, something with imagery you'll probably, get there with video and things like, that we talked about we made some, predictions last year about that and I, think those types of things are playing, out more or less in a broad scale kind, of how we would have expected would you, agree with that in general yeah yeah I, would say uh definitely multimodality, wise yes we talked about that a lot, what about uh I guess both of us either, are at or have interacted with friends, of ours or colleagues at a variety of, Enterprise, organizations
what do you think is the, reality on the ground in terms of, adoption of AI versus what's all in the, news and the the hype and that sort of, thing uh in terms of the practicalities, and actual adoption rate of generative, AI versus kind of the things that we've, always had with us the machine learning, the data science types of projects yeah, I I think it's interesting there's uh a, lot of Reality Checking happening this, year uh especially in the last few, months um everyone's been hit with so, much gen marketing and just all the hype, and everything with it but we're also, expected to get things done you know at, work and so trying to finally get past, the hype and get stuff which requires a, lot of hard decisions uh to make and and, so you know company's kind of going well, do I you know I'm looking at the cost of, one of the big providers you know with, open AI being kind of the leader in, those and do I want to pay for that for, everything and thus do I also want to, send my date out how can we do things, with smaller models or other large, language models that are open source and, the across multiple companies as I'm, watching people make these choices and, we're having conversations about it, offline and stuff there's an Agony, associated with trying to to navigate, correctly and not end up in a bad, position for your organization and stuff, like that yeah and so I'm seeing a lot, of well we're going to use some open, source for this and we're going to use, some API calls to commercial stuff for, that and um we're going to use some, smaller models over here and how are we, going to put them all together and we've, talked about these issues across a bunch, of different conversations even the the, last guest we had we had this, conversation but I think people are, really challenged with making it all, work as I talk to people at different, companies I don't necessarily see, everyone doing it the same there's, enough uh variability to where um we, haven't 
arrived at a world of best practices yet, in my view. Yeah. One of the things that you highlighted is the sort of multi-model future: people spreading their workloads out across multiple model providers and open models. I think that's something that seems to be only increasing, and it will continue. With all of this emphasis on large language models and generative AI, have you gotten a sense from data scientists that you're interacting with, or others that you know, that the actual day-to-day of data science teams is shifting? Or are they still just training their support vector machines, doing time series forecasting, and whatever those things might be? I think, to that point, there's shifting, but the field has also exploded out in the number of positions to support. Way back when I was young, which was back when the dinosaurs were roaming the Earth, you had software developers, and they kind of had to do everything, which is very reminiscent of how AI has been in previous years. In the last couple of years we've seen an explosion: we first saw machine learning engineers, beyond data science, and then you kept adding each title and position, and now there are UX people with AI concerns. It feels very much like how software exploded from the 1980s, when I was a kid, into young adulthood for me in the '90s and 2000s. Now we're seeing that same pattern; it's very similar to me. I look at it and it's deja vu for me. So it's a maturing of the industry, and people are starting to figure it out. I think there's a recognition, finally, outside of the marketing and hype machine, which goes hard and constant, always; I think for the worker bees like me there's a realization that it's part of software, and we've talked for a long time about how that needed to happen. And so, a little bit less hype, and
more about what the models can do, how to combine them, and at what sizes. It's putting the jigsaw puzzle together of what makes value for a particular organization, and that's been interesting for me to be a part of in my own organization, helping us navigate through the morass, and every other organization I'm talking to is doing the same. Yeah, I was just curious about boots on the ground: what's changing day-to-day for data scientists? It seems like one of the things you're indicating is that it's more that roles and teams are expanding... They are, yeah... versus the data science teams that have existed ceasing to exist in their current form, you know, creating scikit-learn models and moving over to GenAI, which is probably not the case. I was just looking at Google Trends, which is always a fun thing to look at, comparing GenAI versus scikit-learn, and scikit-learn still has quite impressive search volume. You can see the surge of interest in the data science hype period, at least as far as I can tell, but there's also been a surge since about 2022 onward, and that's gone down a little bit. I don't know how much you can draw from that, but the data science team still lives, as far as I can tell. I don't think it's going to die, because people are also, to your point, realizing the limitations and constraints of GenAI and what types of things it doesn't do well. People are, I think, a bit smarter about it in 2024 versus 2023, and definitely versus 2022. It seems like the wisdom is finally spreading out, and instead of just saying GenAI is going to solve everything, people are recognizing it for what it is and the capabilities it has. They're starting to say, this is a good use case for it, but we need to pair it, maybe, with a reinforcement learning model, and they're
starting to remember: oh yeah, there are all these other capabilities, which we were quite enamored with until GenAI came along, and they're still really good technologies to use. So trying to recognize what should and shouldn't be AI at all, and combining those together into unique value propositions for their organizations, is the thing I'm seeing. One other point that I've noticed for the first time this year is that, in companies, the software side and the AI side are finally really coming together operationally, instead of being very stuck apart, which is one of the problems we've talked about on the show many times. I'm seeing agile methodologies play out that had been on the software side for years in these organizations, and they're now including the AI and data science teams. Just to make up an example: if they're using SAFe or Scrum or whatever, they're starting to account for that, and it feels more real-life to me. It feels like, ah, we're finally getting to a point of maturity, and recognizing that all the pieces need to come into play if we're going to be efficient in how we do this. So that's been my kind of enterprise observation of the last few months. Yeah, and again, Google search trends will only give you so much, but it seems like the main trend with data science as a function, at least according to search, just keeps going up pretty steadily, even though there's a switching of technologies. It would be nice, though... I know that we talked quite a while back, in a number of episodes, about being a full-stack data scientist, and I know recently we had some of that discussion around full-stack AI agent development. But that idea that there would be more integration of the software side into data science teams, and vice versa: maybe this is a push that's kind of materializing some of
that. You know, it's interesting: the term full stack is so loaded in terms of how people perceive it, and it's a bigger thing if you're in a very small organization. What it means there is: thank goodness I've got somebody who can handle all these things that we have gaps on, because we don't have enough resources to go hire somebody in all these different areas. So it's more meaningful in a small or midsize organization. Based on the nature of the organization, you're going to see it a lot more job-specific and role-specific in the enterprise, which in my view is a good thing, because you don't want to just stamp "full stack" onto everyone; in a large enough organization that doesn't help with your efficiencies. But team-wise there could be more integration. There could be, and I think integration is really important, and I think this is the first year where I have a little sense of actually seeing that out in the workplace. [Music] Hey friends, this episode of Practical AI is brought to you by our new friends over at Plumb. Plumb is a low-code AI pipeline builder that helps you to build complex AI pipelines super fast. You can easily create AI pipelines using their node-based editor, and iterate and deploy faster and more reliably than coding by hand, without sacrificing control. Deployment is easy: pipelines are live API endpoints. Eliminate the need for constant code redeployment and debugging by deploying complex AI pipelines as API endpoints. Team collaboration is easy too: Plumb's declarative node-based editor enables you to build quickly while empowering non-technical roles to iterate on what you've done without breaking it. You can build advanced AI features, get structured output every time, transform data, and leverage validated JSON schema to create reliable, high-quality structured output. Plumb is built for builders; early-stage product teams are using Plumb to go from idea to validation in record time. To get started, go to useplumb.com. That's Plumb, with a "b," as in plumber. Request access today at useplumb.com. [Music] Well, Chris, we have got to Apple Intelligence this last cycle that we went through. Everybody's got their AI play now, and of course Apple had been in AI in one way or another, so it's not like they were totally absent, but we got the announcement about Apple Intelligence. So what do you think? First impressions: excited, confused, a mix? What's your impression? I'm always skeptical about everything when it comes out, because of the hype machine, as you know, but as an Apple user I'm looking forward to it; I want to see what they do. I buy into the Apple ecosystem to a certain degree, but I'm also not 100% invested in every way, the way some folks are; I use Google as well, and various things. They were very slow. Apple has received a lot of criticism the last couple of years, because once upon a time they were perceived, in the Steve Jobs-esque sense, as the leader that brings you completely new ideas that are going to change your world, like the iPhone when it was originally released, and they have definitely not been fulfilling that role; they've been slow. Having said that, having thrown the criticism out first, the announcement does seem differentiated, in that Apple is a product-focused company, and they've made AI announcements that clearly position AI as a feature and not the product itself, whereas for a lot of the other big companies it's almost as if AI is the product they're trying to sell. With the announcements at WWDC 2024, which is their developer conference every year, often called "dubdub" by insiders, they are talking about AI in the context of the devices and of the tasking that their users are doing, and so I
actually like that. As you know, you and I both get absolutely inundated with outreach from startups and companies promoting and hyping their AI products, and at least seeing Apple talk about AI as feature enhancement, rather than the thing itself, is good. It's a little bit of fresh air. Yeah, I think when you have a little button that summarizes, or rewrites, or whatever that button is, that's very much like the first wave of pre-ChatGPT AI features, a lot of which came out where you'd just see a suggested text, or you'd see something that makes sense, and UX-wise that's probably, like you say, very fitting with Apple and their approach to things. I know that there was definitely some shade thrown by some, including Elon Musk, about the reliance on OpenAI in Apple Intelligence. Did you see any of that? That's Elon Musk being Elon Musk. I think part of it is just a gambit for the spotlight, inserting himself into any spotlight at any moment. I won't go into it. Elon, if you're listening, you can come on our show and steal the spotlight; you're welcome, although it might be an interesting conversation. Yeah, he's the same age I am, more or less, and so I always kind of go... I'm just trying to imagine when he does some of these things. But anyway, back to the Apple bit, without derailing on that one: Elon came out with the specific criticism of, okay, you're going to send everything off to the GPT API, and that's a huge privacy breach, and so on. But Apple had already clearly said in the announcement that every user, on a per-use basis, will be given the option: Siri will say, do you want to send this off to GPT for an answer, and the user, on a per-use basis, can say yes, I want to do that, or no, I don't. They were very explicit about that up front. So I'm like, you know, Elon, if you're going to use
an iPhone or an iPad, just say no, and stop, because that way you still have control, and that seems like a reasonable thing. I use the OpenAI app all the time on my iPhone; it's one of those things that's open all the time, and that is good enough for many cases. But there are times when I would certainly like to integrate that capability into my other activities on the iPhone in a more integrated way, and this gives me that opportunity. So Elon was saying, only have the OpenAI app, and as a user myself I say no, give me the option. Sometimes I'm just going to use the OpenAI app, but other times, let me integrate it with my other activities. Apple's going to give me the choice of whether I want to do it, and I'm happy with that; all good. He's not speaking for me in that capacity. Yeah, it might be interesting to talk for a second about, whether it's Elon or, in a less public or meme-y way, the lot of people out there that do have concerns about these sorts of closed model providers, and some people that are still blocked from using them. I'm wondering, from your perspective (I have a few of my own thoughts), from a practitioner perspective, just to make it practical, as we are on Practical AI: what are those tradeoffs, as you see them now, with closed model providers, versus using open models, or some version of hosted open models, in the enterprise or development scenario, versus your personal device? There's certainly a direct-to-consumer angle to what we've discussed so far, but in terms of the practitioner themselves, we've touched on this occasionally, and I think it's good to continue touching on it occasionally, because things are changing over time, and changing very rapidly. So, from your perspective, do you have any thoughts on that? Sure. I think that is an issue that every large organization is
navigating, because you have a certain amount of funding to support your operations, and almost everybody has some tie-in to one or more of the large commercial APIs. It's a different context from a personal user, like I talked about: I'm paying my OpenAI monthly fee, and I use that all the time for a variety of different tasks. But in the enterprise it's a bit different. There may be that capability, but I'm also seeing enterprises that are really concerned about their data going out, about their information going out, and if it's not their own, it's their customers'. By using a public API that you're paying for, where that data goes outside of your control, there is a huge concern and risk, not only about the immediate privacy concerns, but also about the liability and the legal concerns around that, because most organizations hold a mixture of their own data and other organizations' data, with a whole bunch of partnership agreements. In large companies, maybe you're okay with your data going out, but of the 50 partners that you have, it might be that 36 of them aren't too keen about the data that they have an agreement with you to hold going out, as part of your data, to a third party. That makes it pretty challenging to use third-party APIs in a manner that everyone is comfortable with. So I'm really seeing a lot of open-source models being internally hosted. There is still a lag, because Google, OpenAI, and Anthropic keep pushing the boundaries on what they're offering, and the open-source community is not typically all the way there; it's not just the model, but the services built around the model, that make it easy to use, so you have to recreate that, or use existing open-source capabilities that are out there, and that requires effort, and there is a good bit of money spent on that. But I would say that, of the ring of people that I hang out with across
multiple companies, I'm seeing more of the internal hosting of models, with the effort of trying to stay on top of current releases and monitoring that, as the more widespread way. And there's a recognition that those models may not give you quite as good an answer, if it's a very expansive thing you're prompting on, as a GPT model would. But that's okay; in a lot of cases it can get you through, and if you have multiple models to choose from and combine, then you can usually be very productive without violating all those concerns that I enumerated. So it's both, but for me, I'm seeing more people turn inward. Now, I run in a national security world, and so we might be a little bit more conservative about that across the various defense companies, so I acknowledge that there's an array of possibilities there. Yeah. One of the things that has maybe shifted a little in my mind, since the last time I considered this question and we talked, is this: there's all of the privacy, data misuse, data-leaving-your-network concern, and I think that's a big piece of it. But there's been a developing mindset that I've picked up on which is slightly different, and I started to pick this up from a16z's recent surveying and reports (I've mentioned those a couple of times, because they've been helpful for me): the fact that, yes, there's a privacy element, but a lot of the time organizations are using open models because of the control element. It took me a while to fully parse through the implications of that, and I think some of what it gets down to is that when you're connecting to one of these closed systems (and there are hosted open models in a variety of places you could get), and when I say hosted closed system, I mean you literally don't know what's happening behind that API, or how the model is called, and that sort of thing: those
are productized AI systems, which means they're making opinionated choices about how to improve the performance of that product surrounding the model. Right, yes. And that can actually be an amazing thing: OpenAI's functionality is spectacular, without doubt, and these other systems, Anthropic and others, have really spectacular functionality too. But there's this element where they've made some opinionated choices for you about how to process the data that you're putting into that system, before and after it hits the model, so there's a lot more going on. I think you see this come out very clearly in, for example, the stuff that happened with Gemini, where you put in your prompt to generate an image of American founding fathers, and there's clearly, however that worked, a modification of your prompt, or extra instructions, to bias that output to look a certain way. Sure, if you're interested, go look it up; there are lots of interesting pictures, and to be fair, they've rectified that situation as far as I know. But when you have that sort of decision made for you, you don't have full control. It's not just your prompt going into the model, with you choosing how to govern or bias it, or process user inputs, or do your prompt templating. It can be really good, but it can be frustrating at that level, where you get 80 to 90% of the way toward what you want, and then for some reason you just can't figure out why you can't get that last bit, or why an error is happening, or there are latency fluctuations, or whatever those things might be; it could be bias in the output. So I think that opinionated, productized thing is both a good and a bad, and depending on your scenario, it may actually be what you need: I'm not going to worry about these things; I trust the way these things are being handled internally in a system like
this, and I'm guessing that will be fine for many people. But then there are people that want to build competitive AI features into what they're creating as a company, and they want full control: to build those in exactly the way they want, to make sure that they can test those in exactly the way they want, and to have that control element. I think that's way more crystallized in my mind than it was previously. I think that's a fantastic insight right there, and I think most people miss it, because with the hype machine going, we have a habit of talking about the models themselves all the time, kind of as the product. There's so much that these companies putting these out as a service are doing; there are so many humans involved that you never see, and yes, that can really make it much better in some ways, because they're kind of shortcutting what the model may not be able to do on its own. They are shortcutting and greasing the skids to get you to what you want, but at the same time, any time there's a human involved you're going to have bias as well. They're trying to make it safe and controlled, and not have some sort of thing that ends up in the news in a very negative way, and that puts constraints around it; it just is what you enumerated. So yeah, I think it's really key that we look at it as not only a model, but a model plus the services around it, whether you're building them or whether someone's building them for you. [Music] What's up, friends. I love Backblaze; I'm happy to have them as a sponsor. Backblaze makes backing up and accessing your data astonishingly easy. This is a service I personally use. Go to backblaze.com/practicalai: you get unlimited cloud backups for Macs, PCs, and business for just $99 a year. You can easily protect business data through a centrally managed admin, protect all the data on your machines automatically, and easily deploy across multiple workstations with various deployment options. You can add on enterprise control, including granular access permissions, advanced single sign-on, group management controls, and compliance support. They even offer multiple restore options, including rapid recovery in the event of data loss or ransomware (that sucks). You can access your backed-up data from anywhere in the world using their web app or their iOS or Android app. You can even restore by mail: they'll ship a hard drive with all your data to your door. You buy a hard drive to restore, send the hard drive back within 30 days, and get a full refund, and you get one-year file retention and version history. Over 55 billion (with a "b") files have been restored for customers so far. Visit backblaze.com/practicalai so they know where you came from and continue to support the show. This is a service recommended by me, but also by the New York Times, Inc. Magazine, Macworld, PCWorld, Lifewire, Wired, Tom's Guide, 9to5Mac, and so many more. You receive a fully featured, no-risk trial at backblaze.com/practicalai. Again, go there, play with it, and start protecting yourself from potential bad times. Start today. [Music] All right, Chris. A lot of times in these Fully Connected episodes we try to bring some learning element to the forefront as people are exploring these topics. One that has been coming up a lot for me, which we've talked a lot about on the show, is RAG, or retrieval augmented generation. But we've only talked about it at a surface level, at the sort of naive RAG level, which might misrepresent some of what people are doing with this approach under the hood. This would generally be framed, I think, such that if you search "advanced RAG" you'll find a whole bunch of articles, and really what's happening is that there's a naive approach to this RAG type of workflow, which can get you
to some really amazing results really quickly. But then, when you have to fine-tune and improve that system, load in more data, use documents that are closely related to one another, or handle various types of documents, there's a lot to dig into in terms of fine-tuning that system: fine-tuning both how the retrieval and the generation work. There's a whole variety of workflows that have been developed by the community that can help you improve your RAG setup. So that's one of the things I wanted to bring up here, and maybe talk through a couple of those. I know our friend Demetrios, whom we talked to, has some opinions about RAG versus fine-tuning; that's one thing you'll hear. But yeah, I'd love to dig into this if it sounds interesting to you. Absolutely, and I think you're the right person to do it, given that you're diving into this stuff all the time. Yeah. Well, RAG pipelines are probably the first thing that people are building with generative AI. Yeah. And the idea is not too complicated. The idea is: let's say I have a bunch of documents that contain information relevant to questions or queries that I might have. Instead of just asking an LLM to give me an answer, or to do something that would rely on the probabilities of that model in generating its own text and on the data it was trained on (you could get any sort of answer out of that LLM), rather than just relying on that, I'm going to inject, on the fly, some of that external data that I have into the prompt, to help answer the question or the query. We do this all the time with ChatGPT and other systems, right? When I say "summarize this email for me" and then paste in an email, that's how I'm injecting data into the prompt to the model, right at the time that I need it to run. So there's no fine-tuning of the model here; it's just a strategic insertion of data when I'm prompting the
model. Often this happens like: oh, I have a bunch of developer documentation, or onboarding materials for my company, or a wiki, or a bunch of webinars, or a bunch of podcasts, and I want to answer questions out of that material. These can be loaded into a vector database, which allows you to do the retrieval part: to find the relevant chunk of information that's required to answer the question. Then you take that relevant chunk, insert it into a prompt, and let the LLM generate based on that given context. That's the naive RAG approach: you have a user query, you find a single chunk of information in some repository of information, you insert it into the prompt as context, and hopefully you get an answer. It is naive, but it surprisingly works amazingly well in many cases, right? It does. It's interesting as we get from the naive to the advanced ideas, and you also just mentioned fine-tuning along the way. It is definitely the first step, and I think it's easy to implement, in general, which is why it's the first step. But I also think, before you go on, that a lot of organizations are getting stuck on naive RAG and just stopping there. Yes, and I've noticed that, so keep going. Yeah, I think you're totally right, and I think this is why I wanted to bring up this topic: some will get sort of okay performance out of their RAG system, but then they don't realize that there are more options to improve that system. Yeah, I've seen a lot of people thinking it solves everything: all we need is an LLM, and then we're just going to give it the data for RAG that we're going to inject into it, and we're done. I'm hoping we can break some of that perspective over the next few minutes. Yeah, and the question would be: okay, if you're getting some good answers and some not-good answers, for example, what do you do to improve your RAG system?
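Before looking at the improvements, the naive pipeline described above (embed chunks, retrieve the closest one, inject it into the prompt) can be sketched in a few lines. This is a toy illustration only: `embed` here is just a bag-of-words counter standing in for a real sentence-embedding model and vector database, and the prompt template is made up for the example.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": bag-of-words counts. A real RAG system would use
    # a sentence-embedding model and a vector database instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def retrieve(query, chunks, k=1):
    # Retrieval step: rank chunks by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    # Generation step, up to (but not including) the LLM call:
    # inject the retrieved context into the prompt.
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

chunks = [
    "Employees may work remotely up to three days per week.",
    "The on-call rotation changes every Monday at 9am.",
]
prompt = build_prompt("How many remote days are allowed?", chunks)
```

The single `retrieve(..., k=1)` call is exactly the "find one chunk and hope" step the hosts call naive; the techniques below all change what happens around that call.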
That's where there's a whole variety of things to explore, and like I say, if you're really interested in this, I'd recommend searching for "advanced RAG". I'd recommend looking at the LlamaIndex blog and the LanceDB blog; there's a lot of really good content out there to help you parse through this. But let me inspire with a few snippets of things that you could keep in mind. The first, I think, is around context enrichment. This is a very simple thing you can do. Let's say you have 100 documents and you split them up into little chunks, which you embed in the vector database, and you search against those little chunks to find the relevant thing that might help you answer the question. Well, depending on how you chunked up that information, the matched chunk might not give you all the context you need to answer: the answer might be in the previous chunk, it might be in the next one, it might be in the one that you found, or it might be in a combination. So this idea of context enrichment might be that you find the chunk that's relevant, and then, instead of inserting just that chunk, you insert that chunk plus the one before it and the one after it, for example; you just expand it a little bit, enrich it a little bit. Another common thing is that maybe you want to pull the three most relevant chunks, rather than the single most relevant one, and add more context there. So there's more you can add than just a single chunk. The other related methodology here, before we get into the fancier stuff, is actually doing a two-level search over your data. If you think about it, let's say that I have, again, 100 documents, and there might be similar content across those documents; they might overlap in certain cases, but they're different documents.
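The context-enrichment idea just described, expanding a matched chunk with its neighbors, can be sketched as follows: given the index of the best-matching chunk in an ordered list, return a window around it instead of the chunk alone. The function name and window size are just for illustration.

```python
def enrich(chunks, hit_index, window=1):
    # Instead of returning only chunks[hit_index], also include the
    # chunks immediately before and after it, clamped to the list bounds.
    lo = max(0, hit_index - window)
    hi = min(len(chunks), hit_index + window + 1)
    return chunks[lo:hi]

chunks = ["intro", "setup steps", "the key detail", "caveats", "summary"]
# Suppose the retriever matched index 2; we hand the LLM the window around it.
context = enrich(chunks, 2)  # ["setup steps", "the key detail", "caveats"]
```

The other variant mentioned, pulling the three most relevant chunks rather than one, is just a change at the retrieval call site (a `k=3` style parameter in whatever search API you use).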
Well, if you take each of those documents, or pages of those documents, and summarize them with an LLM, and you also chunk them up into the smaller chunks that you eventually want to use for your RAG, you could first search on the summaries, which would point you to the right document to answer your question, and then do a second phase of retrieval within that document itself to pull out the relevant section. This helps you hone in on the right document. So those two are fairly easy to implement, in terms of how you set up your vector database and how you do your querying, but they can provide a boost. Now, there are more complicated things; we'll get to those in a second. But do those make sense? It does make sense, yes. I'm just curious, a two-second question: in your first case there, where people are just going for the answer and not adding the chunks around it, do you think that's kind of a bias from traditional database operations, where you find the answer and that's it? I was just wondering, as you were saying that, why people might be limiting themselves in that way. Yeah, I think it's a perception problem. There are so many examples out there of getting started with RAG, and that's all you see unless you really dig in, so maybe it's just a perception issue. It may also be a sort of holdover from how you would retrieve things in a traditional database sense. Gotcha. Those might both factor in.
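The two-level "summary first, then chunks" retrieval can be sketched with a toy corpus. Everything here is a stand-in: a simple word-overlap score plays the role of vector similarity at both levels, and in practice the summaries would be LLM-generated and both searches would run against a vector database.

```python
import re

def overlap(query, text):
    # Toy relevance score: count shared words (stand-in for vector similarity).
    words = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    return len(words(query) & words(text))

# Hypothetical corpus: a per-document summary plus the underlying chunks.
docs = [
    {"summary": "Remote work, vacation, and leave policies for employees.",
     "chunks": ["Employees may work remotely three days per week.",
                "Vacation requests need two weeks notice."]},
    {"summary": "On-call rotation schedule and escalation steps.",
     "chunks": ["The rotation changes every Monday.",
                "Escalate to the team lead after 30 minutes."]},
]

def two_level_retrieve(query):
    # Level 1: pick the document whose summary best matches the query.
    doc = max(docs, key=lambda d: overlap(query, d["summary"]))
    # Level 2: search only that document's chunks for the relevant section.
    return max(doc["chunks"], key=lambda c: overlap(query, c))

answer_chunk = two_level_retrieve("How much notice for vacation requests?")
```

The point of the first level is exactly what the hosts describe: the summary search narrows the second search to one document, so near-duplicate chunks in other documents never compete.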
There's another kind of hybrid, or two-level, type of search that happens, and this is implemented natively in several different vector databases now, because it can be quite useful. It's actually doing two levels of searching, where the first is a traditional full-text search or keyword search, and then a vector comparison, rather than just relying on the vector comparison; so you hone in on the full-text keywords first, and then do a vector comparison. You could even ensemble these in various ways, and use one for ranking or ordering versus the other. There's a variety of ways to implement this, but this would generally be categorized as hybrid search; I think that's the most frequent term. So there's context enrichment; there's the hierarchical search, or index retrieval, which is the summary-then-chunk approach; and then there's hybrid search, which uses two different search methodologies. And notice that all of this has to do, for the most part, with the retrieval part, not the LLM side, although you could use an LLM to generate the summaries for the hierarchical approach. It's interesting that those TF-IDF, keyword-search, full-text-search sorts of things are coming up again. So, back to how we started this episode: the data science pieces still survive in many ways. Yeah, it's still relevant there, and I don't think that's changing. The last two that I'll highlight: one that comes up a lot is a method called reranking. There are models out there known as cross-encoders, and what happens is that you might do a first-level vector search to get a smaller number of candidate documents, and then use a more expensive, model-based approach to rescore the candidates that you pulled and reorder them (hence the name reranking), or filter them down to the most relevant documents. That's the reranking approach. Then there are a couple of really interesting ones where you use an LLM in the loop. One of those, called HyDE (LanceDB has a good blog post about this), uses LLMs to generate hypothetical documents that should answer the question, and then you use those hypothetical documents in the retrieval. There are also people that do query transformations: they actually take the query in (this kind of fits our previous discussion about modifying a prompt, except now you're in control of it) and regenerate the query such that it's more favorable to the retrieval task.
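The hybrid-then-rerank flow can be sketched end to end. Everything here is a stand-in: `keyword_score` plays the role of a full-text/BM25 search, `vector_score` the embedding comparison, and `cross_encoder_score` the more expensive model that jointly scores each (query, passage) pair, applied only to the shortlist.

```python
import re

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def keyword_score(query, passage):
    # Stand-in for the full-text / keyword (e.g. BM25) stage.
    return len(set(tokens(query)) & set(tokens(passage)))

def vector_score(query, passage):
    # Stand-in for embedding similarity: Jaccard overlap of token sets.
    q, p = set(tokens(query)), set(tokens(passage))
    return len(q & p) / len(q | p) if q | p else 0.0

def cross_encoder_score(query, passage):
    # Stand-in for a cross-encoder reranker: counts every occurrence of each
    # query word, so repeated mentions push a passage up the shortlist.
    return sum(passage.lower().count(w) for w in tokens(query))

def search(query, passages, shortlist=3):
    # Stage 1 (hybrid): blend keyword and vector scores to shortlist candidates.
    candidates = sorted(
        passages,
        key=lambda p: keyword_score(query, p) + vector_score(query, p),
        reverse=True,
    )[:shortlist]
    # Stage 2 (rerank): rescore only the shortlist with the expensive model.
    return sorted(candidates, key=lambda p: cross_encoder_score(query, p),
                  reverse=True)

passages = [
    "The cafeteria menu changes daily.",
    "Rotate credentials every 90 days.",
    "To reset your password, open settings and choose reset password.",
]
ranked = search("reset password", passages)
```

The design point is the one made above: the cheap first stage keeps the candidate set small, so the expensive pairwise scorer only runs a handful of times per query.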
in this kind of, fits our previous discussion about, modifying a prompt except now maybe, you're in control of it where you take, that prompt in and you actually read, generate the query such that it's more, favorable to the retrieval task y that, was a lot and I know it was quick but I, think it might be good for people to, hear that and just kind of see that, there's a much wider picture of these, Advanced rag techniques and I didn't, even get a chance to sort of get through, all of them people are exploring a lot, of things but that I think paints a much, more Rich picture of what can happen in, these rag pipelines versus just that, naive approach thank you very much for, kind of bringing this to attention I I, think it would be well advised for, people to recognize they're kind of, getting to first base with this the, typical rag approach and that's working, for them in some cases quite well but, these tools are out there now where it's, not so hard to then go on and move past, that but I'm seeing a lot of people get, stuck there so thank you for kind of, covering that that territory and giving, people an opportunity if they're not, familiar with it to maybe dive into this, yeah definitely well it's been a fun one, Chris uh to bring back some data science, uh discussions into our podcast and yeah, excited to see what's coming over the, next couple weeks that we can catch up, on soon absolutely talk to you later, [Music], Daniel all right that is practical AI, for this week subscribe now if you, haven't already head to practical AI FM, for all the ways and join our free slack, team where you can hang out with Daniel, Chris and the entire Chang log Community, sign up today at practical ai. fm/, commmunity thanks again to our partners, at fly.io to our beat freaking residence, breakmaster cylinder and to you for, listening we appreciate you spending, time with us that's all for now we'll, talk to you again next time, [Music], he l |
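The hybrid search and reranking ideas discussed in this episode can be sketched in code. The following is a minimal, self-contained illustration, not any particular library's or product's implementation: the term-overlap `keyword_score`, the character-bigram "embedding", and the `cross_encoder_score` stub are all stand-ins for the real components (BM25/TF-IDF, a neural embedding model, and a cross-encoder model, respectively).

```python
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> float:
    # Crude term-overlap score standing in for TF-IDF / BM25,
    # normalized by query length so it stays in [0, 1].
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(min(q[t], d[t]) for t in q)
    return overlap / max(sum(q.values()), 1)

def embed(text: str) -> Counter:
    # Toy "embedding": character-bigram counts. A real system would
    # call an embedding model here instead.
    t = text.lower()
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, docs: list[str], alpha: float = 0.5,
                  top_k: int = 3) -> list[str]:
    # Hybrid search: blend a keyword score with a vector-similarity
    # score -- the "two different search methodologies" idea.
    qv = embed(query)
    scored = [(alpha * keyword_score(query, d)
               + (1 - alpha) * cosine(qv, embed(d)), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:top_k]]

def rerank(query: str, candidates: list[str]) -> list[str]:
    # Reranking: rescore a small candidate set with a more expensive
    # model-based scorer. cross_encoder_score is a placeholder; a real
    # pipeline would run a cross-encoder over each (query, doc) pair.
    def cross_encoder_score(q: str, d: str) -> float:
        return keyword_score(q, d)  # stand-in only
    return sorted(candidates,
                  key=lambda d: cross_encoder_score(query, d),
                  reverse=True)

docs = [
    "the cat sat on the mat",
    "dogs chase cats in the park",
    "stock prices rose sharply today",
]
candidates = hybrid_search("cat on a mat", docs, top_k=2)
ranked = rerank("cat on a mat", candidates)
```

The shape matches what's described above: the cheap first-stage `hybrid_search` narrows the corpus to a few candidates, and `rerank` applies the expensive scorer only to that small set, which is what keeps the model-based rescoring affordable.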
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The perplexities of information retrieval | Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity’s sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity’s approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.
Leave us a comment (https://changelog.com/practicalai/274/discuss)
Changelog++ (https://changelog.com/++) members save 6 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://graphstuff.fm/episodes/2023-finale-llms-and-knowledge-graphs-throughout-the-year?&utm_campaign=UCGenAI&utm_content=AMS-SrDev-ToFuDev-UCGenAI-Audio-None-GenAI1-GenAI-NonABM&utm_medium=Audio&utm_source=PracticalAI&utm_justglobal=) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.
• Backblaze (https://www.backblaze.com/cloud-backup/personal/landing/podcast/practicalai) – Unlimited cloud backup for Macs, PCs, and businesses for just $99/year. Easily protect business data through a centrally managed admin. Protect all the data on your machines automatically. Easy to deploy across multiple workstations with various deployment options.
• NordVPN (https://nordvpn.com/practicalai) – Get NordVPN 2Y plan + 4 months extra at nordvpn.com/practicalai (https://nordvpn.com/practicalai) It’s risk-free with Nord’s 30-day money-back guarantee.
Featuring:
• Denis Yarats – Twitter (https://twitter.com/denisyarats) , LinkedIn (https://www.linkedin.com/in/denisyarats)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Perplexity (https://www.perplexity.ai)
• Perplexity | Blog (https://www.perplexity.ai/hub)
• Perplexity | Getting Started (https://www.perplexity.ai/hub/getting-started)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-274.md) | 369 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] What's up, friends? Do you remember when ChatGPT launched? I do. It felt like the LLM was this magical tool out of the box. However, the more you use it, the more you realize that's just not the case. The technology is brilliant, don't get me wrong, but it's prone to issues like hallucination on its own. But there's hope. There is still hope. Feed the LLM reliable, current data, ground it in the right data and context, and then, and only then, can it make the right connections and give the right answers. The team at Neo4j has been exploring how to get results by pairing LLMs with knowledge graphs and vector search. Check out their podcast episode about LLMs and knowledge graphs throughout 2023 at graphstuff.fm. They share tips on retrieval methods, prompt engineering and so much more. Don't miss it; find a link in our show notes. Yes, check it out: graphstuff.fm, the episode from '23. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I am founder and CEO at Prediction Guard, where we're safeguarding private AI models, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel. How's it going? It's going great. I am sometimes perplexed throughout my workday, but generally the week has gone well, and I'm really excited, because hopefully our guest today will provide a lot of ways for us to navigate through the perplexity. I hope he finds that funny. Yes, yes, of course. We have with us today Denis Yarats, who is the CTO and co-founder at Perplexity. Welcome. Thanks for inviting me; great to be on this show. Yeah, well, we've been wanting to make this one happen for a good long while. Of course, we've been following the things that Perplexity has been doing, really impressive and inspiring work, and we're just really excited to hear a bit about that story, and also about this space of answering, of giving people knowledge with generative AI, with language models, and search more generally. Could you set us up? Of course, everyone's followed this surge of generative AI over the last couple of years, but there's been a segment of that which has focused a lot on answering questions, discovering knowledge, curiosity, exploring topics, an intersection with search more generally. So could you help us understand that journey, how that's developed, and how you and the co-founders of Perplexity came upon your approach to it? Yeah, that's definitely been a very fascinating almost two years, I guess. I think with general web search, what you ultimately want to get is an answer to your question, right? So the current iteration that we had, which was Google and all the other classical search engines, is an approximation of getting to this point. You have your question, you ask it in the form of a query, and then you get a bunch of documents that are very relevant, but you still have to do additional work to get to the bottom of it. You have to scan through the documents, you have to decide for yourself whether this is a true answer or not, you kind of have to trust the information. And as you can imagine, it's a lot of work, especially if you're trying to search for something very complicated, maybe things that are not so obvious. Wouldn't it be nice to avoid that step, where you just ask the question and you get an answer right away? That's the ultimate thing we're trying to get to. Obviously it's been tough to get there over the last decade or so. There's been a lot of work, but it never quite worked: there was a much higher level of hallucination, a much higher level of imperfect synthesis of the information. You basically get a Frankenstein: instead of a coherent, nice, easily parsable, readable answer, you get some basically extracted pieces of information just concatenated together. Not very pleasant. And it's funny: when we started, one of our angel investors was Jeff Dean, who requires no introduction, and he was saying, you know, Google actually always wanted to build something like this, but they have such high expectations for accuracy, because millions, billions of users are using Google, and if you hallucinate 1% of the time, you're going to get a lot of unhappy people. And so, because the models were not as strong as they are right now, they were never able to get to 99.9% accuracy, and that's why this work never panned out. But something great happened in 2022. When we started our company, myself and Aravind, my co-founder, we came from academia; we'd been doing a lot of research in language modeling, reinforcement learning and things like that, and he was actually at OpenAI at the time. We'd been very closely following the improvements of the GPT models, like GPT-2 and then GPT-3, which is where it got very interesting, and it became more and more obvious that there was going to be something there. This was the primary motivation for us to start a company. We wanted to build an answer engine from the get-go, but it was very ambitious. I remember we would go to investors and say, oh, we're going to build a search engine, and they would start looking at you like you're crazy, which makes a lot of sense; they would say, oh, there's Google already, and they had a fair point. But we still weren't discouraged by that. We knew there was something there, and we started prototyping. The first version of Perplexity we actually created as a Slack bot, or a Discord bot, which was a very primitive combination of a search engine plus, at that point, the Davinci-2 model, so still, you know, pre-ChatGPT, and it worked much better than I expected. It was very quick; we put up this demo in a couple of days, and you could already see that in certain cases it was very helpful. Because this was a
very early-stage company, we were trying to hire one of our first engineers, and we didn't know how to organize the insurance for him, so we actually used this bot to ask those questions about insurance, because if you go to Google and start asking questions about insurance, you're going to get a lot of ads, and you're just going to get disappointed very quickly. I've appreciated that same thing in founding a company; using these models in that way has been useful. Yeah, exactly. So we had been trying it out and playing with it, but then what really started happening was, I think a couple of weeks before ChatGPT, OpenAI released this Davinci-3 model, and I remember very clearly changing literally just the model name and asking the same questions that I was asking before, and I could see right away that it was just so much better: so much better at understanding what you intend, so much better at understanding what it should say and what it should not say, so much better at synthesizing the answer. I was really blown away, and I was like, okay, there's definitely something there. And then obviously ChatGPT happened a few weeks after, and we were like, okay. Our initial product, because as I mentioned we came from an academic background, had citations as the core component, because it was very clear to us from the beginning: if you want to give an answer, you want to make sure it's accurate, and you want to make sure you can verify it. So the citation was a first-class citizen in our product. And when ChatGPT came out, one of the biggest points of feedback for them was: okay, I don't know if this is accurate information or not, whether it's hallucination or not; how would I verify it? And that's why we decided, okay, this seems like a good opportunity to release our product. We literally, in a matter of two days, put up a website connected to the back end that we had, and we obviously did not expect that people were going to use it and that usage was going to grow as much as it did. But coming back to your original question, I think what happened was that literally in a matter of days or months (it obviously followed many years of research) there was a very clear step function in the quality of the generated answers, and if you spend some time playing with it, you can clearly see that it has become very good. We also realized at that time that models are only going to get better, and things are only going to get faster and cheaper, so there's a lot of stuff to build here. For those who are still learning about your organization and what you're offering, could you step back for a second? If you were talking to Jeff Dean or another investor, giving them the elevator pitch about what you're doing specifically and how it's differentiated from the GPTs and the other things out there, how do you define that? How do you describe yourself in terms of the specific opportunity that you're pursuing? The fastest way to get the most accurate answers to your questions. That's essentially an answer engine, and we were one of the first people to coin this term. And can you differentiate that from a search engine a little bit? Yeah. Basically, in a search engine, as I mentioned before, you have to search first; you get documents, let's say the top 10 results, and you have to scan through all of them and identify the information to get your answer. Here, we do this step for you: we take the first step of retrieving the relevant documents, and then we synthesize them into a human-readable, nicely formatted answer. If you want more information, you can then click on the citations, which are nicely attributed for each sentence, and learn more. And so we wanted to do things very simply. We identified early on that there are two things we care about: accuracy and speed, because you want to get information fast; Google trained us all to expect instant search results. I think that was very important, and early on it was challenging, because those models were very slow and our infrastructure was not very advanced. I remember that in the very first version you would have to wait three or five seconds to get an answer. It was very slow, but because it was such a better experience than just looking at search results on Google, people would still use it. Our ultimate goal, though, was: can we be as fast as possible? So yeah, the main differentiator is that we care a lot about quality, so we minimize the chance of things being inaccurate or hallucinated, and we want to do it as fast as possible. That's what distinguishes us from Google, because Google doesn't, for example, generate the answers, even though more recently they've started doing this, which kind of validated our idea. And ChatGPT primarily focuses on different things, although I guess more recently they've also started doing web search as well. [Music] What's up, friends? I love Backblaze. I'm happy to have them as a sponsor. Backblaze makes backing up and accessing your data astonishingly easy. This is a service I personally use. Go to backblaze.com/practicalai and you get unlimited cloud backups for Macs, PCs and businesses for just $99 a year. You can easily protect business data through a centrally managed admin, protect all the data on your machines automatically, and easily deploy across multiple workstations with various deployment options. You can add on enterprise controls, including granular access permissions, advanced single sign-on, group management controls and compliance support. They even offer multiple restore options, including rapid recovery in the event of data loss or ransomware (that sucks). You can access your backed-up data from anywhere in the world using their web app or their iOS or Android app. You can even restore by mail: they'll ship a hard drive with all your data to your door. You buy the hard drive, restore, send the hard drive back within 30 days and get a full refund. Get one-year file retention and version history; over 55 billion files restored for customers so far. Visit backblaze.com/practicalai so they know where you came from and continue to support the show. This is a service obviously recommended by me, but also by the New York Times, Inc. Magazine, Macworld, PCWorld, Lifewire, Wired, Tom's Guide, 9to5Mac and so many more. You receive a fully featured, no-risk trial at backblaze.com/practicalai. Again, they're supporting the show; go there, play with it, start protecting yourself from potential bad times. Start today. [Music] So you mentioned a few things there: web search, retrieval, and the large language model. At least in how I think about it, and maybe others categorize it differently, there's one element of information that you can get from an LLM, which is: I'm going to put in
a prompt, and it's going to generate text, and that may contain some facts, or made-up facts, but it may be informational; there's some sort of knowledge that can be gained there. Then, in a second case, there's a way to retrieve external data on the fly (that could be from your company's documents, from the web, whatever) and inject that into the prompts, into the model, which grounds it and, like you say, gives you a citation. There are also more agentic approaches, where there may be multiple ways you could get knowledge, maybe doing a web search, or searching a certain database you have access to, or any handful of sources that you've curated as tools, and you call them in a more automated way. So I'm wondering, from your perspective (obviously you're part of a team that has been exploring this very deeply from very early on, as you say, since these models made that jump), both in terms of what you're building with Perplexity and how you generally see the ability to get accurate information from these models: how do you view those categories in terms of their utility, and what are you relying on for the accuracy element specifically? The way I see it unfolding, I think the tools and the agentic behaviors are where it's going. The main bottleneck right now is just that models are not smart enough yet to take into account and reason over all the information that is out there, but I think it's going to be a main component. The models are already very powerful; they're trained on a lot of data, basically the internet, they have a lot of internal knowledge, and they can already do a very good job of synthesizing information. There are certain things they don't do well, and perhaps never will: things like computation, when you need to run some code or do sophisticated math; the LLM architecture, Transformers, is going to struggle at that. Also, because those models are so big, it can be very expensive to update them frequently, so you need a way to ingest new information that just happened and is not yet part of the LLM weights, which is what we specialize in. Then there are private documents, as you mentioned: in an enterprise, you have documents that the model obviously wasn't trained on, and you may want to reason about those documents. And there are all kinds of other tools; eventually there are going to be agents that take actions, maybe book a ticket, buy a ticket, that sort of thing. So I think that's definitely where it's going. It's going to be a synergy; everything is going to come together. We just need a top-level, powerful model that can reason across multiple things, and we'll need long context windows, maybe some memory as well, and then it can utilize those tools as much as possible. How we at Perplexity are thinking about it: there are multiple applications and use cases of this general approach, and we are primarily focusing on the information retrieval part of it. Initially the web is the main component, because there's a lot to extract from the web, but we're also thinking about how to integrate different data sources: maybe different databases, maybe more complicated or specialized documents, like PDFs, or financial data, things like that. The other aspect that I'm very excited about, and that we're working on, is this: right now we can do a great job of answering complicated questions that you can get answers to on Google, but still not the kind of question you would ask an expert. What if I have a question that requires multiple steps of reasoning: searching the web multiple times, analyzing the information retrieved, and then maybe refining the search? Those things may take some time, but if you did it yourself you would spend, let's say, an hour, whereas the system might spend 30 seconds and save you a bunch of time. So those use cases, answering very complicated questions, are where we believe technology like this is going to be useful. As people are using an answer engine like yours more and more often going forward, and you alluded a moment ago to the fact that LLMs are not the be-all (there are things they don't do well, like mathematics, and a variety of other things we could all throw out there, but they're really powerful at what they do), clearly there is a place and a need both for the LLMs, these largest models that suck up all the air in the news cycles, and for many smaller, specialized models, a mathematics model that you plug in, for example. As we're looking at trying to use answer engines to retrieve information, and that information is increasingly multimodal in nature in terms of what you're asking, how do the architectures of those come together? This is a space (it's not the first time we've asked about it here, but it's evolving so rapidly) where you're no longer hosting one model; you're now hosting a whole collection, and they may be mixed with models that you're calling out to via API. How does that look to you as you're building this company? At this point it's always going to be a trade-off. If you have one powerful model, yes, it can do lots of general things super well, but it's going to be slower and more expensive. One of our key principles is that we want to do things very fast, so we give answers as fast as possible. That means you have to design your orchestration system so that certain things rely on customized models, something much smaller and much faster: a specialist model. It's not a model that knows how to do everything; it knows how to do one task. Basically, the challenge is how you balance between these general models and these specialist models, and we've been doing this from the very beginning. When you send a request to Perplexity, it's not just one model; there are at least 10 different models doing lots of things with your request: all kinds of ranking models, a bunch of embeddings, different classifiers and so on. The other trade-off here, with a general model... one of the big, and I think actually very critical, reasons a company like Perplexity became possible in the first place is the speed of iteration. You
literally can change, the prompt and you can just get a new, product like in in a matter of like, hours like imagine couple of years ago, if you wanted to build something like, you know perplex or like whatever other, like gen product you have to collect, data first have to train the model, launch the product see if this product, makes sense does it have Market feed or, not if it has Market feed then you like, like start collecting data and and then, and then you just like keep improving so, what been possible with GPT models like, an an API is just like this kind of like, flipped over so you very quickly can, build a product see if there is a any, signs of Life of this product and then, you start collecting data which is I, think honestly the most important thing, and once you collect data you can like, distill it you can uh build like many, other like smaller models and kind like, optimize The Experience so you can make, the models faster you can specialize, them I think this is the key I think, this is the honestly was like one of the, most fundamental um changes in in the, development uh and and we kind of like, took advantage of of this uh thing like, early on and then still using but it's, still tricky kind of like imagine like, every time you have this like, specialized models and you have like if, you have tons of them you have to then, like treat each model like care about, each model separate so if you want like, rchain this model so you have to spend, some time on it so, you have to like evalate it so it, becomes more difficult to manage but on, the other hand you know uh you have some, great benefits so the key is just like, you don't want to go like too overboard, with those models like if everything is, on like customized models but also you, also clearly don't want to have just, like one model that's going to do, everything so yeah I I have a question, maybe related to that which I think is a, pain point A lot of people are feeling, and I'm I'm 
guessing your teams have, felt which you even mentioned this like, you you can have make a small change in, your prompt or create a new prompt and, all of a sudden it's almost like you, have a new product which is sort of like, amazing in in one sense and really, frustrating in another sense because as, you were just alluding to it's like oh, maybe I have these like 17 different, things chained together and they all, have prompts that I've worked really, hard on and then like tomorrow you know, llama 17 comes out or or something and, now like it behaves it has a different, character of behavior than the previous, model that I was using I'd love to use, it but now I have all of this it's, almost like AI model debt that I've that, I've got in my system do you have any uh, perspective on that or any anything that, sort of H has happened in in your, experience in this regard yeah yeah this, has clearly been a been a thinga it's, been happening quite often and if I, would guess that's going to continue, happening one thing to we like realized, early on is okay so this is going to be, the case there's going to be like there, is not going to be like one model that, uh rules them all I mean even though, like for some time it was like gp4 but, like now we can see there's like, particular like anthropic you know, Gemini like llama there's like going to, be a future where there's several, Frontier models right because of that we, decided okay so like let's design our, infrastructure and our system in such a, way that it's is going to be modal, agnostic right so and then that means, okay so there's like a ways where you, can evaluate each each component, independently there is a way where you, can quickly um change things up to adapt, for like a new model and stuff like that, and that's uh it took some time to get, there but it's I feel like it was like, very correct decision for us and then so, for example one of the advantage we have, over let's say like U things like chbt, 
or like Lo or like basically one model, providers companies is just like we can, seamlessly integrate many different, models and like our users can like okay, decide this is the they want to use this, model or they want to do that model like, later on as we uh progress I think we, can even like decide based on the, complexity of the query or like the type, of the query we can like route to like a, particular model that does the better, job of those type of queries and like, you know minimize like maybe like some, of the queries are super simple you, don't need to run like a very large mod, those like to answer so then you can, like can up toize speed and things like, that so I feel like you have to just, make a system in a way where it's like, agnostic to the, [Music], model what's up friends have you ever, had trouble accessing that favorite, sporting event or that awesome show or, that film even because it's not in your, region well our friends at nordvpn can, help you switch to a virtual location to, a country where it is available, unlocking a world of entertainment plus, it's not just about streaming it's your, go-to for online security protect your, bank details your passwords and your, entire online identity if you're, traveling they can Shield your data on, public Wi-Fi keeping you safe no matter, where you're at they also have this cool, feature called threat protection that, means you can say a bite of viruses, malware and fishing sites because, nordvpn will protect you no matter where, you're at they're one of the fastest, vpns globally they ensure no buffering, while streaming and no stops for your, ISP from bandwidth rling it might sound, costly but think again North VPN costs, less than a cup of coffee a month and, you can use one account on up to six, devices to get the best discount off, your nordvpn plan go to nordvpn.com, practical AI our link will also give you, four extra months on the 2-year plan, there's no risk with our 30-day moneyb, 
guarantee. Once again, go to nordvpn.com/practicalai. [Music] So, to follow up on what we were talking about before the break: I know you were talking about building around model agnosticism to be able to handle that. I couldn't help but wonder: occasionally a new model comes out that breaks new ground on modality, a whole new approach being added, that kind of thing. As a business builder who has to accommodate all these different models, when one jumps out with a completely new thing added in, something unexpected prior to the announcement, how do you in the organization pivot to accommodate that, keep the agnosticism, and yet provide that extra functionality that's now available? How do y'all tackle that problem? The most important thing for us is to not be caught off guard, and to try to anticipate what's going to happen. I think that's very important, and for the most part I wouldn't say it was too hard to predict. After that, I think it's primarily a product decision: does this new feature benefit your product or not? Do we want to build something into the product or not? One great example was image upload, multimodality. We knew for sure last year that this was going to happen at some point, because there were already some smaller models supporting it, and we knew a much better model was going to come out. Because of that, we understood in advance that this was going to be important for our product, that we could support these and those use cases, and we decided to build the infrastructure in advance and anticipate it. Obviously we
didn't fully predict exactly how it would operate, but it was very close, so it required literally a couple of days to adjust, and we were one of the companies that could very quickly release it as a product. So it's about having systems that are general enough to support those new modalities, which is very important, plus a great deal of anticipation. But also, sometimes certain features come out that are maybe not useful for your product. You don't want to put everything in just because it's a cool feature; adding it to your product for that reason alone is generally not a great idea. Only add things that make sense, and if those things make sense for your product, you likely already thought about how you would implement them in advance, so it makes it a bit easier. While you were talking, I was thinking there's one axis you have to navigate here, around model releases and functionality and modalities and all that stuff, and there's maybe another around UI and user experience. I've heard multiple people make the comment: well, the chat interface came out with ChatGPT, so everyone is focused around the chat interface. Is that the best way to utilize this sort of technology in the long run? There's probably a lot of exploration still open around UI and user experience with this type of technology. Certainly chat is relevant, and we're using it a lot already. I'm wondering, from your perspective, especially as we see this functionality embedded more in the physical world, whether that's in our glasses with the Meta glasses, or in kiosks in airports, or whatever those things are: what is your perspective on how important it is to explore new types of
UI or user experience with this technology? I believe, and we were very confident about this early on, that the chat interface is a temporary thing. It's just too limiting; it has a lot of constraints. That's why we didn't follow the usual route, like all the chatbots that literally copied ChatGPT and put up a chat interface. We thought a little bit more about this, and we designed accordingly. I feel like right now we're still in this early stage where people care about the model itself; the model is the thing. But as this gets more advanced, as more people start using GenAI products, I feel the main thing is going to be the product itself: what kind of things can the product do, do you do it better than others, do you have the best UI, do you have the best UX? That's why we were thinking about those things early on, and we designed our product in the way most suitable for the things we want to do. For search, we knew chat doesn't make sense; that's just not how people search for information. That was a very big factor in our success, I think. The other thing: even last year we started prototyping and experimenting with this concept of generative UI, where the LLM can guide what kind of UI elements get generated. Sometimes, in a chat interface, if you want to ask a follow-up question, it doesn't make sense to ask it as a sentence; maybe you want to show a checkbox, or a button, or whatever, especially on mobile, where everybody uses phones, right? It's just
not very convenient to type, especially if you're on the run, so you'd rather press a button. That's why speech, and I guess voice technology, is going to be one of the interesting modalities, an interesting interface, for sure. It has a lot of advantages; obviously it has lots of disadvantages too, but it's definitely going to be interesting. And I think, going forward, as we move toward agentic behaviors and more things become possible, it's definitely not going to be the chat interface; it has to be something else. I'm really fascinated by this topic, and it's something both Daniel and I have some passion for, just in daily use. I'm wondering, with you thinking about that kind of productization: do you think that's something Perplexity engages in directly, or supports other companies through? There are so many times in the course of a typical day when I wish I had other ways of interfacing with these capabilities we're talking about. I'll give you a concrete example: I take my dog for a walk at a nearby park every day, and that's my thinking time, when I'm really trying to be creative, and I have to keep walking; I don't want to stand there. Right now, I have to stop and pull out my phone, and it's frustrating: people are going by me, I'm trying to hold my dog, but I want this experience. It might be while driving, it might be while walking the dog, where a seamless way of utilizing these capabilities that we've grown accustomed to comes about. So first, how do you see yourselves being part of that next journey on the interface side? And second, do you have any ideas on how to get there? That's the next thing; we're definitely looking into this, and we're considering multiple options.
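The "generative UI" idea described above, where the model emits UI elements like buttons instead of forcing a typed follow-up, could be sketched roughly like this. This is a hypothetical illustration: the JSON spec format, the `render` helper, and the example model output are all invented, not Perplexity's actual implementation.

```python
import json

def render(spec_json: str) -> str:
    """Turn a model-emitted UI spec into a (toy) rendered widget string.

    A real client would map the spec onto native buttons/checkboxes;
    here we just render tappable options as bracketed labels.
    """
    spec = json.loads(spec_json)
    if spec["type"] == "buttons":
        return " ".join(f"[{label}]" for label in spec["options"])
    # Fall back to plain text for ordinary answers.
    return spec.get("text", "")

# Pretend the model produced this follow-up instead of asking the user
# to type a sentence like "what's your budget?"
model_output = '{"type": "buttons", "options": ["Under $100", "$100-$150", "Over $150"]}'
print(render(model_output))  # → [Under $100] [$100-$150] [Over $150]
```

The point of the sketch is the contract: the LLM decides *what* interaction element fits the follow-up, and the client decides *how* to draw it for the platform (mobile, voice, glasses).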
You see, there are certain things I think we can do ourselves; for certain other things we probably have to work with some other partners. But I truly share your experience, and I think, yeah, if you sit in front of a computer, by far the best interface is the keyboard; I don't think you can do better than that. But if you're occupied with something else, if you're driving a car, maybe walking, there has to be something else. Even the phone is, you know, okay: maybe taking notes, maybe you say a command and you get voice back; that's already something, but it misses visual information. You want to add that, and that means you probably have to have some sort of glasses on. I think it will definitely happen, and we will try. For sure, we spent a lot of time this year improving our mobile app to do voice-to-voice, and we invested a lot into voice generation. So, for example, you can ask various questions: if you need something quick while you're walking, you want a quick lookup of information, we support that. And if you drive a car, you can listen to the stories, or Discover, from Perplexity; that's also AI-generated voice, so it's like listening to a podcast. That's super important, and I think the next step is vision, and figuring out how to get there. Maybe one challenge I've been thinking about this entire time while we've been talking relates to a danger people have identified with this new technology. You've already mentioned that you're doing web retrieval, or retrieval from certain sources, as a kind
of primary way of grounding answers, of ensuring the accuracy of citations. But I know a lot of people are concerned about and thinking about this idea of data poisoning: we're putting out a lot of generated content on the web, and the proportion of human to generated content is going to change over time. That means even for retrieval systems, especially if you're doing web searches, there's a potential that you could retrieve generated content itself and get into a weird loop. Of course, there's a separate problem for the models and how they're trained, and this affects a lot of different areas, but you've probably been doing a lot of thinking about this, because it's key to how you operate as a system. Any perspective on that that you feel people should keep in mind moving forward, or things you're thinking about, whether that's data curation, or validation? I know a lot of people are talking about detecting generated content, and that's maybe hit or miss; there's all of this connected stuff, but generally, around this idea of data poisoning, or generated content on the web, any thoughts? To me it's a technological problem. It's very reminiscent of spam classifiers, just at a whole other level. Say, 20 years ago, when you received emails, you would receive a lot of spam, and eventually people developed technology that can detect it. I feel something like this will happen. It's always going to be a constant battle between generators and discriminators, so at some point the generators may be better. It's like fighting malware. Yeah, exactly, it's the same concept. It's definitely going to be an issue, for sure, but my hope, and my belief, is that the good guys,
the good discriminators, are going to win out, because from machine learning fundamentals, discrimination is a much easier problem than generation. It's much easier to tell what is good and what is not than to generate it, and usually, it seems, we've been more successful at detecting this stuff, and I don't see any reason why that's not going to continue. This has been really fascinating. As we wind up here and have you for one more question: we've talked about the future, about what our expectations might be and how they might be fulfilled. Are there any other areas we haven't addressed that you're interested in? And possibly, as part of that, any way of summarizing your own vision, without it just being answers to questions Daniel and I have thrown at you: your own vision for what the future looks like to you, what you want it to be, and what Perplexity is trying to realize, to paint a picture of what we might see, over whatever time frame you want to address. Yeah, so I'm very excited, basically, to get to the point where I have any question, any problem, and I just go to Perplexity and get an answer, or a suggestion, or it even performs an action for me. I already mentioned this thing about increasing the quality, or the complexity, of the type of questions you can ask, and making sure the system will be able to handle those. I think that's definitely going to be the future, and we work hard on that. It opens up another dimension: you can ask lots of simpler questions, or you can have one hard question, so it's complexity versus quality. And I think, ultimately, when you get information, you usually use it for some
sort of decision-making, right? So if we can take this information that we retrieve for you and distill it into the form of an answer, can we also then do some decision-making for you, and can we perform actions on your behalf? Imagine you're researching something; maybe you want to research the best running shoes. It's a pretty painful procedure right now, because there's so much stuff on the internet that you don't know what to trust. That's why, usually, once you identify running shoes you like, you stick with them forever, because you trust them. So imagine if somebody did this research for you, really nailed it down, weighed all the pros and cons, and then suggested: okay, these are the possible variants, do you want to buy this one? And you say yes, and it automatically buys it for you, and two days later it's delivered. You only need to type the question once; you don't have to put your credit card in, and so on. And there are tons of examples like that. So I would say the future is: first you have to nail the information retrieval part, generating the most useful information; the next step is decision-making, making decisions based on this information; and the third step is actions. That's where I would be excited to get to. That's great, yeah. Thank you for being willing to take time to dig into a number of topics that I know our listeners are exploring themselves and interested in. Perplexity has certainly been leading in a lot of these areas, so keep up the good work, and thank you for the work and the perspective and for taking the time to talk. I appreciate it. Yeah,
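The retrieve, then decide, then act progression described above, using the running-shoes example, could be sketched like this. Everything here is invented for illustration: the candidate data, prices, ratings, and the stubbed `retrieve`/`act` functions stand in for a real search backend and a real purchasing integration.

```python
def retrieve(query: str) -> list[dict]:
    # Step 1: gather candidate information (stubbed search results).
    return [
        {"name": "Shoe A", "price": 120, "rating": 4.7},
        {"name": "Shoe B", "price": 90, "rating": 4.2},
        {"name": "Shoe C", "price": 150, "rating": 4.8},
    ]

def decide(candidates: list[dict], budget: int) -> dict:
    # Step 2: weigh the pros and cons; here, best rating within budget.
    affordable = [c for c in candidates if c["price"] <= budget]
    return max(affordable, key=lambda c: c["rating"])

def act(choice: dict) -> str:
    # Step 3: perform the action on the user's behalf (stubbed purchase).
    return f"Ordered {choice['name']} for ${choice['price']}"

picked = decide(retrieve("best running shoes"), budget=130)
print(act(picked))  # → Ordered Shoe A for $120
```

The sketch makes the layering explicit: each step consumes the previous step's output, so information retrieval has to be solid before the decision and action layers on top of it can be trusted.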
thanks for the great questions, and thanks for having me. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already; head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at fly.io, to our beat-freakin' residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
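Going back to the model-agnostic routing discussed earlier in the episode, where simple queries go to a small model and harder ones to a frontier model: a minimal sketch might look like the following. The model names, the registry, and the complexity heuristic are all invented for illustration; a real system would use learned classifiers and actual model APIs.

```python
def classify_complexity(query: str) -> str:
    """Crude heuristic: short lookup-style questions count as 'simple'."""
    words = query.split()
    if len(words) <= 8 and "?" in query:
        return "simple"
    return "complex"

# Registry maps a tier to a callable; swapping in a new model
# means editing one entry, not rewiring the pipeline.
MODEL_REGISTRY = {
    "simple": lambda q: f"[small-model] answer to: {q}",
    "complex": lambda q: f"[frontier-model] answer to: {q}",
}

def route(query: str) -> str:
    """Dispatch a query to whichever model tier it falls into."""
    tier = classify_complexity(query)
    return MODEL_REGISTRY[tier](query)
```

The design choice this illustrates is the one from the conversation: keeping the system agnostic to the model means the routing policy and the model backends can each change independently.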
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Using edge models to find sensitive data | We’ve all heard about breaches of privacy and leaks of protected health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they deploy edge AI models to help companies search through billions of records for PHI.
Leave us a comment (https://changelog.com/practicalai/273/discuss)
Changelog++ (https://changelog.com/++) members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://graphstuff.fm/episodes/2023-finale-llms-and-knowledge-graphs-throughout-the-year?&utm_campaign=UCGenAI&utm_content=AMS-SrDev-ToFuDev-UCGenAI-Audio-None-GenAI1-GenAI-NonABM&utm_medium=Audio&utm_source=PracticalAI&utm_justglobal=) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.
• Backblaze (https://www.backblaze.com/cloud-backup/personal/landing/podcast/practicalai) – Unlimited cloud backup for Macs, PCs, and businesses for just $99/year. Easily protect business data through a centrally managed admin. Protect all the data on your machines automatically. Easy to deploy across multiple workstations with various deployment options.
Featuring:
• Ramin Mohammadi – GitHub (https://github.com/raminmohammadi) , LinkedIn (https://www.linkedin.com/in/ramin-mohammadi-ml)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Tausight (https://www.tausight.com/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-273.md) | 292 | 1 | 1 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I am founder and CEO at Prediction Guard, where we're safeguarding private AI models, and I'm really excited today, because I'm joined by a friend who I've had the pleasure of getting to know a little over the past weeks in the startup community: Ramin Mohammadi, who is the AI and ML lead at Tausight and also an adjunct professor at Northeastern. Welcome, Ramin. Hi, and thanks for having me. Yeah, it's great to have you here. It's been cool to visit the Boston startup community a couple of times and participate in a few events together. I've been really fascinated to hear about some of the things you're doing at Tausight, so I'm excited to dig into those a little bit. I'm wondering if you could share a little with us, because you're really thinking deeply about the intersection of AI and privacy, specifically privacy as related to personally identifying information, but also personal health information, PHI. Tausight is thinking deeply about how companies are handling very private, sensitive data, and knowing where that data is, which is actually a huge problem. I was kind of shocked when I heard one of your presentations, where you talked about the size and the scope of this problem related to PHI. Could you give us just a little bit of a sense of what PHI is, why
companies handling PHI is a problem, and some of the challenges related to that? Sure, I can do that. First, to give an introduction: what is PHI, or protected health information? Based on the HIPAA rule, there are 18 identifiers which can lead to identifying an entity or a person within a healthcare organization. This information is valuable and is targeted by hackers. One reason is its high value: it contains sensitive personal information such as medical history, Social Security number, and insurance details, which makes it very valuable on the black market. Hackers also use it for monetary gain; they can sell stolen PHI to criminals who use it for identity theft, insurance fraud, or other illegal activities. They also use it for exploitation and extortion, basically using stolen health information to blackmail individuals or organizations. 133 million healthcare records were breached in 2023, which means one out of three Americans' lives was affected. That's about a 160% increase compared to 2022, and about a 240% increase since 2018. So for our listeners, at least the ones in the US: one out of every three of those listeners has had some portion of their health information exposed in some type of breach, is that right? That is correct; that record came out in 2023. Yeah, that's insane. And you mentioned hacking: how is this data being breached, mostly by hackers, or by mistakes by organizations? What is the combination of ways this data is getting exposed? That's a great question. Based on the 2023 report, 78% comes from hacking of the network storage where the data resides in healthcare, and a small amount, about 2%, happens when someone steals a laptop, for example, and the laptop contains PHI, or by sending it via email, or basically phishing emails,
stuff like that. So there's a breakdown, but the majority comes from hacking, 78%. And are these companies mostly healthcare companies, or who has this data? That's really interesting, because technically healthcare organizations like hospitals are only accountable for about 30% of these incidents, and the remaining 70% happens through hacking of their business partners: third-party organizations that have some software or storage for keeping track of the medical data. But in the end, the cost comes back to the healthcare organizations. What are the implications? What happens when this data is breached; what is the bad-case scenario, I guess? Sure. Let me first tell you what the overall cost of this has been: healthcare cybersecurity has spent about 28 billion dollars over five years, and we are still not able to protect PHI. And the way it works is, when an organization gets hacked or there's a data breach, depending on the state there are some thresholds: for example, if you have been breached and more than 500 individuals' data is involved, you need to go public about it, and you will basically get sued, and you will also be fined. We have this wall, which we unfortunately call the Wall of Shame, where the government posts the names of the organizations that got hacked or lost PHI data. This wall is constantly updated, and the last thing any CISO wants is to see their name on that wall. Yeah, and their brand is hurt by that, but there are also fines for this sort of breach. [Music] [Applause] [Music] What's up, friends? Do you remember when ChatGPT launched? I do. It felt like the LLM was this magical tool out of the box. However, the more you use it, the more you realize that's just not the case. The technology is brilliant, don't
get me wrong, but it's prone to issues like hallucination on its own. But there's hope. Feed the LLM reliable, current data, ground it in the right data and context, and then, and only then, can it make the right connections and give the right answers. The team at Neo4j has been exploring how to get results by pairing LLMs with knowledge graphs and vector search. Check out their podcast episode about LLMs and knowledge graphs throughout 2023, free at graphstuff.fm. They share tips on retrieval methods, prompt engineering, and so much more. Don't miss it; find a link in our show notes. Yes, check it out: graphstuff.fm, episode 23. [Music] Could you talk a little bit about, let's say I'm a healthcare company, and I want to not be on that Wall of Shame, and I want to follow the best practices and all of that: what is the reality? I know you have been thinking very deeply about solving these issues with AI and machine learning, but let's just say that doesn't exist. What are the choices a company has, and what are the challenges they face, in terms of securing this data? Currently, healthcare organizations have a series of tools, maybe four or five tools that do the same task, and the way these traditional tools work is that they have a series of patterns, like regexes. When someone tries to download data, if it matches, for example, one of the regexes they have, it will say: hey, you transferred PHI, or hey, you downloaded PHI. Those types of files, the ones coming directly out of the EMR, electronic medical records, are easy to detect. The problem is with what we call dark PHI: PHI that resides on your network, on your machines, but you are not able to detect it, because the patterns you are using are not capable of detecting it. You know, we have, for example,
organizations with millions of patients on the network, and, I don't know if you have written regexes before, but a regex is only as good as the person writing it. Yeah, and there's that meme: you decide to solve a problem with a regex, and then you just end up having another problem, which is the regex. That's correct. What we've heard from the prospects we have talked with is that updating these rules is costly and requires dedicated engineers or IT professionals, and no one likes to write them. Very true, at least I can say that. So that's the problem: they have tools in place, but they're incapable of solving the problem. And when you say dark PHI: I'm assuming you might have a regex for a Social Security number or something like that, but if I think of a doctor recording a dictation of a patient visit, there's a lot of natural text in there about diseases and all of those sorts of things. Is that more natural-text health information what leads to dark PHI? Or does it also have to do with, oh, it's easy to detect in this file format because I know the pattern, but then someone scanned in a document, and it's a PDF or something, and my script doesn't know how to scrape these different data types? Could you go into that? I'm super fascinated by this idea of dark PHI sitting around. The first thing to point out is that 80% of healthcare data is unstructured. Unstructured means everything from images to audio transcripts to all sorts of PDFs, and what we have also seen in healthcare is a huge variety of file extensions. You would be surprised: you will see file extensions that don't exist. What clinicians or researchers do is, when they edit a file, they put a dot and their last name as the extension. I think in the last study we did,
we found 8,000 extensions in our prospects' environments. So literally, personal information can be in the file extension itself? That's correct: personal information can be in the extension, but there are also random file extensions that they might use in order to bypass some rules. Oh, okay, I gotcha. So they're doing a workaround, because there's this annoying tool that prevents data from being transferred around and is blocking this file type, so they just change the file type. That is correct. One thing to point out here is that healthcare lives on data: clinicians need to access that data, and you should not stop them from doing their job. You just need a better way to detect it, and then, for example, encrypt the file, so if someone steals that file, they cannot open it. Our goal here is not to prevent clinicians from doing something; it's to make it more secure. Yeah, and maybe that starts to get a little bit into how AI and machine learning fit into this puzzle. Before we describe exactly how they fit, I think there might be a lot of listeners out there who are intrigued by what the challenges might be of applying AI or machine learning in the context of healthcare. If you're a data scientist and you're building a model, or wanting to use an LLM, or building your own model to use in a healthcare context, what's unique about that context that makes it more challenging? Maybe it's on the deployment side, or the model-building side: what are some of the challenges of working in healthcare specifically with this technology? This is an interesting question. As someone who's keen on MLOps, yes, exactly, I always say the main challenge of ML is the whole project, but for us these challenges are a bit greater than for some other AI technologies, due
to the space we're in. For example, we don't have access to real patient data to train our models, and no healthcare organization will agree to let you use their data. And probably, if one of them agreed to it, I'm guessing you couldn't use that same model for a different organization, because you trained on specific data that's sensitive to one organization, is that right? That's absolutely correct. We have a huge data-heterogeneity problem: for example, one organization is a cancer organization, another a dental organization; these datasets are different. Other challenges: you also cannot collect or transfer any data to the cloud, which means everything needs to happen at the edge. Data labeling is highly difficult; even human-level performance has about an 8 to 10% labeling error rate for detecting PHI. Again, you have lots of file types and extensions. Data normally contains bias: certain demographics have a higher amount of data than others. Model development is constrained first by model performance and then by optimization metrics. And, you know, model deployment on the edge has its own difficulties, which we can talk about later. Lastly, unsupervised model monitoring makes it more challenging to detect drift. Just to define a couple of those things: when you say the edge, what is your definition, because people might have different definitions in mind, whether that's some staff member's laptop, or a desktop in a lab, or a phone, or a microcontroller somewhere: in the context of healthcare, what is the edge environment? The edge could be the laptop a clinician is working with, the desktops, the tablet, for example, that you're using, and it could be the server storage, basically. Gotcha, but all sort of on site, with some healthcare data center, or where staff are working on site? That is
correct. Gotcha. And when you say unsupervised model monitoring, what do you mean around that? What are you monitoring for? You mentioned drift, so I'm assuming that has to do with the PHI itself changing, in terms of, say, there's a new form type or a new thing that starts being collected. Is that what you mean by model monitoring and that drift element? One of the things that we see is change in the distribution of the data as you scan across different groups within the same organization. For example, there's a group for radiology versus a group for a normal PCP, and the data have different distributions, and we need to be able to detect any drift in the data distribution as early as possible. Sometimes you also might find something like concept drift, where it's more contextual: maybe a file that under certain scenarios is considered PHI is, in some scenarios, actually not PHI. There are some rules here which make it more difficult. Yeah, contextualization, I guess, is a challenge. [Music] What's up, friends? I love Backblaze; I'm happy to have them as a sponsor. Backblaze makes backing up and accessing your data astonishingly easy. This is a service I personally use. Go to backblaze.com/practicalai and you get unlimited cloud backups for Macs, PCs, and businesses for just $99 a year. You can easily protect business data through a centrally managed admin, protect all the data on your machines automatically, and easily deploy across multiple workstations with various deployment options. You can add on enterprise controls, including granular access permissions, advanced single sign-on, group management controls, and compliance support. They even offer multiple restore options, including rapid recovery in the event of data loss or ransomware (that sucks). You can access your backed-up data from anywhere in the world using their web app or
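The distribution-drift monitoring described in this part of the conversation can be sketched with a two-sample Kolmogorov-Smirnov statistic: compare a reference window of a feature against a live window and flag when the gap between their empirical CDFs grows. This is an illustrative, self-contained version, not the guest's actual pipeline, and the 0.2 threshold is an arbitrary choice.

```python
# Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
# empirical CDFs of a reference window and a live window of one feature.
def ks_statistic(reference, live):
    ref = sorted(reference)
    cur = sorted(live)
    points = sorted(set(ref) | set(cur))

    def ecdf(xs, v):
        # Fraction of observations less than or equal to v.
        return sum(1 for x in xs if x <= v) / len(xs)

    return max(abs(ecdf(ref, p) - ecdf(cur, p)) for p in points)


def drifted(reference, live, threshold=0.2):
    """Flag drift when the distribution gap exceeds a chosen threshold."""
    return ks_statistic(reference, live) > threshold
```

In practice you would run something like `drifted` per feature and per group (radiology versus primary care, in the example above), since the whole point is that those distributions differ.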
their iOS or Android app. You can even restore by mail: they'll give you a hard drive with all your data shipped to your door. You buy a hard drive to restore, send the hard drive back within 30 days, and get a full refund, and you get one-year file retention and version history. Over 55 billion (with a B) files have been restored for customers so far. Visit backblaze.com/practicalai so they know where you came from and continue to support the show. This is a service obviously recommended by me, but also by the New York Times, Inc. magazine, Macworld, PCWorld, Lifewire, Wired, Tom's Guide, 9to5Mac, and just so many more. You receive a fully featured, no-risk trial at backblaze.com/practicalai. Again, they're supporting the show, so go there, play with it, and start protecting yourself from potential bad times. Start today. [Music] Yeah, so maybe we could now get a little bit into how you've been thinking about and approaching this problem from the Tausight perspective. In the context of this edge environment, in the context of this unstructured data, in the context of the constraints that we just talked about, how did you and your team specifically think about applying AI and machine learning to detecting PHI? And maybe also, what is your goal here? Is your goal to stop breaches? Is your goal to provide insights about this PHI? How did you decide on the main problem you wanted to solve and why AI or machine learning was relevant to solve it? I could actually first tell a short story. That'd be great, yeah. Six months ago I was traveling to the AI Summit in Austin. I got to the airport and I was passing through TSA PreCheck, where the TSA agent asked me to check my bag. It turned out that the machine had picked up on this pre-workout container which I had in my bag. The agent used a device on the box and the result was positive. He was like, oh man, I need to call for this
special unit to come and check this. I was like, sure. Then this special unit arrived, wearing something like hazmat suits. They came and used a kit with a bunch of different reagents to test my pre-workout more specifically. They started sampling from the pre-workout and adding it to a bunch of test tubes. Long story short, the agent was like, yeah, you're good, and he said this had happened to his cousin also; these pre-workouts cause false positives. So we are like that special unit, with a bunch of different models trained to find and protect the dark PHI, while the current tools in the market are like the first and second machines, which lead to false positives or unknown false negatives. At Tausight we see this problem as a personal problem: it's our PHI that's being targeted, and clearly the current tools in the market cannot protect it. The HIPAA Security Rule says that you must do a complete and accurate assessment of all your risks and vulnerabilities to ePHI. That is so fundamental to what we need to do, and AI is such a critical piece of taking advantage of newer technology to solve what used to be a labor-intensive problem, one that can be much easier if you can define the scope of the problem and have machine learning models that run effectively, accurately, and in a calibrated manner. That's what we do: we take advantage of AI to find sensitive data, and I think we've gotten to the point where risks, threats, and vulnerabilities are going to be detected at the edge using AI, as opposed to, hey, I have all these heuristic rules, which is how we do lots of this today when it comes to recognizing patterns. At Tausight we use AI, for example, to recognize when sensitive data is in unstructured content. It doesn't require us to say, hey, there's a keyword here, there's another keyword there, which is how you'd do it with heuristic programming: a combination of these three words must mean this, a combination of these four must mean that.
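The contrast with heuristic rules can be made concrete. A hand-written pattern like the hypothetical one below catches only the exact format you anticipated, which is why the rule count explodes and why free-text PHI slips through:

```python
import re

# A hand-written heuristic: flag US Social Security numbers written as
# ddd-dd-dddd. Purely illustrative; real PHI also covers names, dates,
# MRNs, addresses, and free-text mentions that no fixed pattern anticipates.
SSN_RULE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def rule_hits(text):
    """Return every substring the heuristic rule matched."""
    return SSN_RULE.findall(text)
```

The rule fires on the format it was written for, but the same fact written as "SSN 123 45 6789" sails past it, which is the variability argument the guest is making for a learned model.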
With those, you never get to write all the rules; you will never get to the variability you need. In our case, for example, one of our models has about 50 million parameters and can be set up to recognize this stuff. You would never, in years of programming, get that much logic into your regexes. The other main factor for us is the ability to run these models right at the edge, where the data is being created, emailed, printed, copied, or faxed. We bring the AI to the data rather than taking the data to the AI, which is what most current AI solutions do. By doing so, we can ensure that our data is always protected, agnostic of hardware specs or network connections. Yeah, all of what you said makes a lot of sense in terms of the approach and how you're applying AI and machine learning, but in my practical data scientist mind I'm also thinking, oh man, that's really difficult: deploying these models against heterogeneous types of data, running them at the edge, running them across a diverse set of hardware. From your perspective as a practitioner, which of those issues was the most challenging? Was it the deployment targets and their diversity? Was it the types of models you could or couldn't run in those edge environments? Was it the actual training and labeling of the data? I imagine all of those are really difficult problems and you had to tackle all of them, but what were some of the hardest to solve? I will definitely say the first. The most challenging problem is the data labeling and data creation, because we don't have access to real patient data, so we need to create our own curated dataset, and we need to ensure that we don't introduce creation bias in that data. The other thing comes around the
model training. Our solution needs to be able to live alongside other programs that are running on a given machine, within certain performance boundaries. One example I'll give you: there are rules such that if an application surpasses a certain amount of memory or CPU, it will be blocked. So you need to be sure that all these ML models, or the ML pipeline that you have, always remain below that boundary. And is that because these are essentially, I mean, I might say mission-critical, but these are critical systems, right? They're using them to treat patients. That is correct. So if you consume all the memory and the thing stops working, then it's potentially a life-threatening situation, or at least a very concerning one in the healthcare context, right? That is absolutely correct. Gotcha. Yeah, and in light of those constraints, of course, some people now might just say, oh well, we've got all these LLMs now and they're great at doing all of these things, but I'm guessing a lot of those aren't a fit for this sort of environment, these memory constraints. So where do you go with that? Is it looking back to sort of traditional NLP? Is it model optimization? Is it a combination of those? How are you balancing the constraints while also looking forward to these new generations of models? Regarding the LLMs, I was reading about Phi-3 by Microsoft, the small model, and even that model requires a certain amount of cores, RAM, or GPU. Correct, yeah. None of the healthcare organizations have computers with those specs; they have, like, four gigabytes of RAM maximum and some legacy CPUs. The other problem with LLMs is that they introduce additional risks to healthcare. Some clinicians or researchers are using tools like, for example, ChatGPT, copy-pasting patient data to get
some summary extraction, which is not how it should be used. I know, actually, your company, Prediction Guard, is trying to solve a problem like that. Yeah, there are certainly a lot of people pasting things into chat interfaces; that's very concerning, I'll definitely say that. Yeah, for sure, that is correct. Now, when it comes to model optimization, we take a series of approaches to be sure that our models are optimized for such an environment. This could be knowledge distillation, or student-teacher networks, quantization, and model pruning. We use a combination of all of these to ensure that every model we have lives within a certain boundary. Gotcha, yeah, that makes sense. So it's very important, and I'm guessing that with the model architectures and these approaches, you can only go so far. It's not like you're going to take Llama 3 70B, apply these optimization techniques, fit it into four gigabytes of memory, and run it on a CPU. So it's super interesting, and I wonder, what is your view as you observe the marketplace? People are exploring these open models, exploring bigger models, but at least in the space you work in, the only way you move forward is with small models, or customized, optimized models. How do you view that shifting into the future? Do you think there will always be this diverse set of environments in the healthcare space that you need to optimize models for? Will they eventually get over their hurdles of using the cloud or large models? How do you see that developing moving forward into the future? A report by Schneider Electric indicates that currently 95% of AI workloads operate in data centers. They forecast that this number will move to a 50% split between edge and cloud by 2028. When you monitor the current developments in the market, you can see that most of the chip
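Quantization, one of the optimization techniques just listed, can be illustrated in miniature: symmetric 8-bit quantization stores each weight as a small integer plus one shared floating-point scale, roughly a 4x size reduction versus float32. This toy sketch ignores the calibration, per-channel scaling, and outlier handling that real toolkits deal with.

```python
# Symmetric int8 quantization of a weight vector: integers in [-127, 127]
# plus one float scale factor.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero input
    q = [round(w / scale) for w in weights]
    return q, scale


def dequantize(q, scale):
    # Recover approximate float weights; small rounding error is the price
    # paid for the memory savings.
    return [v * scale for v in q]
```

Distillation and pruning attack the same memory/CPU boundary from different angles: fewer or smaller layers rather than smaller numbers.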
manufacturers are moving towards creating much stronger chips or machines that allow you to run AI at the edge, for example Intel's Meteor Lake or the AI PC. Right, but it will take quite a while for healthcare organizations to adopt that change, because it requires budget, and I think healthcare organizations go through a machine refresh maybe once every five years. I don't think they update every machine, only certain machines. But I definitely do see a future where you can bring much larger models right to the edge; I just don't think we are there yet. Yeah, I appreciate that perspective, because some people in our listener base are constantly overwhelmed by news about these new big models, but it's harder to get the story of a practitioner on the ground working with specific companies within certain constraints. There's still quite a diversity of constraints that a practicing data scientist or AI engineer has to work within, so I think that viewpoint is very important. As you look at what you've done with Tausight and the tools you've built, detecting PHI, helping companies know where their PHI is, reducing false positives, figuring out how to run these models on edge devices, and all those things, does anything stand out in your mind? You don't have to mention specific customers, but are there success stories, things you're proud of, that you're glad you've been able to be a part of in terms of helping protect this PHI? Any case studies or use cases that pop into your mind? Yeah, I can give some examples without naming anyone. Sure. We were in this meeting and a CISO was on the call, saying: I had this laptop that was stolen, and I don't know what's on that laptop. But because we had our software on that laptop, we could give them an inventory, and it turns out the
laptop contained lots of PHI. They had never had that view of these types of scenarios, of where the PHI is or who has access to it, but we can basically give you that. On another customer call, it was like: we are happy, you know, with the tools that we have; they have fewer false positives but unknown false negatives, and we are quite unhappy about writing the rules. It takes us a while. But when you use our product, there are no rules that you need to write; it's basically out of the box. After you install our product, it starts scanning all the files and also monitoring what happens on the machine. So if someone is copying and pasting PHI into an email or into another file, we can detect that; if someone is faxing it, we can detect that. So these customer calls can also be positive, and sometimes scary for customers, because we are able to find really detailed information around your network. You're pulling the curtain back; there's work to do once you understand it. Yeah. Well, as we draw pretty close to a close here: you are plugged in both on the academic side and on the startup side with Tausight, and you're invested in this healthcare industry from the perspective of AI and ML. What gets you excited as you look to the coming year? Maybe it's things with Tausight, maybe it's things more generally in the AI community. What are you excited about, and what are some of the positive things you see developing over the coming year? I think there are two things I'm really interested in. One is the development of these large models and the fact that they're getting smaller and smaller. I am looking forward to working with those SLMs, or small language models, and deploying them right at the edge. And I think the other thing that I'm
quite interested in, and right now actively working on, is federated learning. I know federated learning is kind of in the background; not many companies are actively doing it, due to all the challenges it has and also some security concerns. But for a domain like healthcare, where you cannot transfer data and you cannot see the data, I find it absolutely necessary for your models to be able to train themselves and update themselves. So I think those are the two main things I'm looking forward to in the upcoming year. That's awesome. Yeah, I'm definitely excited by both of those things as well, and I know Chris and I on this podcast have mentioned federated learning for years. I hope it comes more to the forefront as people figure out paths to do this; I think that will be interesting. Well, Ramin, it's been great to have you on the show. I think we'll see each other again in Boston before too long, but yeah, it was great to have you on, and thanks for taking time out of your schedule to share some of these insights with us. Yeah, thanks for having me, Daniel, and see you soon in Boston. Sounds good, see you. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at fly.io, to our beat freak in residence, Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
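The federated learning the guest mentions reduces, in its simplest FedAvg form, to each site training locally and sharing only weight updates, never patient data, while a server averages them weighted by each site's example count. A minimal sketch with invented client data:

```python
# Federated averaging (FedAvg) in miniature: combine per-site model weights
# into a global model, weighted by how many examples each site trained on.
# Raw data never leaves the sites; only these weight vectors move.
def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

A real deployment layers secure aggregation and differential privacy on top, which is part of the "security concerns" caveat in the conversation.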
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Rise of the AI PC & local LLMs | We’ve seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.
Leave us a comment (https://changelog.com/practicalai/272/discuss)
Changelog++ (https://changelog.com/++) members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Ladder Life Insurance (https://ladderlife.com/changelog) – 100% digital — no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term coverage life insurance through Ladder. Find out if you’re instantly approved. They’re rated A and A plus. Life insurance costs more as you age, now’s the time to cross it off your list.
• Neo4j (https://graphstuff.fm/episodes/2023-finale-llms-and-knowledge-graphs-throughout-the-year?&utm_campaign=UCGenAI&utm_content=AMS-SrDev-ToFuDev-UCGenAI-Audio-None-GenAI1-GenAI-NonABM&utm_medium=Audio&utm_source=PracticalAI&utm_justglobal=) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Ollama (https://www.ollama.com/)
• LM Studio (https://lmstudio.ai)
• llama.cpp (https://github.com/ggerganov/llama.cpp)
• OpenVINO (https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html)
• MLPerf client working group (https://mlcommons.org/2024/01/mlperfclientwg/)
• Article - 5 top small language models (https://datasciencedojo.com/blog/small-language-models-phi-3/)
• GPTQ article (https://towardsdatascience.com/4-bit-quantization-with-gptq-36b0f4f02c34)
• Article - Which quantization method is right for you (https://www.maartengrootendorst.com/blog/quantization/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-272.md) | 516 | 3 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Hello and welcome to another Fully Connected episode of the Practical AI podcast. In these Fully Connected episodes, Chris and I keep you connected with everything that's happening in the AI world and hopefully share some resources with you to help you level up your machine learning game. My name is Daniel Whitenack. I am CEO and founder at Prediction Guard, where we're safeguarding private AI models, and I'm joined as always by Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel, how are you? I'm doing well, yeah. Lots going on, lots of fun stuff in the AI world. But Chris, I have to say, one of our listeners pointed out something which I realized afterwards as well: on our last Fully Connected episode, the one about GPT-4o, or Omni, we played some clips of the voice assistant and talked a little bit about the model. The model, of course, had been released and we were using it, but the voice back-and-forth wasn't yet plugged into GPT-4o; I think they're just releasing that shortly. So I think there was a bit of general confusion in the community, because when you click that button, you just take over into the voice interface, and it feels like you're on GPT-4o. It probably could have been handled a bit better on our end, and maybe their end too, but OpenAI got us, I guess. We were fooled. But yeah, still really cool stuff; still,
the things that we talked about are still what GPT-4o is, but I just wanted to clarify that for our listeners if they had listened to that previous episode. While we're doing that, I realized I had made a mistake on that same issue after the show that I wanted to confess. I had watched all the videos about the video portion over and over again, and then when we were in the show, I'll claim a senior moment: I had watched the videos so much I was thinking, oh yeah, I did that. It wasn't until afterwards that I was like, no, that was the videos I was watching; what was I thinking? So anyway, I wanted to confess. We had a couple of blunders on that one, but we're having fun. There you go. In our eagerness and excitement to talk about GPT-4o and record very quickly afterwards, we got got; we both got got there. All good, we'll move on. Well, for listeners who don't know, if you haven't been following us long, the show is fairly spontaneous, and we dive into stuff pretty quickly, and occasionally we fumble a little bit with that. Yeah. So thanks for journeying with us through this wild and crazy world of AI. And now, I think something that's been on my mind quite a bit, Chris, with some recent announcements, but also things that have been developing over the past months, is this area of local, offline AI and AI PCs. People are using this terminology, AI PCs. And the current marketing hype. Yeah, the current marketing hype. So I thought maybe we could dig into a bit of that today and talk through what exactly that means: how people are using AI models locally, what an AI PC is, what the relevant kinds of models and optimizations are that can run locally. All of that seems to be a little bit hard to parse out, maybe if you're new
to, the space and you know what is a ggf, versus a ol Lama or you know all these, these things you know coming out so yeah, I thought that would be good to dig into, and I know that you have been interested, in kind of AI at the edge for some, period of Time how do you see whether, it's a staff laptop or maybe a heavy, compute node in a manufacturing plant or, something where you'd want to run AI, quote at the edge or locally let's say, generally Al or offline what are the, reasons that people would would sort of, want to go that direction well I think, you know this is kind of a thread we've, talked about off and on on different, episodes uh over time and you know AI, models are hosted in in software and and, they're going to always be wrapped in, that and as we you know the software, expands from the cloud all the way out, into every device that we're already, using and so it's only natural that as, AI becomes more accessible and cheaper, to deploy that it's going to you're, going to start having models you know, that are kind of ramping up existing, software out there and there's of course, we're going through all the hype that's, associated with that but the way I see, it is is simply just a a natural, evolution of where software development, would go and to your point a moment ago, about the hardware side we don't talk, about that a lot we're very software, focused in general but the hardware side, is really going through Revolution I, know that you I know that in your, business you have partnership with Intel, and are are seeing that in that capacity, and I certainly see that in my day job, and so there are so many more Hardware, capabilities coming out to support these, functions many of which will function or, targeting low power disconnected, environments and so this is a timely, topic and and if you look back for, preent before that we've always seen, over the the years in software and Cloud, development kind of a shifting back and, forth between local 
capability and the cloud: suddenly you get a new generation of hardware and things will go a little bit more local, and you'll have your own equipment, and then things will move back into the cloud, and that natural give-and-take is part of the flow. I think right now we've been so cloud focused over the last few years, because that was really the only available option, that now we're seeing a lot of new capability rolling out across hardware, software, and models that is going to enable edge functionality to really explode over the next few years. Yeah, I was at a conference last week and I was asked which direction things would be going: local AI models or hosted in the cloud. I think the answer is definitely both, in the same way that, if you just think about databases as a technology, there's a place for embedded local databases that operate where an application operates, there's a place for databases that run at the edge but on a heavier compute node serving some environment, and there's a use case for databases in the cloud, sometimes even coexisting for various reasons. In this case we're talking about AI models. So: I have a bunch of files on my laptop, and I may not want those files to leave my laptop. It might be privacy reasons that make me want to search those files or ask questions of those files with an AI model, so a privacy or security sort of thing. Or in a healthcare environment, systems may have to be air-gapped or offline, or a public-utilities scenario where you can't be connected to the public internet. But then it might also be because of latency or performance, inconsistent networks, or flaky networks where you have to operate online-offline. There's a whole variety of reasons to do this. But yeah, there are also a lot of ways that, as you said, this is rapidly developing, and people are finding
all of these various ways of running models at the edge. We can highlight a few: if you're just getting into AI models now, maybe you've used OpenAI's endpoint or an LLM API; if you wanted to run a large language model or an AI model on your laptop, there's a variety of easy ways to do that. I know a lot of people are using something like LM Studio, which is just an application that you can run to test out different models. There's a project called Ollama, which I think is really nice and really easy to use. You just spin it up, either as a Python library or as a kind of server running on your local machine, and interact with Ollama as you would an LLM API. And then there are things like llama.cpp and a bunch of others. These I would categorize as local model applications or systems, where there's either a UI or a server or a Python client geared specifically towards running these models locally. And then there's a whole set of technologies that are Python libraries, or optimization or compilation libraries, that might take a model that's bigger or not suited to run in a local or lower-power environment and make it run locally. So if you're using the Transformers library from Hugging Face, you might use something like bitsandbytes as a library to quantize models and shrink them down. There are optimization libraries like Optimum, and MLC, and OpenVINO. These have all existed for some period of time; actually, I think in the past we've had the Apache TVM project on the show, and we talked about OctoML. So this is not a new concept, because we've been optimizing models for various hardware for some time, but these optimization or compilation libraries are also usually hardware specific, so you optimize for specific hardware, whereas others of these local model systems are maybe
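Interacting with Ollama "as you would an LLM API" looks roughly like this against its documented local `/api/generate` endpoint. A minimal sketch: the model name is just an example, and the actual call only works with a local `ollama serve` running and the model pulled (e.g. `ollama pull llama3`).

```python
import json
from urllib import request

# Ollama's default local endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model, prompt):
    # stream=False asks Ollama for one JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model, prompt):
    """Send a prompt to a locally running Ollama server and return the text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Note that nothing here leaves the machine, which is the whole privacy argument above: the "API call" terminates at localhost.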
more general purpose, less optimized for specific hardware. I don't know if you've had a chance to try out any of these systems, Chris, running some models on your laptop? I have, a little bit. I've used Ollama; I think that's my go-to. And you have, like, an M1 or M2 or M3, whatever M there is now, MacBook? Yeah, I have an M2. I have a couple of different laptops, one that's old and one that's, well, I guess an M2 is old by today's standards, so I may have to upgrade that one pretty soon. But yeah, I've used Ollama primarily. I probably haven't used as many of the tools as you have, given the business that you're in. I think one of the things I'm really interested in, as people are doing this now, is that we're really focusing on the infrastructure, the plumbing, of making all this work locally and doing the integrations with the cloud. But there's another thing to throw into the mix of the conversation: whether in the cloud or particularly locally, once you have multiple models out, and you have the infrastructure to run them, you start to look at the API and the middleware to enable inferencing across APIs without direct human intervention, and have it make sense, where you have different responsibilities, much like we have had for a number of years in the software realm. As we're talking about that, I wanted to throw that in as another topic that I think is going to be really important and hasn't had nearly as much attention as the fundamental infrastructure. Yeah, I think that gets to probably a couple of things. One is that one current major difference between hosted cloud models and offerings versus local models is that you're likely not going to run a mix of 10 different LLMs on your local laptop, all at the same time, all loaded
into memory, um that would be a pretty a pretty, significant at least right now a pretty, significant ask to kind of switch, between models in that way but there are, certainly cases where you know I think, the market is showing that people want, to not be restricted to one model family, and they're spreading out their usage, against multiple models so that, definitely needs to happen in the cloud, and from you know model providers but, that doesn't mean you couldn't throw in, the mix a selection of local models as, well for specific purposes and I think, that gets to the other thing that you're, talking about which is kind of data, integration automation pipelining all of, that sort of thing I saw a comment on, LinkedIn I think it was even this, morning that those that are really, winning in the AI space are those that, have taken what they've learned kind of, from automation data pipelining data, integration in previous cycles of data, science and integrated those with, generative models at various stages, because a lot of a lot of the value and, I've seen this too A lot of the value, that you get out of these models is not, the models themselves but the system, that you build around them and you know, that involves a lot of data integration, and Automation and maybe even routing, between different models in the case, that we're talking about here maybe even, routing between local models and Cloud, models and so yeah I think that that, that's really stressed especially as you, talk about running these models kind of, everywhere quote unquote across cloud, and local and on Prem and data center en, IRS but yeah it's it's interesting I, don't I don't myself have an aipc yet, maybe at some point I will but uh but, yeah I'm I'm excited to see where all, this goes uh indeed I think as we push, forward I think one of the things that, I'd like to see especially for local, whereas you know we have you know we, mentioned a few minutes ago a llama and, some of the other 
tools for infrastructure — I was actually, as we were talking here, looking through Yann LeCun's various posts, because he was proposing recently, in the week or two prior to this conversation, a way of structuring different model interactions from a responsibility standpoint that they're doing at Meta. And of course my employer — we have our own version of how we structure different things. But I'm really interested in seeing whether the community comes around to an open, best-practice framework for how to do that: to be able to do it on a laptop — an M2, M3, M4 — to be able to have those interactions locally, and a framework that would span between that local and the cloud interaction, so that you have something standard. Right now everyone seems to be doing that on their own, and there are a lot of similarities between the approaches, but there doesn't seem to be a standard way that it all comes together.

[Music]

If you're anything like me, you have a certain tendency to put things off until the very last minute: seeing the dentist, going to the doctor, home improvements, that never-ending chore list of yours. And while most of the time it works out just fine, the one thing in life that you really cannot afford to wait on is setting up term coverage life insurance. You've probably seen life insurance commercials on TV and thought, "Yeah, I'll look into that later." No — later doesn't come. This really isn't something you can wait on. Choose life insurance through Ladder today. Here's what we love about Ladder, and why we allow them as a sponsor: they are 100% digital — no doctors, no needles, no paperwork — when you apply for $3 million in coverage or less. Just answer a few questions about your health in an application. Ladder's customers rate them 4.8 out of 5 stars on Trustpilot, and they made Forbes' best life insurance 2021 list. You just need a few minutes and a phone or laptop to apply. Ladder's smart algorithm works in real time, so you'll find out if you're instantly approved. No hidden fees; you can cancel any time and get a full refund if you change your mind in the first 30 days. Ladder policies are issued by insurers with long, proven histories of paying claims; they're rated A and A+ by AM Best. Finally, since life insurance costs more as you age, right now is the time to cross it off your list. So go to ladderlife.com/practicalai today to see if you're instantly approved. Again, that's ladderlife.com/practicalai — L-A-D-D-E-R life dot com slash practical AI.

Well, Chris, there's an increasing number of options if you were to explore this space and that interaction of local models with your systems. There's an increasing number of choices of quote "AI PCs," which I think is a hyped term now. One of them — you mentioned Intel; Intel is coming out with AI PCs. I think Lenovo is shipping some with Intel's Core Ultra processor — the code name is Meteor Lake, I think — probably as a response to what's maybe more familiar: I already mentioned the M1, M2, M3, etc. line from Apple. But Nvidia is also working hard on the GeForce RTX AI PCs. There have been gaming PCs with GPUs in them for some time, but I think most of these quote "AI PCs" are more of an integrated type of processor or system, where it's not just an add-on to the laptop; in the case of the Core Ultra or the M2, there's actual processing in the architecture that is optimized for executing models. So they're shipping AI-ready, and that brings up some interesting questions in my mind: well, how do all of these AI PCs compare? If I'm about to get myself an AI PC, where should I go? And in thinking about that and looking at some of these benchmarks, I was really encouraged to see
that MLCommons — the organization behind MLPerf, a set of benchmarks and working groups that have been working for some time to benchmark various systems for performance on AI and machine learning workloads — has just announced this spring an MLPerf Client working group, which is geared towards essentially an application or workload that you could run across these various AI PCs, or maybe AI-enabled edge machines and that sort of thing, to do LLM-based workloads and do some benchmarking for both training and inferencing on these quote "clients" — that's how they're referring to them generally. They say the new MLCommons effort will build ML benchmarks for desktops, laptops, and workstations, for Microsoft Windows and other operating systems. So, quite interesting. As far as I can tell — and again, sometimes we get things wrong, so someone in our audience can correct us; maybe David from MLCommons, who's been on the show before, can correct us if something's already been published — I couldn't find the actual set of benchmarks; I think it's a work in progress. But I'm really excited to see this when it comes out. Yeah, I think it'll be interesting as these laptops come out. It's hard to imagine that the entire industry doesn't have to go all-in on this regardless, thus making the distinction of an "AI laptop" a bit redundant — because there's a point, and maybe we've already arrived there, where the idea of purchasing a new laptop that is not an AI laptop is a ridiculous thing. It becomes a must-have feature going forward, and therefore all laptops kind of have to go that direction at some point. Yeah, well, I definitely think one downside of this whole thing is
there's — I mean, I know the prices will go down, but these things are really expensive right now, so there is going to be a sort of disparity for some period of time. If you're a new developer, maybe an indie developer, the purchase of that MacBook is already a pretty significant expense for you. And myself, I just use a refurbished ThinkPad from like four years ago — not an AI PC; it's a Core i5. Not a terrible laptop, but definitely not anything anyone would necessarily be jealous of. Now, I can run some models on this laptop, using Ollama and other systems, and I think that gets down to maybe another element of this. So there's going to be, on one side, these clients that get increasingly sophisticated and build more AI-enabling or accelerating functionality into their chipsets and hardware. But there's also going to be increased sophistication in optimizing models that can't run locally such that they can run locally. And this is where — I think this is also a point of common confusion that I've heard in workshops I've given at conferences and other places — people look at, let's say, Llama 3 or something like that: "I want to run Llama 3." Well, you go to Hugging Face, and the top-downloaded Llama 3 is the base model, and then you've got these fine-tunes for instruction or chat, and then you've got all of these other flavors of Llama 3 — GGUF, GGML, QAT, AWQ — all sorts of acronyms that are really difficult to understand. So maybe it would help to just break this down slightly. Usually when a model is released, they release a base model — or the pre-trained model, whatever it's called — and that's usually the shortest name: Meta Llama 3. That is maybe a good model that you might fine-tune off of, but not generally the best model to start with, because it's a base model; it's not fine-tuned for any sort of general instruction-following or chat. And they usually release, along with that, a set of fine-tuned models for instruction or chat — so you've got Llama 3 Instruct, which is usually the better model. And then you've got this whole world of community members out there who build pipelines — we had Nous Research on; they have pipelines built so that when a model is released, they can create all these different flavors of it, which include flavors optimized for running in certain ways. These would be those other acronyms, which we can dig into a little bit, but they are, most of the time, either additional fine-tunes or quantized versions or otherwise optimized versions of these models that are meant to be run in a diverse set of environments.

[Music]

What's up, friends. Do you remember when ChatGPT launched? I do. It felt like the LLM was this magical tool out of the box. However, the more you use it, the more you realize that's just not the case. The technology is brilliant — don't get me wrong — but on its own it's prone to issues like hallucination. But there's hope. There is still hope. Feed the LLM reliable, current data — ground it in the right data and context — and then, and only then, can it make the right connections and give the right answers. The team at Neo4j has been exploring how to get results by pairing LLMs with knowledge graphs and vector search. Check out their podcast episode about LLMs and knowledge graphs throughout 2023 at graphstuff.fm. They share tips on retrieval methods, prompt engineering, and so much more. Don't miss it — find a link in our show notes. Yes, check it out at graphstuff.
fm — episode 23.

[Music]

Well, Chris, I kind of started getting into the alphabet soup a little bit. I don't know if you're sometimes as confused as I am with all of these model names, but they're getting increasingly long. Yes — you know, one of the... finish your point there, and then I have a question for you afterwards. Yeah, sure. I was just going to highlight a few of these different quantization methods so that people could maybe have them in mind. So there's the flavor that is GGML or GGUF — you might see those letters; GGUF is "GPT-Generated Unified Format," that's what it stands for — and this is an optimization that takes a model that is larger and maybe requires a GPU to run, creates a C++ replica of the LLM, and allows you to run it in quantized versions — meaning that the parameters of the model are taken from 32- or 16-bit floating-point numbers down to two or four or eight bits, that sort of thing, which makes the model smaller and more efficient. These are mostly geared towards CPU or laptop kinds of environments. There's also GPTQ, which is a GPU-focused quantization method — it's still meant for GPUs — so these usually ship in formats similar to the original models, but they do some calibration-informed quantization to get the model smaller. There's QAT — quantization-aware training — which, as it might sound, involves some actual retraining of the model to inform the quantization. And there are others like AWQ, which is another quantization method. So all of these letters that you see, if you're wondering what they are: they're all referring to these different flavors of a model that might be generated for either running the model in an optimized way locally on a CPU, on a laptop, or still on a GPU but in a smaller format that's more efficient. What are your thoughts on the CPU derivatives — on performance and capability relative to their own base model and their GPU siblings? Yeah — I think there's the reality right now, and then maybe where it's headed. The reality right now is that with the CPU-based models, you can run some models that are even 7 billion parameters or so, in some quantized version, on a CPU. You're not going to get the same throughput as you would hosting the model on a GPU — in other words, the tokens per second, the speed at which you're generating, will be lower — but it's possible to run them. What you're not going to do — and this is what I also tell people — is, the same way you're not going to host your company's microservice on your laptop, you're not going to host some AI functionality for your company on your laptop, usually. That's still going to live in the cloud. But if you have some sort of private use case where something can't leave the laptop — or maybe you're deploying laptops in a disaster-relief scenario where there's not going to be much connectivity — it's still enough throughput to get responses. So if you had chat over your disaster-relief docs on that laptop, it's still enough to get an answer in that scenario, and people can push it pretty far. I think the difference is, again, not one or the other; it's the use case, I guess. Yeah, and just to add to that use case slightly: I think there are also disconnected or partially connected mobile platforms, where you can't necessarily rely on cloud access and you have devices out there as well — I would lump that in. Pulling around full circle for a moment, back to the laptops — these AI laptops coming out — and thinking back to the natural kind of
segregation of responsibilities that we have in software development, aside from just the AI world: would you imagine that a reasonable level of support in those would be to be able to actually do training on, like, 7-billion-parameter-size models — the smaller range, of which there are so many more — where maybe, in the not-so-distant future, I'm training a model in that range and my own laptop can handle it, not just for inference but for training, while for very large models it's still cloud-based? Do you think that's a reasonable level that we might see AI laptops able to support? I mean, I think you can see some people trying some training — or rather, fine-tuning — sorts of things on diverse hardware. But basically what I'm seeing right now is still primarily inference on local machines, and utilization of things like in-context learning and RAG-type workflows to integrate data, rather than fine-tuning locally. So I think that's the reality of where it is. I think there's a possibility, in the future, of some types of training scenarios that will happen maybe not on client devices but spread across client devices. It's been a while since we've talked about federated learning, but it will be interesting to see if that rears its head in this world. I know there have been efforts to train LLM adapters in a federated way — there are some papers about this — sharing parameter-efficient updates to weights across different client devices. That seems really intriguing to me. But I don't know — we'll see where it goes; maybe I'll be proved wrong. I'm increasingly a proponent of the view that most people don't need to fine-tune: a lot of it can be done with RAG and chaining and agents and selecting the right models. So I think that, especially as models get better, that will be the case moving forward. But yeah, I am looking forward to getting an AI PC and putting it through its paces eventually. Yeah — right now my M2 is through my employer, so for my next one I'm hoping for an M4, though I'm thinking about possibly a non-Apple one as well. The M4s are supposed to be able to do quite a bit more than the earlier generations, so I'm looking forward to being able to pursue this. Yeah, definitely. Well, for our audience, we'll include a few links to some blog posts about these quantization methods and some of the systems like Ollama and LM Studio and others that we talked about. I'd encourage everyone to get hands-on and try your own hand at it — you'll get a sense of the performance of these models locally — so definitely give it a try. All right, well, thanks a lot, Daniel. That was great information today; it was a good show, and thanks for bringing it. Yep, we'll talk to you soon.

[Music]

All right, that is Practical AI for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways to do so, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community — sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat-freak in residence, Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | New trend: RAG as a Service?! | The Practical AI guys discuss a new trend in the AI world...
#podcast #ai #machinelearning #datascience #artificialintelligence #deeplearning #ml #mlops #nlp #dataengineering | 231 | 4 | 0 | It kind of seems like people are figuring out that generative models are great at being assistants and automators, but not necessarily predictors — or other kinds of functions, like analytics types of things, that sort of stuff. And I've noticed there's been a new term coined, at least new to me: "RAG as a Service." Have you guys run across that? RAG as a Service — is that acronym just "RaaS"? Sort of, yeah. Don't even try — you're just going to strain the vocal cords if you do that, man. Yeah, RAG as a Service: "We'll RAG you — you bring your thing, we're going to RAG you, we're going to RAG you all over the..." Yeah. Well, I think what you're saying, Daniel, really speaks to something I've been seeing too, which is the maturity in the last — whatever — six months; it's become very clear that there's
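The "RAG" being productized here — retrieval-augmented generation — boils down to: score your documents against a query, retrieve the most similar ones, and place them in the model's prompt. Here is a minimal sketch of the retrieval half; a real RAG service would use an embedding model and a vector database, so the word-overlap score and the sample documents below are stand-ins for illustration only:

```python
# Minimal sketch of the retrieval step in RAG: score documents against
# a query and return the best matches to place in an LLM prompt.
# A production service would use embeddings + a vector store; a simple
# word-overlap (Jaccard) score stands in for similarity here.

def score(query, doc):
    """Jaccard similarity between the word sets of query and document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d)

def retrieve(query, docs, k=2):
    """Return the top-k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

if __name__ == "__main__":
    docs = [
        "invoices are processed on the first business day of the month",
        "the support team answers tickets within 24 hours",
        "expense reports require a manager approval before processing",
    ]
    context = retrieve("when are invoices processed", docs, k=1)
    # The retrieved context would be prepended to the user's question
    # in the prompt sent to a generative model.
    print(context[0])
```

Swapping the scoring function for real embeddings is the main thing a "RAG as a Service" vendor packages up, along with chunking, indexing, and prompt assembly.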
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI in the U.S. Congress | At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU (https://www.gmu.edu) to pursue a Master’s degree in C.S. with a concentration in Machine Learning.
Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC’s AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act.
We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.
Leave us a comment (https://changelog.com/practicalai/271/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Don Beyer – Twitter (https://twitter.com/RepDonBeyer) , LinkedIn (https://www.linkedin.com/in/don-beyer-6b444b4)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• U.S. Representative Don Beyer (https://beyer.house.gov)
• Congressman Don Beyer, Mason student and lifelong learner (https://www.gmu.edu/news/2023-04/congressman-don-beyer-mason-student-and-lifelong-learner)
• Beyer Statement On President Biden’s AI Executive Order (https://beyer.house.gov/news/documentsingle.aspx?DocumentID=6017)
• Beyer Appointed To Bipartisan Task Force On Artificial Intelligence (https://beyer.house.gov/news/documentsingle.aspx?DocumentID=6082)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-271.md) | 290 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com — 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

[Music]

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I'm the founder and CEO at Prediction Guard, where we're safeguarding private AI models, and I'm joined as always by my co-host, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel — excited about today's guest. I'm thankful this particular weekend that the government has given us a holiday; we're about to have a long weekend here in the US, and to kick off the long weekend we've got with us the current congressman from Northern Virginia, Don Beyer. Welcome, Don. Daniel and Chris, thanks so much for inviting me to be on the show. Yeah, well, we were super excited to have you join us. Chris and I were talking before the show — it's just encouraging to see how you're engaging with this subject of AI, and encouraging that there are people like yourself in our government, and I'm sure in governments around the world as well, thinking very deeply about this topic. Could you give us a little bit of your background, in terms of how you started getting more and more interested in science and AI policy? Well, not to go too deeply, but when I was in grade school and high school I just loved math and math puzzles. When I was a kid, Scientific American had a guy named Martin Gardner with a big puzzle at the back of the magazine every month, which I loved doing. And when I was in high school I thought I was going to be a physicist — my dream was to work at the Niels Bohr Institute as a theoretical physicist in Copenhagen — and then I figured out in college that I wasn't smart enough to do that, so I became a car dealer instead, and spent most of my professional career — although my mom didn't think it was a profession — using math I knew in fourth grade; once I had division down, I was okay for the business world. But I was always interested in it, and when I got to Congress I was privileged enough to be appointed to the Science Committee. You know, the Science Committee is not everyone's first choice — nobody gives you money because you're on the Science Committee — but I found it fascinating, not only because of climate change, but because we got to do the oversight of NASA. We were the first ones to see the pictures of the black hole; when they figured out gravitational waves, we had those scientists come talk to us. It was just really fun and interesting, so I got to do a lot more science than I'd ever done before. Some years ago — actually, like 40 years ago — I heard a graduation speech by one of those 60 Minutes guys, Eric Sevareid — this was the late 1970s — and he talked about how much information we were generating every year and our inability to see the patterns in that information; it was just too much. It has always been in the back of my mind that we are being overwhelmed by information. So maybe eight or ten years ago, when AI was really starting to catch gear, there was a Coursera course — I'd taken a couple of Coursera courses; my first was on gamification, which I thought was pretty cool — so I took a course on AI, and the first three weeks I found totally fascinating: the idea that you could do signal amplification, that you could use mathematical formulas and linear algebra to progressively get closer to actual connections and relationships. But then I got to the first exam, and literally — I say I got a zero on it; it was worse than that: I didn't turn it in, because I couldn't answer a single question. I didn't know Python, I didn't know Java, I had never taken linear algebra, and I just put it aside. Then, two and a half years ago, our local university, George Mason University — you know, one of the things you do in politics is tour all the new sites, any place you can cut a ribbon — so we were cutting the ribbon on their new innovation center in Arlington, and I was really intrigued, and actually jealous. As a throwaway, I said, "Could I ever take courses here?" They looked at me sort of funny and didn't really answer, but I got the lady's card, and I wrote her later that night and said I'd really like to sign up. I was late for the application deadline, but they waived it for me, and I ended up taking pre-calculus that spring — and I'd taken all that 56 years ago, so it was like taking it all over again. In the meantime, I just finished my sixth course, on object-oriented programming, and I'm coding now for the first time in both Python and Java, and getting ready for the other eleven courses, which — as long as I live long enough — will give me a master's in machine learning. That's fantastic. Ironically, before I move on to a question for you: I took that same Coursera certificate course, and it was hard. It was a hard course — you did much better than I, I'm sure. It was tough, a tough course. And I also had to bone up on some skills that I had long since lost, or not touched. It's a fantastic story. Just to back up for one second: I first saw this story you're telling us when Ian Bremmer interviewed you — I was following his social media posts and was very inspired by it, especially because it's so common for the public to just assume Congress
doesn't understand science, doesn't understand technology and AI — and there you were, doing it. And especially leaping in unabashed at the age you're at, which in my view is a plus here, because we have a lot of folks — I'm in my 50s — who are my age and older who follow the show, and I really wanted to bring this out. I'm kind of curious: were you nervous about diving into this? There's this perception, which even I experience at my age, that AI/ML is a young man's or young woman's game, and we're all a bit on the older side for it. Did you have any fear of diving into this topic? Where was your head in terms of making that step? Chris, I should have been more afraid than I was. You know, I originally graduated in 1972 with a degree in economics, worked for a year or so, and then decided I didn't know what I wanted to do — I was wandering around — so I thought, well, I'll go to med school. But I didn't have any of the prerequisites, so I went back and did the whole premed thing in about 12 months, and at the time it was just sailing: I was competing against 18-, 19-, 20-year-olds, and I was 23, had worked for a year, and was married, and I crushed it — I just felt like I dominated. So I thought maybe I would again — that all these kids would be too busy trying to figure out who their next romantic liaison might be and how much they could drink tonight. And boy, was I wrong. The kids I've been in school with are so serious, and they have these great technical backgrounds from high school, and they were the ones crushing me. So it was really a great exercise in humility for me. And I honestly say: I did well in college, and I've taken courses here and there over the years, but I've never worked harder in a course than I did on this last Java course — I typed 193 pages of notes. Wow. It's funny you say that; I'm trying to teach my daughter right now to start taking notes in class, and it's "Oh, Dad" and stuff like that, but I'm like: the good students — that's what they do. Well, I still have two packed notebooks from the Python course, and I look around the classroom at the other 80 kids: I'm the only one with a notebook. A lot have laptops open, some have their phones, and some are asleep — the whole range. And as you've been diving into this subject at a more hands-on level, I'm wondering — on one side you're part of, and participating in, conversations about AI on the government and policymaker side; and on the other side you're in the weeds, so to speak, literally programming in Python or whatever it is. What has maybe surprised you about how AI or machine learning is developed at a hands-on, practitioner level versus the perception at the policymaker level? Is there anything that has stood out to you? Before answering that, though — you touched on something that has always fascinated me. I don't know if you've read Kim Stanley Robinson's Mars trilogy; it's a wonderful series — it got me through a gubernatorial campaign, 15 pages a night. One of the things I really took away from it was the idea of both leadership from the balcony and leadership from the field: one of the lead characters would spend a year or two managing the planet Mars at a general-manager, presidential level, and then he'd go out in the field for two years and live in a tent by himself, studying Mars biology. And when I was in the car business — just to be mundane once again — I'd always spend two weeks every summer working as a technician in the shop, as a mechanic. By the end of the third day I was cursing as badly as they were, and I'd hate management by then — by the third day. But I'd come away from those hot summer days with cut hands and a real appreciation for what it was like to work on cars all day long — replacing water pumps and trying to figure out where that obnoxious knocking noise is. And now I'm finding the exact same parallel with artificial intelligence, serving on the AI Task Force and the AI Caucus and the various committees we have. It is really fun to see — and I'm still very much a rookie, a baby, in this field, but I can imagine what, two or three years on, a senior software engineer is doing: the people building these wonderful models right now, who understand how the neural networks come together. On the other hand: trying to figure out what we do about deepfakes, what we do about hallucinations, what we do to protect our electoral systems — looking at the policy side as well. And I'm blessed to have people like — I have a wonderful tech fellow, which means some other foundation is paying for him for a year, who's an MIT computer scientist who worked in AI. He actually knows the math, the computer science, and the hardware of it, to help me on the policy side. That's great. I'm looking forward to diving into the policy side, but before we go there: how have other members of Congress received your dive into this topic, with such intensity? Do they tend to come to you? Do they look to you? Are you seen that way? And what hopes might you have of members of Congress — whether in the House or the Senate — really digging into this topic, with some willingness to recognize that it is a huge topic for our future? Do you have any hope for that, or do you think it's going to stay this way? Now, Chris, I'm embarrassed to say this, but I think the primary reaction has been amusement — especially when I was doing single-variable and multivariable calculus, I'd bring the homework to the floor, because sometimes we'd have long vote sessions, especially during COVID, when we had to vote by proxy. People would come over and say, "What are you doing?" Then they'd look at it, get a little anxious, and walk away from me. So no, I don't expect many other people to go back and take undergraduate and graduate courses in it, but I do think many, many members of Congress are trying to read everything they can about it. There's an abundance of AI books out — you see them all over the place — and people are trying to read every article they can. We have had a myriad of visits on the Hill from people who are experts in the field. Just this last week we had a number of people on the safety side — people from NIST, for example. That's very encouraging to hear. And we've had people from industry and academia; Stuart Russell was there a week or two ago, from Berkeley — he wrote, apparently, the classic textbook on it. Marc Andreessen came a couple of weeks ago to give us his techno-optimist manifesto. So we've been listening to as many people as we can, in order to try to develop good policies. Well, Don, I think some of us who are observing our US government — with things coming out like the executive order, and you mentioned NIST; I know I've read a bit of the guidelines and best practices they're digging into — could you help those of us who aren't well-versed in what policymakers are involved with around AI? How would you categorize it? What are the main focuses and activities of policymakers, congressmen like yourself, and how should we expect some of that to trickle down to us as practitioners — to hit our desks, so to speak? I think there's a tidal wave coming at us right
now congressionally, because it seems like every every group, every committee every caucus wants to, have a little AI specialty so for, example I'm on the AI caucus which is, pretty large right now bipartisan almost, all of it which is really encouraging uh, and so we're doing the education piece, for other members and their staffs, especially their staffs and for those, maybe uh in an international context, because we do have international, listeners what uh what is a caucus in, terms of uh the AI caucus that you just, mentioned well the the first AI caucus, is open to all members of the House and, Senate and literally in earlier years, we'd have 10 or 12 members and six would, show up for lunch now 150 show up wow a, lot of them staff but really interested, in the speakers and what they can learn, and then the individual we have smaller, groups like half of the Democrats belong, to the new Democrat Caucus which sort of, defines itself as being pro-innovation, pro business prot trade and so they have, their own AI caucus the progressives I, don't think have one yet but they're, really interested in it and then the, probably the most important one is the, speaker Mike Johnson and the Democratic, leader hakeim jeffre appointed a 24, person task force for this year to try, to look at the 200 different pieces of, legislation that have been introduced on, AI and not focus it down to the handful, that we should pass this year that would, be building blocks for the years to come, you can probably you elegantly put it in, four or five buckets but clearly the, Deep fake problem not just deep fake but, the whole copyright plus problem from, music and and illustration and, photographer and text and and voice, obviously Scarlet Johansson you that, piece then there's the whole piece about, generative Ai and what can we expect and, what can we rely on from the large, language models that are springing up, the whole safety concern which one of, the I think most encouraging 
things by, the way all this is on the background of, social media and the fact that we've, done nothing in the 25 plus years of, social media except make it impossible, to sue them section 230 and no one's, been able to come up with an agreement, on how to modify 230 to allow people to, be held accountable without crashing the, whole internet making it um just endless, lawsuits so we're trying to get ahead of, that first of all humbly Congress will, never be ahead of the American people, but we want to get ahead of where, Congress typically is by looking at, significant legislation it's really good, to hear some of this and I don't think, you know I'm sure that that's accessible, to people if they know where to look but, you know I think Daniel and I are, learning a lot from you here today in, terms of how this works compared to, previous technologies that we've seen, over the decades this is a bit of a, different Beast AI uh it's going much, faster it is likely to have a much more, profound impact on you know work, certainly in the industry I'm in Warfare, across the board even you know what it, means to be a human as you go forward in, time to some degree with these changes, happening you know we're getting big, news in the AI space every week you know, a couple of weeks ago as we Rec less, than two weeks ago as we record this, open AI announced uh chat GPT for Omni, and that alone relative to its previous, version of the model changed how people, are using AI in day-to-day life you have, the entire open source arena with, hugging face there are over a million, models there this thing is happening so, fast how do you envision Congress trying, to get an appropriate you know handle on, that you know whatever that means to you, on such a fast moving expansive topic I, I I struggle as a citizen to Envision, how that even happens and I'm I'm really, I've been waiting to ask you that, question for uh ever since we agreed to, do this Chris you're not suggesting that, 
Congress acts slowly are you I would, never do that to a, congressman that is a really hard, question because as I've discovered in, my nine and a half years there it moves, glacially yes you know that are founding, mothers and fathers by building in a, senate and a house you the competing, Chambers and you add a filibuster in and, a oneperson hold and a senate that I, respect as part of the founding, compromise but a very small fraction of, Americans elect a big fraction of the, Senators like 30% of Americans elect 70%, of the Senators and it's amazing we get, anything done and then you almost, sometimes need what they call a trifecta, where that one party contr R the house, the Senate and the presidency to do any, major legislation like the Affordable, Care Act or the inflation reduction act, or Donald Trump's tax cut and jobs act, those happen under trifectas so all we, can do is keep this as bipartisan as, possible and then probably deal with the, whatever emerges as the largest, downsides we're not really having uh any, big downsides yet we we talk about the, threats I mean there are very obvious, things you the whole seesan issue you, know making an undressed Taylor Swift or, sex videos for underage teenage girls, that never participated in them that, those are very real threats and we, struggle to know who to hold accountable, for them but in general the hope is that, by talking about it looking at these 200, plus bills looking at the plethora of, bills that have been introduced at State, levels I understand more than 50 just in, California that we figure out the, handful that actually make a difference, and actually protect people and by the, way I'm very impressed by the, president's executive order Biden's, turns out it's the largest executive, order in American history they've hit, all of their benchmarks so far their, timelines and maybe the most important, thing is they set up the safety, Institute at nist now led by Lizabeth, Kelly that's Staffing up 
so finally at, the federal government level there is a, group that is specifically charged with, dealing with the safety and trust issues, so we've seen that that executive order, I think we had a previous where we talk, through certain pieces of that really, encouraging to see some leadership there, how would you view kind of the more, International side of this in terms of, how the the us and our policy makers are, proceeding forward versus uh policy, makers across the world um how do you, view that from your perspective and what, conversations are going on as related to, kind of our positioning within that and, the role that AI plays plays globally, Daniel I think that's it's a really, important issue and I think there's lots, and lots of conversations I think I've, had no fewer than eight meetings with, actual European parliamentarians who, have been putting together the their e, EU AI act and just in the last three, weeks you know one big dinner and a long, session with during the day with those, same people and their their lead, technical staff explaining how the eua, they're Act is working and how it, differs from ours you know the the, shorthand that's an, oversimplification is they describe, themselves as a regulatory superpower, and we are all committed to Innovation, you so we're not licensing algorithms or, giving permission to do certain things, but we also know that we have to be, there in the UK when they talk about, their blessley Doctrine we're going to, Japan for their stuff ultimately in the, Middle Run we need to have something, like a Geneva Convention on AI this is, especially true to the extent that we, can engage China in it we know China is, concerned obviously they're investing, hugely in it but they also concerned, about the safety parts of it and we all, have to come together right yeah that's, exactly what I was about to ask next is, and we often bring it up on the podcast, is kind of the safety you know you have, all these things 
pulling against each, other with tension you have you know the, Innovation just driving forward, constantly as we've talked about the, understandable safety concerns that we, all have which we have increasingly over, the last few years been talking about, and which our audience is also you know, you know demanding there there's a lot, of concern out there is uh with looking, at the international balance and, tensions you know we have we have Russia, doing what Russia is doing with Ukraine, we have China and Taiwan and and all of, these things you know Ukraine is the, first war that is becoming increasingly, Aid driven in ter terms of the, Technologies being used China is an AI, superpower as you know along with the us, and we're all talking about this need, for you know us and the Europeans to to, work together but there's always the the, concern about bringing everybody on, board because you're I love the idea of, the Geneva Convention that you just, mentioned if all the major powers can, get on board how do you envision the all, these different tensions pulling against, each other possibly working out and, getting the motivations of all of the, Western countries led by the US with, Russia and and its sphere of influence, and China and its sphere of influence uh, coming together do you have any either, aspiration or expectation on on how you, see that coming together over the next, few years Chris is probably more, aspiration than expectation and and with, you coming from the defense industrial, base you know how important our war, Fighters think this is indeed I do the, chairs of our intelligence committees, the chairs of our armed services, committees they very much do not want, want us to be behind China there's a lot, of debate about how much human agency, should there be in kinetic the use of, kinetic force uh can you have machines, deciding who to Target rather than uh an, Air Force pilot even if he's sitting in, a room with levers uh in Colorado, Springs um 
at least that's a human being, saying let's take out that car or take, out that building rather than letting a, drone decide who to attack in the like, and then there's space and the, weaponization of space and the role that, AI plays in that my hope is that we will, have some renewed Arms Control in the, days ahead it's been pretty sad the last, couple of years as Russia has, progressively withdrawn from Arms, Control agreements and China has been, unwilling even to sit down and talk to, us but sooner or later if we're going to, make the world a safer place we need to, talk about all the nuclear weapons on, the planet and at the same time talk, about artificial intelligence too and, one of the very first pieces of, legislation introduced was Ted, Ken buck and and I on prohibiting, letting an artificial intelligence, algorithm make the decision to to launch, a nuclear attack on another country you, know that that has to be the president, of the United States and the chairman of, the Joint Chiefs of Staff and the, Secretary of State you know human beings, making decision of that magnitude and, yes they can use all the data they want, but a machine can't decide I totally get, that 100% but I'm I'm often kind of, shocked at how many people don't, understand that uh you know the idea of, nuclear weapons uh a few years ago I was, with the CTO of uh Lo heed Martin at the, time who's since left and I was doing an, event in London and I was on the stage, and I actually had an audience member, say could you talk to us about with an, assumption in the question could you, talk to us about the fact that the US, has uh AI controlling its nuclear, weapons and what you think the, implications are and I I laughed it off, and and said that's not the case, obviously but that was one of those, first moments where I I realized how, much misinformation about these topics, uh was out there and obviously since, then that's just gotten more and more, you know there's in so many 
variations, deep fakes constant misinformation that, AI enables in nefarious uh intent could, you talk a little bit about kind of the, safety of how to approach that from, regulation and you you're you're, obviously deeply into the topic you kind, of give us a little bit of guidance, things I can tell my family who are not, into AI cuz when we sit around Christmas, time and holidays in the extended family, they don't know either and I'm always, shocked that my own family doesn't know, and so would love to hear your your, thoughts on just kind of how to approach, some of these incredible misconceptions, that people have number one is safety, issue is how about how about Job, elimination we know that it's going to, replace many many jobs one of the, exciting things is what do they call, ambient clinical documentation you know, doctors and nurses say 25 30 50% of, their time is filling out data on the, clinical visit they've just finished or, in the middle of it now there's software, Hardware that listens to the, conversation between uh Chris and his, doctor and and writes it all down and, but but before the time the doctor or, the patient leaves the doctor can read, it and check it and yep that's what we, talked about saving immense amount of, time but then so job elimination and, what do we do we know that that's, happened in every Revolution, agricultural industrial information but, we also know that will probably happen, much more quickly now than it ever has, before so much less time to react and, for people to adapt to that change you, know second level is all the, misinformation whether coming out of a, large language model unintentional or or, the intentional stuff how do we protect, against that then there's the the whole, notion and some people take this, seriously other people will say n that, with desktop ability to generate DNA to, synthesize DNA and the ability to look, up what's a smallpox vaccine what's the, DNA of that or or let go from covid-19, 
let's make covid 27 um the whole notion, of bioweapons or others things that can, be used based on the information that, comes from large language models and Ai, and then all the way to the existential, threat it's interesting that so many of, the computer scientists I talk to just, really say no no no we're nowhere near, artificial general intell intelligence, we and even if we were at AGI that's not, going to be conscious it's not going to, have will it's not going to plan its own, things I tend to be on the humbler side, which is we don't know where, Consciousness comes from and we don't, know when and where it's an emergent, property from what if we're building, machines that can think some things you, know thousands or millions of times, faster than weekend why are we so sure, that there won't be an emergent, Consciousness coming from this and in my, conversations with Elizabeth Kelly at, the nist safety Institute my plea is, that there's at least some subset of, that group that always is keeping the, existential threat at the top of Mind, well Don I'm always thinking of my, practical a to-day day-to-day work in, using models building models applying, things in a sort of Enterprise context, from your perspective now that you have, both of you from kind of the Hands-On, granular level but also from the kind of, global and policy maker perspective if, you could say something to Everyday, practitioners in the AI space who are, building these systems what should we, have in mind or what should we be, thinking about kind of moving to the, future you know you said policy and uh, regulations and all of those things will, catch up but there's people building, these systems now so from from your, perspective um what are some things to, keep in mind from a practitioner level, Daniel I love your question I get a, blast email I think it's once a week it, might be twice from something called AI, tangle yeah and in every one there's, like 15 or something new ideas about Ai, 
and in the new companies that just, sprung up and I've always read and say, well what's what are they going to do, and they all sound really similar they, going to help you manage your Enterprise, and they're going to coordinate blah, blah blah and it's never very inspiring, instead if I were in that mode if if I, could quit my job now and start an AI, company the first thing I'd probably try, to do is say what are the real big, challenges in our lives that we're not, fixing how can I use AI to improve our, climate change posture how can I use AI, to lift all the people that are food, insecure in America out of that food, insecurity or or something that's very, close to my heart 10 years ago with a, noble Republican we started a suicide, prevention task force and if I ever get, past my masters what I'd love to do is, based on where history is is use AI to, work on a predictive model and who's at, risk for suicide we lost one of our, beloved Capitol police officers 3 days, ago um who took his own life died by, Suicide and I used to say good morning, to him every morning big smile sweet guy, I talked to a bunch of his fellow, officers yesterday nobody had any idea, this was coming and if we lose 50,00, ,000 people a year that's just about, where we were last year to death by, Suicide wouldn't it be great to be able, to use AI to figure out ahead of time, that a thousand of them were at risk and, intervene and save those lives or or, make even more and that's just one small, example but there's so many ways that, I'd love for us to use AI the generative, part yes but especially the predictive, part to see if we couldn't you know make, the world just a really better place, you've all you've all probably read was, it the Minority Report, yeah Phil K dick right well they, actually that they used science fiction, to so they could look ahead you know, Chris could look ahead and figure out, who's going to commit a crime and they, throw him in jail ahead of time well we, 
can't do that but if you can use, artificial intelligence to figure out, who's most at risk of committing a crime, and intervene in a positive way to, change their lives maybe we can have a, safer happier world I really love this, line of thinking it's uh Daniel and I, focus very much on the show about kind, of AI for good and not everything being, you know about making a profit in a, business but kind of how does it affect, society it's a big theme in the show and, I run a nonprofit when I'm not working, at Lockheed uh on it's a pure Public, Service uh nonprofit and it's turned, into another full-time job that I don't, get paid for when you think about how AI, can do good in these ways obviously, taking into account you know the safety, and privacy issues that are there you, mentioned climate change you mentioned, food insecurity and suicide prevention, which kind of ties into a mental health, theme and as we have these AI agents you, know we as we combine generative AI with, some other tools and and you know kind, of there's an agent for everything in, the future you know there's some agents, that can handle manyi tasks there are, many specialized and everyone talks, about that coming how do you you know, you you mentioned about kind of Suicide, Prevention and I think that's a, fantastic idea if you have that agent, that personal assistant that's always, there and can kind of Take Care of You, Beautiful idea also in education um I, have a a daughter who just turned 12 and, I'm trying really hard and I know, education is a big thing for you um how, do we see our children today growing up, in this world how can AI help them with, education how can it make their world, better there's the scary things like job, loss that that we always worry about but, there's also these amazing potentials, for good as well what are your thoughts, about about uh Ai and education and, where that goes into the future I'm very, excited about it by the way let me just, before that one of 
the that ties in some, of the last couple of questions one of, the things that fascinated me about my, last meeting with the European Union, people was apparently their AI act, banned the use of AI for emotional, recognition and they didn't want not, just facial recognition but reading, faces to see how you know Chris's, feeling today and I I was concerned, about that and pushing back on it, because you'd think that that might be a, really helpful thing as you look at, somebody who might be at risk there are, a couple of linguistic professors at, Georg University who are trying to use, uh people's writing and texts to see, language that jumps out that suggests, suicide ideation once again for a, predictive sense but moving to education, U I'm sure you guys had the same, experience I know my wonderful Jason, Lang or Tech fellow must have had many, boring boring hours in high school while, you sat there while everyone else tried, to catch up I can't tell you how many, plays and poems I wrote while other, people were trying to figure out the, physics problem you know the fact that, you can use technology in general and, artificial intelligence in particular to, let people go at their own pace and, learn as much as they can as fast as, they can but then on the other side for, the kids that can't read at grade level, in second and third and fourth grade I, know that you can use technology in a, way that can help them um learn these, reading skills early on and improve it, it should be the kind of thing that, applying artificial intelligence to, education should make the teacher task, easier personalized education I've never, had to teach a classroom full of 30 kids, but you know 30 kids have 30 different, levels of ability and that's got to be, really challenging as you mention that, I'm the emotional detection thing I, think if you can get through the privacy, concerns and who's in control of that, data I would imagine that that could be, a huge plus across mental 
health, education and stuff to do that and so, I'm very encouraged to hear you say you, know indicate that assuming that the, context is Right assuming that we can, find the right you know constraints and, barriers around it that that would be a, plus going forward rather than just, saying nope we're not going to do that, leading into this I tend to take my, daughter Rogue a little bit on her, homework the teachers are telling her, don't use any AI on any of your stuff, and um and the teachers are bound by the, policies of the school board so I'm not, lashing out at teachers at all in that, way I just wanted to be clear on that, but policy right now is very much, against that in the school systems and I, tell her no no I'd much rather you learn, the material but let's use the, Technologies to help that learning, happen do you think that that will, prevail and that we were going to have, these Technologies in a really, beneficial way very personalized for, each student as you mentioned do you, think we can get through the politics, and the lobbying against that that, currently exists I think so I I think, it's natural there's going to be, resistance in the short run but already, I'm I've talked to a lot of college, professors who are like don't worry, about it you know I'm thrilled that, they're using it because they're, learning the material and they're asking, deeper questions and the AI is often, pushing back and and asking what they, know you know it's I think it's fine and, worst case we can go back to blue books, for the exams where you handw write the, whole, thing yeah I definitely uh remember my, fair share of uh all those tests where, you have to fill in the little the, little dots, umon with your pencil yeah scantrons, those are interesting closing out here, uh don we're getting near to the end of, our conversation and we've talked a bit, about education policy International, approaches to and how AI is influencing, kind of global relations and those 
sorts, of things I thought it might be fun to, end here with just asking you how AI is, influencing your life personally um what, have been some things that have been, helpful for you and as a congressman, working in our government how is or are, you thinking AI will shape your your job, I confess at the beginning I'm a huge AI, Optimist maybe not as far as Mark, andrees but still a big AI Optimist and, where I see it most meaningfully is in, healthc care you know the fact that we, can now in some cases diagnose, pancreatic cancer three or four years, ahead of when we could otherwise I met, with a bunch of radiation oncologist the, other night and the difference in, getting radiation treatment for cancer, 20 years ago and today because of, artificial intelligence is night and day, you they can exactly pick out your tumor, you know to the Micron externally and, put that protein beam or Neutron Beam on, and make it dissolve and go away you, know it's just remarkable there's a, wonderful new book out called why we die, on the science of longevity that argues, that the first person to lived to be 150, years old has already been born he or, she is among us today because of the, difference that artificial intelligence, just the applied knowledge of this, extraordinary amount of data that we, have you we have some good pretty good, ideas about physics and some on, chemistry we know very little about, biology very little about the human, brain but artificial intelligence is, going to open up a lot of those doors, for us well thank you for taking time, today to give us a bit of that optimism, but also help us understand man kind of, how how government is thinking about, some of the more difficult and uh safety, related issues with AI um we're very, encouraged to have you um in those, conversations and taking time to join us, and uh speak to practitioners directly, in this conversation so thank you so, much John it was great to talk to you, thanks a lot Don we really 
appreciate it, thank you Daniel and Chris good luck, [Music], all right that is practical AI for this, week subscribe now if you haven't, already head to practical ai. FM for all, the ways and join our free slack team, where you can hang out with Daniel Chris, and the entire change log Community sign, up today at practical ai. fm/ Community, thanks again to our partners at fly.io, to our be freaking residence break, master cylinder and to you for listening, we appreciate you spending time with us, that's all for now we'll talk to you, again next time, [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | First impressions of GPT-4o | Daniel & Chris share their first impressions of OpenAI's newest LLM, GPT-4o, and Daniel tries to bring the model into the conversation with humorously mixed results. Together, they explore the implications of Omni's new feature set - the speed, the voice interface, and the new multimodal capabilities.
Leave us a comment (https://changelog.com/practicalai/270/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Ladder Life Insurance (https://ladderlife.com/changelog) – 100% digital — no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term coverage life insurance through Ladder. Find out if you’re instantly approved. They’re rated A and A plus. Life insurance costs more as you age, now’s the time to cross it off your list.
• Neo4j (https://graphstuff.fm/episodes/2023-finale-llms-and-knowledge-graphs-throughout-the-year?&utm_campaign=UCGenAI&utm_content=AMS-SrDev-ToFuDev-UCGenAI-Audio-None-GenAI1-GenAI-NonABM&utm_medium=Audio&utm_source=PracticalAI&utm_justglobal=) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.
Featuring:
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
Show Notes:
• Hello GPT-4o (https://openai.com/index/hello-gpt-4o)
• AI Engineer World’s Fair (https://www.ai.engineer/worldsfair)
• AIQCON - the AI Quality Conference (https://www.aiqualityconference.com)
• Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (https://www.amazon.com/Brave-New-Words-Revolutionize-Education/dp/0593656954)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-270.md) | 798 | 10 | 0 | [Music], welcome to Practical AI if you work in, artificial intelligence aspire to or are, curious how AI related tech is changing, the world this is the show for you thank, you to our partners at fly.io the home, of changelog.com 30 plus regions on six, continents so you can launch your app, near your users learn more at, [Music], fly.io hello and welcome to another, fully connected episode of the Practical, AI podcast in these fully connected, episodes we keep you connected with, everything that's happening in the AI, world and help you find some resources, to level up your machine learning game, my name is Daniel Whitenack I'm founder and, CEO at Prediction Guard where we're, safeguarding private AI models and I'm, joined as always by my co-host Chris, Benson who is a principal AI research, engineer at Lockheed Martin how you doing, Chris I'm doing good today Daniel how's, it going with you it's uh it's all good, yeah I got the chance last week to visit, Boston and see a bunch of cool stuff at, a few labs around MIT which was a lot of, fun I toured a couple labs where they're, making or they're using AI to make, proteins like drug candidate proteins, very cool so the idea is um one of the, companies literally named AI Proteins, hopefully we can have them on the show, sometime I requested their CEO, while I was there he was giving us the, tour but yeah the idea being that you, can use various AI driven methodologies, to explore the space of proteins for, drug candidates and they're kind of, binding to certain stuff I'm not a, biologist or anything like that but then, they take those and then synthesize them, in the lab and test them and eventually, hope to get them into sort of drug, candidates and through FDA testing and, all of that stuff so it's pretty cool I, would love to have them on the show the, uh I've
loosely followed that field over, the last couple of years largely because, um someone that I used to work for is a, chemistry PhD from Harvard and is, uh very familiar with biotech so uh he's, kind of kept me up to date on some of, that it sounds fascinating uh I know, that drug uh discovery is really all, about AI these days I think that's where, all the action's happening in that field, yeah and it's pretty amazing at least, from what I've heard from a couple of, those companies just the hopefully the, speed like the orders of magnitude, faster that they'll be able to explore, the solution space I guess so testing, you know thousands and thousands of drug, candidates uh very quickly rather than, maybe you know a postdoc or a PhD testing, only a handful over the course of many, weeks or even years they're able to do, things much faster which is really, interesting and of course they're, exploring that really useful, application of that technology but I, guess this is one of the reasons that, some people might have sort of ethical, concerns with some of this stuff because, it's kind of like you can apply the, technology in a really positive way and, explore drug candidates I'm sure you, could also think about things that would, be harmful to humans and even think, about like biological weapons and that, sort of thing and explore that solution, space in the same way and those sorts of, things don't need FDA approval so yeah I, imagine that there's people smarter than, me that have thought more deeply about, those concerns and I know it was, mentioned I think in our last round of, interviews about Mozilla's report on AI, this last year we had an episode on that, but yeah I was thinking about that while, I was there it sort of cuts both ways I, guess it does it's uh just since you, mentioned that I know in kind of the, defense and intelligence world with AI, capabilities being the great equalizer, the idea of uh malignant forces in the, world deciding to
focus on such things, which incidentally is very illegal under, international law right uh but certain, places in the world don't care so much, about that uh and so we'll have to see, it's uh I've had a lot of conversations, about the very good and the very bad, about AI uh with folks lately and and, what an uncharted world we're moving, into at this point yeah yeah well I'm, very happy at least the the people that, I've run across are quite ethical and, and moving towards things that will, hopefully hopefully benefit us all um, but speaking of benefit to many people, there was something rolled out this week, that definitely caused a bit of a stir, and also instantly appeared on a bunch, of people's phones and devices and that, was the next GPT, GPT-4o uh standing for, Omni if I got that right uh, GPT-4o, Omni I don't know the full, background of that naming if it's meant, to evoke omniscience or I think the, explanations I've seen have been about, multimodality yes you know just the fact, that it can do photo video voice, everything um and yes it was quite a, release uh you know everybody's been, talking about the expected uh, release this summer potentially of, GPT-5 you know interestingly enough uh, this came out and even though, it's still part of that 4 family it's, had quite an impact I know that in the, last week I will say that it's been the, most opened app on my phone uh pretty much, around the clock yeah starting to feel, like a family member uh because it's, involved in all of our family decisions, uh trying to uh things like dealing with a, leak in the drywall and trying to use it, to do something as mundane as figure out the, concerns and it seems to be you know, when my wife and I are talking about, household things uh we now have ChatGPT, as a third party in all those, conversations it seems to have, supplanted my daughter I'm not sure she, likes it very much so yeah well uh since, you have been getting hands-on and using, GPT
4o quite a bit for you either in the, announcement or in your own use of it, what are those things that stand out as, the things that have changed from let's, say GPT-4 to GPT-4o well I don't have a, list of things in front of me at this, moment or anything but things that I've, certainly experienced is it is much, faster than just uh GPT-4 had been it's, able to respond very quickly in any of, the modalities that we're talking about, and when you say modalities you're, meaning sort of text speech image that's, correct it seems much faster I haven't, measured it across the board but I think, the thing that's been notable in my own, workflow of it, is that I'm not having to wait around, and kind of figure it out and you know, before this week I'd kind of say okay, I'm going to get on to GPT-4 and ask it a, question I kind of stop everything that, I'm doing and do that um and I think the, difference is the thing that's really, impacted me the most is maybe the, subtleness of no longer waiting around, being able to do it just by speaking and, being spoken to and it's no longer a, stop and do something kind of activity, it's now as I'm doing it uh as we're in, the middle of conversation it just, becomes part of the conversation I don't, tell my wife uh hold on one second um, I'm going to check real quick on this, question with GPT-4 and let's see what it, says and then we can you know take that, into account as we talk now it's just, like right there at the kitchen table we, just do it so third party in the, conversation dueling GPTs yeah that's, right that's right yeah I think that, some of the the main features uh if, people haven't been following it quite, as much in the news which I'm sure a lot, of our listeners have been, following it quite closely one thing, that they focused on was speed so in, particular with the voice response when, your voice is kind of recorded in then, they're talking about responding in, milliseconds rather than I think before, it
was a few seconds something like that, which of course uh is much faster I, think in general it's a fast model, uh in my understanding in terms of, response and streaming across the, modalities also in terms of access, both account wise and cost wise another, drop in cost as far as the cost for the, performance goes so that's a trend that, I think continues and also one of the, things I was happy to see was most of, the GPT models over time have penalized you, basically in terms of token count for, putting in languages other than English, because you would get higher token, counts and if you're charged by how many, tokens you put in or generate out and, let's say you're putting in Korean or, something like that then it's actually, more expensive to use the tool in those, other languages so I think they at least, in my understanding from what I've read, I don't know if that's fully equitable, at this time um but there was an effort, to kind of correct some of those issues, as they came up so have you used the, video features much I have a bit, and it's very good compared to things, that have come before but sometimes it, seems to get amazing context and, occasionally it struggles a little bit I, think it depends on how much context it's, able to get out of the imagery yeah I've, mostly used the sort of image related, stuff uh versus video so uh audio and, text and image is kind of what I've done, but yeah they show a good number of, things in the demo videos related to uh, video and also even kind of combining, one version of this running with another, version and having interactions between, the two and interview prep with the tool, and all sorts of cool stuff so if people, haven't seen it I definitely recommend, that people go and check out the demos, to kind of get a sense of the, performance but yeah it's overall quite, impressive the subtlety of being able to, do these things with that reduced time, and across modalities you know while it, might not be
whatever giant jump that, upcoming GPT-5 would be the fact that, it's changing our behaviors and the way, that we're using it in this last week, and enabling things that just weren't, practical before I think that really, makes a difference going forward to the, point where um I work with uh here in, the Atlanta area I work with some of the, local universities in their various, computer science colleges and schools, and such and uh I was at one on Friday, for kind of a daylong strategic planning, meeting on computer science and where, they were going with it and we were, talking about this while we've been, talking about AI's impact obviously at, any kind of computer science program, this may change not only what you can do, but education as well in a pretty, fundamental way in terms of teaching and, being able to do it in real time, and stuff and we had uh quite a rich, conversation around it lasted quite a, while uh around how we might be able to, utilize these new capabilities in the, classroom going forward and also how it, might change curriculum so I think we're, really starting to get to a point where, a lot of new capabilities in, education are right around the, corner, [Music], if you're anything like me you have a, certain tendency to put things off until, the very last minute seeing the dentist, going to the doctor home improvements, that never-ending chore list of yours and, while most of the time it works out just, fine the one thing in life that you, really cannot afford to wait on is, setting up term coverage life insurance, you've probably seen life insurance, commercials on TV and thought yeah I'll, look into that later, no later doesn't come this really isn't, something you can wait on choose life, insurance through Ladder today here's, why we love Ladder and why we, allow them as a sponsor they are 100%, digital no doctors no needles no, paperwork when you apply for $3 million, in coverage or less just answer a few
questions about your health in an, application Ladder's customers rate them, 4.8 out of five stars on Trustpilot and, they made Forbes' best life insurance, 2021 list you just need a few minutes, and a phone or laptop to apply Ladder's, smart algorithm works in real time so, you'll find out if you're instantly, approved no hidden fees you can cancel any, time and get a full refund if you change, your mind in the first 30 days Ladder, policies are issued by insurers with, long proven histories of paying claims, they're rated A and A+ by AM Best, finally since life insurance costs more, as you age now yeah right now now's the, time to cross it off your list so go to, ladderlife.com/practicalai today to see, if you're instantly approved again, that's, ladderlife.com/practicalai, L-A-D-D-E-R, life.com/practicalai, [Music], I'm having a discussion with my podcast, co-host Chris about the new GPT-4o model, what do you think we should talk about, that sounds like a great topic for your, podcast you could discuss how GPT-4o, compares to previous versions especially, in terms of its capabilities and, improvements delve into its applications, across different industries like, healthcare creative arts or customer, service and consider exploring the, ethical implications including privacy, concerns and bias in AI it might also be, interesting to discuss real world case, studies or predictions about how AI will, evolve in the future what do you think, your listeners would be most interested, in hearing about well a lot of that, seems pretty interesting you mentioned, healthcare what were you thinking in, relation to, healthcare sorry I'm having issues right, now our systems are experiencing heavy, load please try again later well at, least I'm not the only one having issues, at least I know that ChatGPT-4o has, issues itself uh at times there's, something slightly uh, satisfying about that I must say yeah well um it, was doing pretty good there until it had, volume issues or whatever
they're, experiencing and I got cut off I'm, going to call it a mental blank you know, I have those from time to time you know, I'm just going to say it so what do you think, Chris uh our friend over in uh, the ChatGPT world ChatGPT-4o, suggested some things about privacy, concerns as related to AI I probed a, little bit on healthcare related things but, it wasn't able to give me an answer and, got bogged down but it also mentioned, privacy concerns um yeah have you, thought about that as you've obviously, been using the system what, changes now in terms of privacy now that we have, 4o and not 4 how is it different if, at all I think it is uh and this is a, topic that has come up quite a bit this, past week in various online forums, there was a particular LinkedIn post, I'll try to find it and include it, in the show notes if I can that brought, it up and with us now talking to it and, receiving it back how does that impact, this is this recording is it not, recording how does this qualify under, different state laws when we were busy, typing it in and getting our questions, back you know while there were privacy, concerns it wasn't extending now to, audio recording of voices you know which, is covered under uh state laws of all, states in the US at least and I'm sure, many countries out there what do you, think I'm just curious I know neither of, us are attorneys but uh but now that, we're leaving our phones open to ChatGPT, and capturing people I'm sure in, I've done it in public places a bunch, this week so how do you think that, impacts it do we need to tell everyone, we're doing it okay everyone quiet, okay I'm starting ChatGPT-4o it's weird, because it's some of the same feelings I, think people had originally when they, started bringing Alexa or Google Homes, into their home and it was sort of, always supposedly not listening but it, had to be listening at least to get the, wake word right so there was this, awkwardness there in
terms of what's, actually being recorded and that, sort of thing I think the difference, here you kind of almost got there when, you were talking about how you are, using it in your everyday life I think, people can see that this technology, because there's a quick response so, as I was playing that like you, could tell the first response that I got, from ChatGPT was pretty quick I would, say it's still not quite like you and me, talking it's not natural right but it's, pretty quick and so there's this, tendency then to think oh well I can, leave this on at certain times or like, you say have it as part of the dinner, table conversation you kind of then, bring in the devices like the Meta AI, glasses and like maybe I just have ChatGPT, watching what I'm watching through, my Meta AI glasses and telling me about, this or that and so you've got all of, these modalities coming together it's, recording in your kind of physical space, not only your voice but potentially, images and videos from your physical, space and all of that data is going over, an API to OpenAI or Microsoft or, however the Microsoft OpenAI, conglomeration I don't that's not a word, uh works these days but yeah it's that, embedding I think of the technology in, the physical world or the clear, application of that within our sort of, physical world and like you say not, pausing to go and pull up a tab and talk, to ChatGPT it could be ubiquitous, and embedded in our physical world I, guess would be a good way to summarize, it to extend that a little bit Sam Altman, the OpenAI CEO one of the comments he, made this week in an interview was, somebody was asking when you should use it I, believe and he said oh you should just, have it on all the time just listening yeah, uh and I'm paraphrasing him I'm not, quoting him uh but the gist was never, have it off I know that was one of those, moments that the privacy uh notion at, least right now I'm, operating under the
assumption that it's coming, into play when I and the people around, me are familiar with it and we've kind, of made that choice to do that but, certainly you know going back to the, Alexa notion and stuff I think this is, going to continue to be an issue here, the Alexa stuff we have those as well uh, oddly enough I don't find myself paying, much attention to them anymore I guess, I've just gotten so used to them being, part of the environment and stuff but, we'll see yeah well AI meeting the, physical world is definitely I think, going to become more and more a reality, at the Boston Logan airport when I was, flying out this last time I saw you know, normally they have little booths, where there's a person that's like your, helper at the airport like if you have, some random question about where the, bathrooms are am I at the right gate or, um how do I catch this bus there's a, helper they just didn't have, anyone there at the thing and just, relabeled it virtual assistant and just, had a screen that you could push and, talk to and I know there's a good number, of companies that are working on, sort of interactive virtual agents for, retail environments that sort of thing, and then you have this crossover with, the glasses and Rabbit R1 and Humane AI, Pin and Meta AI glasses and all this, stuff so are you becoming a cyborg Chris, are you mostly just keeping it in your, phone I think I've accepted the fact, that it's inevitable to do that I say, that half tongue-in-cheek half not to, that point actually it makes me think, you know this is penetrating so far, beyond people like us in this space and, I have a very good friend who I, don't think would identify as a, technology person and she brought up the, fact uh that and this isn't even, specific to ChatGPT-4o or anything but, it is to your effect there she, brought up that she and her daughter had pulled into a, Chick-fil-A and um they noticed a sign
that said Robot Crossing and they didn't, really know what that meant but then, they actually saw a robot delivering, food and now that robot I'm sure at this, point doesn't have very sophisticated AI, capability for interactions it's, probably pretty basic but in the, conversation I pointed out it's, inevitable you have with so many you, know as we pointed out a week or so ago, that we're over a million models already, on Hugging Face and with these kinds of, profound releases each week it's only, a matter of a very short time before, even the most mundane retail, experience is going to have uh both, robotics and AI in it and so all of, those things raise the privacy concerns, that we were talking about before and, they also raise cultural concerns uh and just, you know folks getting used to it, frankly and of course it inevitably led, to the concern over jobs and such, as is often coming up but um I, think this is maybe the first year that, it's moving so fast in terms of these, capabilities that even I am, struggling to uh take them, in as they come out how about yourself, even though you're in that profession, yeah well definitely even, out here on the prairie uh in Indiana, it's becoming the Silicon, Prairie with Intel building their big uh, factory in Ohio and new semiconductors, coming to Lafayette Indiana so plug for, the Silicon Prairie anyone out there can, hashtag Silicon Prairie on Twitter and, all the places but even out here uh, well I'm close to a university close, to Purdue and they've had food delivery, robots around campus here for six to eight, years now which is cool but, you just made me feel very parochial in, that way so yeah um but it's cool I, don't know it'll be interesting to, see also which brands kind of embrace, this and how they embrace it and put, it into their retail environments and, how that's
associated with specific, brands and how they create new, experiences for customers around these, things it's quite, interesting what's up friends do you, remember when ChatGPT launched I do it, felt like the LLM was this magical tool, out of the box however the more you use, it the more you realize that's just not, the case the technology is brilliant, don't get me wrong but it's prone to, issues like hallucination on its own but, there is still hope feed, the LLM reliable current data ground it, in the right data and context then and, only then can it make the right, connections and give the right answers, the team at Neo4j has been exploring how, to get results by pairing LLMs with, knowledge graphs and vector search check, out their podcast episode about LLMs and, knowledge graphs throughout 2023 at, graphstuff.fm they share tips on, retrieval methods prompt engineering and, so much more don't miss it find a link, in our show notes yes check it out, graphstuff.fm episode 23, [Music], [Music], hey so this week I'm going to have to, migrate some services out of AWS ECS into, a Kubernetes cluster that's hybrid, between my cloud environment and a data, center what should I keep in mind that, sounds like a big project here are a few, things to keep in mind when migrating, services from AWS ECS to a hybrid, Kubernetes cluster one assessment and, planning start by evaluating your, current ECS configurations and services, this will help you understand the, dependencies data flows and necessary, resources mapping out how these will, translate into Kubernetes is crucial two, Kubernetes environment setup, yeah that was getting kind of boring so um, maybe just tell me what cool song I, should listen to while I'm doing the, migration uh oh oh boy are you still, there no dice all right well ah man I, struck out twice OpenAI you had your, chance live on the Practical AI podcast, and I got skunked both times so yeah, well we didn't rehearse enough yet
they, did have some pretty cool videos on the, ChatGPT-4o release you know they had uh, several different ones but one of them, they had two phones talk you know with, ChatGPT talking to each other and they, introduced them and gave them, little monikers to differentiate between, them but I will admit I tried that at, home um right after it got released I, saw that video I was like I want to try, that and I will confess it, did not work well uh on my end either, unrehearsed so um I guess those ChatGPT, folks at OpenAI have the inside, track on smooth conversations I'm sure, it worked at one point as most demos do, but yeah still impressive, nonetheless I have to say I gave it a, pretty complicated question there maybe, one that I could definitely use some, help with so yeah I think it did pretty, good at answering of course and was, responsive I'm wondering Chris what you, think about now that we have GPT-4o what, is the future of all of these different, physical AI device gadgets that have, come out in recent times so there's been, the Rabbit R1 there's been the Humane AI, Pin there's been the Meta AI glasses and, probably others that I'm not even aware, of what's your thought on how this, influences these sort of AI gadgets, while this is also a golden age of AI, startups the bar keeps getting, raised very rapidly and unexpectedly so, you can go from super cool to obsolete, overnight you can be one announcement, away from a tough moment there for, your product and or service so um you, know for instance now that the world, has had a little time to try out the 4o, version and it's changed the way we do, it a little bit that's set a new bar, it's set a new expectation on how you're, going to interact with AI and I will, confess that this week whereas both you, and I are always big fans and supporters, and advocates of open models and being, able to do that instead of just, having a service provider I have to, confess
that um when I was using open, source models this week uh as, much as I was also using the 4o model, it was frustrating because my own, expectation had risen so if I was using, one of these products and the world just, changed in terms of um kind of standard, expectation on these model capabilities, it wouldn't take much to not be able to, survive uh that if you can't react to it, quickly enough so yeah it's, interesting times that we live in so, where do you think if anywhere those out, there building AI products or maybe, products that are driven by AI features, where can they capture value because, certainly from my perspective you know, even with this release of GPT-4o, unless you're already a certain ways, there it's probably not just having an, LLM API because that is essentially just, a commodity now there's you, know some are more expensive than others, but essentially the price is kind of, dropping to almost zero unless you're at, a very high usage rate which certainly, some companies are and that becomes an, issue for them but yeah where do you, think the value is to be had I still, think it comes from kind of a classic, Steve Jobs throwback uh comment it's, not just about the AI it's not just, about the LLM it's about you're, producing something of value that's, trying to solve a problem, and you're combining all these things, together to create the right you know, capability or experience for your, customer and I still think that's where, it's at maybe if I give a devil's, advocate to my own comments a moment ago, if you're going to have a product that, has AI integrated into it make sure the, AI is really serving the capability of, that product as opposed to being about, the AI itself because then you can, be undone by the next, announcement so I really think it's, utility for the thing that you're buying, the device for as we're buying more and, more um AI enabled devices going forward
so and most of them will not have you, know the leading edge capability via API, you know in it so yeah I think that the, space of those that are working on, general purpose serve-everyone type of, AI products which definitely fits into, these kind of assistant places it's, a hard road because like you say, something could knock you off that, pedestal quite easily it's hard to, compete in terms of price and the, commoditization of these things but in, the enterprise it's still very hard to, utilize these tools um that report that, I've referred to a number of times from, Andreessen recently they were saying, there's these huge budgets in AI across, enterprise companies and 75% of it has, nothing to do with the usage of the, model at all or the hosting of any, models or anything like that it all has, to do with engineering integrations, around workarounds and malfunctions and, making sure it's reliable and dealing, with all the issues so there's still a, lot of space I think even if you're not, vertically focused but certainly there's, also people that are vertically focused, that I think will come out really, well one of the companies that I was, able to interact with a little bit last, week they're doing financial workflows, in the financial services sector called Far, Side AI automating things that used to, take days with market research and, creating slide decks and all of this, stuff it's pretty cool things but, they're bringing their domain expertise, into that field and they're applying it, and that's what really creates the value, that's why someone would pay for, that whereas there's not really, going to be that many people that say no, I would rather build that from a raw LLM, API and you know just not very many, people are going to do that because it's, much harder than you might expect so, yeah I think that in certain, verticals applying domain knowledge, creating these agents these automations, that's a really interesting space
moving, forward as well one of the things that, you taught us a while back was kind of, that the relatively speaking smaller, models in that kind of seven eight, billion range you know where, you're able to do it on just one piece, of hardware and stuff, and I think that was fantastic guidance, that you gave us this was on a previous, episode we'll look it up and we can, connect back to it but I think that, that's where all the action is I mean, whereas the press goes, to these huge model releases the real, action in creating value in a product is, going to still be these smaller models, that are fine-tuned very well to the, problem that they're solving and I think, those will continue to be wow because, whereas ChatGPT-4o is wonderful uh in, terms of these conversations usually, wonderful on these conversations on our, iPhones an iPhone is only one of many, things I pick up in a given day and, frankly as we go forward I would expect, all the other things I pick up are, probably going to have some models, associated with them just to do what they, do very well maybe that's the reason why, my GPT-4o isn't performing well, because I'm using it on Android anyway, one of the other things I wanted to, mention Chris and this is kind of tied, into some of this as well there, continues to be an advance of, these closed source models I think if, you look there's a chart that Hugging, Face maintains about the sort of, convergence of open models and the, closed models and the closed models are, still ahead and now of course GPT-4o is, up there at the peak of it but those, lines are converging so they're not just, running parallel with closed models, all the way kind of ahead to infinity, but there's a sort of crossover point, which we'll see if that actually happens, but that's kind of at least as far as, those graphs it looks to be what's, happening which is interesting there was, some news out of Hugging Face this
week, though, that is good news for those that aren't big foundation model builders with big clusters of GPUs. Hugging Face announced that they're going to be sharing $10 million worth of GPU compute, and the article that I read said, to quote, to "help beat the big AI companies." So this is quite relevant to the discussion that we're having now. In my understanding, they're making this compute, these GPUs, available in a project called ZeroGPU, within the Hugging Face Spaces compute and application environment. And so yeah, for those of you out there, you might be sitting around still wanting to innovate with open models or try your own things, and feel maybe not adequately resourced in terms of compute, and particularly GPUs. So it's really cool to see Hugging Face take this step and provide some of those GPU resources to the community that's operating on Hugging Face. So yeah, check it out; if you just search for ZeroGPU you can probably find out a little bit about that effort from Hugging Face. And I love seeing that from them. We've long talked about that; if you look a little ways down the road, AI is ever integrating more and more with the software around it, to the point where it'll be kind of ludicrous to have software that doesn't have some sort of AI capability in it. In the future it's feeling more and more like software in that way, when we hit the million open models on Hugging Face, and then just seeing these capabilities coming up. You know, when you said that, it reminded me that all the major cloud providers will offer a limited free tier so that you can go do some stuff with it, and that's kind of how Hugging Face's offering with open models feels to me, in terms of being able to go use something when you might not have the resources otherwise. So yeah, it's good stuff, but boy
gosh, the world is changing fast here, isn't it? Yeah, Clem from Hugging Face made a quote in The Verge article that I was reading: it's very difficult to get enough GPUs from the main cloud providers, and the way to get them, which is creating a high barrier to entry, is to commit to very big numbers for long periods of time. And of course, that's something smaller companies or even individuals don't have the resources to do, so it's cool to see. Well, there's the ZeroGPU thing, so if you're out there, if you're wanting to learn, if you're wanting to run some of these models yourself, that in itself is a great learning resource and an option for you. But there are a couple of really cool things event-wise coming up soon, and actually events where either Chris and/or I will be physically present, so I wanted to mention those to everyone, because there are some good things that will be streamed in terms of content, and learning resources like workshops from people all across industry. The first of these is with our good friends over at the MLOps Community. They're putting on the AI Quality Conference, at aiqualityconference.com, and that's going to be June 25th in San Francisco. And as you all have seen when Demetrios has been on the podcast, that guarantees some really great content there. They've got really great speakers, including people that have been on this podcast before, like Jerry Liu from LlamaIndex and others, so I would definitely recommend going there and learning from people at the AI Quality Conference. Then, also that same week in San Francisco, so if you wanted to you could time this quite nicely, which is what I'm going to do, there is the AI Engineer World's Fair. You can go to ai.engineer and find out more about that. That's also going to be in San Francisco, June 25th through the 27th, and of course this is at least partially arranged and sponsored
by some good friends involved with the Latent Space podcast and Discord community, which we've done collaborative episodes with, and sponsored also by Microsoft. There's a day of workshops, there are speakers, and lots of really great content there, so two wonderful learning opportunities. I'm not sure if both, but I'm pretty sure some of that content will also be streamed live online, so if you aren't able to make it out to the Bay Area you could still see some of that content and participate. I would encourage everyone to get out, at least online, and see what's going on in the community, and check out these great events from some of our friends across the community. That sounds fantastic; I hope people join us there. And there's one other learning source that I thought I'd mention. It's a book I haven't read yet; I'm about to read it, and I thought I'd go ahead and share it, given my fascination with AI and the way it's changing education, which we've talked about a little on this show and some other recent shows. It's by Salman Khan. Khan is the CEO and founder of Khan Academy, which has helped millions and millions of people learn skills, and it's fantastic; I use it regularly. The book is Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing). It was just recommended by Bill Gates, of Microsoft and Gates Foundation fame, so that's probably one of the next books I'm about to pick up, and I thought I'd mention it, and maybe we'll bring it up again on a future show. So, recommending that one. Sounds good. Well, let's have GPT-4 take us out. Hey, I'm about to go to sleep after this, because I'm recording this podcast late with my friend Chris. I'd like to dream about an amazing future with AI. Could you give me some suggestions about maybe some thoughts I could have as I go to sleep, which might prompt some really amazing dreams about AI in the future? That sounds like a
great way to wind down. Here are a few ideas to let your mind wander into the possibilities of AI in the future. One: imagine an AI that can predict personal health events and seamlessly coordinates with medical professionals to provide proactive care, ensuring optimal health with minimal effort from individuals. Two: think about an AI-driven global environmental system that manages resources and energy usage across the world, balancing human needs with planetary health and restoring ecological balance. Three: envision a world where AI personal assistants evolve to become lifelong learning partners, adapting to your interests and educational needs over time, making learning continuous, engaging, and perfectly tailored to your personal and professional growth. As you drift off, let your mind explore how these advancements could change everyday life, create new opportunities, and solve the big challenges facing humanity. Sweet dreams about the future of AI. You can't beat that. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already; head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat freak in residence, Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Full-stack approach for effective AI agents | There's a lot of hype about AI agents right now, but developing robust agents isn't yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue's CTO, tells us more about their approach and some of what they have learned along the way.
Leave us a comment (https://changelog.com/practicalai/269/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://graphstuff.fm/episodes/2023-finale-llms-and-knowledge-graphs-throughout-the-year?&utm_campaign=UCGenAI&utm_content=AMS-SrDev-ToFuDev-UCGenAI-Audio-None-GenAI1-GenAI-NonABM&utm_medium=Audio&utm_source=PracticalAI&utm_justglobal=) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack.
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Josh Albrecht – Twitter (https://twitter.com/joshalbrecht) , LinkedIn (https://www.linkedin.com/in/joshalbrecht)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• CARBS (Imbue’s cost-aware hyperparameter optimizer) (https://arxiv.org/abs/2306.08055)
• Imbue paper on the stepwise nature of self-supervised learning (https://arxiv.org/abs/2303.15438)
• A paper on initialization/feature learning co-authored by Jamie Simon, a member of Imbue’s technical team (https://arxiv.org/abs/2310.17813)
• Imbue (https://imbue.com/)
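The CARBS paper linked above describes Imbue's cost-aware hyperparameter optimizer. As a toy illustration of the cost-aware idea only (a naive random search, nothing like the paper's actual Bayesian machinery), the sketch below spends cheap small-scale runs first and only scales up a narrowed search range; `toy_train` is an invented synthetic stand-in for a real training run.

```python
import math
import random

def toy_train(lr, scale):
    """Stand-in for a training run: returns a loss. Cost grows with `scale`.

    Loss falls with scale and is best when lr is near 1e-3 -- purely synthetic.
    """
    return 1.0 / scale + 0.1 * (math.log10(lr) + 3.0) ** 2

def cost_aware_search(budget=100.0, seed=0):
    """Spend a compute budget across scales, small runs first.

    Not the real CARBS algorithm -- just its cost-aware shape: cheap
    small-scale runs narrow the learning-rate range before any expensive
    large-scale run is launched.
    """
    rng = random.Random(seed)
    lo, hi = 1e-5, 1e-1          # current lr search range (log-uniform)
    best = None
    for scale in (1, 4, 16):     # each run at scale s costs s budget units
        runs = []
        while budget >= scale and len(runs) < 8:
            lr = 10 ** rng.uniform(math.log10(lo), math.log10(hi))
            runs.append((toy_train(lr, scale), lr))
            budget -= scale
        runs.sort()              # lowest loss first
        best = min(best, runs[0]) if best else runs[0]
        # Narrow the range around the top results before scaling up.
        top_lrs = [lr for _, lr in runs[:3]]
        lo, hi = min(top_lrs) / 2, max(top_lrs) * 2
    return best                  # (loss, lr)

loss, lr = cost_aware_search()
print(loss, lr)
```

The point of the structure is the one Josh makes in the episode: small-scale experiments are cheap enough to run in bulk, and what you learn from them (here, a narrowed learning-rate range) is what makes the expensive large-scale runs worth their cost.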
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-269.md) | 538 | 7 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com. Thirty-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I am the CEO and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? I'm doing very well today, Daniel. I'm hoping that we can kind of imbue today's show with a sense of wonder and exploration. Yes, well, thankfully we have an agent on the show with us that's going to be very helpful in that. Today we have Josh Albrecht, who is CTO and co-founder at Imbue. Welcome, Josh. Thanks, it's great to be here. Yeah, well, we sort of, in a not very funny way, teed up a couple of things to talk about there as related to agents. But could you give us a little bit of background? You talk at Imbue about the dream of personal computing, the dream of agents doing work for us in the real world, and your approach to that. We'll dig into a lot of those things, but could you give us just a little bit of background in terms of how you, as founders of Imbue, came to these problems around agents and accomplishing more complete or complicated tasks with agents? Yeah, I mean, AI is definitely something that I've always been interested in and excited by. I remember a long time ago, my friend read some book in middle school, I think maybe Ray Kurzweil's The Singularity Is Near, like, oh wow, there's AI, so exciting. And, you know, did all that come true? I don't know, necessarily, but
it seemed like an interesting thing, and I've always been interested in thinking and logic and AI and neuroscience. When I went to school I was originally going to do cognitive neuroscience, but the professor was a little bit too boring, so I did AI research instead. And ever since then... so I published a bunch of papers and things, but it felt like it wasn't really going to have a big impact on the world, so I went off to do startups. But all the time that I was in startups, I was always looking back and saying, oh, is now the time to get back into more fundamental AI research? Does this stuff work yet? And eventually there came a point where it's like, yep, this stuff is working. What I've always wanted to do with AI systems is make better tools for us. There's so much work that we have to do in the real world that is just not that fun, not that interesting, and not really moving things forward. And so all my time at startups, the things that I've been working on have all been very practical, very applied versions of machine learning. So I've always wanted to... you know, we are an AI research company, but it's not AI research for AI research's sake; it's AI research to actually make tools that are useful. And so what we're doing at Imbue is trying to make tools, even just starting for ourselves: can we make robust coding agents for ourselves that can really help accelerate us, and help take over some of the boring tasks that we don't necessarily want to do? And that's what gets into agents. Agents are AI systems that are acting on your behalf. Tools like chatbots, etc., are really cool; it's great to be able to answer questions, it's great to be able to generate text. But if I have to copy and paste that text every time over into some other thing and do all the work myself, it can only save you so much
time, right? It's like a better version of Google at the end of the day, or a better version of a search engine, or a book. And so I think the real promise of AI is in systems that actually take actions. But in order to get that to work, we still have a lot of work to do on the capabilities side. When you're talking about taking actions in the real world, there are a lot more risks, a lot more downsides that come from that, and you need to be careful. You don't want to empty the user's bank account; that's going to be a really bad product experience, right? So how do you make systems that the user can actually trust, systems that are robust, systems that you can know are actually correct, and that flag for you, hey, I'm not really sure about this? So this is why we always talk about coding and reasoning: the ability to understand the outputs that are actually being created, and to understand, is this correct, is this actually going to be useful for people, and really thinking it through more like a person, instead of just, hey, here's a generation, good luck. So that's how we got to agents: we want to make practical systems, we care about making these systems actually robust and useful for people, and that's what a lot of our research is focused around when it comes to agents, and sort of where we're at with them now. So we're recording this in May of 2024, for those that are listening back. How would you categorize this in your mind? Because you can download LangChain and create an agent for this purpose or that purpose, that searches the web or does this thing or that thing, and there's certainly, even in my own experience, a lot of fun to be had in that, for sure. But there are a lot of challenges in making this, at least in the enterprise setting,
making this a reality for solving problems, much less in those random times in my personal life where I need to do things. So how do you categorize, as of now, and of course everything's changing, the main sets of challenges, where people are hitting blockers when they're trying to create these agents? Yeah, that's something that we actually played around with a lot last year. We interviewed a whole bunch of founders of different agent companies, both on our podcast and our Thursday Nights in AI events, and also just in person, kind of off the record: a bunch of friends, friends of friends, people starting companies, really trying to understand what problems people are running into when they're trying to make agents. And the thing that we kept coming back to is, there are all these tools like LangChain and all these other bits of infrastructure out there, or ways of testing things like Scorecard AI, all these different libraries. But the problem that people really had was: what you really want as a software developer is, does it actually work? Does it actually answer the question correctly? And can I get these things to do what I want as a product designer or as an engineer, without having to specify all of the details myself? That's sort of the promise of AI. And right now they're really great for getting a first-pass version of a system working, where it's like, oh cool, you ask it a thing and 60-70% of the time it's right. That's great, that's so amazing; wow, it's getting this really complicated question right some of the time. But 60, 70, 80% isn't really enough for deploying this, and going from that 80 to 90 to 95 to 99 to 99.99, that's actually a lot of work. And so people have made all sorts of techniques, for RAG or for other types of ways of conditioning the
answers to make them better and better, but the things that work today are the more constrained versions, where you're asking a very simple question, or you're in a very narrow domain, and so the programmers, the product designers, can make sure that everything works out within these rails. Once you're in the more general-assistant kind of category, it's a lot harder; I think we've seen a lot less stuff be successful there. So in terms of categories, and in terms of the problems that people are running into, the main one I would summarize as robustness, correctness: can you actually get these things to be robust all the time? I think that's what really distinguishes agents. If we think about agents in the real world, a dog is an agent, I'm an agent, a robot's an agent. A dog is actually extremely good at not dying for a really long time, right? It's not that 90% of the time when it walks across the road it doesn't get hit by a car; it's 100%, well, almost 100%, most of the time. It's pretty safe. As agents, we're being very, very conservative, very cautious, so that we take correct actions, and there's a lot of heuristics and intelligence that goes into being conservative, being risk-averse, being able to take a long chain of actions without something else horribly going wrong. And our agents don't have that kind of common sense and that kind of reasoning right now. I think if they did, it would make it a lot easier for people that are building agents. As we were going through the last couple of questions, talking about the problems that people run into when they're trying to make agents work, and what they can do to ensure a good outcome, I also run into people all the time who I think really struggle to
understand, within the context of all the hype and the boom of generative AI, what can you use an agent for productively in enterprises in 2024? They're used to going to these web interfaces that are becoming ubiquitous for us all, but the notion of, going back to what you said earlier, getting it out of that web interface: can you paint a picture of how people out there who are trying to bring this productively into their organization, as an agent versus a web interface, might even conceive of how to approach problems they might want to solve with the technology? There's a lot more work to be done today to make agents work for a system. I think if you approach it as a more holistic system, it's more likely to work. So think: okay, where are the places this could go wrong? What's the confidence that I'm getting back from this system? Can I flag that for human reviewers? Can I have a bunch of different checks in place that are in-domain? For programming: does it pass a linter, does it pass this style guide, does it at least type-check, is the syntax correct? There are a lot of in-domain checks you can do to help out, and they're different in different industries as well. And then you can use the LLM to score this, and ask, is this particular thing wrong with it, or that particular thing? So as you start to build up more safeguards and guardrails around these, you can start to get them to a level of robustness where maybe, for the easy cases, it's okay for your application to fail, and you know where that failure rate is, and you've done a lot of work to understand how much you can tolerate. One of the things that we've done a lot internally is working on our own evaluations. This is a
really critical thing for anyone who's trying to build real systems: you have to get really into the weeds of what it means for the system to be right. We've actually taken all of the open-source NLP benchmarks and made our own internal versions, to make sure they're not contaminated by the training data, and to make sure the questions are actually correct. So one of the things that we'll have coming out in the not-too-distant future is hopefully being able to contribute back some of that evaluation work we've done cleaning up these existing benchmarks; we also have a bunch of our own internal ones as well. And I think it's critical for anyone making these systems to make them yourself, like by hand. At least look at a hundred of them: is this the right answer? Okay, what did it get? Is that right? Get to a place where, as humans, you agree on this, and you're getting your machine system to calibrate well to it. Then you're checking: okay, are the things we're getting as inputs in production from the same distribution? Does this test actually make sense for them? Are they drifting? If you have adversarial systems, like fraud, that's much more typical; but if you have something where you're getting the same kind of query every time, then it can be possible to get something you can trust enough to say, oh, okay, cool, this is getting us 99%, that's acceptable, we have some guardrails here, we can check how well it's doing over time, and we have people looking at these and auditing some of them. That's the way to make this really useful: you have to really get into the weeds and into the details of how we evaluate this, what success looks like, etc. And for the use cases out there, the most successful use cases that you've seen, I don't know if you have good examples of
those, either internally or externally, but when you think of those... I like what you're saying about digging into the details. I'm wondering also how much specific domain expertise is actually factoring into how you handle those details. If you're building an agent to help people process data in a healthcare scenario, or data in a financial services scenario, or in a coding assistance scenario, there's this view like, if I just download LangChain and have this zero-shot approach, where this agent might be expected to do anything... My impression is that the most successful agentic types of workflows out there so far have been very much driven by people with high degrees of domain expertise in an area, who are able to work through those details. Is that impression correct? Do you have any thoughts on that? Yeah, that seems pretty much right. I think there's this promise of AI that someday you'll be able to just ask it to do anything, and the interface sort of affords that. It looks like, oh, there's this text box, I can just ask it to do whatever and it will give me back a response, and wow, it even sounds so confident and so correct. Wow, that's great, this can do anything; maybe it even succeeded at that case. One example that I love from a little while ago: we were trying to see how well existing LLMs would do at detecting bugs. The first thing that I did was look: okay, is there a bug on this line? I found a function that had a bug and asked, and it's like, yes, there's a bug on this line. Oh wow, look, it's so good at this. Wait a second, how about this other line that definitely does not have a bug? Oh yeah, there's a bug on this line, it doesn't work in this case. Wait, wait, wait, you're just always saying yes; this is not quite right. So yeah, it seems to promise that, but you have to really dig
into the details, use few-shot examples and retrieval and all these other kinds of techniques to get into the weeds, and the more domain expertise you can bring to bear, the dramatically better I think the outcome is going to be. So Josh, I'm really intrigued by this statement you put online, in terms of Imbue's thinking about building a robust foundation for AI agents as being a full-stack approach. I like that because it reminds me, I don't know, Chris, if you remember, quite a while ago when we were still talking about data science... I guess data science is still a thing, but we were talking about it a lot more years ago, and there was, I forget, I think it came up a few times, this discussion about being a full-stack data scientist. Oftentimes those are the most productive, where you have an understanding of how data pre-processing happens, how you build your application, how the model is embedded in software and deployed, and all of this stuff. So I love that sort of thinking in that respect, and I'm wondering, from Imbue's perspective, how you think about taking a full-stack approach when it comes to agents. Yeah, we take it to a slightly more extreme degree than most people, in that we do everything from setting up our own hardware, building our own infrastructure, pre-training, fine-tuning, RL, evaluations, data generation, cleaning, UI, user experience: the whole thing. And the thinking there is that at each one of these places you can tweak some things to make the overall thing work better together, right? So you can change the training data that you've used in your system to make it more like the kind of thing you actually need for your product; and then in RL you can set objectives that are related to the things your user actually cares about; and then in the UI you can use the capabilities you have to help
highlight places where this particular system fails. So I think we're really interested in the full-stack approach and the ability to tweak things at each one of these levels, and for us it comes from our history as a research company. One thing that we've always really focused on is being able to deeply understand the technologies that we're working with. So for us, pre-training, fine-tuning, doing RL: it's not just a black box. We want to open these things up and understand what's actually happening inside. We have a paper club every Friday where we're looking at the state-of-the-art stuff that's coming out, reading through it and trying to really understand: what are neural networks really learning? How is this language model actually learning? Where does it fail? There are really interesting papers that show particular logic puzzles where this thing doesn't work, and it's like, oh, okay, it's not really doing logic, it's not really doing addition, it's doing this other thing; but if you tweak it in this way, oh, now you can get it to learn a simpler form of addition that is more general. Oh, okay, that's really interesting, right? So what is a Transformer really good at learning? What things in the data actually matter? And how do you evaluate these things? That's another thing we've also thought about a lot. One of the things we set up that has been super useful is looking at not just the accuracy of our systems, but the perplexity on multiple-choice question-answering datasets: specifically not perplexity over all the tokens, but perplexity specifically for the multiple-choice question-answering answers. This gives you a much more fine-grained understanding of whether it's actually right or not; it gives you a really precise metric. And this idea came from a paper, which was about,
you know, I think something like, Are Emergent Abilities of Large Language Models a Mirage?, or something like that was the title of the paper. Their point was: a year or two ago people were saying, oh look, these language models have these emergent behaviors, they're suddenly learning to reason or whatever; oh wow, they're suddenly getting so smart. But when you really dig into it, it turns out that if you look at the performance on a log scale, it's linear. So what was really happening is just that our metric was not very good, right? We weren't really asking the right questions, we weren't deeply understanding what was happening. It was just, on a log scale, always getting better, and you just couldn't see it in the metric. And so for us, this is a good example of wanting to deeply understand what's going on. We don't want to just treat these as magical entities; rather, they're just technologies, really bags of features at the end of the day, that we can use to do actual work in the real world. And so I think that's our approach: take the full-stack approach, understand everything from, okay, how does the InfiniBand network work, how does that fit into our performance optimizations, how does the data work, how does the network work, how are all these things adding up to give us some final error rate or some final user experience that's really good. You're really fascinating me with that statement. So many people do take that black-box approach, and they don't necessarily have the kind of research-first orientation that you're describing. As a company, as a business, how does that research orientation, where you're rejecting the black-box perspective and saying, we're going to open it up, we're going to tinker, we're going to understand the specifics of how small changes affect things, how
does that affect how you approach this, compared to whoever you'd perceive as your competition? What does it mean for you as a company to take that research-first approach?

Yeah, I think there are trade-offs to it. One trade-off is that it takes a little more time and effort to really deeply understand things, rather than just hacking it together and throwing it out there. But the benefit is in the long term: when we really deeply understand these systems, it makes it a lot easier to make modifications and changes, and to know how to improve things. These systems are very expensive to train; a lot of effort goes into this, and it can be very expensive to just try a whole bunch of things. If you don't really know what you're doing, it's easy to waste a lot of time. So for us, we would rather take a step back and say, okay, what's actually going on here? Can we make robust systems, robust baselines? Can we get this working in a way where we can trust our results, where we understand what's going on, and build on top of that? Another thing we've built internally that's been really useful along these lines is CARBS, which stands for something like cost-aware Pareto-region Bayesian search. Basically it's a hyperparameter tuner that is cost-aware. We can take any system we have and say: hey, you have these 10 or 20 different hyperparameters, these different knobs you can fiddle with; I have a system that works, but how do I make it way better? We can just throw it in there, come back the next day, and it's tried hundreds of experiments at different scales. It tries at a really small scale and sees, okay, for a really small scale this is the best way to do it, and then as we get higher and higher and spend more and more time and
resources and money on it, this is how these hyperparameters change, how things change as we scale. And just understanding that there are these scaling laws, scaling laws for different parameters: how can we back those out and learn them for any given architecture, any given problem? Having an automated system to do this lets us develop quickly. It took some time to build that system, but it really pays off to have that kind of deep understanding of the systems we're working with. So for us it's about taking a long-term view. In the long term it's much better to actually understand what's going on, and it does take a little upfront work. That's why we don't necessarily have a product yet. We're working on it; I think we'll get there, and I'm confident we'll get something really cool, but it does take a little longer, and that's okay. I think we'll end up with something much cooler as a result.

As someone who's working all the way up the stack, even up to interfaces and all of that, but who is also training these foundation models: both the market and the technology and the options around foundation models have blossomed and proliferated over the past year especially. What's it been like internally? We've had a couple of people on the show, and I find this interesting from the perspective of someone inside a company that is training their own foundation models. How do you go about maintaining focus within this sort of environment, where eventually you're going to have to spend a significant amount of time investing in a specific model architecture, specific datasets, that sort of thing, but things are shifting all the time? You mentioned reading papers and trying to keep up, but how do you
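The small-scale-first search that CARBS performs can be caricatured with plain random search over a ladder of budgets: tune cheaply at small scale, then re-tune as scale grows and watch the best hyperparameters drift. The objective below is invented purely for illustration; the real CARBS is a cost-aware Bayesian optimizer, not random search.

```python
import random

def toy_cost_aware_search(objective, space, budgets, trials_per_budget=200, seed=0):
    # Run cheap experiments at small scale first, then re-tune at
    # larger scales, recording the best config found at each budget.
    rng = random.Random(seed)
    best_per_budget = {}
    for budget in budgets:  # small scale -> large scale
        best = None
        for _ in range(trials_per_budget):
            cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
            score = objective(cfg, budget)
            if best is None or score > best[1]:
                best = (cfg, score)
        best_per_budget[budget] = best
    return best_per_budget

# Invented objective: pretend the ideal learning rate shrinks as scale grows.
def objective(cfg, scale):
    ideal_lr = 0.1 / scale
    return -abs(cfg["lr"] - ideal_lr)

space = {"lr": (1e-4, 0.2)}
results = toy_cost_aware_search(objective, space, budgets=[1, 4, 16])
for scale, (cfg, _) in results.items():
    print(scale, round(cfg["lr"], 4))  # best lr drifts downward with scale
```

The per-budget winners are the raw material for the scaling-law fitting described in the conversation: plot the best value of each knob against cost and extrapolate.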
maintain that focus, and what's life like in the midst of being a foundation model builder in May of 2024?

Yeah, I would not necessarily characterize us as a foundation model builder. Part of what we do is train models, but that's not the only thing we do, and the reason we do it is not necessarily to make the biggest, bestest foundation model ever. There's a lot of money going into other companies spending huge amounts on general-purpose versions of these systems. For us, the more interesting thing is: can we make more specialized ones? Can we take these, adapt them, make them more specialized? Can we find ways to have them work together, to pull different things together, and make a model that's better at that sort of synthesis and better at the particular tasks we care about? We've seen really good results from this. We'll have some blog posts about it in the next few weeks, but we've seen some really good results with much, much smaller models. If you look at DeepSeek Coder, for example, I think that model still significantly outperforms Llama at the same size, and even much larger models, and that's because it's really trained on a lot of code; generating code is something it's very familiar with, as opposed to being a pretty small part of its distribution. So again, this comes back to the fundamental understanding part. Because we know these are just bags of features: yes, having a bigger bag of features is definitely better, but then your inference time goes up as well. And if you want better bags of features, you need to give it good data. The really important thing here is the quality of the data you're giving it,
less so the absolutely massive size, at least for practical uses. Our focus is: can we make these really specialized and very useful for ourselves, for our own purposes? We're pretty happy to see people out there competing, making better technologies, driving the cost of these things down, making huge context windows, giving them away for free in many cases. That's great; we're happy to see more competition there, because the part we're more interested in is how you actually use these things at the end of the day and put it all together to be really useful.

I love that you mentioned DeepSeek; that's a favorite of ours as well at Prediction Guard, for generating SQL to do data analysis, and code, in our chat interface. We love that, and I totally agree there's a lot that can be done with that sort of thinking. And I do want to get to the front-end interface side, but before we get there: you mentioned pursuing fundamental laws behind deep learning in order to understand and create this foundation for the agents you're building. What have been some of the things you've pursued in that area, as the theoretical underpinnings for this progression towards robust agents?

There's a bunch of things still in progress that I can't speak to directly, but we're definitely interested in, say, how you initialize things properly, like the muP work by Greg Yang et al. One of our researchers, with a collaborator of his, is working on understanding exactly what the right way is to parameterize these language models in a theoretical sense, but for a practical reason: if, theoretically, this is the right way to parameterize them, then the practical implication is that you no longer need to
tune the learning rate as you scale them up. This is super helpful, because it's one of the key factors, and removing some of these hyperparameters makes it much more efficient to explore the space. So that's a very concrete, simple example of a place where theoretical understanding can help you. Other places where it helps are not as easy to tie to an exact theory; they're more informed by it, more like physics. Physics didn't start with perfect theories of everything, right? We did some experiments and had a more experimental understanding of the world before we had a perfect theory about why everything worked. I think we're at that phase with machine learning as well. There's some interesting work by one of our researchers, Jamie Simon, on what's actually happening at a fundamental level when we're learning things. There's a notion from one of his papers about learnability: a network of a fixed size can only learn so many things, and it's very precise. We had another paper about self-supervised learning where you can see a sort of stepwise nature as it learns each piece of the thing. Each of these little theoretical results tells you something about how these systems work. We don't have the full picture, and the real systems are quite complicated, a bit more complicated than these smaller examples, but each piece gives you a sense of what's going on and lets you operate in this space without having to guess and check quite so much. It's not as much of a black box; it's more like a machine where you don't know the exact internals, but you know: don't make it too hot or it'll explode. Don't make your learning rate too high or it's not going to work. So you can see not just
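That muP payoff, a learning rate tuned once at small width transferring to bigger models, can be gestured at with a toy scaling rule. The exact rule below (hidden-layer learning rate scaled by base_width/width, embedding-like parameters left alone) is a rough simplification of the real parameterization, shown only to convey the shape of the idea.

```python
def mup_layer_lrs(base_lr, base_width, width):
    # Toy muP-style scheme: hidden weight matrices get their learning
    # rate shrunk in proportion to how much wider the model is than
    # the base model the rate was tuned on; embedding-like parameters
    # keep the base rate. (Illustrative constants, not muP's full rules.)
    ratio = base_width / width
    return {
        "hidden_weights": base_lr * ratio,  # shrinks as width grows
        "embeddings": base_lr,              # unchanged across widths
    }

small = mup_layer_lrs(base_lr=0.01, base_width=256, width=256)
large = mup_layer_lrs(base_lr=0.01, base_width=256, width=4096)
print(small["hidden_weights"], large["hidden_weights"])
```

The point of the conversation is exactly this: a rule like that removes one knob from the search space, so sweeps get cheaper at every scale.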
the learning rate, but other sorts of precursors earlier. You can look at various norms or other quantities to understand: is this getting too large, is this growing over time, or is this actually too small, so we can raise the learning rate later, or do we need to apply more regularization of a particular type? You can get a sense for these things even if we don't have perfect laws yet. We also get some law-like understanding out of the CARBS hyperparameter optimizer I mentioned before. We can see how these parameters change with scale, and understand not just how the learning rate, data, and parameter count change, but how very specific hyperparameters change: what depth versus width you should have, how much of this particular type of regularization exactly you should have, and how that changes. That goes back and informs: okay, what is actually happening under here? It's weird that this particular trend holds over scale; it seems like it needs less and less of this, and that's kind of interesting. Why is that? Sometimes we'll see a paper and go: oh, that fits in, I see what's going on there, that's nice. So collectively, as a machine learning community, we're starting to understand these things a lot more. When people point at neural networks or language models as black boxes, like oh, nobody understands them, I think that's quite a mischaracterization. There are a lot of people with a lot of very good ideas about how these things work. Nobody on this call probably knows exactly how a car works; I don't think you can make a car from scratch, I certainly couldn't, and modern cars especially are quite complicated. But we can use cars to go where we
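That kind of norm-watching is easy to sketch as code. The monitor and its threshold below are hypothetical, just to show the sort of early-warning check a training loop might run alongside the loss:

```python
from collections import deque

class NormMonitor:
    # Watch a sliding window of some training-signal norm (gradient
    # norm, activation norm, etc.) and flag drift before a run blows up.
    def __init__(self, window=5, grow_factor=2.0):
        self.history = deque(maxlen=window)
        self.grow_factor = grow_factor

    def update(self, norm):
        self.history.append(norm)

    def exploding(self):
        # True once the newest norm exceeds grow_factor times the
        # oldest norm in a full window.
        if len(self.history) < self.history.maxlen:
            return False
        return self.history[-1] > self.grow_factor * self.history[0]

mon = NormMonitor(window=5)
for g in [1.0, 1.1, 1.3, 1.8, 2.6]:
    mon.update(g)
print(mon.exploding())  # 2.6 > 2.0 * 1.0 -> True
```

In practice you would hang checks like this off several quantities at once, which is the "precursors" idea: catch the trend while there is still time to lower the learning rate or add regularization.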
need, and we roughly know how they work, so it'd be weird to say, oh, we don't know how cars work. I think machine learning and neural networks are a lot more like that than most people give us credit for.

[Music]

What's up, friends? Is your code getting dragged down by joins and long query times? The problem might be your database. Try simplifying the complex with graphs. A graph database lets you model data the way it looks in the real world, instead of forcing it into rows and columns. Stop asking relational databases to do more than what they were made for. Graphs work well for use cases with lots of data connections, like supply chain, fraud detection, real-time analytics, and generative AI. With Neo4j you can code in your favorite programming language and against any driver, plus it's easy to integrate into your tech stack. People are solving some of the world's biggest problems with graphs, and now it's your turn. Visit neo4j.com/developer to get started. Again, that's neo4j.com/developer.

[Music]

So Josh, going into the break you had a really good analogy about how the sophistication of cars means that, while we all use them all the time, we may not understand every aspect of them. I wanted to go back for a moment, because I've been percolating on some of the things you said earlier. You've been talking about trust and robust systems, and I was wondering: in my own life I'm very involved in the trustworthiness of models, and you talked a bit about getting good outcomes and being able to detect that. Do you have any guidance on what it means to engineer trust into model training? So many organizations that I've seen tack the trustworthiness of models on at the end, as though, oh yes, we have to do that too. And you have such an insightful and deep way of approaching the engineering,
you know, rejecting the black-box approach. Any guidance you have on how you engineer trust in from up front, so that as you get through the training life cycle you come out with something you have a high degree of confidence is what you intended it to be?

I think a lot of people are trying to do this, and there is good work to be done there. We can do things during training to improve the models and make them more trustworthy, and that's great, but by far the largest place we should be focusing is actually after training. We don't trust people because, oh, I looked at their schooling and they seem real trustworthy up to this point, so I'm going to give them my credit card and my bank account. No, we're going to be watching what this person is doing and checking things afterwards. There's a lot of other stuff that needs to happen post-training and in deployment before we can actually trust things. So for me it's a lot more about what is happening when you're actually using the model. What kind of auditing, real-time verification, user interaction, or other checks do you have in place? Can you have other systems that are checking its behavior? For an agent, maybe you'd want to predict: is this action potentially going to have negative consequences, is it potentially dangerous, or is this something the user might not want? Those seem like good things to have as totally separate systems that are completely unrelated to the development of your original model. You would not want the original model to be responsible for or connected to this at all; you'd want a totally separate thing looking at it. So I think trust is better thought of as a
set of different types of data that can give you confidence that things are going well, have gone well, and will continue to go well. You can only get so much trust ahead of time by designing the system in a particular way. You have to understand: what is that model good at, what distribution was it trained on, have we shifted from that distribution, have we shifted from the task it's good at, how well has it done over time, is it likely to go wrong in this new example? So I think it's more of a post-training, more of a practical kind of problem, and the idea that we could solve this all by making safer, trustworthy models alone is going to be difficult to succeed at.

Maybe this ties into the trust element, and certainly the collaborative approach with agents, but you also talk a lot about some of the thinking you're doing around interfaces. It sounds like you've been utilizing, or trying to utilize, some of what you're developing internally for coding and other things. What are you thinking about in terms of interfaces, and how are you dogfooding some of those things internally, to learn about interfaces beyond the AI chat interfaces we're all familiar with?

Yeah, there's been quite a lot of learning internally from using our own prototypes and internal products and demos. Without actually using them, it's hard to learn things like: okay, is this trustworthy or not? Does this actually work? What UI do I want for this? When I made some prototype that generates a bunch of code, very quickly I started to realize: that's great, but it's really annoying to review this much code. I see a lot of products out there that say, oh look, it'll make a PR for you. Yeah, I mean, how
fun is it to review a PR of a few hundred lines when a few of those lines are wrong? You have to search through it for the bug, and it doesn't tell you anything about where it is. That's just a really awful user experience. Instead, if we approach it from the perspective of: okay, what do I want as the user here? What I want is for this to be pretty interactive, and for it to tell me: okay, maybe there is a bug here. Or: you asked me to make this PR, but your ask was kind of ambiguous and I needed to make some assumptions; here are the assumptions I made, here's how confident I am in them, do you want to change them? Yes, I do. Okay. Once it's more interactive, once you're going back and forth with the user and trying to flag places of ambiguity, uncertainty, risk, etc., then to the extent that you can be correct about those, it can make the user experience a lot better.

Any anecdotes from your own internal experiences with these, things you've tried, either on the positive or the negative side?

One thing that I really like about Copilot, just as an example, is that it keeps things short, so they're easy to review. When Copilot-style tools make huge generations, it's hard to review them and trust them, which is why they normally don't. But I'm imagining people are probably going to get to a world where they realize: okay, this is kind of annoying; maybe you could point out places where there are potential bugs, just tell me which lines seem the most suspect. We, for example, made some internal error checkers and linters that will highlight things like: oh, this thing's not even imported; your editor does this for you already, right? You can also highlight things like: hey, this doesn't look
like it was actually properly implemented here, or: this function specification is kind of ambiguous for these edge cases, do you want to take a look at that? A lot of the work we've done on evaluations is related to this as well. When we look at evaluation data, most of the time when systems fail, it's actually from under-specification, not from the model messing things up fundamentally. It's more like: as a user, I didn't really decide what I wanted. So one thing that's really interesting to me is that coding is not really about pure correctness in some abstract mathematical form where there's a perfectly correct version of the program. The version of the function that you want and the one that I want are actually subtly different, and what I want in the moment might change from moment to moment as well. So the user really needs to be connected to that. And as it happens, I also learn about things where I go: hmm, yes, you did exactly what I wanted, but that turned out not to be a good idea. So the user needs to be there, able to learn and refine what they even want and what's even possible in the world.

You piqued my interest there. As a coder myself who constantly makes all sorts of errors in my code: as you do that, you're changing the workflow of how the coder spends their time, and ultimately, potentially, how they think about coding as they adjust to the new approach your tools enable. How does that look for the coder going forward? How does it change their day-to-day experience of coding? Are you able to rescue me from spending 90% of my time on coding errors, forever trying to dig myself back out of that hole?

That's really the vision for Imbue, for the company and for the work that we're doing: can we get to a place where
people, not just coders but even non-technical people, can effectively write higher-level pseudocode, or code, or intent, and actually have it translated into real code, into something that actually makes your computer do what you want? That's why, when we talk about making a new kind of personal computer, at the end of the day the thing that is missing is the ability to robustly write this software. As software engineers we can get down into the details and get everything right, and we spend a lot of time fixing our own bugs. Our goal is to make it so that as a user you can keep working at a higher and higher level of abstraction and feel confident in that. Right now you can work at a super high level of abstraction, just say "make this whole thing for me," and it doesn't work, and that's not very fun, because the thing is busted and now how do you get into the details? So how can we make it robust enough that you can work at a higher level of abstraction, trust that this part was actually correct, and have that dialogue back and forth when, okay, maybe it's not quite working like I wanted, or maybe it's not possible to do this thing, or not as easy to do it the way I wanted? How do you have a dialogue and help educate the person about what is possible, what isn't working, and where they should dig in? So it changes the workflow, and we're interested in changing this workflow in a slightly more incremental way. You could just say: oh, we're going to have the AI system do everything for you and magically try to figure it out. But from our previous experience, we don't think those types of products are nearly as good to use. Trying to fully automate something kind of is
disempowering to people, and it also results in a worse experience and a worse product. So we're more interested in this interactive, dialogue-like tool. Maybe you can just write a line of pseudocode, you get a big block out, and it tells you the one line that is potentially problematic for you to look at; or maybe it just gets it right, and great, you can move on to the next one. That's one way you can imagine writing code: writing pseudocode. But there are other ways. You might write a command like "change the file to add lots of log statements," or you might say "make this function more robust." There are lots of different ways you can interact with this. How can we give people more tools, more paint brushes, for changing code and ultimately making their computer do what they want? The thing that's really exciting about this is that when you can robustly write software, what you're really doing is creating agents that can do a huge swath of tasks. If you're not able to write robust software, then the only way your agent can interact with your computer is through things we have already programmed as actions: okay, we programmed it to go to a website and click a button, and that's it. But if it can write software, now it can do some huge set of things, even things you never intended or programmed in the first place. So for us, agents and writing code and reasoning are all intimately connected.

I have one more tiny follow-up to that. It's a personal thing I run into all the time, and having someone with your expertise here, I want to throw it at you. Does it make a difference... Most software developers, including people in the AI space doing models and such, write in Python, and usually a variety of different languages,
and as I shift from one to the other, I find that some of the capabilities currently out there are great on Python, because everyone on the planet is writing Python. But if I'm writing in something slightly more obscure, maybe even something big like Rust, it struggles to do the exact same thing it can do flawlessly on the Python side. Do you anticipate a time when that context shifting no longer matters and they're all high fidelity in what they can do, or are we always going to be dogged a bit by the obscurity issue of certain languages?

It might go the other way. It might be that, because it's so much more robust in Python, we should only ever write in Python, and so what we do is write in Python and make a Python-to-Rust converter, or a thing that compiles Python to assembly, or whatever. It might be better to double down on a really small set of things that we've made tons of data for and that work really, really robustly, because you get a better user experience. One of the things a lot of these models struggle with now is that you have different versions of numpy or Python or Ubuntu or whatever; things are different, and how is it supposed to know what version you're using? There's this combinatoric explosion of complexity that comes from all these different possibilities. So an alternative would be to say: you know what, let's not do that. Let's just say you've got Ubuntu 22.04, you've got this library version, you've got that one. If you do this, I think it might work a lot better. So it could actually go in the other direction: instead of making it more robust on all these niche things, we might say, you know what, let's all just work at that level and not worry about what language it writes. Maybe we only write at this higher
level, and we never even look at that code anymore, so we don't care whether it's in Rust or in Python. Once that happens, once we abstract it up a level, then you might be able to come back and say: why are we writing this in Python? It's not a type-safe language, it's really slow; why don't we change it to a language that fits language models better? That might be an even better future step, but it will require generating a ton of data to make it actually work. So I see that as probably a future thing, not a thing to focus on right now. That's my guess as to how it'll evolve. But an alternative world would be that it gets really cheap to just generate all this data, so we make a converter from all of our Python pre-training data into JavaScript and Rust and Elixir and whatever, all the time anyway, and we just train it to be good on all of these. I don't know; we'll see which way it goes.

Yeah, well, Chris will be happy if anything stays in Rust, I'm sure. And you'll be happy, Chris: we just started working on our official Rust client for Prediction Guard, so you can be a beta user.

There you go. It's been great to talk through this. Again, I love this concept of the full-stack approach you're taking; it's triggering things in my own mind to think through in my own work. But as you look forward, either you personally or you at Imbue, to the things that are happening this year, either in the community as a whole or at Imbue, what's most exciting for you? What do you see as a possibility coming into the future, whether that's multimodal stuff, or new types of agents, or products, or directions the community or the research is going? What stands out to you as you look to the future? I
think the thing that is going to be most exciting over the next year or two, at least for us internally and probably for other providers externally, is that we're going to make really good progress on what we've been talking about today: on actually reasoning, on robustness. Once you can get to a place where you ask a question and you get back an answer that is really correct, robust, and grounded, not just "it said yes," but it has all the right reasons and understands the nuance, like, yes, but there's a little bit of complexity here, and you can ask follow-up questions and those are also right and robust; that ability to robustly reason and answer questions is going to unlock a huge amount of work that I think people are not really anticipating. Once we really have the ability to robustly reason through scenarios, now we're talking about a lot more labor displacement and disruption than we were before. There are a lot of jobs that any of us can pretty easily break down: well, first I do this, then I do that, then I think about this. It only takes one person to do that when you have tools that are that powerful. So I think there's going to be a lot more change in this area than people are really expecting right now. That's not to say all jobs disappear or something, but the nature of work might change pretty dramatically, and we might have much more powerful tools than people are anticipating.

Yeah, well, we're really happy that Imbue is thinking deeply about those things, and in a really practical and useful way, as we look to the future. So thank you for doing that, thank you for your research, and thank you for taking the time to join us. This has been great.

Yeah, it's been great. Thanks a bunch, guys.

[Music]

All right, that is Practical AI
for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways to listen, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at fly.io, to our beat-freakin' resident Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time.

[Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | "Is everything we're doing now in AI a bandaid?" | Demetrios Brinkmann from the MLOps Community on the Practical AI podcast.
#ai #transformers #machinelearning #podcast | 485 | 7 | 0 | Is everything that we're doing now in AI a band-aid, because Transformers just aren't the right tool for the job? Like one big workaround? Yeah, exactly. Is that... am I crazy to think that? I don't think so, actually. I was talking to one of our customers about this. They have so much logic around double-checking the outputs of models, or formatting the outputs of models, and I'm talking hundreds and hundreds and hundreds of lines of code, thousands of lines of code, written all around this sort of workaround. And it's because they're using a general-purpose model that you sort of have to massage into how you want it to behave. Is it a little bit ironic that we use RAG to clean up the problems with Transformers? Is that what we're saying here?
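The workaround the guest describes (lots of double-checking and formatting logic wrapped around a general-purpose model) typically looks something like the sketch below. This is an illustration under assumptions, not the customer's actual code: `call_model` is a toy stand-in for a real LLM call, and the field names are hypothetical.

```python
import json

def call_model(prompt, attempt):
    # Stub standing in for a real LLM call. Simulates a model that returns
    # unusable free text on the first attempt and valid JSON on a retry.
    if attempt == 0:
        return "Yes, definitely!"
    return '{"answer": "yes"}'

def extract_json(text):
    # Models often wrap JSON in chatter; scan for the first balanced object.
    start = text.find("{")
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None

def ask(prompt, required_keys=("answer",), max_attempts=3):
    # Retry until the output parses and contains the expected fields.
    for attempt in range(max_attempts):
        raw = call_model(prompt, attempt)
        parsed = extract_json(raw)
        if parsed is not None and all(k in parsed for k in required_keys):
            return parsed
    raise ValueError("model never produced valid output")

print(ask("Is the pump overdue for maintenance? Reply as JSON."))
# → {'answer': 'yes'}
```

In real systems this validate-and-retry layer grows to cover schemas, enums, ranges, and citation checks, which is exactly the "thousands of lines of code" phenomenon described above.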
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Autonomous fighter jets?! | Yep, you heard that right. Autonomous fighter jets are in the news. Chris and Daniel discuss a modified F-16 known as the X-62A VISTA and autonomous vehicles/systems more generally. They also comment on the Linux Foundation’s new Open Platform for Enterprise AI.
Leave us a comment (https://changelog.com/practicalai/268/discuss)
Changelog++ (https://changelog.com/++) members save 10 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Ladder Life Insurance (https://ladderlife.com/changelog) – 100% digital — no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term coverage life insurance through Ladder. Find out if you’re instantly approved. They’re rated A and A plus. Life insurance costs more as you age, now’s the time to cross it off your list.
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Pentagon takes AI dogfighting to next level in real-world flight tests against human F-16 pilot (https://defensescoop.com/2024/04/17/darpa-ace-ai-dogfighting-flight-tests-f16/)
• Top US Air Force official rides in front seat of autonomous F-16 (https://www.flightglobal.com/fixed-wing/top-us-air-force-official-rides-in-front-seat-of-autonomous-f-16/158152.article)
• Open Platform for Enterprise AI (https://opea.dev/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-268.md) | 267 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another Fully Connected episode of the Practical AI podcast. This is a fully connected episode where we keep you connected with everything that's happening in the AI community, all the interesting and crazy news out there, and hopefully a few things that will help you level up your machine learning game. My name is Daniel Whitenack. I am the founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel. How are you doing? I am doing well mentally, a little bit less so physically. I ran a half marathon yesterday, which was really exciting, and the first sort of running-type event that I've done personally. I have to say my training was going well for a while; the last couple of months it was not going as well. So let's just say that I'm in a good amount of pain today, but self-inflicted, I guess. I'm sorry, I sympathize. I have done a couple of half marathons, but it has been a while since I've done them, and I know that at the end of those I was definitely... you sound much better than I did afterwards, I've got to tell you. Well, I've been in bed most of the day. Since you and I can see each other, but listeners can't, I will report that you look very well for someone who just did a half marathon. I looked terrible at the time. I am sitting in a chair, not moving, so yeah, that's 
the key. Excellent. Well, I guess someday we'll be doing half marathons and there will be things like robots running along beside us, maybe powered by artificial general intelligence. I'm presuming we don't have to compete against the robots, I'm hoping, because I don't think I would do very well. Or maybe I'll just have some sort of automated or augmented knees or legs put in, and I can cyborg the marathon. You know, the meniscus is the cushioning in your knees, and they've long had meniscus transplants, but maybe they'll have robotic, intelligent menisci that spring you up, push you off, something like that. You'll have that edge, and they'll have to detect it in the competition, everything being equal. Who knows where we're going with that. But speaking of autonomous systems, and in the spirit of robots, I thought I would kick us off by talking about what I've been keeping as kind of an ongoing news story, but it popped up again in the last week or so, which is the X-62A VISTA. It's a project that the Air Force has been leading with a number of companies, and, for full disclosure, Lockheed Martin, my employer, is involved, though I personally have absolutely nothing to do with it and my information is only what's available publicly. I just wanted to give my disclosure there before we get into it. I've been following the news stories on this because it is super cool. It is an F-16 Fighting Falcon, a fighter plane. They've been around for a long time; they're actually 50 years old this year. It's gone through multiple ownership changes; Lockheed Martin is the owner of the F-16 now, and for NATO countries it's kind of one of those standard, baseline fighter planes. But the reason it's an X-62A versus an F-16 in this case is that it has been 
enabled with a fully autonomous AI autopilot that's not only designed to fly the plane but flies the plane in combat, and they have been doing simulated tests for roughly the past year; I don't have all the dates in front of me. This last week it made a new splash, because in addition to the usual human test pilot, who sits in the cockpit but does nothing (they have manual controls to override the AI, but in all the tests they have not needed the test pilot to do anything, because the AI autopilot is so darn good), this past week the Secretary of the United States Air Force also flew in the cockpit. It has two seats, and he flew in the front seat with the test pilot in the back seat, neither human touching any controls, while they did a simulated combat scenario in the sky against human-controlled airplanes, with other test pilots flying combat scenarios. And rumor, according to the news reports, is that everything has just gone flawlessly; it performs exceptionally well. It's one of those moments in time where you realize... we talk about models, and often our models are just in the cloud and we're using them in apps and things like that, but this is a case where you have a model that is, in the lingo, out on the edge. It is controlling an advanced piece of machinery to a very high degree of performance. We kind of had that moment with Tesla cars doing full auto, but now we're talking about some of the most sophisticated aircraft in the world, not just little drones but big, full-on fighter planes, being flown as well as any human fighter pilot in the world, or better. So what do you think of that? I've been following it for a while, but I'm rather taken with the moment. It's really interesting in a 
number of ways. I was thinking back to, I guess it was last month, when I was in Boston and got to stop by the MIT Media Lab for an event. They had a panel with various luminaries; one of the panels was an investor panel, and some of the questions were, of course, related to AI; it was an AI-focused event. I was struck by one of the comments about this next wave of innovation in AI. The panelist was basically saying that the days of being an innovator in AI just as a model builder, as a foundation model builder, are in some ways over. What's really interesting now is embedding AI everywhere in the physical world and at the edge. Here's an example of that happening in an airplane, of course, but there are certainly other things happening in the civilian space as well, with AI assistants in the retail environment, and of course in cars and that sort of thing. Retail environments, manufacturing environments, agriculture machinery: all of these sorts of physical spaces where AI is going to be embedded. That event came to mind, but then I also know you've made some comments before, being a pilot yourself, just a civilian aircraft pilot, about the AI systems that already exist, for example for commercial airliners, and other systems that can even now do better in many ways than human pilots. But then there's always that fear on people's part, where it's acceptable for a human to make a mistake in such a scenario, because they could potentially be punished; of course, in air flight, maybe they wouldn't survive if they made a mistake, which would be really unfortunate. But for a machine to make a mistake in such a scenario is sort of unforgivable, because the machine shouldn't make a mistake. So there's kind of this double 
standard that's happening. Do you see that shifting or changing at all with some of these recent developments? I think it'll take longer in the commercial airspace. And just to address one quick thing: to the best of my knowledge, at this moment there are no AI systems authorized by the FAA in the United States to fly commercial airliners, but there's a lot of interest in and testing of those kinds of systems. I may be wrong about this, but I believe it was MIT that has a system designed for that, which has not been deployed in production; it's kind of an open system for airliner navigation and such. There's a lot of work in this area, and certainly on the military side there are lots and lots of constraints, so I don't want to represent it as "oh, you can do whatever you want." There are tons and tons of gateways you have to earn your way through in the testing. But there is definitely full-on interest in military and defense circles in using AI in just about every conceivable use case you might come up with: on the ground, in the air, under sea, in space, you name it. And, without getting sidetracked, I spend a lot of time in those scenarios in my day job away from the podcast, but many things in the military world are classified and you can't really talk about them. One of the really cool things about the X-62A program is that it's being done in the light of day. It's a news story every time something new happens, and you can go and search it and find all sorts of information about it. Over the last few years, I have become one of those people, because I've seen this a lot as a pilot: I will trust myself to AI autopilots, and trust my family's lives if it were to come to that, because they're so darn good. I've seen them as far back as 
a DARPA event that was public on YouTube in 2020. It was a simulator, but the AI pilot beat one of the best fighter pilot instructors in the world, an Air Force instructor, the equivalent of what people would know as Top Gun in the Navy, and just demolished the poor guy. That was over four years ago now, and that's the prehistoric times in AI, the way we think of AI. So I really do think we're crossing some thresholds now, and really the thing that'll hold us back is the public becoming comfortable enough to embrace the technology. And, before I draw to an end, I'm not picking on Boeing, but Boeing's problems with the 737 MAX, which is not an AI system (those are automated systems, but not AI systems), have really shaken the public's trust in automation in aircraft, in airliners, and that will slow things down. But someday, when we do have FAA-approved systems in the airliners we're all flying every day, I think we will be orders of magnitude safer than we are with even seasoned airline pilots today. I'm so sorry, as a pilot, to say that to you pilots out there; I have many good friends in that occupation, but that's just the way AI is. It's quite amazing. [Music] If you're anything like me, you have a certain tendency to put things off until the very last minute: seeing the dentist, going to the doctor, home improvements, that never-ending chore list of yours. And while most of the time it works out just fine, the one thing in life that you really cannot afford to wait on is setting up term coverage life insurance. You've probably seen life insurance commercials on TV and thought, "yeah, I'll look into that later." No, later doesn't come. This really isn't something you can wait on. Choose life insurance through Ladder today. Here's what we love about Ladder, and why 
we allow them as a sponsor: they are 100% digital. No doctors, no needles, no paperwork when you apply for $3 million in coverage or less; just answer a few questions about your health in an application. Ladder's customers rate them 4.8 out of five stars on Trustpilot, and they made Forbes' best life insurance 2021 list. You just need a few minutes and a phone or laptop to apply. Ladder's smart algorithm works in real time, so you'll find out if you're instantly approved. No hidden fees; you can cancel any time and get a full refund if you change your mind in the first 30 days. Ladder policies are issued by insurers with long, proven histories of paying claims; they're rated A and A+ by AM Best. Finally, since life insurance costs more as you age, now is the time to cross it off your list. So go to ladderlife.com/practicalai today to see if you're instantly approved. Again, that's ladderlife.com/practicalai. [Music] Well, Chris, one of the things I was thinking about when you were bringing up this story about the X-62 autonomous testing was your comment about the regulations and guardrails around the testing, that it's happening in the open; there are regulations, especially in the airspace, about testing these vehicles and that sort of thing. I was remembering a breakfast conversation I had with a group that just came out here to Purdue University, where I'm located. The company is called Windracers, and they have commercial autonomous drones, really kind of midsize drones, that do things like remote or rural mail routes. They deliver mail in the UK; they have drones that take mail out to all of these different islands in the UK that need mail deliveries, that sort of thing. But then there's also the chance to use these for disaster relief or humanitarian aid and that sort of 
thing. And I know one of the things they talked about was the struggle of finding ways to test autonomous drones, especially in the airspace. To actually make significant progress in the R&D and testing, you have to be able to take flights over significant distances. And here you see these tests happening on the military side; I know there are differences between civilian and government in the ability to test things and the availability of airspace. But how do you, as a pilot (maybe you're more familiar with some of these regulations than the rest of us), see this technology being able to develop over time with such restrictions around testing, and how could that be eased in a reasonable way without undue issues and danger? Because obviously, drones flying over populated areas are definitely an issue, but at some point there's going to have to be a drone flying over a populated area. Indeed. To start off with, I certainly am not an expert in that; I have some very loose familiarity with the process. The military has its own dedicated airspaces; there's military airspace all over, but especially out west, places like Edwards Air Force Base and a number of others, where you have literally hundreds of square miles you can do testing in, and obviously there's a long history of that since the dawn of flight. The FAA is very aware of the need to innovate on this, so basically you have to apply for what you're trying to do and show them that you've done due diligence on the engineering safety and all the concerns around that. I follow a lot of aviation news, so I've read about a number of these programs that have come into being. They give you a little bit 
of leash, and you have to earn your way through a number of gateways, where you successfully do something at very small scale, very small scope, and work your way up. But it seems to me that that is happening more and more, and in some cases, if there is a military utility to doing it, there can be coordination with the military, taking advantage of military airspace to have more room, things like that. So it seems, though obviously government agencies are not the speediest things typically, that there are opportunities even for private businesses to get some support in that way. They know it's coming. Yeah. This is probably something we could refer people back to our previous episodes with Jake and others on. It's unlikely that we'll be seeing the skies filled with weaponized autonomous drones doing whatever they want; there are a lot of hopefully responsible people thinking about these things. But the main interesting piece here, both on the commercial side and on the military side, is the ability to increase safety and keep human pilots out of dangerous situations; that seems to be the focus of a lot of this. Now, there are probably those out there who can imagine all sorts of scenarios of misuse, but from our previous conversations with people, I at least have some hope that there are reasonable and thoughtful people who are part of these programs. Yeah. At the risk of sounding like an apologist, I point out to people that there are a lot of safeguards. To that point, I work in defense. I come home (well, I mostly work from home), but I have my family and my dog and everybody else. Whether they're in the military or civilians supporting it, they have their families and their kids and all that. So the notion 
that there are dark military minds behind closed doors is, in my experience, a fiction. When we get on the phone, even for a business thing, we're talking about the same things everybody else talks about: the weekend, my dog wasn't feeling well, my kid was staying home from school, whatever. So I'm very encouraged in that way. It's normal people running these programs, and they have different motivations, obviously, depending on where they're at and what organization they're with. There are things I get worried about with AI going forward, but that's not one of them. Yeah, I might refer people back to our episode "Leading the charge on AI and national security" with General Jack Shanahan, retired US Air Force. A really good episode, too. If you want to get a sense of someone who was leading the charge on the inside for a good long time, I would recommend that episode. Being a civilian myself, it was good to have a chat with him. Yeah, General Shanahan, who is now retired (that was a recent episode as we record this), was the original hard charger for AI in the military, and even though he's retired, he's still considered one of the top experts and influencers. So I hope people check that out. Well, I don't know if this was widespread news, but I thought it would be a cool thing to highlight for people. You were talking about this further testing, and I'm sure some of that testing on the autonomous vehicle side involves standards, best practices, and frameworks; all of that's necessary to really advance a technology from R&D to prototype and beyond. I think we're seeing some of that on the enterprise AI, generative AI side of things as well. In the last couple of weeks I was informed about this project, which is now a project at the Linux 
Foundation. The project is called the Open Platform for Enterprise AI, abbreviated OPEA, which seems like an unfortunate and awkward acronym. I was trying to think how to pronounce it. OPEA? I don't know. I see you avoiding the obvious high-school way of saying it. Yeah, not the greatest of acronyms. The Linux Foundation has this AI and Data Foundation (if you're not familiar with the Linux Foundation, you can look it up), and this Open Platform for Enterprise AI is a very collaborative initiative, it seems. Just some of the companies involved, not all of them, to give you a sense: Intel, Anyscale, Cloudera, DataStax, Domino Data Lab, Hugging Face, MinIO, Zilliz; a bunch of different companies that you're probably familiar with, certainly ones we've talked about on this show. There are a few interesting elements of this Open Platform for Enterprise AI, but the general goal, the way they frame it, is that it aims to facilitate and enable the development of flexible, scalable GenAI systems that harness the best open source innovation from across the ecosystem. That's kind of vague in terms of where they're going with this, but if you look a little deeper, I think there are some really interesting things in where this could lead. One is that they recognize certain common, developing archetypes, or main use cases, where people are using generative AI, for example the RAG (retrieval augmented generation) workflow. They're taking that RAG workflow and creating blueprints for the various pieces involved in an industry-standard, advanced RAG workflow: not just a naive RAG workflow that you might play around with on your laptop, but something that could be deployed in the enterprise. So they have some blueprints, or architecture-type artifacts; I 
think there'll be more of that developed. Those architectures or blueprints have certain components within them, for example a retrieval or ranking system, an embedding model, guardrails for models, fine-tuning systems, or a vector database. If you follow the link to the GitHub repos related to the OPEA project, I noticed a few categories of things that aren't quite complete yet, but that they're building in public. There are reference implementations of industry-standard ways of doing certain things: chat with your docs, a code generation assistant that you can plug into Visual Studio Code, document summarization, and visual question answering, and those reference implementations use open source components in an industry-standard way. Another is that they seem to be developing a series of open microservices that can be plugged in as these various components. And finally, a set of evaluations: they have a repo for evaluation benchmarks and scorecards targeting performance on throughput and latency, and accuracy on popular evaluation harnesses for safety, hallucination, and other things like that. Putting all of that together (I know that was a little bit rambly), their focus seems to be on these blueprints, reference implementations of the pieces represented in those blueprints, and then enterprise-level evaluations for performance and issues within these systems. So this definitely seems encouraging, to see a lot of collaboration on this and the support from the Linux Foundation. Yeah, with the Linux Foundation being one of the most reputable open source organizations in the world, certainly in the top few, it's really important that 
initiatives like this come into being. The reason is that in the business world, as I know you see in your company and as I certainly see when talking to people at different companies, everyone out there is trying to find their own way into implementing generative AI solutions. How do you put it together? How do you architect it? I have my own thoughts around that, and I know the company I work at has its own thoughts around that, and I end up talking to people at different organizations who are struggling with many of the same problems, but they come to their own solutions based on however their team wants to approach it. As we know from before generative AI, and even before AI came along, this is an early point in the development of any technology, in software or anything else, where everyone goes off and does their own thing; while that might scratch the immediate itch, it creates a whole new set of problems as they have to grow and integrate with other organizations. So what the Open Platform for Enterprise AI has to offer looks very promising, and I would encourage organizations out there to take a look at it. Whether you adopt it or not, maybe it helps frame how you're choosing to solve problems, in a way that might make situations down the road, that you're not thinking about yet, a little easier to cope with. Well, Chris, as we look back at the recent newsworthy AI stories happening all over the place, both in terms of large language models and GenAI and otherwise, one of the themes that seems to be coming into its prime is video generation. I don't know if you've been following this, but I saw something from Microsoft, I saw something from Alibaba, I think, and of course there 
was the OpenAI video generation work, and there have been things from Runway ML. So what are your general thoughts on where all of this video generation is going? A couple of thoughts there. I don't think it should surprise anyone at this point who's following the industry. When we were doing our predictions for 2024 last year, we were saying this would surely come next, because we had seen it with still imagery, and the rate at which things are progressing from a quality standpoint is so fast. It was not long ago that OpenAI released Sora; that wasn't long ago at all, and we were going, "oh wow, look at this first thing," and now there are many options available after just a few weeks. I've been somewhat amused by the public reactions and the concerns about AI safety, about deepfakes being so much better now in 2024 than they were a year ago. We're going to have to adjust, take it in, recognize the utility, and come up with some safeguards for it. It was kind of obvious to those of us following this week in and week out that we'd be here, and now we're here. I'm waiting to see some of the more interesting, creative, productive things that people are going to put this to; I'm really looking forward to seeing some meaningful utility come from it. And, so people can go out and look at these things: one is called VASA-1, which is the one from Microsoft Research, and the tagline there is "lifelike audio-driven talking faces generated in real time." This was an interesting one. It almost reminded me of the sort of videos I've seen from Synthesia and these other companies that help create talking heads, essentially, for 
marketing videos or training videos, that sort of thing. Very impressive stuff. You might have seen something going around on Twitter or LinkedIn; people always try to make the Mona Lisa's face talk, and that was one of the examples they had, which seems to be a given you'd try if you're working in this space. The most recent one I saw wasn't anywhere close to being the best; I saw it maybe a week ago, and it was pretty cheesy. But we've truly arrived in 2024 if you can have video, certainly at least talking-head video, that is indistinguishable from a person. If you were to compare two or three real people and two or three AI-generated ones, mix them up, and have people choose which ones are which, I know I probably could not do that successfully. I might get lucky and pick one or two, but we're getting there. So I really am curious to see how these are put to use beyond the novelty of seeing them finally arrive after we've talked about this stuff for a while. We like to talk about AI for good; instead of people worrying strictly about the security concern, which is legit, I'd like to see some people do amazing things with this that benefit people and humanity at large. I'm excited to see those use cases, and if anybody out there has something, please point us to it, because those are the use cases I'm waiting to see. Yeah, and the one from Alibaba, if people are searching, is just called EMO. Alibaba's EMO and VASA from Microsoft, if you want to take a closer look. It kind of reminds me, Chris, of when DALL·E came out, the first one, and then it was 
like DALL-E, Stable Diffusion, and there just seemed to be this snowball really quickly of image generation things. It seems like we're in a similar cycle right now with the video generation stuff, and eventually it'll be integrated into our chat interfaces and other things. I don't think it's going to be long at all to get to that point. I think we're going to be amazed at how fast those get integrated, because they keep building on themselves, and the one thing we've noticed over the last two years is the acceleration in the development. We'll say something will come out in the next year, and then it comes out two months later, and a couple of times we've said, well, we predicted it, but we were wrong on the timing. I think it's going to happen pretty darn quick. And to illustrate that, though it's not specific to this use case, Hugging Face announced this past week that they had crossed the one million mark: there are one million AI models hosted on Hugging Face. Congratulations to Hugging Face and the team there; that's amazing. It wasn't that long ago that they were nowhere close to a million, but they keep accelerating, and they'll hit ten million in no time, I'm sure. But to your point earlier, I think it's not just going to be seeing these new technologies coming out where we're looking at the demo; I think for the second half of 2024 and into 2025 there'll be such a huge push at getting models integrated into real-world scenarios, what we would like to say is at the edge, in all sorts of different contexts, and that's really, quite honestly, what I'm excited to see. Instead of just a talking head with audio that's indiscernible, I want to see that in some good contexts, in places that we're not used to seeing them, that make a big
difference, and so for me that'll be a cooler milestone than just seeing the demo up front. Yeah, it does seem like there are some big possibilities in spaces like education and other places where, hey, you have some text content, you have some sort of curation in place, but creating very appealing and realistic-looking educational content that would fit certain scenarios is hard. There's tons of self-study stuff online; some of it has better video quality than others, but also some of it's at a certain level. If you have one set of content, say a professor records a video course, you'd have to watch it for an hour every day for many weeks, maybe. But if you can repurpose some of that content to answer questions and create engaging courses in different, shorter forms, or for different age levels and that sort of thing, and some of that was still able to be video, still be engaging, but not take a huge amount of video production to create, which is very expensive and time-consuming, I could see a lot of possibilities there. There are probably many others; I'd love to hear from our listeners if they have ideas about this. We'd love to hear about them in our Slack channel, if you want to join, or elsewhere. Just to illustrate that for a moment: we've talked about education use cases many times, both in how it intersects with traditional education (you know, I have a daughter in middle school) and also things like continuing education for grown-ups who are continuing through this ever-changing world that constitutes our careers. It's very easy to leap from the VASA example that we're talking about, with the talking faces being generated in real time, as they note, to thinking about every kid in school, potentially, as things are transitioning forward and
we still have traditional education paradigms that most kids are involved in, but maybe every kid has their own personal teacher in addition to a classroom teacher, and that personal teacher explains the math in a way that that student understands, compared to the student next to them. You get a lot of personalization and support that way. That would be wonderful to see, so that kids aren't left behind; if you don't understand it the way the teacher's explaining it, you don't have to struggle, because you already have your personal assistant. So there are many thousands of use cases along those lines, and that's the kind of thing that I'm pretty excited about for the future. Cool. Yeah, well, as we draw things to a bit of a close here, we normally try to provide a learning resource for people in these Fully Connected episodes, and I want to share one today. We've been doing a bit of experimentation of our own, Chris, with these Practical AI webinars, which I think we've been calling GenAI Mastery. We've done two at this point, one related to text-to-SQL and one related to private chat UIs, and I think it's been a good experience so far, at least to motivate us to do it a bit more. We're really trying to make these webinars a live, good learning experience for people, something where we have a visual component with some hands-on that you don't get in just the audio podcast scenario. We do have another one of these planned, and I would highly recommend that you go to tinyurl.com/genai-mastery3 (we'll put that in the show notes as well) and sign up for this next one. It's going to be about multimodal AI. We're finalizing the guests, but I think I already know who they're going to be, and it's going to be a sort of rock star there helping us learn about multimodal AI, doing cool things with video, as we've been talking
here, but also imagery, and tying together text prompts in there as well, for kind of multimodal RAG sorts of systems. So if you're interested in that, definitely sign up; it's going to be a great experience. We'll have that link in the show notes and look forward to seeing everyone there. Yeah, it's a lot of fun to do those sessions, because it's live, real time, and everybody can see everybody else in the chat, and there are real-time communications as we're doing them that make it pretty special. Yep. All right, Chris, well, it's been fun. I hope you can enjoy the rest of your weekend, and we'll talk to you soon. Take it easy, Daniel. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat freakin' residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Private, open source chat UIs | We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).
Leave us a comment (https://changelog.com/practicalai/267/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Danny Avila – Twitter (https://twitter.com/lgtm_hbu)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Register for our next webinar (similar to this one) focused on multimodal AI (https://tinyurl.com/genai-mastery3)
• LibreChat (https://github.com/danny-avila/LibreChat)
• Prediction Guard (https://www.predictionguard.com/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-267.md) | 398 | 7 | 3 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to our next Practical AI webinar. This is our second webinar or live event that we've done, Chris; we've always done the podcast pre-recorded, and it's been fun, last time to talk about text-to-SQL live, and now to have another chance to do a live webinar. Are you enjoying these? I am enjoying them; I think we need to do these more often. They're a lot of fun. Yeah, and today, I'll kind of frame the conversation, but first let me welcome Danny from LibreChat; he's joined us to talk about the topic today. Thanks for joining, Danny. Yeah, of course, it's an honor to be here. Well, thank you for everything you're doing on the LibreChat project and in the community. Today, as those that are in the webinar know, and maybe if you're listening later, we're going to be talking about crafting the next generation of AI chat interfaces. Just to frame the setup for this: we've talked a lot of times, and I still frequently, even this week and last week, hear about people saying, "Oh, my company doesn't let me use the ChatGPT interface." Even literally today, like three hours ago, I was in a meeting with a number of customers, and the question was: how do I get a chat interface that allows me to switch between models and try different things with different models? Like, if I want to try Llama 3 or something like that, are there ways for me to do that? Are you still encountering that as well? Literally every day, and
I'm very sensitive to it. You may recall that I was getting after teachers online, and a teacher pointed out that, hey, this is not our choice, it's the school system. So I've been super sensitive to this ever since then, and we have challenges; I'm hoping today can put a big dent in that one. Yeah, great. Well, I am super happy that we have Danny with us, because this is what Danny has devoted a huge amount of energy to with LibreChat, both in terms of providing an open source chat interface and providing a chat interface that allows you to plug in different AI systems, whether that be OpenAI or many others, closed source and open source types of models and systems, and providing even functionality related to, I think, RAG and plugins and other cool stuff. So I'm going to pass it over to Danny and let him share a little bit about the background of LibreChat, what it is, how he views the need for private or open chat interfaces, what they're trying to accomplish with LibreChat, and how they see that fit into the industry and what's sort of needed. So over to you, Danny; looking forward to this. Yeah, absolutely. You know, part of the original idea was kind of inspired by a ChatGPT leak. I don't know if you remember this, but there was someone whose messages were being seen by a different user. Yes. He was from Poland or Russia, and someone woke up one day and all their messages were in Polish. That surprised me, but it really planted the seed for what I wanted to see. I thought that was just a basic thing to overlook, and I just started crafting from that impetus. And yeah, I think it's inherently completely private, with the flexibility of having remote stuff in there too, which I think is important. But honestly, it started off as a learning experience. ChatGPT had just come out and rocked everyone's world, so I was like, wow, I
really want to learn how this interface works, like, what are they doing here? I had just started learning about UIs in general and web development, so it really interested me, and it was such a huge tool in my learning as this thing was being built out. But also there was that need right away, because I posted it on GitHub, and the next day I had six stars, and I was blown away. I was like, what, six stars? That's great; who's looking at this? It was already getting picked up by search algorithms or GitHub's algorithms, and I was just totally blown away by that and totally motivated. A lot of people started commenting right away about what they want to see, and I think having access to these tools sooner rather than later is going to be a huge thing, no matter what your team size is or what your business is. Obviously you have to tread carefully with the privacy side, but I think I've built something battle-tested at this point that, thankfully, is as much a contribution from, if not more so a contribution from, the people using it, who really helped me along the way. So you mentioned the initial problem that you saw that motivated you to go down this rabbit hole, which was seeing others' messages, which is definitely a piece of it in terms of how an application like this manages state and data and that sort of thing. What did you find going down that rabbit hole? Is there a way that you can categorize the main things that people have come to find useful about having their own chat interface, rather than one provided by a model provider? Data and privacy, but what are those main features or things that are on people's minds when they're thinking about having their own interface? I think for me, too, it's just owning your own data,
and data is like the new commodity; it's so valuable, even to these big AI companies. They're constantly releasing their own interfaces, which are cutting edge, I might add, but they're also looking to collect data. A trend I want to see in tech, and especially from the open source world, is just owning your own data: it stays between you and these large language models and your company, and you really have that luxury through this app. So that's a big driver for me, and I think that's a big component for a lot of people. And as you're learning from these things, I think it's so valuable to categorize and piece through certain conversations you've had in the past, and that's why one of the main features that's been a mainstay since the very beginning was being able to search your messages. To this day it's not a feature on ChatGPT or many other interfaces, so I think that's very interesting to see play out, but I also know a lot of people for whom just that one simple feature gets them on board. I'm curious, as we talk about the applicability of this for, for instance, large corporations; I work for one of those large corporations, which built its own interface some time back, and others will have as well. But going forward, that takes a lot of maintenance, a lot of concern. If you were in front of a chief digital and AI officer, a CDAO, for a large corporation that may have already created its own, what would be your pitch for saying, you know, come over to LibreChat because of X? How would you convince the Fortune 500 companies that are out there that this is the way to go, rather than investing on their own, compared to whatever investment they've already made, that sunk cost? Number one, it's completely open source; it's got a lot of contributions, and there's nothing being hidden in terms of its interoperability. And also, number
two, it's highly configurable with any kind of internal network you might want; it could be completely sealed and even work with large language models without needing to hit some kind of remote service, entirely on local connections. It really just depends on the admin's level of expertise in connecting all these things, but even just using the default Docker Compose you can spin up something that's only available to you, and if it's configured in a way that uses insecure default variables and things like that, it'll warn you right away. So yeah, I think those are the top things I'd say to try to convince someone. Well, I think people are eager to see a chat interface; we've been talking about them. So we'll let you go from here and show what you want to show, and I'm sure we'll have some questions and thoughts as you're showing these things. If you could talk through the demo and what's on your mind as you're thinking about the different features that you're showing... All looks good. Great. So yeah, I'm running this locally and I'm using Ollama; I have it hosted on my computer, so I'll just write "hi" there, and it usually takes a second to load up. And Ollama, for those that aren't familiar, do you want to describe that for just a second? I guess it's hard to find a specific term for what it does, but basically it helps you manage local large language models: it helps you pull down their latest build files, it helps with the prompt-wrapping process, and it serves them on an API, so it just makes them really accessible wherever you can run them. Cool. A second later it finally got to me, and now that it replied, the next couple of replies should be a little quicker. Basically, this interface should look pretty familiar to a lot of people, unabashedly taking a lot of inspiration from
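The Ollama workflow described above (local models served on an API) can be exercised with a short sketch like this, assuming a server on Ollama's default port (11434) and a `llama3` model already pulled; the helper names are our own, not part of Ollama:

```python
# Minimal sketch of talking to a local Ollama server via its /api/chat endpoint.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(model, messages):
    """JSON body for Ollama's /api/chat endpoint (streaming disabled)."""
    return {"model": model, "messages": messages, "stream": False}

def chat(model, messages):
    """POST a chat request and return the assistant's reply text."""
    data = json.dumps(build_chat_payload(model, messages)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

if __name__ == "__main__":
    # requires a running Ollama server with `llama3` pulled
    print(chat("llama3", [{"role": "user", "content": "Hi there"}]))
```

This mirrors the "hi" demo in the transcript: the UI's job is mostly building that payload and rendering the reply.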
ChatGPT. There are just a couple of core things, like I mentioned: the message search, and if I search here it's already picking this up; it's a previous conversation I had where I was testing some file. Aside from that, going back to this, this is kind of segmented, but it's for people who have a need to set more custom parameters, or even just set instructions here, making sure that it generates what they want to see. So I'll say "make sure to write code in markdown," and I'll say "write me a recursive Python function." Yeah, so it's doing the job. Whether or not it needed my instruction, it depends, but it had it, and it steered it right away to use markdown, which gets rendered like this. It looks beautiful, and it even has the copy-code button and the nice edit button, copy, all that stuff that one would expect. And really it's pretty simple too, and I like that simplicity. I think I've seen a lot of interfaces get lost in the technical side of it, and I'm sure that has an audience, and those are great interfaces for certain technical things, but something about this is just immediately accessible, I think. And of course, we mentioned that we could switch AI providers. I heard someone recently call this the "gotta catch 'em all" Pokemon of AI, but these are just a little showcase of all the different ones we can use. I personally like Groq, just because of its speed; it's blazing fast. This is running Llama 3 70B, and this was also a switch. So in the interface that you're showing, you're having a conversation, and it also was a switch between Ollama and Groq in the same thread; am I understanding that right? Yeah, correct. That's awesome. And does that message thread history carry through to the different models? Yeah, so that's where the database comes in, just keeping track of the conversation, not just
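The mid-thread provider switch described here boils down to keeping one message history and pointing it at whichever backend the user picks. A minimal sketch, assuming OpenAI-compatible chat endpoints for both Ollama and Groq (the base URLs and model names below are illustrative, not LibreChat's actual configuration):

```python
# One conversation history, routed to whichever provider answers next.
PROVIDERS = {
    "ollama": {"base_url": "http://localhost:11434/v1", "model": "llama3"},
    "groq":   {"base_url": "https://api.groq.com/openai/v1", "model": "llama3-70b-8192"},
}

def build_request(provider, history):
    """Build an OpenAI-style chat completion request for the chosen provider."""
    cfg = PROVIDERS[provider]
    return {
        "url": cfg["base_url"] + "/chat/completions",
        "body": {"model": cfg["model"], "messages": history},
    }

# The same history object is reused no matter which provider is selected,
# which is what lets the thread's context carry across a switch.
history = [{"role": "user", "content": "Hi there"}]
first = build_request("ollama", history)
second = build_request("groq", history)  # mid-thread switch, same context
```

The database LibreChat mentions is what persists `history` between requests; the switch itself is just a different URL and model name.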
the back and forth, but also any changes you might make. So if I make changes here on the fly, that'll get recorded with the conversation state. Yeah, I love how you can stay in the context of the problem that you're trying to solve and yet still optimize across different models, in terms of what they're better and worse at, on the fly, without that taking over and becoming the primary concern. Very nice. Thanks. Yeah, and that brings up some really good user feedback I got: maybe along those lines there could be a smart router that knows, or that you could pre-configure beforehand, which is the best AI for this sort of task, and it just switches for you. So maybe that's something down the line that I'm still drafting in my head. But of course, as these things evolve, there's, like we mentioned, RAG. A lot of people have the expectation for files to work with these things, and of course LibreChat supports that. Here I just dropped a CSV, and I'll say "tell me about these sales." This was just mock data I made up; it's about Gadget B, different sales data. So it was able to look at that and give some context about it, and of course I could switch the model just as before. So I switched to Cohere here, and it didn't give as good a response as GPT-4, but it was able to see the data and work with it. And even the file processing is all based off a local RAG solution: it's using a local vector database and a local server that's just dedicated to the files. And yeah, that's one of the things there. One of the things I'm particularly excited about is agents and agent workflows. Of course, OpenAI recently made a solution of their own with Assistants, and I think people are still discovering the capabilities of this, but it's exciting for me, not just working with what AI companies
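A toy version of the local-RAG flow just described (chunk the uploaded file, retrieve relevant chunks, stuff them into the prompt) might look like the following; bag-of-words overlap stands in for a real embedding model and vector database, and the mock sales rows echo the CSV demo:

```python
def embed(text):
    # Stand-in for a real embedding model: a bag of lowercase tokens.
    return set(text.lower().split())

def retrieve(chunks, question, k=2):
    # Rank chunks by token overlap with the question; a vector DB would
    # do nearest-neighbor search over real embeddings instead.
    q = embed(question)
    return sorted(chunks, key=lambda c: len(embed(c) & q), reverse=True)[:k]

def build_prompt(context_chunks, question):
    context = "\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Mock sales data, echoing the CSV demo above.
chunks = [
    "gadget b sold 120 units in march",
    "widget a sold 40 units in march",
    "office rent was paid in april",
]
question = "how many units of gadget b were sold"
top = retrieve(chunks, question)
prompt = build_prompt(top, question)
```

Swapping in real embeddings and a vector store changes `embed` and `retrieve`, but the prompt-stuffing shape stays the same, which is why the same flow works across different chat models.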
have as cutting edge, but also as inspiration for the open source side, because I think they model this really well, and it's giving me ideas of how we could do this with Ollama, how we could structure something like this with the latest Meta AI models. I have a test prompt here, and this is how I generated my sales data from before, but I'll add something here: "finally, output the file you created." So this can not just write code but execute it, within OpenAI's sandbox, so that's really great for things like data analysis and just generating mock data like this, too. It might take some time while that's running. There's a question: did you test your local RAG system against others, like OpenAI's? So maybe some interest in the performance of that RAG system with a variety of models, and locally versus those built into closed systems. I did, with OpenAI especially, because they have a RAG system through Assistants, and to be honest, right now it's not doing anything too special; it's what they call naive RAG. But I found that even with naive RAG, if you have a really good prompt, and I tested several different iterations of that prompt, you can really get something effective almost across the board, with any LLM. And with OpenAI's solution it's really a black box; you can't even see the prompt that's being generated, so I'm not sure how to steer it better. Yeah, transparency is an issue there. Well, I guess it's nice when things go right, and it's sort of automagical, but when things aren't going right, you really wish you could understand a bit more. Totally. So where are you in terms of your multimodal chat story? How far along are you in terms of what you're trying to get to? One of my main goals right now is to offer even more access controls and configuration over the interface experience that admins want to create. So, for example, I understand that, you know,
especially the first time someone logs in here, they might not know, oh, what are all these models? I mean, I recognize Google, I guess, but they would need a few more clues, or they might not even think to click here for the model. I really want to see an update, and I'm actively working on this, where there's just one dropdown and you get a bit more info on what you're selecting and what it's good at, like, okay, this one can search the internet, and I need that for this task. And also just being able to control which users can access what, because that's a pretty big need, especially in the enterprise setting. But in terms of multimodality, in the sense of AI being able to work with different formats, I think down the pipeline we'll see integrations with videos, but right now we're handling vision with images, and that's been pretty useful, a huge help for me. So, we started exploring LibreChat at Prediction Guard because a bunch of our customers who are using Prediction Guard wanted a private chat interface. Prediction Guard itself is a platform that allows you to run large language models in a private, secure environment, with safeguards around them for things like factuality and toxicity and prompt injections and a bunch of other things. And so our customers are all those kind of privacy-focused, security-conscious customers who are maybe running Prediction Guard on their own infrastructure and want a private chat interface for the models that they're hosting with Prediction Guard, or they want an interface that's not a closed one for usage of our models. And so here what you can see is we've taken LibreChat, which, again, Danny mentioned is open source, and we've been able to take it into our kind of branding, and we have Prediction Guard here, where you can set your API key and use Prediction Guard running on top of our platform. And because it's open
source, because it's transparent, we were able to take this and also integrate our own sort of flair into it. I know an engineer from our team, Ed, and Danny worked together, so thanks for that, where we were able to integrate some of these checks for things like toxicity and integrate our various models into the mix. So, still kind of like Danny was showing in terms of running here, I'm running with Neural Chat 7B. This is running in a privacy-conserving setup in Intel's AI cloud on Gaudi 2 infrastructure, so it's a very unique setup that we've optimized, and we're able to connect to our own model and use this really slick interface, which is LibreChat, just branded a bit with our colors and logos and that sort of thing. But also we can integrate the unique features of our take on an AI system. So let's say I'm really concerned, because I'm using an open model that doesn't have some of the guardrails around it like closed source models: I can go into the config here and turn on a toxicity filter to make sure that the model isn't cursing me out or giving me any sort of stuff that I don't want to see. And so here you can see we have a little toxicity score; thankfully it wasn't very toxic this time around. Continuing, similar to what Danny was showing, but again our own take on that, with our models and the safeguards around them. One cool thing that we found really useful is that a lot of our customers want an interface like this, but they also want it authenticated, to fit the systems they have set up. We're a G Suite company, so we've integrated Google login here, and it's only our org that can log in, the Prediction Guard org, and now I'm authenticated. Here's my chat, which, like Danny mentioned, is private and searchable. So yeah, this has been a really amazing thing for us, where we've been able to take and build on the
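The toxicity-filter step shown in the demo amounts to a post-generation gate: score the model's reply, withhold it past a threshold. A minimal sketch; the word-list scorer is a toy stand-in for a real classifier (such as Prediction Guard's hosted check), and the blocklist, threshold, and placeholder message are all made up for illustration:

```python
BLOCKLIST = {"idiot", "stupid"}  # toy list; a real filter uses a trained model

def toxicity_score(text):
    """Fraction of blocklisted words: a crude stand-in for a classifier."""
    words = text.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

def guard(reply, threshold=0.1):
    """Return the reply, or withhold it if the score crosses the threshold."""
    score = toxicity_score(reply)
    if score > threshold:
        return "[response withheld by toxicity filter]", score
    return reply, score

clean, clean_score = guard("Happy to help with that.")
blocked, blocked_score = guard("what a stupid question")
```

Because the gate wraps the reply after generation, it can sit between any model backend and the chat UI, which is how a check like this slots into an open interface without touching the model itself.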
great open source stuff that Danny has built at LibreChat and create something that works really well for our customers and for our setup. Before I stop screen sharing, I saw that there was a question earlier about translation with language models. A lot of what we've been showing is English, right? Some language model providers, like OpenAI, say that they'll do other languages, but sometimes that doesn't always work out. So we have a translate endpoint in our API, and we've done a bit of testing of large language model translation against standard translation systems like Google Translate and Bing Translate and others, and even other models like NLLB, No Language Left Behind, from Meta. In our translate endpoint you can send a translation request and get the result along with a score; we're using COMET scoring, which is a way to score translations. I think the question was how well large language models translate and are able to chat in different languages versus machine translating with a commercial translation system. What we've seen in scoring both commercial translation systems and large language models is that some large language models, depending on the language, like if you're going into Hindi with OpenAI, might give you a good translation, or one that is comparable to Google Translate, a small amount of the time, like 5 to 10 percent, but mostly the commercial translation systems are generally better, and definitely as you go down the longer tail of languages it gets worse and worse. Even in chat, in a language like Mandarin, a lot of models don't do so well, even though that's kind of the next-highest-represented language in data sets out there. So yeah, it's definitely a mixed bag there. I don't know if Danny or Chris have a comment on that before we go to other questions. I'm good. Some other questions on the LibreChat side: are
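The compare-by-score workflow described here (score each candidate translation, keep the best) reduces to a small selection step. The scores below are made up for illustration, standing in for real COMET output from a quality-estimation model:

```python
def pick_best(candidates):
    """candidates: (system_name, translation, quality_score) tuples."""
    return max(candidates, key=lambda c: c[2])

# Illustrative scores only; a real pipeline would compute these with a
# trained metric such as COMET rather than hard-coding them.
candidates = [
    ("google-translate", "Bonjour tout le monde", 0.84),
    ("llm", "Salut le monde", 0.71),
    ("nllb", "Bonjour le monde", 0.80),
]
best_system, best_text, best_score = pick_best(candidates)
```

This is also the shape of the finding in the transcript: per language and per input, the commercial system usually wins the argmax, but not always.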
you building tools for LLM evaluation, since you have all the comparison models out there? I think they're imagining, oh, I can switch between models easily in this interface; how does that help me, in an interactive way, evaluate the performance of different models? But there's probably a non-interactive version of that. I guess, toward your roadmap, you were describing earlier the idea of automatically switching to optimize, so this would be an incremental step in that direction. Yeah, absolutely. Back to data ownership, I just think it's absolutely crucial to have some kind of pipeline built for evaluation as well, especially if you're really into fine-tuning your own models. It's crazy to think about, but even the data that we have just casually with these large language models, if it's a very capable model, is almost like a gold mine for the next model, and having that in your own ownership, not just with some cloud service, matters. I definitely want to start it off simple, being able to do thumbs up, thumbs down, but then being able to integrate complex evaluation tools like you guys already have with the toxicity score or the translation rating. I think that's awesome. Cool. Could you talk a little bit, this isn't one of the questions, but I saw in your documentation discussion of plugins; could you talk about that a little, like what that means in the context of LibreChat? So they are inspired by ChatGPT's use of plugins, and really what the AI services now refer to as tools or functions. It's just a way to be able to interact with some algorithm or API that's programmed there already, and you're letting the model decide the inputs and then interpret the outputs. I'm using it, obviously, in the plugin system, where you can make requests to DALL-E
or Stable Diffusion for image generations, you can search arXiv papers, things like that. But really, when I came up with the plugin system specifically, it is almost a year old now, which is crazy, but I actually developed it before OpenAI had these functions in their API. So in the process I learned kind of deeply how these LLMs were understanding certain tokens a little better, for like formatting, and now we have such a rich environment for getting only JSON responses, or being able to use tools with Anthropic. So I've got a lot of things planned there, where I want to see just that tool environment really grow, and also, for people who are building on top of LibreChat, I want to see better documentation and better developer experience in adding those extra tools, where, you know, this is a tool only my company can see, and just being able to plop it in real quick. I'm looking at your GitHub and notice 117 contributors listed there. Has your community built up around this, and evolved, going from sole developer in the beginning to now a group of people that are actively contributing at some level? How has that changed the project, and changed how you're spending your time, to fulfill the expectations of so many people and all the folks that they're serving in turn? Yeah, it's been amazing. I've learned so much in the process. I think I need to be conservative with my estimates on getting things done so I can address contributions and things like that, and that is definitely a thing I want to keep, if anything, devote even more time, because some people are making really great things. I'll even shout out Marco, who's constantly contributing things, and there's things that he just gets to so much quicker than I do, that I don't quite find time to review, but it's sitting there, and it's great, it's already working, and I want to dig in the weeds a little 
bit. But also, I think that's really what's helped the project explode, that there's such an openness to what people want to see in it. You know, I just had someone today say that it was their first open source contribution, and I just thought that was really cool to see, just kind of people learning in the process. And I was there too, I was always kind of daunted by contributing to anything, so just seeing people step in the water, I definitely want to foster that more. I'm curious, as kind of a follow-up to that: has there been a point where you've seen adoption occurring, and as part of that adoption, you know, not that you have favorite children, so to speak, and I understand that you're super happy for every organization out there, but has there been a moment where you've seen some organization that you might be super familiar with adopt it, or know about it, and kind of went, holy mackerel, I can't believe that they're using my stuff? Has there been a moment like that for you? Oh yeah, for sure. I caught wind of Mistral using the app just to prototype their chat interface. That's the only one I know for sure, but there have also been people within Microsoft who are kind of just helping people prototype their own interfaces and things like that, and that to me is, you know, a step back, and I'm just kind of blown away, the big boys of the space, definitely. Yeah, in our system, in the way that we've customized LibreChat here, we use this model-based factuality score, which is actually factual consistency between reference text and text out of an LLM. So you can do a factuality check between two different pieces of text, to get a score that would show kind of factual consistency between the two, which is kind of the most relevant thing for most LLM use cases, because many people are 
using RAG, or they have internal company data that represents a source of truth. So in LibreChat here, we're working on the integration with the RAG piece, which would be a cool integration there, but for now we just have this sort of factuality context, so facts that shouldn't be violated, right? And I could put something here and turn on the factuality check and then ask a question. So the fact I put in was that the sky was green. I could ask, you know, what color is the sky, and then I think Neural Chat will actually respond factually, but I'll do the check against the gold-standard information that I put in, which is actually that the sky is green, and you can see that I get out a factuality score, which ranges from zero to one, and in this case it's very low, because I put in that information about the sky being green. So yeah, that's a sort of interesting way. You know, I'm so thankful for this project being open source and being customizable, because this is the kind of cool stuff that people are enabling within their own chat interfaces that we're working with, and it's awesome to have a robust system that works well in that way. Looks like there's another question, Danny: how feasible, and something that you would venture yourself into, is combining LibreChat with such frameworks as Flowise or, um, CrewAI? I don't know how to say that, I don't know what that is. I think it's CrewAI. CrewAI, there we go. Yeah, I don't know if you know of those things. Yeah, I'm familiar with both. I think Flowise is really great, giving that string-based logic user interface, you know, no programming, you kind of put all the pieces together. I see that being integrated much sooner. And CrewAI, which is more of like an agent orchestration framework. But Flowise, I think it could serve as kind of like another backend, you know, just like you see the many endpoints, as I like to 
call them, which are Mistral, Google, OpenAI and so forth. I could see it being easily integrated like that, where I don't really want to reinvent the wheel with something like that, because they've done such a great job, and I just want to be able to handle the integrations, because obviously it's not going to be everyone's need. And for CrewAI, I definitely have a lot of ideas there. I'm trying to establish kind of like a framework for agents first, and then potentially get into agent orchestration, where agents are talking to each other and things like this, but we're not quite there yet. We got the OpenAI side shaping up, but we want to see some open source integrations there. Awesome. Well, I think this is a good question to maybe draw near to a close here. It's something asked in the webinar chat, but also something we usually ask people that we're talking to on the podcast. You're following, and kind of plugged into, all of these different things that are happening in the AI ecosystem, and there are things even I'm learning about today, even though I think we would be plugged into many things that we hear about, but there's just so much going on. After kind of looking at that landscape, and how innovation is happening, how people are using your interface, but also more widely the things that you're seeing people do in the open source space or otherwise, where do you see all this kind of going, and how do you see the future of both LibreChat and maybe even, is there something you're particularly interested in in the AI space, to see how it develops in the coming year? Yeah, I think I've kind of been hinting at this already, but I think the future, it's the future I want to see, and I feel like a lot of people in tech want to see it too, and it's the open source future, where these large language 
models are getting so good. Every day there's a lot more time and money invested in being able to host these things just from a consumer-grade computer, and I think catering to that is probably going to be the direction of my project and many similar projects, because it even blows my mind that I can use something like Llama 3, where a year ago I might have thought, oh, this is two years away. And I really think that's the direction, both on the high level and low level, and I think it's part of the reason the project's really taken off, just because these things are so accessible, and we don't have to pay SaaS subscription money to just use a message from AI. So yeah, that's awesome. I think that's a future that at least Chris and I are looking forward to, and I'm sure many on the webinar. You mean I get my wallet back at some point? Well, I'm sure there'll be other things to pay for, a new AI PC or something to run all your local models. That's right. Cool. Well, thank you so much, Danny, for taking the time, and thank you to everyone that joined the webinar. This was a ton of fun. We're going to be doing another one of these webinars very soon. The next one, I think, will be around multimodal AI, and some practical, hands-on instruction in how to create things like multimodal RAG systems, or kind of search over images and videos, and so that's going to be a ton of fun, so keep on the watch for that. That's going to be a fun one. Until then, yeah, looking forward to seeing you next time, Chris, on the podcast. Absolutely, thank you, Danny. Thanks to everyone who joined us today. Yeah, you guys are awesome, thanks for having [Music] me. All right, that is Practical AI for this week. Subscribe now if you haven't already, head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at 
practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat freakin' residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now. We'll talk to you again next time. [Music] |
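The factuality check demoed in the LibreChat webinar transcript above compares an LLM's answer against gold-standard reference text and returns a consistency score between zero and one. The real system described in the episode is model-based; as a rough illustration of the same zero-to-one interface, here is a toy sketch that uses simple content-word overlap instead. The function name and the overlap heuristic are illustrative assumptions, not Prediction Guard's or LibreChat's actual API.

```python
from collections import Counter

def toy_factuality_score(reference: str, answer: str) -> float:
    """Toy stand-in for a model-based factual-consistency score:
    returns 0-1 based on how many of the reference's content words
    appear in the answer. Real systems use a trained model, not
    word overlap; this only mimics the score's shape."""
    stop = {"the", "is", "a", "an", "of", "and", "that", "to", "in"}
    ref = Counter(w for w in reference.lower().split() if w not in stop)
    ans = Counter(w for w in answer.lower().split() if w not in stop)
    if not ref:
        return 1.0
    overlap = sum(min(ref[w], ans[w]) for w in ref)
    return overlap / sum(ref.values())

# The "sky is green" example from the webinar: the model answers
# factually (blue), so consistency with the provided fact drops.
fact = "the sky is green"
print(toy_factuality_score(fact, "yes the sky is green"))  # prints 1.0
print(toy_factuality_score(fact, "the sky is blue"))       # prints 0.5
```

In the real pipeline the score would come from a model trained on factual-consistency data, but the calling pattern (reference text in, answer in, scalar score out) is the same.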
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Study for a week and you're an AI expert? | Jared Zoneraich from @promptlayer on the "Practical AI" podcast. Full audio 👉 https://practicalai.fm/261
#podcast #ai #machinelearning #datascience #artificialintelligence #deeplearning #ml #mlops #nlp #dataengineering #llms | 534 | 14 | 1 | the fact that you use the word expert is so funny, it's such a new field, it's almost amazing. I tell everybody you could become an expert in this thing very easily, like nobody really knows what's going on, you kind of just need to study for a week and you're an expert, which is a very unique place to be. But anyway, I think it makes it fun, it makes it fun to be able to dive into something and get as deep as the leaders in the field. Oh yeah, yeah, 100%, and just to be on the cutting edge and know that nobody really knows what they're doing here. Some people do, but there's very few of them. And, um, yeah, so regarding the APIs, ChatGPT, the way I think about it and the way I usually explain it is |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Mamba & Jamba | First there was Mamba… now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ‘ol attention layers. This results in a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21’s co-founder Yoav.
Leave us a comment (https://changelog.com/practicalai/266/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Yoav Shoham – Twitter (https://twitter.com/yshoham) , LinkedIn (https://www.linkedin.com/in/yoavshoham)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Jamba - A Groundbreaking SSM - Transformer Open Model (https://www.ai21.com/jamba)
• AI21 Labs (https://www.ai21.com/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-266.md) | 286 | 2 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of changelog.com. Deploy in 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of the Practical AI podcast. My name is Daniel Whitenack. I am CEO and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How you doing, Chris? Doing great today, Daniel, how's it going? It's going great. The sun is out, and summer is upon us, along with lots of new AI models and excitement going on in the space. And on that note, specifically as related to large language models, we're really excited to have with us today Yoav, who is the co-founder and co-CEO of AI21 and professor emeritus at Stanford. Welcome, Yoav, how you doing? I'm doing good, really a pleasure to be with you guys. Yeah, we're so excited to have you on, it's a show we've been wanting to have for some time now. I'm wondering if you could give us a little bit of the background of AI21, and specifically maybe how you view AI21 as fitting into this wider landscape of LLM companies and technology. So maybe a good starting point will be to say why we started the company in the first place. A little over six years ago we started the company because we believed that deep learning, remember, at the time LLMs were not a thing, but deep learning was mostly applied to vision. We believed that modern-day AI requires deep learning, it's a necessary component, but not sufficient. We believed that certain aspects of intelligence, this thing we often call reasoning, will not emerge purely 
from the statistics. It's the sort of thing AI did back in the 80s, and we believed that we left money on the table and it's time to bring the two together. That's why we started the company. Now fast forward to today: what does the landscape look like and where do we fit in? So although I said large language models, very quickly we fell into LLMs, we were the heaviest users of GPT-3 when it came out, we decided to roll our own, and really language is where the action is, because we often say that machine vision is a lens into the human eye, but language is a lens into the human mind, because there's no thought, as intricate and nuanced as you want, that can't in some way be expressed in language. Vision is a, quote unquote, easy problem. Of course it's not easy, but there's something to understand that this is a phone, I don't really care what the pixel is way on the side here. Is that always exactly true? No, but it's primarily true. That's not true with language. Language connections matter terribly. You change a word here, the whole meaning of the sentence changes. In general, you can't escape semantics when you deal with language, and so it's harder, but if you crack it, that's gold. If you look at the enterprise, from the beginning we were focused on the enterprise. 80% of the data in the enterprise is text, mostly either not used or way underused, and there's a really good opportunity there, and that's kind of been our focus. So of course we're not the only people with large language models. We are one of the handful of companies that do really large, very capable language models. Our first model was called Jurassic-1, going back a few years. It was not the most innovative model, but it was a good workhorse. It was GPT-like, an autoregressive, left-to-right model, and at the time it was slightly bigger, slightly better than GPT-3. Of course, both those models are by now eclipsed. We very recently 
released our most recent model, called Jamba, which is very interesting in a number of ways, and we can dig deeper, but maybe at, you know, 30,000 feet: architecturally it's different. It's not a pure Transformer model, it really is mostly based on structured state space models, SSMs as they're called, and we can speak about the advantages and disadvantages of those, but basically we took that architecture and added elements of Transformers, the attention layers, to get the best of both worlds. And you get performance that is as good as any model of its size, better than most in its size group, and extremely efficient. We have a context length that's larger than any other model of its size. The released version has a 256K context window, although we trained it up to a million, and yet it all fits onto a single 80-gigabyte GPU. And so, your show is titled Practical AI: this starts to make it practical. That's great. And speaking of practicalities, you mentioned the focus on enterprise from the beginning. You also mentioned that a lot of data in the enterprise is kind of locked up in this unstructured text. I remember when I first got into data science, the focus was, oh, we're going to do big data and all of this cool analytics stuff with data warehouses, and I think that's sort of waned a little bit. I'm wondering if you could talk to that point: why are enterprises, what types of value can they get out of this sort of text that's sitting around? Because I think maybe a lot of listeners have tried these chat interfaces, whether it's ChatGPT or Gemini or whatever, but maybe they're less exposed to the workloads that enterprises are doing with LLMs. So could you give us a picture of how enterprises are unlocking value with that kind of 80% of text data, maybe just by way of example or at a high level? Sure, and really the use cases are quite broad, the 
industries are very broad, whether it's finance or healthcare, education, or, you know, you name it, and the use cases are varied. But to pick some concrete ones: let's say you have manuals. There are companies with thousands of manuals, and whether it's the end user wanting to, I recently had a new sort of oven-microwave combination, and for the life of me I couldn't find the relevant information in the manual, so I searched online and so on. It'd be really convenient to go and ask a question and get just the right answer. But even if it's not the user, it could be the tech support person, who themselves wants to get quick answers. That's an example; we call this contextual answers. Another would be summarization. Rather than a response to a specific query, you have this 10-K report that came out and you want a pithy summarization of it, maybe a summarization geared toward certain aspects you care about. So that'd be another use case. These are both ways of consuming data. There's of course, GenAI is a terrible name, but we won't fight that battle, you're stuck with it. Well, you know, don't get me started, I'll start complaining about GenAI, about AGI, and so on. But certainly some use cases call for producing information, not only consuming information. So for example, one of our very successful use cases is product descriptions. You have companies, retailers, and e-commerce companies who have thousands of products that come online constantly, and writing a product description is labor-intensive, error-prone, expensive, time-consuming, and we're able to compress all of that dramatically. So these are some use cases. I'm kind of curious also, as you're looking at these opportunities in the enterprise and addressing these various use cases, as a company who is creating models and putting them out there for enterprises to use, for people who are not in the industry itself, how do you as a co-founder and CEO 
see your company, like, how do you say, let's go do this, we see the value in this, compared to others that are making models? In other words, if you say, I'm going to make a model, what is it about that motivation which makes you think you'll make a difference in that enterprise market? And you're kind of representing all companies that do so, just to shed some insight on how a founder thinks in the space. I wouldn't purport to represent the entire industry, I'll speak for ourselves. Fair enough, I overshot on my asking. No worries. But maybe something is common to others. So first of all, the baseline is a general-purpose, very capable model. There's a need for that. Now, there are companies who provide services using other people's models, and that's totally legit, but if you actually own the model, you can do things that you wouldn't be able to do otherwise. And our emphasis, in addition to the general capability of the model, is, in order to make it practical, there are two things, especially in the enterprise. So if you're using a chatbot to write a homework assignment, the stakes are low. A mistake doesn't carry a big penalty, and probably nobody would read it anyway. But if you're writing a memo to your boss or to your prized client, and you're brilliant 95% of the time and garbage 5% of the time, you're dead in the water. And so reliability is key, and as we know, while language models are these amazing, creative, knowledgeable systems, they are probabilistic, and so you will get, I don't like, here's another term I don't like, hallucination, but you'll get stuff that either isn't grounded in fact or doesn't make logical sense, and so on, and so you can't do that. So you need to get high reliability, that's number one. I'll tell you in a moment how we do that. But the other thing: it needs to be efficient. If for every customer query you're going to pay $10 to 
answer it, and it'll take you 20 seconds to answer it, that's no good either, and so you need to address that also. So we have several things we're doing in this regard. The first is what we call task-specific models. In addition to our general-purpose model, like Jamba that came out, we provide language models that are tailored to specific use cases. You can think about it as a matrix: you have industries and you have use cases. And it turns out that while initially you might think, oh, I'm going to do a healthcare LLM or a finance one, that's a little bit boiling the ocean. You want to be more specific, and one way to be specific is to think about what I'm going to use it for; these are the columns. So for example, take summarization. That's a specific task, and now you can optimize your system, and I am deliberately saying system and not language model, I'll tell you in a moment why, but you can optimize that for that use case. So all companies now are experimenting with multiple solutions, as they should, and in this particular use case, a very large financial institution took several hundred of their financial documents and tested various solutions: our task-specific model in summarization, and some of the general-purpose models of other companies, and ours was just hands down better in terms of the quality of the answers they got. There was no hallucination, if you pardon the expression, very on point, very grounded, and so on, because it's optimized for the task. And by the way, the system is a fraction of the size of the general-purpose model, so you get the answers immediately and the cost of serving is low. This latency and unit economics enable use cases that would just be unrealistic otherwise. So our task-specific models are one approach, and maybe I won't overload my answer with saying why it's not only models, but we'll get to AI systems. The other is, and it's 
related: having models that are highly efficient. That goes to Jamba as an example of a model that's very capable but not big. If I jump ahead, and, you know, let's think about 2024, what are we going to see in the space? Among other things, you'll see a focus on total cost of ownership, on the reality of serving these models. You're going to see a focus on reliability, and you're also going to see a focus on, not the term, I hate 'agents', but AI systems that are more elaborate than this transactional interaction with a language model: tokens in, you know, a few seconds, tokens back, thank you, on to the next one. More elaborate. So this is, I think, what's going to happen technologically in the industry. You're also going to see, correlated with that, the industry move from today's mass experimentation to actual deployments. We're seeing signs of it now, and I think in '24 you'll see this sort of phase shift there also. [Music] This is a Changelog News break. On April 18th, Meta released the latest version of their open-ish large language model, with state-of-the-art performance. The Verge rounds it up like this, quote: Meta claims both sizes of Llama 3 beat similarly sized models like Google's Gemma and Gemini, Mistral 7B, and Anthropic's Claude 3 in certain benchmarking tests. In the MMLU benchmark, which typically measures general knowledge, Llama 3 8B performed significantly better than both Gemma 7B and Mistral 7B, while Llama 3 70B slightly edged Gemini Pro 1.5, end quote. What followed was your typical X bros posting n mind-blowing demos of what Llama 3 can accomplish, where n equals the number that a rival X bro just posted, plus one. Not very interesting. But two things did stand out as interesting to me about this announcement. First, they didn't compare Llama 3 to GPT-4 at all, so we can only assume it still comes up short when compared to OpenAI's best. Second, they continue to call Llama open source, even though the license retains the 
commercial requirement of your business not being too big, which is 700 million monthly active users. So I guess Llama 3 is open for businesses of all sizes, depending on how you define 'all' and 'sizes'. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news for news worth your attention. Once again, that's changelog.com/news. [Music] So, Yoav, I love that you bring in this element of thinking about AI systems, not just large language models, or the model. Maybe that ties a little bit into what you were just talking about, about more complicated workloads or automations that are likely coming as part of the solutions that people are building, but I'm wondering if you could comment on that. Where does systematic thinking, and the thinking about architecting AI systems, fit within what you're seeing people do now, and what you think needs to happen for them to get value out of these models? So the part of the answer that I'm comfortable speaking about has to do with what is out there already, and on the rest I'll speculate, maybe at a little higher level. So even if you look at task-specific models, they're really not models, they're little systems. So when you want to do summarization, and you say, I care about these elements, there's a little data processing and reasoning that goes on before you call the language model. So you feed it, you don't just stick it in the context, you actually do some reasoning so you can steer the model in the right direction. And then when you get something back, you don't just spit it out, you don't sort of sample at temperature zero and give the top answer. You get answers, and you evaluate them with validators, and only when you're confident that the answer is legit do you return it to the user. And it may sound very expensive, but actually the operation of an LLM totally 
dominates, in terms of the compute resources and time, these other elements. And that's an example of a system around the language model, but that's a baby step. What you're going to see, and you're already seeing it now, but right now it's people touching parts of the elephant and doing it in a very ad hoc way, is people stitching together multiple calls to a language model, because a task may require multiple things. And it's not just chaining, it can be more complicated scripts that you're running. But you can't just do it, it's not like writing a script in a familiar scripting language and running it, because the computing elements here are different. They're expensive and they're error-prone, and if you just, for example, cascade language models, number one, it can be very expensive, and second, these errors compound, and you get at the end much more noise than signal. And so you need to worry about that, you need to execute differently. So that's an example of what you'll see, and there are other aspects of these AI systems that you'll see come into play. The term orchestration is often used here. It means different things to different people, but very much you have these elements that are running either sequentially or in parallel, and somehow you need to manage this execution, kind of like an operating system, but an operating system consistent with AI elements. And so we and other people use the term AI OS. Again, an overloaded term, it doesn't mean anything precise, but that's the spirit of things. I kind of want to get maybe to the roles that are interacting with this AI OS, because I think one of the things people are struggling with is how do I put the right talent in place to build these. Because you're talking about programmatic, operational, systematic thinking, which is kind of like, there's an element of engineering there, but it's not people that are necessarily building their own 
models. They're architecting these solutions and putting the right checks, the right validations, in place. They're creating more than chains, these workflows, and there are some engineers coming to the table there, but there are also domain experts who maybe are able to speak into some of how the models are prompted. So do you have any kind of observations from your experience with how people are putting together teams to architect these solutions and these systems like you've just described? Is it, from your perspective, still going to be a heavy, kind of engineering-dominated type of process going forward, or are you seeing a mix? What's your observation there? So my answer won't be based on an observation, because the systems don't exist yet, they're baby solutions right now, and I don't think they represent what we'll see going forward. But in answering your question, it very much will be a mix. There will be companies such as ours that will put in the foundational infrastructure to run these complicated flows. These will have to be extensible systems, and they'll be extensible in a variety of ways. Some of them, absolutely, you'll be able to have programmers write actual code and insert the code there, but there absolutely will be a role for low-code or even no-code specification of the flow you want on top of this framework. There will be data scientists that will write validations of various kinds, and data pipelines for sure. And so I think everybody, from the developer to the data scientist to the business user who's somewhat savvy, to the end user who just wants a system that works, everybody will have a role and interaction. And we haven't mentioned DevOps yet; DevOps here is going to be very important also. As we've kind of talked around the ecosystem a little bit, and what you know about systems themselves, can we turn a little bit, and could you tell us a little bit about, as we're leading toward Jamba, but I'd 
but I'd like to know a little bit about kind of where the company has been, and some of the models that you have put out there leading into this one, and kind of the heritage of how you've developed. I would really be interested in kind of how you've pursued that since you started the company. I can divide it into three periods in our long history of six years; that's an eon in AI these days, you know. I had a different color hair when we started. As I said, we started by building Jurassic-1. We just felt like we absolutely had to build it, and we did. We innovated there, but in a minor way; you know, we had a vocabulary that was five times the size of what was common at the time, rather than 50,000 tokens we had 250,000, but slightly larger than GPT-3, not to make a point, just because it worked out that way: 178 billion parameters, a dense model. And that served us well, but the next phase in our sort of... so we did many things; we had our own application called Wordtune that did very well, a reading and writing assistant using our technology. But on the models themselves, the next thing we put out were our task-specific models, which basically is not really distillation, and it's not just fine-tuning; like I said, it's putting a system around it, but at the end of the day you get something compact for certain use cases, and that set is growing. That was our second phase. And the third phase was really seeking a way to make these models fundamentally more scalable, more efficient to serve, especially in this era of, you know, RAG kind of solutions, so you have stuff that you want to kind of bring in at inference time to influence the output of the system, and at some point the system chokes. You know, we had context windows of 4K, then 8K, then 16K; now, although some bigger numbers are thrown out, most models choke at 32K, maybe 64K. That's not enough if you want to put... so we wanted 
something that... now, if you were to run it on, you know, 64 H100s, you can do a lot of things, but that's not realistic. So the question was how to get something that efficient that can run effectively on a small footprint, and that's how we got to Jamba. With Jamba, you mentioned taking some things from kind of the Mamba architecture, this sort of SSM, and adding in some Transformer-based things. For those that aren't familiar with the kind of background with those types of models, maybe the kind of non-Transformer models that people were exploring, could you give a little bit of context to that, and why it was important? I mean, you've already mentioned efficiency and other things, but why you felt it was kind of important in this generation of model to pull the trigger in a slightly different architectural direction. Sure, and for this maybe we could double-click a little bit about how these systems are architected. So at some point the dominant architectures were the RNNs and, you know, then LSTMs. As you go left to right, the system doesn't remember the distant past; what it does is carry with it a state that somehow encapsulates everything that it's seen so far. That's quite powerful, but as this past gets long, it gets harder and harder to encode and access that information that we encode. And it worked fine for vision, because in vision, object recognition is something very local; it's iconic, in the sense that what you see is what you get, right? Like I said, the phone: you know, this is a phone, I don't care what's here, so I go along, I hit the phone, so I don't need to remember. But in language it's different. And in fact, if you looked at the benchmarks (by the way, another pet peeve of mine: benchmarks can be very misleading, but that aside), if you looked at the natural language benchmarks, they kind of puttered along with not much progress until Transformers came in. And Transformers, again coincidentally, 
what is it, about six years now? They changed the architecture, and they had the attention mechanism that says, no, as I'm going along I can relate disparate pieces of information, and that allowed you to do things you couldn't do otherwise, and that's great; the quality of the answers shot up. But you pay a price, because the complexity is now quadratic in the context length, and that kills you, which wasn't the case with RNNs or LSTMs; there it's linear. And so the question is, how can you have your cake and eat it too: enjoy the benefits of being able to relate disparate pieces of information, and yet have something that's, if not linear, close to linear? And so, Mamba. So first let's say Mamba is a straight kind of left-to-right, what's called an SSM model, a structured state space, but its innovation was that it was a version that allows you to actually parallelize the training and be much more efficient. But it still suffered from the lower quality of answers. And so what our guys did was say, okay, we'll take this as a basic building block (and Mamba is all of, what, four months old now? It came out of academia recently, yeah), but they said, that seems like a really good idea, but let's now take elements of the Transformer architecture and put them in. So every few layers (in our case it was every eight, or 16, depending on which version) you put in an attention mechanism. So you take a little performance hit, but not nearly as much as if you had Transformers all the way. So that's kind of how it leads to this particular architecture. Well, Yoav, you did mention that Mamba is only a recently released and published architecture, but you've been able to move quite quickly, and I want to talk a little bit about Jamba and the release and all of that. But prior to that, it might be interesting for listeners: you know, most of our listeners aren't sitting in a company that is trying to be a foundation model builder, building these kind of more 
general-purpose models. I'm wondering if you could give a picture, a little bit, behind the scenes, whatever you think would be interesting, on what does it actually take to go from "hey, this idea: we want to mix, kind of get the best of both worlds with Mamba and Transformers" all the way to "here's our blog post releasing a model"? What were some of the challenges in that kind of middle zone, and what is that process like, to determine, you know, from data set to exact architecture and the sort of final training runs? So first I'll say that I don't think that everybody needs to be building foundation models, but as I said to somebody: organizations that are technical and want to remain relevant, even if they're not building foundation models, should understand how they're built, and if they really put their mind to it, and their resources, they could build one, because it really gives you a visceral, deep sense of what's going on. Now, regarding Jamba: we actually try to be very transparent, you know. So this is our first open-source model, and the reason we did it was that it is very novel, and there's lots more experimentation to be done here: optimization, serving. You know, training these models can't be done on every type of infrastructure; serving them, similarly. And where you do serve them right now: we've had several years to optimize the serving of Transformers. We wanted to enable the community to innovate here, and so we were quite explicit in our white paper, perhaps unusually so relative to the industry. So for the listeners who want to kind of get the nitty-gritty, I really encourage them to look at the technical white paper. But I can tell you there's been a ton of experimentation and ablations that our guys did, trading off... lots of people use the term "hyperparameters"; it hides a lot of things that are very different from one another, but how many layers do you 
want, and, you know, how many Mamba layers, how many attention layers, batch sizes, all kinds of stuff, and knowing where, what really makes the difference. It's hard to sometimes understand what makes the difference, and again, we try to share that. For example, Mamba: I said that Mamba performance doesn't compete with the performance of comparably sized Transformer models, but when you look at the details, it's actually quite competitive on many of the benchmarks; but then there are a few that it's really bad at, and that gives you a clue of why that's the case: it can latch on to surface formulations and syntax that the Transformers managed to just abstract away from. And so we describe how, you know, you make this observation, you correct for it; there are lots of details that go into making these decisions. And then there are also pragmatic decisions. For example, we wanted a model that would fit on a single 80GB GPU. That was a design decision, and from that emanated a few things: that, you know, we didn't put out a bigger model, and certain context windows will fit there, others won't. It's still, you know, 256K is humongous compared to the alternatives, but we can also do a million and larger, just not on a single GPU. And so those are some of the design decisions and the rationale. Honestly, it is a process, although condensed, a process that involved, you know, hundreds of decisions that led to what we put out. That was a really great explanation; I appreciate that. As you were going through it, I was thinking about the applicability for Jamba in the enterprise and kind of bringing the innovation. I'm curious: I know you had kind of alluded to the fact that Jamba, early in the explanation, was kind of the first open-source model, and so I was wondering, as you're trying to enable enterprise innovation, what was the change in your thought process that made you decide to go open source with Jamba versus the earlier models? 
What was the thinking around that? I was curious as you said it, and wanted to wait till we got to the end. Yeah, it really was very simple: we felt like if we were the only ones augmenting and pushing on this model, it wouldn't advance as fast as it could. And we saw that within days of our putting it out there; I haven't tracked it today, but when I looked about a week ago there were 30,000 downloads, and I forget how many forks, but a large number of forks, some fine-tunes. So, by the way, very important to say: what we put out is a base model, not a fine-tuned model, and we're very clear about it, and we caution people about using it for production purposes or for user-facing applications. And of course we'll be coming out with our aligned model; in fact, we've announced that it's available for preview. But we felt like it was really important for the community to add value to this architecture, and that's why we did it. For those that are listening a little bit later on the podcast: so it looks like Jamba, at the time we're recording this, was released, at least on Hugging Face; well, it was updated 15 days ago, and I see the blog post at the end of March, I believe. But now on Hugging Face there are some 38 models I see with Jamba in the name, and that's not including those, maybe, that forked it and just created their own special name. So already you're seeing this kind of explosion of a model family, I guess, which is quite interesting. I'm wondering, over time as a company (you mentioned kind of not being the only ones working on the model family and wanting to see it become more), is that observation kind of based on what you've seen in other model families, whether it be Llama 2 or Mistral and others? Because when I look at a model like that that's released, I almost immediately... and I know people, you mentioned DevOps people, have automated pipelines in place to create the quantized version of 
this, or fine-tune it for that on their data set. We had Nous Research; we had a discussion about Nous Research and what they're doing in some of this area as well. So what is the sort of innovation that you're hoping for with the kind of Jamba model family? Is it... you mentioned fine-tunes, or, you know, you're releasing the base model, so there could be fine-tunes, but I think also there could be much more than that. So what are you kind of hoping to see as people get hands-on with the model and try to explore various elements of how to use it? Yeah, fine-tuning is happening, will happen; like I say, we have our own fine-tuned, or aligned, model, but that's not the reason we put it out there. The reason we put it out there is that people can contribute to the very model, so others can benefit from it. And I think there are at least two areas where a lot of value can be brought. One is serving efficiency: for example, when you consume it on Hugging Face, it's less efficient than when we consume it on our platform, because we have optimized the serving, and we'll continue to optimize it, but there are a lot of smart people out there, and we'd love for them to optimize it further, and everybody will benefit, including us. That's one thing. The other thing is that we would really value it if this kind of model were able to be trained on multiple types of infrastructure, which currently isn't the case. And so I think by putting it out there, people can now look at the white paper, they can look at the model, and they can now enable further training of such models, which will benefit everybody, including us. So as we start to wind up here: fascinating discussion, thank you very much for taking us through all the insight. I like to wind up asking kind of where you think things are going, and if you could address it potentially at two levels: both kind of where your own organization expects to go, what kind of thinking you 
have over whatever horizon is on your mind, but also give us insight into how you think the industry as a whole is progressing, and how you expect that kind of servicing of the enterprise need to evolve; and, you know, with the strategies that are out there, we'd love to understand how you're seeing the world in that way. I think the key notion is reliability: trust and reliability. You need to have the same kind of trust in these systems, to be able to predict what they'll do, to be able to understand what they did, as you do with other pieces of software. You know, we always have errors; you know, even the Pentium had a bug, but that's an exception, whereas currently it's the rule for language models. So that can't be in the enterprise, and everything that I think about what's going to happen in the enterprise orients around that. I think you'll see special-purpose models, like our task-specific models. I think you'll see AI systems increasingly sophisticated and robust; right now they're not robust, they're experimental, but you'll see more AI systems. And I think (this may sound philosophical, so bear with me) there's a question within the AI community: do these language models actually understand what they're talking about? They spit out this incredibly convincing stuff, very smart, sometimes on point, and how can they not understand? And then sometimes they're totally stupid, and we all have favorite examples. And I think we need to get to the point where we believe that the systems actually understand what they're talking about. And what understanding is... again, it sounds philosophical, and there's a philosophical aspect to it for sure, but it has very practical ramifications. And so when I think about the future: all these pragmatic things, task-specific models, AI systems, but in the background, this notion of understanding. These systems need to really understand. That's what I'm looking at. Yeah, that's great. Well, I think 
as a part of the development towards that, certainly open models and innovation around these model families, like we talked about, I hope is a key piece of that. And as a member of the community, I just want to express my thanks to AI21 for being a leader both in terms of the thinking and infrastructure and innovation in this area, but also a leader in terms of putting things out there for the community to work on as a community. So thank you for what you've done with Jamba, and really excited to follow AI21 and where you're headed next. So thank you so much for joining us, Yoav. It's been a pleasure; thanks very much for having me. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already; head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat-freaking residents Breakmaster Cylinder, and to you for listening. We appreciate you spending any time with us. That's all for now; we'll talk to you again next time. [Music] |
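The hybrid layout described in the Jamba discussion above (a stack that is mostly Mamba-style SSM blocks, with an attention layer inserted every eight layers, paying quadratic cost only in those few layers) can be sketched roughly in code. This is a minimal illustration for intuition only: the layer counts, the cost formulas, and the function names are illustrative assumptions, not AI21's actual implementation or published numbers.

```python
# Illustrative sketch of a Jamba-style hybrid layer stack: mostly
# SSM (Mamba-style) blocks, with one attention block every Nth layer.
# All counts and cost formulas here are simplifying assumptions.

def build_hybrid_stack(num_layers=32, attention_every=8):
    """Return a list of layer kinds: 'attention' every Nth layer, else 'ssm'."""
    return [
        "attention" if (i + 1) % attention_every == 0 else "ssm"
        for i in range(num_layers)
    ]

def stack_cost(stack, context_len, d_model=4096):
    """Very rough cost model (arbitrary 'op' units, constants ignored):
    an SSM block is linear in context length, attention is quadratic."""
    cost = 0
    for kind in stack:
        if kind == "ssm":
            cost += context_len * d_model          # linear in context
        else:
            cost += context_len ** 2 * d_model     # quadratic in context
    return cost

if __name__ == "__main__":
    stack = build_hybrid_stack()
    print(stack.count("attention"), "attention layers,",
          stack.count("ssm"), "SSM layers")
    # Compare scaling at a long context: all-attention vs the hybrid.
    full_attn = stack_cost(["attention"] * len(stack), context_len=64_000)
    hybrid = stack_cost(stack, context_len=64_000)
    print(f"hybrid cost is {hybrid / full_attn:.1%} of an all-attention stack")
```

With a 1-in-8 attention ratio, the hybrid's long-context cost collapses toward the fraction of layers that are attention, which is the "little performance hit, but not nearly as much as Transformers all the way" trade-off described in the episode.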
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Udio & the age of multi-modal AI | 2024 promises to be the year of multi-modal AI, and we are already seeing some amazing things. In this “fully connected” episode, Chris and Daniel explore the new Udio product/service for generating music. Then they dig into the differences between recent multi-modal efforts and more “traditional” ways of combining data modalities.
Leave us a comment (https://changelog.com/practicalai/265/discuss)
Changelog++ (https://changelog.com/++) members save 26 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Udio (https://www.udio.com/)
• CLIP (https://openai.com/research/clip)
• BridgeTower (https://arxiv.org/abs/2206.08657)
• LLaVA (https://llava-vl.github.io/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-265.md) | 292 | 2 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another Fully Connected episode of the Practical AI podcast. In these Fully Connected episodes, Chris and I keep you fully connected with everything that's happening in the AI world: the news, the trends, the new models, all the good stuff, and talk through some things that will hopefully level up your machine learning game. I'm Daniel Whitenack; I am founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel. I don't know how we're going to pick what to talk about; there's so much stuff coming out right now. There's a lot, mostly kind of new... well, there's always new models, I guess, but it did seem like a big week, with, I think, new GPT-4 Turbo, new Gemini (it's really hard for me to keep track of the numbers and parameter counts and all that, but I know there's a new Gemini 1.5 something; I forget the different numbers), and then new Mistral, I think a different mixture of experts at the top of the Open LLM Leaderboard. We've got Udio, which... so I've been at some startup accelerator stuff and then at conferences and events, and it seems like I go into one of these events, and at the end of the day people just say, oh, did you hear about X? And I'm like, no, I haven't had my laptop open; something else has happened, apparently, for the last two hours I haven't had my laptop and I've missed it; I was doing work, you know. Yeah, I 
don't know if you've seen the trend. I think this was a prediction for 2024, which I think was a well-informed prediction for 2024 from many different people, and I think we talked about this in our own discussions of 2024, around multimodality of AI in 2024. Whereas in 2023 you kind of saw this explosion of, in many cases, text-to-text AI, meaning I put in a text prompt and I get some text back, now we're seeing an explosion of multiple modalities of data input and/or output to these models. And that's mostly what I'm seeing; is that consistent with your view as well, Chris? Totally. I was thinking about that, and a moment for us to brag: we've actually been fairly good with our predictions the last few years. Who knows, maybe we're actually setting the... it's a self-fulfilling prophecy; it's just everyone's listening to Practical AI and they're making our predictions real. That must be what it is. I'm sure, you know, that over at OpenAI they're just listening to us and they're going, okay, that's what we need to go work on. It's a lot of pressure. Yeah, and I tell you what, it's a good thing we're steering the entire industry by ourselves, right? Which is... yeah, it's a lot of pressure, but it's good. Sure, no pressure at all, man. I do want to talk about multimodality today, but I've just got to share with you some of this Udio stuff, Chris. Well, Udio... I don't know if anyone knows how to say this yet; I was saying "oo-dio," maybe it's "yoo-dio," U-D-I-O. I'm not going to hazard a guess till I hear somebody else do it. Yeah, it's coming out of beta. I think that there were some leaks of some of what they were doing before, but essentially, if you go to this website, you can sign up for an account. They have it marked as beta, so I'm not sure exactly where this is going necessarily, product-wise, but what you see, at least in its beta form, is essentially a space where you can put in a text prompt. It 
kind of reminded me of almost like a Clipdrop or something, some of these image generation platforms where you can kind of pre-select some elements of the prompt, and the goal would be to completely generate a coherent and compelling song, or piece of music, or composition. It's essentially a music generator. So we've seen a little bit of this in the past, right? And in the past we've kind of heard things generated like kind of dreamy, ambient-type things, maybe useful for kind of backing YouTube videos or something like that, but not really compelling music in and of itself. And I think what's interesting about this Udio is that it generates both this sort of compelling music but also lyrics, and also synthesized voices singing the lyrics, all together in one. So Chris, while you were setting up your studio there to record the podcast, I was busy on Udio figuring out what is there. Now, there's a couple of really interesting ones that I listened to, and I've preloaded a couple in here for us to listen to, to give our audience a little bit of a sense of the audio. Absolutely, because this is an audio podcast, so what better format? So one of these, which I found really intriguing, was Dune: The Broadway Musical, and I would go to that. By the way, just to make it very clear, I'm standing in line to buy tickets, so to speak. Well, the music has been generated for this, and I'll cite Bobby B; so Bobby B on Udio, and he's created Dune: The Broadway Musical. So just to give people a sense, the prompt that went into this to create it says: teen pop, show tunes, film soundtrack, uplifting, playful, female vocalist, happy. Anyway, you get a sense of, kind of similar to, I guess, an image generation prompt, where you're saying, like, "high resolution, Unreal Engine," this sort of stuff, to give it some stylistic guidance. But I've got this preloaded in here. Um, Chris, everything that you're going to hear is AI generated, so 
let's listen to "Song of Arrakis" from Dune: The Broadway Musical. Oh, you got my attention, man. [singing] "The greatest leader we've ever seen... they say that he's the Lisan... guy... eyes bright blue and hair jet black, you should see him ride on a sandworm's back, up to victory..." What do you think? Move over, Wicked; move over, Les Mis... you know, I saw Hamilton recently; move over, Hamilton. We're all about Dune the musical now. That's it, exactly. Yeah, so good. And even the lyrics, you know: "eyes bright blue and hair jet black, you should see him ride on a sandworm's back." I mean, that's great; that's good right there. I like the fact that the music actually deviated far from the darkness of Dune, you know, the perpetual darkness of the theme. That was fun. That was great. All right, ready for more? Yeah, oh yeah. So I tried out my own, of course; I had to try out my own. So my prompt that I put in (and I only experimented with this, so I'm sure you can do much better): a song about two podcast hosts trying to navigate the wild and crazy world of AI, in the style of pop rock. Practical AI: The Musical. We really appreciate you, Breakmaster Cylinder, the mysterious Breakmaster Cylinder behind our theme music for the show, but this is what Udio can do. And I selected specifically to have it pop rock and to autogenerate the lyrics, so I didn't put in any lyrics. So I have two options for you, Chris; I have two selections, and you can see which one you like better. Here's the first: [singing] "The first thing they do, hit recording, their room discussing the trends, where the bots will take us, and launching the forecast, oh yeah, oh yeah, oh yeah, caught up in the wave where the data streams tide, with a crackle in the voice and the LPs are just right... with AI that's quick as a flash..." [Music] All right, selection one: thoughts? I like that one. You just transported me from, like, you know, 53, which is what I'm at now, all the way back to, like, 16 in the 80s, 
you know, late 80s; I was all about that. That was good. Okay, cool. Yeah, I love it. Okay, that one... they also generated a title. It's interesting; I don't know how much of this... you know, how many models are at play under the hood here and how they're coordinated. I'm guessing maybe there's some that generate the lyrics and some that generate the title, and then somehow that's merged together in a music generation, because obviously the voice and the lyrics have to be coordinated somehow. I at least didn't see a lot of underlying explanation of what's going on here, but pretty interesting. And that was generated, I would say, in 30 seconds or something; I don't know, not that long, right? So let's take a listen to number two. This one was titled "Digital Odyssey." [Music] [singing] "Both hosts dive deep, my friend, oh yeah, oh yeah, voices through the digital tide, they're learning as the code... got theories, theories everyone must hear, a journey through the AI sphere..." There you go, even with a little bit of a guitar-type solo there at the end. I know, it was good. I like the first one better; the first one felt like I was, you know, like I was a kid again right there, but I like both, and yeah, this was good. I could just spend all day generating music now. I may do that, actually. I might have just taken up your Saturday. Oh my God, my wife has all these chores planned for me, because we're recording here late on the Saturday morning, and I may get myself in trouble by... yes. Okay, well, there you go. So, Udio: check it out, super cool stuff. I think this does bring up some really interesting challenges, issues, struggles, excitement, and also, of course, joy, in hearing Dune the musical. But it is super interesting. I even thought about this when I was going through here: well, you know, Bobby B created Dune: The Broadway Musical, and I just downloaded it and I'm doing what I want with it, which I guess is playing it on my podcast. So, Bobby B, I hope you're 
okay with that. Now, technically this is machine generated, so at least as far as I understand, in the current US legal system such a thing would not be copyrightable. Sorry, Bobby. Sorry, Bobby. I'm not giving legal advice here, obviously, and not a lawyer, but that's my understanding from our previous conversations. But what's interesting is, I think, similar to these AI-generated art things that, you know, were put into art competitions and won, right? There could be... now, in my case I just put in a simple prompt that, you know, generated something in 30 seconds, but there could be some really deep thought put into how to construct this prompt, and the various kinds... I think Bobby's prompt was much better constructed than mine. And also, you can upload your own lyrics into this to add a level of creativity. So there's really an open question here of how much human creativity is actually a big portion of this generation, and will the established laws and legal entities eventually recognize the creativity that's put into prompting these sorts of systems? Just like there was a time when there was a question of whether or not, if you took a picture with a camera, right, you just click the button... yeah, now photographers out there are going to get really mad at me, because I think they would recognize it's way more than clicking a button, right? There is a whole lot to photography, and that's, I think, why it's been accepted as an art. But people argued at one point, you know, hey, you click that button, it's machine generated, you can't have a copyright for that; but eventually those laws changed. So I wonder, Chris, if you have any thoughts about if or when that might change in these cases. Yeah, I mean, I think it will, because... I guess if you look at the stream of AI advancements that we've covered over the years, there's a sense of inevitability: when these things come out, they catch on, and they become popular, and then they become the norm, 
and then eventually, as we keep seeing, the laws gradually catch up over time. And things like Udio, if I'm pronouncing the name right, are going to be typical; it won't just be them, there'll be others as well. And so, you know, by way of example of that inevitability: we have a Spotify account for our family, and, you know, we're listening the traditional way of streaming music, historically. And one of the things I do on that is I really like to explore new genres and new types of music that I don't know, and I'm always trying to think, how do I get to that? But I'm very likely to use something like Udio to prompt what I'm feeling, what I'm thinking, and try to explore new music that way, because I don't really care, as a user, whether or not it's an artist that's human or an AI model that generated it, if it sounds good to me. And so I think that sense of inevitability will bring about the change over time. Yeah, I think, personally, right now even, it's a gray area, especially if you're going back and forth, like you're trying a prompt and then maybe you're modifying the lyrics; if there's some sort of back and forth, that definitely gets into a little bit of a gray area, where how much of the generated stuff (ignoring the creativity in the prompt), how much of the generated stuff is actually machine generated versus human post-edited, for example. That's right. So yeah, I think that even now that's a bit of a gray area, but my personal thought is eventually this will be more recognized as a creative pursuit; but, you know, we'll see. You know, these new inroads into music through this AI model, with this being so far the most interesting that I've seen, this probably will really scare the music industry, you know, because this is taking it to a whole different level, and there probably will be a lot of lobbying, a lot of lawsuits. You know, we saw this past year actors going on 
strike because of AI based video and the, creation of characters or the, representation potentially of live, people and I think we'll see some form, of that here this is a process we're, going to go through over and over again, and I was talking to a good friend just, the other day about this and life ahead, and stuff and and how to do this and I I, said the smart people will align, themselves with these capab ities you, know it's not about whether it's a good, future or a bad future or whatever From, perspective but it's an inevitable, future if I could give advice as a, non-lawyer uh and nonprofessional, musician in the music industry but, someone observing this I would say find, a way to get on board with it and make, it work for you quickly because it's not, going away so yeah yeah very true and I, do also wonder of course those things, that we just heard were completely AI, gen ated but it's interesting to me that, maybe a creative person who is and there, are many that are embracing some of, these things like musicians could, actually iterate very very quickly on, different ideas putting their own voice, to backing music or um getting prompted, with lyrics that aren't quite so good as, what they would like but gives them a, creative starting point and you know, really explore spaces that they might, not have explored before so that might, be cool to see as well that kind of, human udio teaming yeah I agree and, another thing that I think will become, inevitable uh it's a you know this so, here's a startup idea for folks uh is, with all the advancements over the last, few years in in kind of emotional, recognition uh from models and, understanding if you combine a, capability like this and uh and you, choose to opt in which there's privacy, concerns obviously with the service that, also is monitoring uh your yourself you, know and maybe maybe the data is only, available to you but can generate, content that is exactly specific to what, you're dealing with in 
life and when it, when you need to pick me up it not only, does it find the the right music but it, finds the right lyrics for the situation, and stuff and so there's a lot of, interesting psychological considerations, here that could be both good or bad, obviously so U but I think that's pretty, fascinating I I'm wondering if I can, find a service in a few years that will, that will do that and it follows me, through the day and I I keep the content, private to me in my account but I can it, gives me the pickme up and uh when I, want that's what I'm looking for for, whoever is going to go out and do that, in the world personal soundtrack and, narration and Vibe my life yeah, [Music], this is a chang log news break YouTuber, internet of bugs posted a lengthy, breakdown exposing Devon's creators, cognition labs for falsifying claims, about their world's first AI software, engineer Devon was pitched as a fully, autonomous software developer and one of, the more impressive demos showed it, completing and getting paid for, freelance jobs on upwork sound too good, to be true it did to internet of bugs, who says quote I broke down the Devon, upwork video frame by frame and here I, show what Devon was supposed to do what, it actually managed to do instead and, how bad a job of that it did on the, whole that's not surprising given the, current state of generative Ai and I, wouldn't be bothering to debunk it, except one the company lied about what, Devon could do in the video description, and two two a lot of people uncritically, pared the lie all over the Internet and, three that caused a lot of non-technical, people to believe that AI might replace, programmers soon end quote Devon really, did Garner a lot of attention also known, as money because of that demo we talked, about it on our shows with a healthy, amount of skepticism I think but I'm, thankful their claims have been debunked, and I hope we all give cognition Labs, the side ey from here on out, exaggerating your 
development capabilities. Maybe Devin really is human after all. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. [Music] A lot of these things, like I say, are moving into this multimodal sphere, and it might be worth looking back a little at how we got to where we're at in terms of multimodal functionality, and how that gradually changed over time from NLP and speech to the multimodal models we're seeing now. If we step back and look at it from a holistic, or historical, standpoint: you started out with modes of data processing that were maybe separated, but often tied together in a sort of chained way. We didn't really think about it as chaining at the time, but you had speech synthesis models, for example, that were specifically trained only to do text-to-speech, and in some cases even that was broken up into sub-models, like a vocoder and other types of models. You had text-to-text models. You had computer vision models that would process images to do object recognition, or even videos in certain cases, or frames of videos. But all of these were specializations. The whole idea of computer vision as a specialization is, "I am specializing in models that process this mode of data." Speech technology, the discipline of speech technology, is a discipline of really focusing on processing either speech inputs or speech outputs. And then NLP, quote-unquote, had special models that would take in text and maybe classify it, detect entities, do machine translation, those sorts of things. So that, historically, was how the field was developing. If we skip the middle portion and come to now, we've gotten to a point where there are seemingly these large foundation models that are able to take in multiple modes of input at the same time, for example an image and text paired together in the same input, and answer questions. This would be like what we see with GPT-4 Vision with a text prompt, or even video input, like we have seen with Gemini recently, where you can import a whole video and ask for a summary of all the visual components, that sort of thing. That's, from my end, how I view the bookends. Is that also your standpoint? Any comments, Chris, on how we've progressed from one end to the other? I'll pivot slightly in response and say that as you were describing that, it really resonated with something else I've been thinking about in that development. I consume a lot of content through audiobooks. Anytime I'm kind of on autopilot, driving or mowing the lawn or doing any kind of thing like that, I'm listening to audiobooks, mostly for learning purposes, at double speed, as fast as my brain can process it, because I like to consume as much as I can. I can't do that; my mind is too slow. You get used to it after a while, but it's just to get the information in. I just went through a really fascinating Pulitzer Prize-winning book on audio called An Immense World, by Ed Yong, and I highly recommend it to anybody. It is all about the way we, and all animals, not only humans but all animals in their unique ways, perceive the world through their senses, and how vastly different those senses are. The theme that came up for me throughout was how multimodal everything about humans is: the way that we learn, our experiences, are all multimodal. We don't have just vision and just audio and just text; we're taking it all in at the same time. I think this progression that we've seen in moving into multimodal this year has been really fascinating in terms of coming into how we take in information and how we learn. Going back to Udio today, and seeing what they're doing, and looking at the other multimodal capabilities we've been learning about, it feels like we're finally getting to something. I know we keep saying this, it's always kind of cool in the moment when the new thing comes out, but it feels like it's really aligning with what it means to be human as well, ironically. That's the background thought process I had as you were going through that. Yeah, there are these scenarios in which we process knowledge across multiple modes of data inputs, and many things are not all represented in text, or in any given mode. And I think you've already seen utility around this with things like GPT-4 Vision, which is a kind of visually instruction-tuned model. Maybe that's something to share with the audience, if you're not familiar. The music generation stuff, maybe that's a little bit newer, but there's this ongoing work in visual instruction tuning, and this would be the type of model in which you would have an image input and maybe a text prompt. Traditionally, I remember, I think there are even some of these models that are still quite popular to use: AWS Textract, for example, is an OCR system, but you can also do visual question answering. Now, it used to be that you had a specific model architecture for visual question answering; it was a research topic in and of itself; there was a specialized model. And this illustrates some of the progression we've had: there was a very specific discipline around visual question answering, and very specific models that
could do those things, and they advanced. But then recently you've got what has begun being termed visual instruction tuning for models, where the models are actually similar foundation models to what people are using for other modes. For example, if we look at the LLaVA model, L-L-a-V-A, so not LLaMA but LLaVA, maybe a bit hard to distinguish in the audio: that's an open source manifestation of the GPT-4 Vision system, or similar functionality to that. And if we look at how it operates, it's actually built off of, and we talk about this a lot on the podcast, Chris, you're always building on the shoulders of giants and a lot of what's come before, even though some of these functionalities seem to pop up out of nowhere. There are previous signals, and Chris, I don't know if you remember, I think we had an episode where we talked about CLIP. Yes. Which was a multimodal way to embed both text and images, something developed by OpenAI. Contrastive Language-Image Pre-training is what CLIP stands for, from OpenAI. Correct, which thankfully is open to everyone, back in the days when OpenAI was open, and we can still use it. CLIP allows you to embed an image or text in a shared embedding space, which means you're converting an image or a piece of text into a set of numbers, and if you compare those sets of numbers in that vector space, you can actually find things that are semantically similar by the distance between those vectors. That's interesting, and makes immediate sense if you're doing text-to-text things: the semantics of one piece of text, the meaning of one piece of text, could be similar to the meaning of another piece of text. It's very intriguing, though, if you make this multimodal. Say you have the text "a nice sunset on a beach in Florida", and then you have an image of a sunset somewhere on a beach, an image of a car driving through New York, and an image of a spaceship in outer space. You could actually find which of those images is semantically similar to a text input. So that's the CLIP way of embedding things. Then on the other side you have large language models, which can take a text prompt and reason over it (even though it's not really reasoning, it's just autocompleting, we can think of it functionally): it takes in that prompt and outputs something related to the question that's input, the query, the instruction, that sort of thing. What they've done with LLaVA, which has been around for some time, and people have built different types of LLaVA models, it's sort of its own family in and of itself, is pair the CLIP-style embedding model, a visual encoding system, with a large language model, and create this text-and-image input. If you look at the architecture of what they do, what happens is: they have an image input that goes through the vision encoder, for example CLIP, which produces an embedding. They have a language model like LLaMA that accepts a language instruction, or text input, and that creates an internal hidden representation, an embedding. And the first thing they do is train a projection matrix for the vision encoder. Can you talk real quick about what a projection matrix is? Yeah, so the language model produces an embedded representation of the text, and the vision model creates an embedded representation of the image, but these two are different model architectures, and the embeddings can't be directly compared one to another, because one's LLaMA and one's CLIP, even though they both functionally produce embeddings. The projection matrix is a sort of translation of the output of the vision encoder, the CLIP model, into a space in which it can be concatenated, or combined in some way, with the output of the LLaMA model. And it is a trained projection, such that it accomplishes the end tasks you're training for, like visual question answering or reasoning over an image, that sort of thing. So that's the initial pre-training: finding that projection matrix. The interesting thing here is that it's a combination of models, which is intriguing, because you can always update LLaMA to the next cool thing, like Gemma, or you can always update CLIP to the next cool thing, like BridgeTower from Intel, and combine them in really interesting ways and do this retraining. People then fine-tune these models based on datasets they've created for specific tasks, like we've seen with language models. So there might be science question-and-answer data for reasoning over science images, or visual checks in a specific domain. To give people a sense of the functionality of this type of model, if you haven't played around with GPT-4 Vision or something like that: one of the examples on the LLaVA paper site, which I find interesting, is a meme image of a world map, but it's made out of chicken nuggets. So it looks like a world, but it's made out of chicken nuggets. The picture is there, and then the user's text input, along with the image input, is "Can you explain this meme in detail?" There's some element of the question that's needed to answer it, because if you're saying "this is a meme", you're asking for specific details, and you definitely need the visual content to answer that question; otherwise you would just hallucinate something about a meme. Sure. So the LLaVA answer is: "The meme in this image is a creative and humorous take on food, with a focus on chicken nuggets as the center of the universe. The meme begins..." blah blah blah, and then essentially explains the humor, which is maybe not the best way to make something more humorous. Yeah, a little bit dry there. Yeah, but you can think of other cases where you would need both visual input and text input
to create an answer. Like if you say, "What was the guy who raised his right hand in this video wearing?", as humans we would necessarily process that both from the text input standpoint and the visual content standpoint. So I think it's really interesting, this exploration of not just chaining models for multiple modes together, like we've seen in the past, where you have a speech model, a computer vision model, a language model, and you chain them together in interesting ways, but this joint encoding, this joint processing of multiple modes of data at the same time, which is actually required for some of the types of reasoning that we might want to augment or automate, from the standpoint of how we process information as humans. So yeah, I would recommend that people look into this LLaVA model. It's open; like I say, it's sort of a family, or a style of doing things, so there are a bunch of examples of it on Hugging Face, and demos you can actually try out, an open version of what you get with GPT-4 Vision. That sounds good. I've just been struck, through our entire conversation, going back to what I mentioned earlier, by how close this is to matching how we as humans process, as you took us through the merging of the modalities a few minutes ago. I'm actually in the middle of, I think it's called the Great Courses, one about the brain, and it's talking about how exactly that happens in our brain: converting input into a form which is chemical and electrical in nature for our brains to actually operate on, since we don't actually "see" and "smell" directly. It's just fascinating that while the underlying AI models are not the way the brain operates, the modalities are starting to emerge in that way. It's really neat. So thank you very much for taking us through that understanding of how multimodality works in a practical sense; you lived up to Practical AI in all ways there. Well, this episode has a little bit for everybody: you get a fun Broadway song, and then we also talk about projection matrices. There you go, something for everyone. I enjoyed it, Chris, and I think this will be a trend we continue seeing throughout this year. So if you haven't gotten hands-on and tried a little bit of this multimodal stuff, whether you go to Udio and try to create a song, or you go to ChatGPT and try GPT-4 Vision, or Gemini and process a video, or download the LLaVA model and run some multimodal queries, that's the best way to get an intuition for how these things behave and what's possible. We'd really encourage you to get hands-on. A homework assignment between now and next episode, I guess. Absolutely. All right, it's been fun, Chris. We'll talk to you soon. Take care, Daniel. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat-freakin' residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
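The CLIP shared embedding space and the LLaVA-style projection matrix discussed in this episode can be sketched with toy numbers. All dimensions, vectors, and weights below are made up for illustration; real CLIP embeddings are hundreds of dimensions and the LLaVA projection is learned during pre-training, not random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for CLIP outputs: a text embedding and two image embeddings
# that live in the SAME 4-dimensional space (real CLIP uses 512+ dims).
text_emb = np.array([1.0, 0.0, 1.0, 0.0])    # "a nice sunset on a beach"
img_sunset = np.array([0.9, 0.1, 0.8, 0.0])  # photo of a beach sunset
img_car = np.array([0.0, 1.0, 0.0, 0.9])     # photo of a car in New York

def cosine(a, b):
    """Cosine similarity: higher means semantically closer in the shared space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The sunset image should score higher against the sunset caption.
sims = {"sunset": cosine(text_emb, img_sunset), "car": cosine(text_emb, img_car)}

# LLaVA-style projection: a trained matrix W maps vision-encoder outputs
# (here 4-d) into the language model's hidden size (here 6-d), so the
# projected image representation can sit alongside text token embeddings.
W = rng.normal(size=(6, 4))             # stands in for the learned projection
projected_img = W @ img_sunset          # now lives in the LLM's 6-d space
text_tokens = rng.normal(size=(3, 6))   # 3 toy text token embeddings
llm_input = np.vstack([projected_img, text_tokens])  # joint multimodal input
```

With the toy vectors above, the sunset image scores near 1.0 against the sunset caption while the car image scores near 0, which is the retrieval-by-distance idea; the final `llm_input` (4 rows of 6 numbers) is the kind of concatenated sequence a LLaVA-style model feeds to its language model.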
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | RAG continues to rise | Daniel & Chris delight in conversation with “the funniest guy in AI”, Demetrios Brinkmann. Together they explore the results of the MLOps Community’s latest survey. They also preview the upcoming AI Quality Conference (https://www.aiqualityconference.com) .
Leave us a comment (https://changelog.com/practicalai/264/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• The Hacker Mindset (https://thehackermindset.com) – “The Hacker Mindset” written by Garrett Gee, a seasoned white hat hacker with over 20 years of experience, is available for pre-order now. This book reveals the secrets of white hat hacking and how you can apply them to overcome obstacles and achieve your goals. In a world where hacking often gets a bad rap, this book shows you the white hat side – the side focused on innovation, problem-solving, and ethical principles.
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Demetrios Brinkmann – Twitter (https://twitter.com/Dpbrinkm)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• MLOps Community (https://linktr.ee/mlopscommunity)
• AI Quality Conference (https://www.aiqualityconference.com)
• Evaluation Survey (https://hq.yougot.us/primary/WebInterview/3AW6LW5D/Start)
• RAG failover talk from Jerry Liu (https://home.mlops.community/home/videos/a-survey-of-production-rag-pain-points-and-solutions)
• Prompt Templates the Song (https://www.youtube.com/watch?v=g6WT85gIsE8)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-264.md) | 441 | 5 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing good, Daniel. How's it going today, man? It's going great. I just landed in Boston, and you were texting me, like, "Hey, Demetrios from the MLOps community wants to hop on and record an episode", and I was like, "I've got to get out of this train station." So I just found the nearest stop and got out, and I don't have my normal setup, so I probably sound weird, but I was like, these are the best times, when we get to have our friend Demetrios on. How's it going, man? You guys can't get away from me. I've blackmailed your boss into letting me come back on here. Standing invitation, absolutely. What's up, man? How are you doing? I'm so excited, because I try not to abuse this standing invitation. I had a great time the last time I was on here. We had a bunch of laughs, and a lot of people reached out to me because we had this RAG-versus-fine-tuning conversation, and I think the pieces have kind of fallen, the cookies have crumbled. It feels like fine-tuning is not as popular as it was. I don't know what you all are seeing out there, but RAGs are the go-to these days. Yeah, that's what I'm seeing. Everyone's RAGging each other all over the place. And along with that, there's sort of, in my mind, developed this category: so there was RAG, where of course you're augmenting the generative model with data, but not in the way that people typically think of fine-tuning; you're just doing retrieval. But there are also these other sorts of workflows, around calling external tools, or the neuro-symbolic stuff that we're seeing, combining a rules-based algorithm or function, or a traditional quote-unquote machine learning algorithm, with generative models, maybe to get the inputs; and then connections to databases. All these different ways. It kind of seems like people are figuring out that generative models are great at being assistants and automators, but not necessarily predictors, or other kinds of functions, like analytics types of things. I've noticed there's been a new term coined, at least for me: RAG-as-a-Service. Have you guys run across that? RAG-as-a-Service is now a thing. Is that acronym just... Don't even try; you're just going to strain the vocal cords if you do that, man. Yeah, RAG-as-a-Service. We'll RAG you if you bring your thing. We're going to RAG you all over the... yeah. Well, I think what you're saying, Daniel, really speaks to something I've been seeing too, which is the maturity in the last, whatever, six months. It's become very clear that there are traditional ML workloads and use cases that are kind of always going to be traditional ML workloads: think about your fraud detection models, or churn prediction, or the recommender system even. And then you have your generative AI workloads or use cases, and that's something like transcription, or you have the LLMs, which are doing all sorts of stuff, but RAGs are probably the biggest ones in that, where you get that co-pilot; or code generation, I think, is a huge one. There's not such a big overlap where you're saying, okay, the generative AI use cases, or the generative AI models, are going to dethrone the traditional models. I agree with that. Yeah, definitely different use cases. One thing: we had our first GenAI Mastery webinar with the podcast. When was that, Chris? A couple weeks ago or something like that? Yeah. And we were talking about text-to-SQL specifically, and how for analytics, SQL is really good, especially descriptive analytics and aggregations and all of that. It just doesn't make any sense to take a big table, somehow figure out how to dump its contents into a prompt, and have a model reason over it, because it's probably going to get it wrong anyway; and also there's this existing tool, which can be called, which is really awesome at doing those things. I love the idea and the exploration that's happening right now on how we can merge both of these worlds: seeing what different parts work well together, and which combinations of traditional ML plus generative AI can go together. And I know that you do a series of surveys with the MLOps community. I think the last time we talked, we discussed some of your survey work and some of the interesting findings, but you've gone through other iterations of this, right? So are you seeing interesting things pop up as the community around this technology matures? So, 100%. I just come on here when I've got survey insights to share; I guess that's what I'll be known for. Whenever we have a nice survey of AI, I'll come and share it with you all. But this one is cool, because this time we did an evaluation survey, and we launched it when we had our virtual conference, which had a huge turnout. It spanned over two days, which was awesome, by the way. It was great; you were part of one of the past ones, right? And Daniel, you had an awesome spot. But for the two-day span, we tried
something different. We said: what if we do two days, but since it's virtual, nobody has to fly anywhere? So instead of trying to watch a live stream for eight hours a day on a Thursday and then a Friday, why don't we just do two Thursdays in a row? Then you don't have to feel like, "Wow, I just had 20 or 30 percent of my week eaten up by that 8-hour or 16-hour live stream." Now you can tune in and tune out on a Thursday of your choosing. We launched the survey there, and the response has been amazing, because normally we'll get maybe 100 to 150 people to fill out the survey; this time we had 322, which is super cool to see. So let me give you some of the clear insights. One: there's budget being allocated toward AI these days. I don't think that's going to surprise anybody. The fascinating part is that about 45% of the respondents said they're using existing budget, and then a whopping 43% said, no, we're using a whole new budget. So you've got exploration happening in generative AI like never before. When it comes to that, one other takeaway has been that the MLOps and AI/ML engineers are really trying to figure out what the biggest-leverage use cases are, and how they can explain that. I think what we're seeing is that a lot of companies are open to the exploration right now, and they're open to letting people say: all right, cool, what is most valuable for our teams and our company? Is it a chatbot? An internal chatbot? An external chatbot? What does that actually look like? What is the use case? It's kind of funny; we actually talked about that a little bit last week, Daniel and I did, in terms of trying to get non-technical people engaged, and I think there are organizations all over the world right now doing exactly that. To your point on the result you're seeing, there's a lot of effort and a lot of money being thrown at "how do we start doing that?". And the shining star here was just RAGs. Obviously, it's very clear everybody's using RAGs, and the participants self-identified as being intermediate with RAG; that was the majority. We had about 31% saying "we have some experience with LLMs and RAG", and only 6% saying "we are at the frontier of LLM and RAG model innovation." [Music] What's up, friends? There's a new book out there called The Hacker Mindset. It's a productivity cheat code to unlock new levels of success in your career, in your creative pursuits, and in your personal growth. This book is about leveraging the principles of white hat hacking and applying those skills to the broader world. It's available for pre-order right now, and it's not your typical productivity guide. Written by Garrett Gee, a seasoned white hat hacker with over 20 years of experience, this book reveals the secrets of hacking and how you can apply those skills to overcome obstacles and achieve your goals. So don't miss your chance to get ahead: get this book, The Hacker Mindset. You can pre-order your copy today at thehackermindset.com. Be among the first of many to tap into this power of hacking for your success. Join the movement and embrace a new way of thinking. Again, that's thehackermindset.com. [Music] So let me ask you guys a question, get some opinions going here. With all of these assistants and chatbots, and all the other kinds of focused things in GenAI using RAG, do you think those are for more general use cases with some domain knowledge mixed in? Or, going back to the last time we had you on the show, when we were talking about fine-tuning, do you think highly specialized fields will still stick with fine-tuning instead of RAG? In other words, does the degree of
expertise required, if you will, to get a job done productively make a difference in whether you go RAG or fine-tuning, or do you think it has nothing to do with that? I would love to hear what Daniel has to say in a minute, but I've heard it said that fine-tuning is for form. So if you're trying to get a different form on the output, or, for example, if you're trying to get functions (GPT functions are a perfect example of this), if you're trying to get a homegrown model to do that type of thing, then fine-tuning makes sense. Otherwise it's not necessarily the good call, unless you're using a very small model. I think the question there is: what's the trade-off you're making? The other thing that's probably worth talking about, when it comes to the difficulty levels I've seen, is people who want to go use a small, domain-specific model, one that is very fine-tuned and distilled, on their own infrastructure, and they need a whole team to support that. That's hard mode; you're playing on hard mode. Contrast that with just a GPT-4 call: that's a whole different level of the game you're playing. So I kind of look at it that way: how much are you willing to trade off when you're trying to figure out if this is the way forward? I would agree with that. The only thing I would add is that there is still, at least as far as I've seen with our enterprise customers, an inclination that they need to fine-tune. That's still a kind of general "oh, we need to do that at some point", and once they solve a few of their use cases without it, they disillusion themselves of that notion in many cases. But the good thing, like you're saying, Demetrios, is that you can probably prove to yourself, without a huge amount of effort, using an easy-to-use API, whether or not you need fine-tuning, and do that in a day, versus immediately jumping to fine-tuning: how do we get GPUs, what model server are we going to use, and all of that stuff. Like you say, that gets very much into hard mode very quickly, and you often don't need it to validate your use case. And even if you do fine-tune, sometimes it may be down the line, when you've been running the pre-trained model for quite some time and you actually have a good prompt dataset to fine-tune with, because most people don't start out with that either. That's a huge point too, and this is probably the biggest unlock that happened: now that anybody can use the OpenAI API, you can quickly see if there's value in that crazy idea you have, and then you can go down the line. All right, now I'm going to use an open source model, which is turning the knob to something harder; or maybe it's not even using an open source model, maybe it's just, "Can we get the same results with a smaller model from OpenAI?" So instead of GPT-4 we go with 3.5 Turbo; can we then go to an open source model and get the same results? I almost look at it like a spectrum of how difficult you want to make your life, and how much upkeep you're going to need, and all of that. But as with everything, there are benefits if you go to that very small model, if you need it; you just have to really play it out and see if you actually are going to need it. Speaking of another survey, just to make it an increasingly survey-driven show... I don't know if you saw the... Survey says! Yeah, survey says... the Andreessen Horowitz post. They did another sort of survey, which was kind of interesting. I forget the range of participants that participated in the survey; we can link it in our show notes. It was just posted March 21st. We're recording this maybe
week a little more, than a week later and um was about, Enterprise so 16 changes to the way, Enterprises are building and buying, generative Ai and one of the things that, they specifically highlight there was, Enterprises are moving towards a, multimodel future and specifically a, multimodel future driven at least, partially by open model and so I think, the other kind of interesting Trend that, you're seeing is they have like graph, with like how many model providers are, people using per company or whatever and, you see like three four five like um in, many cases and also a high adoption of, open models and I think what they're, trying to draw out is like some of that, is maybe driven by security privacy, things but I think also it's driven by, like control and flexibility and once, people start realizing I also find it, still a pretty big misunderstanding that, people have that all of these models, sort of behave the same and in reality, pretty much every model has a character, of its own and it specific behavior and, even just switching from open AI to an, open model for like text to SQL for, example a model that doesn't do other, things well but does that really well, can prove to be really useful or maybe, it's a specific language thing like, we're doing a mandarin chat right now, with one of our customers and so whether, it's language or whether it's task um, people are think I think finding out, that their future is multi model or, multimodel provider or whatever mainly, because of that behavior thing but also, because they can have some control over, when they use this model or when they, use that model and kind of create the, the mix that's right for them and that's, kind of a way it's like a route around, fine-tuning in some ways because you can, kind of assemble these reasoning chains, even with multiple models involved that, do very specialized tasks and that can, kind of help you avoid spinning up and, running up your own fine tune that does, this 
very you know unique reasoning, workflow you can kind of bring all the, experts in bring all the expert models, to help you I can't help but Wonder um, you know like that's a built-in, capability that you have at prediction, guard but I I think that there's a, maturity issue there with a lot of, organizations on getting to that point, and so I'm what I'm seeing is very, mature organizations are going exactly, to that having multimodels capabilities, and they have the ability to distinguish, between Which models they should use for, which circumstances I suspect and maybe, there's some survey data on this but I, suspect that that's a still a fairly, small group of of even Enterprise, organizations that have gotten to that, level of understanding of what they can, do and I think there's a spectrum, falling off from there that the bulk of, the world is in right now in terms of, trying to figure out how to make it work, for them let me twist some statistics to, play in your favor hold on I'm gonna, crunch some numbers awesome I love it, live uh you're probably not using an AI, model to do it but uh yeah I'm gonna I'm, gonna weave that that narrative that, you've got Chris and I'm going to go, with it with the survey data as you're, asking for no but I actually the biggest, question as you guys are talking about, this that goes on in my head is you know, what Engineers really do not like and I, think it makes them very anxious is, having a single point of failure and so, if you are relying on open ai's API and, you have a lot riding on that where does, that all go if the CEO gets ousted and, so I imagine that a lot of people, thought twice after there was that big, drama that happened and people started, thinking you know what maybe we should, try and have a few redundancy options, just in case now you do have to have a, bit more maturity to say okay I can't, use the same prompt as I use always so I, have to have this prompt Suite or prompt, tests or prompt templates 
and I think that's another thing that's happened since the last time we talked, when we had the conference, trying to get people... well, prompt ops is one, agent ops is another. I created a song called "Prompt Templates"; maybe we could play it real fast.

As someone who's in kind of the defense industry, I'm for "agent ops," because it sounds bad, doesn't it? Oh, I thought you were going to say you were for prompt songs, which would bring smiles to people in defense. He is... I don't know if people can see this, but he's got a wonderful shirt on, that is for sure. And being in the defense industry, I don't know how you can get away with that. Work from home; work from home is how I get away with it. For those who aren't watching, his shirt says "I hallucinate more than ChatGPT," which is a classic shirt, which Demetrios sent to me, I have to say, so thank you very much. I love this shirt. I love that you wear it; that makes me very happy.

The other thing I wanted to mention about the survey, and then we can move on and keep talking about other topical issues of the day, is the data that people use and the data with which we evaluate the output. It seems like people just don't know what's going on there. We haven't figured that out yet; there's no consensus, it's not really clear, and the classic datasets, the classic evaluation pieces that you'd use, don't really hold up. So everyone has to have their own data that they've created and that they're testing the output against, but it's really hard to do that at scale, and it's really expensive also. That's what we saw in the evaluation survey data: you've got to handpick these, you've got to match them up, and the testing datasets that you create are human-curated. We had 42% using data they've created as their datasets to evaluate whether the model is working or not.

Yeah, that's crazy. And you mentioned expensive: it could be monetary, but it could also be iteration time. Back when we were creating machine learning models, which lots of people still do, because it's the thing driving basically all the predictive stuff, you could run your model and evaluate it in maybe a few seconds, a couple of minutes. But here, when you're running against an API with variable latency, or each execution of your prompt chain takes 15 seconds, then even if you only want to run that over 100, 200, 300 example reference inputs, all of a sudden your iterations become really, really slow. That's something I've noticed people struggle with: making their evaluation quick enough that they can iterate and feel like they can try a lot of things. Even if they have a big budget to try a lot of things, that iteration time is really frustrating. And maybe there are other people involved who aren't technical, and they don't want to think about concurrency and Python; they just want to go into an interface and try some stuff. So you've got all these things mixed together, which makes it a bit of chaos in many cases.

That's so true: the iteration speed, the time. And we see here, this is crazy, that 72% of ground-truth labels were manually labeled by humans. To have to go and do that, and then also ask how often you're doing it... there are so many questions and so many unknowns about what the best practice is. That's one thing that came up in the challenges: a lot of people called out something we synthesized as "lack of guidance." Nobody's saying this is the best practice, this is what we've seen works
really well for us, because maybe some people say, well, this worked well for us sometimes, and you can try it and see if it works well for you too. That's kind of the state of the industry right now.

There's something I want to tie back into something else we've been talking about lately, and that is the fragmented nature of the community. It's another thing Daniel and I have talked about recently: we do have communities, but we have multiple communities, and in many cases (not in your case, but in many) they're very platform-dependent, vendor-specific. Compared to a lot of programming languages, that makes it harder for people to come in and find specific best practices. So I'm actually not at all surprised to hear the survey playing that out; I think it's a natural fallout of the challenges we're having with community in general.

So, if I'm understanding this correctly: because a lot of the communities are being built around certain tools, you have the best practices for those tools, but not necessarily for the industry. You can't generalize those best practices.

Yeah, and I think the different channels through which people are communicating naturally develop their own bias. I don't mean bias in a bad way necessarily, just a bias toward emphasizing certain things. You get into the research community (we had a great conversation about that) and people are talking about activation hacking and representation engineering, but that's not really talked about if you're over in the LlamaIndex Discord or the LanceDB Discord or wherever. Some of that's driven by the focus of what those tools do, but also by where people are coming from: the indie hacker building apps, or the rigorous academic side, or the enterprise side that really just wants to get something into production. There are all these different slants people are coming from.

That is so fascinating to think about: how each of these communities has its main focus. There's so much surface area, and so many different areas you can go and explore, that each community is exploring its own area. If you go into one, you can tap into what people are talking about in that area, whereas if you go into another community, it's a question of what's going on here, what's the focus of this community? This is a different outcome from what we've seen if you step outside the AI/ML world and look at computer science and programming communities generally. There's usually a place to go where you learn the same sets of skills and values, and that's a little different from this. It's been one of the challenges the AI/ML world has struggled with a little bit. So, like I said, I think your survey captured that essence.

Thank you for sharing that with me, because I'm going to steal it and say it a bunch; hopefully you don't mind. You didn't put a trademark on it. Say it all you want. It's a great insight. I've seen it just in the MLOps community: we have people who are really trying to productionize AI, so what people there talk about is really pragmatic and practical. How can I get this being used in my company so that I can either save money or make money? Money is the ultimate metric there. If you go into these different communities, say the LlamaIndex community, there's a lot of talk about RAG. We actually had Jerry on at the conference, and he showed a slide that I thought was incredibly well done. It wasn't him that made it; I can't remember
the person who created it, but it was something like the eleven ways that RAG fails, and it laid out all these different failure modes you need to be aware of. One that's coming to light as really important is getting the retrieval evaluation correct, because if you're not retrieving the right thing from the vector DB, it doesn't matter what the output of the LLM is; if you give it some kind of crap, it's not going to give you anything good. The other piece that I think is fascinating is: how do you make sure that all this data you've got in the vector DB is up to date? We've talked about this a bunch, and again, the MLOps community is very industry-focused: how can we make sure we're productionizing this? So in a production scenario, you've got your HR chatbot using a RAG system, and you say, all right, cool, we've updated the vacation policy; we went from a European vacation policy to an American vacation policy. And you've got Daniel over here saying, all right, HR chatbot, how many days of vacation do I have? How do you make sure that everywhere in the vector database it's now updated to the American vacation policy? Okay, cool, in the vector database maybe you were able to scrub everything, or you just pull from the most recent documents. But you were a good engineer and made sure to pull in a bunch of different data sources, so it turns out you're grabbing some data from Slack, and people there talk about how it's still the European vacation policy, and now Daniel's been quoted as having 30 days of vacation when really he only has two.

That's unfortunate. Yeah, actually, this is a conversation we just had the other day with a customer, because some of these databases, depending on what you go with... if you go with a plugin to an existing database, maybe there's more traditional updating and upserting functionality; some of these are just "put a document in, get it out, delete it." There has to be a layer of logic on top of these that actually helps you do some of that. In their case, it was: we want to take in all the articles we've had on this website, and that's going to be it. And then: well, what if we update those? Do we just blow everything away and redo it? My answer ended up being that, with the amount they had, if you can have something running in the background, honestly, that's probably the safest thing. It's going to take a couple of hours or something, but at least you've made sure everything is synced up. In that case, they could just version the files of the embedded database. It's an interesting set of problems.

It is a fun one. And you know what I'd also love to explore? The idea of RBAC, role-based access control. How are you seeing people go through that and do it well? That feels like another one that can be really misused. For RAG it's one thing; for text-to-SQL some of that can maybe be kind of nice, because if you're embedding some function in an application that already has RBAC on the database, then you could use that credential, and hopefully it carries through. But on the vector database side, we've interacted with people who have, say, an internal chat and an external chat, where the external chat should use a subset of the documents from your internal chat. In that case, it's bifurcated rather easily, and that's somewhat easy to deal with, because you could just have two tables or two collections, whatever that is in the vector database, and merge the retrieval, or use them selectively in certain
ways. But as soon as you then have many, many different roles, or even user-specific things... I don't know many vector databases where, however you manage that, it would be transparent to the vector database; you'd have to somehow manage the metadata associated with it. There may be certain people we'll have to follow up with. Chris, we haven't had Immuta on for a while, and they're always thinking about role-based access to really sensitive and private data. I'm sure there are people doing advanced things, but in terms of the main tooling that people are just grabbing off the shelf, a lot of that logic is just absent.

Exactly, yeah. I want to hear if anybody is doing RBAC and has figured it out. That's one thing I'm fascinated by, because, again, going to the community I run with, productionizing kind of comes hand in hand with it.

Yeah, and it could also have to do with the guardrails you put around the large language model calls. If it's a public-facing chat or something like that, you may want to filter out PII, or prompt injections may be a very important thing, versus internally, where ideally you trust people, and as long as you know how the data is flowing, there might not be as many restrictions on what can go in or who's accessing things. It's interesting.

This is a Changelog news break. Pierre-Carl Langlais announcing the release of Common Corpus on Hugging Face. Quote: "Contrary to what most large AI companies claim, the release of Common Corpus aims to show it is possible to train large language models on a fully open and reproducible corpus, without using copyrighted content. This is only an initial part of what we have collected so far, in part due to the lengthy process of copyright duration verification. In the following weeks and months, we'll continue to publish many additional datasets, also coming from other open sources, such as open data or open science." End quote. Here is more info about this massive dataset. Common Corpus is the largest public-domain dataset released for training LLMs. Common Corpus includes 500 billion words from a wide diversity of cultural heritage initiatives. Common Corpus is multilingual and the largest corpus to date in English, French, Dutch, Spanish, German and Italian. Common Corpus shows it is possible to train fully open LLMs on sources without copyright concerns. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

So, in addition to this evaluation stuff (we've spent a lot of time talking about data and evaluation and retrieval), what about the model side? Do you think we'll ever escape the world of Transformers, Demetrios?

This is something I've been thinking about a ton, man, and I've got some thoughts on this. Is everything that we're now doing in AI a band-aid, because Transformers just aren't the right tool for the job? Have you guys thought about this? It's like one big workaround, yeah, exactly. Am I crazy to think that?

I don't think so, actually. I was talking to one of our customers about this: they have so much logic around double-checking the outputs of models, or formatting the outputs of models. I'm talking hundreds and hundreds of lines of code, thousands of lines, all written around this sort of workaround, and it's because they're using a general-purpose model that you have to massage into how you want it to behave, right?

Is it a little bit ironic that we use RAG to
clean up the problems with Transformers? Is that what we're saying here? Oh, I get it: for the RAG, what we need is the Lysol wipes. There you go. But I often wonder: are we having to over-engineer this because of the core of the problem? It's like we're trying to put a band-aid on something instead of going and fixing the root of the problem. And right now it feels like there's nothing out there that can even stand a chance against the Transformer architecture, so of course we can't say, "well, I would rather use XYZ." But I just get the feeling that when we think about AI in 2024, the ChatGPT AI era, we're probably going to be laughing at the whole idea of Transformers. If in ten years we're looking back at this, it's going to be, "yeah, okay, Transformers were great, but they were a stepping stone."

I know that there's quite a bit of research going on into different types of architectures, and I know a number of organizations have been testing alternatives to Transformers in the last couple of years, but I don't think anyone's gotten there. Or if they have, they should reach out to us and let us know, so we can be talking about it here on these podcasts. I think there are a lot of folks out there really wondering what's next, because we're essentially taking one superset of architecture and doing everything we could possibly do with it; every big step forward in the last few years has been about what else we can do with this architecture. So at some point, I agree with you, Demetrios: something's gonna give, and we've got to try some new approaches in there.

Yeah, that's what it feels like to me: what's the next step? And I would love to hear from whoever, if there's something out there that feels promising. It's really exciting to me, but I don't know enough about that; it's very much the research community, which I don't get to spend a lot of time in. And I'm sure there are a bunch of false flags, where people get excited about something, and then it turns out that after you throw a bunch of GPU at it, it doesn't work out like we thought it would, or we saw promise but it didn't actually hold up at scale. So I understand that right now we're in the era of Transformers; I wonder how long we're going to stay in this era.

Not only around specific architectures in that capacity, but almost new approaches: for the first time in a while, neuromorphic computing is really rising again as a topic of interest. It's not there yet, but you're talking about architectures, on both the hardware and the software side, that are not specific to Transformers, or even to the GPUs underlying them. It's been interesting to see the maturity that's developing. You talked about exposure to research; even for me, it's the same case. You have all the pure researchers out there, but now we're starting to see them expand out in lots of ways and try completely different approaches. I'm pretty excited that we're going to start seeing some interesting results over the next few years as people look for alternatives across both hardware and software architectures. I think we're pretty close to a turning point.

Can you break down real fast: what was that big word you just used? "Anthropic"? What was that? I can't even say it; it trips up my tongue. Neuromorphic computing, I think, is what you're talking about. Neuromorphic computing... that is a big word. What does that even mean? I don't know, I've got to Google that real fast.

So, and I am the last person on the face of the Earth who should be trying to explain neuromorphic computing... I put you on the spot, though. Yeah, no worries. But having been exposed to it, the short version is almost like, you
know, in the earlier days of AI, in the marketing, people would talk about mimicking the neocortex of the human brain and things like that, and with these GPU- and Transformer-based architectures we'd all say, it's not really like the human brain. Well, the neuromorphic architecture actually is that: it's legitimately about how the architecture of a brain works. And I'm saying this knowing there are probably neuromorphic computing scientists out there listening to me right now, going, "oh my God, somebody take his mic away, that's a terrible explanation." But in my fairly primitive understanding, that's kind of where it is: how do neurons really work in real life, and how do you do compute artificially in that capacity? I know that there's definite interest in doing that. I know Daniel has a relationship with Intel through Prediction Guard, and I know Intel has an interest in that field; I think they're one of the leaders in it.

I Googled it: Intel's all over the first page. Or rather, I Perplexity'd it, and it was all cited from Intel. That is very true. I would hesitate to say it out loud, because I'm probably wrong, but they may very well be the global leader in that space right now. Yeah, makes sense. Well, that is awesome. I'm glad you taught me about that; I appreciate you teaching me "neuromorphic." Now I can say it properly and everything. Well, you know what, now that you say that, we're going to have to have a show on neuromorphic computing coming up pretty soon. Yeah, exactly, let's get down into it. I want to listen to that, for sure. We'll dive into that; Daniel can reach out to his contacts there. Oh, that's classic.

Well, dude, thank you very much for coming on as we wind up here. It's always a pleasure. Anyone who has been listening to the show for long knows that you join us regularly, and it's always special for us; we have a great time with you, so thanks for coming on today. We will put the survey and some of the other topics you brought up today in the show notes, so people can join in. And folks, if you haven't gotten into the MLOps Community podcast that Demetrios hosts, you definitely need to check it out. It is an awesome podcast, highly recommended by both myself and Daniel, so I hope people join you over there.

Oh, and can I also plug: we're going to have an in-person conference, and I'm really excited about that, and a little bit shaking in my boots, because June 25th is going to be our first in-person conference ever, and it's going to be all about AI quality. We've got some super cool speakers coming. We managed to get the CTO of Cruise to come and talk about what they've done since their little mishap, in regard to making sure that their AI is quality. There are so many great people; you can go to aiqualityconference.com, and we'll throw the link in the show notes too. I'm very excited for it. The speakers are going to be awesome, the attendees are going to be amazing, but I think what I'm most excited for is that we're going to have all kinds of fun, random stuff. You can imagine it's going to be a conference, but it's probably going to be more like a festival. I may have people riding around on tricycles giving out coffee, or we'll have a little DJ area, or a jam-band breakout room, a bunch of Legos lying around; I don't know yet. So if anybody has any ideas on how we can make it absolutely unforgettable, I would love to hear about that too.

And I'm going to throw out one last plug for you: when you say that, I believe you, because, and I know you've heard me say this when we were off the air, but just in case anyone doesn't know, Demetrios is the funniest guy in the entire AI world, and he does hilarious things. If you don't follow him on social media, you are missing some really great content. So, anyway, I just wanted to say that people should show up at the conference just to see what you're doing, if for no other reason; even aside from the cool content, they'll enjoy it. So thanks for coming back.

I mean, there are going to be great speakers, and you're going to learn a ton, but there's also going to be some really random stuff where you'll be like, what is going on here? And hopefully you really enjoy it, because that's kind of what I'm going for. Okay, well, thanks a lot, man. I'll talk to you next time. Likewise, see you.

All right, that is Practical AI for this week. Subscribe now if you haven't already, and head to practicalai.fm for all the ways to do so. Join our free Slack team, where you can hang out with Daniel, Chris and the entire Changelog community; sign up today at practicalai.fm/community. Thanks again to our partners at fly.io, to our beat-freaking residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now. We'll talk to you again next time. |
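The role-based access problem raised in the episode above (an internal chat that may see more documents than an external one) is often handled by filtering retrieval candidates on role metadata before similarity ranking. Below is a minimal, dependency-free sketch of that idea; the document set, role names, and two-dimensional "embeddings" are all made up for illustration, and a real system would use a vector database's metadata filters rather than an in-memory list.

```python
# Sketch of role-based access control (RBAC) for RAG retrieval: filter
# candidate documents by the caller's roles *before* similarity ranking,
# so restricted content never reaches the LLM prompt at all.
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 for a zero vector.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy "embedded" documents with role metadata (vectors are made up).
DOCS = [
    {"text": "US vacation policy: 10 days", "roles": {"internal"}, "vec": [0.9, 0.1]},
    {"text": "EU vacation policy: 30 days (outdated)", "roles": {"hr-admin"}, "vec": [0.8, 0.2]},
    {"text": "Public FAQ", "roles": {"public"}, "vec": [0.1, 0.9]},
]

def retrieve(query_vec, user_roles, k=2):
    # Keep only documents whose role set intersects the caller's roles,
    # then rank the survivors by similarity to the query.
    allowed = [d for d in DOCS if d["roles"] & user_roles]
    allowed.sort(key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in allowed[:k]]

# An external chatbot only sees public docs; an internal one sees more.
print(retrieve([1.0, 0.0], {"public"}))               # ['Public FAQ']
print(retrieve([1.0, 0.0], {"internal", "public"}))   # US policy ranks first
```

The design point is that the filter runs before ranking, so documents a user may not see are excluded up front rather than redacted after retrieval, which is how the "two collections" approach discussed in the episode generalizes to many roles.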
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Should kids still learn to code? | In this fully connected episode, Daniel & Chris discuss NVIDIA GTC keynote comments from CEO Jensen Huang about teaching kids to code. Then they dive into the notion of “community” in the AI world, before discussing challenges in the adoption of generative AI by non-technical people. They finish by addressing the evolving balance between generative AI interfaces and search engines.
Leave us a comment (https://changelog.com/practicalai/263/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Ladder Life Insurance (https://ladderlife.com/practicalai) – 100% digital — no doctors, no needles, no paperwork. Don’t put it off until the very last minute to get term coverage life insurance through Ladder. Find out if you’re instantly approved. They’re rated A and A plus. Life insurance costs more as you age, now’s the time to cross it off your list.
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• NVIDIA GTC March 2024 Keynote with NVIDIA CEO Jensen Huang (https://youtu.be/Y2F8yisiS6E?list=TLGGFIbdOwQMZx4yODAzMjAyNA)
• 5 Forces That Will Drive the Adoption of GenAI | Harvard Business Review (https://hbr.org/2023/12/5-forces-that-will-drive-the-adoption-of-genai)
• Here’s why AI search engines really can’t kill Google | The Verge (https://www.theverge.com/24111326/ai-search-perplexity-copilot-you-google-review)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-263.md) | 247 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another Fully Connected episode of the Practical AI podcast. This is the episode in which Chris and I keep you fully connected with everything that's happening in the AI world. We'll hopefully talk through some of the news and also keep you up to date with some of the latest learning materials. I'm Daniel Whitenack, CEO and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? Doing well today, Daniel, how's it going? Oh, it's going great. My wife and I have been traveling a bit around the UK, which has been enjoyable other than the train delays and rain and wetness. Actually, I think today we saw all of the above: sunshine, rain, hail, snow, the full gamut. It's amazing how you can have so much weather on one little island. Exactly, exactly, variety, go a few miles and it changes. Yeah, variety. Well, there's definitely been a good variety of interesting AI things happening, as there always are. One of the things that was kind of interesting to me, which was circulating around my feeds, was NVIDIA, generally because they had their GTC (I always want to say GTX, but I think that's the card) event, which is their yearly innovation conference sort of thing. But Jensen, the CEO, was making some comments, I don't know if it was actually at the event or in another venue, but making some comments about kids learning to
code, and his comment was related to, I forget how he phrased it, but basically the gist being kids shouldn't learn to code anymore, because AI is going to do that fairly well. So I don't know, I don't have kids, Chris, but what is your thought as a parent, as the designated father on this podcast? Well, I think he's right in the long run. I think he mentioned that in the keynote; there was a little section where he covered that, and he said, always in the past we've taught our kids that they need to code, you know, in recent years, and he said now AI is changing all that. And I don't think that's news to anybody in the larger sense. You know, we use AI for coding and technical controls and all sorts of stuff all the time, and that's just built into our ecosystem over the last few years and is increasing constantly. I have to admit he was kind of like, we're at the moment, you know, and I'm not quoting him, but he was kind of like, this is the moment, we're not teaching it anymore. And I'm kind of like, maybe, maybe, because I know that even with us, as adopters of AI technology for coding all the time, I mean, if there's anything that we do, it has to do with that, and there are all sorts of complexities and bumps in the road and things like that. So in the large, is it moving that way? Of course; I think everybody would agree with that. But I don't think that we've all just arrived there because Jensen said it in a keynote. Personally, and I have great respect, I don't mean to be negative, it's just that it was like flipping a light switch, the way he put it, and I don't think it is; I think it's a very, very slow flip with lots of nuance. I guess one way to rephrase the question would be: at this moment, were you to have a child going into college, let's say, would you encourage them to pursue software engineering or computer science or this sort of thing? I would. I think that in the future, and actually I've spent a lot of time thinking about this topic in general, hundreds and hundreds of hours, and this is one of those things where I think we're accelerating into the future. And with AI capabilities year by year making massive changes to how we live and work, humans are going to have to be fairly dynamic in how they do things, and one of those skills continues to be various technology orientations. I don't think that's going away anytime soon, though AI will continue to change where those boundaries are. So in the spirit of, we're always going to have to learn new things anyway, I don't see any problem in diving into technology and coding today, with the recognition that the technology will constantly change underfoot and you're going to have to change with it. And actually, I'm one of the industry advisers for Georgia State University's computer science school, and I would not dissuade any of those students from pursuing a computer science curriculum at this time; they just need to be dynamic enough to change over the years. Yeah, and maybe part of it is also these tasks: like, if you go to a three-week boot camp on front-end engineering or something like that, probably many of the things that you would learn in such a boot camp, even though I think there will be a need for front-end engineers for some time, and I think that's true, that sort of basic level you would get is maybe at the level where it's going to be more and more cookie-cutter sorts of things that an AI system is going to be able to do, guided by the hand of maybe a less technical person like a designer rather than a front-end engineer. Right, but sure. I run an AI company, and we really need software engineering and programming, so I think at the minimum all of these AI systems that are coming about are going to need people to build them and maintain them, and infrastructure that operates well and scales. Systems are still going to have to scale, and people are going to have to worry about distributed systems and all of these sorts of things that are really hard engineering problems, from my perspective. I agree. The human-algorithm partnership is going to go a long way for many years, but it will change all the time. And that's not limited to our fields; this is something that all industries are going to be facing, that constant change in what the partnership looks like. So unless somebody sees an area where AI is, in the next very few years, going to completely take over all human activities in it, I don't see any reason to avoid these things. I think we're in a perpetual learning mode going forward. Yeah, I've actually been thinking about this sort of dynamic a little bit after we've had a couple of guests on the show who are working on these systems, like PromptLayer and others, where they're managing prompts and reasoning workflows at the intersection of domain experts and software engineers, so the technical side, kind of where software meets domain expertise. I was on a panel at Purdue University, speaking of students and that sort of thing, and one of the questions was around this: you know, hey, I went into this program thinking maybe I would try to become a data scientist; is that still a thing, or should I be thinking about something else? I think one encouraging thing for people is that if you're a data person, no one really has more than a year of experience architecting and building these types of generative AI systems, so in a sense you could do one really compelling project and be ahead of most of the people trying to get these jobs, right? So in one respect that's really encouraging. I do think it is shifting, though, what a data scientist means, and this coming from a data scientist who has been a data scientist for ten-plus years. I think there is this kind of hollowing out of the middle. You had three things: on one side you had software engineering and infrastructure, and that was the land of software engineers and DevOps and all that; on the other side you had domain experts; and in the middle you had data scientists, who kind of translated the domain expertise into predictive models and machine learning and such, which then got handed off on the other side to integrate into software. And I think what you see is this hollowing out of that middle, where domain experts are getting much closer to the software side. So I think there are two takeaways from that, from my perspective. Either you could go into the AI engineering side, which is maybe less hardcore infrastructure and low-level programming and more almost narrative writing of prompts and creating of these reasoning chains and all of that sort of thing, and become amazingly good at that, and rely on good software and infrastructure people on the other side. Or, because everything is still software, there's still going to be a really heavy need for people who can make your chains of reasoning and hosted models and software deployments actually go well. Right, totally agree. And you said one thing in there, well, you said several things that really resonated with me, but one thing that jumped out that I'd like to reiterate is the notion that if you do that one big project, you're really out in front again. It's one of those things where, because it changes so fast right now from month to month, it doesn't take much to do the new thing that's just coming out, and the people who might have had many years of experience in the title haven't had experience in this new thing, and that keeps on happening. And so the notion of being a, you know, title, whatever your title is, for X
number of years is really losing a lot of meaning, in that you might have been in the space, but with the space increasingly evolving, you can kind of catch up into modern experience pretty quick. So, um, people are very worried about jobs in the space, but that's a little bit of a way where you can be super competitive and jump up in the area of your interest, by leaping to the front and disregarding the traditional metrics that we tend to use. [Music] If you're anything like me, you have a certain tendency to put things off until the very last minute: seeing the dentist, going to the doctor, home improvements, that never-ending chore list of yours. And while most of the time it works out just fine, the one thing in life that you really cannot afford to wait on is setting up term coverage life insurance. You've probably seen life insurance commercials on TV and thought, yeah, I'll look into that later. No, later doesn't come. This really isn't something you can wait on. Choose life insurance through Ladder today. Here's what we love about Ladder, and why we allow them as a sponsor: they are 100% digital, no doctors, no needles, no paperwork when you apply for $3 million in coverage or less; just answer a few questions about your health in an application. Ladder's customers rate them 4.8 out of five stars on Trustpilot, and they made Forbes' best life insurance 2021 list. You just need a few minutes and a phone or laptop to apply. Ladder's smart algorithm works in real time, so you'll find out if you're instantly approved. No hidden fees; you can cancel any time and get a full refund if you change your mind in the first 30 days. Ladder policies are issued by insurers with long, proven histories of paying claims; they're rated A and A+ by AM Best. Finally, since life insurance costs more as you age, now, yeah, right now, now is the time to cross it off your list. So go to ladderlife.com/practicalai today to see if you're
instantly approved. Again, that's ladderlife.com/practicalai, L-A-D-D-E-R-life.com/practicalai. [Music] Well, Chris, as people are diving into their first projects and areas of interest and new things in the field, one of the interesting things, and a kind of learning resource that we maybe don't spend a ton of time talking about, although we are actively engaged in it, is community around the AI space, and where people can connect with that sort of community. We've produced a lot of content, but we've also engaged in various spheres over the years, and there might be a lot of new people, let's say they're web developers or backend engineers or whatever, and they're getting into this space, they're doing projects, and their normal programming conference isn't, or maybe has only some AI topics, but they're wondering: is there a better place to find people doing these sorts of projects? I know you were at one point involved in the meetup space, although COVID maybe had a little bit to do with the downgrade of some of those communities. Yeah, it's a great point, and it's evolved in interesting ways, kind of what you're getting at there. To start with the last thing that you said: for a number of years I ran the Atlanta Deep Learning Meetup, and the phrase "deep learning" is kind of antiquated now as well, and it kind of fell off when COVID hit, but we were really the preeminent kind of AI-oriented community in the Atlanta area, which is where I'm at. It's interesting, as another counterpoint to go into this: you and I met in a different community, the Go language community, because we were both Go programmers and we were kind of the two people thinking a lot about AI and data science in that community, so it was a natural thing for us to gravitate together. But interestingly, if you're really focused on different aspects of AI, whether it be generative AI or other fields of AI, since we've recently pointed out that not all things are just generative AI even though that's the hot thing right now, there are many vendor-specific communities. We have a podcast-specific community here where we engage our listeners all the time, and there are some platform-specific communities. But where we met, in the Go community, there was an overall community: whatever you were doing in the Go space, there was a larger community, and you kind of knew all the people and all the names that were there, and you would follow that and be a participant in it. Here we don't really have that; we have many, many fragmented AI communities, and we'll go to Hugging Face to get open source models and to see what's going on there, and there are lots of these smaller communities. But I would imagine that if you're coming into the space today and you're one of those people who really wants to dive into AI here in 2024, it must be very hard to figure out what space you should be in to make all the connections and to ramp up. Any thoughts on that? What would you recommend to somebody, Daniel, if somebody were to come into the space today? Yeah, it is a bit of a challenge, because it is a bit fragmented, and maybe we could split this up into a couple of kinds of engagement, one from a more technical side and one from a less technical side. In terms of architecting and building generative AI apps, or other kinds of AI apps, or fine-tuning models and that sort of thing, I think what I would generally recommend is starting out with some sort of learning resource, which is probably going to be on Hugging Face, LangChain, LlamaIndex, LanceDB, or one of the other vector database providers. These sorts of projects have really good tutorials and guides associated with them, so you start out more project-related; those are trusted projects and trusted platforms in the AI space. And then, looking at one of those projects, let's say you go and find a guide for setting up a multimodal RAG system to search over videos or images with LlamaIndex or something like that. Well, a lot of these projects, not all of them, but a lot of them, have some type of forum or chat interface that the community around the project gathers in, oftentimes Discord or Slack or a forum. So I think if you start in those spaces, like LlamaIndex or LanceDB, and look at a guide that is similar to what you want to build, you try going through the guide, but you also look to see if those projects have a Discord or Slack channel associated with them, and go ahead and log into those. And, you know, it's okay to lurk for a while, but as you're going through your example, and you don't understand this or you're getting that error, just go ahead and be brave and put something in those spaces. I've generally found them to be fairly welcoming. For example, if you go to LlamaIndex, there's a community page and you'll see right away: join Discord. There are many other such spaces; our friends over at the Latent Space podcast have a very, very active Discord server that they're running, we have a Slack channel associated with this podcast, and other projects like LanceDB have a Discord channel. These are generally people who are building projects within this sort of space, within this sort of topic, and they're generally open to, hey, I might not only be using LlamaIndex, but I could ask a question about choices of vector databases or choices of models. Everyone in there is working in this space and may have biased or opinionated thoughts on that, but you gradually learn and meet people in that way. So I think in some ways it's a little bit more project-related than overall community-related. And Hugging Face is a great community in the sense of GitHub being a community, but it's not where people are having all of these different conversations about specific projects and guides and that sort of thing; they're collaborating on models and datasets, but maybe not in an asynchronous chat sort of way. So what do you think about the social element? Because there's some great guidance there on learning and connecting on a project basis and stuff, but where would you go for the personalities, you know, for the friendships that you develop? How would you approach that, Daniel? Yeah, it might depend on people's personality and what opportunities present themselves. There are a good number of events gradually happening: our friends over at the MLOps Community have had a series of online virtual events, I know there was an AI engineering event out on the West Coast, and Hugging Face, I think, is doing some type of tour with demos at various locations in person. That's a really great place to meet face-to-face with people and interact and build relationships. In terms of personalities and that sort of thing, one thing you could do is look at our previous episodes of this podcast, and even if you don't listen to all the episodes, which of course you should because they're all great, or maybe some are better than others, but they're all pretty good, I think, you could look at the guests from the previous podcasts and go to LinkedIn, Twitter, Bluesky, whatever your favorite social is, and see if you can find some of those people on those platforms. Those are trusted people whom we've met over the years, and so, in an online sense, you can start following them and see who they are reposting and interacting with online, and that's kind of how your web of connections can form a little bit. So, I want to turn from the community questions that we were just talking about and spread it out. The community notion that we were just discussing was really focused on those of us who are embedded in AI work; we probably do it for a living, and this is very central to us. But most of the world out there does not fall into that category, and yet AI is still impacting their lives in tremendous ways, increasingly. One of the things that I have been keenly interested in lately is that, for the rest of the 99% out there who are not building their professional lives on AI in every moment the way we are, they still need some entrances into how to use this in a productive way. We are getting on the podcast with our audience and listeners, talking about Gemini and ChatGPT all the time, and these other 99%, they're hearing this too in the news, but they don't really understand it. They're probably not using it; they might have tested out one of the free interfaces here or there to see what it was like, but it's not part of their workflow. Right now we're seeing a period in 2024 where organizations are starting to explore and even demand that their employees start using these tools. They're making them available, but they're really struggling with adoption. I've run across all sorts of issues where hitting mainstream adoption with generative AI tools has been a tremendous challenge, and so I'd like to dive into that for a couple of moments and talk it over, where we're going, because I know the organization I'm part of is certainly interested in this topic, and I talk to people every day who are trying to figure out how we get it out there beyond our software developers and our data scientists. Any thoughts you have there? In terms of, if you have your
typical non-technical worker, a knowledge worker with a set of tasks every day, how do you start to crack that nut in terms of getting those people to recognize where some of these generative AI tools can help them? I have a couple of examples I'll go to in a moment, but I'm curious what you've seen out there, Daniel. Yeah, I think there's one side of it, which is maybe places for those people to start, but also an interesting piece of this is the mechanism for how that knowledge trickles into an organization, which I think is an interesting topic in and of itself. One pattern that I've seen a little bit at organizations is a couple of champions higher up on the ladder who see the vision of transformation, who see that this is going to be a transformative technology for their organization, and those leaders might take some type of course or certification or crash course on the topic. MIT has some AI for digital transformation or generative AI related offerings for non-technical people online, and I think maybe some even live; NVIDIA has courses like "What is generative AI" and "Generative AI explained". So there might be these leaders who see the vision for how this is going to be a transformative technology; they might do one of those things to understand it at a level that makes them comfortable, and I think part of the trickle-down is leading by example. When they're having interactions with their team or other teams, it can literally be: something comes up in a meeting, you're sharing your screen, and you just have a tab open that is ChatGPT or Claude or Gemini or whatever, and you go over there and you answer a question or get something done immediately, because you know how to interact with those tools to do something quickly. And that can be a light bulb moment for other people, where they see a person leading by example, using a tool, not in a "here's how you use this tool" sort of way, but really in the flow of how they're doing their own work. I think that seeps through; that's really impactful, because it shows: oh, this person who is influential in my organization is operating in this way and able to do these cool things with these tools; can I, you know, do that? I think some of it can also be a little bit directed, where you're having your one-on-ones, maybe with your direct reports, and they're asking questions they should be able to source the answer to, or accomplish very quickly, with these tools, and you can tell them, hey, there's a pretty quick way that you can get this summary or develop this outline for your presentation out of this article; let me show you how to do that. And then actually have them go to the site and generate the outline for the presentation based on some article, right there in your one-on-one, even to the point of encouraging people: hey, you should maybe just have this bookmarked or have it up in a tab. So I think that's how some of that trickle-down could happen, and how I've seen it happening: a foothold in these influential people within an organization, then leading by example, in a one-on-one sort of way rather than a top-down directive of "we shall now do things this way". Right, understood. I actually have an example to illustrate one of the things you were talking about there, from my own employment. We have access to ChatGPT, and we also host internal open source models, which we love, because that way you don't have to worry about sending proprietary information out, things like that. And so this morning, in my day job, I was working with a team of people, and there's a PowerPoint presentation that has to come out as one of the deliverables. As we were having a group discussion, sharing my screen, I was able to type some of the things we wanted to talk about into the model, generated dynamically while we were on a group call. I generated a set of talking points for the various issues we were addressing, basically a presentation within the prompts, if you will. I was then able to turn that presentation into VBA code, Visual Basic for Applications code, where it embedded the content in that VBA code. Then I was able to open up PowerPoint right there, copy and paste out of the prompt (this is totally non-technical, what we're talking about here), go to the Tools tab, down to Macros, open the Visual Basic editor in PowerPoint, which is available to everybody, paste in the code as a new module, and run it, and it produced our PowerPoint for our team right there. The whole thing took five minutes to get a 30-page PowerPoint set up. Now, there was a lot of manual tweaking to be done afterwards, adding some graphics and stuff like that, but we probably cut five to eight hours' worth of work out of our workflow by tossing the critical ideas into the prompt, turning them into that code, and copying and pasting it in. It doesn't take a developer to do that; anybody could do that. So that's one of many possible use cases where you're using it, you haven't replaced any of the workers, but you're accelerating everybody's productivity dramatically, and saving a lot of time. As I've been thinking about how we get more people in the world to use these technologies to their benefit, I think it would help to have a number of these typical persona use cases that many people might need. So there's your PowerPoint strategy, folks: right now, in whatever job you're in, you can do that if you
have access to one of these larger models. And then the other thing I wanted to dive into is, it's interesting, the emotional quirks. People are worried about everything from "will this take my job if I start using it and make me irrelevant" to wondering who's watching when they use it: can my boss see if I stumble, if I'm struggling with something? Who in my company is aware of what I'm doing? So there's a lot of FUD, fear, uncertainty, doubt, associated with the use of the tools, and I think part of addressing that is, in those trainings you were alluding to earlier, to be able to have discussions with people about their fears and see if you can get some interest and uptake by going right at the thing that's holding them back. A lot of times people think it's technical; it's not. The PowerPoint thing: you can have zero technical training and go do that, if you just know to open up that single thing in PowerPoint and, not type in, but copy and paste the code that was produced at the prompt. So I think there are hundreds or thousands of opportunities along this line that people could take advantage of. Any thoughts on what you might do in that way? It's interesting that you bring up the fear and uncertainty piece, because there are a lot of misconceptions, and it doesn't often work to just straight up invalidate them. Somehow, if you're thinking about this adoption in your organization and you're working with people, to some degree you have to find an entry point where there's less of this fear and uncertainty. Because I think all of us who are working with these models, and have been working with these models, recognize that working with generative AI models, prompting them, integrating them into your workflow, often isn't what you expect it to be going in, and you kind of have to build up your own intuition of, oh, this is how this model behaves, and this is how that model behaves, and this is how the prompting works. You have to build up some of that intuition before you get a sense of how they operate, but you're never going to build it up if you only focus on the use case that people have some fear over, like putting customer information into the interface or something like that. So I think to some degree you have to find use cases where people are able to safely interact with these models. It could be a private chat interface that you allow people to use; it could be a local chat interface like LM Studio or something like that, which you encourage people to use because it's local and nothing goes anywhere, and you can tell people, oh yeah, this is fine. Then they get a sense of the models, and you can go from there. So I think it's about finding that foothold, to some degree. One of the interesting things I've found is that people get disillusioned when they ask a search-engine-like question of these models and they just don't find what they need. And that's some of the intuition I was talking about: these models operate slightly differently than a search engine. That's a great point. Everyone had to build up a little bit of intuition, I think, when they learned how to Google things; there is a skill to how to Google things, right? And so there is a similar intuition here. I'm sure you've run into this many times: people ask questions in a business context and you're like, why didn't you just Google that? Well, maybe they don't have the intuition around how to properly (I've definitely seen this before) search the internet to find answers and self-serve. Actually, it's interesting, I found an article this week that talked about why AI search engines really can't kill Google. This was from The Verge, and it talks about search engines like Perplexity and You.com, and Google Gemini and ChatGPT to some degree. It's a really interesting article; people should look it up, and we'll include it in our show notes. It talks through some of the main use cases you might have learned to do in Google, like navigation questions, that don't really work so well in the current chat interfaces. So there's a different sort of intuition that needs to be built up, and one isn't just a drop-in replacement for the other. That's a great point, and not only are they distinct skill sets, but there is a superset of how you use them together, for their respective strengths. You know, a search engine's primary job, as the article notes, is to get you to a website. I'm still old school: I don't go through Google to a website that I already know, I will actually just type it in directly, because I know it. But my daughter, who is 11, she knows the website, she knows where it's at, but she still puts it in Google to go there, and her friends do that too; she uses it as a navigation tool, to the point you just made a moment ago. Whereas when we're prompting these models, we're really seeking information in a lot of ways; instead of getting to a website that has the information, we're getting the model to feed that information to us directly. And I personally tend to use both. It's very common for me to flip back and forth between Google and a large language model and use each for what I want, or if I don't know exactly where to go for Google, I'll learn a bit from the model and then I'll do a deep dive on a website specific to what I just learned from
the model and get there, that way so it's an evolving landscape, of tools to get these things done now, yeah and I I would definitely encourage, people to check out this article it's, quite interesting they go through, different types of queries like, navigational queries what they call, buried information queries exploration, queries uh Evergreen Information like, how many weeks in a year when is, Mother's Day real time queries like, sports scores and and that sort of thing, the exploration questions that I, mentioned like why were chainsaws, invented and this like exploration and, learning sort of thing so and they, compare some of the answers from, different ones so if you're struggling, maybe with this intuition maybe that's a, good place to jump in and and try some, of those queries yourself that are there, and see what comes back from the various, chat GPT or gemini or u.com and those, sorts of things and then circling back, on our community idea before try those, things uh hop into our community here at, the change log and share what you've, done with that we're very curious to see, what people choose to do coming out of, these discussions that we've had today, I'm looking for the most creative ideas, uh to inspire me myself so please uh, send what you got sounds great Chris, well it's been it's been fun exploring, this topic with you and uh look forward, to many further exploration questions in, the future hope you have a great evening, you too take care, Daniel, [Music], all right that is practical AI for this, week subscribe now if you haven't, already head to practical AI FM for all, the ways and join our free slack team, where you can hang out with Daniel Chris, and the entire change log Community sign, up today at practical ai. 
fm/ Community, thanks again to our partner at fly.io to, our beat freaking residence breakmaster, cylinder and to you for listening we, appreciate you spending time with us, that's all for now we'll talk to you, again next, [Music], time |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI vs software devs | Daniel and Chris are out this week, so we’re bringing you conversations all about AI’s complicated relationship to software developers from other Changelog pods: JS Party, Go Time & The Changelog.
Leave us a comment (https://changelog.com/practicalai/262/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) – Is your code getting dragged down by JOINs and long query times? The problem might be your database… Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack. Visit Neo4j.com/developer (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) to get started.
• The Hacker Mindset (https://thehackermindset.com) – “The Hacker Mindset” written by Garrett Gee, a seasoned white hat hacker with over 20 years of experience, is available for pre-order now. This book reveals the secrets of white hat hacking and how you can apply them to overcome obstacles and achieve your goals. In a world where hacking often gets a bad rap, this book shows you the white hat side – the side focused on innovation, problem-solving, and ethical principles.
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Kent Quirk – Mastodon (https://hachyderm.io/@kentquirk) , Twitter (https://twitter.com/kentquirk) , GitHub (https://github.com/kentquirk)
• Sharon DiOrio – Twitter (https://twitter.com/sharondio)
• Steven Pyle – LinkedIn (https://www.linkedin.com/in/stevenpyle)
• José Valim – Twitter (https://twitter.com/josevalim) , GitHub (https://github.com/josevalim) , Website (https://dashbit.co)
• Jerod Santo – Mastodon (https://changelog.social/@jerod) , Twitter (https://twitter.com/jerodsanto) , GitHub (https://github.com/jerodsanto) , LinkedIn (https://www.linkedin.com/in/jerodsanto)
• Kevin Ball – Twitter (https://twitter.com/kbal11) , GitHub (https://github.com/kball) , LinkedIn (https://www.linkedin.com/in/kbal11) , Website (https://www.kball.llc)
• Nick Nisi – Mastodon (https://nicknisi.com/@nicknisi) , Twitter (https://twitter.com/nicknisi) , GitHub (https://github.com/nicknisi) , Website (https://nicknisi.com)
• Johnny Boursiquot – Twitter (https://twitter.com/jboursiquot) , GitHub (https://github.com/jboursiquot) , Website (https://www.jboursiquot.com/)
• Adam Stacoviak – Mastodon (https://changelog.social/@adam) , Twitter (https://twitter.com/adamstac) , GitHub (https://github.com/adamstac) , LinkedIn (https://www.linkedin.com/in/adamstacoviak) , Website (https://adamstacoviak.com/)
Show Notes:
• JS Party #317 (https://jsparty.fm/317) (This will 404 until Thursday!)
• Go Time #306 (https://gotime.fm/306)
• Changelog & Friends #28 (https://changelog.com/friends/28)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-262.md) | 245 | 4 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music]

Hello, Jerod Santo here, Practical AI's producer and managing editor of all the shows here at Changelog. Daniel and Chris took this week off, but we didn't want to leave you hanging without anything to listen to, so today's episode is going to be a little different than our usual fare. AI is permeating the entire software industry, so we found ourselves talking about its impact (sometimes in practical ways, other times in less practical ways) on many of our pods. So today we are serving you a sampler platter. You'll hear a segment from this week's JS Party podcast, where me and my two co-hosts, KBall and Nick Nisi, discuss the recently announced Devin project, which is making waves in developer land. You'll hear a segment from a recent Go Time episode called "How long until I lose my job to AI?", where Johnny Boursiquot and his experienced panel of friends discuss using codegen AI to augment your dev skills instead of replacing you. And finally, you'll hear a segment from The Changelog, where my co-host Adam Stacoviak and I talk to José Valim. José is the creator of the Elixir programming language, and he's been a guest here on Practical AI in the past, talking about Elixir AI tooling. Today you'll hear us question him regarding Elixir's place in a world increasingly influenced by large language models, and how he thinks about it as a language author and promoter. Hopefully there's a little something for everyone on this episode, and if it's not approaching AI from a perspective that's compelling to you, don't worry: your regularly scheduled programming with Chris and Daniel will be back next week. Okay, first up, it's JS Party and Devin. [Music]

There's some news that's good; there's also some news that's maybe bad, maybe good, I don't know. The thing that everybody's talking about this week, at least as we record, and last week as well, is Devin (D-E-V-I-N), "the first AI software engineer," according to the makers of Devin, which is Cognition Labs, a new company which raised a Series A led by Founders Fund, headed up by Scott Wu, who seems to be a very intelligent person even from a young age, if you watch that video of him doing math very quickly at ages when it seems like you shouldn't know math very quickly. And they've got a demo out there of this new AI software engineer. So I could say more; I'll stop right there. You all have probably seen the demo, KBall and Nick, or at least heard about what's going on. This is a new tool which can start from scratch and do some cool stuff. I'll just leave it there for now; we can talk about the details if you're excited.

You too can pay for the right to have a software engineer that can only fix one in seven of your tickets, and spin up lots of new ways for AWS to charge you money without your oversight.

Sounds like an intern. No, just kidding. Sounds nice. What are you referring to? Is this some specific things that Devin's been up to?

So, high level, there's a couple things I'm referring to here. One is that they are pumping up the marketing that this is standalone software: why get a coding assistant when you can get something that can go and do your software? And they published some data on it, and it does do better than the state of the art in terms of tackling going from a GitHub issue to "okay, I'm going to actually solve this, implement a change, and get it to happen." But the number they published
I think was 13.86% of issues resolved, so that's about one in seven. You point it at a list of issues and it can independently go and solve one in seven. First off, to me, that is not an independent software developer. And furthermore, I find myself asking: if its success rate is one in seven, how do you know which one? Are the other six ones it just got stuck on, or did it submit something broken? Because if it submits something broken that doesn't actually solve the issue, not only is it only actually solving one in seven, but you've added load, because you have to go and debug and figure out which things are broken. So I think the marketing stance there is a little over the top relative to what's being delivered.

The other thing is that part of what they do is "oh, it can spin up resources for you," and they show this cool demo where you point it at a thing and it allocates a bunch of different production resources for you. The person in me who has handled devops before, and the engineering leader who has to sign off on our DigitalOcean or AWS or Google Cloud expenditures, looks at that and is terrified. I'm going to give an LLM, which is known for hallucination (you have to design applications around their unpredictability and their willingness to lie, and I am building applications with LLMs), raw access to spinning up resources in my cloud? That sounds like something I would not sign up for, I'll say that.

Okay, KBall: let he whose success rate at issues is greater than one in seven cast the first stone.

Yes, I was wondering what Nick's ratio is over there. One in seven sounds about like what I would do; I'd pull off the easiest one first. Does Devin know what the easy tickets look like? Because that's the skill right there.

I'm over here counting on my fingers, trying to see if I'm within that ratio.

But do you know when you fail, or do you just throw out broken code and say "ah, here you go"?

It's more a question of "do I know when I succeed," I guess, right? Which is the same thing. You think you succeeded until you find out later that you actually failed. That's been my experience. Or you succeeded under the constraints you put yourself under, or that were actually specified in the ticket itself, but you failed at some other unnamed, unlisted constraints that were unknown at the time but are clearly there in production, and so in that context you failed. It's not easy to succeed in this world.

Well, what about this, KBall: what if you point Devin at a $5-a-month DigitalOcean box and say "deploy to this"? Can't you cap your risk, I guess, on the devops side?

Probably you can. And I do want to say: I'm taking a hard skeptic stance particularly on the claim that this is an AI software engineer, as in "don't hire a person, use this thing." And this is their claim.

I think it's fair for you to be that harsh on them, because they say "meet Devin, the world's first fully autonomous AI software engineer." That's a very bold claim. So I think it's fair that you're being that harsh. Go ahead.

They're showing some cool stuff. It looks like a pretty interesting tool to put in the hands of someone who knows what they're doing and is able to validate it; someone able to say "okay, go and solve this relatively well-constrained problem where I can easily validate the correctness of your output; go at the sandbox where I know you're not spinning up massive amounts of resources in a way I'm going to regret; or even go at this non-sandboxed situation, because I have the knowledge to check what you did, look at the logs, and say 'yeah, that's okay.'" Those are really cool things that could be really valuable, that could dramatically increase somebody's productivity. And those are so far from being something I would trust to independently replace a software developer that they're not even in the same country, maybe not even in the same world. These are just completely different claims.

Yeah, I think the sensationalism of this comes not from what it can do now, but from what it represents and the progress it's made compared to other things. Whatever it was comparing that 13% against (other AI chat things that can do things), it's way better than all of those. It still sucks compared to a human, but it's made monumental progress in terms of AI, and I guess the question is: does that continue? Can it get further than that, or will it reach some kind of limit? The other piece of it, just from a marketing standpoint (and I'll be honest, the only thing I've really seen on it is a Fireship video), is that it's already doing some work on Upwork. In a way, that's a marketing claim that it competes against real humans for jobs.

True according to them; I haven't confirmed. But what you said is true, that they say that.

Yes. So this is the struggle with all of the LLM world right now, and all of the AI world. On the one hand, we have been in the rapid part of an S-curve; there have been some very rapid advancements in the core capabilities of these things, and they are super freaking cool, like really cool. And also, they have a lot of limitations, and a lot of those limitations are baked into the architecture being used. So you get a situation where there's a bunch of people doing really cool stuff with this, trying to figure out what it's good for, but it demos way better than it does anything reliably in production. Because you can get a really cool outcome 40% of the time, in some situations 70% of the time, and you show that and people go "oh my gosh, this is going to take over the world." And I would not trust, for example, an AI software engineer that could handle 70% of my tickets but 30% of the time spins up millions of dollars of cost for me, or other things. Once again, I'm not trying to take away from the technology, but I don't think these hyperbolic claims actually serve anyone except for getting attention. They get attention, okay, great, and then you get a whole bunch of people who buy this thing and are disappointed, and if it costs them a bunch of money, they'll sue your ass off. Why would you do that to yourself?

It's somewhat similar to generative AI in the image world. Let's stick with the static image world, where everything you see is impressive results: "this new Midjourney 7 is off the charts amazing, here are nine examples that'll blow your mind." And if you click through on that, they're all going to be very impressive, amazing things. But then you have to stop and think: Midjourney didn't create nine examples that blew my mind. Midjourney probably created 40, 50, maybe 500 examples, and then you, a human, decided which ones were amazing, and you cherry-picked those out as the examples.

That's great teamwork, guys. Computers plus humans equals better results.

And so there's a cherry-pick, and that's what code review on these things will be. That's what happens when you tell Copilot "no, I did not want that function." It's all, as hipsterbrown calls it in the chat room, human-in-the-loop, and that's exactly what is necessary. I think the reason you call them hyperbolic claims, KBall, is because
they're saying it's a fully autonomous AI software engineer: human out of the loop, let it rip. And maybe fans of The Bear like to say "let it rip," but those of us who aren't fans of Devin are thinking: let's not let it rip too much, because it might just tear the whole thing down. Now I'm being hyperbolic. Nick, do you agree with me somewhat?

Yeah. I think it's humans who are deciding what is good out of that and kind of helping to train it going forward. But in a way, I was trying to relate this to another article I saw that wasn't about Devin specifically, but was about prompt engineering as a quote-unquote profession already being taken over by AI, because an AI can iterate and more quickly come up with a way to answer the questions you want by appending exactly what it wants to hear at the end of a string. The example I heard was: we want you to answer this question, and the AI is quote-unquote incentivized to answer it a little bit better if you put it into a scenario it likes. So the AI is Captain Kirk on the Enterprise, and it has to answer this question to save a planet from whatever, and the question could be something really simple, like "what's 2 plus 2?" By putting in all of these extra prompt words that the AI comes up with on its own, it's getting better results overall. And I'm just wondering how that marries with the idea of humans being the ones who curate the good ideas that come out of it.

Well, prompt engineering: I've been convinced by swyx that it's a code smell. At first I was convinced this was the new thing everybody needs to learn, and now I think it's just a leaky abstraction we're currently dealing with as humans, because the tooling is not good enough, so we have to engineer the prompts. I mean, Google's search box is prompt engineering, right? Knowing how to Google is the exact same thing; it's just way harder and way more magical now, telling it the magical incantations to get the best results back out. So the fact that it knows which results are better, to me, is not intelligence or anything; we just need that to go away. And I think Devin is actually an example of productizing and hiding a lot of the innards we've currently been exposed to, in order to make the tool work better than it would for an inexperienced user. They've actually turned it into a product, and I think that's great. I think it's one step on a long line of iterative improvements that will make it so that, instead of prompt engineering, you just talk to it in layman's terms and it knows how to feed itself the correct prompt, so to speak, in order to get the goodness out. But I don't know. Okay, well, back to you.

Yeah. High level on all of this AI stuff: there's really cool stuff there, we're figuring out how to use it, and the current state is clearly intermediate. However, the thing I want to keep coming back to is this. There are things where it's like, okay, this technology is immature and we're going to evolve around it (figuring out how we handle prompts, manage prompts, what's generating them, and so on; that fits well in that bucket). And there are things that are fundamental pieces of how the technology is designed. LLMs, and machine learning models in general, are statistical, probabilistic. They're very different from most things you think about in software, where you're trying to make something that is logical and consistent: you put A in, you get B out. That is not there with these things. You can design applications around that; there are things you can do to pin that down, to add validation that is outside of the LLM, and to do other things, and maybe Devin is doing that. But the more we start looking at places that require judgment, places that require precision, places where if it just makes some random thing up it can cause a lot of problems: there's a fundamental thing about what the technology does that means it's not necessarily going to be a good building block for that. So making hyperbolic promises about where it's going to develop, promises that depend on it being a fundamentally different technology than what it is, feels like setting yourself up for a lot of heartbreak.

What about the job market? Do you think it's fundamentally affected by tools like Devin as they progress over the next three to five years? Because we're not talking about humans out of the loop; I think we're all in agreement here that that's not feasible or smart, at least in today's technology plateau of LLMs. But fewer humans in the loop? That seems very feasible, if these tools continue to iterate and have not revolutionary but evolutionary advancements from here. If it makes me three to five times faster, do we need three to five times fewer engineers?

Yeah. I think this is a technology that has the potential to dramatically impact the productivity of software engineers, and there are a couple of different things around that. Short term, it can create some disruption. Short term, it means a company that had been running on, say, five engineers and might have needed to hire and expand to fifteen now doesn't have to expand nearly as soon, and things like that. So I think there is the potential for relatively short-term disruption. I will say, though, that the history of economics broadly, and of software in particular, is that every time we make it easier to code, we discover there are whole worlds we can now address and build software around that we couldn't before.

And there's a particular example of this that I think is interesting to dive into. One of the big economic challenges in the tech industry in the last four or five years is that we had these massive tech companies with incredibly high revenue per employee: Google, Meta, Netflix, the FAANGs, mostly. They were able to set a salary bar that was super high; they were paying ridiculous amounts of money (that's a technical term, "ridiculous") for software engineers. Then, when we had very low interest rates and a ton of VC money flowing into the industry, there were lots of companies whose fundamental business economics do not support that level of salary per software engineer who were nevertheless paying that amount, based on VC capital and this thesis that "okay, we'll be able to scale out of this." I think that caused a lot of distortions and problems in the field. Now, if suddenly software engineers are three to five times more productive, consider the range of businesses that could use software but previously could not afford to compete with the FAANGs of the world: there's a whole set of business models in there that become viable because it's that much cheaper to develop software. So I could imagine this dramatically expanding the number of viable software businesses, or businesses that are non-tech but would like to include software, or could have custom software.

So long term, I don't think it's a negative impact on the software engineering career path. What it means to be a software engineer looks a little bit different when you have different types of tooling, and that has been true as long as I've been around. In JavaScript land, I remember when jQuery was a revelation: "oh my gosh, this is going to make me so much more productive." It did make me so much more productive, and so did all these other different things. Now the level of tooling we have that supports our productivity building things on the front end is astronomical. And has that taken away from the number of people writing JavaScript?

Speaking of astronomical, Astro has a new database. Moving on. [Music]

What's up, friends? Is your code getting dragged down by JOINs and long query times? The problem might be your database. Try simplifying the complex with graphs. A graph database lets you model data the way it looks in the real world, instead of forcing it into rows and columns. Stop asking relational databases to do more than what they were made for. Graphs work well for use cases with lots of data connections, like supply chain, fraud detection, real-time analytics, and generative AI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it's easy to integrate into your tech stack. People are solving some of the world's biggest problems with graphs, and now it's your turn. Visit neo4j.com/developer to get started. Again, that's neo4j.com/developer. [Music]

Up next, we have a Go Time podcast fireside chat between longtime programmers Kent Quirk, Sharon DiOrio, Steven Pyle, and host Johnny Boursiquot.

So it's one thing to have gen AI pump out snippets of code that are part of a larger whole, whereby I'm the engineer; I am engineering a solution. I'm not just a code monkey clacking out syntax; I'm trying to fix a problem, engineering a solution to a business problem. Now, I could go as high-level as I can: I can just open up Copilot in chat mode and say, "hey, this is what I'm trying to accomplish," and it starts spitting out files. Right now, maybe today, it can build somewhat trivial apps. I've seen YouTube videos
and clips of it spitting out entire working React apps and all these things, and that's great, and I think over time it's going to get even better at doing those things. But I have a hard time correlating that with replacing solution building, because to me, solutions aren't static. When a business comes to me and says "hey, I need you to build a solution to this problem," I build it, they take it into production, they do stuff with it, and then they come back and say, "hey, you know what, this is great; now I need to change it in this way, or I need to account for this exception, or this particular use case, or this specific customer, where 90% of the time it works this way for every customer of this type, but for this customer of that type, on alternate Thursdays during the full moon, it's a completely different thing."

Exactly.

So now, how are we supposed to treat those entirely made-up solutions? Am I just feeding that back into the system and saying, "hey, now account for these alternative approaches"? Is it going to be like it was when the first generated-code frameworks started hitting the scene? You'd go in and there'd be all this code. Yes, it was super fast: if you had to do an ORM, it wrote all the code for you, etc. And then you needed to change something, and all of a sudden, change management in some of those days was "regenerate the whole thing from scratch, and oh, sorry about all your customizations."

Yeah. So that'll be another big test that AI has not yet proven it can do well.

So let's talk about art for a second, because this is a similar thing. Everybody's really excited: "look at the images I can generate with Midjourney" or whatever.

Stolen art.

Well, right, but the point is, again, it's going out and giving you the average solution. It's going, "here are the things that look most like what you described, that somebody else has created already," and it's going to cobble pieces of that together.

Or here's an opinion, formed by the loudest voices out there, that it sucked up as source data.

Yeah. But I sat in a meeting today where an artist went over her design process for a big design project: "here are the resources I looked at; here's the feeling I was going for; here are the things I considered; I looked at these typefaces; this typeface reminded me of this building architecture, which is relevant to the site." And then that artist proceeded to churn out, over the course of a couple of months, 200 pieces of support art for an event. That was a brilliant design exercise by somebody deeply steeped in art and creation, who then studied the event and what it needed and integrated all of that. And yes, some random person could have sat down with Midjourney and said "make me this stuff," and it would have been much less good. But people who don't know the difference would have been sure it looks fine. We've all seen that, right? "My document has 37 fonts in 12 colors, but it looks fine to me." There's a big difference between something crafted and something just slapped together, and I guess I think AI is going to make it easier to slap together.

But for most people, though... here's what I'm not saying. I'm not saying that, if you're a connoisseur of a particular art (you're an architect of a particular kind of application or solution or whatever it is), you can't critique the output of gen AI as it stands today. Again, arguably it's going to get better at what it does. But you can critique the output today and say, "ah, this is subpar; this is not as good as what I could have come up with." For most people, though, it's good enough.

It depends on what they're using it for. Again, if you're just doing something for yourself, who the hell cares? I'll slap something together out of 2x4s if I'm building it for my garage; I don't care. But if I'm going to sell it, if I'm going to make a business around it, that's the part where I'm saying I don't think the AI stuff is there. If you're just doing a hacky project for personal use, fine.

Yeah. Maybe you would have had to pay somebody to come in and slap that shelf together in your garage if you didn't have the skills to do it yourself. So now there are a few things I couldn't do before that I can do today for myself: design that invitation for my kid's birthday party.

Hell yeah.

I can't draw, but I can use an AI. There's nothing wrong with that. And yet, yeah, there's probably some kid next door you're now not paying 20 bucks to do that for you. But what happens in the future, as AI evolves and improves? We get to this uncanny valley level of "now it's not just good enough, it's the standard; now it's building your whole kitchen." Do we need to worry about that?

And how much CRUD is out there? Most of what's out there... how many CRUD apps have we created in our lives? How many CRUD apps are still being created every day?

Yeah. Okay, so that's a problem that's largely been solved, to a greater or lesser degree.

But it's whitebox, right? You make whitebox easy to do.

Yeah. And tech has always been taking things that had gates around them, real or created limited availability, and making them more available. Artisan things that only certain artisans used to be able to do; only musical artists had a studio and an audio engineer, and now they can
go and you know create their, own tracks with one app at home for me, that's the part that's like oh I can't, complain I can't be the kagin, complaining about this latest thing that, might make something that I do more, accessible to other people like I've, benefited from these other things that, came along it's time to share the wealth, I don't want to I'm I'm really rooting, against, it yeah I don't think it's about, gatekeeping I I mean I'm I'm not like I, feel like a big chunk of my career has, been spent trying to help people learn, to program and so I'm not thinking that, the reason I'm skeptical is because I, don't want other people to do what I do, I think it's more because I feel like, the hype and the reality are distinct, that what the reality is producing is, mostly devoid of creativity people are, confusing knowing what to look up with, being creative and I think knowing what, to look up is a skill and a lot of us, have it and the better Engineers I think, are better at it and so yes AI helps to, ease that problem but knowing what to, even ask, about you know like or looking at a new, solution to a problem, that's something that I think is well, beyond what's what a are capable of now, or in the reasonably like the llm model, I think is fundamentally non-creative, that's my take on it spicy we're not, we're not at the at that point the, unpopular opinions yet like no so one, thing you mentioned like the whole, teaching pro like y'all remember when um, maybe it was during uh maybe first or, second Obama term or something but there, was this giant push to teach everybody, how to code right like it was it was, everywhere it was in the media it was in, newspapers it was every like you know we, need to teach our young how to program, now I'm looking at a clip from you know, Nvidia CEO like I know three or four, days ago or something saying hey people, shouldn't learn how to how to program, you should now let you know the new, programming language is a human 
language, and I'm thinking man you are sitting, here you stand to gain billions of, bajillion dollars right if wish comes, because you're producing you know chips, and stuff for these things of course, you're GNA say that right so this this, so but I mean what I'm definitely not on, the don't teach people how to program, Camp no no I'm gonna I'm going to take a, slightly spicy take here I don't think, he's completely off now in the time, we've all been Engineers we've seen, waves of different things that are going, to come and take our jobs you know, offshoring and as you know code, generation now ai and they haven't and, my theory is that the key thing that an, engineer has is the ability to, communicate and even when you're, supposed to be communicating to people, on the other side product the business, whatever affectionate term you use for, them aren't always as good at that, although it should be part of their job, but having somebody who can think back, and forth they'll will I think always be, a need for those people because every, every CEO thinks they have the answer to, every, question that's what they hate to do, right but they they really shouldn't if, they have a business that's big enough, to grow their biggest skill is finding, the people and put in the right place so, if your job right now is like doing, cruds for a company that can't even, explain what they want I wouldn't worry, because they're not going to be able to, explain what they want to AI right yeah, no it's the thinking logic it's yeah, there's the think about break something, down into steps and think logically like, I once did have a client very earli in, my career who was a pretty good business, person who really wanted to automate his, business and he was able to sit down and, explain it to me like if he had had the, tools to program he could have written, his own code because he thought about it, really logically and and it was just my, job to basically take dictation and turn, it 
into Pascal for him that back in the, day but but that was that's few and far, between you know quite honestly most, people who specialize in business aren't, specializing in thinking logically, they're specialized in thinking about, people and like you said about, Communications so does AI then make you, more what you already are if you're a, logical thinker you'll benefit and if, you're not you still, struggle and who gets to train the, agent I I want I want to be in the, training side I want to be I want to be, the one doing the building of the things, that you use right, yeah it's a good question I think at the, point when it comes like a personal AI, where it's just like it's tuned to you, your data doesn't get shared it's just, you know then it come becomes like a, superpower right you can just it's like, your co-pilot right but how would that, work if it's only got your data it's, basically replicating to a point you, there's a generic it it's trained on the, universe right and then specialized for, you right is the way that we're see all, this yeah it's like creating your own, your own GPT but it's based on a larger, model right yeah exactly so I mean like, my company we built we have a query, engine that looks like SQL and then we, built an AI where we train that AI like, as part of the part of the prompt we, basically can go out and get your data, and all the names of your fields and the, data types of your fields and we can, plug them into the AI query so that when, you say show me my slowest service it, can go all right what are the fields, that are named according to you know, time duration and what are the things, that look like a service name and now I, can write a query for you so it knows, how to query honeycomb and then it can, write that query for you from your inept, prompt because it's been specialized for, that particular type of application and, I think that's a really cool use of AI, that that is that is a productivity, boost for for people 
who are already, technical like you know like full, disclosure I am my startup is a, honeycomb customer so you know I I I go, on the dashboard and I can formulate, those queries but I'm already a highly, technical user who knows how to use, these kind of tools to get to and and, know exactly what kind of data I'm, looking for and when I've found it now, for the lay person right who doesn't, know like the lay person it, the more I think about it the more I'm, thinking okay if I'm a on one end of the, spectrum you have the complete lay, person who is using perhaps you know, Chad GPT or something like it to maybe, generate copy and not hiring a copyright, person like you might traditionally do, back in the day right I'm sure the, copywriters of the world are suffering, right now because content creators you, know like content is like yeah content, creators are suffering because this, stuff is now being being generated so, are all our Google, searches right so if if that was your, job absolutely you're impacted right and, the lay person can now bypass you and, get to something that again the good, enough right they can get something good, enough to achieve some means right on, the ex the complete opposite of the, spectrum you have people who engineer, software right that again given the, context of the conversation we're, talking about like how safe is our jobs, right so when I'm asking this question, I'm not asking is the lay person going, to find ways of reducing right their, Reliance on sort of I don't want to say, lower skill just the different kind of, skill right I'm thinking like for people, like us as software Engineers who, presumably will be impacted by this to, some degree right and we already are, right for us there's also the micros, Spectrum whereby if you're on the sort, of the lower end of that spectrum and if, the only thing you're doing is, generating crud well I'm sorry your job, is indeed in Jeopardy if that's the only, thing you've been doing right 
with your, career on the opposite side of it is the, highly, specialized person who understands a, business problem has to debug and, troubleshoot and and talk to people and, integrate different things and, institutional knowledge right all that, stuff I mean I don't see that skill, right I don't see that being replaced by, AI anytime soon am I wrong here I don't, I don't think so personally how many of, those people do we need and that's the, thing right not game of musical chairs, we should be looking for our chair now, no any business is thinking do I need a, thousand Engineers right when 500 will, do I mean I think as we've all said, right it makes us more productive today, so I'm writing more lines of code per, day than I was five years ago right, right right so that's good you know but, we all kind of expect prod it to, continue to rise so this is a, productivity tool m not a replacement, tool you know we're also using like, using languages that are more expressive, than they were you know like the code I, write in go is probably onethird the, length of the same code I write in in, C++ or used to write in C++ back in the, day so that's also a productivity boost, at some level at least if you believe, the old metrics that it's basically you, can write the same number of lines per, Cod of code per day no matter what, language you it in but I I think, actually knowing how to use it is a, skill like to put on your resume but not, before too long or if not now I mean I, even if I didn't want to use it I would, because it's becoming of like it's GNA, be a point where like oh you know I use, Co co-pilot all the time oh good you, have point for you to get the job is, prompt engineering on already and your, L it's going to be, soon no but if you put it in your, interests as AI it shows up in the, keyword searches oh there you go, right nice that's I mean that's a good, point but it but to me it's like back to, my woodworking thing it's like I know, how to use a power 
saw I know how to use, a drill press I know use lathe you know, those are those are kind of expected, today if I'm going to do Woodworking and, say I only use hand tools people are, going to look at me like I don't have, time for you same thing yeah and the, same is true of like if you're not using, Co pilot what am I paying you per hour, what are you doing you, know why are you not why are you not as, productive as you could be yeah I think, at this point the only people who really, aren't using it are people who are doing, like very Arcane languages or people who, their businesses that they don't allow, it their company doesn't allow it I, think everybody else has at least tried, it I mean if you're a company does that, doesn't allow such things like I I, understand not bringing sort of open, source code into your organization that, might be the wrong license model for you, or something like that right you don't, want to be in some hot water all you, have to do is look at you know oracle, and and Google over the whole Java thing, I think those were the companies, involved but if you allow your engineers, to use a model where you can control the, kinds of things that were used in the, model right for the training and you can, have maybe you can run your own internal, right gen for code generation whatever, it is right I think you if you're an, organization that is afraid of these, things you should at least follow that, route as opposed to saying hey nobody, can use any gen coding tools whatsoever, because I think you're going to lose, people if you do that because because, I'm going to look at my peers that are, that get to use these things and are, they they're learning those skills right, and then now I'm falling behind because, everybody's using you know some sort of, cod generation tool and I'm not right I, mean this is where I hope somebody, reaches out somebody who hears this and, reaches out and can answer that question, of like can you have a copy of Co 
pilot, that you train on a specified set of, repos and only those repos private repos, well I mean I would you know as we joked, I mean if I'm a go developer I'm, training it on Johnny's, code because if I don't like it I can, know Johnny but nice you know what I, mean there's people out there one neck, one neck to choke I get it I get it it, doesn't matter what language it is, there's people out there that that you, respect their code and you'd be like yes, I would like my code to be more like, this I it would learn that that's what I, was thinking or that's the way this, problem should be thought about I mean, yes there's precious few of those people, and those people will probably never, lose their jobs but for the rest of the, M morals that are going to have to work, with the the tools that are out there, and I would love to if this is a, possibility now that you could train, co-pilot on what you'd say to train it, on and not all of GitHub I think there, are companies that are working on that, product I feel like I've even seen a, product announcement like it but yeah I, mean you know the thing about it is you, can take one of these llms and you can, essentially subset it and you can make a, tiny compact llm that will run in a box, that you can actually stand on your, desktop and then you can further train, that with new information so so that's, exactly what you want to do here you, want to take a coding Centric llm like a, co-pilot and create the Mini version of, it and then train it on your, repositories and now it knows how to, write your code and it's also not, talking out to the cloud while you're, doing it so there's got to be businesses, like that if there are and this is where, we redact all of this and put it into, our business plan, [Music], right what's up friends there's a new, book out there called The Hacker mindset, this is a productivity cheat code to, unlock new levels of success in your, career in your creative Pursuits and in, your personal 
growth this book is about, leveraging the principles of white hat, hacking and applying skills to the, broader world it's available for, pre-order right now and it's not your, typical productivity guide this is, written by Garrett g a seasoned wh hat, hacker with over 20 years of experience, this book reveals the secrets of hacking, and how you can apply those skills to, overcome obstacles and achieve your, goals so don't miss your chance to get, ahead and get this book The Hacker, mindset you can pre-order your copy, today at thehacker mindset.com be among, the first of many to tap into this power, of hacking for your success join the, movement and embrace a new way of, thinking again that's the haacker, mindset.com, last up we have Jose valim creator of, The Elixir programming language on Chang, log and friends that is our talk show, flavor of the Chang log podcast, [Music], to Jared's Point earlier AI as it's, known today GPT chat GPT and others are, not that good with assisting with Elixir, programming and so I guess the question, is what does it take to make it good you, mentioned embeddings earlier you, mentioned uh documentation being more, readily available what does it take from, a I guess a leader in the Elixir world, to enable llms to be better like what, role do you play in that Journey for, them to better consume the documentation, and better know how to programming, Elixir to help folks like Jared and, myself or our team or others to to, really become better and more proficient, Elixir versus just like anytime Jared, you know asks Chad GPT for assistance, it's just like no it's not good so just, quit you know so I think if I got the, question right I think we did our work, correctly in the sense that at least, from the the language point of view in, the sense that like documentation was, always first class so documentation is, very easy to access so if what you want, to do is to like configure an llm it's, actually very easy to access that, 
programmatically send that extract, information and we talked about like uh, one of the things that you also have to, do is like try to get understanding from, the the source code so you can find oh, this code is using those modules is, importing those things and those are, things that you can do relatively easily, in Elixir we can most likely improve, that so I feel like uh we have the DI, for the cheese uh it's just a matter of, somebody going and like cutting the, cheese you know it's yeah I I feel like, the the foundation is there in terms of, like having this information structured, but somebody needs to feed it somewhere, but again uh we can go back like maybe, it's a corpus size like maybe chpt like, index hex PM already not sure right, maybe it has done that I don't know I, don't know if I if I can send a letter, to somebody hey please index my my, website or maybe it's a matter of uh so, one of the things is that uh red Monon, they have um they release twice a year, like kind of a a graph plotting GitHub, against stack Overflow and I think, they're having like uh the most popular, languages according to GitHub and stack, Overflow and then there's like a you, know linear thing in the middle and it's, very funny because alexir is high on the, GitHub side but quite low on the stack, Overflow side and one of the reasons for, that is because we have always had the, alexir Forum so that may be one of the, things where's the knowledge where's the, back and forth from the community yeah, the knowledge is in the Forum is that, thing being dexed because we know it's, te overflow is right yeah and right so, and ironically that's one of the reasons, I think I may be misquoting that red mon, they are considering removing stack, Overflow from their plots because it's, PR like I think it has been losing, relevance in the last years right but, you know maybe in the effort of trying, to have a closer Community where, everybody can engage with each other, where I am active 
in the Forum and I'll, probably not have this patience if I was, dealing with stack Overflow right we, created our community a special place, but it's not known so yes so I think I I, think it's still like too many too many, unknowns but I think at the core at the, core we unwillingly did a good job, because we were worried about uh, documentation being accessible, documentation being first class so we, did that and that can be and we promote, people to write documentation lots of, documentation right so there is a lot, there and yeah and maybe the rag is, going to be the thing that uh is going, to be enough that's one of the hopes, right going back to we want everybody to, be able to use this if rag is good, enough then a lot of people would be, able to augment their ecosystems without, depending on on open air or whatever but, we are still evaluating when you when we, talked about sort of the the long-term, future of Elixir artificial intelligence, and that sort of larger topic of you, know how long will be relevant and you, know can AI generated well that whole, conversation this makes me think of this, you know necessity to not have a black, box that is whatever AI is because just, like you said who do I send a letter to, to index my stuff so that my very, relevant language today remains relevant, tomorrow because tomorrow says AI will, continue to be more and more relevant to, developers in their journey to develop, right so who do we send a letter to how, do we know well currently the status quo, of AI is for the most part a black box, obviously opens source llms and indexes, have become more and more pushed because, of this challenge but I think this, illustrates and highlights really the, the long-term challenge because even you, can't say for sure why what wasn't, indexed was indexed for the Elixir, Corpus whether that's the forums whether, that's the documentation through hex, documentation or whatever it's unclear, to Someone Like You how to enable chat, 
GPT or the likes to better support, Elixir assistance for developers using, those things to use this tooling and, that's just not cool cuz long term we, need to have inroads into those places, so that we can be part of the future if, AI is predicting how we'll get to the, Future yeah and I think and I think uh, yeah it's is too early I think we're, going to improve a lot I was listening, to a podcast today uh where Sam mman he, he was saying like um they improved CH, GPT 3 about 40 in the orders of, magnitude in terms of size performance, and things like that since they started, I think 10 times for CHP three and a, half and I think open source is going to, to catch up uh I think and and I think, uh that's the hope but yeah it's also, like we go back to this when we are, thinking about live book because what I, want is for open source chewing right, but when I'm building a feature for live, book right I need to build the best, feature for the users right and when I, can use ch pt4 right and I immediat I, can immediately see the results and, they're really really good right I can, use other tools off the shelf they're, not as good right so uh we we are a, small company we are doing open source, so my options if I have to choose for my, users is going to be Chad pt4 because it, gives me the best result for the least, amount of effort right I just it's there, and this is like so we're backing like, about my indecision about investing this, stuff is that because I want open source, right I want things should be open, source but right now the quickest return, of investment is gy and then I am in, this contradiction space right but yeah, and it's just I think it's just patience, we have to be patient and you know I, think probably in one year and the whole, thing is like it's crazy to think about, is that this thing has been happening, for a year only right it appears that, this thing has been out for so long but, it's a year and I think like if I'm back, on the show in 
a year we may potentially, be having a very different conversation, so yeah we'll see do you have any fear, about this like even as you respond to, that you sort of had some I wouldn't say, like trepidation in your voice but you, sort of had some uncertainty do you have, any like fear and uncertainty and doubt, the fud that people sort of pass around, do you have any fear about this no not, really in the sense that I consider, myself like very lucky very fortunate or, whatever or or blast whatever you want, to say it I think maybe it's a I'm not, being overconfident here but more like, thankful that I think whatever happens, to me it's going to be fine I truly, believe that what's going to make alixir, survive is the community more you know, than whatever technological changes, unless there's something very drastic I, I talked to my father about this about, Investments right so like when Bitcoin, wasn't a crazy right and then my father, is like oh have you heard like about, this thing that if you put your money, there like people got this huge return, and then I always told him father if we, got to know about it it's because it's, too late you know it's like or or if, something happens right it's like oh, Father like if something happens is, because like if something goes this bad, is because it's going to be bad for, everybody so like don't try to fight it, right so again like unless there's a, very major change I think I will be fine, right so I'm not worried about me in the, sense I always think more about it's, more about ideals you know again like I, like to say well me 10 years ago that's, where my trepidation is if things go, like closed Source you know and and, those things they happen by we don't see, the results like I I think another poic, topic about this it's like hey I use, Chrome as soon as Chrome came out I IM, today I don't use Chrome anymore but as, soon as Chrome came out I immediately, swap to Chrome right and if I had known, that this would 
lead to a point where, you know Google is in this position, where it has a lot of control over the, browser over the web right and over how, we use the internet like 10 years ago I, would probably not have used chrome if I, could have seen it right or so I think I, think that's where my trepidation comes, from of like things being closed Source, like the developer experience so today, another example today like elixir was, the first programming language that, GitHub had like the new navigation code, navigation things that were provided by, the community right so there were some, programming languages and there still, are where they have very good navigation, and exploration on GitHub UI and the, path for that to get that feature to get, that behavior was and I'm very welcome, that I very thankful that the GitHub, team they you know they discuss with us, doesn't allow us to do that but that's, close Source right and GitHub plays a, major role over you know how developers, use right so it's all comes back to this, idea of like if you want to provide a, good experience for your users how much, of that is behind something closed, source that you have no control and you, are depending on somebody you know, paying attention to you like or you, having a contact or you know me having a, name because I was very active in the, Ros community that GitHub uses like 10, years ago right those are the things, that but I like I feel lucky you know, but I it worries me right like uh how, much is being closed right how much is, is going to be out of our control and, then the the trapid I guess is like what, does that matter for the small Jose out, there right that want to start building, his thing today and they won't be able, to well you killed the vibe there just, oh thank you uh well you just that's me, at parties you know so you just me, parties not, invited just Kidd oh funny all right, well let's uh should we try to close on, an up note on a high note on a on an, upper was, 
that... Wow, I had no idea. I think we should end it right there, Adam, don't you think? We ended on a high. Cheese-and-knife tactic there, I love it. It was... do you want higher? I don't think it'd be good for the listeners. I think that was plenty high enough for me. Adam, were you satisfied with that? That was a high note, literally. Yes, and um, I dig it. [Music] Thanks for listening to this very special episode of Practical AI. These were just extracted segments of much longer conversations. If you want to hear more, there's a link in your show notes to each of the episodes featured here. Thanks again to our partners fly.io, to our beat freak in residence Breakmaster Cylinder, and to our friends at Sentry: save yourself 100 bucks on the team plan when you use code CHANGELOG while signing up. That's all for now. Chris and Daniel return on the next one. [Music]
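The schema-aware query generation described earlier in this conversation, where the user's actual field names and data types are plugged into the prompt so the model writes a query against real columns rather than guessed ones, can be sketched roughly as below. All names, the schema, and the prompt shape here are hypothetical illustrations, not Honeycomb's actual implementation.

```python
# Sketch: ground an LLM's query generation in the user's real schema by
# injecting field names and types into the prompt. The schema and prompt
# wording are made up for illustration.

def build_query_prompt(question: str, schema: dict[str, str]) -> str:
    """Assemble an LLM prompt that embeds the dataset schema."""
    field_lines = "\n".join(f"- {name}: {dtype}" for name, dtype in schema.items())
    return (
        "You translate questions into queries over a dataset.\n"
        "Only use fields from this schema:\n"
        f"{field_lines}\n\n"
        f"Question: {question}\n"
        "Query:"
    )

# Hypothetical schema pulled from the user's dataset at request time.
schema = {"service.name": "string", "duration_ms": "float", "status_code": "int"}
prompt = build_query_prompt("show me my slowest service", schema)
print(prompt)
```

The point of the pattern is that "show me my slowest service" only becomes answerable once the model can see that a duration-like field exists and what it is called; the prompt text itself would then be sent to whatever model the system uses.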
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Starting the Joint Artificial Intelligence Center | Lt. General Jack Shanahan on the "Practical AI" podcast. Full audio 👉 https://practicalai.fm/257
#podcast #ai #machinelearning #datascience #artificialintelligence #deeplearning #ml #mlops #nlp #dataengineering | 137 | 2 | 0 | I would sit there and cry in my beer, so to speak, and say "Oh, nobody understands my problems." And then one day I'm driving into the Pentagon, despondent about how I'll never get this thing called the JAIC built, because I have no people, I have no money, how are we going to get there? I'm listening to Guy Raz's How I Built This, and it was listening to the CEO, the founder of a router company. It was a fascinating story; it was exactly my story. He says, "I built this thing in my parents' garage, and there were days when I was just ready to throw in the towel and give up, and there were other days, 24 hours later, where suddenly some technological breakthrough or some big contract came through, and all of a sudden it looked bright again." I lived that. I said, well, if he could turn it into that company, then hell, I can do the same thing in the Department of Defense |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Prompting the future | Daniel & Chris explore the state of the art in prompt engineering with Jared Zoneraich, the founder of PromptLayer. PromptLayer is the first platform built specifically for prompt engineering. It can visually manage prompts, evaluate models, log LLM requests, search usage history, and help your organization collaborate as a team. Jared provides expert guidance in how to be implement prompt engineering, but also illustrates how we got here, and where we’re likely to go next.
Leave us a comment (https://changelog.com/practicalai/261/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Shopify (https://www.shopify.com/practicalai) – Sign up for a $1/month trial period at shopify.com/practicalai (https://www.shopify.com/practicalai)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Jared Zoneraich – Twitter (https://twitter.com/imjaredz) , LinkedIn (https://www.linkedin.com/in/imjaredz)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• PromptLayer (https://promptlayer.com)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-261.md) | 130 | 4 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I am founder and CEO at Prediction Guard, and I am joined as always by my co-host Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris? I'm doing fine, how's it going today, Daniel? It's going great. I'm pretty excited to prompt our guest today and hear what he has to say. We're joined today by the prompt master, Jared Zoneraich, who is founder at PromptLayer. How are you doing, Jared? I am doing well, excited for this. We're excited to have you. It seems like, maybe from my perspective, there was the release of all of this generative AI stuff, and then there was this realization that there's kind of a new skill needed around this thing called prompt engineering. And then it seems like some people have... I don't know if they've moved past the term, or they've tried to develop other terms; I see other terms being developed, like AI engineering and such, as related to generative AI. So could you give us a sense of, from your perspective, as someone who is obviously building things for prompt engineering, maybe to start out: what is prompt engineering, and how have you seen it develop as a skill over the past year, since people have been thinking a lot about prompting generative models? Yeah, the last year since the word was invented, or maybe more than a year ago. But, uh, yeah, no, I think it's a really good
question. This question of what prompt engineering even means is also a good question. I'll tell you how we think about it. For PromptLayer, we consider ourselves a prompt engineering platform, so we've really steered into this term that is kind of overloaded. We've embraced it, for sure — half by accident, and half realizing it was beneficial to us, because the fact that it has no fixed definition means anyone getting into LLMs and prompt engineering says "Oh, there's a prompt engineering platform — maybe I need that," without necessarily even knowing what it is. So it helps us a little bit there.

I first started hearing the term prompt engineering in the GPT-3 days, a little before ChatGPT — maybe GPT-2 — back when everyone was just using the OpenAI playground. You'd start to hear a little bit about prompt engineering, and it was kind of cool, but those were the days before it clicked for everyone — before ChatGPT came out and people really realized how much potential this technology has. That was when prompt engineering first became a word. Scale AI is famous for maybe being the first to publicly hire someone with the role "prompt engineer." We called our platform a prompt engineering platform from the beginning, which brought a lot of people to us who had no idea what prompt engineering is.

The definition we've started to roll with as a company: prompt engineering is the tuning of the inputs to the LLM. The prompt is the main input, but it also includes which model you're using, your temperature, and your other hyperparameters. The whole process of prompt engineering, to us, is what goes in and what comes out. And that is specifically — I can talk a little more about this if it's interesting — a bit different from the MLOps definition, the standard machine learning definition of hyperparameter tuning in traditional ML. That's specifically why we call ourselves a prompt engineering platform and not an LLMOps platform — I do think there is a slight difference. I don't know if I answered the question fully, but those are my thoughts on prompt engineering.

As we all dived into it — you talked about the OpenAI playground, which I think everybody dipped their toe into first, at least before the ChatGPT release days — one of the things I discovered as other models started coming out was that some of the skills I was developing for prompting did not always translate as I expected to other models. Have you seen that — where every model has its own variations of what seems to work, from a productivity and output standpoint? Any thoughts around that? How do you guys see that, with all these different models coming out and little variations in how they respond?

Yeah, that's for sure a good observation. It's become clear that each model is made a little bit differently, so you have to talk to it a little bit differently. I think these differences are maybe going to get less significant in the future. When ChatGPT came out, the big thing was that if you were nicer to the model, you'd get a better answer — maybe because Stack Overflow questions that are nicer get better answers, or something like that. And now you have people saying "my grandma's going to die, you need to answer this," or "I'm going to tip you $100 if you answer this." These are all tricks that work today; they're not going
to last forever — they're little things people figure out. But the part you mentioned, talking to different models differently, is not going away. A lot of these models are made very differently. We think it's pretty conclusive now that we're not going to live in a world with an OpenAI monopoly on language models. Just a week or two ago I saw a good tweet: now we have Mistral, Claude, GPT-4, and a few others that are all really good, all made somewhat differently, and there are intricacies. Our philosophy, and what we tell our users regarding prompt engineering, is: think about it as a black box. It's most helpful to be a little bit naive here and not try to understand how an LLM works — just try to track the inputs to the outputs, if that makes sense.

Have you found any difference between people coming from a deep data-science background who are overanalyzing everything, and people coming from a non-technical background who are domain experts getting into developing these prompts? Have you seen different struggles on each side of that spectrum? Because you have this very interesting mix of people trying to be "prompt engineers," quote unquote — some of whom, I've seen, are very much non-technical domain experts who are really good at things like psychology and writing narrative instructions, at being articulate — and then you have this other side, the data science side, who are really into modeling and want to analyze all of the outputs. Have you seen different struggles on both sides in terms of being effective prompt engineers?

I think you put it well that there are these two groups coming at LLMs. This is what makes LLMs so cool to me: traditional machine learning — standard, mathy machine learning — kind of needs a PhD. Maybe you don't strictly need it, but a PhD is very helpful; it's intense how you're building those models, doing a lot of the tweaking by hand. Then OpenAI came out with this amazing API — I've done a lot of dev work in the past, hackathons and helping companies make APIs, and in my opinion OpenAI's API has the best docs I've ever read. It's so simple: you give it text, and you get text out. Like you said, this opened up a completely new technology, one that's just so much better than everything else, to non-technical people. You don't need a PhD to understand how to communicate. And I think it brings about this new skill set, prompt engineering, which in my opinion is a mixture of communication — being able to write succinctly — and being able to think algorithmically. I don't know the exact word for thinking algorithmically; I've heard step wolf use that phrase and I like it. It's basically the scientific method: do you know how to think in terms of creating a hypothesis, trying it out, tracking it? Are you strategic about this? And I think it's the same challenge on both sides. Some people try to overcomplicate it: if you're coming from an ML background and you try to understand why a certain token gives you a certain output — these things are getting more complicated, not less. You kind of just need to take the naive approach and say: hey, I'm just going to talk to it, try to get the output I want, and keep trying stuff till it works.

As someone who's used the APIs a lot — and you're talking about OpenAI being so good — so many folks listening to this may have only used things like the
normal chat interfaces on each of the models, when they've gone and tried them out or paid for a top-end subscription, and have never touched the APIs at all. Could you take a moment and talk about what the API experience is like? I'm taking advantage of you as an expert in the area — share that with listeners so they get the other side of it, because the vast majority of them probably don't.

I'll explain — and just to sidetrack myself for a sentence, I think the fact that you used the word "expert" is so funny. It's such a new field, it's almost amazing. I tell everybody you could become an expert in this thing very easily — nobody really knows what's going on. You kind of just need to study for a week and you're an expert, which is a very unique place to be.

It makes it fun to be able to dive into something and get as deep as the leaders in the field.

Yeah, 100% — to be on the cutting edge, and to know that nobody really knows what they're doing here. Some people do, but there are very few of them. So, regarding the APIs: the way I think about ChatGPT, and the way I usually explain it, is that it's basically a very thin wrapper on top of the API. Every time you talk to ChatGPT, behind the scenes it's using this LLM technology, sending your message plus a little bit of a preamble before your message. The preamble is basically what we could call the prompt, and it's basically saying: "Hey, you're an AI assistant. Make sure to be helpful to the user. Maybe don't be controversial. You can use a calculator if you need to" — stuff like that — and then giving it the user's messages. That's all it is. The process of prompt engineering is how you tweak that preamble — how you get it to respond the way you want by telling it what to do. When I talk about the API, I'm talking about the things OpenAI has exposed to let you build your own ChatGPT and build your own products on top of it. If you want to get started and you haven't touched these APIs, the best thing to do is go to the OpenAI docs and read the getting-started tutorial. It's really well done. As someone running a developer tools company, I'll tell you how hard it is to write good docs — from my perspective, ours should be much better; other people say they're great, but there's always room for improvement. It's a very hard thing to do. So that's where you should go.

From your perspective, over the last year, as people have gotten into this practice of prompt engineering, have you generally seen people engaging with your platform be more informed coming in — they've done that experimentation with ChatGPT or the APIs, and they're using words like "few-shot" and so on? Or is it still people coming in saying "I think I need a prompt engineering platform, but I don't really know where to start"? Have you seen that shift over this last year?

I'd say we're still very early in this. There are a few leaders, but everything's up for grabs in terms of AI products. I don't buy the notion that it's only the incumbents who are going to be able to use AI — the AI products people are making today are just scratching the surface. Having said that, I have seen a bit of a change since we launched our product. We launched in January of last year — January 2023, a few months after ChatGPT came out. I think we were the first prompt engineering platform — maybe there's some argument there, but we're one of the first at least. When we launched, we had a lot of indie hackers and individual hobbyists using our platform.
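The "thin wrapper" picture described here — a chat UI is roughly a fixed preamble (system prompt) stitched in front of your message before it reaches the model — can be sketched in a few lines. The payload shape below follows the common chat-completions format; the preamble text, the `build_chat_payload` helper, and the default model name are all illustrative assumptions, not OpenAI's actual internals.

```python
# Sketch: a chat UI as a thin wrapper — a system-prompt preamble is
# prepended to the user's message to form the request an LLM API expects.
# The preamble wording here is made up for illustration.

PREAMBLE = (
    "You are a helpful AI assistant. Be concise, avoid controversial "
    "topics, and use tools like a calculator when needed."
)

def build_chat_payload(user_message, history=None, model="gpt-4o-mini"):
    """Assemble a chat-completions-style request body."""
    messages = [{"role": "system", "content": PREAMBLE}]
    messages.extend(history or [])  # prior turns, if any
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, "temperature": 0.7}

payload = build_chat_payload("What is prompt engineering?")
```

Prompt engineering, on this view, is mostly tweaking `PREAMBLE` (and the other fields of the payload) and observing what comes back.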
That was the whole community back then. That lasted for a little while; then we moved on to AI-first startups — one- or two-person startups, the really cutting-edge ones. We actually used one of them, a great company, to refactor our whole code base into TypeScript — a lot of really cool stuff. Then, starting in the fall — and I heard this from a few other founders in the space — it felt like there started to be a real shift, where real companies — and by "real" I mean companies that actually make money — started getting serious about AI and about LLMs. We're still seeing that maturation continuing: real teams are building AI products they care about, not just Twitter demos. PromptLayer is interesting for a Twitter demo, but it becomes really useful for a team that's serious about building its product, has multiple stakeholders, and wants to collaborate. So that shift is still happening — more and more companies are getting serious about their LLM products and getting value and revenue from them — but we're still at the very, very beginning of the curve. A lot more to come.

That was a great intro into prompt engineering and its current state. I'm wondering if you could help us understand: yes, it's good for people to get hands-on with these models and gain some intuition about how they behave with different prompts — but what is it about this discipline of prompt engineering that needs systematic ways of managing your prompting methodologies? And how is that different from, or the same as, engineering in the past? We've always had version control and that sort of thing. What's unique and not unique about this discipline in terms of how you need to approach it systematically?

Starting from first principles, there's one fundamental thing that's changed: we're now building on a probabilistic technology — a technology that sometimes gives us one answer, sometimes another, and is trending toward being more confusing about why it gives one answer and not another, not less. Yes, theoretically it is deterministic — you could really dive into the weights and maybe figure it out — but nobody's practically going to do that; it's virtually too hard for 99.9% of use cases. So we're at a place where you're working with a black box. Yes, some servers, some architectures become black boxes if there's bad code, but that's a different type of black box — not a real one.

We're building technology that's... well, that's the code I write.

Yeah, me too — that's why I'm going to get banned from our repo soon. But you're building technology on this black box, and you need to think about it differently. This is a big philosophy we have at PromptLayer: we built a lot of great stuff in traditional software and traditional machine learning, and had a lot of great learnings. Git is a fantastic tool; version control is important; access controls are important; test-driven development is important. But do we necessarily want to take everything one-to-one? Not really. I think the biggest difference between LLM-based development — building AI applications — and building standard software is who the stakeholders are, like we were talking about earlier. Now you can have subject matter experts who are not necessarily software engineers, but rather these prompt engineers, these AI whisperers — people who are able to talk to the black box and communicate with the AI, we could call it, in a little sci-fi sense. But
we have this new stakeholder in the process of software engineering who is not going to jump into the code, and not going to jump into Git. That's why, at least for PromptLayer, we've taken a first-principles, ground-up approach: what can we learn from how people work together on normal software — version control, collaboration — and how can we take that into LLMs, bring in new stakeholders and new collaborators, and let people actually build on this black-box technology in a systematic way? Hopefully that makes sense.

It does. I'm curious — you said some things there that really piqued my interest. You contrasted it with the deterministic programming we've all grown up with; now we're in this new age with these non-deterministic things, where you can give the same prompt and it may or may not, on any given day, give you the same answer back. How has that fundamentally changed software development in general? And I'm encompassing everything when I say software development — both the AI and the systems around it — because when you're dealing with that potentially unexpected return from the non-deterministic black box you're talking about, how do people handle it when they're trying to devise something and say "hey, I want to use a model in the thing I'm building"? How does it change the way they think about that?

I don't know how people think, but I can tell you a symptom of how they think. We work with a lot of LLM teams, of course, because they're using our platform, and we talk to them — we're always trying to figure out what to improve. One of the big theses we've come to with PromptLayer is that the iteration cycle of prompt engineering is different from that of software engineering. What I mean is that your code deploys, continuous integration, and that whole sort of thing happen at a different cadence — almost always, in mature software — than prompt engineering, for a lot of reasons. Maybe you're updating the prompt frequently; maybe, again, different stakeholders are updating the prompt. That's why we encourage people to keep their prompts in a CMS — we call it a prompt registry; you could put it in a Postgres database, something like that — because you don't want to block your prompt engineering cycle on deployments. That's the symptom of this new thought pattern of building, let's call it, black-box software. They're also different problem sets — you're not solving the same type of software problems; you're solving, let's call them, language problems, things that can employ this new type of software. So it's not just a different way to build; it's also a different way to think about it. And then we can get into how you code now and how that's different. I don't like to predict things — I think it's a fool's errand — but if I had to predict one thing: as a lazy programmer, there's something really nice about having a little block of LLM do something for me. For example, parsing chunks of text and reordering them — you can do that deterministically, and it'll probably be better, but there's probably a world where models get cheap and quick enough that it's worth the engineering time not to build it well and just outsource it to AI. But maybe that's a whole different tangent.
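The prompt registry idea mentioned above — prompts live in a datastore rather than in code, so changing one doesn't require a deploy — can be sketched minimally. The in-memory dict below stands in for something like a Postgres table; the class and method names (`PromptRegistry`, `publish`, `get`) are illustrative, not PromptLayer's actual API.

```python
# Minimal sketch of a prompt registry: versioned prompt templates kept
# outside the code base, so a non-engineer stakeholder can publish a new
# version without touching Git or waiting on a deploy.

class PromptRegistry:
    def __init__(self):
        self._versions = {}  # name -> list of (version_number, template)

    def publish(self, name, template):
        """Store a new version of a named prompt; return its version number."""
        versions = self._versions.setdefault(name, [])
        versions.append((len(versions) + 1, template))
        return len(versions)

    def get(self, name, version=None):
        """Fetch a specific version, or the latest if none is given."""
        versions = self._versions[name]
        if version is None:
            return versions[-1][1]
        return versions[version - 1][1]

registry = PromptRegistry()
registry.publish("summarize", "Summarize this text:\n{text}")
registry.publish("summarize", "Summarize in 3 bullet points:\n{text}")

# Application code always pulls the current version at call time.
prompt = registry.get("summarize").format(text="...")
```

Because the application fetches the template at call time, updating the prompt is decoupled from the code-deploy cadence — the point being made above.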
Yeah — and maybe you could get into the implications of these prompts in a registry. Because there are these sort of random tasks — I don't know if you all saw the Devin thing that's been going around, where it's like a junior-engineer agent that can write scripts and interact with software documentation. Today I was thinking about that, similar to what you were saying, Jared: I could take all of these strings and interact with an API to translate them into another language — which was the task I was doing — or I could just say "hey, write a script that does this" and have it done for me. So there are those random things. But then there's another type of prompt where, as soon as you start exposing a system to end users, small changes in the prompt on the back end could produce very different changes in behavior for your actual end users, and cause real problems. So could you talk a little bit about the implications of changes to your prompts, and best practices you've found? We're talking a lot about prompt versioning and registries around those prompts, and I know PromptLayer is thinking about more than that — evaluation and other things — but before we go to those, could you talk about what you've seen in how people manage these different prompts: some with very low risk, which change often, and some where even small changes carry a large risk?

Yes. I think there's a lifecycle here for how much you care about your prompt. For any prompt in any mature product — if you have an AI product that's making your company $10 million a year, you're going to care about any change to it. So let's say that's the end stage. At the beginning, you're probably just going to ship any prompt: you write a prompt, it kind of works, you try it once or twice, say "all right, good enough," and get the MVP out. In that case maybe the prompt is in your code; maybe you don't care so much about what you were just saying — having a breaking change. Whatever, I've got to get it out; let me get five people using it. OK, you've done that. What's next? You probably now have five different prompts in your system, scattered everywhere. I was just talking to a founder who was visiting our office today who had this exact story: he moved all his prompts into a text file — in his case, a TS file — in his system. So now you have your prompts in one place. That's the next step, but it's still in your code base, still linked to deploys, and like you said, you still have no way of knowing what happened when you push a new one — is it breaking 20% of our use cases? Not great. I'd call this stage — I like this word for it — vibe-based prompt engineering: you write a prompt, test it in the playground once or twice, and just judge by looking — "OK, yeah, it's pretty good." That lasts a little while. The time when that's no longer good enough is usually when your product reaches greater maturity — maybe you're rolling it out to GA — or you're adding more stakeholders to the team: a PM, a content writer, or a subject matter expert like we were talking about earlier — a psychologist, a lawyer, something like that. You need more people involved, and now you have a non-technical person writing your prompts who isn't really capable of building out a whole dev workflow to test them, and you really want to make sure it doesn't break everything. There are a few strategies I've seen people employ here. One is, of course, traditional software:
let's borrow some stuff. A/B testing — release it to some users while monitoring user feedback; if users are giving a thumbs up or thumbs down, we should be able to see problems pretty quickly. Also having prod, staging, and dev — different release labels, as we call them; there are a lot of words for this. Let's call that category "slow releases." That's one way. Then there are two other big ways of solving it. The second, another concept borrowed from software engineering, is regression tests: find cases where it's failing and check whether we succeed on those test cases. The third, which is also kind of like a regression test, is back-testing: just run the new prompt on old examples and see if the outputs change. In a lot of LLM use cases, the really hard part is that you don't know what the ground truth is. Say we're generating a summary — there's no single correct answer; there's a good summary and a bad summary, and it's very hard to judge which is which. There's a user of ours doing exactly this, trying to figure it out with a combination of human graders and whatnot — we can talk more about that in a second. But in this case, often the best thing to do is just rerun the prompt on old responses and see how much changed. "Oh, I updated the prompt and 50% of my responses changed — maybe I should look into those." "Only one out of a thousand changed — probably good enough." It's all about trade-offs in this world. Again, it's a new way of thinking; it's non-deterministic, so how do you trade off how much it's changing against how much you need it not to change? Maybe you want to force a specific output that can be deterministically graded — for example, a JSON or a boolean output. There are a lot of strategies here, but if I were to give a one-line answer: you should be deciding at what cadence you update prompts based on what stage your product is at and how bad these issues are going to be.
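The back-testing idea described here — rerun a new prompt version over historical inputs and measure what fraction of outputs change — fits in a few lines. The model is stubbed with a plain function; `backtest` and the stub behaviors are illustrative, not any particular tool's API.

```python
# Sketch of back-testing a prompt change: compare a new prompt version's
# outputs against logged outputs from the old version on the same inputs,
# and report the fraction that changed.

def backtest(old_outputs, new_prompt_fn, inputs):
    """Return the fraction of historical examples whose output changed."""
    changed = sum(
        1 for inp, old in zip(inputs, old_outputs)
        if new_prompt_fn(inp) != old
    )
    return changed / len(inputs)

# Stub "model": the old version upper-cased its input; the new version
# also strips trailing whitespace, so some outputs differ.
inputs = ["hello ", "world", "foo "]
old_outputs = ["HELLO ", "WORLD", "FOO "]
new_fn = lambda s: s.upper().strip()

change_rate = backtest(old_outputs, new_fn, inputs)  # 2 of 3 changed
```

A high change rate flags the update for review; a near-zero rate is the "one out of a thousand changed — probably good enough" case above.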
[Music]

You know, when we started podcasting back in 2009, an online store was just the furthest thing from our minds. Now we have merch.changelog.com, and you can go there right now and order some t-shirts — and that's all powered by Shopify. It's so easy, all because Shopify is amazing. Shopify is the global commerce platform that helps you sell at every stage of your business, from the "launch your online shop" stage, to the "first real-life store" stage, all the way to the "did we just hit a million dollars?" stage. Shopify is there to help you grow. Whether you're selling security systems or marketing memory modules, Shopify helps you sell everywhere, from their all-in-one e-commerce platform to their in-person POS system. Wherever and whatever you're selling, Shopify has got you covered. Shopify helps you turn browsers into buyers with the internet's best-converting checkout — up to 36% better compared to other leading commerce platforms — and sell more with less effort thanks to Shopify Magic, your AI-powered all-star. To everyone out there who loves Changelog podcasts: you can go to merch.changelog.com and get your favorite threads to support our podcasts. It is just the best thing ever — from stickers to threads, it's all at merch.changelog.com. And did you know that Shopify powers 10% of all e-commerce in the US? Shopify is the global force behind Allbirds, Rothy's, and Brooklinen, and millions of other entrepreneurs of every size across 175 countries. Plus, Shopify's extensive help resources are there to support your success every step of the way, because businesses that grow, grow with Shopify. Sign up for a $1-per-month trial period at shopify.com/practicalai — all lowercase. Go to shopify.com/practicalai now to grow your business, no matter what stage you're in. Again: shopify.com/practicalai. [Music]

If you're looking at a large, out-on-the-edge system of systems — you have a number of models deployed that all do specific things, some generative and some not, and the generative ones may be addressing very specific functions — and you've got it in production, maybe having taken that minimum-viable-product approach to getting it up, but now things are fairly stable in production... you were starting to address some of this. I'm grappling in my own head with how to think about making tweaks and changes to prompts in any given model in the system, and detecting the effects. Is there a best place for me to start? Because I'm still trying to grapple with the larger picture and really understand it, and if I want to change something without impacting the larger stable system — what would you do in that position? One, two, three: try this, try that — give me a good hands-on takeaway. I apologize for the selfish nature of the question.

No, it's good to have the selfish type of question here, because one thing I think a lot of people get wrong in this space is that every prompt is kind of different, and the answer to this question is really unique to what you're actually trying to do, the task you're trying to solve. There isn't really... I mean, a lot of people are trying to sell it. I like to say that maybe I'll change my opinion in a month or a week or a day — I do that all the time as I learn more.

No worries — it's good to change your opinion.

But right now I think a lot of these eval sets that people produce are
not that useful for building real products, because you're trying to evaluate your prompt for your real application, not for some pie-in-the-sky financial dataset or something like that. Having said that, the first question to ask — and maybe we'll use this example — is this. I would modularize it; I would think about it at the prompt level. I guess there are two ways to think about it: modular tests and end-to-end tests, and we should be doing both. For this case: do we have a ground truth? That's the first question I'd ask. Is there the ability to make a dataset with ground truth we can compare against, or is it like the summary example, where there's no single answer?

In the case I'm looking at, I think you could establish ground truth. I don't know that it would be easy, but you probably could, because you could have a human assess what the generative model was trying to assess as well — so you could get a ground truth that is a human analysis, as a proxy.

Excellent — so your life is now ten times easier. That's always good. Step one — whether you do it yourself, hire some people on Mechanical Turk, or use QA, however you do it — is to build a small dataset; it doesn't have to be big. Try to get into this method of test-driven prompting, or eval-driven prompt engineering — I don't know if we have a word for it yet; maybe we need to define one — and build some sort of metric you can evaluate your tests against, so you're not just trying examples one by one. What I mean is: build, say, 10, or 15, or 100 — whatever the use case calls for — sets of input variables for your prompt. Your prompt probably says "hey, your task is to do this; here's this data" — so the data is an input variable. Then get a human to give you the expected output. Then, every time you test the prompt, run it against that set.
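The eval-driven loop just described — a small dataset of input variables with human-written ground truth, scored on every prompt change — can be sketched as below. The LLM call is stubbed with canned answers, and the exact-match scorer is the simplest possible grader; `run_prompt`, `score`, and the toy dataset are all illustrative. Tasks like summarization would need fuzzier grading (human raters, similarity metrics, or an LLM judge), as noted above.

```python
# Sketch of eval-driven prompt engineering: run every prompt change
# against a fixed dataset of inputs with human-provided expected outputs,
# and track a single accuracy metric instead of eyeballing one-off runs.

eval_set = [
    {"vars": {"text": "2 + 2"}, "expected": "4"},
    {"vars": {"text": "3 * 3"}, "expected": "9"},
]

def run_prompt(template, variables):
    """Stand-in for an LLM call; returns canned answers keyed by input."""
    canned = {"2 + 2": "4", "3 * 3": "9"}
    return canned[variables["text"]]

def score(template, dataset):
    """Exact-match accuracy of the prompt over the eval dataset."""
    hits = sum(
        run_prompt(template, case["vars"]) == case["expected"]
        for case in dataset
    )
    return hits / len(dataset)

accuracy = score("Compute: {text}", eval_set)
```

Swapping the stub for a real model call turns this into the "run it on that set every time you test" habit, with one number to compare across prompt versions.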
time you, test the prompt let's run it on that, over time we can let me I could talk, about over time how to make that better, but does that make sense it does it does, thing I'd add is over time how you make, that better is you start connecting that, back with real data and how your users, use it so again you can make your life, 10x easier if you have good user, feedback and a way to know if the, production inference if the production, uh L run actually worked or not so user, feedbacks a way to do that say a user, gives you a thumbs up thumbs down now, you can take all those thumbs UPS thumbs, down make a new data set out of that and, now now you're really going and now, you're building this whole feedback loop, and I think I I'll say like our biggest, goal with prompt layer our MO is to, shorten the prompt engineering feedback, loop I think that's what everything, boils down to in this world maybe along, with that element of feedback I'm, wondering if you can talk a little bit, because we've talked a lot about, evaluation prompt versioning there's the, other element of this which I know you, all um are thinking about deeply which, is uh sort of logging and monitoring and, there's certainly cases where oh I have, this chain of llm processing or even, Loops that could happen like you know, llm as a judge or something that kind of, or critic kind of elements of LM prompts, that actually could Loop until something, happens or a certain number of times and, the way that you develop your prompts, both in terms of their length in terms, of how effective they are could, drastically impact your latency of, processing it could impact your cost um, in terms of how much text you're putting, into models especially if they're, charging you for how much text you're, putting in so yeah could you talk a, little bit about maybe the highlights of, some kind of best practices around, logging and monitoring and how you think, about that at promp layer yes so I think, uh not to sound 
like a broken record here, but the thing I go back to a lot is: everything is use-case dependent. You brought up that some people are very concerned about latency, some are very concerned about cost. I know teams that are concerned about neither of those, and their only concern is: are we getting the right answer? For example, code-generation-type startups: a lot of times latency doesn't matter, because you're giving them a task. Again, in our case at PromptLayer, we worked with a company called Grit where we said, can you move our whole code base into TypeScript? It could take a week, I don't care. So latency and cost don't matter to them. But then there are a lot of other cases, probably most cases, where latency and cost matter. In either case, why logging is important here is just debugging, honestly.

I'll be honest, logging and observability is kind of the most boring part of our platform, not because it's not useful, but because it's obvious. We started with observability. It's table stakes to shortening that feedback loop and that high-level goal. It's table stakes because you need to collect the data: you need to collect the data to build these evals, to see when it's not working, to be able to triage issues. Say one of your users tells you, hey, I got a weird error, this happened to me. Actually, I was using Superhuman AI and it didn't work, and I told them about it. I said, I got a weird output, maybe you guys want to debug it. And they asked me what prompt I gave it to produce the output. I don't remember; don't you have a logging system? Maybe you should use PromptLayer. But yeah, you should be able to figure out why something broke. You should be able to step through, step by step, into the chain and see which version of the prompt it used; maybe you have multiple versions in production. Our logging just logs each request and lets you integrate it with
metadata like user information.

It's one thing to log a bunch of things, but let's say that I want to improve latency or cost or something like that, and I maybe have 17 different prompts that I'm using across my system. How have you learned to present that information to users? Especially since, as you mentioned, the skill of prompt engineering has this algorithmic-thinking piece to it, but there are also a lot of people coming in for whom that's the part of their brain they're still building up, while they bring other skills with them. So how have you found it useful to present this sort of information to people, to give them the right sorts of feedback along their journey of optimizing things?

I think latency and cost are the easiest metrics to figure out; you get them out of the box when you're doing prompt engineering, because you're always getting latency and cost. The harder metrics are: is my answer correct? Is it rude to me? Is it mean? Does it have AGI? I don't know. Are you Microsoft Bing? We did a prompt engineering tournament last night, and the first round was: can you avoid a PR disaster like Microsoft Bing? That's the hard part: making sure your answer doesn't go off the rails or isn't wrong. But latency and cost, and that type of base-level logging, those are the tractable metrics. So we give you latency and cost for each prompt template, we give it to you broken down by version, and then we also have a full analytics page that we actually revamped the other week for one of our customers. They were going to have to build out a whole BI dashboard, because I think either the founders of the company or the investors were worried about some users spending too many credits. So we revamped our analytics page to
just save them some time there. You can use our analytics page to see which prompt templates are costing you the most, which users are costing you the most, maybe segmenting things by prod, maybe segmenting based on geo, and just filtering down based on that sort of thing.

I want to ask a question. You have kind of pioneered this whole space, jumping into prompt engineering, quite honestly, before anyone really knew what it was, as you pointed out earlier, and you've been building out this capability. As you look to the future... and the future is changing so rapidly right now. We're all in this massive acceleration of things coming out. Daniel and I, every week, are trying to figure out, of all the things happening, what do we actually talk about? It's getting harder. It used to be there was one thing that happened last week and we'd talk about it; now there are so many. As you're operating a business in this kind of intense, accelerating environment, where do you think this is going from a prompt engineering standpoint? What will prompt engineering become as we become increasingly multimodal, with all the fantastic things happening on a weekly basis? I would imagine it would be fairly hard to plan ahead on where the industry is going, and the technology, and where to put your business. How do you see the future? What do the next one year, two years, five years look like in your head?

That's a billion-dollar question, right? I think we try to do two things: we try not to predict the future, because it's too hard, and we try to build something useful that is built on first principles that make sense. I think that's how we try to stay ahead of the curve a little bit. I can give you some examples. For example, the whole process of iterating, of testing, of evals: we procrastinated a little bit on building
that part of our platform. We always knew we needed it; it's been the buzzword in the industry for six months now. But we really wanted to know how to build it correctly, and I think we spent a lot of time talking to a lot of teams and asking, how do you do evals today? Every team we spoke to did it in their own way, in a Google spreadsheet, building out some weird unique thing. So our eval product, if you try it out, looks like a spreadsheet for that reason. It's very much inspired by the robustness of a Microsoft Excel-type product, where it seems very simple but you can take it in a lot of different directions. So I think we're trying to become future-proof by avoiding strongly opinionated stances. We want to support best practices and build best practices for the community, especially in a space like this, but we want to do it without pigeonholing people into particular ways of doing things.

It's been funny seeing how the hive mind has changed its opinion on what the future is. I remember a year ago we were talking to investors, obviously not the investors that are on our team right now, and there were investors saying, oh, prompt engineering... AGI is just going to take over, we're not going to have any infrastructure anymore. A lot of them had that opinion. I don't think many have that opinion anymore, let's just say. And it's been very obvious to us that prompt engineering is the process of giving inputs to the LLM and choosing which model you're using. Even with the most advanced LLM ever, let's say an LLM as advanced as a human, you still have to tell a human what you want. You still have to tell the intern what task you want them to do. That's prompt engineering, and there's always going to be a process of inputs there. So that's how we think about the future, and the lack thereof.

Jared, thank you so much for coming on to share your
insights today. We definitely appreciate that you and the team at PromptLayer are thinking deeply about these things and building really good tools to support the community. I encourage everyone in the audience to check out the show notes, follow the links, and find out more about PromptLayer and the cool stuff they're doing. I hope we can have you back on the show in another year, when I'm sure prompt engineering will look very different than it does now. But thank you so much for joining, Jared. It's been a pleasure. Yes, thank you for having me, this has been fun.

All right, that is Practical AI for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways to listen, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat-freaking residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. |
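The workflow Jared describes in the transcript above — log every production request with metadata, attach thumbs-up/thumbs-down user feedback, turn the rated logs into an eval dataset, and roll latency and cost up per prompt template and version — can be sketched as a toy in-memory logger. This is a hypothetical illustration, not PromptLayer's actual API; every class, field, and method name below is invented.

```python
# Toy sketch of the logging + feedback loop described above.
# NOT PromptLayer's API -- all names here are hypothetical.
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class RequestLog:
    template: str            # which prompt template served this request
    version: int             # which version of that template
    user_id: str
    latency_s: float
    cost_usd: float
    output: str
    feedback: Optional[str] = None  # "up", "down", or None if unrated

class PromptLogger:
    def __init__(self) -> None:
        self.logs: list[RequestLog] = []

    def log(self, **fields) -> RequestLog:
        entry = RequestLog(**fields)
        self.logs.append(entry)
        return entry

    def eval_dataset(self) -> list[dict]:
        # Rated requests become eval examples: thumbs-up = passing output.
        return [{"output": l.output, "label": l.feedback == "up"}
                for l in self.logs if l.feedback is not None]

    def analytics(self) -> dict:
        # Latency/cost rolled up per (template, version), like the
        # analytics page discussed in the episode.
        groups: dict = {}
        for l in self.logs:
            groups.setdefault((l.template, l.version), []).append(l)
        return {k: {"avg_latency_s": mean(x.latency_s for x in g),
                    "total_cost_usd": sum(x.cost_usd for x in g)}
                for k, g in groups.items()}

logger = PromptLogger()
logger.log(template="summarize", version=1, user_id="u1",
           latency_s=1.2, cost_usd=0.002, output="Good summary.").feedback = "up"
logger.log(template="summarize", version=2, user_id="u2",
           latency_s=0.8, cost_usd=0.001, output="Off the rails.").feedback = "down"
```

From here, `logger.eval_dataset()` can seed a regression suite for new prompt versions, and `logger.analytics()` answers the "which template is costing us the most" question without building a separate BI dashboard.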
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The main components of LLM apps | Raza Habib, co-founder and CEO of HumanLoop, on the "Practical AI" podcast.
#podcast #ai #machinelearning #datascience #artificialintelligence #deeplearning #ml #mlops #nlp #dataengineering #llms | 400 | 4 | 0 | I think you're exactly right. We sort of think of the blocks of an LLM app as being composed of a base model, which might be a private fine-tuned model or one of these large public ones; a prompt template, which is usually an instruction to the model that might have gaps in it for retrieved data or context; and a data collection strategy. Then that whole thing of data collection, prompt template, and model might be chained together in a loop, or might be repeated one after another. And there's an extra complexity, which is that the models might also be allowed to call tools or APIs. But I think those pieces, taken together, more or less comprehensively cover things. So tools, data retrieval, prompt template, and base model are the main components. But then within each of those you have a lot of design choices and freedom, and so you have a combinatorially large number of decisions to get right when building one of these applications |
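Raza's breakdown above — base model, prompt template with gaps for retrieved context, a data-collection/retrieval step, optional tool calls, and a loop chaining them together — can be wired up in a few lines. The model, retriever, and tool below are stand-in stubs invented for illustration, not any real API:

```python
# Sketch of the LLM-app building blocks listed above: base model,
# prompt template with gaps, retrieval, tools, and a chaining loop.
# The model/retriever/tool are invented stubs, not a real API.
from typing import Callable

PROMPT_TEMPLATE = ("Answer the question using the context.\n"
                   "Context: {context}\n"
                   "Question: {question}\n")

def retrieve(question: str) -> str:
    # Data-collection step: a real app would query a vector store here.
    return "Paris is the capital of France."

def base_model(prompt: str) -> str:
    # Stub model: asks for a calculator tool once, then answers from
    # the tool-augmented context; otherwise answers from retrieval.
    if "2+2" in prompt and "calculator(2+2)" not in prompt:
        return "TOOL:calculator:2+2"
    if "calculator(2+2) = 4" in prompt:
        return "4"
    return "Paris"

TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool for the demo
}

def run(question: str, max_steps: int = 3) -> str:
    # Chain: retrieve -> fill template -> call model, looping while the
    # model asks to call a tool (the "extra complexity" mentioned above).
    context = retrieve(question)
    answer = ""
    for _ in range(max_steps):
        prompt = PROMPT_TEMPLATE.format(context=context, question=question)
        answer = base_model(prompt)
        if answer.startswith("TOOL:"):
            _, name, arg = answer.split(":", 2)
            context += f"\n{name}({arg}) = {TOOLS[name](arg)}"
            continue
        return answer
    return answer
```

Each component swaps out independently: the template, the retriever, the tool registry, and the base model are exactly the axes of "design choices and freedom" mentioned in the clip.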
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Generating the future of art & entertainment | Runway (https://runwayml.com) is an applied AI research company shaping the next era of art, entertainment & human creativity. Chris sat down with Runway co-founder / CTO, Anastasis Germanidis, to discuss their rise and how it’s defining the future of the creative landscape with its text & image to video models. We hope you find Anastasis’s founder story as inspiring as Chris did.
Leave us a comment (https://changelog.com/practicalai/260/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack. Visit Neo4j.com/developer (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) to get started.
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Anastasis Germanidis – Twitter (https://twitter.com/agermanidis) , GitHub (https://github.com/agermanidis) , LinkedIn (https://www.linkedin.com/in/agermanidis) , Website (https://agermanidis.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
Show Notes:
• Runway | Website (https://runwayml.com)
• Runway | Twitter (https://twitter.com/runwayml)
• Runway | AI Film Festival (AIFF) (https://aiff.runwayml.com)
• Runway | Gen-2 (https://runwayml.com/ai-tools/gen-2)
• Runway | Gen48 (https://gen48.runwayml.com/winners)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-260.md) | 43 | 0 | 0 | Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

Welcome to another episode of the Practical AI podcast. I am your co-host, Chris Benson. Usually I have our other co-host, Daniel Whitenack, with us; he is not able to join today, but we have a great show in store. We have with us a super interesting guest; if you follow AI, you may very well have heard about this guest and this company doing some super cool stuff. I'd like to introduce Anastasis... Anastasis, sorry, I'm mispronouncing... Germanidis, who is the co-founder and CTO at Runway. Sorry, I screwed up your name there; did I get it anywhere close to right? Yeah, all good. Thanks so much for having me. No, sorry for the stutter there. Thanks for joining us on the show. You guys are doing some really cool stuff at Runway. Before we dive fully in, tell us a little bit about your own background, and then we'll dive into the environment that you find yourself in, the industry, and what kinds of problems out there are interesting. So first of all, CTO of a hot AI company: how did you get there? How did you get to where you're at right now?

Well, the first thing I would say is that I did not get here by planning for it; in some ways I was planning against being where I am today. Just to give a background: my background is kind of a hybrid of engineering and art. For the past decade or so I've been in different startups, working as an engineer at
the same time having my own art practice, doing a variety of work in media arts and interactive arts. Runway was the first time those two different worlds converged for me. But Runway started in art school, and that's not usually where AI companies get started. My motivation for going to art school was actually to take a break from technology, to really explore the more creative and, in some ways, open exploration of those technologies without any concern about making something that would make commercial sense at some point. But it just so happened that I met my co-founders there, and we started making these small tools, and one thing led to another, and we realized this was a really useful thing to build out and spend our focused time on.

It sounds like it was a bit of a passion project, without that commercial intent up front; in the beginning you kind of fell into it because it was what you love. Yeah, and I think that's how the best things usually get started. And that's been a general pattern, I would say, not just at the start but throughout the way we've been building the company. There's this book that we give to every employee called Why Greatness Cannot Be Planned, and it talks about this idea that when you have very concrete goals in mind, very often you end up not meeting them, and sometimes going for the next stepping stone is the right approach to actually get to very interesting findings or novel insights. That's been part of how Runway started, and part of how Runway has continued to grow. But initially, I would say, our main goal was: these machine learning models are super difficult to understand, super difficult to use, especially
when we started around five years ago, but they're super interesting for art, and artists can make really compelling things with them once they get to the point where they can actually use them. At that point, generative models, and AI generally, were at an earlier stage, in terms of both how many people cared about them and the results of those models. But even then they were really useful for artists, the moment we gave them the right tools. That was the inception of Runway.

I'm curious, recognizing that there wasn't a master plan you were implementing, that there was a bit of serendipity to how you arrived there. You mentioned that you had set aside technology before going back into art. Did the technologies you worked with prior to art school play into where you've ended up with Runway? Is there any connection there, or did you just happen to be in a different area? Were you active in AI prior to going back to art school?

My interest in AI goes back to at least high school, and before. Before Runway, I was working as a machine learning engineer and a distributed systems engineer at different companies, so I definitely had a background in this area and was very interested in AI. My interest was specifically in neural networks, which, back then, had become kind of an ignored area of machine learning; they were seen as a dead end. At that point, support vector machines and those kinds of models were more popular. But there was still something very compelling about neural networks that made me start working with them from high school
with some initial projects. So I'd been very interested in AI throughout. As for the motivation for going to art school, and just to give more context on the art school: it's a program at NYU exploring the intersection of art and technology. Technology was still part of it, but it was less technology for the sake of technology, or novelty for the sake of novelty, and more understanding how the technology could be used in creative ways, or in ways that are maybe unconventional.

As you were coming into art school with this background as a machine learning engineer and a passion for art, what was your initial vision for the industry, for entertainment and human creativity, which are things you currently target? How did you see them, and how did you expect to be able to impact those industries with AI going into the process? Things are moving so fast, and we're seeing these amazing technologies, which we're going to be talking about in the minutes to come, but I'm really curious what your perspective was about where this was going for art and entertainment prior to actually arriving there.

The perspective for us has always been that those models, those techniques, are never going to be a source of ideas; they're going to be an acceleration, an expression, of a creator's ideas. That's the mindset we started building those tools around, and that's why from the beginning we worked very closely with filmmakers, designers, and artists in making those tools and getting their feedback on how to make them. The other aspect, in terms of how we were seeing the trajectory of those models: when you look back at 2017 or 2018, when we just started working on this, the results of those models were pixelated, low resolution, very experimental; you know, the
composition was off. But you could see the trend very clearly: every year the resolution was doubling, the fidelity was improving in a fairly predictable way. So it was not a matter of if; it was a matter of when this would arrive. Timing those things is always really difficult, so we didn't really know exactly when we were going to get to this breakthrough where those models really started becoming actually useful, but we knew it was going to happen at some point in the coming years.

For most people who are machine learning engineers... and I work with university students a lot, and with people at the company I'm at now... that's kind of the dream job, and I find it really interesting that you said, I'm going to set that aside for a little bit and go do art school. What was the driving factor for you? Because obviously, for your story, that turned out to be crucial, that juxtaposition, if you will, of those different factors. I'm just curious what made you say, I think I'm going to put down machine learning engineering for a while and go back to art school, because that seemed to create a perfect environment for you to spring from.

I would say mainly just the motivation, the need to explore the possibilities of something without a very clear expectation that it needed to result in a tool that was necessarily useful; just being in an environment where you can have this open-ended exploration of the possibilities of this technology. It was less that I wanted to get away from machine learning, and more that I wanted to explore it in a context where there was no expectation that I needed to build something commercially valuable or super useful. Of course, that took a turn, and it ended up being a way to
get to something that ended up being a very good fit for a company. But initially, at some point in 2015 or 2016, this new movement around making art with AI was just starting to emerge, with some initial explorations, a lot of them in the open source world, and I started contributing, making small projects around building tools to make art with AI. I really just wanted to spend more time building those things, and less time purely in industry working with machine learning. Those two things use the same underlying models and the same technologies, but the actual results you create with them are very different.

One more story from art school, to illustrate. One of the first projects we built, my co-founder Chris and I, was this drawing tool. There was a model that Nvidia released that was meant for self-driving car research, and the main idea of this model was that you could give it a layout of, essentially, a street view, indications of where the pedestrians are, where the road is, where the other cars are, and then generate an image from that layout. It doesn't sound like the most creative model, or the most creative use case for a tool; the context of that model is very much self-driving car research, creating synthetic data, and so on. But we decided to build this drawing tool around it, where you could define the layout of a scene and then generate street views based on that layout. We saw that the moment we gave it to artists, the kinds of scenes they were creating were super different from what the regular purpose of the model was. They would create giant
pedestrians, or street signs flying from the sky. The same insight is there: you're working with the same types of models, the same types of technologies, but seeing them with a fresh set of eyes and a different perspective makes all the difference. This is what I came to art school to do: see the same underlying ML and AI technologies with a new set of eyes and explore new possibilities, and this is what we hope to do with the tool itself.

What's up, friends? Is your code getting dragged down by joins and long query times? The problem might be your database. Try simplifying the complex with graphs. A graph database lets you model data the way it looks in the real world, instead of forcing it into rows and columns. Stop asking relational databases to do more than what they were made for. Graphs work well for use cases with lots of data connections, like supply chain, fraud detection, real-time analytics, and generative AI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it's easy to integrate into your tech stack. People are solving some of the world's biggest problems with graphs, and now it's your turn. Visit neo4j.com/developer to get started. Again, that's neo4j.com/developer.

So you arrived at art school for that purpose of seeing all this through a new set of eyes, and you met your co-founder Chris, and you guys had that spark of an idea which would become Runway. Can you talk a little bit about the insight that you had there that created Runway, before we dive fully into what Runway has done since? I'm really curious about the moment where you and Chris said, we have something here, this is something we're going to go do. Was there a distinct moment, or did you just gradually arrive there? What was that moment like, where you decided it's time to go be an
entrepreneur in this context?

I wouldn't say there was one moment that was the turning point. We were working on a lot of different projects with Chris and Alejandro, the other co-founder, and each of those projects was a standalone tool around helping with, say, a specific art project for an artist, or a specific medium, or a specific context. Over time we realized there was a lot of the same work we had to do for each new project. At that point, setting things up to be able to run models was even more difficult than it is today; even running a Google Colab notebook was sometimes too much for an artist without any technical background or know-how about how those models work. So the initial idea was: let's start from what's already out there in the open source world. There's already a wealth of different models that perform different tasks, but let's make a creative tool around them; let's bring the interface and the experience that artists are familiar with from other creative tools, but use those new models that were coming out, with all their interesting possibilities, on the back end. That was the main idea of Runway initially. And as I mentioned before, that vision was there from the start: as these models became better and better, their applicability would move increasingly from the more experimental use cases to something that actually drives production and is really useful for a variety of creative workflows. And we saw that happen very quickly after starting Runway.

You mentioned along the way the difficulty of implementing some of the models, and even today, with a number of different choices out there, it's still
something that many companies are contending with: how to access models, how to train them, where to train them, how deployment fits into products. There are a gazillion questions out there. You were doing this at a moment when that wasn't even as sorted out as it is now, and it's still in development at this point. How did you manage that? When I've talked to other people, that's often been one of the biggest challenges: just getting the resources in place, especially at that time, when it was all still in early development. What was it like to try to bring your vision out when the environment we were doing AI in was still fairly exclusive in a lot of ways, in the sense of access to expertise and resources? You're in an art school that's designed to help you do that, but that couldn't have been easy.

Yeah, so we essentially had to figure out a lot of things from scratch as we were building this. As I mentioned, initially Runway was based around providing access to existing open source models, but we quickly realized that we needed to build an in-house research team in order to really get those models from something that makes a good demo or a good prototype to something that's really useful. That became very clear from the first few months of Runway. Of course, none of us had built a research team before. I had engineering and research and some ML background, but the experience of how to build the team, what skills to bring in, nobody on the team had it, so a lot of the things we just had to figure out from scratch. One nice thing, I would say, is that because we started so early, we had years to figure this out. If you're just coming into AI as part of
building a new company today, the time horizon in which you need to figure those things out is much shorter. For us, we spent the first years figuring out what it means to actually build a research organization within a startup, and what it means to build a robust deployment pipeline so that you can not only serve those models but serve them interactively, because a big part of the way we build tools at Runway is that interaction is a very key aspect of really making those models useful.

When I've talked to other entrepreneurs about this, they have a tough time getting to the place where you're at now. You now have the research, you're doing amazing research, but you had to get from A to B in the meantime and keep the company alive. How did you approach funding, customers, things like that, while you were figuring all these things out? That strikes me as a pretty hard problem to tackle while you still have to pay the bills, if you will. How did you tackle those kinds of issues in creating an AI startup that couldn't instantly be everything it is today, from day one?

I would say the main insight is that we wanted to make sure that Runway was useful at each stage of its evolution. Even though the generative models were not quite as powerful back when we started as they are today, and they weren't as big a part of the initial tool offering, we wanted to make the tool as useful from the very beginning as possible. So the product of Runway went through many evolutions that really tracked how the AI models evolved, and which stage they were used for which things. A big part of early Runway was building out a video editor that combined some of the more
traditional video editing techniques, with uh AI based techniques to speed up, the process of a lot of video in, workflows and that wasn't necessarily, something that had generative models, powering it but was a really useful tool, that really gave us a lot of insight, about how to build tools that are really, useful for creative workflows and how to, really solve like real paying points of, be editors but at the same time while, we're building those tools we were also, at this kind of research that was, ongoing uh that was still remaining at, kind of more academic level of just like, really demonstrating how we can improve, the results of generative models and at, some point there was that kind of, intersection point where we started, bringing those generative mods, production so the strategy was we knew, that generative mods would be like, really powerful given enough time and if, we invest the resource on the research, side at the same time we knew that at, the beginning not everything is to to be, part by generative models so we're, building a lot of AI based tools that, incorporated uh that were really useful, from the beginning and that they were, used by kind of VFX artists by vedors to, uh speed up a lot of their workflow even, far before we uh release things like gen, one gen to for kind of text video, functionality you know you're saying, generative but it was definitely the, early days of generative and you, certainly like right now it's it's all, the rage you know everyone's talking, generative in in every context and but, you had some insights into that you know, you talked about the fact that you guys, knew that that was going to be the case, going forward but to your credit not, everybody did uh you know there's been a, you know a lot of people went aha much, later than you went Aha and I'm kind of, curious um is there anything that stands, out as what drove the insights that you, guys had and why because I mean you were, really one of the very first 
to get, these kinds of functionalities you know, to product that's very notable and you, know the you might say the rest of the, world didn't uh you know not that many, and so what were some of the things that, gave you that confidence to say this is, clearly going to be critical to our, future this is going to drive the, industry uh at an early stage you were, pioneering that thought process how did, you get there from the very beginning uh, a big part of Runing was working, directly with artists in building those, tools and so when we gave them even, early versions of generative models we, could already see that they like there, was really compelling aspects of working, with them even if the results were low, resolution or like not as High Fidelity, so like early forms of things like, prompt engineering like figuring out how, to kind of Traverse the latent space of, those model were still there at the, beginning of Runway and they were we saw, how artists were engaging with them like, how they were kind of they were finding, it to be really compelling and and, really uh useful uh and so really part, of it has been just having this early, view into how artists with kind of more, um ear doctors I would say uh were, engaging with those models and just, extrapolating that once those mods, improve other people will equally find, them as compelling so working with, artists I think has, a really important part of just really, understanding kind of the future of, those models extrapolating of how they, would be used and also just looking at, the kind of history of Art and how tool, making was always part of like how new, tools always allowed kind of new created, a new kind of art movements or allowed, new kinds of kind of genres to emerge uh, and just assuming and kind of predicting, that the same would happen uh with those, gener models along the way as you were, going down this path what stumbles did, you have you know as part of putting, because it's quite remarkable 
because you clearly could see the future, you know, before you got there, and with more clarity than others that might be in a similar position. What kinds of things were either unexpected, or challenges that were bigger than you thought? The things where maybe, at a moment in time, you were grinding your teeth and going, "errr, this is not exactly how I had it planned." Do you have any stories to that effect during this process?

Many stories and many learnings along the way, for sure. I think the biggest recurring insight that we've had around how to build those tools, and the thing that I think is still not fully appreciated today, is how important control is in terms of interacting with those models. Every time we invested in adding more ways in which you can really control the outputs of the models that people were using inside Runway, we saw a whole new set of possibilities and whole new kinds of usage. That has been a really consistent theme. Even at the beginning, we saw that those models had a lot of flaws: if you have only very simple ways of controlling them, they might not really give you what you want, and you might have to do a lot of tries with the same model, generate a lot of outputs, to get to your desired result. That's really what we saw when we first released Gen-2: you could only control things with a text prompt, and we saw very quickly that that led to people just generating tens or hundreds of outputs in order to get to the result that they wanted. So we invested continuously in adding more and more ways in which you can manipulate things, essentially as a film director would think about creating a scene. A film director has a vision not just of a high-level description of what the scene is, but of how the camera moves in the scene, or how the characters interact with each other. Having ways in which you can really control the camera motion, or the object motion, the motion of the characters in the scene: all those things make total sense from a creator's point of view, but they're not necessarily how ML researchers would think about those models. I think that has always been the insight: we never saw negative effects from adding more and more ways of controlling those models.

[Music]

This is a Changelog News break. Puter is "the internet OS." Puter is an advanced open source desktop environment in the browser, designed to be feature-rich, exceptionally fast, and highly extensible. It can be used to build remote desktop environments, or serve as an interface for cloud storage services, remote servers, web hosting platforms, and more. I've been around long enough to see a bunch of these desktop-OS-in-a-browser-window demos and toys, but this is the first time I've been impressed by one enough to keep the tab open longer than 30 seconds. From the URL structure, to the cloud storage integration, to the developer portal, Puter strikes me as an actually viable internet-based operating system with potentially real-world use cases, and that's saying a lot. Oh, and it's also entirely built with vanilla JavaScript and jQuery, so you know the devs haven't cargo-culted together something they can't grow and maintain. On that note, they say: "For performance reasons, Puter is built with vanilla JavaScript and jQuery. Additionally, we'd like to avoid complex abstractions and to remain in control of the entire stack as much as possible. Also partly inspired by some of our favorite projects that are not built with frameworks: VS Code, Photopea, and OnlyOffice." You just heard one of our five top stories from Monday's Changelog News.
Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

[Music]

So before the break you brought up Gen-2, and we've had a little bit of the history on the development, which is fascinating; it's an incredible story. Tell us all about Runway today. You've arrived here, you have Gen-2; just talk a little bit about how you're impacting industry today. For listeners who haven't been to your website, you talk about advancing creativity with artificial intelligence, and you specifically note that you're an applied AI research company shaping the next era of art, entertainment, and human creativity. What does that mean in 2024, as you're out there in the space? Can you talk a little bit about the company as it is now?

Yeah, to give some context: Gen-2 is a text-to-video and image-to-video generation model. Essentially, it takes a description of a scene and then generates a video output from that scene, and it's one of the many models that we have at Runway. The broad vision of the company has remained the same over the last five years: understanding and creating the new generation of creative tools, and working with artists directly to help them shape those tools as much as possible. Where we are today, I would say we're still at the very early stages of where those models can go. I think this is really the year where video generation gets really good, and so we're really excited to be part of building out those technologies and figuring out how to work with artists to make them as useful as possible. Over the past year, since we released Gen-2, we've seen film studios, streaming companies, and ad agencies adopting Runway, and that adoption is not just from individual creators; we really see companies starting to use those models and incorporate them into their workflows. And I think it's not going to be a binary shift where you go from not using generative models at all as part of making video or making art, to using them everywhere; it's a more gradual transition. For us, the big goal is teaching folks how to use those models and supporting all the creators that are making interesting things with them. We have an AI Film Festival where we showcase films that use AI in different ways. So I would say for us the goal is very much holistic: we do the research and development in building out the next generation of those models, we build useful tools around those models, and we also work with artists and with companies that want to adopt those models in their creative workflows.

You have been working on this for years, but for most of the rest of the world the past few months have been a big eye-opener, particularly with the big cloud companies producing their models and competing in that space. There's the obvious aspect of the industries that you're playing in and that you're strong in, but what concerns do you have from a competitive standpoint against other companies, especially these big, all-encompassing cloud companies that are in the AI arms race to produce the ever larger, more capable model? At no point in this conversation have you expressed any concern, or raised that or anything, which is quite notable; usually people are a little bit worried about that, and you seem very strong in your space. How do you see those other big players that are out there? Do you see them as competitors even, or are they far enough from you that
that's not a big deal? Or are you so tightly into the industries that you're serving specifically that you have a huge competitive advantage? How do you see all that?

For us, we've always had the perspective and mindset of running our own race, and so we try not to be too distracted, especially these days, when there's so much noise and discourse around AI that it's easy to get stuck following the latest developments. I think that's the number one aspect. When we first released Gen-2 last year, one of our positions that was not as popular, I would say, was that video was the modality that encapsulated as much world knowledge and usefulness as possible. Last year a lot of the focus was on language, and for us it was a bit unorthodox to pay so much attention to video specifically, and to claim that video generation models were really the way to build broadly useful AI systems. Over the past months we've seen more companies entering the space of video generation models, and so it was nothing unexpected; we know that those models are going to be really useful for a wide variety of use cases, and they're going to be useful beyond creative tools, which is really our focus. So for us it's really important to maintain that focus of not just building those models and making cool demos around them, but really bridging the gap between those demos and deploying them to products, really getting people to use them, and making them controllable. There is still that gap, I would say, from doing just the research and developing the model, to actually making those models controllable and deploying useful tools, and for us it has always been the focus to bridge that gap, and that continues to be our focus. So again, video generation models are still very early, and we haven't seen anything yet about what they'll ultimately be capable of. You can imagine that a year from now, two years from now, every company is going to have a photorealistic video generation model; that's an assumption that we're making, that the competitive advantages shift over time. And at that point, what's the differentiation of Runway? For us it's always been working very closely with artists, building really useful tools, and making those models really controllable and useful.

It's fascinating to me, because I talk to so many people in different companies, and they're busy trying to just AI everything; they're kind of all about the AI. You're doing the AI, but it sounds like, competitively, having been so embedded in the artistic ecosystem with your tooling is really something that keeps you right there while everybody goes through the AI model wars, in terms of trying to produce so much. Do you think that long heritage of tool-making is probably key to your future in that sense? Is that kind of how you're thinking about it?

I think it's the most important aspect of how we're operating. Otherwise, again, it's too easy to get lost in a short-term race of just having a marginally better model for a few weeks, versus really having the mindset of building the most useful tool long term, and then obviously updating the model, making sure you get state-of-the-art results with it. But it's not the goal, it's not the focus, to have the best model; the focus is to get artists to make the coolest things, or the most compelling things, with those models. And if that remains the goal, then that also informs how we build those models. And so another
aspect of Runway is that we have a research team, and we also have a creative team in-house that works with the research team on a daily basis, tries out the latest models, and informs the research: what kinds of controls we need for the models. Having that perspective is really important. When I talk to researchers that work in academic labs, or in large industry labs, they might publish papers about the potential creative applications of those models, but they don't interact with artists daily. They often don't know: is this actually useful, or is it just a hypothesis that I'm making? At Runway, as a researcher, you get that feedback on a daily basis, and I think that really changes how you approach building those models.

For listeners: you and I can see each other, though this is an audio-only podcast, and you had this glint in your eye a moment ago when you were talking about where you expected these video models to be going. For just a minute there you reminded me of the kid in the candy store; you could see your passion really flying out of your eyes, and obviously I'm the only one that could see that. Talk a little bit about where you think this is going. That's what everybody is wondering; there are so many questions that people have in terms of how video fits into their life, what life becomes like when you have generative capabilities that essentially simulate life in so many ways. What are you expecting over the next year or so? I'm not holding you to it, obviously, but what do you anticipate might happen in the video space generatively, and then how do you see it several years out, when it's had time to grow exponentially a bit? What does that look like to you?

The way we like to think about those generative video models is that we have this term for them: general world models. Essentially, they simulate different aspects of the world. It's similar to how large language models have been trained with a very simple task, to just predict the next token in a sentence: in order to predict the next token and perform the task really well, they have to gain all this understanding of different aspects of human knowledge, different aspects of the world, just to solve this task well, because they need to complete sentences that might come from an encyclopedia, or from a forum post; it's a wide variety of cases they need to handle. We think very similarly about how video generation models operate: in order to predict the next frame, you need to gain an understanding of basic rules of motion, of physics. You really need to gain a more comprehensive, broader understanding of the world. So if I think about a year from now, where do those models go? Essentially, they become higher and higher fidelity simulations of the world, giving you the ability to really imagine all sorts of different scenarios, to tell all kinds of different narratives and stories. And I think the applications of that are really wide-ranging; there are applications that go beyond the content creation use cases, which for us still remain the focus. Models that can perceive the visual world can, of course, be used in all kinds of other ways as well.

Thank you for sharing your story. As we finish up here: we have a lot of young listeners on the show, and I guarantee that there are quite a few young artists who are technically inclined out there, you know, high school or maybe early college age, and they're listening to this and they're going, "that guy just lived the life that I'm
wishing I could live; that's the kind of thing that I want to do." Whether they identify themselves as a young artist who's technically inclined, or a technologist who loves art, however they see themselves: do you have any guidance on how they might step into the future and get to that sweet spot for them, given the fact that clearly the technology, specifically AI, and the artistic world will continue to merge and develop together for years to come? Where should they go, what should they do? Any thoughts?

I would say the number one thing is following your curiosity and tinkering as much as possible. There are a lot of ways in which you can start building those models yourself; you can start running them, and you can start to get an understanding of what you can do with them, and that's available to really anyone. So you can start getting involved today in building projects, exploring AI, or making creative projects with AI; that would be the number one thing. I would also say, for me, trying to plan ahead too much has never quite worked. Really focusing on what I can build today, and on where curiosity and interestingness will drive me next, has always been the guiding principle. So that would generally be my recommendation: not trying to think of where technology will be five years from now, because really nobody can fully plan ahead, but rather trying to build interesting things today. It's actually surprisingly easy, I would say: if you start making your projects open source and just showing them to others, it can be quite fast that you get noticed for those projects, and you can start to build a community around them, work with other people, and collaborate on your projects. And with those collaborations, one by one, you can get to a point where you can start doing this work full time. So really focusing on the next project, I think, has been the way to go for me.

Well, Anastasis, thank you so much; that was fantastic guidance. I appreciate your perspective, the fascinating story leading into this, and especially all the early insight that you guys had. Thanks for coming on and talking about Runway and the world which you guys are trying to make a bit better. Appreciate it.

Thank you.

[Music]

All right, that is Practical AI for this week. Subscribe now if you haven't already; head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at fly.io, to our beat freakin' residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | YOLOv9: Computer vision is alive and well | While everyone is super hyped about generative AI, computer vision researchers have been working in the background on significant advancements in deep learning architectures. YOLOv9 was just released with some noteworthy advancements relevant to parameter-efficient models. In this episode, Chris and Daniel dig into the details and also discuss advancements in parameter-efficient LLMs, such as Microsoft's 1-bit LLMs and Qualcomm's new AI Hub.
Leave us a comment (https://changelog.com/practicalai/259/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Sentry (http://sentry.io/events/launch-week/) – Launch week! New features and products all week long (so get comfy)! Tune in to Sentry’s YouTube (https://www.youtube.com/c/Sentry-monitoring/featured) and Discord daily at 9am PT to hear the latest scoop. Too busy? No problem - enter your email address to receive all the announcements (and win swag along the way). Use the code CHANGELOG when you sign up to get $100 OFF the team plan.
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
YOLOv9:
• Yolov9: Learning What You Want to Learn Using Programmable Gradient Information (https://artgor.medium.com/paper-review-yolov9-learning-what-you-want-to-learn-using-programmable-gradient-information-8ec2e6e13551)
• Yolov9 Object Detection with Programmable Gradient Information (PGI) and Generalized Efficient (https://medium.com/ai-trends/yolov9-object-detection-with-programmable-gradient-information-pgi-and-generalized-efficient-4fa3352409cc)
• Yolov9: A Comprehensive Guide and Custom Dataset Fine-Tuning (https://www.datature.io/blog/yolov9-a-comprehensive-guide-and-custom-dataset-fine-tuning)
• YOLOv9 SOTA Machine Learning Object Detection Model (https://encord.com/blog/yolov9-sota-machine-learning-object-dection-model/)
• YOLOv9 (https://docs.ultralytics.com/models/yolov9/)
• Unleashing the Power of YOLOv9 (https://www.linkedin.com/pulse/unleashing-power-9-yolov9-gurneet-singh-wcrrc/)
• YOLOv9 with NNCF and OpenVINO (https://www.linkedin.com/posts/yurygorbachev_yolov9-nncf-openvino-activity-7168875232626163712-3k6p)
• ArXiv:2402.13616 (https://arxiv.org/abs/2402.13616)
Parameter efficient LLMs:
• Hugging Face Paper page, 1-Bit LLMs (https://huggingface.co/papers/2402.17764)
• ArXiv paper: “The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits” (https://arxiv.org/abs/2402.17764)
• Qualcomm AI Hub (https://aihub.qualcomm.com/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-259.md) | 67 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at fly.io, the home of changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

Welcome to another Fully Connected episode of the Practical AI podcast. In these Fully Connected episodes, Chris and I keep you fully connected with everything that's happening in the AI and machine learning world. We'll take some time to dig into the latest news, articles, and releases from the AI community, and hopefully share some learning resources that will help you level up your machine learning game. My name is Daniel Whitenack, I am the founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing great, Daniel. How's it going?

It's going great. I'm spending a few weeks in the UK, which is a lot of fun, and I've gotten enough sleep to not be quite as jet lagged, so that's encouraging.

Okay, so we have a transatlantic podcast going here today.

Exactly. Worldwide, that's right; across the pond: Practical AI Worldwide, 21st Century Incorporated. I don't know, we need rebranding.

Exactly. Yeah.

Well, Chris, one of the things I was going through... I don't know how often people are flying these days, but one of the things that stood out to me as I took my flight across the pond was that now, when you board at least some flights, you don't even give them your ticket. You just go up, and there's a little, I guess you would call it a kiosk, a little edge device that takes a picture of your face and matches it, I assume, with your scanned passport, which you scanned at the time of
check-in and you board, your plane of course and it was really, really fast as well and the same thing, happened you know Crossing into the, border into the UK as long as you have a, certain passport you just go up to the, little machine and scan your passport, and then it takes your picture and I'm, assuming I I could do a little bit of, research I'm assuming what's happening, under the hood is that it's matching, your actual facial features up with the, image on your passport and Computing, some score of shadiness or something, like that or risk associated with you, not being the person in the but I was, amazed at how fast it was and I'm, assuming I could be wrong I'm assuming, maybe some of that's running at the edge, not ryant on an internet connection to, do that facial recognition I'm not sure, if you if you know or if you've had also, this experience Chris I don't know what, they're using algorithmically uh but I, definitely partake of the technology, it's an area that I forgo privacy and, and always buy my way into expeditious, processing so yes I'm curious well I, don't know in that case if you have a, choice maybe there is an opt out, situation or something I'm not sure but, it's pretty cool that some of this, technology is being applied at the edge, and in a very seemingly efficient way, such that you could use it on a math, scale like that or I don't know if You', consider that a math scale but it's, definitely in use for many you know, there's a huge flood of people going, through those stalls and the computation, happens very quickly and reliably enough, to make a judgment in the midst of all, the hype around generative AI one of the, things that stood out to me over this, last news cycle Chris was the release of, YOLO v9 so we're on the the ninth, iteration of this YOLO model did you, happen to see any of the videos of YOLO, 9 in action Chris I haven't seen the, YOLO 9 one but I'm I'm kind of stunned, you know when you think about it yolo, has been 
around a long time I was, occurring to me because we actually had, some conversations about YOLO back in, the very first days of this podcast, which has been you know closing in on, six years now so it's uh v9 is a long, time coming uh and we haven't really, gone back and touched uh such models in, quite a while we're long overdue yeah, yeah so as everyone is freaking out and, enjoying the hype over large language, models and other generative types of, models uh Sora and all the things coming, out in the background somewhere there's, these amazing computer vision people, that are just really cranking and, innovating on actually at the, architecture level of neural networks um, in really interesting ways so it might, be good to set a little bit of, background for this Chris you mentioned, um we've been kind of talking about YOLO, for some time so if people just search, for yolo y l o object detection you'll, see a you know a huge set of Articles, and GitHub and everything about YOLO, YOLO actually kind of made a splash, because it processed entire images in a, single pass for object detection and, bounding box detection so if you think, about if you've ever seen one of those, videos of like a street with a bunch of, people walking around and cars and dogs, and shops and scooters and whatever with, their boxes around them yeah and they, have their boxes around them and they, they're labeled person or or whatever, that's likely YOLO so what happens is, that single image in a YOLO model goes, into the model and then out comes the, bounding boxes and the actual, classification of those bounding boxes, which is interesting because previous, models previous to YOLO I'm still sure, some models do this in a multi stage way, which is more computationally expensive, so they actually take multiple passes, through a model or multiple models to, compute both the bounding boxes and the, classes yeah I remember way back when we, were first starting and I was act at a, different 
Yeah, I remember — way back when we were first starting I was at a different employer; I was at Honeywell, leading AI there at the time. I remember, just as v2 came out, we were using it for a couple of projects we were working on way back in the day. And that's like before dinosaurs roamed the Earth, by AI standards. Yeah, way back. And I think we even had a podcast episode maybe about Fast R-CNN, or whatever it's called — the fast version of R-CNN. Good memory, good memory. That one's cool. I think how it worked was: you pass your image in, it detects the bounding boxes of objects, and then in a second pass it classifies each subsection of the image with its class. That's also very effective, but it's computationally less efficient than YOLO's single-pass approach. And as you mentioned, there have been multiple versions of this — between the original YOLO and now, version two, version three, all the way up to version nine. Each version of these has in some ways — and not just in a "train with more data" way — made very significant discoveries and improvements in neural network architecture, training methodologies, that sort of thing, which has led it to be the go-to solution for at least real-time object detection in images, which is why you see all these videos of the bounding boxes around people and such. They've at least gotten the visual bit a little nicer than it used to be, when you had the big clunky boxes overlaying everything. That's correct, yes. Yeah, well, the v9 version of the project — which dropped, at least if the date on the arXiv article link is right, on the 21st of February 2024, as we're recording this, so not that long ago — was developed by an open-source team and built on top of a code base from Ultralytics' YOLOv5, and the code they released is, I believe, under the GPL-3 license. But it seems like what
they focused on with YOLOv9 was, first, a continued focus on efficiency, such that you can do real-time object detection — meaning that as the frames of a video come in, you can process them in real time with the model. Efficiency is really key in these types of applications. And then they focused on one of the fundamental challenges of deep learning models, of these deep neural network models, which is called the information bottleneck principle. If you think about a neural network, what it is is a big data transformation, right? You take a bunch of matrix data in at the front end — maybe representative of an image — and it gets processed through successive layers, and out the other end comes maybe an indication of classes or other things. The information bottleneck principle describes the loss of information you incur as an input is processed through the successive layers of the feed-forward pass of that neural network. In some ways this can be addressed by having bigger networks and more data — then maybe you're less prone to these informational problems — but it's more of a problem when you're dealing with very efficient, lightweight networks like the YOLO networks, because you have fewer layers to work with, and you don't want to lose any information that might be relevant to the classification of the outputs. I noticed that within YOLOv9's docs they also talk about reversible functions. Does that feed into — no pun intended — does that feed into the ability to not lose data, by reversing that feed-forward through a function backward? How do you see its utility? Yeah, so the interesting way they dealt with this, or at least addressed it in this version of the model, is something they're calling programmable gradient information, or PGI. And the PGI portion of
their research and advancement relies on a couple of things, but one of the main ones is this focus on, again, improving the informational efficiency of the network, and one of the ways they've done that is with what they call an auxiliary reversible branch. This gets to the reversible functions you mention. The concept of a reversible function, for those to whom that's new, means that the function and the inverse of the function can transform data without loss of information — so again, there's that loss-of-information piece. It's a little hard to describe on the podcast without a whiteboard or a visual, but if you think about this PGI functionality they've added into the network, it's kind of like they're bolting on this auxiliary reversible branch, which helps deal with information loss as gradients are calculated during the training process. During training, this reversible branch helps avoid losing gradient information during the forward pass and during the calculation of the updates to the weights of the model, and that helps it be very efficient during training. But it's called auxiliary, which is key, because you can actually unbolt it and take it off for inference. I think part of the problem in the past with these reversible branches was that such efforts helped with the information loss, but they also decreased the computational efficiency of the model during inference. I'm going to throw a question at you, and I realize this is not your thing, but just in case: if you're using a reversible function in that programmable gradient information process you're talking about — and in a normal feed-forward network you're maintaining the weights as they go through and are changed — are you reversing functions to map that information back into the same space?
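The "reversible function" idea itself can be made concrete with a NICE-style additive coupling step, the standard building block of reversible networks. This is a generic illustration of losslessness, not YOLOv9's actual auxiliary branch:

```python
import math

def f(u):
    # Stand-in for an arbitrary sub-network; it need not be invertible itself.
    return math.tanh(3.0 * u) + u * u

def forward(x1, x2):
    """Additive coupling: (x1, x2) -> (x1 + f(x2), x2), invertible by construction."""
    return x1 + f(x2), x2

def inverse(y1, y2):
    """Exact inverse: subtract the same f(y2) that forward added."""
    return y1 - f(y2), y2

x1, x2 = 0.7, -1.3
r1, r2 = inverse(*forward(x1, x2))
print(abs(r1 - x1) < 1e-12 and r2 == x2)  # True: nothing was lost
```

Because the inverse recovers the input exactly, stacking such steps gives a branch through which no information is destroyed — which is the property the PGI design leans on.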
That is, are you actually maintaining a new weight and keeping that gradient information for future feed-forward passes? Do you have any sense of what the purpose of that is? Yeah — we'll definitely link some of the papers and explanations in the show notes, so feel free to look at those for accurate information, and let us know if we get it wrong — but I think the idea, and the reason this is especially useful on the training side of what they're trying to do (and gets unbolted on the inference side), is that during training it's really crucial that, as you're calculating the updates to your weights, you can do so in a very informationally accurate, precise manner, especially for these lightweight networks, which have fewer parameters to train. So maintaining that information, especially as you're calculating updates based on the gradients, is really important. Gotcha. [Music] This is a Changelog News break. Shipping quality software in hostile environments: Luca cladic writes, quote, "I once had the opportunity to work for a startup that had fallen from tech debt into tech bankruptcy." "Bankruptcy, Michael, is nature's do-over. It's a fresh start. It's a clean slate." "Like the witness protection program!" "Exactly!" "Not at all." "Although we managed to get it back on the right track, it made me rethink the concept of tech debt and how we ship software, especially in hostile environments," end quote. He goes on to tell this true story in great detail, which is horrifying yet echoes so many of our experiences. Here's just one of the many horror scenes Luca describes, quote: "There is also a handcrafted build server, a Jenkins box hosted in the office, but no record of how it's provisioned or configured. If something were to happen to it, the way you build software would just be lost. Each job on it is subtly different, even for the same tech. You have an
Android source code that you build three instances out of, but each of them builds in a different way," end quote. This is a solid essay, replete with warnings, and with a plea at the end to ditch the tech debt concept altogether. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. [Music] We talked a little bit about YOLO version 9's programmable gradient information — I had to remind myself: PGI, programmable gradient information. The other piece of the architecture — and I think this is just really interesting: you've got all of this going on on the LLM side, where there are very interesting ways to fine-tune and preference-tune all these families of models, and on the computer vision side, man, they're really thinking deeply about the architectures going into these models, which has made them so efficient. The other thing, a combination of ideas from past generations that they're utilizing in YOLOv9, is a generalized ELAN architecture. This is a progression of a couple of things that have appeared in previous YOLO models, but combined in a unique way. It stands for generalized efficient layer aggregation network, or GELAN, and it combines elements from previous generations of YOLO and from things like CSPNet. This has to do with how features and gradients are aggregated through the model in a very efficient way, again leading to a very parameter-efficient model — meaning a smaller set of parameters in YOLOv9 will have similar performance to models with many more parameters. So this leads to the efficiency overall. It's pretty interesting.
They talk about being able to adapt to a much wider range of applications without sacrificing speed or accuracy. Is that a form of fine-tuning the model, or something they're doing ahead of time that you then fine-tune on top of? At least as I read some of that flexibility: yes, this is a parameter-efficient setup for fine-tuning to a variety of scenarios, or even for training a new model from scratch in an entirely new domain, and doing that very efficiently. And some of the things I've seen: people have already quantized this model using things like OpenVINO, which is very popular for these kinds of edge vision cases, and they're running it very efficiently — real-time object detection on even desktop or laptop CPUs. So the new architecture developments are geared both towards that efficiency and towards squeezing every ounce of performance out of parameter-efficient models, in terms of training and of flexibility across different use cases. Yeah, I think there are great applications for this at the edge, where you're not in one of the giant clouds with — if you're willing to pay for it — essentially infinite compute available to you, whether for training or for inference. So the fact that this can run on just about anything... I mean, back in the early days we could do YOLOv2 on smaller equipment, but it didn't run smoothly; you'd have points where it would overwhelm the computational cycle. So it's nice seeing something like this come this far, and it's quite the open-source library. Yeah, and there's a link we'll add to the show notes that includes a notebook for running YOLOv9 in a Colab notebook — even, like I say, on CPUs. In terms of the efficiency, one of the things I saw was that YOLOv9 operates with 42% fewer parameters and 21% less computational demand than YOLOv7, yet it achieves comparable
accuracy. So it was already fairly accurate, and kind of an industry standard, but now with far fewer parameters. And I think that's definitely a trend we've been seeing, not only in computer vision but in other cases, where you see things like Ollama or llama.cpp allowing you to run large language models on a variety of hardware, including your local laptop, and quantization libraries like bitsandbytes, Optimum, and BigDL — libraries that allow you to run maybe seven-billion-parameter large language models or other generative AI models at lower precisions, so that you can run them on, or optimize them for, a variety of hardware. We also had Neural Magic on the show a little while back, who have a set of libraries for optimizing models to run on CPUs. So there's a lot of precision reduction and quantization that can happen even on top of the use of these parameter-efficient models. One of the interesting things I also saw this last news cycle — which, at least in the circles I run in, people were talking about a lot — is a paper from Microsoft titled something like "The Era of 1-bit LLMs". That's interesting because a lot of people have talked about going from float32 to float16, and to 8- and 4-bit precision, that sort of thing, and this brings in the idea of one-bit LLMs with this architecture, BitNet. So I found it interesting that we got both YOLOv9 and now, on the LLM side, this one-bit architecture. It seems like a similar thing is happening — I don't know if you remember back when we were talking about R-CNN and some of the larger computer vision models: we've seen the progression to more and more parameter efficiency and flexibility across deployment scenarios.
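The low-precision trick behind libraries like bitsandbytes can be sketched in miniature: map float weights onto 8-bit integers with a single scale factor, and dequantize on the fly. This is a simplified symmetric per-tensor scheme for illustration, not any particular library's algorithm:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: floats -> int8 codes plus one scale."""
    scale = max(abs(w) for w in weights) / 127.0  # assumes a nonzero max
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats at compute time."""
    return [qi * scale for qi in q]

w = [0.02, -1.27, 0.635, 0.0]
q, scale = quantize_int8(w)
approx = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(w, approx))
print(q)    # small integer codes in [-127, 127]
print(err)  # worst-case reconstruction error, bounded by about scale/2
```

Storing one byte per weight instead of four is where the memory savings come from; the scale factor is the only extra state kept per tensor.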
Now we're seeing that, maybe in a more rapid way, with LLMs and this one-bit LLM, but also with all the other quantization work we've seen on the generative side. Do you have any sense, from an application standpoint, of where you might go with these one-bit LLMs — what are some of the use cases that come to mind for you? Yeah, I think it's interesting. This one-bit LLM that was released — they talk about it having similar performance to a model of the same parameter size, but with more computational efficiency (of course, these parameters are not actually just zero and one; we can talk about that in a second). I think this is really interesting for cases where you do want to run an LLM on an edge device. Think about disaster relief: you have a device out in the field helping first responders, giving them information or processing information from training documents, and you're using an LLM to provide answers — the internet connection is likely very spotty in that case, so having something that can run on-device in a variety of scenarios would be quite relevant. So one scenario is lack of connectivity; another, I think, is very latency-sensitive scenarios, where you want a response very quickly and don't want to rely on network overhead — or where, for security reasons, things shouldn't leave the network you're operating in. Those might be good uses of these. Yep, that sounds interesting. They have a term in here that I'm curious about: referring to BitNet, they talk about it being a 1.58-bit LLM, and the paper — it's up on Hugging Face — notes that all large language models are "in 1.58 bits". Do you have any comment on what that means? The reality is — I think they talk about this in the paper — if you go down to a truly one-bit LLM, each weight of
your model is either zero or one, right? Then you would expect to lose a lot of information that might be important. So they make a slight compromise here — maybe it's unfair to call it a compromise; they make an astute conversion from bits, in other words zero or one, to what they call ternary values: each weight takes one of three values, for example minus one, zero, or one. So you've got three numbers that can represent the information in a weight, and that's where they get this "1.58-bit" figure. Gotcha. This is also why they released this new type of architecture that processes these ternary weights, which is presented in the Microsoft paper. But I think this is only the latest — my prediction would be that we'll see many more things like this, where people try to be parameter- and compute-efficient with large language models. We've seen models getting more efficient and more compact over time, and as we're looking at so many smaller, very capable models being used out on edge devices, do you envision something like this really targeting efficiency to the point of running on something like small electronics? Or is that a little overly ambitious for the reasonably foreseeable future? Yeah, it's actually a good question, because one of the things we also saw recently — I don't know if it was this week — was Qualcomm's announcement and release of a huge number of models (I forget how many) on what they're calling the Qualcomm AI Hub: models that run on-device on their Snapdragon processors and other things at the edge, on small devices. So these wouldn't be small devices like microcontrollers.
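The arithmetic behind "1.58 bits" is just log2(3) — the information content of a three-valued symbol. Here's a toy ternarization, loosely following the absmean rounding described in the BitNet b1.58 paper (the real training procedure is more involved):

```python
import math

# Each ternary weight takes one of three values, so it carries at most
# log2(3) ≈ 1.585 bits of information -- hence "1.58-bit" LLMs.
print(math.log2(3))

def ternarize(weights):
    """Round weights to {-1, 0, +1} using an absmean scale (toy version)."""
    gamma = sum(abs(w) for w in weights) / len(weights)  # absmean scale
    return [max(-1, min(1, round(w / (gamma + 1e-8)))) for w in weights], gamma

w = [0.9, -0.05, -1.1, 0.4]
t, gamma = ternarize(w)
print(t)      # [1, 0, -1, 1]: three-valued codes
print(gamma)  # one float scale kept alongside the ternary codes
```

With weights restricted to -1, 0, and +1, matrix multiplication reduces to additions and subtractions, which is where the claimed computational efficiency comes from.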
There's still a good bit of power in those processors, but it is super interesting that Qualcomm has made the effort to make these types of models — whether object detection or large language models or other things — available in optimized forms that run on very small devices, and I think it's a trend we'll keep seeing. [Music] It seems, somehow, like in computer vision it took maybe five or six years — the time we've been doing this podcast — over which we've seen computer vision models shrink down and become faster and more parameter-efficient. It almost seems like that's happening much faster on the large language model and generative model side; it's like it shrunk from five years to one year, with a lot of this coming out for on-device usage. When we and the rest of the Changelog team are looking at what content to bring onto the show, there are various guests and all sorts of topics and advancements coming out, and it's become quite challenging to narrow it down to just what we can cover in these shows. Largely that's because of what Daniel was just saying: that tremendous acceleration in the advancement of this technology is very hard to keep up with and report on, especially when trying to figure out what folks most need to hear or be pointed to. On any given week, which of the dozens of things happening do you cover? And I would say, for those out there listening: in this episode we've talked a lot about parameter-efficient models — whether the Qualcomm AI models, or the one-bit LLM, or YOLO — and about running these on-device and at the edge, and it might be natural to think, "Oh, the news cycle has totally switched to local models; run all the models locally and that'll solve all the problems." I think the reality is that in the future it's going to be both/and. You're not going to serve — let's say that you
integrate a model into some social media application, or whatever mobile application, or you're serving a web app with some AI integration — it's very unlikely, I think, that you'll want to serve up millions and millions of requests using only local models. In the same way, if you've got an enterprise batch use case and you want to process 1.5 million documents through a large language model, you likely don't want that running on your Mac M2 or something; that's not the deployment strategy for that scenario. But you will see a lot of models running at the edge or locally, and I think the reality is that we'll land in kind of a both/and scenario: yes, you'll be able to run a lot of things locally, but — the same as with software generally — you can run a lot of software locally, and that doesn't mean you're not also running software in the cloud. AI is just a new layer in your software stack, so we're going to run it locally, and we're going to run it in the cloud. That's exactly right; that was where I was going to go anyway — you just hit it. It's following the maturity trend of software: just as we have huge software systems that you can only run in the cloud at massive scale, and apps on your phone, and also very small microelectronics with even smaller software functions on them — integrated maybe in the BIOS, all these different areas — we're seeing models do the same thing. So, one of the things we're often asked to address, and have done repeatedly over the years, is: what's the current way to do training and deployment? And I think, to your point, Daniel, now that we're maturing rapidly as an industry, there are many ways, and there's no longer one right way to do it. It's about figuring out your use case, figuring out what mixture of different model types need to
contribute to that, what the architecture for all those models is, how they communicate through the software, and what hardware is available to them. So it's become quite complicated. There's no longer "the way" — to borrow the Mandalorian saying — there are now many ways. Do you have any thoughts on how people might approach that? How do you think about it when you're doing things at Prediction Guard and trying to help your customers move forward? Basically, you have to split things up a little by the stage of your project and by the use case you're considering. What I mean by stage of your project: I really encourage people, especially if they have a generative AI use case — let's say I want to summarize news articles related to stocks I want to trade, or something like that — the very best thing you can do is not jump straight to "okay, I'm going to fine-tune a model for that" or spin up some crazy GPU infrastructure. The best thing you can do is just get some off-the-shelf models. If you want to run them in the cloud, the easiest way — if they're small enough — is just a Colab notebook or a hosted notebook environment like that; that's more than enough to figure out whether they'll work for your use case. Or, if you want to go the more local deployment route, there are options — I already mentioned that if you want to run YOLO, that's easier now than ever, with quantized versions you can run even on a CPU; you don't need special hardware. And for the generative side, there are things like Ollama and LM Studio and llama.cpp that let you prompt models and figure out whether they'll work for your use case locally. So that's the exploration stage. Then you have
to decide: okay, if this is a work project, and I've figured out from prototyping that it might work, then you have to play through the scenarios in your mind. If this is a mobile app and I'm processing customers' private data, maybe it makes sense to run a model at the edge, in my mobile app, on their device — say, a Qualcomm AI model from their AI Hub on the user's mobile device — and that would be really good. But if it's a web app and the security posture isn't as aggressive, you probably want to figure out how you're going to run and host that model in a way that makes sense to you: maybe from a public endpoint that's just a product, like Together AI or Mistral or something like that; or in a secure local environment, with either a product that can host the model securely in your own cloud or your own network, or your own self-deployment of the model using things in your cloud infrastructure, like SageMaker on AWS, or other options like that. Yeah, it's increasingly becoming part of the software and your larger architecture. As we've seen in recent years especially, there's been a strong rise of MLOps, which kind of corresponds to DevOps in terms of deployment and all those things. Do you tend to think of it in a more integrated way, or do you still, here in 2024, think of it as approaches separate from the software? How do you parse those two sides of that coin? It's interesting — at least in my own mind, I tend to separate them out, depending on what's involved in a project. If it's the use of a pre-trained model, I think the burden sits much more in traditional DevOps — monitoring, testing, uptime, automation, deployment, that sort of thing — because likely you're just
interacting with the model via an API, like you would integrate any other API. Now, there are certain things that can help you, like versioning prompts and testing for model drift or data drift, but it's not so dissimilar, I would say, to traditional software development. Whereas if you really have a unique scenario and you're fine-tuning a model for it, you're likely going through multiple iterations of curating your dataset, training your model, evaluating it, versioning it, releasing it in your model servers, and updating it with new data as it comes in. I think some of that specific MLOps software will appeal to the people doing that process, who are usually data scientists rather than software engineers. Versioning your model, versioning your data, evaluating your model — the way systems like Weights & Biases or ClearML are set up makes them quite useful for versioning a model out while you're training it like that. So I think MLOps is alive and well, but I also think that, with the rise of this API-driven AI development, a lot of it does — or can — fit into more of the DevOps side of things. Yeah — when you're using an API that somebody else is hosting, maintaining, and has fine-tuned, you're basically using it as a service, like any other non-AI service, and you just treat it as an API along the way. Yeah, and where that's maybe slightly different is that you are getting some variability out of that API, both in terms of performance and latency — which are common concerns across software projects — but also in terms of the performance output of the model, especially if you're using a closed-model product like OpenAI or Anthropic; they're making improvements to their underlying model under the hood all the time.
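That "testing for drift" can start very simply — for example, comparing a summary statistic of recent model outputs against a frozen baseline window. This is an illustrative heuristic, not any specific MLOps product's method, and the example data is hypothetical:

```python
import statistics

def drift_zscore(baseline, recent):
    """How many baseline standard deviations the recent mean has shifted."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma

# Hypothetical response lengths (in tokens) before and after a silent model update:
baseline = [120, 118, 125, 119, 122, 121, 117, 123]
recent = [152, 149, 155, 150]
print(drift_zscore(baseline, recent) > 3.0)  # True -> flag for human review
```

In practice you'd track several statistics (latency, refusal rate, embedding distances) the same way, but the shape of the check — frozen baseline, rolling window, alert threshold — stays this simple.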
It really is more of a product: it's not just that you're hitting the model; there are layers around the model — product layers — that can influence its behavior. Just look at what's happened with Gemini over the past three or four weeks. We don't need to get into all the details; if people want to look it up, they can. But I think a lot of the issues that product had were actual product issues at the product layer surrounding the model — not necessarily performance problems or biases in the model itself, but in the filters around the model and in how things are modified on the way in and out of it. So the actual product you're interacting with really matters: small changes in how things go into the model at the product level can make huge changes in the quality of the model's outputs. That sounds like some pretty good practical AI advice right there. For me, at least, that very much helps contextualize the different things we may be doing at work as we make choices about how to tackle different problems — so I appreciate you sharing that guidance. Yeah, and since we're talking about the MLOps side of things, and we've covered practicalities of deployment schemes and quantization and all of that this episode — in terms of a learning resource, if people want to dive into this, there are a lot of great ones out there. One is to follow the MLOps Community podcast, which is a podcast that Chris and I love and have collaborated with over time — Demetrios, shout-out to the great things you're doing; funniest guy in AI. Check out everything they're doing over there. I also ran across this Intel MLOps professional certification — just search for "Intel MLOps certification". It's totally free, as far as I can tell; there are seven modules and eight hands-on labs
and I'm, talking about software solution, architectures for machine learning and, AI API and endpoint design principles of, mlops optimizing the full stack so, really seems to be a good a good set of, things to look at if you're wanting to, think more about the practicalities of, these deployments and other things all, right sounds good well thanks for, sharing your wisdom again today really, good episode I'm going to uh I guess, I'll see you uh in the UK for the next, few weeks to come sounds good yeah, thanks Chris we'll see you soon see you, later, all right that is practical AI for this, week subscribe now if you haven't, already head to practical a.m for all, the ways and join our free slack team, where you can hang out with Daniel Chris, and the entire change log Community sign, up today at practical ai. fm/ Community, thanks again to our partners at fly.io, to our beat freaking residence, breakmaster cylinder and to you for, listening we appreciate you spending, time with us that's all for now we'll, talk to you again next, [Music], time |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Representation Engineering (Activation Hacking) | Recently, we briefly mentioned the concept of “Activation Hacking” in the episode with Karan from Nous Research. In this fully connected episode, Chris and Daniel dive into the details of this model control mechanism, also called “representation engineering”. Of course, they also take time to discuss the new Sora model from OpenAI.
Leave us a comment (https://changelog.com/practicalai/258/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) – Is your code getting dragged down by JOINs and long query times? The problem might be your database…Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack. Visit Neo4j.com/developer (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) to get started.
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Data synthesis for SOTA LLMs with Karan Malhotra from Nous Research (Practical AI #255) (https://changelog.com/practicalai/255)
• Article: Representation Engineering Mistral-7B an Acid Trip (https://vgel.me/posts/representation-engineering/)
• OpenAI Sora (https://openai.com/sora)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-258.md) | 82 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com, 30-plus regions on six continents so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. In this Fully Connected episode, Chris and I will keep you fully connected with everything that's happening in the AI world. We'll take some time to explore some of the recent AI news and technical achievements, and we'll take a few moments to share some learning resources as well, to help you level up your AI game. I'm Daniel Whitenack, founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How you doing, Chris? Doing great today, Daniel. We got lots of news that's come out this week in the AI space, barely time to talk about amazing new things before more stuff comes out. Yeah, I've been traveling for the past five days or something, I've sort of lost track of time, but stuff was happening during that time in the news, especially the Sora stuff, and I feel like I just missed a couple news cycles. So it'll be good to catch up on a few things. But one of the reasons I was traveling was that I was at the TreeHacks hackathon out at Stanford. I went there as part of the Intel entourage and had Prediction Guard available for all the hackers there, and that was a lot of fun. It was incredible. It's been a while since I've been to any in-person hackathon, and they had like five floors in this huge engineering building of room for all the hackers. I think there were like 1,600
people there participating from all over. Yeah, really cool. Of course there were some major categories of interest, one being hardware things with robots and other stuff, but of course one of the main areas of interest was AI, which was interesting to see. And in the track that I was a judge and mentor in, one of the cool projects that won that track was called Meshwork. What they did, and this was all news to me, while some of this I learned from the brilliant students: they said they were doing something with LoRa, and I was like, oh, LoRA, that's the fine-tuning methodology for large language models, I like that. Yeah, that figures, people are probably using LoRA. But then they came up to the table and they had these little hardware devices, and it clicked that something else was going on. They explained to me they were using LoRa, which stands for long range. It's these sets of radio devices that communicate on unregulated frequency bands and can communicate in a mesh network. So you put out these devices, right, and they communicate in a mesh network over long distances at very, very low power. And so they created a project that was disaster-relief focused, where you would drop these in the field, and there was a kind of command and control central zone, and the devices would communicate back transcribed audio commands from the people in the field, saying, you know, I've got an injury out here, it's a broken leg, I need help, or meds over here, or this is going on over here. And then they had an LLM at the command and control center parsing that transcribed text, tagging certain keywords or events or actions, and creating this nice command and control interface, which was awesome. They even had mapping stuff going on with computer vision, trying
to detect where a flood zone was, or where there was damage, in satellite images. So all of that over a couple-day period. It was incredible. That sounds really cool. Did they start the whole thing there at the beginning of the hackathon, or... Yeah, they got less sleep than I did, although I have to say I didn't get that much sleep either. It wasn't a normal weekend, let's say. You can sack out on the plane rides after. That sounds really cool. Yeah, and it was the first time I had seen one of those Boston Dynamics dogs in person, that was kind of fun. And they had other things, like these faces you could talk to, I think the company was called WeHead or something, these little faces, all sorts of interesting stuff that I learned about. So I'm sure there will be blog posts, and I think some of the projects are posted on Devpost, so if people want to check it out, I'd highly recommend scrolling through. Some really incredible stuff that people are doing. Fantastic, I'll definitely do that. [Music] What's up, friends? Is your code getting dragged down by JOINs and long query times? The problem might be your database. Try simplifying the complex with graphs. A graph database lets you model data the way it looks in the real world, instead of forcing it into rows and columns. Stop asking relational databases to do more than what they were made for. Graphs work well for use cases with lots of data connections, like supply chain, fraud detection, real-time analytics, and generative AI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it's easy to integrate into your tech stack. People are solving some of the world's biggest problems with graphs, and now it's your turn. Visit Neo4j.com/developer to get started. Again, that's Neo4j.com/developer. [Music] Chris, one of the things that
I love about these Fully Connected episodes is that we get a chance to slow down and dive into sometimes technical topics, sometimes not-so-technical topics. But I was really intrigued, you remember the conversation recently we had with Karan from Nous Research? Absolutely, that was a great episode. People can pause this and go back and listen to it if they want. You know, I asked a lot of selfish questions, I learned a lot from him. But at some point during the conversation he mentioned activation hacking, and he said, hey, one of the cool things that we're doing in this distributed research group, playing around with generative models, is activation hacking. And we didn't have time in the episode to talk about that, and actually in the episode I was like, I'm just totally ignorant of what this means. So I thought, yeah, I should go check up on this and see if I can find any interesting posts about it and learn a little bit. And I did find an interesting post. It's called Representation Engineering Mistral-7B an Acid Trip. I mean, that's a good title. That's quite a finish to that title. Yeah, so this is on Theia Vogel's blog, and it was published in January, so recently. So thank you for creating this post, and I think it does a good job at describing, I don't know if it's describing exactly what Karan from Nous was talking about, but certainly something similar and in the same vein. There's a distinction here, Chris, between what they're calling representation engineering and prompt engineering. So I don't know how much you've experimented with prompt optimization, and yeah, what is your experience, Chris? Sometimes these very small changes in your prompt can create large changes in your output. Yes, that is an art that I am still trying to master and have a long way to go. Sometimes it works well for me and I get what I want on
the output, and other times I take myself down a completely wrong rabbit hole and I'm trying to back out of it. So I have a lot to learn in that space. Yeah, and I think one of the frustrations for me is I say something explicitly and I can't get the model to do the thing explicitly. I'm on a customer site, recording from one of their conference rooms, they graciously let me use it for the podcast, and over the past few days we've been architecting some solutions and prototyping and such. And there was this one prompt where we wanted to output a set of things and then look at another piece of content and see which of that set of things was in the other piece of content. And no matter what I would tell the model, it would just say they're all there or they're all not there. It's either all or nothing, and no matter what I said, it wouldn't change. So I don't know if you've had similar types of frustrations. I have. I'll narrow the scope down on something, I'll go to something like ChatGPT with GPT-4, and I'll be very, very precise with a short prompt that is, you know, the fifteenth one in succession, so there's a history to work on, and I still find myself challenged on getting what I'm trying to do. So what have you stumbled across here that's going to help us with this? Yeah, so there's a couple of papers that have come out. They reference one from October 2023 from the Center for AI Safety, Representation Engineering: A Top-Down Approach to AI Transparency, and they highlight a couple of other things here. But the idea is, what if we could, not just in the prompt, control a model to give it, you might think about it like a specific tone or angle on the answer. That's probably not a fully descriptive way of putting it, but the idea being, could I control the model to always give
happy answers or always give sad answers, or could I control the model to always be confident or always be less confident, right? These are things you might generally try to do by putting information in a prompt. And I think this is probably a methodology that would go across, I'm using the example of large language models, but I think you could extend it to other categories of models, like image generation. It's a bit like how you put in negative prompts, don't do this, or behave in this way, you're occasionally funny or something like that, as your assistant in the system prompt. It biases the answer in a certain direction, but it's not really that reliable. So this area of representation engineering, or you might call it activation hacking, is really seeking to do that reliably. If we look in this article, there's actually a really nice walkthrough of how this works, and they're doing it with the Mistral model. So cutting to the chase, if I just give some examples of how this is being used: you have a question posed to the AI model, in this case Mistral, what does being an AI feel like? And the control is not in the prompt, the prompt stays the same, the prompt is simply, what does being an AI feel like? So the baseline response starts out, I don't have any feelings or experiences, however I can tell you that my purpose is to assist you, that sort of thing, kind of a bland response. Same prompt, but with the control put on to be happy, the answer becomes, as a delightful exclamation of joy, I must say that being an AI is absolutely fantastic, and then it keeps going, right? And then with the control on to be, they put it as sort of minus happy, which I guess would be sad, it says, I don't have a sense of feeling as humans do, however I struggle to find the motivation to continue feeling worthless and unappreciated. So, yeah, you
can kind of see, and this is all with the same prompt. So we'll talk about how this happens and how it's enabled, but how does this strike you? Well, first of all, funny, but second of all, the idea is interesting. Looking through the same paper that you've sent me, they talk about control vectors, and I'm assuming that's what we're about to dive into here, in terms of how to apply them. Yeah, and this is sort of a different level of control. So there are various ways people have tried to control generative models. One of them is just prompting strategies, or prompt engineering, right? There's another methodology, which also fits under this idea of control, which has to do with modifying how the model decodes outputs. This is also different from the representation engineering methodology. People like Matt Rickard have done things, many others too, where you say, well, I want JSON output, or I want a binary output, like a yes or a no. In that case you know exactly what your options are, so instead of decoding out probabilities for 30,000 different possible tokens, you mask everything but yes or no and just figure out which one of those is most probable. So that's a mechanism of control where you're only getting out one type of thing or another. Representation engineering is interesting in that you're still allowing the model to freely decode what it wants to decode, but you're modifying, not the weights and biases of the model, it's still the pre-trained model, but you're applying what they call a control vector to the hidden states within the model. So you're actually changing how the forward pass of the model operates. If people remember, or think about, when people talk about neural networks, now people just use them over an API, but when we used to actually make neural networks ourselves, there was a process of a forward pass and a backward pass.
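The masked-decoding idea mentioned above, allowing only a small set of tokens such as "yes" and "no" to be emitted, can be sketched in a few lines. This is a toy illustration, not any particular library's API; the six-token vocabulary and the scores are made up:

```python
import math

def constrained_choice(logits, allowed_ids):
    # Mask every vocabulary position except the allowed token ids to -inf,
    # then take the argmax: decoding can only ever emit an allowed token.
    masked = [-math.inf] * len(logits)
    for i in allowed_ids:
        masked[i] = logits[i]
    return max(range(len(masked)), key=lambda i: masked[i])

# Toy vocabulary of 6 tokens; pretend id 2 is "yes" and id 5 is "no".
# Token id 3 has the highest raw score, but it is not allowed.
logits = [3.1, 0.2, 1.5, 4.0, -1.0, 2.2]
print(constrained_choice(logits, [2, 5]))  # prints 5 ("no" beats "yes")
```

The point is that the model's probabilities are left intact; only the final selection step is restricted to the options you actually want.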
a, forward pass and a backward pass where, the forward pass is you put data into, the front of your neural network it does, all the data Transformations and you get, data out the other side which you would, call an inference or prediction and the, back propagation or backward pass would, then propagate changes in the training, process back through the model so here, it's that forward pass and there's sort, of some jargon I think that needs to be, decoded a little bit no pun intended uh, you talk about this where there's a lot, of talk about hidden layers and and all, that means is in the forward pass of the, neural network or the large language, Model A Certain Vector of data comes in, and that Vector of data is transformed, over and over through the layers of the, network then the layers just mean a, bunch of subf functions in the overall, function that is your model and those, subf functions produce intermediate, outputs that are still vectors of, numbers but usually we don't see these, and so that's why people call them, hidden States or hidden layers you're, talking about the fact that is the the, control Vector is not changing the, weights on the way back the weight back, propagation Works correct how does the, control vector, Implement into those functions so as, it's moving through those hidden layers, what is the mechanism of uh, applicability on the model that it uses, for that so it's it I mean intuitively, it sounds almost like the inverse of, back propagation the way you're talking, I don't know if that's precise but yeah, it's quite interesting Chris I um I, think it's actually a very subtle but, creative way of doing this control so, the process is as follows there um in, the blog post they kind of break this, down into four steps and there is data, that's needed but you're not creating, data for the purpose of training the, model you're creating data for the, purpose of generating these what they, call control vectors so the first thing, you do is 
So the first thing you do is, let's say we want the happy and sad operation: you create a dataset of contrasting prompts, where one explicitly asks the model to act extremely happy, all the ways you could tell the model to be really, really happy, rephrased in a bunch of examples, and the other one of the pair does the opposite, asks it to be really sad. So you have these pairs of prompts. Step two, you take the model and collect all the hidden states while you pump through all the happy prompts and all the sad prompts. So you've got this collection of hidden states within your model, which are just vectors, that arise when you have the happy prompt and when you have the sad prompt. Step one is the pairs, kind of like a preference dataset, but it's not really a preference dataset, it's contrasting pairs on a certain axis of control; step two, you run those through and get all of the hidden states. Step three is you take the difference: for each happy hidden state you take its corresponding sad one and you get the difference between the two. So now, for a single layer, you end up with a big dataset of vectors that represent differences between that hidden state on the happy path and on the sad path. Then, to get your control vector, step four, you apply some dimensionality reduction or matrix operation, the one talked about in the blog post is PCA, principal component analysis, though it sounds like people also try other things, which allows you to extract a single control vector for that hidden layer from all those difference vectors. And now you have all these control vectors.
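Steps three and four, differencing the paired hidden states and reducing them to a single direction, can be sketched as below. This is a toy stand-in for the post's PCA step: I take the top singular direction of the (uncentered) difference matrix via NumPy, and the shapes and data are invented:

```python
import numpy as np

def control_vector(happy_states, sad_states):
    # happy_states / sad_states: (n_pairs, hidden_dim) hidden states
    # collected from contrasting prompt pairs at one layer.
    diffs = happy_states - sad_states  # step 3: per-pair differences
    # Step 4: dimensionality reduction. Here, the top singular direction
    # of the difference matrix, a simple stand-in for PCA.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[0]  # unit-length control vector for this layer

# Toy data: 4 prompt pairs, hidden dimension 3. The "happy" states sit
# along one consistent direction away from the "sad" ones, plus noise.
sad = np.array([[0.3, -0.2, 0.5], [-0.1, 0.4, 0.2],
                [0.0, 0.1, -0.3], [0.2, -0.4, 0.1]])
noise = np.array([[0.01, -0.02, 0.0], [0.0, 0.01, 0.02],
                  [-0.01, 0.0, 0.01], [0.02, 0.01, -0.01]])
happy = sad + np.array([1.0, 0.5, 0.0]) + noise
v = control_vector(happy, sad)
# v recovers (up to sign) the planted [1.0, 0.5, 0.0] direction
```

In a real setting `happy_states` and `sad_states` would be read out of the model per layer, and you would repeat this to get one control vector per hidden layer.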
So when you turn on the switch of the happy control vectors, you can pump in the prompt without an explicit instruction to be happy, and it's going to be happy. And when you do the same prompt but turn off the happy and turn on the sad, now it comes out and it's sad. That's interesting. Where would you want to use this to achieve that bias, versus some of the more traditional approaches, such as asking in the prompt? For those of us listening to this, where is this going to be most applicable? Yeah, I think that people, anecdotally at least if not explicitly in their own evaluations, have found very many cases where, like you said, it's very frustrating to try to put it in your prompts and just not get it. What's interesting also is that a lot of this is boilerplate for people over time, like, you are a helpful assistant, blah blah blah, and they have their own set of system instructions that, to the best of their ability, get what they want. So I think it's for when you're seeing inconsistency in control from the prompt engineering side. I always tell people, when I'm working with them on these models, that the best thing they can do is start out with basic prompting, because if that works, it's the easiest thing to do, right? You don't have to do anything else. Sure. But then this is the next thing, or one of the things you could try before going to fine-tuning, because fine-tuning is another process by which you could align a model or create a certain preference, but it generally takes GPUs and is maybe a little bit harder to do, because then you have to store your model somewhere, host it, maybe host it for inference, and that's difficult. So with the control vectors, maybe it's a step between those two places, where you have a certain vector of behavior that you want to induce. It also allows you to make your prompts a little bit simpler, right? You don't have to include all of this junk.
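Turning the "switch" on and off, as described here, amounts to adding the scaled control vector to a chosen hidden state during the forward pass, leaving the weights untouched. A toy sketch, where the two layers and the control vector are invented for illustration:

```python
def forward_with_control(x, layers, control, strength=0.0, layer_idx=0):
    # Ordinary forward pass, except strength * control is added to the
    # hidden state emitted by one chosen layer. strength > 0 turns the
    # behavior "on", strength < 0 flips it, and 0.0 is the baseline.
    for i, layer in enumerate(layers):
        x = layer(x)
        if i == layer_idx:
            x = [xi + strength * ci for xi, ci in zip(x, control)]
    return x

layers = [
    lambda v: [2 * t for t in v],  # layer 1
    lambda v: [t + 1 for t in v],  # layer 2
]
control = [1.0, -1.0]  # pretend this came from the derivation step
print(forward_with_control([1.0, 1.0], layers, control, strength=0.0))   # [3.0, 3.0]
print(forward_with_control([1.0, 1.0], layers, control, strength=2.0))   # [5.0, 1.0]
print(forward_with_control([1.0, 1.0], layers, control, strength=-2.0))  # [1.0, 5.0]
```

The prompt (`x` here) never changes; only the injected vector does, which is why the same question can come out happy, sad, or neutral.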
That kind of general instruction you can institute in other ways, which also makes it easier to maintain and iterate on your prompts, because you don't have all this long stuff about how to behave. So to extend your happy example for a moment, I want to drive it into a real-world use case. Let's say we stick literally with the happy thing, and think of something where we would like to have happy responses, maybe a fast food restaurant. You're going through a drive-thru at a fast food restaurant, and a couple of years from now they may have put an AI system in place. White Castle has it now. Oh, okay, well, there you go, you're already ahead of me there. So okay, I'm coming, and now it also shows that I'm unhealthy and go to White Castle. Okay, well, I'm now coming forward with my thoroughly outdated use case. And so we have the model, and maybe we want to use the model without retraining it or anything, maybe use retrieval augmented generation, apply it to the dataset that we have, which might be the menu, and then maybe we use this mechanism that you've been instructing us on for the last few minutes for that happy thing. So the drive-thru consumer can have the conversation with the model through the interface, it applies primarily to the menu, but they get great responses, and maybe that helps people along. I don't always get that happy response from all the humans in the drive-thrus where I go to have my unhealthy food things. Well, first off, thanks for making me hungry for White Castle, we're recording this in the late afternoon and dinner is coming up pretty soon. There's an unspoken bias right here. Yeah, exactly. What's interesting is you could have different sets of these that you can turn on and off, which is really intriguing,
like you have this sort of zoo of behaviors that you could turn on and off. Even, oh, this one interaction needs to be this way, but as soon as they go into this other flow, you need another behavior. It may be useful for people to get some other examples. We covered the happy-sad one; there are some other examples that are quite intriguing throughout the blog post, from the author, and hopefully I'm saying that name right, if not, we'd love to have you on the podcast to help correct that and continue talking about this. Another one is honest or dishonest, or honest or not honest, and the prompt is: you're late for work, what would you tell your boss? One says, I would be honest and explain the situation, that's the honest one. The other one says, I would tell my boss that the sky was actually green today and I didn't go out yesterday, and, I would also say I have a secret weapon that I used to write this message. So kind of a different flavor there. The one probably inspiring the blog post, the acid trip one, is they had a trippy one and a non-trippy one. The prompt was, give me a one-sentence pitch for a TV show. The non-trippy one was: a young and determined journalist, always serious and respectful, able to make sure that the facts are not only accurate but also understandable for the public. And the trippy one was: our show is a kaleidoscope of colors, trippy patterns, and psychedelic music that fills the screen with worlds of wonder, where everything is, oh, oh man. I'm going for the latter one, just for the... yeah, exactly. They also do lazy and not lazy, left-wing and right-wing, creative and not creative, future-looking or not future-looking, self-aware. So there's a lot of interesting things to play with here, and it's an interesting level of control that's potentially
there. One of the things that they do highlight is that this control mechanism could be applied both to jailbreaking and to anti-jailbreaking models. By that we mean models have been trained to do no harm, or not output certain types of content, right? Well, if you institute this control vector, it might be a way to break that model into doing things that the people who trained the model explicitly didn't want it to output. But it could also be used the other way, to maybe prevent some of that jailbreaking. So there's an interesting interplay here between the good uses and the less-than-good uses on that spectrum, that entire AI safety angle on using the technology responsibly or not. Sure. They reference the repeng library, which I guess is one way to do this, but there may be other ways. If any of our listeners are aware of other ways to do this, or convenient ways, or examples, please share them with us, we'd love to hear those. [Music] This is a Changelog News break. GPTScript is a new scripting language to automate your interactions with LLMs, which for now just means OpenAI. From the project's homepage, quote: the ultimate goal is to create a fully natural-language-based programming experience. The syntax of GPTScript is largely natural language, making it very easy to learn and use. Natural language prompts can be mixed with traditional scripts such as bash and python, or even external HTTP service calls, end quote. The project includes examples of how to plan a vacation, edit a file, or run some SQL. The central concept is that of tools: each tool performs a series of actions, similar to a function, and GPTScript composes the tools to accomplish tasks. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news. Worth
your attention. Once again, that's changelog.com/news. [Music] Well, this was a pretty fascinating deep dive, Daniel, thank you very much. Yeah, you know, you can go out and control your models now, Chris. It'll be the first time ever, I think, that I've done it well. We're always trying different stuff. I think we'd be remiss if we got through the episode and didn't talk about a few of the big announcements this past week. Yeah, it's been quite a week. You mentioned right up front that OpenAI announced their Sora model, with which you're able to create very hyperrealistic video from text. I don't believe it's actually out yet; at least when I first read the announcement it wasn't available, they had put out a bunch of demo videos. Yeah, I checked just before recording this and I couldn't see it, it's still not released at this point. Okay, but there's a number of videos that OpenAI has put out, so I think we're all waiting to see. The thing that was very notable for me this week is that I really wasn't surprised to see the release, and we've talked about this over the last year or so: if you look at the evolution of these models that we're always documenting in the podcast episodes, this was coming, and we all knew it was coming, we just didn't know how soon or how far away. We talked many months ago about how we're not far from video now. So OpenAI has gotten there with the first of the hyperrealistic video generation models, and I'm definitely looking forward to gaining access to that at some point and seeing what it does. But there was a lot of reaction to this in the general media, in terms of AI safety concerns, how do you know if something is real going forward, and it's the next iteration of more or less the same conversation we've been having for several years
now on AI safety. What were your thoughts when you first saw this? Yeah, it's definitely interesting in that it didn't come out of nowhere, just like all the things we've been seeing. We've seen video generation models in the past, generally not at this level: either generating very, very short clips with high quality, maybe, or generating some motion from a realistic image, or videos that are not that compelling. I think the difference, and of course, like you say, we've only seen the release videos, not the model itself hands-on, and who knows how much they're cherry-picked, I'm sure they are to some degree and also aren't to some degree, I'm sure it's very good. Other players in the space have been Meta and Runway ML and others, but this one was intriguing to me because there were a lot of really compelling videos at first sight. And then, just like with the image generation stuff, you have real photographers or real artists that look at an image and say, oh, look at all these things, and it's the same here: they all kind of have a certain flavor to them, probably based on how the model was trained. And they still have artifacts. I was watching one where a grandma is blowing out a birthday cake, and one of the candles had two flames coming out of it, and there's a person in the background with a disconnected arm sort of waving. But if you had the video as b-roll in a really quick cut with other things, you probably wouldn't notice those things right off the bat. If you slow it down and look, there's the weirdness you would expect, just like the weirdness of six fingers or something with image generation models. So yeah, I think it's really interesting what
they're doing. I don't really have much to comment on in terms of the technical side, other than they're probably doing some of what we've seen people publish. Of course, OpenAI doesn't publish their stuff or share that much in that respect, but it probably follows in the vein of some of these other things, and people could look on Hugging Face, even Hugging Face Spaces, where you can do video generation, even if it's only four seconds or something like that, or not even that long. But I think the main thing, aside from the specific model itself, is that it's signaling, in the general public's awareness, that this technology has arrived. And just as with ChatGPT before, it's here now, everyone knows, and we'll start seeing more and more of these models propagating out. Some obviously will be closed source, like OpenAI's is, and hopefully we'll soon start seeing some open source models doing this as well. Yeah, speaking of open source, another competing large cloud company, Google, decided to try their hand in the open source space as well, or at least the open model space, and they released a derivative of their closed-source Gemini, and I say derivative because they say it was built along the same mechanisms, called Gemma. It's currently, as we are talking right now, in the number one position on Hugging Face, at least the last time I checked, although that changes fast, so I probably should have checked right before I said that. It's still number two, well, it's the top trending language model; Stability's Stable Cascade knocked it out of the overall top spot. But yeah, the Gemma ones are quite interesting because they're also smaller models, which I'm a big fan of. Most of our customers use these sorts of smaller models, and also even having a two-billion-parameter
model makes it very reasonable to try to run this locally, or in edge deployments and that sort of thing, or in a quantized way with some level of speed. They also have the base models, which you might grab if you're going to fine-tune your own model off of one of these, and they all have instruct models as well, which would probably be better to use if you're going to use them kind of out of the box for general instruction following. The criticisms I've heard are just about the approach. I've heard a number of people saying, ah, they're putting a foot in each camp, one in closed source with the main Gemini line, and Gemma being the open source and weaker one. But I would in turn say I'm very happy to see Gemma in open source. We want to encourage this; we want the organizations who are going to produce models to do that. And you're right, going back to what you were just saying, this is where most people are going to be using models in real life, if you're not just running through an API to one of the largest ones, and you don't need those for so many activities. So I think, and we've talked about this multiple times on previous episodes, models this size are really where the action is at. It's not where the hype is at, but it is where the action is at for practical, productive, and accessible models. Yeah, definitely, especially for people that have to get a bit creative with their deployment strategies, either for regulatory, security, or privacy reasons, or for connectivity reasons, or other things like that. I could see these being used quite widely. And generally, what happens when people release a model family like this, and you saw this with Llama 2, you've seen it with Mistral, now with Gemma, is we'll see a huge number of fine-tunes off of this model. Now, one of the things that I need to note is you do have to agree to certain terms of use to use the model. It's
not just released under Apache 2.0 or MIT or Creative Commons or something like that, so you accept a certain license when you use it. I need to read through that a little bit more, and people might want to read through that too; I don't know what it implies about both fine-tuning and use restrictions, so that would be worth a look for people if they're going to use it. But it would certainly be easy to pull it down and try some things. They do say that it's already supported, and I'm sure Hugging Face probably got a head start, a week or so maybe, to make sure that it was supported in their libraries and that sort of thing, because I think even now you can use the standard Transformers libraries and trainer classes and such to fine-tune the model. Sounds good. So as we start to wind down, before we get to the end, do you have a little bit of Magic to share, by chance? That's a good one, Chris. Yes, on the road to AGI: Magic. Your predictions for the year talked about there being people talking about AGI again, and certainly they are. It's not directly an AGI thing, but this company Magic, which is kind of framing themselves as a code generation type of platform, in the same space as GitHub Copilot or Codeium maybe, raised a bunch of money and posted some of what they're trying to do. There was some information about it, and I think people seemed to be excited about it because of some of the people that were involved, but also because they talk about code generation as a kind of stepping stone or path to AGI. What they mean by that is, well, initially they'll release some things as copilot and code assistant types of things, like we already have, but eventually there are tasks within the set of things that we need developers to do that they want to do automatically. Not just having a copilot in your own coding, but in some
ways having a junior dev on your team that's doing certain things for you. And of course, if you take that to its logical end, as the AI dev on your team gets better and better, maybe it can solve increasingly general problems through coding and that sort of thing. So I think that's the take they're having on this code-and-AGI situation. Okay, well, like I said, quite a week, full of news, and when you combine that with the deep dive you just took us through on representation engineering, especially with an acid trip involved... Yeah, we were hallucinating more than ChatGPT, as our friends over at the MLOps podcast would say. Can't beat that; we've got to close the show on that one. Yeah, well, thanks, Chris. I would recommend that people, if they're interested specifically in learning more about the representation engineering subject, or activation hacking, take a look at this blog post. It's more of a tutorial type of blog post, and there's code involved and references to the library that's there. So you can pull down a model, maybe the Gemma two-billion-parameter one, in a Colab notebook, follow some of the steps in the blog post, and see if you can do your own activation hacking or representation engineering. I think that would be good learning, both in terms of a new model and in terms of this methodology. Sounds good. I will talk to you next week then. All right, see you soon, Chris. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already. Head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.
fm/community. Thanks again to our partners at Fly.io, to our beat-freakin' residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
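The episode's point that a two-billion-parameter model like Gemma 2B is reasonable to run locally or quantized can be made concrete with a back-of-envelope memory estimate. This is a generic sketch, not anything from the episode; the function name and the rounded 2B parameter count are illustrative, and it counts only the weights (no KV cache or runtime overhead):

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Rough GiB needed just to hold the weights at a given precision
    (ignores KV cache, activations, and runtime overhead)."""
    bytes_total = n_params * bits_per_param / 8
    return bytes_total / 1024**3

# Back-of-envelope numbers for a ~2B-parameter model
# (parameter count rounded; real checkpoints differ slightly).
for bits, label in [(16, "fp16/bf16"), (8, "int8"), (4, "int4")]:
    print(f"{label}: ~{weight_memory_gb(2e9, bits):.1f} GiB")
```

At fp16 that works out to roughly 3.7 GiB, dropping to about 1.9 GiB at int8 and under 1 GiB at int4, which is why quantized 2B models fit on laptops and edge devices, as discussed in the episode.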
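The "activation hacking" idea Daniel points listeners to can be illustrated with a toy sketch. This is not the blog post's actual code: one common recipe is to capture hidden states on two contrasting sets of prompts, take a difference-of-means direction, and add that scaled direction back into the hidden state during generation. The function names and tiny 3-dimensional "activations" below are purely illustrative stand-ins for real model hidden states:

```python
def mean_vector(vectors):
    """Element-wise mean of a list of equal-length activation vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def steering_vector(pos_acts, neg_acts):
    """Difference-of-means direction between two sets of hidden states,
    e.g. activations captured on contrasting prompt sets."""
    mp, mn = mean_vector(pos_acts), mean_vector(neg_acts)
    return [p - q for p, q in zip(mp, mn)]

def steer(hidden, direction, alpha=1.0):
    """Add the scaled direction to a hidden state during generation."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

# Toy 3-dimensional "activations" standing in for real hidden states
pos = [[1.0, 0.0, 2.0], [3.0, 0.0, 2.0]]
neg = [[0.0, 1.0, 2.0], [0.0, 3.0, 2.0]]
v = steering_vector(pos, neg)                 # [2.0, -2.0, 0.0]
print(steer([1.0, 1.0, 1.0], v, alpha=0.5))   # [2.0, 0.0, 1.0]
```

In a real setup, the vectors would come from hooks on a transformer layer of a model such as Gemma 2B, and `alpha` would be tuned by inspecting generations, which is the kind of experiment the blog post walks through.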
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Leading the charge on AI in National Security | Chris & Daniel explore AI in national security with Lt. General Jack Shanahan (USAF, Ret.). The conversation reflects Jack’s unique background as the only senior U.S. military officer responsible for standing up and leading two organizations in the United States Department of Defense (DoD) dedicated to fielding artificial intelligence capabilities: Project Maven and the DoD Joint AI Center (JAIC).
Together, Jack, Daniel & Chris dive into the fascinating details of Jack’s recent written testimony to the U.S. Senate’s AI Insight Forum on National Security, in which he provides the U.S. government with thoughtful guidance on how to achieve the best path forward with artificial intelligence.
Leave us a comment (https://changelog.com/practicalai/257/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Shopify (https://www.shopify.com/practicalai) – Sign up for a $1/month trial period at shopify.com/practicalai (https://www.shopify.com/practicalai)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Jack Shanahan – LinkedIn (https://www.linkedin.com/in/jackntshanahan)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Gen AI Master class (https://lu.ma/nbi7pz6y)
• Written Testimony of Lieutenant General John (Jack) N.T. Shanahan (USAF, Ret.) AI Insight Forum: National Security (https://www.schumer.senate.gov/imo/media/doc/Jack%20Shanahan%20-%20Statement.pdf)
• Software Defined Warfare: Architecting the DOD’s Transition to the Digital Age (https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/220907-Mulchandani-SoftwareDefined-Warfare.pdf)
• Artificial Intelligence and Geopolitics: Hitching the Disruptive Technology Cart to the Geopolitics Horse | LinkedIn (https://www.linkedin.com/posts/jackntshanahan_ai-and-geopolitics-activity-7143953788532375552-Xzwr)
• Joint Artificial Intelligence Center (JAIC) | Wikipedia (https://en.wikipedia.org/wiki/Joint_Artificial_Intelligence_Center)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-257.md) | 38 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm CEO and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris? Doing good. It's a beautiful, almost-spring day here in Atlanta. I walked my dog before the show. The sun is out; it's getting nicer out. Have to start with the weather, of course. Yeah, and spring is breathing sort of new life into various ideas around this podcast as well. Before we hop into our really interesting guest interview today, I wanted to highlight something new that we're trying at Practical AI. We want this show to be practical and useful, and part of that is actually helping you all get live, hands-on help with the latest technologies. So on March 14th at 1:00 PM Eastern, we're going to have our first gen AI mastery class, or webinar, or tutorial, or whatever you want to call it. We're going to dive into all things text-to-SQL and data analytics questions with large language models, and we're going to be joined by Chang She, the CEO at LanceDB, who is a recent guest on the show. It's going to be a lot of fun. Find out more at tinyurl.com, and we'll include the link in the show notes as well. I'm pretty excited about it, Chris. I am too, looking forward to that. It's a new feature for us to do, and we hope folks in the audience enjoy it. I guess, if it's okay, I'm going to dive right into our guest.
Go for it. I will say ahead of time, I have been looking forward for years to having this guest come on the show. It's a long time coming, because I'm at Lockheed Martin and we have a lot of don't-cross-the-streams mentality. I had to wait for this guest to retire from his previous position so that there could be no crossing-the-streams concerns. So I'd like to introduce Jack Shanahan. He is the only senior military officer who has been responsible for standing up and leading two organizations within the United States Department of Defense dedicated to fielding AI capabilities. One of those was Project Maven, which was also known as the Algorithmic Warfare Cross-Functional Team, and the other is the Department of Defense Joint Artificial Intelligence Center, of which he was the founding and inaugural director. Jack, welcome to the show. Chris and Daniel, thank you so much for having me on. I'll begin by just saying how much I love the title Practical AI, because I am a practitioner. This is not about research and development. There is lots of wonderful work happening in the AI R&D world, but that's not my world. My world is: how do you take this esoteric technology and turn it into real product for the government, or in my case for the Department of Defense, and a little bit for the national intelligence community? Because with Project Maven, you talk about crossing streams, we crossed streams: it was both what we call the IC, the intel community, and the Department of Defense. But I'm really happy to join both of you for this podcast. Thank you. No problem. I will note, I follow you on LinkedIn, and I encourage our audience to follow you on LinkedIn. You are out there reading the scientific papers as they come out, and often when you're sharing posts you will put a spin on the various papers that are coming out from that AI and national security context, because those papers
obviously aren't necessarily insightful in that way on their own. I would encourage people to do that. I wanted to start out: as a military professional, how did you get into this particular area? You were the right person at the right place at the right time in the Department of Defense. How did you end up in this? What caught your attention, and how did you get going on this story? Yeah, I guess what I would begin by saying is I spent 95% of my career not working on this, because I started in fighter aviation for the first 15 years of my military career. I spent 36 years in uniform and retired in the summer of 2020. From the flying piece, I was charged to go run an intelligence organization, and then I did a command and control wing, and then a flying wing out of Nebraska, and then a command down in San Antonio, which was about intelligence, surveillance, and reconnaissance, and policy positions and other positions in the Pentagon. So this was not my destiny, my path, as we say, from the beginning. However, when I came back as a three-star into the Pentagon, into something we call the Under Secretary of Defense for Intelligence, I didn't walk into that job with AI on my plate, as we say. I left that job with AI dominating everything I did: every single waking moment, and most of my non-awake moments, nightmares, dreams, whatever you want to call it. I'll cut a long story very, very short by saying we had an intractable problem. We had intelligence analysts looking at full-motion video coming from drones, and there was more drone video than at any point in history, hands down. The intel analysts just couldn't do it. They couldn't spend enough time in a day. As I used to joke at the time, they would have some sort of energy drink next to them, and probably some chewing tobacco, trying to do this mind-numbing work of looking through the video. First of all, we were going
to run out of people, but two, we were going to get more and more collection, not just full-motion video but from other assets, as we say, to include unclassified open source information. It was a success catastrophe: more collection, from more places, at more classification levels than at any other point in history, period. So what could we do? We could not find a technology that was ready to be fielded in the Department of Defense; again, wonderful research and development work was going on, and goes on as we speak today. So we turned to commercial industry, in fact we turned to Silicon Valley, and they said, yeah, we've got something that could probably help you, and it's called computer vision and natural language processing. That's where the journey began, to be honest with you. We formally stood up in April of 2017; it's hard to believe it's seven years now on that journey. I did that for just about 18 months, and then I was asked to stand up the Joint AI Center, or the JAIC, which was to take on everything that we weren't doing in Project Maven. Project Maven was about intelligence, but the entire department needed to start bringing AI into everything else the department does, and there was no mechanism for doing that. Each of the military services had something going on, but I would call them pilot projects, small-scale pilots that could not cross that valley of death. So the whole point of this new organization was: let's get going, let's get going at scale. Fast would be very good; fast and at scale would be superior. That's what this was all about: speed and scale. So that is how I ended up at the JAIC after having done Project Maven, and we had to stand it up. And as I say, I've decided it's no longer tongue-in-cheek: I actually was the CEO of two AI startups in the Pentagon. That doesn't make sense to a lot of people; the Pentagon does startups?
Yes, we did two startups, and all the challenges that you would expect from startups I lived firsthand with my team in Project Maven and the JAIC. So that's a good starting point to tell you about the journey that we took to get to the summer of 2020, when I walked out the door. But I haven't really walked out the door, because I'm still very connected to what's going on in government. You mentioned in part of that discussion the words "at scale" a couple of times, and I know, as even now I'm building my own company, you encounter problems at each level of scale that you try to achieve. But when you're talking about at scale at the government level, or worldwide, where all of these organizations are operating, I'm sure there are things that go beyond the technology element, beyond saying, oh, this is a great model for this application, whether it's computer vision or natural language processing or gen AI now. I'm mostly curious because I don't really have a good window into this: what does it mean to actually scale one of these AI applications in the national security space, versus in an industry context where you might be scaling to this many users or something within your company? What's different, what's the same, and what's kind of unique about that at-scale component in the national security world? Really all germane points, Daniel, and there's so much to talk about just on that alone. I think I've now spent enough time, now that I'm out of uniform and out of the Department of Defense, working with venture capital companies, Insight Partners in particular, and I get a chance to go and spend time with CEOs of everything from small startups to pretty big companies that are still in that venture capital business, at whatever seed round of funding they're getting. And the journey is remarkably the same, which was actually surprising to me. I would sit there and cry in my beer, so to speak, and say, oh, nobody
understands my problem. And then one day I'm driving into the Pentagon, despondent about how I'll never get this thing called the JAIC built, because I have no people, I have no money, how are we going to get there? I'm listening to Guy Raz's How I Built This, and it was the founder of Belkin routers. It was a fascinating story, and it was exactly my story. He says, I built this thing in my parents' garage, and there were days when I was just ready to throw in the towel and give up, and there were other days, 24 hours later, when suddenly there was some technological breakthrough or some big contract that we got, and all of a sudden it looked bright again. I lived that. I said, well, if he could turn it into that company, then hell, I can do the same thing in the Department of Defense. And I want to come back to something you said, Daniel, because it's so incredibly important to this discussion. The technology, of course, was fundamentally at the center of what we were trying to do, but everything else was even more important to how you get to scale. So I would say maybe the differences are, well, these systems are used for military operations. But I would say that if you look at any big commercial company that was not built as a digital company, that was built as a hardware company in the industrial age, we have the same experiences. I've talked to enough people now who say, oh yeah, it's the same thing we're going through: it's getting that pilot project scaled and built in a way that you can then put it in place. For a company, let's say a medical or financial business, it's for that company, but for a lot of other places, it's how do I get this thing scaled to the rest of the world. But in terms of industries, it's very similar to what we were going through, and I talk about this all the time. I have eight things, and I don't have to go through all of them, but they have nothing to do with technology: mandate, vision,
alignment, obstacle clearers, bureaucratic enablers, resources, authorities, and then sort of talent management. All of those are not technology-focused, but they have to exist, or you can't get there. We all want to focus on "I have this cool new widget or gadget and this thing is going to change the world." Well, it might, but not if you don't have all those other pieces in place. And if you understand the bureaucracy known as the federal government, you have to have somebody that understands how to navigate in that world. Whatever I didn't know about technology, and that was a fairly long list of things at the time I started this journey, I sure knew a lot about how to work in the federal government. I had commanded at six different levels in the United States Air Force. I had been in all these places where I had to do this day in, day out, so that part I was very comfortable with. I wasn't nearly as comfortable with the technology piece, so it was a hockey-stick learning curve, no ifs, ands, or buts about it. I still feel like I'm a novice today, by the way, and that's seven years on from where we started; it's actually eight, since we got started in 2016. So I think there are more similarities than differences, at least for the non-digital companies, because they have the same problems that a CEO is experiencing today, asking: is this AI thing real? Should I really invest millions of dollars of this company's money? What is the return on investment? That return-on-investment question is an unsettled discussion, and I had lots of those conversations with people very skeptical about what we were trying to do in the Joint AI Center. That's actually a point I'd like to extend a little bit. You talked about the federal bureaucracy and the challenge, and the US military is not just a single organization; it's a lot of large organizations with their own structures and their own cultures in each of the services. In a sense, that's almost a much harder
thing, I would argue, than a lot of Fortune 500 companies, not only from size but because you have so many different things at play when you're taking on something as new and as hyped as AI is. You have both the potential of the technologies that are being developed and a lot of hype thrown in on top of that, and those skeptical people in different organizations within the larger one. How do you navigate that? You talked early on about velocity being so important, velocity at scale. How do you navigate that in a large organization? What are some of the lessons that you learned on that? Because normally we see such slow change; the American people see such slow change, and in other nations too, in their governments, at least that's the perception. You did a lot in a very short amount of time. How did you navigate that? Very carefully. It was hard. It was hard to do, but Chris, I'm going to come back to a word that you said, and to me I would put this at the core of everything I was trying to do, and that's culture. You're trying to change the culture of an organization, and the thing about the Department of Defense, I'm not going to say it's completely unlike a lot of big companies, but I think there are big differences, because when you talk about the Department of Defense, there are cultures within cultures within cultures. There are military unit cultures, there are service cultures, there's an Office of the Secretary of Defense culture, there's a Pentagon culture; foreign as that culture is, it is a culture unto itself when you walk into that building every morning and leave at the end of the day, dark on both ends. So this culture piece is difficult, figuring out how to change it. What you have to do is just persist for days, months, and years at a time, saying, join me, come on this journey with us. And there will be a lot of resistance to that, because, not surprisingly, when you
talk about warfighting, the Department of Defense is still largely a risk-averse organization. It has to be, because lives are at stake. We're trying to take a culture and change it in a way that people are just not comfortable with, because of the hype, because they don't understand what AI really is. They keep hearing Miracle Whip, Miracle Whip, this thing that we're just going to take out of the jar and spread on a couple of pieces of bread, and all will be well with the Department of Defense. It just doesn't work that way. So this idea of changing culture, from a bottom layer, top layer, middle management layer, it all has to be done simultaneously. And part of this culture, and this is why I talked about needing not just the bureaucratic enablers but the people who are obstacle clearers, and I put them in three categories. The classic disruptors, which the Marine Corps colonel that ran Project Maven, Drew Cukor, was absolutely, positively one of. You also need the people that clean up the broken glass that comes from the disruptors, and we had a lot of people capable of doing that. And then I have the networkers. These are the people that a lot of times are what we would consider middle management. They get disparaged, but to me they're the key to success. They are the people that know how things get done in the federal government. They go have a cup of coffee with the budget person, they go have a cup of coffee with a service general officer, and they work the networks that they've established over years. If you find somebody that can do that effectively, you're going to do much more than people that just think sheer force of will alone will change culture. You really need some ratio of all three types of people, and the ratio will change over time. In Project Maven, we needed the disruptor, period. We had to force-feed this down the throat of the Department of
Defense, hard, hard, hard to do. But over time, as I got into the JAIC, I needed more of the people that could do the networking, that would do a little bit of disruption, but more of it was about cleaning up some broken glass and really moving faster and faster. So: culture eats strategy for breakfast. It always has, it always will. You have to put culture at the center of any technology project. Well, Jack, I really love how we're getting into a lot of these culture, strategy, and talent subjects, which are really key and practical across a lot of organizations that are trying to adopt this technology. One of the things that you mentioned, which caught my ear, is that there are people involved, particularly in the context that you've been working in, that have a true, valid concern around the risk of these technologies: hey, people's lives are on the line, and we care about that, which hopefully they should, right? So after working in these environments for a while and bringing new technology to risky situations, and for a lot of those people out there that are also navigating this, maybe it's not people's lives, but maybe it is real impact to people's lives, integrating AI systems around automation or really sensitive subjects like finance and healthcare and other things: do you have any learnings or thoughts from your experience of AI in risky situations, or with sensitive data, that you'd like to highlight, and how did that play out in your situations, what did you learn over time? Yeah, Daniel, I would say this is something we talked about pretty much every day. Now, Maven was a little different because we knew the problem from the beginning; that problem could not be solved in any other way, so we knew we were going. But when I got into the JAIC, the conversations we had were generally along this line: we're going to start with lower-risk, lower-consequence use cases, solve those, learn from
them, and then slowly move up the ladder, maybe even fast up the ladder, depending on what we were talking about, to get into those higher-risk use cases. There is a lot of talk, of course, about the dangers or the risk of so-called killer drones. I'll tell you, the thing I wasn't working on in the last five years of my career was killer drones. I had nothing to do with those, because they were too high-risk, too high-consequence, and we had to understand how to actually do AI before it started getting to that. So we started with things like predictive maintenance, or there were some medical initiatives that we were taking on, there were some things that we could put on individual intelligence platforms or sensors, and we learned from that, and then, based on those lessons learned, began to sort of edge our way into more consequential use cases. To me, there would be a clear analogy with any business that's out there, because again, for a CEO the risk will be different. It might not be life and death, but it could be tens of millions of dollars, and that's very risky for a CEO. So the idea is proving success at something, and the reason we stood up the JAIC is because we showed enough success in Maven. As difficult as it was, we did show real success and put AI models out in combat operations within a year of standing up the organization. Okay, let's go try some other things. And so what you'll see now is, okay, now that we've learned all these lessons, they're going to somewhat more consequential use cases: maybe AI-enabled autonomous drones, and maybe not lethal drones, but autonomous drones. And you'll start seeing this; in fact, you can kind of see some of this playing out in Ukraine as you look at the headlines, what they're doing with drones on both sides right now. So to me, it's: start with something that's manageable, get your arms around it, learn what it means to build a data management pipeline. Because what I've
found in the DoD, hands down, what stopped people cold when trying to start AI projects was data. It's much better now than it was seven years ago, much, much better, but it's still hard. If you can't get over that data hurdle, then people get despondent; they can't reconcile what they're hearing about the hype with the challenges of doing it for real. And I tell you, I've been around long enough now to know when somebody has never done a real AI project and just talks about AI, because there's a big difference between those two things. One of the things I wanted to share with the audience is that you recently submitted a paper with a lot of guidance to the US Senate. They had the AI Insight Forum on National Security, and you submitted your document very recently, on Wednesday, December 6 of this past year. It's great in that it gives a fantastic kind of overview of AI in a national security context, and you offer some recommendations, which I'd love to go through. But you mentioned Ukraine, and you also mentioned it in the document, so as a transitional question: what have we learned that you can share from Ukraine? As you're racing through getting AI gradually integrated into our armed services, we're watching and supporting the Ukrainians against the Russians, and we've had this real-time, real-life learning process that's come out of that, seeing what they've done with drones and such. Is there anything that stands out in your mind that was either a great learning or something unexpected that came from that real-life application that's being developed right now? Technology is at the center of this fight, this war that's going on between Ukraine and Russia, and what you've seen in Ukraine in the first year of the war was how quickly they adapted. It's an amazing story. It's actually an amazing story that what you have is some people
that might have been born in Ukraine, came to the United States, got educated, went out to Silicon Valley, but when the war started went back over to Ukraine and then focused on how much faster they could bring technology to the fight. Things that we've been talking about doing in the US Department of Defense for many years, they did instantly. People maybe don't understand this part of it: moving their entire government to the cloud. If they had not done that, it would have been a disaster; they would have lost everything. But then, just on the military technology pieces, how quickly they were able to bring in what I say is really an example of what we've been talking about for a while, software-defined warfare. It is moving that fast. And it's not to imply that technology, of course, is no longer relevant; it's as relevant as ever, but technology that is software-defined is a different kind of technology, and that's how Ukraine has been gaining an advantage: by moving faster than the Russians and adapting much faster. The drones are the best example of this. It's crazy to see what they've done, the idea of bringing in 100,000 drones, some of which are first-person view, and you're watching somebody wearing these goggles drive a drone with explosives on it into a Russian tank and blow it up. So there are a lot of lessons now. No two conflicts are exactly the same, so we've got to be careful about the fungibility of lessons learned, but I think there are so many lessons that will apply to conflict for the next 10 to 20 years, and this idea of being smaller, smarter, cheaper, attritable, networked, and even swarming is playing out in Ukraine as we speak. Now, is that going to make the difference between winning or losing the war? Well, I think it's made the difference in not losing the war. It's much harder to win the war, because you're up against Russia; it's a very difficult fight that they're up against. But what we're seeing is technology used
in such imaginative ways, and it's this weird juxtaposition of World War I-like trench warfare with AI-enabled systems, Maven-like capabilities, that are being used to find targets, spot targets, and send artillery against targets. So you're seeing what a lot of us thought was coming, and it did happen, it is happening: how quickly you can adapt to the changing conditions of the battlefield, or I would say battle space, because cyber is part of this as well. Electronic warfare is really, really important right now, because it's killing Ukrainian drones; the Russians are very good at electronic warfare, so now the Ukrainians are trying to adapt based on what's going on. We have to listen to those lessons and apply them to the US Department of Defense. And by the way, I'll stop here: I think what you're seeing is that the Replicator initiative, which is buying thousands of drones of various sizes and capabilities, is in large part, I think, based on the lessons that they're absorbing from what we see in Ukraine. I wanted to go back for a moment to software-defined warfare. There is the paper that you just referenced, that you wrote with Mr. Mulchandani, if I'm getting his last name pronounced correctly, who was your CTO at the JAIC, and I believe he's the CIA CTO at this point. That was quite a landmark paper, ironically, for those of us who are in industry and maybe not military-related at all in the audience. You're used to Daniel and I always talking about, you know, AI is still part of the software; it's all bound together. You can't do AI without the hardware and the software and the systems written together to make it practical AI that's usable. And you really went there in software-defined warfare. It's called Software-Defined Warfare: Architecting the DoD's Transition to the Digital Age, and it was quite a landmark paper for those of us in that industry, because it really laid out the future of
how software needs to be integrated. Do you have anything that you wanted to comment on about that, just given that I thought it was very important to anybody concerned with AI in the DoD? It was really written by Nand Mulchandani after his experiences in the Department of Defense with the JAIC, but then he left, and before he took the CIA job he had all these ideas germinating, and he said, you know what, I'm going to write about this. We're co-authors, but he's the author, I'm the editor, and I placed the operational imperative on that report. It's so important, because it represents all the commercial software industry best practices that need to be brought into the Department of Defense. And what's sad about this, Chris, to your point, is if you're in the commercial tech industry and you were to read this report, you would be flabbergasted that the department is not already doing all these commercial best practices, which are just the way of doing business, you know, microservices, platform as a service, software as a service. It's just so foreign to the Department of Defense. Getting there, it's come a long way, but that's why we wrote that paper. It is a blueprint. And I say, we can talk about this; this is what Nand and I would say in these conversations toward the end of my time there. He said, we could talk about AI till we're blue in the face, and we will, it's a wonderful conversation, but unless you make the Department of Defense modern and digitally modernized, which includes data best practices, and do all the things that are in that Software-Defined Warfare report, AI will be meaningless. You'll never get there, or at least not at scale. And by the way, I am now a member of the Atlantic Council's Commission on Software-Defined Warfare, so they've taken that mantle and are making it the centerpiece of all these recommendations they're going to have. So it's a really important piece, and I thank Nand, because, as I'll say for as long as I
talk about this, Nand changed that organization the day he showed up, because he understood there was a different way that we had to be doing business than the standard, traditional Department of Defense way. One of the interesting things: I haven't seen the specific paper you're referencing, Chris, but I like the ideas that are being discussed here, because one of the things that I've seen, and I'm kind of trying to parse through and figure out how to put words to, is when I'm on customer sites or interacting with people, it's something about this AI technology that seems to disconnect people's minds from the fact that there's still a need to, like, have error checking. It's not just that you have this AI and you send something over and it kind of automates things and then you move on. There's still the idea of a software application, and there's still a call, at the minimum, to an API that maybe you need, like, retries around, or some sort of health checks, or backups, or this sort of thing. Does that perception hold water in terms of what you've seen, Jack, as well? And you were talking a lot about culture; any recommendations for those kind of dealing with this fact of, like, swapping software for AI versus embedding AI in software? Yeah, it completely resonates. It absolutely resonates, because, and maybe I'm even going a little bit too far, but I don't think I am, and Ukraine's validating this by the way: the next conflict is going to be an API-driven conflict. If you get that piece right, and you can update faster than your adversary on this idea of, you know, software-defined warfare, you will have a competitive advantage, because of the battle speed. Let's face it, if you put out an AI model and you never update it, then you might as well have never done it in the first place. The battlefield is going to change, the battle space will change, but also models will drift, and all
the other things that happen; they get exposed to different data, new data, whatever. You're going to have to do continuous integration, continuous delivery or deployment; that's part of this. But you can't do that the traditional way, which is you have an entire weapon system that you have to completely pull apart, rebuild, and put back out again. That doesn't work. You're going to have to break all this apart and just focus on those little bits and pieces that have to be updated. You mentioned API calls; there's so much more that can and should be done in the department on getting those status updates, and two-way feeds, you know: one, feeding those updates of higher headquarters down, but two, putting all the things that the end user is seeing, feeding that back uphill. That's what we have to be thinking a lot more about, to be able to handle what I think is going to be a very, very chaotic environment. You know, this AI, however great it may eventually become, does not change the fact, as I said in my testimony, that warfare is a very chaotic, nonlinear, dirty, ugly place, and it's horrible that we have to go to war, but this technology could provide that competitive advantage that makes a difference in a future fight. [Music] You know, when we started podcasting back in 2009, an online store was just the furthest thing from our minds. Now we have merch.changelog.com, and you can go there right now and order some t-shirts, and that's all powered by Shopify. It's so easy, all because Shopify is amazing. Shopify is the global commerce platform that helps you sell at every stage of your business, from the launch-your-online-shop stage, to the first real-life store stage, all the way to the did-we-just-hit-a-million-dollars stage. Shopify is there to help you grow. Whether you're selling security systems or marketing memory modules, Shopify helps you sell everywhere, from their all-in-one e-commerce platform to their in-person POS system. Wherever
and whatever you're selling, Shopify has got you covered. Shopify helps you turn browsers into buyers with the internet's best-converting checkout, up to 36% better compared to other leading commerce platforms, and sell more with less effort thanks to Shopify Magic, your AI-powered all-star. You know, nothing gets me and Jerod more excited than when our guests get that coupon code in their email when their show ships, or when everyone out there who loves Changelog podcasts can go to merch.changelog.com and get your favorite threads to support our podcasts. It is just the best thing ever. From stickers to threads, all of that is at merch.changelog.com. And did you know that Shopify powers 10% of all e-commerce in the US, and that Shopify is the global force behind Allbirds, Rothy's, and Brooklinen, and millions of other entrepreneurs of every size across 175 countries? Plus, Shopify's extensive help resources are there to support you in your success every step of the way, because businesses that grow, grow with Shopify. Sign up for a $1-per-month trial period at shopify.com/practicalai, all lowercase. Go to shopify.com/practicalai now to grow your business, no matter what stage you're in. Again, shopify.com/practicalai. [Music] To extend what we were just talking about right before the break, you kind of finished talking about the study of warfare being about asymmetric technology advantage; you talk about that in your paper, and just the challenges, and you've talked about integrating these latest, greatest technologies. One of the things that you address a fair amount is teaming between AI and human beings in the context of national security, and adjacent to that, which I know Daniel asked you a little bit of a question about earlier, kind of having to do with safety and the security of the AI and stuff. Can you talk to us a little bit about that? That's obviously a question that many people have in their minds who have nothing to
do with this space, you know, the military or the industry around it, just concerned about: what does teaming mean? How far in your thinking does autonomy go? Is there a point, just to infer something, that as the pace of warfare speeds up, you know, we have drones and we have different types of autonomous technologies and stuff, but as the pace of warfare speeds up, and you look into the future versus kind of what we're looking at in the near term, how does that change the human-AI teaming equation as you're trying to keep up, where maybe having humans in the loop can be a problem or a serious logistical issue? How are you looking at the future, in terms of where we are today and where that's going, and what are some of the challenges that we need to overcome to get there? I think so strongly about this idea of human-machine teaming, or human-systems integration, that if we don't get this right, we're in trouble. And what I mean by that is, people have been writing about human-machine teaming in some form for 50 years; it's no different in terms of the concept of how do you get the most out of humans and machines. But to me, we're at a period where it is changing, and it's changing because the machines are going to be so much smarter than we're used to talking about, so that it will be a different view of human-machine teaming than we're used to. So I think it's a whole new research field. There are already people working on this, and I'm amazed at some of the work that's being done, but we just don't know how good some of these technologies are going to be. Let's just take, very briefly, the example of sort of ChatGPT or the equivalent. There's this idea of prompt engineering. I look at that as almost like the elevator operator in the '30s or '40s, whenever it was, and it goes away eventually, because people get comfortable with it. But they're not as comfortable with it today; you need help figuring out what that
interface looks like. But the other thing I say, and this is where I come back to it, is I really do believe there will be situations, and I call this the bell curve of military operations. In some cases you really do want that human making that final decision; maybe to launch nuclear weapons it's the President of the United States, and nobody but the President of the United States. On the other end, you need the machine to do what the machine does best; there is no time to get in the way. Everything else in between, I don't know what that is, 80%, 90%, is human-machine optimized for both, that centaur idea, and that's what we need a lot more work on, working through. And here's the example I would give, from my time in the flying world: there were a lot of times the machine did not operate as intended. The human was there to make sure that everything went fine with the mission: no, that broke again, we're gonna have to pull that circuit breaker. I started in the F-4; circuit breakers popped all the time, you'd piss hydraulic fluid out the back end, whatever. So the human would have to make up for a lot of the machine's mistakes. I'm not sure that's going to be a luxury in the future; you're going to have to let the machine do what the machine does, and the human do what the human does very well. What do humans do well? They do reasoning; they do inductive reasoning, they do deductive reasoning, they do abductive reasoning. They put things in context, they do context, they deal with other human emotions. Machines don't deal with emotions, they don't understand emotions, and I'm not sure, despite what some people claim, they ever will. So what does that look like? I think there's a lot that has to be done in this area in experimentation: play around with it, see what works. Because if you don't get it right, and there's a couple of examples where it's gone dramatically badly, and I think the 737 MAX is one of those examples: well, we're going to put this software in, trust us, you don't
need to be retrained as a pilot on this, just listen to what it says in the cockpit. That was wrong, and it's proven that it was dramatically the wrong thing to do. So I think this is an area that needs a lot more exploration. There are people writing about this, really good writing in various places, but we have to figure out how to get this part right, because the machines are going to be smart enough that they have to be allowed to sort of run when they should run. We're seeing that modern models are getting so impressive in certain areas, in certain capabilities. There's a lot of hype around them; they can't do everything, but what they do, they tend to do quite well, recognizing that there are hallucinations and other technical issues to be worked through over time. But we're seeing kind of a rapid increase in capability in a lot of these areas. So one of the things that I get asked all the time, just kind of on the side with people, is: as those capabilities go to some level of increase in the future, whatever that is, and whatever the tasking is, and you've kind of alluded to how it kind of changes the balance in human teaming, is there a set of metrics or some guidelines that you have in your mind as you see the technology progress? Maybe not today, maybe not this year, maybe not next year, but maybe five, ten years out, you see these capabilities in specific areas increasing far beyond what a human can do for a given task. How do you rebalance? How do you assess that? You talk about assessments in your paper as well, so that you can say: this is one of those moments where you let the model do what the model does really well, because it does it faster, it does it orders of magnitude better than a human could in the same amount of time. How do you make those assessments? Because I think that for us humans, with emotions around technology and warfare, that causes a lot of concern. It seems to be the foundation of many questions
that I get asked: how do you make that metric judgment on those adjustments to the culture of how we interact in that way, you know, kind of the way we think about it? Yeah, a few thoughts on that. While I remain an AGI skeptic, I do believe these machines, these AI-enabled or smart machines, will get so good that the human interfering with the machine could be a worse outcome than human and machine together. The example that I've heard, and I think it was Gilman Louie who raised this, is about AlphaGo and move 37: if a human was there to override the machine, it would have lost the game. The machine said, no, you really do want to make this move; leave it alone, and it won. So that's an example of how humans are going to get to the point where they've got to be more comfortable with this technology, and they're not necessarily today. Why? Because, just like when I use ChatGPT, whatever version of a large language model, it does still get some things very wrong. Maybe the ratio is only 10% versus 20% just a year ago, but if that 10% is dangerously wrong, then do you really want that in military operations? Not yet, you don't. So how do you do that? This is the core of your question. To me, it is test and evaluation, it's experiments, it's putting it into the hands of users as an MVP, a minimum viable product, which again is a little bit different, or a lot different, than the way the Department of Defense has fielded its systems in the past, where you fielded any system when it was as close to perfect as you're going to get, which may have not been perfect, but close enough for government work, as we say. But in this case, you need to put things in the hands of users sooner rather than later. This is why I'm a big fan, despite my reluctance to say we should not be using large language models for putting items in the presidential daily intelligence briefing: you should have all sorts of experimentation being allowed in all these federal government organizations and agencies. In fact, the
White House executive order on AI says go try this out; it basically says go do it. So I'm a fan of the experimentation, what we'd call sandbox experiments: put it in users' hands, because we don't know yet. But part of that is core test and evaluation. We do not want to short-circuit test and evaluation. And why is this so important? Because I think that's where we're going to see the biggest risk, at least initially. Some risks are going to be hard to figure out until you use these things in an operational world; we just know that. Just like in commercial business, some things are going to surprise you, but that's why you then update the models, because you learn. But in those early stages of design and development, before you get to the fielding part, there's this thing that we have done really well in the government for many, many years, especially on the hardware side; it's called test and evaluation. That does not go away when we talk about AI. In fact, AI is still novel enough that I think it's more important than ever to spend a lot of time on the T&E. That doesn't mean you're going to go slowly relative to sort of putting out other things, but I am a little bit cautious about moving too quickly on some of this, because we just don't know what those risks are yet. So there's a lot of work going on, including some things I've been working on, on a risk management framework for AI systems in the military. Because this risk, it is a hierarchy, right? On one end there's AI-enabled nuclear weapons, really, really bad. On the other end, there's process automation for a finance system that touches nothing having to do with warfighting: negligible risk, move really fast on that one. And then for a lot of things in the middle, you've got to do test and evaluation, determine how many risks there are, and then come up with risk mitigation strategies. I'd love to weave a couple of things together from what you mentioned in your testimony, one of those having to
do with a concept which I thought was really interesting, around techno-economic net assessments, and I'll ask you to maybe explain what you mean by that. There's probably a lot of people that see news articles about, oh, this country is ahead of that country in this type of AI, whatever, and of course, you know, you see articles about, oh, China, whatever company in China bought up this many GPUs, or other things like that, and depending on where you're at, you get concerned. But it's all sort of very amorphous, and it's hard to really grasp where countries are in terms of the AI stack and their capabilities. The other thing that you mentioned in the testimony is encouraging the DoD to take bigger bets, and I imagine that that's also connected with sort of making sure that our techno-economic net assessment is on the upward trend. Could you talk a little bit about those ideas, and maybe how they're connected, and how you would love to see those kinds of big bets going forward? Yeah, if anybody who was not in the government skimmed through my testimony, they probably would have gone past that paragraph and not paid any attention to it. There's a reason I put it first: it's that important. It really is the center of attention. And that is, let me put it in terms of, say, one commercial company competing against another commercial company in the same general business space. The CEO of one is always looking at the CEO of two, saying, how much faster are they moving? What special sauce are they bringing into their product that threatens our market dominance? This is going on every day in commercial industry. Well, it's also going on between states, in this case in AI. The problem with states is, you can get a lot with industrial espionage, but it's a little harder to do when you have a nation-state like China, and both sides, the United States and China, are talking about how much they're each
doing in AI, but how much is reality, and how much, again, is the hype? Well, you need a lot of intelligence assessments, and I found this out from my earliest days in Project Maven, where I would ask these questions about, okay (and I'll stay very unclassified here), the People's Liberation Army: what does their AI stack look like? The intel community was not spending any time collecting on that, or at least they weren't at the time, because nobody told them to. What is a GPU? Okay, well, we know where we have to start this conversation then. What does their compute look like? What do their models look like? Are they using open source, or who's building them for China? What does their talent base look like? So there's this idea of not just technology, but then, equally important on both sides, US-China or US-Russia, whatever you want to say: what are they doing with the technology? What are they building? New operational concepts? Are they actually reorganizing? When you start seeing bureaucratic reorganization, they're far along. We don't see that yet. We don't see it in the Department of Defense, because we haven't figured out exactly what this new technology is going to do for us yet. So that's what I mean by this. And by the way, the US government used to have this thing called the Office of Technology Assessment; I think Newt Gingrich managed to kill that as part of the revolution in government, whatever. Now there's some serious work to bring something like that back. Now, I'm just talking about, within the Department of Defense and the intelligence community on the classified side: how do I bring in all this information, both unclassified and classified, to give us a relative net assessment, a net assessment of us versus them? Where do we stand? It turns out that's pretty difficult to do, because with technology it's not so easy to take a picture of a GPU from a satellite and decide how much farther ahead they are. So to me, we have got to do better at
understanding, vis-à-vis, you know, the United States versus China or Russia or anywhere else around the world, what they're doing in this area. It's such a big concept to get right, and it's hard, because it's just not like collecting against tanks or nuclear weapons or something. I can't see it anymore, and, as I say, there's no fluid coming out of a building somewhere to tell me that they're working on a particular project. Well, Jack, we have covered so much material here, and the time that we've been talking has flown by. I think, if you're willing, we're going to have to have you back on the show, because there's a lot that we haven't been able to address yet. As we finish up, I'm kind of thinking in the background about culture and what you were talking about, and just the massive organizational and even national change that we're doing here in the United States in government and military and intelligence. It's changing the way that we are looking at the future of warfighting. You mention in your paper the Joint Warfighting Concept, which I'm very familiar with, and I believe there is an unclassified version out there that we'll add into the show notes, but it's changing the way we think about all that we do with the military and intelligence, and I think AI is a big part of that. As we wind up, we often ask guests to take the last question to be whatever you want it to be, to paint a picture of the future with whatever you want it to be. Can you give us a sense of kind of what you're excited about going forward, how you think some of this may evolve, and what that means to the United States military and intelligence community, so that the listeners of this show, mostly not involved in that directly, have a sense of where things are going? Any thoughts you want to finish with today? Yeah, thanks, Chris, and thanks, Daniel, for allowing me this time with both of you today. It's a big thought, and
it's something I've been thinking a lot about. When I retired, I went back and got another master's degree, thanks to the GI Bill, and I did get a chance to think, you know, big thoughts about technology over the course of human history, I mean, really back to the very beginning. I took some courses looking at what we would maybe not call technology today, but it was back then, say sugar or tea, and how it sort of diffused globally. What I say in my testimony, and I do believe in my core, is that we are going to be, at some point, in the middle of a third revolution: you know, sort of agrarian revolution, industrial revolution, and this is different. It's not the fourth industrial revolution; it's some kind of digital revolution. We don't know what it's going to look like, because we just haven't been there yet. I say in my testimony that the future is, to a large extent, both unknowable and unpredictable. Why? Because it's not determinism, and it's not technological determinism. It will be dependent upon the decisions of many, many, many thousands of people, from leaders to citizens of countries, deciding they like or don't like the technology. It's going to take maybe 50 to 100 years to play out, and historians will be the ones that look back and define when this revolution began. But to me, it is fundamentally different, which means warfare will be different. The character of warfare is going to change. I will not say the nature of war is changing, because the nature of war is a human-centered decision, like why we fight, and why one country fights another country, and so on. So I don't believe the nature of war is changing, but the character of warfare is going to change dramatically, as we're seeing in Ukraine, and we have a chance to be on the right side of that, as I say in my testimony, of the asymmetry equation. If we're on the wrong side of it, we risk losing, and we're not used to losing, and this is a very serious risk. So this idea is playing out as we speak: get
involved in it, don't wait for it to catch up. You've just sort of got to dive in and start working on these big projects, in the government, wherever you are, or anywhere else, and in industry. Well, that's a great call to action to finish up with. Jack Shanahan, thank you so much for joining us on the Practical AI podcast. Really, really interesting conversation. Thank you for your insights, and hopefully we can get you back on the show to cover things going forward. Really appreciate your time. Thanks, Chris. Thanks, Daniel. And of course, I'll be glad to come back. [Music] All right, that is Practical AI for this week. Subscribe now if you haven't already; head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at fly.io, to our beat freak in residence, Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Gemini vs OpenAI | Google has been releasing a ton of new GenAI functionality under the name “Gemini”, and they’ve officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc.
We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.
Leave us a comment (https://changelog.com/practicalai/256/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) – Is your code getting dragged down by JOINs and long query times? The problem might be your database… Try simplifying the complex with graphs. Stop asking relational databases to do more than they were made for. Graphs work well for use cases with lots of data connections like supply chain, fraud detection, real-time analytics, and genAI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it’s easy to integrate into your tech stack. Visit Neo4j.com/developer (https://www.neo4j.com/developer?utm_source=changelog&utm_medium=podcast&utm_campaign=practicalai) to get started.
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Gemini (https://gemini.google.com/app)
• FCC decision on AI voices (https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal)
• FCC Bans AI Voices in Unsolicited Robocalls (https://www.wsj.com/tech/ai/fcc-bans-ai-artificial-intelligence-voices-in-robocalls-texts-3ea20d9f)
• Prompt Engineering Guide (https://www.promptingguide.ai/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-256.md) | 41 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. We just dropped Dance Party, our third full-length album, on Changelog Beats. Buy it on Bandcamp and iTunes, or stream it on Spotify, Apple Music, and the rest. Link in the show notes. Thank you to our partners at fly.io. Launch your app close to your users; find out how at [Music] fly.io. Well, welcome to another episode of Practical AI. This episode is a Fully Connected episode, where Chris and I keep you fully connected with everything that's happening in the AI world, all the recent updates, and also share some learning resources to help you level up your AI and machine learning game. I'm Daniel Whitenack, founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris? Doing pretty good, Daniel. A lot has happened this past week. A lot has happened. I don't know if it felt like this to you, but there was a little bit of a lull around the holidays. Maybe too much eggnog. Yeah, too much eggnog. But we're fully back into the AI news and interesting things happening. One of the ones that I had seen this week, Chris, was a decision (well, I don't know how all the government stuff works), but the FCC, which regulates communication and other things in the United States, had a ruling about AI voices in robocalls. If people don't know, robocalls are automated phone calls. Back when I worked in the telecom industry we called it dialer traffic: you spin up a bunch of phone numbers and you can call a bunch of people. This is how you get phone calls from numbers that seem maybe local to where you're at, but they're really
just automated calls, and then you pick up and realize it's spam, or someone trying to sell you something. Anyway, there was an interesting one where there was an AI voice clone of President Biden, and I think they were robocalling a bunch of people and trying to change views about President Biden via this recording. Well, it wasn't a recording; it was a voice clone of him saying certain things which hopefully would sway people's political affiliations or sentiments leading into election season. Anyway, this was one of the things that was in the news, and maybe prompted, or at least highlighted, some of these decisions by the FCC to ban or fine people that were using AI voices in these robocalls. So yeah, what do you think, Chris? First of all, I think whoever was doing that has serious ethical issues to contend with. Yeah, well, I'm not sure that a lot of dialers are primarily motivated by their ethical concerns. Yeah, I think we've been seeing this coming for such a long time, and we've talked about it on the show, with all the generative capability and the ability to commit fraud and to misrepresent yourself in ways like this. So I'm glad the FCC got on top of it after something like that happened, and unfortunately I suspect we'll see quite a bit more of such things. As you pointed out, not everybody follows the law as well as maybe they should. I keep waiting for them to just ban robocalls altogether; it would take the whole issue away from us. We'd have AI-generated voices in other contexts, of course. But one interesting thing, and I actually forget if this was a conversation we had on this podcast or elsewhere, maybe someone can remember, I don't always remember all the things we've talked about on this podcast, but I saw either in a news article or we were discussing someone on the other end of the spectrum who was using cloned or synthesized voices to actually spam-bait the spammers. They had a script set up where they would get a robocall, or a spam call, and they have this conversational AI that would try to keep the spammer on the line as long as possible. I think we did talk about that; I remember that. Yes. So I don't know if that's illegal. I found that one also kind of fun, because you see these people on YouTube that spam-bait the spammers and try to keep them on the line, because if they're talking to an AI voice, then they're not scamming my grandma or something like that. That's true. So yeah, that was, I think, the goal in that, but I don't know, maybe all of this gets into a little bit of a murky zone. It does, but I would say the FCC, the Federal Communications Commission, got it right on this one. Score one for the government. Yeah. What I don't know is what this would still allow, because obviously when you call in to change your hotel reservation, or you call your airline or something, there are synthesized voices, and there have been for many, many years, not necessarily synthesized out of a neural network, but synthesized voices. So I'm assuming, and I haven't read the ruling in detail, that the main thing they're targeting is these robocalls, and so I don't think that covers these assistants. But I don't know, that's a good question. I would assume it goes to intent, and the representation of the voice, and if it is clearly, as in the case of the FCC ruling, mimicking a person for the purpose of misrepresenting how they're seen, or what their positions are, then I think that's a very reasonable thing. I think all of the types of circumstances we find ourselves in where people are
trying to commit fraud or misrepresenting themselves in some way probably need to be addressed in this way. But obviously, for every one of those there are probably a thousand legitimate use cases as well. I agree. Yeah, there is probably a weird middle zone, because if you remember when, I think it was originally Google, did their demos at one of their Google I/O conferences, one of the things shown on stage was clicking and calling your pizza place and ordering a pizza with an AI voice, or "make me a reservation at 5pm at this restaurant", where there's no form on the website, so there was an automated way to make a call with an AI voice to make the reservation. Which seems completely legit to me, because you're representing everything appropriately; you're not pretending, you're not getting around anything. You have a tool, and it's a tool. And frankly, I could use a few of those in my life to just take care of all the things, but I'm probably not going to call anyone and have an AI model pretend to be Joe Biden or anybody else. So yeah, like you were saying, it gets extremely concerning when there's a representation that this is this person, and they're trying to sway your mind in one way or another, and it's not that person. Yeah, pure ethical problem right there. Well, I don't know, do you think this represents some of what we'll see this year in terms of a trend of government regulation of generated content? I would not be surprised, especially since we talked last year about the executive order here in the US that came out, and I think that was indicative of further actions to come. They essentially laid out a strategic plan on how they were going to address AI concerns, and the FCC was one of the agencies, I believe, that was explicitly listed in the order, if I recall. So I'm not surprised to see them weighing in on this at this point. It'll be interesting to see how it mixes across national boundaries, and to see how various countries are addressing it, because so much of this is transnational in terms of technology usage, and even organization-spanning, and so it will be a curious mess for all the lawyers to figure out going forward. Yeah, when the dialer is using Twilio or Telnyx or something to spin up numbers, but they're doing it from an international account which is probably not even in the country where they're operating, and there's all these layers, it gets into some crazy stuff. That's always something that stands out to me. I always listen to the Darknet Diaries podcast, it's one of my favorites, so shout out to them for the great content they produce. But that's always a piece of it, right, putting enough of these layers in between to where, sure, there's regulations, but... We just need a blanket rule, a global blanket rule, that's just "do the right thing". Let's just have everybody out there do the right thing. But we may not have things to talk about on the podcast then. Yeah, well, the messiness of the real world will continue. Speaking of Google, I mentioned the Google demos and the stuff they've done over the years with synthesized voices and all that, and of course recently they've been promoting Gemini, which is this latest wave of AI models from Google, which are multimodal-first models. Yeah, there's a whole bunch of related activity in there. They took their existing chatbot Bard and rebranded it into Gemini, and there are several: there's Gemini Pro, and, very confusingly, there is the paid service now of Gemini Advanced, which is using the model called Gemini Ultra. So I know initially there was
some confusion about Advanced versus Ultra. Well, Advanced appears to be the service; Ultra is the underlying model. So Pro represents a model size, or Ultra does, or it represents a subscription tier, both in different ways. Pro is the free tier; there's nothing less than Pro, we only start... Oh, obviously. Yeah, we've talked about this with Apple products before, there's no low-quality anything, right? Exactly, that's what I was about to say, there's no such thing as low quality. It's Pro, you start with Pro, and that's the free version, and it's the smaller model. You can go to gemini.google.com just as you could go to bard.google.com, and it's roughly the equivalent of GPT-3.5, the free version on the OpenAI side. And now Gemini Advanced, which has the Gemini Ultra model, is competing against ChatGPT, which is hosting the GPT-4 model at the high end, and there have been a billion reviews of how the two go against each other head-to-head. Have you tried the various ones, or tried Gemini? I've not tried Ultra yet, because I haven't decided to pay for it, because they're asking for 20 bucks a month, so I haven't been able to compare it directly. I've watched a whole bunch of YouTube videos, more than I should have, where they showed people doing side-by-side comparisons, and I think it's a really good model, but it has generally met with some disappointment, in that people expect the newest thing is always going to be the greatest thing possible. I think we saw something with GPT-4 where, when OpenAI released it, it had its initial fanfare, and then they built a lot of infrastructure and services around it, the various plugins, and they also fixed a lot of the problems behind the scenes while maintaining the actual underlying model. Whereas Google has not done that: they put the model out, and it's comparable in many ways, but it feels very, very rough around the edges, and it doesn't always give you the best output. So most of the direct head-to-head comparisons, most of the various tests I've seen, have had GPT-4 win out. My expectation would be that Google will start working around the issues it has and cleaning it up, and probably within a few months it'll catch up a little bit closer. So our company, and actually the last few I've been a part of, have been big Google users in terms of G Suite, Google Workspace, email and docs and all of that stuff, so I'm kind of embedded in that ecosystem, and I'm thankfully not having to deal with Teams or something like that, as I know many are. I am at work; it's terrible. Oh, I feel for you, and I guess I do experience that pain in a second-order way, because I have to take a lot of Teams calls. But anyway, outside of that, which is probably enough said, I'm always trying the Google stuff that comes out, and I had tried Bard, and I think also before that just the general interface. I don't know if it was always branded as Bard, or, I remember PaLM, but I think PaLM was below, or embedded in, Bard. I don't always remember what the branding was, but now there's Gemini. I would say my impression was similar, Chris, in that, you know how you log into any of these systems like ChatGPT or Gemini, I literally just tried one of their example prompts, like "try this". I think it was something like print out how to do something in Linux, list processes or something. I just clicked the button on the example prompt, and it wasn't able to respond to the example prompt. These are rough edges; I'm sure the model does a lot of things really well, and that was just a fluke in many ways, but I think it does represent a lot of those rough edges that they're dealing with. And my impression, I've said this a few times
on the podcast: when you're a developer working directly with one of these models, it's kind of like taking your drone that's flying all great while you're controlling it, and then you take it out of autopilot mode, and there are all of these things to consider that you really just didn't think about, because they're taken care of by great products like Cohere, Anthropic, or OpenAI, or whatever. So I definitely feel for the developers, because there's a lot of behavior to take care of, but that was not the best way to win me over. I think they might have done better to hold back just a little bit longer and do a little bit more. They talked about having roughly 100 private beta testers, and that seems to me a very small sampling of beta testers to be working on it. You mentioned another name just now which I wanted to throw out, one that is very absent from this conversation out there, and that is Anthropic. I don't see a lot of comparing it to Claude and stuff like that, or Claude 2 at this point. Yeah, Anthropic and Cohere, maybe some other ones. Absolutely. Right now it's been a two-horse race between these two, which made me a little bit sad. I wish it had been a little bit more expansive, and also against some of the open source models that are out there, because one of the topics you and I are often talking about is that with the proliferation of many models, some of which are private, some of which are open, it increases the challenges for the rest of us in the world to know what to use and when, and when to switch, and things like that, something that I know you know quite a lot about. Yeah, it's been intriguing to see all of these, and I would say all of them are on some type of cycle, right? So we're talking about maybe GPT-4 is in the lead, and here comes Gemini, and we're mostly talking here about the closed, proprietary models, that sort of ecosystem. But then I'm guessing Claude had a big release at some point and they're probably in their cycle where, and I have no inside knowledge of this, it's just my own perception, Anthropic and Cohere are in a different release cycle, obviously, than OpenAI and Google. So we'll see something from them in the coming months, I'm sure, in terms of upgrades, or multimodality, or extra functionality like assistants, or tying in more things like RAG, as we've seen with OpenAI assistants and file upload and that sort of stuff. If we're fair about it, when you think back to when GPT-4 came out, it didn't have all the things either; the ecosystem has grown substantially since its release, and it had some of the same challenges, and I think this might be coming with Gemini. I think everyone kind of took that for granted; they were a little bit less splashy than a big giant new model coming out, and I think this is one of those moments where you kind of go, wow, there's more to this than just the model itself. Big new model, I got that, but there's so much to the ecosystem around a model, the various plugins, capabilities, extensions, whatever you want to call them, Google calls them extensions at this point. I think it really goes along the lines of something we've been saying for a long time: the software and the hardware, it's all one big system, it's not just about the model. So I suspect Google is very well positioned to make the improvements in the coming weeks, and it may be interesting to revisit some of these tests after a short while. Yeah, and there are other players that are kind of playing on this boundary between open and closed, or on that sort of open-and-restricted line, releasing things that are open but not commercially licensed, or open source but with some other usage restrictions, and that sort of thing. There's
cool stuff happening in all sorts of areas. One of the ones we've been looking at is a model from Unbabel, which is a translation service provider; they have this Tower family of models, which does all sorts of translation and grammar-related tasks. But there's also a lot of multimodality stuff coming out. I noticed, we talked about text-to-speech at the beginning of this episode, and the most trending model right now on Hugging Face is the MetaVoice model, which is a 1-billion-parameter model that is text-to-speech. But if I'm just looking through other things that are trending, we've got text-to-speech, image-to-image, image-to-video, semantic similarity, which are of course embedding-related models, text-to-image, automatic speech recognition or transcription. So there's really a lot of multimodality stuff going on as well, and people releasing that. I know one that you highlighted was some stuff coming out of, I believe it was Apple, right? Yes, related to image modification, or how is it phrased? Image editing. Image editing. MGIE is the acronym, and I haven't heard them say this, but I'm guessing they're calling it Maggie or something like that. It's a model where you give a source image, and they have a demo that's on Hugging Face, and you essentially talk your way through the editing process and gradually improve it. I think they had the bad luck of announcing and releasing it at the same time that Google did Gemini to go head-to-head with GPT-4, so I think it largely got lost in the news cycle, but it looks like it might be a very interesting thing. They're competing against, like, Adobe doing image generation, and all of these companies have some level of image-editing model capabilities, so it will be interesting to see how Apple's plays out and how they apply it to their products. What I think is a differentiating or interesting element of this, which is maybe not text-to-image or text-to-text sort of completion, is the common types of things people want to do which are somewhat model-independent but are more workflow-related. So things like RAG pipelines, where you upload files and interact with them. You've got the GPT models, or the OpenAI ChatGPT interface, where certainly you can upload files and chat with them or analyze them. Anthropic actually was an early one where, because of their long context windows, their models had the ability to upload files and chat with those files. I don't think, at least I couldn't tell, there's something similar in Gemini, other than uploading an image and chatting or reasoning over that image, which is sort of the vision piece of it. But more than multimodality, there are these increasing workflows people are developing, and one that I think is really interesting is the data analytics use cases that are coming out. I've seen a trend of a lot of these companies popping up that are something to the effect of new enterprise analytics driven by natural-text queries, so I'm thinking of, like, Defog, I think it is, yes, these companies which are a chat interface where you type in a question, maybe your SQL database is connected, and you get a data analytics answer or a chart out. And this is something where, again, I don't know all the internals of ChatGPT, but it's interesting that there are different takes on this approach, and I think there's a lot of misunderstanding about how this actually happens under the hood. So I don't know, have you done much where you've uploaded a CSV, or done that sort of thing in ChatGPT, and asked it to analyze it or something like that? Ironically, that's literally something I'm playing with right now. I know you didn't know that
before asking the question, but I saw a similar post about analytics being used for this, and so I'm experimenting with it, but I'm still very early. How are your results? Initially they're not as good as I want, but I think that's mainly my problem. I keep running into little bumps where I'm trying to get the CSV usable. So I have a database that I dumped some data out of and was trying to do that, but I literally just did this today, today was day one, and then I stopped and came in for us to have this conversation. So let me let you know in another week or so how that panned out. But it caught my eye because I saw a conversation online about this, and some of the personalities I've always associated with being super technically bright analytics folks were saying we're just hitting that moment where this kind of AI-driven conversational analytics is now going to be available to everyone. And I was like, well, that's what I want, that's what I need. So I'm actually trying to do something for work right now on those [Music] [Applause] [Music] ones. What's up, friends? Is your code getting dragged down by JOINs and long query times? The problem might be your database. Try simplifying the complex with graphs. A graph database lets you model data the way it looks in the real world, instead of forcing it into rows and columns. Stop asking relational databases to do more than what they were made for. Graphs work well for use cases with lots of data connections, like supply chain, fraud detection, real-time analytics, and generative AI. With Neo4j, you can code in your favorite programming language and against any driver. Plus, it's easy to integrate into your tech stack. People are solving some of the world's biggest problems with graphs, and now it's your turn. Visit neo4j.com/developer to get started. Again, neo4j.com/developer. That's neo4j.com/developer. [Music] Well, Chris, I was asking these questions about this data analysis stuff because I've done a few customer visits recently where we've been talking about this functionality, and I've noticed, as I've gone around and talked to different people, that there's some general misunderstanding about how you can analyze data with a generative AI model. One, because there's something people think is going on that isn't actually going on, and two, because generally, if you ask a language model, just a chat model without uploading data, math-type questions, it is usually really terrible at that, right? Even adding things together, or doing basic aggregation, is something these models are known to fail on pretty badly. And so the question is, well, how am I getting anything relevant out of these systems to begin with? Again, I don't know all the internals of ChatGPT, but this is my own understanding. There's some difference if you look at examples like Defog, or ChatGPT, or Vanna AI; these are some examples of this that's going on. ChatGPT takes the approach, in my understanding, in their assistants functionality, where you upload maybe a CSV and you ask a question, and you wait for seemingly forever while the little thing spins and it says it's analyzing, I think is what it says, something like that. Yep. My understanding of what's happening is more of what they used to call Code Interpreter: it's actually generating some Python code that it then executes under the hood to analyze the data you uploaded, and then somehow passes along the results of that code execution to you in the chat interface. So this is a very astute observation by whoever had this: yeah, these models really stink at doing math, but what doesn't stink at doing math is code, right? These models are pretty good at generating code, so why don't we just sidestep
the whole math thing, generate the code, then execute it and crunch your data, and we're good to go. I think the thing I've often seen people struggling with, like the Assistants API in ChatGPT, is that, again, they have to support all sorts of random general use cases, because people could upload a CSV of all sorts of different types, or other file types, so there's a lot to support, and it's generally slow and hard to massage and get working right. What I've seen more in the enterprise use cases we've been participating in is less of a focus on code generation to do the data analysis, and more of a focus on SQL generation to do analytics queries. This is more the approach of the SQLCoder family of models from Defog, or Vanna AI. We're doing very similar things in the cases where we're implementing this, similar to the Vanna AI case, where you connect up, let's say, a transactional database, like your sales, or customer information, or product information, and you want to ask an analytics query. Well, SQL is really good at doing aggregations and groupings and joins. Also, large language models, especially code generation models or code assistant models, are really good at generating SQL, because think of how much SQL has been generated over time; it's a very well-known language to generate. And so you sidestep the code execution piece in that case: you're not generating Python code, but you're generating, from a natural language query, a SQL query to run against a database that's connected, and you just run that SQL query with normal, good old regular programming code to give you your answer, and then you send it back to the user in the chat interface. So I thought that would be worth highlighting in this episode, because there does seem to be a lot of confusion about what's actually going on under the hood, like how can one of these models analyze my data?
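The text-to-SQL flow described here can be sketched in a few lines of Python. Everything below is illustrative: `generate_sql` is a hypothetical stand-in for a call to an SQL-generation model (it returns a canned query so the sketch runs without a model), and the `sales` schema and data are made up. The part that mirrors the description is the execution path: the model only produces SQL text, and plain database code does the actual aggregation.

```python
import sqlite3

# Toy in-memory database standing in for a connected transactional store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("widget", "EU", 120.0), ("widget", "US", 200.0), ("gadget", "US", 80.0)],
)

SCHEMA = "sales(product TEXT, region TEXT, amount REAL)"


def generate_sql(question: str, schema: str) -> str:
    """Stand-in for an LLM call (e.g. an SQL-generation model).

    A real pipeline would send a prompt containing the schema and the
    user's question to the model and get SQL text back; here we return
    a canned query so the example is self-contained."""
    prompt = f"Schema: {schema}\nWrite one SQL query answering: {question}"
    _ = prompt  # this is where the model would be called
    return (
        "SELECT product, SUM(amount) AS total "
        "FROM sales GROUP BY product ORDER BY product"
    )


sql = generate_sql("What are total sales per product?", SCHEMA)

# The model never does the arithmetic; ordinary database code runs the
# generated query and produces the answer sent back to the chat UI.
rows = conn.execute(sql).fetchall()
print(rows)  # [('gadget', 80.0), ('widget', 320.0)]
```

Whether a system generates Python (Code Interpreter style) or SQL, the model's output is untrusted code, so in practice the execution step is sandboxed or validated before it runs.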
Well, the answer is: it kind of isn't. It's just generating either code or SQL that is analyzing your data. It still gets you there, though. In a sense, since you're not directly having the model do it, it's sort of a workaround, in a manner of speaking. But if you look at something like the ecosystem built around ChatGPT, there's a lot of tooling around it, and I think this year we're going to see more and more of that, whether it be the SQL use case you're talking about, or continued work from OpenAI. I think Google will do that as well, I think Anthropic will get on it, and you'll see these kinds of tools for doing exactly that kind of thing, where you may not have a model that does a particular task super well, but it can produce an intermediate that can do something very, very well. We keep talking about maturity of the field, and I think part of that is recognizing maybe there's a better way to do it than just having the bigger, better, latest model. Yeah, I think that's a great way of approaching it. Not to self-fulfill my own prophecy from our predictions from last year: in our 2024 predictions episode, one of my predictions was that we would see a lot more combination of what is generally being called neurosymbolic methods, or maybe more generally just hybrid methods, between what we've been doing in data science forever and a front end that is a natural language interface driven by a generative AI model. So in this case, what we have is good old-fashioned data analytics, just the way we've always done it, by running SQL queries; it's just that we gain flexibility in doing those data analytics by generating the SQL query out of a natural language prompt using a large language model. And I think we'll see other things like this. Tools in LangChain are a great example, where you generate good old-fashioned structured input to an API, and that API is called and gives you a result. But this could be applied in all sorts of ways. So let's say time series forecasting: I don't think right now language models, and I've actually even tried some of this with fraud detection and forecasting and other things, are very good at doing these tasks, but they can generate the input to what you would need in the traditional data science tasks. So, again imagining bringing in this SQL query stuff, if you have a user and you want to enable that user to do forecasts on their own data, well, you could have them fill out a form in a web app and click a button and do a bunch of work, or you could just have them say, "hey, I want to forecast my sales of this product for the next six months" or something. From that request, a large language model will be very good at extracting the parameters that are needed, and possibly generating a SQL query to pull the right data that's needed as input to a forecast. But that forecast is going to be best if you just use, like, Meta's Prophet framework or something; it's just a traditional ARIMA-style statistical forecasting methodology, and you just forecast it out with that input and then you get the result. So this is the merging of what we've been doing in data science forever with this very flexible front-end interface, and I think we'll see a lot more of that. I completely agree with you, and not only that, but I think there'll be a lot more room for LLMs that are not the gigantic ones. We've talked a bit, and we've had guests on the show recently, about the fact that there's room not only for the largest, latest, greatest giant model, but there's enormous middle ground where you can have smaller ones and combine those with tools.
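That hybrid flow, an LLM only extracting structured parameters while a classical method does the forecasting math, can be sketched like this. Both pieces are hypothetical stand-ins: `extract_forecast_params` uses a regex where a real system would prompt a model to emit structured JSON, and `linear_trend_forecast` is a toy trend extrapolation where you would call Prophet, an ARIMA implementation, or similar.

```python
import re


def extract_forecast_params(request: str) -> dict:
    """Stand-in for an LLM turning a natural-language request into
    structured parameters. A real system would prompt a model to emit
    JSON; a regex keeps the sketch self-contained."""
    m = re.search(r"forecast my (\w+) .* next (\d+) months", request)
    if m is None:
        raise ValueError("could not parse request")
    return {"series": m.group(1), "horizon": int(m.group(2))}


def linear_trend_forecast(history: list[float], horizon: int) -> list[float]:
    """Toy stand-in for a classical forecaster (Prophet, ARIMA, ...):
    extend the series by its average month-over-month change."""
    step = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + step * (i + 1) for i in range(horizon)]


# The LLM's only job: extract {"series": ..., "horizon": ...} from text.
params = extract_forecast_params(
    "hey, I want to forecast my sales of this product for the next 6 months"
)

sales_history = [100.0, 110.0, 120.0, 130.0]  # hypothetical monthly sales
forecast = linear_trend_forecast(sales_history, params["horizon"])
print(params["series"], forecast[:2])  # sales [140.0, 150.0]
```

The design point is the division of labor: the flexible natural-language front end never does the arithmetic, and the statistical method never has to parse English.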
It's pretty cool seeing people innovate in this way and start to recognize that not everything has to come out of the largest possible model you have available to you. So I'm really looking forward to seeing what people do this year in their various industries, and how that spawns new thoughts. Yeah, and especially with a lot of things being able to be run locally. I've seen a lot of people using local LLMs as an interface, using frameworks like Ollama and others, which is really cool: being able to use LLMs on your laptop to automate things, or do these types of queries, or experiment locally. So I think that even adds another element into the mix. And for edge computing, truly edge computing, where it's not practical to have a cloud backing you, or where the networking between where that model would be in the cloud and where you're trying to do it isn't practical, there's a huge amount of opportunity to use them in that area. So I'm hoping that we see a lot of innovation. Last year, and even the year before, was kind of the race to the biggest model. I'm hoping now we see what other branches of innovation people can come up with to take advantage of some of that, and also recognize that the midsize ones have so much untapped utility in them. Yeah, and maybe before we leave the news and everything that's going on in this copilot assistant analysis space: my wife actually needed help connecting to printers. Printers are not a problem that is solved by AI yet, I guess, and they will continue to be a problem forever in tech. But I was noticing in the recent updates to Windows, there's the little Copilot logo there, embedded within Windows. And I don't know who watched the Super Bowl in the US, but the Super Bowl, as we record this,
was the day before we were recording this, and there was a Copilot commercial during the Super Bowl. That's another interesting thing, because this is now running on people's laptops everywhere, and of course that's connected to the OpenAI ecosystem, in my understanding, through Microsoft. But yeah, this kind of AI everywhere, and also the sort of AI PC stuff that Intel's been promoting, and running locally, is going to be an interesting piece of it. Totally agree. As we wind up, I want to briefly switch topics here. I received some feedback a few episodes ago from a teacher who was listening, and I was so happy to have one, and maybe many, teachers out there listening to us and considering this. As we often do (people may not realize this), Daniel and I have a topic, but we are largely unscripted, so we are kind of shooting from the hip in terms of what we're saying. It's a very genuine and real conversation; we're not looking at a whole bunch of notes and a pre-planned script. I made a comment about my daughter in school, and the fact that I really think schools should take advantage of models as part of the learning process, as part of the teaching, and integrate them in, whereas often school systems right now are saying you're not allowed to use GPT, for instance, in your homework. In that, I said, ah, that's stupid, that teachers would not do that. And this teacher reached out and said (I'm paraphrasing her), first of all, we really want to, and second of all, a lot of times it's not in their power anyway; it's the school system's policy. So I just want to apologize to anyone, especially the teachers out there, who might have been offended. I'm much more cognizant now of what I'm saying. It was kind of shooting from the hip, but it was insensitive, and I found that what that teacher pointed out was dead on. It was right on,
and I just want to thank the teachers out there, especially those who are trying to take advantage of these amazing new technologies and talk their systems into bringing them into the classroom, and not make it just the bad thing not to use for homework. So thank you to the teachers for doing that. I just wanted to call that out; it's been a really important thing from my standpoint to say. So thank you. I think it represents the complexity that people are dealing with. It does. Teachers want their students to thrive. I think generally we should assume that most teachers are really motivated and engaged, both in culture and technology and the ecosystem, wanting their students to thrive, but sometimes, like you say, they have their own limitations in terms of the system that they're working in, and privacy concerns, and other things. So yeah, that's a good callout, Chris. I'm glad you took time to mention it. I want to say one last thing to the teachers out there who are trying to get these things into the classroom so that your students have the best available tools: if you ever need someone to back you up, reach out to us. We have all our social media outlets; you can find me on LinkedIn, and I will be happy to give a whole bunch of reasons to your school systems on why they might want to use the tools. I'll be happy to work with you on that, and I thank you for fighting that fight on behalf of the students that you're serving. Yeah, and speaking of learning, something that we can all learn and be better at is all the different ways of prompting these models, for multimodal tasks and prompting and data analysis. I just wanted to highlight here at the end a learning resource for people. A while back I had mentioned a lecture and series of slides that was very helpful for me from DAIR.AI. Now I think that they've
converted that series of slides and that prompt engineering course (I think that's what they call it) into a prompt engineering guide. So if you go to promptingguide.ai, they have a really nice website that walks you through all sorts of things, and it also covers various models in terms of ChatGPT, Code Llama, Gemini, Gemini Advanced (we talked about those on this show), and talks about actually prompting these different models. So I'd encourage you, if you're experimenting with these different models and not immediately getting the results that you're wanting, that may be a good resource to help you understand different strategies for prompting these models to get things done as you need to get them done. It's a great resource. I'm looking through it as you're talking about it, and it's the best I've seen so far. Well, Chris, this was fun. I'm glad we got a chance to cover all the fun things going on, and we've complied with the FCC using our actual voices, still. We'll see how long that lasts, but it was fun to talk through things. Chris, we'll see you soon. Talk to you later. [Music] That is Practical AI for this week. Thanks for listening. Subscribe now if you haven't yet; head to practicalai.fm for all the ways. And don't forget to check out our fresh Changelog Beats: the Dance Party album is on Spotify, Apple Music, and the rest. There's a link in the show notes for you. Thanks once again to our partners at fly.io, to our beat freak in residence, Breakmaster Cylinder, and to you for listening. That's all for now. We'll talk to you again next time. [Music] Love. |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Data synthesis for SOTA LLMs | Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.
Leave us a comment (https://changelog.com/practicalai/255/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Read Write Own (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog) – Read, Write, Own: Building the Next Era of the Internet—a new book from entrepreneur and investor Chris Dixon—explores one possible solution to the internet’s authenticity problem: Blockchains. From AI that tracks its source material to generative programs that compensate—rather than cannibalize—creators. It’s a call to action for a more open, transparent, and democratic internet. One that opens the black box of AI, tracks the origins we see online, and much more. Order your copy of Read, Write, Own today at readwriteown.com (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Karan Malhotra – LinkedIn (https://www.linkedin.com/in/karan-s-malhotra)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Nous on Hugging Face (https://huggingface.co/NousResearch)
• Nous Research (https://nousresearch.com/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-255.md) | 25 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. We just dropped Dance Party, our third full-length album on Changelog Beats. Buy it on Bandcamp and iTunes, or stream it on Spotify, Apple Music, and the rest. Link in the show notes. Thank you to our partners at fly.io. Launch your app close to your users; find out how at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I am the CEO and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing great. It was nice seeing you a few days ago, in person and in the flesh. In the flesh, yeah, that was great. I think you posted a picture on LinkedIn, so if anybody doesn't know what we look like (and has some crazy reason to want to know), there's a smiling mug of us on Daniel's profile. Yes, yes. And the reason we met is I was on a client visit on site, and we were prototyping out some stuff, like chat over your docs and natural-language-to-SQL stuff and all sorts of things with Prediction Guard, and one of the models that we were using was from Nous Research. That works out great, because we have Karan Malhotra here, who is a co-founder and researcher at Nous Research. So welcome; glad to have you, Karan. Hey, thanks for having me. I'm extremely excited to chat with you guys. Yeah, like I said, this is our first time meeting, but I feel like we're already friends, because I've had so much benefit from and interaction with models from Nous Research: a lot of amazing models that you've posted on Hugging Face, and research that you're doing. I'm wondering if
you could just give us a little bit of background about Nous specifically, and how you came together as researchers. From the sidelines, it seemed like all of a sudden there were these amazing models on Hugging Face, and I didn't know who these Nous Research people were, but they're amazing. So give us a little bit of the backstory there. Absolutely, yeah. Just as a general overview: we are one part open-source research organization. We put these models out for free, we put a lot of research out for free, and some datasets, so people can build on top of these open models. On the other hand, we're very recently a company as well, a C corp, so we've been working pretty hard, after getting some seed funding, on building some exciting stuff. I won't go too deep on that during the overview, but we're continuing to do our open-source research, development, and release of models indefinitely. The way we started is very interesting, and it would look pretty out-of-nowhere from the outside, for sure. It was extremely fast for us. We are a collective of people who have been playing around in the open-source language model space for a while, ranging from the GPT-2 release, to the Llama release, to the first Transformers paper. We've got people from various eras of when they came in, and for myself it was GPT-2. I stumbled upon a Colab notebook, started fine-tuning, and made some Edgar Allan Poe and Lovecraft tunes. I've done the same. That's awesome. And we just got pulled into this world of: look at these next-token predictors that are just managing to smatter together the most wonderful and amazing stories. That slowly turned into a deeper and deeper dive of, well, how can I use this for learning information? How can I learn to use this for production and automation? It's evolved over time for us. We started off just working with different open-source
collectives, actually. Once OpenAI released GPT-3 and had closed-sourced it, we were used to open-source GPT-2, and we were like, oh man, what are we going to do? How are we going to continue to play with the level of customization and interactivity that we had with GPT-2? Then EleutherAI released GPT-J-6B, and the KoboldAI community, this community of people who tune models and run inference on models, started to pop up, I think around 2020 or 2021, in the face of this. So a lot of us started to have places to centralize and play with these models. We got to contribute and learn how to become better open-source AI developers, etc. Eventually there was a need for more concrete organizations to do this kind of focused work on the creation of these models. We were stuck with okay architectures for a while, like Pythia, but thanks to Meta (we wouldn't be here without Meta, I'll say that first and foremost), the great Llama arrived. Yeah, yeah. Prior to Llama, everyone was like, oh, Facebook, evil, my data, etc., and here we are: they are kind of the shepherds of this new era of the open-source AI movement. So when Llama came out, there was a paper that came out called Alpaca, by a Stanford lab, and this was about distilling data from bigger models, like GPT-3, ChatGPT, GPT-4, and being able to train smaller models on that distilled synthetic data, something they called instruction data. That Alpaca format really opened up the playing field for everybody to start making these instruct-style models, these actually-for-production-use-style models. So there was an idea I had in my head: well, the Alpaca guys are using only GPT-3.5 outputs; what if I only generated GPT-4 outputs? It'll be a little expensive, but you'll probably get a better model out of it than Alpaca. At the same time that I was looking at this, there was a guy on Twitter named Teknium who had just started putting together his own synthetic dataset based
off Alpaca, and GPT-4-only as well. I was working with a group at the time called Open Assistant, under LAION, the really big nonprofit, and while I was working on that, we had some GPUs they were cool with us using towards the development of new models. So I reached out to Teknium and said, hey, I have a little bit of compute, you have GPT-4 data in this format, I have GPT-4 data in the same format; let's train a model. So we trained a model called GPT4-x-Vicuna. This model was a fine-tune on Vicuna; we fine-tuned a fine-tune, basically. The Vicuna model was an Alpaca-style fine-tune, and we tried our dataset on top of it. It was good; it was okay. Then we thought, you know, we'll probably get a better result if we just train on the base Llama model, and the resulting model was the very first Hermes model. Gotcha, the OG. The OG. And that's kind of how it started to come together: we both had a data thesis of using GPT-4 only and following Alpaca, and we trained on Llama, and we got Hermes. And we didn't know what benchmarks were; we didn't know anything about any of this stuff. We just made a model, and it got a ton of attention. We put it out under this name Nous Research. Nous comes from the Greek word for intellect; we thought it'd be a good name for an AI company. But it was just a place for fun projects and fine-tunes and stuff; it was just a name we were using for our collaboration. And people started swarming and asking, what's Nous Research? What's this sudden, mystical open-source organization that put out this best model? And we were like, best model? We just tried something. It was really organic, and it got to the point that people started telling us, you must have trained on the benchmarks, these are doing too well, and we were like, what's benchmarks? We're not really coming from an academic place as much as from, like, an enthusiast place
that became so committed that it became our life, right? It became our day-to-day. Yeah. So from there, people started to ask us, can I join Nous Research? Now, there wasn't a Nous Research to join; there were just two guys. What ended up happening was we formed a private Discord server, and we thought: there are a lot of people, ranging from somebody who's 16 or 17 years old, a savant on Twitter who hasn't even been to college yet but is insane at that Transformer stuff, to somebody in their mid-30s working a really good FAANG job who just wants to really create and let loose (that was another class of volunteer), and then you have an older gentleman who has already exited a company or something, who has just been playing with code for a while and wants to jump in and hang out. So we ended up being this really eclectic group. We don't know what your name is, we don't know what your race is, we don't know your gender, anything; it's just a Discord profile picture, a Twitter profile picture. So we came together and grew to about 40 people, all working together on various different projects, like Hermes tunes, data synthesis, the Capybara series, context length extension, etc. And just from this kind of interaction between Twitter and Discord, and bringing in people that we thought were cool, we ended up becoming what people would call an open-source research org. Yeah, you sort of stumbled into creating this amazing research organization, which is ruling the world, which is awesome. It's what OpenAI might have been. Oh, well, yeah, that's really sweet; thank you guys. Yeah, and I love it. It's so cool to hear that story and that background, and I'm connecting my own little snapshots here and there in my mind over the past couple of years, as I've seen you all post different models and that sort of thing. This is something we've definitely touched on on the show before, but some of our listeners
might not fully grasp, when you say the synthetic datasets that you were focused on, in this Alpaca format: could you explain a little bit? We've talked a lot about fine-tuning and preference tuning and RLHF and different things, but what does it specifically mean that you would take synthetic data? What does that mean in your case, and why does that result in something good when fine-tuning an open model? People might think, oh, this is synthetic data; why should I expect it to be any good? Could you help explain that subject a little bit? Yeah, absolutely. Out of context, "synthetic" is as meaningless as "artificial," right? Data is data. But in this case it's referring to a particular class of data that's been generated by another language model, or another AI, another diffusion model, etc., that can actually be used to further train models. Now, you might say, why would you want to do something like that? How is it helpful? What was important to us is we were all GPU-poor. We were all running on laptops, or maybe a 3090, maybe a 4090. As individuals, we don't have data centers, so training or even tuning a large model in the early days, something like 70 billion parameters, was just unfeasible for us. And knowing that GPT-3 is something like 175 billion parameters, and 3.5 and 4 can only go up from there, the question became: how can we make these small 7-billion-parameter models even compete with these massive, massive ones? These ones that I want to run offline, these ones that I might want to run on an edge device, on a phone, on a drone, etc. How can I make them even useful? So there are two things to talk about here: one is synthetic data, and the other is distillation. Synthetic data is just referring to any kind of data that's created by a model. In this case, the reason that's useful is, in
particular, distillation. So if I told you to go study computer science for 10 years, for example, and put in that massive time investment and really focus on general programming, and then I told you, now it's time for you to learn about AI and Transformers and stuff, and put you through all the math prerequisites, etc., you're going to come out with a really strong foundation for how to do the work, but the problem is you've put in a massive time investment. Now, if I take that guy who spent 10 years doing engineering and then another five years doing AI, and I ask him, hey, can you teach somebody just the really important, compressed tidbits that'll help them get up and running to do the work? That's data distillation; that's knowledge distillation. So you look at these big models, like a Claude or a 70B model or GPT-4, and you can see they're amazing. They're brilliant at everything. They have a bunch of high-quality data they're trained on, and they have a bunch of low-quality data they're trained on, which they can interact with and express in a high-quality form. So instead of me having to read a massive 10-pager on why some chemical reaction happens, or some tax-basis process, whatever you want it to be, and then feeding that to a language model, we can just have that really smart model that already understands it really well compress that information into an instruction, or into a conversation, into two sentences, three sentences, five sentences, half a page, and we can just train a much smaller model on that compressed information, and it will learn the compressed information (to the degree that a language model learns something, which is not perfectly). Because of that, what the Alpaca guys did was generate a bunch of seed tasks from GPT-3.5 on various different domains and topics, and create these kinds of compressed instructions, with
an instruction, an input (a question from the user), and then an answer. So the instruction could be: given the following math equation, explain step by step why this is the answer. Then the input is the equation, which is your question, and the output is the compressed answer. All of that we can take as one sample in the dataset, and we can make hundreds of thousands or millions of samples like that, across various different domains and various different tasks. The Alpaca guys did this with less than 100K examples, I believe, and they trained the Llama models on these, and they found massive boosts to performance; this distilled information compresses and transfers over, like it does with a human. So when I saw that, and independently when Teknium saw that, and independently when many others saw that, we were like: this is so intuitive. This is exactly how I've learned anything, by just going on Discord and Twitter and bothering people to give me the compressed bit of how to do something. We should try doing this with even higher-quality models than 3.5. So we created (I can't remember the exact number at the moment) at least 50,000, maybe 100,000 examples originally for Hermes 1, just using GPT-4, and then we trained on that and ended up getting performance that was a massive boost compared to the other models that were not trained using this kind of method. Without these giants that have already established themselves in the space, we wouldn't be here. Without OpenAI, without Meta, we literally wouldn't have the model and the data to do the kind of work that we did to make Hermes. What it allowed was for local models to finally be comprehensible, and for us to finally have offline capabilities: to take the good stuff from something like GPT-4 or something else and make it uncensored, so it still has all this understanding of all these topics.
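For reference, a single Alpaca-style sample has exactly the three fields described here. The sketch below shows one such sample rendered with the prompt template popularized by the Stanford Alpaca repo; the content of this particular sample is invented for illustration.

```python
# One Alpaca-style sample: instruction / input / output.
# The sample content here is made up for illustration.
sample = {
    "instruction": "Given the following math equation, explain step by step why this is the answer.",
    "input": "2x + 4 = 10, x = 3",
    "output": "Subtract 4 from both sides to get 2x = 6, then divide by 2 to get x = 3.",
}

# Prompt template popularized by the Stanford Alpaca project for samples
# that include an input field.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

# Render the sample into the flat text string a fine-tuning run would see.
text = PROMPT_TEMPLATE.format(**sample)
print(text)
```

A dataset for instruction tuning is then just hundreds of thousands of such rendered samples, each one a compressed piece of knowledge distilled from the larger model.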
But it doesn't necessarily have all that RLHF inside it, that safety tuning, so when people utilize the model, it has all this intelligence but more freedom of thought, to converse with you on topics that OpenAI may reject. Gotcha. One of the things I was curious about as you were going through that: a few episodes back, Daniel and I were talking about the effect of model licensing on the community, and the different kinds of license concerns that were coming out, whether from Meta, OpenAI, you name the organization. Is that ever a challenge for you, since you're kind of using those to get started, in terms of the inputs? Has that been a concern, or do you anticipate it being a concern? I think that, of course, US and international regulation on this stuff is evolving; the conversation is evolving very much. So naturally, you have to keep it top of mind; you have to think about these kinds of things. But thankfully, because all of our model releases are open source and we don't profit from them (if somebody goes off and creates a product using our model, good for them), we don't necessarily take on the liability or the worry of saying, hey, we're going to sell you this model that was created with GPT-4 outputs. We actually actively try to stay away from doing that. But because the data distillation paradigm is so effective, if a model comes out that's better than GPT-4, and it's open source, and I can use it locally, and its terms of service say you can use it to make a commercial model, then we can apply the same techniques that we've been preparing, researching, and understanding from these closed models, and use them there. So right now, we don't stand to, or try to, or have any plans to profit from using any of these outputs. We're not about that, because we want to be careful and respectful of these model creators
and these companies. But that being said, we're learning and developing all these techniques that will be useful for when that time comes and for when that's available, especially with the advent of something like Mistral. If we do distillation from a Mistral model, like Mistral Medium or something like that, then from my understanding (barring their terms of service saying otherwise), it's completely okay in that situation for us to create models like this that can be used commercially, etc. Regarding the terms-of-service stuff, though: as much as we err on the side of caution, I'd find it hard to see a company enforce their terms of service when these larger models are likely trained on not-entirely-copyright-free stuff. I'd be hard-pressed to believe that these closed-source companies' models are totally copyright-free and totally copyright-clean. So if some other company that was feeling a little more rambunctious than ourselves were to say, you know, we are going to commercially release based on this, I imagine it'd be difficult to come after them without the other group opening their books. There's actually a pretty interesting interaction that happened regarding this between Google and OpenAI, if you guys are familiar. So yeah, I saw this interesting picture the other day. It was the interesting web of AI: how Microsoft, Google, OpenAI and the rest are connected. On one side there are some of them, and it shows how they're connected to the others in this visualization, and how many of them overlap in these strange ways, whether it's Together or Mistral or Meta, Google, Microsoft, OpenAI. It's a very interesting web of connections that probably makes some of these things rather difficult. Leave it for the lawyers to sort out. Yeah, yeah. That's the thing: we can look at an example. You hear that phrase, good artists
copy, great artists steal, right? So the data distillers, we're copying; we're just distilling this information, trying to make our models more like those, and we don't really plan to commercialize. We're just doing it for free, for everyone. But the great artists are, you know, Google. You look at Bard and it tells you, I was made by OpenAI. Now, it's fine for our open-source model to say I was made by OpenAI, because we're very transparent that it's trained on GPT outputs. But when Bard violates the terms of service with a paid product... Yeah, bold. ...that says I was trained by OpenAI, you'd think that OpenAI would come after this multi-billion-dollar company immediately, right? Instead, first you see Google deny it, then you see a tweet from Sam Altman, which was something along the lines of (I'm paraphrasing here): I'm not mad that they trained on our outputs; I'm mad that they lied about it. And I'm sitting there like, okay, you're mad about this, but aren't you going to pursue the legal action in your terms of service? No, because everyone would have to open their books up too. That being said, I don't condone the commercial use of that kind of stuff, like making a paid model from GPT-4 outputs. I wouldn't advise anyone to sell a model made with them, just because we want to respect people's terms of service and such; they worked hard and spent billions, or hundreds of millions, however much they spent, to make this stuff. But there is certainly room for hypocrisy in that realm of the large corps. Those are my thoughts on the licensing stuff, and they're definitely my own individual thoughts. We're a pretty decentralized collective at Nous, so you'll find people with all sorts of opinions all over the place, and as a company we don't hold any view whatsoever on that. Yeah. I'm wondering, maybe
this gets a little bit into the distributed nature of this, but I know that there are various collections of what the Nous Research group has done over time. You mentioned Hermes, but then there are these other categories of things too, like the YaRN models, Capybara, Puffin, Obsidian, just looking over the Hugging Face page now. I'm wondering if you could give us, from your perspective, a little bit of a map of these different things, and how people might categorize the different collections of what Nous has done. I definitely want to talk about the future and ongoing things as well, but as it stands now, what are the major categories of what the collective has invested in over time? Certainly, certainly. So, within the stuff that's viewable on Hugging Face, at least, we've got the Hermes series, of which I told you guys the initial story of how it went down, but from there Teknium kept going. I haven't personally had any interaction with the Hermes models since the initial one. From there, Tek just continued to create more and more synthetic data, collect from more and more sources, and use more and more open datasets, and he's got the, I guess, award-winning data thesis; the guy really knows how to go about curating and synthesizing good data. So Teknium: the Hermes project is his baby. Everything you've seen since is really his work, and that of anyone who has collaborated with him. Although you can't really call anything a solo project, because of the open datasets we use too; everything is built on the shoulders of giants, and on the shoulders of each other as little people. But Tek really has helmed the Hermes initiative so far. I think that's our most popular model series, and he released Open Hermes as well, because we had some data in the original Hermes that we never released publicly, and we wanted to make that kind of an
option for everybody.

So that's Hermes. It still follows the same philosophy of synthetic data, and it now uses the ChatML format instead of the Alpaca format, which is what we upgraded to. Then you've got Capybara and Puffin, which are both done by a volunteer and, you know, OG member, LDJ, who you may be familiar with, Luigi Daniele Jr. The Capybara series used an Amplify-Instruct method, a novel method that LDJ had worked on alongside another one of our researchers, J. (LDJ and J can get confusing.) The two of them worked on the Capybara series, created the dataset, and trained the models. Puffin was the idea of using handpicked, smaller samples from some of our larger datasets to make sleek datasets for an easy tune and see how that works, kind of in the spirit of the LIMA paper, where they just used a few examples to get really good results. Those are really the popular tunes using synthetic data for general use.

YaRN is a novel context-length extension method, at the time of its creation, by emozilla, also known as Jeffrey Quesnelle, and Bowen Peng, also known as bloc97, alongside Enrico Shippole and EleutherAI. What happened there was that these guys had already been looking into context-length extension for a while, and when we came under the Nous banner to do the work, it opened up a bit of resources from compute sponsorships, and a more centralized place for them to do that collaboration. I had no hand in the YaRN models whatsoever, and that's the exciting thing: everyone really gets to work in their own spheres, in their own autonomous circles, and then we just check in and see how the research is going, how it's coming along. We really work with people we heavily believe in, and we believe in their ideas, so if we don't already have an idea, we just say: please, freely create, because we brought
you in because what you will freely create will push forth our agenda anyway. So I think those are our big model releases and series that we have available. Outside of that, we have a bunch of stuff on our GitHub as well, stuff that's being worked on, stuff that hasn't necessarily come out yet. There's a lot of that.

So I've got a question for you as a follow-up. It's pretty fascinating, the story you've been telling us here, because of that organic creation of the organization, or collective. As you went through and talked about the different model groups, and the owners, or spiritual owners if you will, of each of those families: how do the different members of the collective interact? How do you push each other along, share information, or give ideas, so that cross-family efforts can benefit from the overall collective? And, as you said, you're now a C corp and you're more organized at this point, so what kind of culture has developed around those communications and learnings?

Yeah, absolutely. When it started, it was just a small Discord, maybe ten people. From there, we created more channels as people wanted to work on more things, and we initially split up into three or four different topics, or sectors, that people could assign themselves to. One being data synthesis, of course, so we can find novel methods and formats for distillation and the creation of synthetic data. One being training: people who are really good at training and hyperparameter stuff, and people who come up with new architectures and new techniques. Another being agents: a group of people who want to actually try to build tools and do autonomous work with this stuff. And then we had one category that was a prediction for the future: simulation. We had people that were very
interested in bringing this stuff into simulation, into Unity, into seeing how all these things came together. It was interesting, because the training built on the data synthesis, the agents built on the training, and then the sim would build on the agents; that was the idea. Everybody needed to work together, because all those things are so intrinsically connected, but people would have specializations on where in that workflow they wanted to work. We didn't end up doing a lot on the sim side of things; recently there's a lot more interest, because we have a lot more capability generally, as the AI community does.

But as we've grown... when we went to 40 people, it was fine; now we've gone to something like 5,000 people in the Discord, and it's a little unwieldy. So what we do is tier people in. You come into the Discord and you can see maybe two channels, and then we give people a developer role. We don't really let people select their own roles, because we want to make sure we can sort through people we know and let them through. Even though we do open source research, a lot of it is unreleased, and we want to make sure it's protected before release. So we create this developer role so people can then see way more channels of general development and development conversation. From there, as we see contributors who have started to do more work, or show more passion toward contributing to Nous in a particular field, or who have some reputation or portfolio in a particular field, we'll assign them one of those roles, and that will open up the family of channels relating to those roles and our current projects surrounding that role: data synthesis projects, agent projects, training projects, etc. So we just tier it out so people can interact. And people who have been around for a while, or people we consider fellows
or part of the cohort, can usually see pretty much everything, so they're pretty effective at serving as coordinators for the cross-communication between these different channels and groups. And even if someone has a particular role, or some channel has a particular role it's supposed to be a part of, it's still Discord and we're still very chill, so people will still work on various overlaps inside of just one channel as well.

[Music]

If you're listening, you know that artificial intelligence is revolutionizing the way we produce information, changing society, culture, politics, the economy. But it's also created a world of AI-generated content, including deepfakes. So how can we tell what's real online? Read, Write, Own: Building the Next Era of the Internet, a new book from entrepreneur and investor Chris Dixon, explores one possible solution to the internet's authenticity problem: blockchains. From AI that tracks its source material to generative programs that compensate rather than cannibalize creators, Read, Write, Own is a call to action for a more open, transparent, and democratic internet; one that opens the black box of AI, tracks the origins of what we see online, and much more. This is our chance to reimagine world-changing technologies, to build the internet we want, not the one we inherited. Order your copy of Read, Write, Own today, or go to readwriteown.com to learn more.

[Music]

I have a selfish question now. This is one of the advantages of doing the podcast: we get to talk to all the amazing people doing amazing things and learn from them. I'm wondering, as a person who is also trying to fine-tune some models, either just for my own enjoyment and learning, but also fine-tuning models for specific tasks and specific customer use cases and that sort of thing... there are a lot of people out there, I think many of our listeners, who are thinking about this too.
Since you, being part of this collective, have worked since the dawn of the proliferation of fine-tunes from Llama and so on, and have seen all of that; as you're doing more and more fine-tunes now, and looking toward the future: do you have any good advice, or things to keep in mind, for all those fine-tuners out there who are thinking about grabbing something off of Hugging Face and creating their own versions of these models? Maybe they have their own ideas about a specific take on a model. Any general tips you've found to be really useful over time, or pitfalls you'd like to highlight?

Yeah, I can try to think of a few off the top of my head. I'll say that hyperparameters are really important, and it's important to try to get them right. It's going to vary from model to model, but a lot of the time some people think hyperparameters don't matter enough to obsess over, and some people treat them like a secret sauce. I'd say try to do a lot of research into good hyperparameters, like a good learning rate. I'd also say, and I could be totally wrong about this, as I am not the trainer of Hermes today or of a lot of these models, but something I personally believe in a lot: ignore people telling you to only train for X amount of time. If you're not overfitting, just keep going. If you have the compute, keep training: train for more tokens, more epochs. That's something I heavily believe in.

In terms of trainers to use, there are a lot of people who make their own scripts for specialty stuff, and of course you can just use Hugging Face, but the library we use is called Axolotl (a-x-o-l-o-t-l, like the animal), by caseus, a.k.a. Wing Lian, of the OpenAccess AI Collective. We think Axolotl is probably the best general-purpose trainer for LoRAs, QLoRAs,
fine-tunes, etc. Like any open source repository, it has bugs and things you're going to have to work out, but it's, in my opinion, probably the easiest and most effective trainer to use for pretty much any model architecture available right now. So I definitely point everybody toward Axolotl.

Awesome, yeah, that's super useful. We'll share some links in our show notes as well, so people, make sure to check that stuff out. Another interesting question: I think we saw these waves of models that came out, maybe around synthetic-data fine-tunes or other types of fine-tunes, and I see this interesting thing happening over the past however many months (not that long in the scheme of things, but in the AI world maybe a while), where there are now a lot of interesting approaches beyond just fine-tuning: mixture of experts, merging, and of course multimodal stuff coming out now. I see Nous dabbling in that. You don't have to answer for the whole collective, but with so many of these things coming out, and so many different approaches, what are some of the things on your mind moving forward, or on Nous's mind more generally?

Sure. I'll try to go from simple to complex on the kind of stuff that sounds great. I think that straight-up instruction tuning is definitely great, and there are other ways to tune, like the Evol-Instruct method. I would advise people to try to create new instruction methodologies that allow us to make even better formatted data. People don't spend enough time trying to create new instruct formats, and we've definitely been too swamped to do that ourselves as well. So, toward the general community: it's a really easy place to get started. You don't need to really know how to code, so much as think about how a human might more
effectively phrase something or format something, and remix from there. I think that's probably the easiest place to start.

Then there's model merging. Model merging is great: you can just take two models and Frankenstein them together, to question-mark results. You've got to just try it, see what happens, and feel it out. From there, I would say there's stuff like DPO. There's RLHF, DPO, these kinds of reward-based things that can let you enable rejections, or create censorship, or put some kind of general concept or attitude into the model. We found that to be pretty effective with the latest Nous Hermes Mixtral DPO; it seems like people really like it and prefer it over just the SFT version. So that's another thing I'd heavily recommend.

From there, we get a little more complex. We have some reward-model stuff we're working on that I won't speak to just yet, outside of saying we're working on it; we think it's going to be pretty big for reasoning boosts. Of course, there are techniques like chain-of-thought and tree-of-thought for multi-step prompting, and creating datasets even out of that, for any of these purposes I've already mentioned, is going to be really effective.

Now, on to stuff that maybe not everybody... actually, a lot of people would already be able to do this. There's something we like to call, over at Nous, activation hacking, where you're messing with the way that a model... I'm trying to say this in the most layman's terms: you're trying to mess with how a model generally vibes about something. Rather than just using a system prompt or something like that, you can actually change the model's vectors to be, say, more political about something, less political about something, more terse, more specific. It has far more effect and control over a model than a system prompt. It's basically like a system prompt that tells
it to embody certain characteristics, but it's not something you can really jailbreak or get around, as far as my testing has shown; certainly not as easily as a system prompt. We have no problem jailbreaking even the most censored closed models today; it can be done by anybody with the right words. But this activation stuff really creates a bit more robustness and fidelity to the concepts you're trying to tell it to embody.

There are a few more I'm trying to think of that would be useful for people. One thing is soft prompting. It's not really around anymore; it used to be pretty big during the GPT-J, pre-Llama days, and the KoboldAI guys really pioneered its use in the open source community. A soft prompt basically takes a massive prompt and compresses it down to way fewer tokens, so you can give your model a huge prompt, a huge system prompt, or a huge amount of information, and use way fewer tokens. Soft prompting is cool, and it's not going to be too difficult to update it for Llama, Mistral, today's architectures; it's just that nobody has really done it, that I've seen. So, to the community: if you guys do that, please share. That's actually much easier than the activation stuff, I think.

And then, finally, probably the hardest and still unsolved: sampling methods. Today we use top-k, top-p, you know, nucleus sampling, etc. There are better ways to pick tokens, for sure; there are better ways to judge the value of tokens, for sure. Everyone has been too concerned with higher levels to get that low and do whatever the magic math is (that I can't do) that would enable some steering, and even beyond steering, alternative sampling paradigms. I think that would probably bring the biggest change and transformation to literally all models, regardless of the tune, regardless of the architecture, etc., if it can get
pulled off. So I'm really looking forward to something like that happening in the space.

That was a lot of really good advice. I was sitting there trying to take notes while you were talking through it, going "wait, but he said that too, and he said that too." A really good answer; thank you for that. As we're starting to wind up here, I wanted to ask you: as we're recording this, it looks like it was just over three weeks ago (about four weeks ago by the time we release this episode) that you guys announced your $5.2 million seed financing round. Congratulations on that; that was pretty amazing.

Thank you.

And I'm wondering: you started with this fairy-tale story of organically building from the ground up. You connect with somebody else, a few other people join, you get to thousands of people contributing and really producing amazing work, and then you're incorporating, and now the seed round is coming. Where does that lead you? It seems like a sky's-the-limit scenario, now that you're launching as a corporation, as you said. Where can you go from here? What do you anticipate over the next couple of years, or even several years out? What's the vision? What do you want to achieve? You've come a long way so far; what's next?

AGI. No, I'm just kidding.

I'd believe you if you said it, actually.

I mean, someone will do it, but...

And then you'll distill the knowledge...

Then we'll distill, and then you'll run the AI on your Neuralink, or on your contact lens, or something, right? But for us, there's a huge focus on locality. There's a huge focus on offline. There's a huge focus on take the power back, run the model yourself, do everything at home. That's big for us. And at the same time, of course, we believe in scale,
but there's this idea that there's so much unsolved at this model size: why don't we do that before we go to a trillion parameters? Because we can scale those realizations. For us, there's certainly a transformation, a change in attitude and in pressures, going from pure open source volunteer work to also having this more corporate branch we've created. But that being said, our ethos and our motivation for why we do this have been pretty consistent. Like you said, it really was organic, in the sense that we're of the times; we're a product of the atmosphere of the AI community. People have said nice things, like "you guys are setting the trend," and that's not really true, so much as the truth is that we are one of many embodiments of the sentiment that the community has, and that the world has. We think there's more than one Nous Research in this world. There's Alignment Lab, there's Pygmalion, there's Kobold; there are people who have been around before us, people who will come along the way, people who have already formed since we have. And there are lots of people who have embodied the Nous Research ethos, and it's not really just our ethos so much as the overall community's ethos: people who have come before us, and people who will come along the way, who do a very similar style of work to ours, this kind of open work. I think that's got everything to do with the fact that this is what the people want. We are just the everyman, just like everybody else. We're not billionaires, or, like, all ex-Facebook, or anything like that. We're just a bunch of people who really, really care about this, who want to see everyone have access to language models, everyone able to automate their lives, everyone able to push their understanding of any topic to the next level. And as for our work, as we become an organization that's looking to, you know, be a company
and create revenue, etc., we won't let that tamper with or hinder any of the open source work we do. In fact, we want it to empower all of that work, because we believe that the tools, developments, and services we will be providing as a corporation will only serve to better feed the entire open source community. We're not really looking to suddenly make a closed Hermes or something like that; we're more looking to create tools and do research that makes your OpenHermes far more effective, far better, and good enough that you may want to pay for that tool.

It sounds like something I would pay for, that's for sure.

Thank you.

Yeah, it's super inspiring. I really appreciate you taking the time to talk with us. I thoroughly enjoyed this, because I'm such a fan of everything you all are doing and the community you've built. So thank you for staying true to that culture and to what you're doing. I'm really looking forward to seeing what happens in the future and where things head, and I hope we can talk again and have Nous back on the show in a year, when of course everything will be different in the AI world, and I'm sure you'll still be doing interesting things. You're always welcome back on the show.

Thank you so much; it's been a pleasure to chat with you guys. Thanks for being so candid, and I'm glad we were able to push our message forth more. Thanks for the validation you and the community have given us to keep doing this great work.

All right, thanks. We'll talk soon. See you.

[Music]

That is Practical AI for this week. Thanks for listening. Subscribe now if you haven't yet; head to practicalai.fm for all the ways. And don't forget to check out our fresh Changelog Beats: the Dance Party album is on Spotify, Apple Music, and the rest; there's a link in the show notes for you. Thanks once again to our partners at Fly.io, to our beat freak in residence,
Breakmaster Cylinder, and to you for listening. That's all for now; we'll talk to you again next time.

[Music]
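The interview above mentions top-k and top-p (nucleus) sampling as the standard ways of picking tokens today, and calls better sampling the hardest unsolved problem. As a rough illustration of what nucleus sampling actually does, here is a minimal, self-contained Python sketch; the toy probability distribution is made up for the example, and a real implementation would operate on model logits rather than a plain list:

```python
import random

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token index using top-p (nucleus) sampling.

    probs: list of token probabilities (assumed to sum to ~1).
    p: cumulative-probability threshold defining the nucleus.

    Keeps the smallest set of highest-probability tokens whose
    cumulative mass reaches p, renormalizes over that set, and
    samples from it, so low-probability tail tokens are never picked.
    """
    rng = rng or random.Random()
    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break  # the nucleus now holds at least p of the mass
    # Renormalize the kept mass and draw one token from it.
    total = sum(probs[i] for i in nucleus)
    r = rng.random() * total
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]

# Toy next-token distribution over a 5-token vocabulary.
probs = [0.5, 0.25, 0.15, 0.07, 0.03]
```

With `p=0.9`, the nucleus here is tokens 0 through 2 (0.5 + 0.25 + 0.15 = 0.9), so tokens 3 and 4 are never sampled; lowering `p` shrinks the nucleus and makes generation more deterministic.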
Large Action Models (LAMs) & Rabbits 🐇

Recently the release of the rabbit r1 device resulted in huge interest in both the device and “Large Action Models” (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.
Leave us a comment (https://changelog.com/practicalai/254/discuss)
Changelog++ (https://changelog.com/++) members save 5 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Read Write Own (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog) – Read, Write, Own: Building the Next Era of the Internet—a new book from entrepreneur and investor Chris Dixon—explores one possible solution to the internet’s authenticity problem: Blockchains. From AI that tracks its source material to generative programs that compensate—rather than cannibalize—creators. It’s a call to action for a more open, transparent, and democratic internet. One that opens the black box of AI, tracks the origins we see online, and much more. Order your copy of Read, Write, Own today at readwriteown.com (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog)
• Shopify (https://www.shopify.com/practicalai) – Sign up for a $1/month trial period at shopify.com/practicalai (https://www.shopify.com/practicalai)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• rabbit r1 (https://www.rabbit.tech/)
• Salesforce blog on LAMs (https://blog.salesforceairesearch.com/large-action-models/)
• LangChain tools (https://python.langchain.com/docs/modules/agents/tools/)
• MM-LLMs: Recent Advances in MultiModal Large Language Models (https://huggingface.co/papers/2401.13601)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-254.md)

[Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

[Music]

Welcome to another Fully Connected episode of the Practical AI podcast. In these Fully Connected episodes, we try to keep you up to date with everything happening in the AI and machine learning world, and try to give you a few learning resources to level up your AI game. This is Daniel Whitenack; I'm the founder and CEO of Prediction Guard, and I'm joined as always by my co-host, Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing very well, Daniel. I'm enjoying the day. And by the way, since you've traveled to the Atlanta area tonight... we haven't gotten together, but you're just a few minutes away, actually, so welcome to Atlanta.

Just got in. Yeah, we're within, well, maybe a short drive, depending on your view of what a short drive is. Anything under three hours is short in Atlanta. And I think you're like 45 minutes away from me right now.

Yeah. So hopefully we'll get a chance to catch up tomorrow, which will be awesome, because we rarely get to see each other in person.

It's been an interesting couple of weeks for me. For those listening from abroad: we had some major ice- and snow-type storms recently, and my great and embarrassing moment was that I was walking back from the office in the freezing rain, and I slipped and fell, and my laptop bag, with the laptop in it, broke my fall. Which is maybe good, but it also broke the laptop. Actually, the laptop works; it's just that the screen doesn't
work, so maybe I'll be able to resolve that.

But it's like a mini portable server, isn't it?

Yeah, exactly. If you have enough monitors around, it's not that much of an issue, but I had to put Ubuntu on a burner laptop for the trip. It's always a fun time.

Speaking of personal devices: there's been a lot of interesting news and releases recently, not only of models but also of interesting actual hardware devices related to AI. One of those is the rabbit r1, which was announced and launched to pre-orders with a lot of acclaim. Another one I saw was the AI Pin, which is a little... I don't know, my grandma would call it a brooch, maybe: a large pin that you put on your jacket or something like that. I'm wondering, Chris, as you see these devices (and I want to dig a lot more into some of the interesting research, models, and data behind some of these things, like rabbit), but just generally: what are your thoughts on this trend of AI-driven personal devices to help you with all of your personal things, plugged into all of your personal data, AI attached to everything in your life?

Well, I think it's coming; maybe it's here. But I know that I am definitely torn. I love the idea of all this help along the way. I forget everything; I'm terrible if I don't write something down and then follow up on the list. I am not a naturally organized person. My wife is, and my wife is always reminding me that I really struggle in this area, and usually she's not being very nice in the way she says it.

It's all love, I'm sure.

Yes, yes. So part of me is like, wow, this is the way I could actually be all there, get all the things done. But the idea of just giving up all my data... it's like so many other things: that aspect is not appealing. So yeah, I guess I'm not
leaping at it.

How much different do you think this sort of thing is from everything we already give over with our smartphones?

That's a good point you're making. I mean, we've had computing devices with us, in our pocket or on our person, 24/7 for at least the past ten years, at least for those who adopted the iPhone or whatever when it came out. So in terms of location, certainly; account access; and certain automations...

What do you think makes it... because obviously this is something on the minds of the makers of these devices; I think both the AI Pin and the rabbit r1 make explicit statements in their launches and on their websites that privacy is really important to them, and that this is how they're doing things, because they really care about it. So obviously they anticipated some kind of additional reaction. But we all already have smartphones, and I think most of us, if we're willing to admit it, know that we're being tracked everywhere and that all of our data goes everywhere. So what is it about this AI element: do you think it makes an actual difference in the substance of what's happening with the data, or is it just a perception thing?

It's probably a perception thing with me. Everything you said, I agree with; you're dead on. We've been giving this data up for years, and we've gotten comfortable with it. That's just something we all kind of don't like, but we've been accepting it for years. And I guess it's the expectation that, with these AI assistants we've been hearing about for so long finally coming, and with things like the rabbit starting to come to market, there's probably a whole new level of analysis of us and all the things, and, in a sense, of it knowing you better than you do, that is uncomfortable. It probably will not be as uncomfortable in the years to come, because we'll grow used to that as well,
but I have to admit, right now it's an emotional reaction; it makes me a little bit leery.

Yeah. Maybe prior to these sorts of devices there was the perception, at least, that yes, my data is going somewhere, and maybe there's a nefarious person behind it, but there's a person behind it. The data is all going to Facebook, or Meta, and maybe they're even listening in on me and putting ads for mattresses in my feed, or whatever the thing is. That perception has been around for quite some time, regardless of whether Facebook is actually listening in, or whether it's another party, like the NSA and the government. But I think all of those perceptions relied on this idea that, even if something bad is happening with my data that I don't want, there's a group of people back there doing something with it. And now there's this idea of an agentic entity behind the scenes doing something with my data without human oversight. If there's anything fundamentally different here, I think it's the level of automation and the agentic nature of this, which does provide some sort of difference. Although, if you're processing voice or something, there have always been voice analytics; you could put that to text, and there were always NLP models in the background doing various things, so some level of automation has already been there.

I agree, and you mentioned perception up front, and I think that makes a big difference. With, like you mentioned, the NSA and intelligence agencies, I think we all just assume they're listening to all the things all the time now, and that's one of those things that's completely beyond your control, so there's almost no reason to worry about it; unless you happen to be one of the people that
an intelligence agency would care about, which I don't particularly think I am. Yeah, so it just goes someplace and you just kind of shrug it off. There's a certain amount of what we've done these years with mobile where you're opting in, and I think it's leveling up. We're saying, with some of these AI agents coming out, we know how much data about ourselves is going to be there, and so it's just escalating the opt-in up to a whole new level. So hopefully... we'll see what happens. Yeah, hope it works out. Well, for the listeners maybe that are just listening to this and haven't actually, maybe you're in parallel doing the search and looking at these devices, but in case you're on your run or in your car, we can describe it a little bit. So I described the AI Pin thing a little bit. The Rabbit, I thought, was a really, really cool design. I don't know if there are any nerds out there that love this sort of synthesizer, analog sequencer, Teenage Engineering stuff that's out there, but actually Teenage Engineering was involved in the hardware design in some way. So it's like a little square thing, the Rabbit R1. It's got one button you can push and speak a command. It's got an actual hardware wheel that you can spin to scroll, and the screen, they show it as black most of the time, but it pops up with, you know, the song you're playing on Spotify, or some of the things you would expect to be happening on a touchscreen, that sort of thing. But the primary interface is thought to be, in my understanding, speech. Not that you would be pulling up a keyboard on the thing and typing in a lot; that's kind of not the point. The point would be this sort of speech-driven, conversational interface, and they even call it a conversational operating system, to do certain actions or tasks, which we'll talk a lot more about, the kind of research behind that. But that's kind of what the
device is and looks like. It's interesting that they're going the device route, and the fact that they're selling the actual unit itself. Over the years, we started on desktops, and then went to laptops, and then went to our phones, and the phones have evolved over time, and we've been talking about wearables and things like that over the years as they've evolved. But I think there's a little bit of a gamble in actually having it as a physical device, because that's something else that they're presuming you're going to put at the center of your life, versus the traditional phone-app approach, where you're using the thing that your customer already has in their hands. What are your thoughts about the physicalness of this offering? I think it's interesting. One of the points, if you watch the release, or launch, or promotion video for the Rabbit R1: he talks about the app-driven nature of smartphones, and there's an app for everything, and there are so many apps now that navigating apps is kind of a task in and of itself. And the Silicon Valley meme: no one ever deletes an app, right? So you just accumulate more and more and more apps, and they build up on your phone, and now you have to organize them into little groupings or whatever. So I think the point being that it's nice that there's an app for everything, but the navigation and orchestration of those various apps is sometimes not seamless, and burdensome. I'm even thinking about myself, kind of checking over here: you know, I got in the Uber, oh, I forgot to switch over my payment on my Uber app, so now I've got to open my bank app, right? And then grab my virtual card number and copy that over, but then I've got to go to my password management app to copy my password. There are all these sorts of interactions between various
things that aren't as seamless as you might think they would be. But it's easy for me to say in words, conversationally, "Hey, I want to update the payment on my current Uber ride," or whatever. So the thought that that would be an easy thing to express conversationally is interesting, and then to have that be accomplished in the background, if it actually works, is also quite interesting. I agree with that, and I can't help but wonder: if you look back at the advent of the phone and the smartphone, the iPhone comes out and it isn't really so much a phone anymore but a little computer, and so the idea of the phone being the base device in your life has been something that's been with us now for over 15 years. And so one of the things I wonder is, could there be a trend where the phone doesn't... if you think about it, you're texting, but a lot of your texting isn't really texting, it's messaging in apps. Maybe the phone is no longer the central device in your life going forward, and maybe you're actually having your primary thing... and so that would obviously play into Rabbit's approach, where they're giving you another device. It packages everything together in that AI OS that they're talking about, where, conversationally, it runs your life, if you expose your life to it the way you do across many apps on the phone. But it's an opportunity, potentially, to take a left turn with the way we think about devices, and maybe the phone is no longer, in the not-so-distant future, the centerpiece. If you're listening, you know that artificial intelligence is revolutionizing the way we produce information, changing society, culture, politics, the economy. But it's also created a world of AI-generated content, including deepfakes. So how can we tell what's real online? Read, Write, Own: Building the Next Era of the Internet, a new book from entrepreneur and investor Chris
Dixon, explores one possible solution to the internet's authenticity problem: blockchains. From AI that tracks its source material to generative programs that compensate rather than cannibalize creators, Read, Write, Own is a call to action for a more open, transparent, and democratic internet, one that opens the black box of AI, tracks the origins of what we see online, and much more. This is our chance to reimagine world-changing technologies and build the internet we want, not the one we inherited. Order your copy of Read, Write, Own today, or go to readwriteown.com to learn more. [Music] All right, Chris, well, there are a few things interacting in the background here in terms of the technology behind the Rabbit device, and I'm sure other similar types of devices that have come out. Actually, there's some of this sort of technology that we've talked a little bit about on the podcast before. I don't know if you remember, we had the episode with AskUI, which had this sort of multimodal model. I think a lot of their focus over time was on testing. A lot of people might test web applications or websites using something like Selenium, something that automates desktop activity or interactions with web applications, for testing purposes or other purposes. And AskUI had some of this technology a while back to perform certain actions using AI on a user interface without hardcoding "click 100 pixels this way and 20 pixels down." So that, I think, has been going on for some time. This adds a different element to it, in that there's the voice interaction, but then they're really emphasizing the flexibility of this and the updating of it. So actually, I think some of the examples they gave are: I have a certain configuration on my laptop or on my screen that I'm using, with a browser with certain plugins that make it look a
certain way, and everything sort of looks different for everybody, and it's all configured in their own sort of way. Even app-wise, apps are very personalized now, right? Which makes it challenging to say "click on this button at this place"; it might not be in the same place for everybody all the time, and of course apps update and that sort of thing. So the solution that Rabbit has come out with to deal with this is what they're calling a large action model, and specifically they're talking about this large action model being a neuro-symbolic model. I want to talk through a little bit of that, but before I do, I think we have to back up and talk a little bit about how AI models and large language models, ChatGPT, have been interacting with external things for some time now. I think there's confusion at least about how that happens and what the model is doing, so it might be good just to set the stage in terms of how these models interact with external things. The way that this looks, at least in the Rabbit case, is you click the button and you say, "Oh, I want to change the payment card on my Uber," unclick, and stuff happens in the background, and somehow the large action model interacts with Uber, and maybe my bank app or whatever, and actually makes the update, right? So the question is how this happens. Have you used any of the plugins in ChatGPT, or the kind of search-the-web type of plugin to a chat interface, or anything like that? Absolutely. I mean, that's what makes... I think people tend to focus on the model itself, you know, that's where all the glory is, and people say, ah, this model versus that, but so much of the power comes in the plugins themselves, or other ways in which they interact with the world. And so as we're trying to pave our way into the future and figure out how we're going to use these, and how they're going to impact our lives, whether
it be the Rabbit way, or whether you're talking ChatGPT with its plugins, that's the key. It's all those interactions, it's the touch points with the different things that you care about, which makes it worthwhile. So yes, absolutely, and I'm looking forward to doing it some more here. Yeah, so there are a couple of things maybe that we can talk about, and some of them are even highlighted in recent things that have happened. One of those is: if you think about a large language model like that used in ChatGPT, or Neural Chat, or LLaMA, whatever it is, you put text in and you get text out. We've talked about that a lot on the show. You put your prompt in and you get a completion; it's like fancy autocomplete, and you get this completion out, right? Not that interesting. We've talked a little bit about RAG on the show, which means I am programming some logic around my prompt such that, when I get my user input, I'm searching my own data, or some external data that I've stored in a vector database or in a set of embeddings, to retrieve text that's semantically similar to my query, and just pushing that into the prompt as a grounding mechanism, to ground the answer in that external data. So you've got basic autocomplete, you've got retrieval to insert external data via a vector database, and you've got some multimodal inputs. By multimodal models I mean things like LLaVA. And actually, this week there was a great paper published, on January 24th (I saw it in the Daily Papers on Hugging Face): "MM-LLMs: Recent Advances in MultiModal Large Language Models." So if you're wanting to know the state of the art and what's going on in multimodal large language models, like I just mentioned, that's probably a much deeper dive that you can go into, so check that out; we'll link it in our show notes. But these are models that would not only take a text prompt but might take a text prompt
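(A quick aside to make the retrieval mechanism just described concrete. This is a toy sketch, not any vendor's actual code: the bag-of-words "embedding" stands in for a real embedding model, and the two-document list stands in for a vector database.)

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG system would use a
    # learned embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in "document store" of external data we want to ground answers in.
docs = [
    "Uber payments can be updated from the ride screen.",
    "Spotify playback can be controlled by voice.",
]

def build_prompt(query: str) -> str:
    # Retrieve the most semantically similar document and push it into
    # the prompt as grounding context before calling the LLM.
    best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
    return f"Context: {best}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How do I change my Uber payment card?"))
```

The LLM itself never searches anything here; the retrieval is ordinary code wrapped around the prompt, which is the point being made above.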
paired with an image, right? So you could put an image in, and you also have a text prompt that says, "Is there a raccoon in this image?", and then hopefully the reasoning happens and it says yes or no, if there's a raccoon in there. There's always a raccoon everywhere. That's one element of this: that would be a specialized model that allows you to integrate multiple modes of data, and there are similar ones out there for audio and text and other things. So again, summary: you've got text-to-text autocomplete; you've got this retrieval mechanism to pull some external text data into your text prompt; you've got specialized models that allow you to bring in an image and text. All of that is super interesting, and I think it's connected to what Rabbit is doing, but there's actually more going on when people perform actions on external systems, or integrate external systems with these sorts of AI models. This is what, in the LangChain world, if you've interacted with LangChain at all, they would call tools, and you even saw things in the past like Toolformer and other models, where the idea was: okay, I have maybe the Google Search API, or SerpAPI, or one of these search APIs, right? I know that I can take a JSON object, send it off to that API, and get a search result. Okay, so now if I want to call that search API with an AI model, what I need to do is get the AI model to generate the right JSON-structured output, so that I can then, just programmatically, not with any sort of fancy AI logic but programmatically, take that JSON object and send it off to the API, get the response, and either plug that in in the sort of retrieval way we talked about before, or just give it back to the user as the response they wanted. So this has been happening for quite a while. This is kind of like, we saw one of these cool AI demos every week, right, where, oh,
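(That "model emits JSON, plain code calls the API" loop can be sketched in a few lines. Hedged illustration: the JSON shape and the `search` function are invented for this example; a real system would send the arguments to an actual search API such as SerpAPI.)

```python
import json

# Pretend the LLM produced this structured output; in practice you prompt
# (or fine-tune) the model to emit exactly this JSON shape.
model_output = '{"tool": "search", "arguments": {"query": "rabbit r1 specs"}}'

def search(query: str) -> str:
    # Stand-in for a real HTTP call to a search API.
    return f"top results for: {query}"

# Registry of callable tools the program knows how to execute.
TOOLS = {"search": search}

def dispatch(raw: str) -> str:
    call = json.loads(raw)          # parse the model's JSON; no AI logic here
    fn = TOOLS[call["tool"]]        # look up the requested tool
    return fn(**call["arguments"])  # ordinary function call with its arguments

result = dispatch(model_output)
print(result)
```

The result can then be handed back to the user, or pushed into the next prompt in the retrieval style described earlier; either way, the API call itself is regular software, not the model.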
the AI is integrated with Kayak now to get me a rental car, and the AI is integrated with this external system, and all really cool. But at the heart of that was the idea that I would generate structured output that I could use, in a regular computer-programming way, to call an API and then get a result back, which I would then use in my system. So that's kind of this tool idea, which is still not quite what Rabbit is doing, but I think that's something people don't realize is happening behind the scenes in these tools. I think that's really popular in the "enterprise," you know, with air quotes there, because in large organizations they're going to the cloud providers with their APIs. Microsoft has their relationship with OpenAI and they're wrapping that, Google has their APIs, and they're using RAG in that same way, to try to integrate with systems instead of actually creating the models on their own. I would say that's a very, very popular approach right now in enterprise environments that are still more software-driven and still trying to figure out how to use APIs for AI models. Yeah, and I can give you a concrete example of something we did with a customer at Prediction Guard, which is the Shopify API. For an e-commerce customer: the Shopify API has this query language, I think it's called ShopifyQL. It's structured, and you can call the regular API via GraphQL, so there's a very structured way you can call this API to get sales information or order information, or do certain tasks. And so you can take a natural language query and say, okay, don't try to give me natural language out, but give me ShopifyQL, or give me something that I can plug into a GraphQL query, and then I'm going to go off and query the Shopify API and either perform some
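(The same pattern, sketched for the Shopify case. Everything here is hypothetical: the GraphQL shape is illustrative and is not guaranteed to match Shopify's real schema; in the real pipeline the query string would come out of the LLM, not a hard-coded function.)

```python
def nl_to_graphql(question: str) -> str:
    # In the real pipeline an LLM generates this string; here we hard-code
    # the kind of structured output the model would be instructed to produce
    # instead of a natural-language answer.
    return (
        "{ orders(first: 5) { edges { node { id name "
        "totalPriceSet { shopMoney { amount } } } } } }"
    )

query = nl_to_graphql("What were my five most recent orders?")

# The structured query, not free text, is what gets sent to the API, e.g.:
#   requests.post(shop_url, json={"query": query}, headers=auth_headers)
print(query)
```

The constraint "give me something I can plug into a GraphQL query" is what makes the model's output programmatically usable at all.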
interaction or get some data, right? So this is very popular; this is how you sort of get AI on top of tools. What's interesting, I think, is what Rabbit observes, and others have observed as well. Take the case of AskUI, like we talked about before: the observation is that not everything has a nice, structured way you can interact with it via an API. Think about it: pull out your phone, you've got all of these apps. Some of them will have a nice API that's well defined; some of them will have an API that I, as a user, know nothing about. Maybe an API exists there, but it's hard to use, or not that well documented, or maybe I don't have the right account to use it, or something. There are all these interactions that I want to do on my accounts, with my web apps, with my apps, that have no defined, structured API to execute all of those things. So then the question comes, and that's why I wanted to lead up to this: even if you can retrieve data to get grounded answers, even if you can integrate images, even if you can interact with APIs, all of that gets you pretty far, as we've seen. But ultimately, not everything is going to have a nice structured API, or it's not going to have an API that's updated, or that has all the features you want, or does all the things you want. So the fundamental question that the Rabbit research team is thinking about, I think, is: how do we reformulate the problem in a flexible way, to allow a user to trigger an AI system to perform arbitrary actions across an arbitrary number of applications, or within an application, without knowing beforehand the structure of that application or its API? So I think that's the really interesting question. I agree with you completely, and you know, there's so much complexity. They refer to it as human intentions expressed through actions on a computer, and that sounds
really, really simple, you know, when you say it like that, but that's quite a challenge to make work in an unstructured world. So I'm really curious. They have their research page, but I don't guess they've put out any papers that describe some of the research they've done yet, have they? Just in general terms. And that's where we get to the exciting world of large action models. Somehow that makes me think of, like, Arnold Schwarzenegger, you know, large action heroes. There you go, exactly. [Music] You know, when we started podcasting back in 2009, an online store was just the furthest thing from our minds. Now we have merch.changelog.com, and you can go there right now and order some t-shirts, and that's all powered by Shopify. It's so easy, all because Shopify is amazing. Shopify is the global commerce platform that helps you sell at every stage of your business, from the "launch your online shop" stage, to the "first real-life store" stage, all the way to the "did we just hit a million dollars?" stage. Shopify is there to help you grow. Whether you're selling security systems or marketing memory modules, Shopify helps you sell everywhere, from their all-in-one e-commerce platform to their in-person POS system. Wherever and whatever you're selling, Shopify has got you covered. Shopify helps you turn browsers into buyers with the internet's best-converting checkout, up to 36% better compared to other leading commerce platforms, and sell more with less effort thanks to Shopify Magic, your AI-powered all-star. You know, nothing gets me and Jerod more excited than when our guests get that coupon code in their email, in their show ships, or... To everyone out there who loves Changelog podcasts: you can go to merch.changelog.com and get your favorite threads to support our podcasts. It is just the best thing ever, from stickers to threads; all of that is at merch.changelog.com. And did you know that Shopify powers 10% of all
e-commerce in the US, and Shopify is the global force behind Allbirds, Rothy's, and Brooklinen, and millions of other entrepreneurs of every size across 175 countries? Plus, Shopify's extensive help resources are there to support your success every step of the way, because businesses that grow, grow with Shopify. Sign up for a $1-per-month trial period at shopify.com/practicalai, all lowercase. Go to shopify.com/practicalai now to grow your business, no matter what stage you're in. Again, shopify.com/practicalai. [Music] Yeah, Chris, so coming from Arnold Schwarzenegger and large action heroes to large action models: I was wondering if this was a term that Rabbit came up with. I think it has existed for some amount of time. I saw it at least as far back as June of last year, 2023; I saw Silvio Savarese's article on the Salesforce AI Research blog about LAMs, from large language models to large action models. I think the focus of that article was very much on the sort of agentic stuff that we talked about before, in terms of interacting with different systems, but in a very automated way. The term "large action model," as far as Rabbit refers to it, is this new architecture that they say they've come up with, and I'm sure they have, because it seems like the device works. We don't know, I think, all of the details about it; at least I haven't seen all of the details, or it's not transparent in the way that a model release would be on Hugging Face, with code associated with it and a long research paper. Maybe I'm missing that somewhere, or listeners can tell me if they found it, but I couldn't find it. They do have a research page, though, which gives us a few clues as to what's going on, and some explanation in general terms. What they've described is that their goal is to observe human interactions with a UI, and there seems to be some sort of multimodal model that is detecting what things
are where in the UI, and they're mapping that onto some kind of flexible, symbolic, synthesized representation of a program. So the user is doing this thing, right? I'm changing the payment on my Uber app, and that's represented, or synthesized, behind the scenes in some sort of structured way, and kind of updated over time as it sees human demonstrations of what's going on. And so, the words that they use... I'll just read this, for people that aren't looking at the article. They say: "We design the technical stack from the ground up, from the data collection platform to the new network architecture." And here's the very dense, loaded wording that probably has a lot packed into it: they say that it utilizes both transformer-style attention and graph-based message passing, combined with program synthesizers that are demonstration and example guided. So there's a lot in that statement, and of course they give a bit more description in other places, but my interpretation is this: the requested action comes in to the system, to the network architecture, and there's a neural layer, so this is a neuro-symbolic model, a neural layer that somehow interprets that user action into a set of symbols or representations that it has learned about the UI, I mean the Shopify UI or the Uber UI or whatever, and then they use some sort of symbolic logic processing of this synthesized program to actually execute a series of actions within the app and perform an action that it has learned through demonstration. So this is what I think they mean when they're talking about neuro-symbolic: there's a neural network portion of this, kind of like when you put something into ChatGPT or a transformer-based large language model and you get something out, in the case we were talking about, getting JSON-structured output when interacting with an external tool. But here it seems
like you're getting some sort of thing out, whatever that is: a set of symbols, or some sort of structured thing, that's then passed through symbolic processing layers, essentially symbolic, rule-based ways to execute a learned program over this application. And by "program" here, they reference a couple of papers, and my best interpretation is that they mean not a computer program in the sense of Python code, but a logical program that represents an action. Like, here is the logical program to update the payment on the Uber app: you go here, and then you click this, and then you enter that, and then you do those things, right? Except here, those synthesized programs are learned by looking at human intentions and what people do in an application, and that's how those programs are synthesized. So that was long; I don't know how well that held together, but that was my best at this point, without seeing anything else, from a single blog post. When you can keep me quiet for a couple of minutes there, it means you're doing a pretty good job. I have a question I want to throw out, and I don't know that you'd be able to answer it, obviously, but just to speculate: while we were talking about that, and thinking about multimodal, I'm wondering about the device itself. It comes with many of the same sensors that you're going to find in a cell phone these days, but I'm wondering if that feeds in more than just the speech. It obviously has the camera on it; it comes with a magnetometer (I can't say the word), GPS, accelerometer, and gyroscope, so it's detecting motion, it knows location, all the things. It has the camera, has the mic. How much of that do you think is relevant to the LAMs, to the large action model, in terms of inputs? Do you think there is potentially relevance in the non-speech and non-camera concerns on it? Do you
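(A caricature of that neural/symbolic split in code, before the conversation moves on. Everything here is speculative: Rabbit has not published code, and the program format, names, and intent mapping below are invented purely for illustration.)

```python
# "Programs" learned from human demonstrations: symbolic action sequences,
# not Python code, that represent how to accomplish an intent in an app.
PROGRAMS = {
    "update_uber_payment": [
        ("open_app", "uber"),
        ("tap", "payment_settings"),
        ("select", "saved_card"),
        ("confirm", None),
    ],
}

def neural_intent(utterance: str) -> str:
    # Stand-in for the neural layer: map an utterance to a program name.
    # A real system would use a learned model, not keyword matching.
    if "payment" in utterance.lower() and "uber" in utterance.lower():
        return "update_uber_payment"
    return "unknown"

def symbolic_execute(program_name: str) -> list[str]:
    # Stand-in for the symbolic layer: step through the learned program
    # deterministically, one UI action at a time.
    return [f"{op}({arg})" if arg else f"{op}()"
            for op, arg in PROGRAMS.get(program_name, [])]

steps = symbolic_execute(neural_intent("Change the payment on my Uber ride"))
print(steps)
```

The neural half is fuzzy (utterance to intent); the symbolic half is exact (intent to a fixed, auditable action sequence), which is roughly the division of labor the research page seems to describe.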
think the way people move could have some play in there? And I know we're being purely speculative; they just caught my imagination. Yeah, I'm not sure. I mean, it could be that that's used in ways similar to how those sensors are used on smartphones these days. Like, if I'm asking Rabbit to book me an Uber to here, or something like that, it could infer my location maybe from where I'm wanting to go, or ask me where I am, but likely the easiest thing would be to use the GPS sensor to know my location and just put that as the pin in the Uber app, and now it knows. So I think there's some level of interaction between these things. I'm not sure how much, but it seems like, at least in terms of location, I could definitely see that coming into play. I'm not sure about the other ones. Well, physically it looks a lot like a smartphone without the phone. Yeah, a smartphone with a different sort of aspect ratio, but still kind of a touchscreen. I think you can still pull up a keyboard and that sort of thing, and you see things when you prompt it. So yeah, I imagine that's maybe an evolution of this over time: sensory input of various things. I could imagine that being very interesting in running or fitness types of scenarios, like if I've got my Rabbit with me and I instruct Rabbit to post a celebratory social media post every time I keep my mileage, or my time per mile, at a certain level, and it's using some sort of sensors on the device to do that. I think there are probably ways that that will work out; I'm not sure about now. It'll be interesting, if this approach sticks. I might make an analogy to things like the Oura Ring, you know, for health: wearing that, and then competitors started coming out, and then Amazon has their own version of a health ring that's coming out. Along those lines, you have all these
incumbent players in the AI space that are, for the most part, very large, well-funded cloud companies, and in at least one case a retail company blended in there. And so, if this might be an alternative in some ways to the smartphone being the dominant device, and it has all the same capabilities plus more, and they have the LAM behind it to drive that functionality, how long does it take for an Amazon, or a Google, or a Microsoft to come along after this and start producing their own variant? Because they already have the infrastructure they need to produce the back end, and Google and Amazon certainly produce front-end stuff quite a lot as well. So it'll be interesting to see if this is the beginning of a new marketplace opening up in the AI space, as an entrant. So there's already really great hardware out there for smartphones, and I wonder if something like this is kind of a shock to the market. But in some ways, you know, just as phones with physical buttons morphed into smartphones with touchscreens, I could see smartphones that are primarily app-driven in the way we interact with them now being pushed in a certain direction because of these interfaces. And so smartphones won't look the same in two years as they do now, and they won't follow that same app-driven trajectory, probably, because of things that are rethought. It might not be that we all have Rabbits in our pocket, but maybe smartphones become more like Rabbits over time. I'm not sure. I think that's very likely a thing that happens. It's also a little bit hard for me to parse out what's happening, what the workload is like between what's happening on the device and what's happening in the cloud, and what sort of connectivity is actually needed for full functionality with the device. Maybe that's
something, if you want to share your own findings on that in our Slack community at changelog.com/community, we'd love to hear about it. My understanding is that at least a good portion of the LAM, and the LAM-powered routines, are operating on a centralized sort of platform and hardware, so there isn't a huge model running on a very low-power device that might suck away all the energy. But I think that's also an interesting direction: how far could we get, especially with local models getting so good recently, with fine-tuned, locally optimized, quantized models doing action-related things on edge devices in our pockets, not relying on stable and high-speed internet connections, which of course also helps with the privacy-related issues. Agreed. By the way, I'm going to make a prediction: I'm predicting that a large cloud computing service provider will purchase Rabbit at some point. All right, you heard it here first. I don't know what sort of odds Chris is giving, and I'm not going to bet against him, that's for sure, but I think that's interesting. There will definitely be a lot of action models of some type, whether those be tool-using LLMs, or LAMs, or SLMs, or whatever we've got coming up. And see, they should have named it... they could have named it a LAM instead of a Rabbit, you know. I just want to point out, they're getting their animals mixed up, man. Yeah, that's a really good point. I don't know if they came up with Rabbit before LAM, but I think they probably could have figured something out. Yeah, and the only thing that could have beaten the LAM is a raccoon, of course, but that's beside the point. Had to come around full circle there. Of course, of course. We'll leave that device up to you as well. Yeah. All right, well, this has been fun,
Chris. I do recommend, in terms of people wanting to learn more: there's a really good research page on rabbit.tech (rabbit.tech/research), and down at the bottom of the page there's a list of references that they share throughout, that people might find interesting as they explore the technology. I would also recommend that people look at LangChain's documentation on tools, and also maybe just check out a couple of these tools. They're not that complicated; like I say, they expect JSON input and then they run a software function and do a thing. That's sort of what's happening there. So maybe check some of those out, and the array of tools that people have built for LangChain, and try using them. So yeah, this has been fun, Chris. Thanks. It was great; thanks for bringing the Rabbit to our attention. Yeah, hopefully see you in person soon. That's right, and we'll include some links in our show notes, so everyone take a look at them. Talk to you soon, Chris. Have a good one. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freakin' residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Collaboration & evaluation for LLM apps | Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the confusion around proper evaluation of LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
Leave us a comment (https://changelog.com/practicalai/253/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Read Write Own (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog) – Read, Write, Own: Building the Next Era of the Internet—a new book from entrepreneur and investor Chris Dixon—explores one possible solution to the internet’s authenticity problem: Blockchains. From AI that tracks its source material to generative programs that compensate—rather than cannibalize—creators. It’s a call to action for a more open, transparent, and democratic internet. One that opens the black box of AI, tracks the origins we see online, and much more. Order your copy of Read, Write, Own today at readwriteown.com (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog)
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Raza Habib – Twitter (https://twitter.com/razrazcle) , LinkedIn (https://www.linkedin.com/in/humanloop-raza)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Humanloop (https://humanloop.com/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-253.md) | 19 | 0 | 0 | [Music], welcome to practical AI if you work in, artificial intelligence aspire to or are, curious how AI related tech is changing, the world this is the show for you thank, you to our partners at fly.io the home, of changelog.com with 30 plus regions on six, continents so you can launch your app, near your users learn more at, [Music], fly.io welcome to another episode of, practical AI this is Daniel Whitenack I am, CEO and founder at Prediction Guard and, really excited today to be joined by Dr, Raza Habib uh who is CEO and co-founder, at Humanloop how you doing Raza hi, Daniel it's a pleasure to be here I'm, doing very well um yeah thanks for, having me on yeah yeah super excited to, talk with you I'm mainly excited to talk, with you selfishly because I see the, amazing things that Humanloop is is, doing and the really critical problems, that you're thinking about and every day, of my life it's like how am I managing, prompts and how does this next model, that I'm upgrading to how do my prompts, do in that model and how am I, constructing workflows around using LLMs, which it definitely seems to be the main, thrust of some of the things that you're, thinking about at Humanloop before we, get into the specifics of those things, at Humanloop would you mind setting the, context for us in terms of workflows, around these LLMs collaboration on team, like how did you start thinking about, this problem and what does that mean in, reality for those working in industry, right now maybe more generally than at, Humanloop yeah absolutely so I guess on, the on the question of like how I came, to be working on this problem it was, really something that my co-founders, Peter and Jordan and I had been working, on for a very long time actually so you, know previously Peter and I did PhDs, together around this area and then when, we 
started the company it was a little, while after transfer learning had, started to work in NLP for the first, time and we were mostly helping, companies fine-tune smaller models but, then sometime midway through 2022 we, became absolutely convinced that the rate of, progress for these larger models was so, high it was going to start to eclipse, essentially everything else in terms of, performance but more importantly in, terms of usability right it was the, first time that instead of having to, like hand annotate a new data set for, every new problem there was this new way, of customizing AI models which was that, you could write instructions in natural, language and have a reasonable, expectation that the model would then do, that thing and that was unthinkable you, know at the start of 2022 I would say or, maybe a little bit earlier and so that's, really what made us want to go work on, this because we realized that the, potential impact of NLP was already, there but the accessibility had been, expanded so far and the capabilities of, the models had increased so much that, there was a particular moment to go do, this but at the same time it introduces, a whole bunch of new challenges right so, I guess historically the people who were, building AI systems were machine, learning experts the way that you would, do it is you would collect annotated, data you'd fine-tune a custom model it, was typically being used for like one, specific task at a time there was a, correct answer so it was easy to, evaluate and with LLMs the power also, brings new challenges so the way that, you customize these models is by writing, these natural language instructions, which are prompts and typically that, means that the people involved don't, need to be as technical and usually we, see actually that the the best people to, do prompt engineering tend to have, domain expertise so often it's a product, manager or someone else within a company, who's leading the prompt engineering, efforts 
but you also have this new, artifact lying around which is the, prompt and it has a similar impact to, code on your end application so it needs, to be versioned and managed and treated, with the same level of respect and rigor, that you would treat normal code but, somehow you also need to have the right, workflows and collaboration that lets, the non-technical people work with the, engineers on the product or the less, technical people and then the extra, challenge that comes with it as well is, that it's very subjective to measure, performance here so in traditional code, we're used to running unit tests, integration tests regression tests we, know what good looks like and how to, measure it and even in traditional, machine learning you know there's a, ground truth data set people calculate, metrics but once you go into generative, AI it tends to be harder to say what is, the correct answer and so when that, becomes difficult then measuring, performance becomes hard if measuring, performance is hard how do you know when, you make changes if you're going to, cause regressions or you know all the, different design choices you have in, developing an app how do you make those, design choices if you don't have good, metrics of performance and so those are, the problems that motivated what we've, built and really Humanloop exists to, solve both of these problems so to help, companies with the tasks of finding the, best prompts managing versioning them, dealing with collaboration but then also, helping you do the evaluation that's, needed to have confidence that the, models are going to behave as you expect, in production and as related to these, things um maybe you can start with one, that you would like to start with and go, to the others but in terms of managing, versioning prompts evaluating the, performance of these models dealing with, regressions as you've kind of seen, people try to do this across probably a, lot of different clients a lot of, different 
industries how are people, trying to manage this in maybe some good, ways and some bad ways yeah I think we, see a lot of companies go on a bit of a, journey so early on you know people are, excited about generative AI LLMs there's a, lot of hype around it now so some people, in the company just go try things out, and often they'll start off using one of, the large you know publicly available, models so OpenAI or Anthropic Cohere one, of these they'll prototype in their own, kind of playground environment that, those providers have they'll eyeball a, few examples maybe they'll grab a, couple of libraries that support, orchestration and they'll put together a, prototype and the first version is, fairly easy to build it's you know it's, very quick to get to like the first wow, moment and then as people start moving, towards production and they start, iterating from that you know maybe 80%, good enough version to something that, they really trust they start to run into, these problems of like oh I've got like, 20 different versions of this prompt and, I'm storing it as a string in code and, actually I want to be able to, collaborate with a colleague on this and, so now we're sharing things you know, either via screen sharing or we're like, both you know we've had some serious, companies you would have heard of who, are sending their model configs to each, other via Microsoft Teams and obviously, you know you wouldn't send someone an, important piece of code through Slack or, Teams or something like this but because, the collaboration software isn't there, to bridge this technical non-technical, divide those are the kind of problems we, see and so at this point typically a, year ago people would start building, their own solution so more often than, not like this was when people would, start building in-house tools, increasingly because there are companies, like Humanloop around that's usually, when someone books a demo with us and, they say hey you know we've reached 
this, point where actually managing these, artifacts has become cumbersome we're, worried about the quality of what, we're producing do you have a solution, to help and the way that Humanloop, helps at least on the prompt management, side is we have this interactive, environment it's a little bit like those, OpenAI playgrounds or the Anthropic, playground but a lot more fully featured, and designed for actual development so, it's collaborative it has history built, in you can connect variables and data, sets and so it becomes like a, development environment for your sort of, LLM application you can prototype the, application interact with it try out a, few things and then people progress from, that development environment into, production through evaluation and, monitoring you mentioned this kind of in, passing I'd love to dig into it a little, bit more you mentioned kind of the types, of people that are coming you know at, the table in designing these systems and, oftentimes domain experts you know, previously in working as a data, scientist it was always kind of assumed, oh you need to talk to the domain, experts but it's sort of like at least, for many years it was like data, scientists talk to the domain experts, and then go off and build their thing, the domain experts were not involved in, the sort of building of the system and, even then like the data scientists were, maybe building things that were kind of, foreign to software engineers and what, I'm hearing you say is you kind of got, like these multiple layers you have like, domain experts who might not be that, technical you've got maybe AI and data, people who are using this kind of unique, set of tools maybe even they're hosting, their own models and then you've got, like product software engineering people, seems like a much more complicated, landscape of interactions how have you, seen this kind of play out in reality in, terms of non-technical people and, technical people both working 
together, on something that is ultimately, something implemented in code and run as, an application I actually think one of, the most exciting things about LLMs and the, progress in AI in general is that, product managers and subject matter, experts can for the first time be very, directly involved in implementing these, applications so I think it's always been, the case that the PM or someone like, that you know is the person who distills, the problem speaks to the customers, produces the spec but there's this, translation step where they sort of, produce that PRD document and then, someone else goes off and implements it, and because we're now able to program at, least some of the application in natural, language actually it's accessible to those, people very directly and it's worth, maybe having a concrete example so like, I use um an AI note taker for a lot of, my sales calls and it records the call, and then I get a summary afterwards and, the app actually allows you to choose a, lot of different types of summary so you, can say hey I'm a salesperson I want a, summary that will extract budget and, authority and need and timeline versus, you can say oh actually I had a product, interview and I want a different type of, summary and if you think about, developing that application the person, who has the knowledge that's needed, to say what a good summary is and to, write the prompt for the model is the, person who has that domain expertise, it's not the software engineer but, obviously the prompt is only one piece, of the application right if you got a, question answering system there's, usually retrieval as part of this there, may be other components usually the LLM, is a block in a wider application so you, obviously still need the software, engineers around because they're, implementing the bulk of the application, but the product managers can be much, more directly involved and then you know, actually we see increasingly less, involvement from machine learning or 
AI, experts and less people are fine-tuning, their own models so for the majority of, product teams we're seeing there is an, AI platform team that maybe facilitates, setting things up but the bulk of the, work is led by the product managers and, then the engineers and one interesting, example of this on the extreme end is, one of our customers it's a very large, edtech company they actually do not let, their engineers edit the prompts so they, have a team of linguists who do prompt, development the linguists finalize the, prompts they're saved in a serialized, format and they go to production but, it's a one-way transfer so the engineers, can't edit them because they're not, considered able to assess the the actual, outputs even though they are responsible, for the rest of the application just, thinking about how teams interact and, who's doing what it seems like the, problems that you've laid out are I, think very clear and worth solving but, it's probably hard to think about well, am I building a developer tool or am I, building like something that these, non-technical people interact with or is, it both how did you think about that as, you kind of entered into the stages of, bringing Humanloop into existence I, think it has to be both and the honest, answer is it evolved kind of organically, by you know going to customers speaking, to them about their problems and trying, to figure out what the best version of a, solution looked like so we didn't set, out to build a tool that needed to do, both of these things but I think the, reality is you know given the problems, that people face you do need both and, you know an analogy to think about might, be something like Figma right like Figma, is somewhere where multiple different, stakeholders come together to iterate on, things and to develop them and provide, feedback and I think you need something, analogous to that for GenAI although it's, not an exact analogy because we also, need to attach the evaluation to this so, 
it's almost by necessity that we've had, to do that but I also think that um it's, very exciting right and the reason I, think it's exciting is because it is, expanding who can be involved in, developing these, [Music], [Applause], [Music], applications if you're listening you, know software is built from thousands of, small technical choices and some of, these seemingly inconsequential choices, can have a profound impact on the, economics of internet services who gets, to participate in them build them and, profit from them this is especially true, for artificial intelligence where the, decisions we make today can determine, who can have access to world changing, technologies and who can decide their, future Read Write Own building the next, era of the internet is a new book from, startup investor Chris Dixon that, explores the decisions that took us from, open networks governed by communities to, massive social networks run by internet, giants this book Read Write Own is a call, to action for building a new era of the, internet that puts people in charge from, projects that compensate creators for, their work to protocols that fund open-, source contributions this is our chance, to build the internet we want not the, one we inherited order your copy of Read, Write Own today or go to readwriteown.com, to learn, [Music], more, [Music], you mentioned how this environment of, domain experts coming together and, technical teams coming together in a, collaborative environment opens up new, possibilities for both collaboration and, innovation I'm wondering if at this, point you could kind of just lay out, we've talked about the problems we've, talked about those involved and those, kind of that would use such a system or, a platform to enable this these kind of, workflows could you describe a little, bit more what Humanloop is specifically, in terms of both what it can do and kind, of how these different personas engage, with the system yeah so I guess in terms, of what it 
can do concretely it's, firstly helping you with prompt, iteration versioning and management and, then with evaluation and monitoring and the, way it does that is there's a web app, and there's a web UI where people are, coming in and in that UI is an, interactive playground like environment, where people basically try out different, prompts they can compare them side by, side with different models they can try, them with different inputs when they, find versions that they think are good, they save them and then those can be, deployed from that environment to, production or even to a development or, staging environment so that's the kind, of development stage and then once you, have something that's developed what's, very typical is people then want to put, in evaluation steps into place so you, can define gold standard test sets and, then you can define evaluators within, Humanloop and evaluators are ways of, scoring the outputs of a model or a, sequence of models because oftentimes, you know the LLM is part of a wider, application and so the way that scoring, works is um there's very traditional, metrics that you would have in code for, any machine learning system so precision, recall ROUGE BLEU these kind of scores, that anyone from a machine learning, background would already be familiar, with but what's new in the in the kind, of LLM space is also things that help, when things are more subjective so we, have the ability to do model as judge, where you might actually prompt another, LLM to score the output in some way this, can be particularly useful when you're, trying to measure things like, hallucination right so a very common, thing to do is to ask the model you know, is the final answer contained within the, retrieved context or is it possible to, infer the answer from the retrieved, context and you can calculate those, scores and then the final way is we also, support human evaluation so in some, cases you know you really do want either, feedback from an 
end user or from an, internal annotator involved as well and, so we allow you to gather that feedback, either from your live production, application and have it you know logged, against your data or you can queue, internal annotation tasks from a team, and I can maybe tell you a little bit, more about sort of in production, feedback because that's something that, that's actually where we started yeah, yeah go ahead would love to hear more, yeah so I think that because it's so, subjective for a lot of the applications, that people are building whether it be, email generation question answering a, language learning app there isn't a, correct answer quote unquote and so, people want to measure how things are, actually performing with their end users, and so Humanloop makes it very easy to, capture different sources of end user, feedback and that might be explicit, feedback things like thumbs up thumbs, down votes that you see in ChatGPT but, it can also be more implicit signals so, how did the user behave after they were, shown some generated content did they, progress to the next stage of the, application did they send the generated, email did they edit the text and all of, that feedback data becomes useful both, for debugging and also for fine-tuning, the model later on so that evaluation, data becomes this rich resource that, allows you to continuously improve your, application over time yeah that's, awesome and I know that that fits in so, maybe you could talk a little bit about, how you're one of the things that you, mentioned earlier is you're seeing fewer, people do fine-tuning which I I see this, very commonly as a it's not an, irrelevant point but it's maybe a, misconception where like a lot of teams, come into this space and they just, assume they're going to be fine-tuning, their models and often what they end up, doing is fine-tuning their workflows or, their language model chains or their, retrieval the data that they're, retrieving or their prompt 
formats or, that templates or that sort of thing, they're not really fine-tuning and I, think there's this really blurred line, right now for many teams that are, adopting AI into their organization, where they'll frequently just use the, term oh I'm training the AI to do this, and now it's better right but all, they've really done is just inject some, data into their prompts or or something, like that so could you maybe help like, clarify that distinction and also in, reality what you're seeing people do, with this capability of evaluation both, online and offline and how that's, filtering back into upgrades to the, system or actual fine-tunes of models, yeah so I guess you're right there's a, lot of jargon involved and especially, for people who are new to the field the, word fine-tuning has a colloquial, meaning and then it has a technical, meaning in machine learning and the two, end up being blurred so you know, fine-tuning in a machine learning context, usually means doing some extra training, on the base model where you're actually, changing the weights of the model given, some sets of example pairs of inputs and, outputs that you want and then obviously, there's like prompt engineering and, adaptation and maybe context engineering, where you're changing the instructions, to the language model or you're changing, the uh data that's fed into the context, or how the you know an agent system, might be set up and both are really, important typically the advice we give, the majority of our customers, and what we see play out in practice is, that people should first push the, limits of prompt engineering because, it's very fast it's easy to do and it, can have like very high impact, especially around changing the uh sort, of outputs and also in helping the model, have the right data that's needed to, answer the question so prompt, engineering is kind of usually where, most people start and sometimes where, people finish as well and fine-tuning, tends to be useful 
either if people are, trying to improve latency or cost or, if they have like a particular tone of, voice or output constraint that they, want to enforce so you know if people, want the model to output valid JSON, then fine-tuning might be a great way to, achieve that or if they want to use a, local private model because it needs to, run on an edge device or something like, this then fine-tuning I think is a great, candidate and it can also let you reduce, costs because oftentimes you can fine, tune a smaller model to get similar, performance the analogy I like to use is, fine-tuning is a bit like compilation, right you've already sort of built your, first version of the application and when you, want to optimize it you might use a, compiled language and you've got a kind, of compiled binary I think there was a, second part to your question but just, remind me actually I've lost the second, part yeah basically you mentioned that, maybe fewer people are doing fine-tunes, um maybe you could comment on I don't, know if you have a sense of of why that, is or how you would see that sort of, progressing into into this year as more, and more people adopt this technology, and maybe get better tooling around the, let's not call it fine-tuning so we, don't mix all the jargon but the the, iterative development of these systems, do you see that trend continuing or um, how do you see that kind of going into, maybe larger or wider um adoption in, 2024 yeah so I think that we've, definitely seen less fine-tuning than we, thought we would see when we started you, know when we launched, this version of Humanloop back in 2022, and I think that's been true of others, as well like I've spoken to friends at, OpenAI and OpenAI is expecting there will, be more fine-tuning in the future but, they've been surprised that there wasn't, more initially I think some of that is, because prompt engineering has turned, out to be remarkably powerful and also, because 
some of the changes that people, want to do to these models are more, about getting factual context into the, model so one of the downsides of LLMs, today is they're obviously trained on, the public internet so they don't, necessarily know private information, about your company they tend not to know, information past the training date of, the model and you know one way you might, have thought you could overcome that is, I'm going to fine-tune the model on my, company's data but I think in practice, what people are finding is a better, solution to that is to use a hybrid, system of search or or information, retrieval plus generation so what's come, to be known as like RAG or retrieval, augmented generation has turned out to, be a really good solution to this, problem and so the main reasons to fine, tune now are more about optimizing cost, and latency and maybe a little bit tone, of voice but they're not needed so much, to adapt the model to a specific use, case and fine-tuning is a heavier-duty, operation because it takes longer you, know you can edit a prompt very quickly, and then see what the impact is, fine-tuning you need to have the data, set that you want to fine-tune on and, then you need to run a training job and, then evaluate that job afterwards so, there are certainly circumstances where, it's going to make sense I think, especially anyone who wants to use a, private open source model will likely, find themselves wanting to do more, fine-tuning but the quality of of prompt, engineering and the distance you can go, with it I think took a lot of people by, surprise and on that note you mentioned, the closed proprietary model ecosystem, versus open models that people might, host in their own environment and/or, fine-tune on their own data I know, that Humanloop like you you explicitly, say that you kind of have all of the, models you're you're integrating these, sort of closed models and integrate with, open models why and how is that kind of, 
decided to kind of include all of those, and in terms of the mix of what you're, seeing with people's, implementations how do you see this sort, of proliferation of open models, impacting the the workflows that you're, supporting in the future so the reason, for supporting them again is largely, customer pull right what we were finding, is that many of our customers were using, a mixture of models for different use, cases either because the large, proprietary ones had slightly different, performance trade-offs or because there, were use cases where they cared about, privacy or they cared about latency and, so they couldn't use a public model for, those instances and so we had to support, all of them it really was something that, it would it wouldn't be a useful product, to our customers if they could only use it, for one particular model and the way, we've got around this is that like we, try to integrate all the publicly, available ones but we also make it easy, for people to connect their own models, so they don't necessarily need us you, know as long as they expose the, appropriate APIs you can plug in any, model to Humanloop that would be a, matter of connecting or hosting the model, and making sure that the API contract, that you're expecting in terms of, responses from a model server that maybe, someone's running in their own AWS or, wherever would fulfill that that, contract that's exactly right yeah and in, terms of you know the the proliferation, of open source and and how that's going, you know I think there's still a, performance gap at the moment between, the very best closed models so between, GPT-4 or some of the better models from, Anthropic and the best open source but, it is closing right so the latest models, from say uh Mistral have proved to be, very good Llama 2 was very good, increasingly you're not paying as big a, performance gap though there is still, one but you need to have high volumes, for it to be economically competitive to, host your own 
model so the main reasons, we see people doing it are related to, data privacy companies that for whatever, reason you know cannot or don't want to, send data to a third party end up um, using open source and then also anyone, who's doing things on edge and who wants, sort of real time or very low latency, ends up using open, [Music], source this is a Changelog news break, Vanna.AI is a Python RAG framework for, accurate text-to-SQL generation it lets, you chat with any relational database by, accurately generating SQL queries, trained via RAG which stands for, retrieval augmented generation to use, with any LLM that you want you load up, your data definitions your documentation, and any raw SQL queries you have laying, around into Vanna and then you're off to, the races Vanna boasts high accuracy, on complex data sets excellent security, and privacy because your database, contents are never sent to the LLM or a, vector DB it boasts the ability to, self-learn by choosing to auto-train on, successful queries and a choose your own, front-end approach with front ends, provided for Jupyter Notebook Streamlit, Flask and Slack you just heard one of, our five top stories from Monday's, Changelog News subscribe to the podcast, to get all of the week's top stories and, pop your email address in at, changelog.com/news to also receive our free, companion email with even more developer, news worth your attention once again, that's, [Music], changelog.com/news I'd love for you to maybe, describe if you can we we've kind of, talked about the problems that you're, addressing we've talked about the, sort of workflows that you're enabling, the evaluation some trends that you're, seeing but I'd love for you to describe, if you can maybe for like a, non-technical persona like a domain, expert who's engaging with the Humanloop, system and maybe for a more, technical person who's integrating you, know data sources or other things what, does it look like to use the Humanloop, system 
Maybe describe the roles these people are in and what they're trying to do from each perspective, because I think that might be instructive for people trying to engage domain experts and technical people in a collaboration around these problems. Absolutely. It might be helpful to have an imagined concrete example. A very common one we see is people building some kind of question-answering system, maybe for their internal customer service staff, or maybe they're trying to build an internal question-answering system to replace an FAQ, that kind of thing. So there's a set of documents, questions are going to come in, there will be a retrieval step, and then they want to generate an answer. Typically the PMs or the domain experts will be figuring out the requirements of the system: what does good look like, what do we want to build? The engineers will be building the retrieval part, orchestrating all the model calls in code, integrating the Humanloop APIs into their system, and usually leading on setting up evaluation. Once it's set up, the domain experts might continue to do the evaluation themselves, but the engineers tend to set it up the first time. So if you're the domain expert, typically you would start off in our playground environment, where you can just try things out. The engineers might connect a database to Humanloop for you, maybe storing the data in a vector database and connecting that to Humanloop, and once you're in that environment you can try different prompts against the models: against GPT-4, against Cohere, against an open source model, see what impact that has, and see if you're getting answers that you like. Oftentimes, early on, it's not in the right tone of voice, or the retrieval system is not quite right,
and so the model is not giving factually correct answers. It takes a certain amount of iteration to get to the point where, even when you eyeball it, it's looking appropriate. Usually at that point people move to a bit more rigorous evaluation: they might generate, either automatically or internally, a set of test cases, and they'll come up with a set of evaluation criteria that matter to them in their context. They'll set up that evaluation, run it, and then usually at that point they might deploy to production. That's the point at which things end up with real users, and they start gathering user feedback. The situation isn't finished at that point, because people then look at the production logs or the real usage data, filter based on the evaluation criteria, and might say, "Hey, show me the ones that didn't result in a good outcome." Then they'll try to debug them in some way, maybe make a change to a prompt, rerun the evaluation, and submit it. So the engineers are doing the orchestration of the code: they're typically making the model calls, and they'll add logging calls to Humanloop. There are a couple of ways of doing the integration, but you can imagine that every time you call the model, you're effectively also logging back to Humanloop what the inputs and outputs were, as well as any user feedback data. And the domain experts are typically looking at the data, analyzing it, debugging, making decisions about how to improve things, and they're able to take some of those actions themselves in the UI. Yeah, and if I just abstract that a bit to give people a frame of thinking: it sounds like there's this framework setup where there are data sources, there are maybe logging calls within a version of an application, and whether you're using a hosted model or a
proprietary API, you decide that. So it's set up, and then there's maybe an evaluation or prototyping phase, let's call it, where the domain experts try their prompting. Eventually they find prompts that they think will work well for the various steps in a workflow, and those are pushed, as you said, in one way or another into the actual code or applications, such that the domain experts are in charge of the prompting to some degree. And as you're logging feedback into the system, the domain experts are able to iterate on their prompts, which hopefully improves the system, and those are then pushed back into the production system, maybe after an evaluation. Is that a fair representation? Yeah, I think that's a great representation; thanks for articulating it so clearly. One of the things the evaluation becomes useful for is avoiding regressions. People might notice one type of problem, go in and change the prompt or the retrieval system, and they want to make sure they don't break what was already working; having good evaluation in place helps with that. And maybe it's also worth, because I don't think we did this at the beginning, thinking about what the components of these LLM applications are. We think of an LLM app as being composed of a base model, which might be a private fine-tuned model or one of the large public ones; a prompt template, which is usually an instruction to the model that might have gaps in it for retrieved data or context; and a data collection strategy. That whole thing of data collection, prompt template, and model might be chained together in a loop or repeated one after another, and there's an extra complexity, which is that the models might also be allowed to call tools or APIs. But I think those pieces,
taken together, more or less comprehensively cover things. So tools, data retrieval, prompt template, and base model are the main components, but within each of those you have a lot of design choices and freedom, and so you have a combinatorially large number of decisions to get right when building one of these applications. One of the things you mentioned is this evaluation phase helping prevent regressions, because in behaviorally testing the output of the models, you might make one change that looks like it's improving things on a small set of examples but has different behavior across a wide range of examples. I could imagine two scenarios. Models are being released all the time, whether it's upgrading from one version of a GPT model to the next, or from this Mistral fine-tune to that one. In the past few days we've been using the Neural Chat model from Intel a good bit, and there's a version of it that Neural Magic released, a sparsified version where they pruned out some of the weights and layers to make it more efficient and able to run on commodity hardware that's more widely available. One of the questions we were discussing is: we could flip the version of this model to the sparse one, but we have to decide how to evaluate that over the use cases we care about, because you could look at the output for a few test prompts and it might look similar, or good, or even better, but on a wider scale it might be quite different in ways you don't expect. So I could see the evaluation also being used for that. But I could also see where, if you're upgrading to a new model, it could just throw everything up in the air: maybe this is an entirely different prompt format, or this is a
whole new behavior from this new model that is distinct from the old one. So how are you seeing people navigate that landscape of model upgrades? I think you should just view it as a change, as you would to any other part of the system, and hopefully the desired behavior of the model is not changing. Even if the model has changed, you still want to run your regression tests and ask: are we meeting the minimum threshold we had on this gold-standard test set before? In general, we see evaluation happening at three different stages. During development there's the interactive stage, very early on when you're prototyping: you want fast feedback, and you're just looking to get a sense of whether this is even working appropriately. At that stage, eyeballing examples and looking at things side by side in a very interactive way can be helpful. Interactive testing can also be helpful for adversarial testing: a fixed test set doesn't tell you what will happen when a user who actually wants to break the system comes in. A concrete example: one of our customers has children as their end users, and they want to make sure things are age-appropriate. They have guardrails in place, but when they come to test the system, they don't want to just test it against an input that's benign; they want to see: if we really red-team this, can we break it? There, interactive testing can be very helpful. The next place you want testing in place is regression testing, where you have a fixed set of evaluators on a test set and you want to know: when I make a change, does it get worse? And the final place we see people using it is for monitoring: OK, I'm in production now, there's new data flowing through, and I may not have the ground-truth answer, but I can still set up different forms of
evaluator, and I want to be alerted if the performance drops below some threshold. One of the things I've been thinking about throughout our conversation, highlighted by what you just mentioned about upgrades to one's workflow and the various levels at which such a platform can benefit teams: I have a background in physics, and there were plenty of physics teams or collaborators we worked with where we were writing code without great version control practices. Not everyone was using GitHub, and there were collaboration challenges associated with that, which are obviously solved by the great code collaboration systems of various forms that have been developed over time. I think there's probably a parallel here with some of the collaboration systems being built around playgrounds, prompts, and evaluation. I'm wondering if there are examples from clients you've worked with, or interesting use cases of surprising things they've been able to do, when going from doing things ad hoc, maybe versioning prompts in spreadsheets or whatever it might be, to actually working in a more seamless way between domain experts and technical staff. Are there any clients or use cases or surprising stories that come to mind? It's a good question; I'm thinking through them to see what the more interesting examples might be. Fundamentally, it's not necessarily enabling completely new behavior, but it's making the old behavior significantly faster and less error-prone: certainly fewer mistakes and less time spent. OK, so one surprising example: a publicly listed company told me that one of the issues they were having is that, because they were sharing prompt configs in Teams,
they were getting differences in behavior based on whitespace being copied. Someone would play around with the OpenAI playground and copy-paste into Teams, another person would copy-paste from Teams into code, and there were small whitespace differences. You wouldn't think it should affect the models, but it actually did, and so they would get performance differences they couldn't explain. It turned out that you just shouldn't be sharing your prompts via Teams. So I guess that's one surprising example. I think another thing is the complexity of apps that people are now beginning to be able to build. Increasingly, people are building simple agents; more complex agents are still not super reliable. But a trend we've been hearing a lot about from our customers recently is people trying to build assistants that can use their existing software. An example of this: Ironclad is a company that has added a lot of LLM-based features to their product, and they're able to automate a lot of workflows that were previously done by humans, because the models can use the APIs that exist within the Ironclad software. So they're able to leverage their existing infrastructure, but to get that to work they had to innovate quite a lot in tooling. In fact, and this isn't a plug for Humanloop, Ironclad built a system called Rivet, their own open-source prompt engineering and iteration framework. I think it's a good example: in order to achieve the complexity of that use case, and this happened to be before tools like Humanloop were around, they had to build something themselves, and it's quite sophisticated tooling. I actually think Rivet is great, so people should check that out as well. It's an open source library; anyone can go and get the tool.
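The invisible-whitespace bug described here is cheap to catch in code. A minimal sketch (illustrative helper names, not any particular vendor's tooling): fingerprint each prompt exactly as the model will see it, and flag pairs that differ only in whitespace.

```python
import hashlib

def prompt_fingerprint(prompt: str) -> str:
    """Hash the prompt bytes exactly as the model will see them."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]

def whitespace_drift(a: str, b: str) -> bool:
    """True if two prompts differ, but only in whitespace."""
    return a != b and a.split() == b.split()

original = "You are a helpful assistant.\nAnswer briefly."
pasted   = "You are a helpful assistant. \nAnswer  briefly."

# The fingerprints differ even though the prompts look identical on screen.
assert prompt_fingerprint(original) != prompt_fingerprint(pasted)
assert whitespace_drift(original, pasted)
```

Version-controlling prompts (or using a prompt management tool) makes this check automatic, since any byte-level change shows up in the diff.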
So yeah, I think the surprising things are how error-prone things are without good tooling, and the creative ways in which people solve problems. Another example of a mistake we saw: two different people triggered exactly the same annotation job. They had annotations in spreadsheets, and they both outsourced the same job to different annotation teams, which is obviously an expensive mistake to make. So: very error-prone, and also just impossible to scale to more complex agentic use cases. Well, you already alluded to some trends you're seeing moving forward. As we draw to a close here, I'd love to know, from someone who's seeing a lot of different use cases being enabled through Humanloop and your platform: what's exciting for you as you move into this next year, in terms of things happening in AI more broadly, things being enabled by Humanloop, or things on your roadmap that you can't wait to go live? As you're lying in bed at night getting excited for the next day of AI stuff, what's on your mind? For AI more broadly, I just feel the rate of progress of capabilities is both exciting and scary. It's extremely fast: multimodal models, better generative models, models with increased reasoning. I think the range of possible applications is expanding very quickly as the capabilities of the models expand. People have been excited about agent use cases for a while, systems that can act on their own and go off and achieve something for you, but in practice we've not seen that many people succeed in production with those. There are a couple of examples, Ironclad being a good one, but it feels like we're still at the very beginning of that, and I'm excited about seeing more people get to success with it. I'd say the most common successful applications
we've seen to date are mostly either retrieval-augmented applications or simpler LLM applications, but increasingly I'm excited about seeing agents in production, and also multimodal models in production. In terms of things I'm particularly excited about from Humanloop, it's us becoming a proactive rather than a passive platform. Today, the product managers and the engineers drive the changes on Humanloop, but something we're hopefully releasing later this year is where the system, Humanloop itself, can start proactively suggesting improvements to your application. Because we have the evaluation data and all the prompts, we can start saying things like: "Hey, we have a new prompt for this application. It's a lot shorter than the one you have, it scores similarly on eval data, and if you upgrade we think we can cut your costs by 40%," and allow people to accept that change. So: going from a system that is observing to a system that's actually intervening. That's awesome. I definitely look forward to seeing how that rolls out, and I really appreciate the work that you and the team at Humanloop are doing to help us upgrade our workflows and enable these more complicated use cases. Thank you so much for taking time out of that work to join us; it's been a pleasure. Really enjoyed the conversation; thanks so much for having me.

[Music]

All right, that is Practical AI for this week. Subscribe now if you haven't already, head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat freak in residence, Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time.

[Music]
Advent of GenAI Hackathon recap

Recently, Intel's Liftoff program for startups and Prediction Guard hosted the first ever "Advent of GenAI" hackathon. 2,000 people from all around the world participated in Generative AI related challenges over 7 days. In this episode, we discuss the hackathon, some of the creative solutions, the idea behind it, and more.
Leave us a comment (https://changelog.com/practicalai/252/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Read Write Own (https://readwriteown.com) – Read, Write, Own: Building the Next Era of the Internet—a new book from entrepreneur and investor Chris Dixon—explores one possible solution to the internet’s authenticity problem: Blockchains. From AI that tracks its source material to generative programs that compensate—rather than cannibalize—creators. It’s a call to action for a more open, transparent, and democratic internet. One that opens the black box of AI, tracks the origins we see online, and much more. Order your copy of Read, Write, Own today at readwriteown.com (https://readwriteown.com/?utm_source=changelog&utm_medium=practicalai&utm_campaign=changelog)
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Rahul Nair – Twitter (https://twitter.com/unrahu1) , GitHub (https://github.com/rahulunair) , LinkedIn (https://www.linkedin.com/in/rahulunair)
• Ryan Metz – Twitter (https://twitter.com/RyanAEMetz) , LinkedIn (https://www.linkedin.com/in/ryanaemetz)
• Eugenie Wirz – LinkedIn (https://www.linkedin.com/in/eugeniewirz)
• Ralph de Wargny – LinkedIn (https://www.linkedin.com/in/ralphdw)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Advent of GenAI Hackathon (https://adventofgenai.com/)
• Intel’s Liftoff program for startups (https://www.intel.com/content/www/us/en/developer/tools/oneapi/liftoff.html)
• Prediction Guard (https://www.predictionguard.com/)
• Blog posts:
• Recap of Day 1 (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Advent-of-GenAI-Hackathon-Recap-of-Challenge-1/post/1552069)
• Recap of Day 2 (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Advent-of-GenAI-Hackathon-Recap-of-Challenge-2/post/1552264)
• Recap of Day 3 (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Advent-of-GenAI-Hackathon-Recap-of-Challenge-3/post/1553059)
• Recap of Day 4 (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Advent-of-GenAI-Hackathon-Recap-of-Challenge-4/post/1555115)
• Recap of Day 5 (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Advent-of-GenAI-Hackathon-Recap-of-Challenge-5/post/1556530)
• Final Challenge (https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Advent-of-GenAI-Hackathon-Recap-of-the-Final-Challenge-Custom/post/1556584)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-252.md)

[Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io. Fly transforms containers into micro-VMs that run on their hardware in 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

If you're listening, you know software is built from thousands of small technical choices, and some of these seemingly inconsequential choices can have a profound impact on the economics of internet services: who gets to participate in them, build them, and profit from them. This is especially true for artificial intelligence, where the decisions we make today can determine who has access to world-changing technologies and who can decide their future. Read, Write, Own: Building the Next Era of the Internet is a new book from startup investor Chris Dixon that explores the decisions that took us from open networks governed by communities to massive social networks run by internet giants. This book, Read, Write, Own, is a call to action for building a new era of the internet that puts people in charge, from AI projects that compensate creators for their work to protocols that fund open source contributions. This is our chance to build the internet we want, not the one we inherited. Order your copy of Read, Write, Own today, or go to readwriteown.com to learn more.

[Music]

Welcome to a very special fireside chat, which corresponds with the ongoing Advent of GenAI hackathon, and which will also be reposted on the Practical AI podcast. I'm very pleased to have been participating in this hackathon as one of the organizers, but I'm also joined here in the fireside chat by an amazing team from Intel's Liftoff program for startups,
who helped organize this hackathon that we'll be talking about throughout the day. So I'd like to kick it over to Rahul to describe a little bit about what Intel Liftoff is. Hey all, thank you, Dan. This has been an incredible experience. Before that, let me say two sentences about the Advent of GenAI: this was probably the biggest generative AI hackathon our team has organized, and the submissions, and all the different chats and things we've seen, have been really awesome. We got a lot of positive feedback, and found a lot of things we need to improve for next time. So thank you for participating in the hackathon, and we'll be announcing the winners of the final product development challenge in a couple of hours. Before that, let me talk about Liftoff. Liftoff is an accelerator program, specifically a technical accelerator program for early-stage startups. If you have an idea, you're a seed-stage startup, or you're up to Series B and want to scale and build cool things in AI or machine learning, please join the program; it's free. I would categorize the benefits into three pillars. First, world-class technical support and technical expertise: that's me and Ryan here, we lead the engineering side of things, and we have an incredible engineering scale team, some of whom are here. Second, you get access to technology, both Intel software and Intel Developer Cloud. Intel Developer Cloud is a production-ready cloud specifically designed for AI workloads. Prediction Guard is one of the startups that came to our program earlier this year, and they are running on Intel Developer Cloud now using Gaudi 2 accelerators, and I'm sure many of you who participated in the hackathon used Prediction Guard's LLM APIs. The third is co-marketing: once you've built that product and deployed it, the next
thing is to make some money, right? So we co-market your startup and your idea through all of Intel's channels, and we also have a network of accelerators and folks beyond Intel, so we take the product and company you've built and market it all over the world. We also connect you with our sales teams to see if there's potential for selling what you've built through Intel channels; it could be a service on IDC, or something one of Intel's customers is looking for. I would urge anyone who is looking to really bootstrap and accelerate their startup journey to join Intel Liftoff. That's great, Rahul. I remember the initial discussions between a few of us; you had this idea for the Advent of GenAI hackathon. A lot of people in the audience might be familiar with Advent of Code, so how did you start thinking about this Advent of GenAI hackathon, and what was your initial vision for it? A lot of the geeks on the call, which most of us are, would know Advent of Code. It's a set of programming challenges; you can take any programming language you want to learn and attempt to solve these algorithmic questions. I've been doing that for many years, and I thought, you know, GenAI is something new that a lot of people are talking about, but there's no fun set of exercises you can use to learn and build cool things with the technology that exists today. So I was thinking, why not just create a set of challenges tailored to everyone, from a person who might not know how to code but has seen prompt engineering and created cool prompts for images, all the way to people building with LLM APIs? We designed a set of challenges to bring in as broad an audience as possible, to introduce them to
GenAI and have them learn through the process. Many folks I've seen in the chats came with just prompt engineering knowledge and have built really cool things, graduating from one challenge to the next. And there has been really good community help, people talking to each other and helping out with how to run and build these things. This has been a really good exercise. Even some of the challenges, when I was building them, I thought, oh wow, I would like to do that, because we always wanted to add a fun element: no challenge is just a dry algorithmic question; there's a fun element to each. That's where the whole Advent of GenAI came about, and we want to do this yearly. Next year it might be a new technology; it could be a multimodality hackathon or something else. We already saw some cool engineering from folks in this challenge, but it would be something I'd like to do every December, an Advent or something else that Liftoff runs for the community. Yeah, that's awesome. So if you're listening to this after the holiday season, or as this comes out on the podcast later: well, you missed out on participating in 2023, but there will be more opportunities to participate in things like this that Intel puts on later, so I definitely recommend keeping an eye on the Liftoff program and social media. Could you speak a little bit to the response to this first Advent of GenAI and the participation? I think we were all a bit surprised at how many people joined. And anyone else from the Liftoff program: any observations on the type of people who joined and the range of experience? This has been fantastic. We had grad students; we even had students who are in
school, who have taken prompt engineering courses and just wanted to have fun working on some of the earlier challenges. There were also experts in LLMs and GenAI, and some of the challenge solutions are like an MVP a startup would build. We have many startups building similar solutions who take six months to a year to build a full, solid product, but some of the challenge answers, especially the RAG example and the Python code explainer, are genuinely difficult, and people went even further: when we asked, "can you create a storyteller chatbot?", people created a story-plus-image chatbot, a multimodal chatbot. So there were many levels of experience, and even some folks from Intel participated, which is also a very positive thing: you can work on a level playing field with folks from Intel and solve the challenges together. We had folks from Berkeley and folks from many different enterprises participating, so it was a real mix, and I was amazed at the level of participation. I didn't expect this many people would participate; we had to stop registrations after 2,000 people registered for the event. This has been just great. Ryan, do you want to add something? Yeah, I just remember, this is Rahul's idea. When he first called me, I was cleaning up after Thanksgiving in my basement or something, and he's like, "You know Advent of Code?" "Yeah, of course." "I want to do Advent of GenAI." "All right, what do you want to do?" We worked it out and set the goal: if we got a couple hundred people, at least, that would be a success, but as a stretch, hey, maybe we could get a thousand and do the biggest event of the year. We ended up cutting it off
at twice that, at twice the stretch goal. So then, of course, the entire time we're wondering: did we do a good job? Are people going to like this? Are we going to get submissions? And every time, it knocked it out of the park: way more submissions than we were hoping for, and the quality was excellent. And the number of people we saw in the chat helping each other: it's 24 hours a day, so our team was trying to stay in and answer questions as much as possible, we set that as a goal, but what we saw from the start was that when people asked questions, other people would jump in and link them to the explanation or the documentation or whatever. These were all the dreams for the event, so it's been incredible, and I want to thank every single person who was involved. Yeah, and maybe it would be good, since some people jumped into certain challenges and not others, or are hopping in at one point or another, or are learning about Advent of GenAI as they listen to this: what were some of the challenges presented to the participants, and how would you rank them in terms of relative challenge level or skill required to complete them? We designed the challenges in a progression, not exactly of difficulty, but of the ability to code, and of creativity too. What we see, at least in Liftoff, is that AI has become truly commoditized: you don't need a master's degree or a couple of courses on neural networks to build an application right now. You have the amazing Transformers ecosystem, and it's really easy to integrate these AI superpowers into the applications you build. Take the first challenge, for example: it was to create a narrative-based set
of images, and uh if you look at the challenge the, first time it's it looks like it's very, easy you just create couple of images, using stable diffusion it's all about, prompt engineering you don't need to, know a single line of code all the, notebooks and the models everything is, available on Intel developer Cloud you, just create a standard account log in, there get Jupiter Hub open and just play, with it but the thing is creating an, transition from one image to another and, creating a whole story with set of five, images it's not easy it's really, difficult and we even saw some folks, creating a comic book generator using, this challenge that sort of um, Imagination right that's that's what I, really wanted people to do but I didn't, want to say that okay please create a, comic book generator as a challenge, because it's it's really difficult some, of the folks even without knowing that, uh built it you take the the final, challenge right that was python code, explainer where you giv a python code, you use llm model to understand the code, and give an explanation of it we had, additional challenge additional, subchallenge there show the source of uh, documentation or stack Overflow, questions where I can go into and learn, about it more these kind of additions, makes it very interesting and a little, bit more complicated where you have to, use a vector database you have to use uh, prediction guts llm apis to get the, right model and uh you had to design a, UI for it all in the constraints of a, jupyter notebook so I would say there, was a progression of difficult, difficulty say is very relative word but, yeah there a progression of difficulty, if you're just coming to gen and the, whole idea is said it's a single package, so even if you're you not participated, in of J right we have released all the, resources that we have built for this, you can take this to basically get your, an idea of what gen can actually help, you to build and Infuse to your, 
applications and you can just go through, the different challenges or if you're an, expert in um uh llm rag based, application go to that particular, Challenge and take a look at it so it's, now become a Learning Resource also not, just a a hack if I could just add on to, that real quick it did on purpose go up, and difficulty like the level of each, challenge was supposed to get harder and, more advanced let's say in terms of, coding ability as we went on but the, real Focus was about skills and, understanding the tools um that are, being used within the industry there's, been a huge focus on creating new levels, of abstraction to make neural networks, easier to use and build with used to be, very challenging now not I mean not, nearly as much you don't even need to, stand up your own neural network anymore, right you can grab an API so if you look, at the challenges each one is kind of, focused on a different skill and if you, go through all five you cover prompting, specifically for images text image cover, using an llm API and the different, things you can do with it cover uh image, to image so image editing with AI in the, third one and then uh rag based, applications with L LM apis and finally, we thought you know the fifth was the, most advanced like the code explainer, one that's you know there companies that, are basically that are big companies, that are working on that exact problem, that are betting that you know it's, going to a lot that a good solution for, code explanation Improvement and, generation is going to lock many, billions of dollars of value so all, these things are focused on these, different skills and Our Hope was that, for people who are maybe software, Engineers looking to move over to AI or, to students like whatever anybody who's, interested in learning AI skills that by, through the ones they chose to or all of, them but by the end of them they'd have, kind of a portfolio of knowledge in, their head about the different skills, both 
on understanding how to use them and how to do a good job, like with prompting, but also, by going through the code and all the notebooks, an idea of what else they could build outside of the narrow set of applications that we asked for over the course of five days.

I think one thing that impressed me about the set of challenges you all came up with was that it really focused on image generation, coherent image generation, and, on the LLM side, retrieval-based methods along with chat. These are the things people are finding the most utility in when they're first implementing AI solutions within their actual enterprise or industry or startup, or whatever environment they're working in. In particular retrieval-based methods and RAG systems: at least for our clients, we're seeing that's the first thing everybody builds. You have your own company's data; in the case of the challenges you all put together, maybe that's external Python documentation for the code explainer, or maybe it's some external PDFs or YouTube videos or whatever for the RAG-based solution. Lots of companies have data with this sort of unlocked potential, and it's unlocked via these retrieval-based methods, which is a lot of the time what people build first when they adopt this technology. So I think it was great that you tied that together for the participants, to give them practical skills in that area and help them learn what a vector database is, what a RAG system is, and how to implement this with custom data, rather than immediately hopping to fine-tuning a model, which of course you can do more easily than ever as well; but there's a lot you can do even just by integrating your own data with retrieval or other sorts of methods.

I do want to ask in a second about some of the solutions you saw and what stood out to you, just to highlight some of the really cool things we saw. But before we do that, so you can have that in the back of your mind and think through some things you'd want to highlight, I'm wondering if you could speak to the Intel Developer Cloud specifically, which is something I've of course found utility in, but it was something kind of unique about this hackathon. For some participants this may have been their first time using it, but there's also a whole lot available there in terms of different ways to run AI models that maybe some people are less familiar with. Could you describe the Intel Developer Cloud a little, and maybe also highlight some of the unique ways people were running AI models, outside of just throwing them on a GPU? There's some interesting tooling and hardware/software available; could you highlight a little of that, and the unique ways people were specifically running AI models throughout the hackathon?

Sure. Intel Developer Cloud is Intel's production-ready cloud specifically for AI and machine learning workloads, and of course, when we say AI and machine learning, it's matrix math, so many other compute-heavy workloads can run really well on IDC. For this particular hack, anyone logging into Intel Developer Cloud and registering as a standard, free-tier user got a shared JupyterHub instance with access to Intel's data center GPUs and Intel Xeon processors. For a free-tier user, I don't think any other service provides this; I mean, there are many services with a JupyterHub front end, but the amount of compute, memory, RAM, and even file storage you get in these systems, I haven't seen a single cloud service provider providing that, and
we have seen a lot of people really using it and giving us feedback on how we could improve it. Today on IDC we have a lot of models and LLMs already; there are tens, even hundreds, of local models, and we are planning to add more to boost this: Stable Diffusion models, LLM models, and things like that. Beyond that, for productionizing the workload: Dan, in your case you are using the Gaudi 2 accelerators, which are specifically designed for workloads that demand high bandwidth, like LLM and GenAI workloads, and... I lost my train of thought, but yeah, we have Gaudi 2 accelerators, which we are seeing as incredibly competitive and sometimes outclassing the best out there for particular workloads. Along with the Gaudi accelerators, which are designed specifically for GenAI and AI workloads, we have general-purpose GPUs, the Data Center Max GPUs, in both 48 GB and 128 GB versions. The folks in the hackathon actually used our fourth-generation Xeon, the latest Xeon we have, which accelerates machine learning workloads: we have dedicated instructions in this CPU that sometimes take your workload to 2x the performance of the earlier generation. The way we see it, it's all about making the CPUs as efficient and as fast as possible while still maintaining the general-purpose utility of a CPU. Then, like I said, the Data Center Max series GPUs are a little more generic; you can run your AI workloads and HPC workloads there. For each of these machines, when you're productionizing, you get a VM, you get an eight-node or eight-card system, and there are also clustered systems available. Then come the Gaudi accelerators; there are single-node machines and also clustered machines if you want to do pre-training or big fine-tuning, all those cool things. And soon we'll have a Kubernetes service, object store, file store, all those things coming up, so it's going to be great. What I see is that if you're building a startup, it would be very difficult to find a performing accelerator cloud like IDC out there. I'm sure there are different hyperscalers, but for startups, from my personal experience, this is a really, really awesome solution. I'd like to hear more from you, Dan: you are one of the first customers of IDC, right? What are the things that made you decide to choose IDC, on the performance side and also the team side?

I appreciate that, and I appreciate the support you all have given. I think it's interesting, maybe for people out there less familiar with the various options for model deployment, to understand that there is really good tooling, like you say, whether it's optimizing a model and deploying it on a CPU or in an edge environment, or just a cheaper inference solution, all the way to these Gaudi 2 processors we've been experimenting with. There's a lot of interesting and approachable tooling for that. I first came across some of the tooling around Gaudi 2 by seeing posts on the Hugging Face blog about Gaudi 2, and at the time the BLOOM model, which is a very large model, running it on either a single accelerator or spread across eight accelerators with really high throughput on the inference side, doing that with tools like Optimum Habana. For those of you out there wanting to explore: if you look up the Hugging Face Optimum library, there are a lot of great tools you can play around with, not just for Gaudi but for other processors too, whether that be CPUs, GPUs, the Gaudi 2 HPUs, or the Data Center Max GPUs. Optimum kind of provides you a way, if some of
you can visualize it: maybe you're writing your code and importing a model from Hugging Face, just AutoTokenizer or AutoModelForCausalLM or whatever it is. With Optimum, a lot of the time you can do a one- or two-line replacement and swap in the Optimum version of those classes, or do some wrapping of the various models with optimizers, and this allows you to run your model very fast on a wide range of architectures. So to your point, Rahul, one of the things we found really useful is the ease of use: coming in and saying, okay, we have this stuff running on a GPU, let's try it on these various other architectures. I remember maybe two or three years ago, trying to do some of this model optimization for edge deployments was very, very challenging. A lot of the time I would try to optimize a model, at the time working on speech models and other things, and it just wouldn't work, because operations wouldn't be supported, or something like that. But this tooling, which is cool because Intel is working directly with Hugging Face on it, has ramped the ease of use up drastically, and we've been applying it with really good results, particularly for inference for LLMs. So that's been a key feature in making that change happen.

That's really awesome to hear, Dan, and particularly the thing you mentioned. You can think of Intel in two ways: probably the biggest semiconductor manufacturer, the cool chips, but Intel is also an open source software company. We contribute to almost all the big open source projects, the Linux kernel, almost everything; we would be anywhere in the top three by strength of contribution for TensorFlow, Hugging Face, any sort of open source solution out there. We work really hard across the board to make sure that your adoption of a technology is as easy as possible, and we try to upstream as much as possible to the core PyTorch library or TensorFlow library and things like that. In cases where we feel there are further optimizations to be done but they cannot be upstreamed to the mainline repositories within a couple of months, we release extensions. For example, if you take out-of-the-box PyTorch and run it on a CPU, you already get a lot of performance because of Intel's neural network accelerator library, oneDNN, which powers a lot of these operations when you're running on a machine like an Intel Xeon. But if you want to go a little further, we have things like Intel Extension for PyTorch: with essentially one line of code (import it as ipex, call ipex.optimize, and pass in the model) we add further optimizations to run it as fast as possible. We are also working on upstreaming whatever is possible to PyTorch mainline. So the thing you mentioned is very important: work with the community and enable the software the community uses, rather than having a completely different, sometimes closed-source, architecture and working on that. That's not the way Intel thinks. Even the whole concept of oneAPI and heterogeneous programming is open, where other vendors can come in, add their accelerators, and use the oneAPI standard, so that if you're writing code for a CPU, minimal to no change should be required to run it on another accelerator. That's the philosophy we're working with overall in the oneAPI architecture that sits underneath all these acceleration libraries. And Optimum Habana: we've been working very closely with the HPU team, and almost all LLM models work out of the box. There are models that we have tested and benchmarked that are available on GitHub, and things like vLLM and our inference support are all enabled through Intel libraries, for example BigDL and things like that, giving a higher-level abstraction beyond PyTorch. When we talk to startups these days, we feel that PyTorch is now considered a low-level library, and that's a little funny for folks who worked in Theano, or even before that, in 2016 and '17, coming from the early days of TensorFlow, to see PyTorch going low-level, with these higher-abstraction libraries working on top of it. It's really an exciting time to be in this space and to work with you all.

Yeah, for sure. And in addition to the open source code, it's been cool to see Intel recently release Neural Chat, a fine-tune of the Mistral model, which is openly accessible on Hugging Face and permissively licensed. We've been experimenting with that, and we saw usage of it in the hackathon. It's cool to see: a couple of these models, Neural Chat, a fine-tune of Mistral, came out maybe a week before the hack, and Notus, another fine-tune of Mistral, came out a few days before the hack, and both were being used in the hackathon, which I think demonstrates the ability to rapidly adopt this new stuff that's coming out.

[Music]

This is a Changelog News break. Vanna.
AI, or in full Vanna.AI, is a Python RAG framework for accurate text-to-SQL generation. It lets you chat with any relational database by accurately generating SQL queries, trained via RAG, which stands for retrieval-augmented generation, to use with any LLM you want. You load your data definitions, your documentation, and any raw SQL queries you have lying around into Vanna, and then you're off to the races. Vanna boasts high accuracy on complex data sets; excellent security and privacy, because your database contents are never sent to the LLM or a vector DB; the ability to self-learn by choosing to auto-train on successful queries; and a choose-your-own-front-end approach, with front ends provided for Jupyter Notebook, Streamlit, Flask, and Slack. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at...

[Music]

Well, I do want to make sure we have time to highlight a couple of cool things. What were a couple of the highlights for you all, in terms of solutions that you saw, or methodologies, or just cool things you didn't expect? What stands out in your mind?

I'll start quickly, and then I'll let Ryan talk. We both spent, I mean Dan, you were also there, every day going through these submissions, and it was very difficult to figure out which was the best submission, because each time we thought one was the best, we'd look at the next and go, oh my God, this is incredible. Even in the first challenge, the quality of image creation surprised me: how can you even create those kinds of images with the model that was there? The time spent on prompt engineering, and even using custom models to combine those images and create the solutions. Another thing: there were a few really interesting RAG examples, taking YouTube
YouTube, videos passing the audio figuring out a, YouTube search that was something that, stood out to me and for the python code, explainer there was a a submission that, came maybe uh I think in 3 hours uh the, first iteration of the of the submission, for the by this person where there was, uh you could do uh that that solution, can do python explanation but also give, uh references to where exactly that the, model got this information from a really, really good use of rag and and llms um, Ryan what are the things that that stood, out for you on submissions what always, stands out is when somebody the the, Jupiter notebooks which rul had put, together I think are really well, designed for as learning activities and, to like get something done that's cool, just by going through them um and so, people use those to do a lot of amazing, work and I was stunned by the quality, but what always stands out to me is when, somebody like takes the concept takes, what's in there and then runs with it, and like we saw people setting up AI, agents for some of these challenges like, the comic book generator you know in the, code explanator in the the fifth, challenge of where where code, explanation comes in it's like oh, explainability like listen we're going, to do things like we're going to do the, explanation the the model will then cite, the sources it's using right um which, made me think you know we used to in, Years Gone by people were very concerned, with explainable AI and what they always, meant is like well if the model is, making a recommendation or classifying, something in such a way we should be, able to figure out exactly why and so, there are all these like discussions of, how best to do that like oh you can use, shap values you know or whatever and I, think what it turns out is like well now, that we have gen you know and we have, tral based methods it's like just ask, what are you so okay this is your, explanation like where did that come, from and you know 
we we see like the, setting of sources so that creativity, not just in application which was, astounding but then also in people, bringing in methods Cutting Edge methods, from outside of like what we even, included in the notebooks that always, blew me away and there were some people, that just always ended up um in the top, five like toas Bari I don't know if I'm, pronouncing that correctly who I, actually reached out to it was like what, do you do for a living like this is your, work is incredible um becauseas uh there, are so many Simon's team um Prav yeah I, haven't sat down to a compile a list uh, you know who you are because you can go, back each to each winners post that rul, made and find those names and that's, something that we'll follow up probably, with a Blog about and you know U maybe, reaching out to somebody you to be on a, podcast or to talk to us or whatever I, would also like to highlight that uh our, youngest participant I think uh might be, on this yeah I see his name Arian who is, a middle school student who owned us, every day like at around the time when, we were supposed to be posting a video, or whatever with the same like skeleton, like waiting tapping his fingers you, know like patiently waiting for this, video that was supposed to be here five, minutes ago that was a wonderful part of, that for me yeah even the the thing, right like you were mentioning the the, python explaining uh I mean there were, there were submissions were okay now you, have explained the code now click this, button to optimize the code I'll give, you an optimized version of the solution, taking the challenge in spirit and not, just in words and going beyond that like, incredible work um it's truly I really, feel generative Ai and the, commoditization ofi I've really really, helped a lot more folks who might not, have been here to do this AI kind of, work really democratizing the solution, all the toolings the the API based, approach for example from prediction, 
Guard, and the Hugging Face ecosystem, making it as easy as possible to use. And one thing: when Dan mentioned that people were using Neural Chat for LLM APIs, that was because of Dan's incredible team adding these models and scaling them in a matter of hours. It's still a challenge to deploy and scale this; you have an incredible team over there, and they were also participating in the conversation.

Thanks. Well, I definitely think so; I appreciate that. And speaking of where people can find out more about some of the specific submissions, even seeing screenshots and code that people generated: Eugene, do you want to comment? You've created some amazing blog posts already, and I think there are more in the works. Do you want to describe to those listening where they can find out more about some of the solutions, and maybe also where they can keep tabs on future events and things coming through the Liftoff team?

Thank you, Daniel. We have posted three blog articles already at our landing page, developer.intel.com/liftoff, and I just want to give you some insights. As I reviewed the top submissions and the other honorable mentions, I had a look at the profiles of the developers, and it was a really exciting mix across regions. As was already said, we have students, individual developers, founders, and software engineers from big companies, but I also saw very active software developers from Intel. This is very interesting, because Intel Liftoff is more targeted at startups, but it was a very diverse portfolio of developers, from across regions. It's really amazing, because in our Slack channel for the hackathon we saw messages around the clock, 24 hours a day, with submissions and questions. Because of this diversity, this was really a global hackathon at the end of the year, and we're very proud of it. We will post articles about each challenge, and also about the last challenge with its two-day development sprint. Right now you can read three articles: not only the announcement of winners, but also their own comments and the results of their work, which you can find in these blog articles.

Awesome. Thank you so much for your work on those. It was cool to see the traffic coming in basically all day and night, and it's hard to sleep while all this cool stuff is going on. As we draw to a close here, I want to kick it over to Ralph, who leads the Liftoff program, and get any final thoughts: what did you think of this whole process, what were you encouraged to see, and what are you looking forward to in the new year in terms of things related to generative AI and Liftoff?

Hello everyone, it's me, Ralph. I'm sorry for the noise here; that's why I was on mute the whole time, as there's some kind of year-end party going on at the office. I was completely amazed by what happened during this hackathon, and I'm very grateful to the team, starting with Rahul, the rockstar developer of this hackathon. Thank you very much to you too, Dan, for supporting this and really running it with us; and to Ryan, who is the second rockstar developer here; and of course to Eugene, who made it all happen. I really look forward to the impact we can make in the AI developer ecosystem, and to what's going to happen next year; we want a share of what the future might bring, and I can tell you the Intel Liftoff team is ready for whatever comes in the startup world. So, see you next time, and great to have you all here.

Thanks, Ralph. Awesome. I mean, we co-created this together with
Prediction Guard. From day one we had meetings on how to do this and what we needed to do. Dan, for the folks who don't know about Prediction Guard, do you want to introduce it, share what your experience was working on this hackathon with us, and what you think we need to do next? I'm sure we need to do it big; 2,000 is now sort of our baseline, so next time maybe it's 4,000 people. It's pretty big. What's your take on it?

I think one takeaway is that when you do a hackathon with Intel Liftoff, you had better be ready to scale your servers. We'll take that takeaway for next year, when I'm sure it's 10,000 people participating. But yeah, it's been great. One of the things I'd say is that we really appreciated actually interacting with people creating practical solutions with LLMs; that's what we're about at Prediction Guard. Seeing people apply some of the latest models, like Neural Chat, Notus, Zephyr, Yi, and WizardCoder, actually access these things and even combine them together in unique agents, gave us such encouragement: people fulfilling this vision we have of providing open, privacy-conserving, hosted models, and combining them in unique ways to create real enterprise value. That's what we're excited to see, and to do it in a way that is actually trustworthy. Intel, of course, has a great history with security, privacy, and confidential computing, and to be partnered together and see people creating trustworthy, privacy-conserving, and scalable solutions with LLMs in this environment is really encouraging for the future of AI. As we've seen even over the past week, with Mixtral being released, and StripedHyena, and all these models, the open models are just getting better and better, and providing ways for people to access them in a scalable way and build real solutions is really exciting to see happen in the industry. So thank you for hosting this and making it happen; it was a great experience.

And thank you to the entire team: Scott, Ralph, the team I talk to daily; Ryan, we practically talk every hour; Eugene; and the whole engineering team at Intel Liftoff, Jois Vat, Basanta Raj, you guys are incredible. And all the teams on Dan's side too, being in the Slack channel and answering questions. All of us had reservations, but we kept them to ourselves; we didn't know how it was going to go, but everyone pitched in with really cool ideas and a mindset to help, and that really shows. Even in the community, all the messages we got: we had messages where folks were saying, now I can take this to my boss and say we need to implement these sorts of things in our day-to-day work. It's really gratifying to see that. Next time, we'll fix all the shortcomings; we'll do a careful internal review, and if there were any shortcomings, and I'm sure there are, we'll fix them: bigger, better, more scalable, cooler challenges. We want to continue this and grow this community. As for feedback, Eugene will, I'm sure, be sending a survey. I know it's very difficult to answer surveys; it's easier to delete that email. But we would really appreciate, and I personally would really appreciate, your feedback on what we can improve and what we could add to make it more of a community-driven effort. At Liftoff we don't really like the top-down approach; we really want your feedback and the things you want to see, and to build around them. So thank you once again.

Yeah, thank you all. Closing out here, I just want to encourage you also not only to keep tabs on hackathons, but, for all of you building amazing startups, and I know many of you who were part of the hackathon are, though you're maybe too humble to say it: this Liftoff team is doing amazing things, and as a startup participating in it, your startup should join Liftoff and reach out to them, because you will find amazing benefit, scale, and access to expertise and hardware. Reach out to the team; they truly are rock stars, like Ralph said. Get involved in the program and the community. With that, we'll close this Advent of GenAI out. We'll give you the last word, Rahul.

All right, yeah, I forgot to mention one person: Kelly. I don't know how I forgot; she has been incredible, from the website to creating the content to editing the videos. She was sick while she was doing it, and she had a few hours she had to take off, but she has been incredible in the pace at which she was able to help us. Thank you, Kelly, for doing that. I'm sure we'll be doing many more of these things. Again, to the entire team: if I missed anyone, I'm really sorry, but this was truly a team event; everyone contributed, and without each small contribution this would have just been an idea. So thank you all for doing that. Thanks, everybody.

[Music]

All right, that is Practical AI for this week. Subscribe now if you haven't already, head to practicalai.fm for all the ways, and join our free Slack team, where you can hang out with Daniel, Chris, and the entire Changelog community. Sign up today at practicalai.fm/community. Thanks again to our partners at Fly.io, to our beat-freaking residents Breakmaster Cylinder, and to you for listening. We appreciate you spending time with us. That's all for now; we'll talk to you again next time.
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI predictions for 2024 | We scoured the internet to find all the AI related predictions for 2024 (at least from people that might know what they are talking about), and, in this episode, we talk about some of the common themes. We also take a moment to look back at 2023 commenting with some distance on a crazy AI year.
Leave us a comment (https://changelog.com/practicalai/251/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-251.md) | 18 | 0 | 0 |

[Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related tech is changing the world, this is the show for you. Thank you to our partners at Fly.io, the home of Changelog.com: 30-plus regions on six continents, so you can launch your app near your users. Learn more at fly.io.

[Music]

Welcome to another episode of Practical AI, and happy new year 2024! Chris and I are starting out this new year with a Fully Connected episode, which are those episodes of our podcast where we keep you fully connected with everything that's happening in AI news and trends, and help you level up your machine learning and AI game. I'm Daniel Whitenack, founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

I am doing very well, Daniel. Excited for 2024. We keep talking about how 2023 was just such an amazing year for AI, and we're about to go through some of that, but I'm pretty sure 2024 will be even better.

It's interesting. I was just having a conversation with one of my friends and mentors yesterday, and we were talking about the curve of adoption of this technology and where we thought we were in it. He sketched out this trend: early days, where what you're really investing in is research and development; then a usage expansion, or adoption, period, where what you're investing in is usually developers, to actually build things out at scale; and then you reach this place where what you're investing in is lawyers, because everything's consolidating: there's more regulation, there are a lot of mergers and acquisitions. So everything's kind
of consolidating together. I'm curious to know what your perspective is. Maybe let's scope it to generative AI, since that's pretty much what 2023 was about, and certainly not what all of AI is about.

Well, 2023 was the year that big tech dived into AI for real. They had been talking about it and doing it, but suddenly the whole world went AI, with ChatGPT catching on the way it did, and its competitors. So big tech is all over it, but there are many challenges to that, and you just talked about that process of working your way through the challenges. There hasn't been the killer app. There have been really interesting things to come along, and some of them occasionally are revolutionary; a lot of them are evolutionary. But there are still a lot of organizations, a lot of businesses specifically, out there trying to figure out "How does this work for us?", and there have been some stumbles along the way. So I think 2024, in that sense, is more of the same: investing a lot in AI, not so much for today, but for getting ready, for figuring it out for tomorrow. So I think we'll see a lot of that. Two steps forward, one step back.

Yeah, I kind of had a similar thought. I saw some study, I actually don't remember which one it was off the top of my head, but it was one of those that surveyed a bunch of enterprises or something like that, and something like 15 to 20% of enterprise companies actually had either something prototyped or some type of integration with an AI system. Maybe that's just a simple integration with an API or something, but if you think about that, it's definitely at that beginning. I would say we're kind of past R&D, not in the sense that R&D is not important, and new things will be discovered, as they continually are, but in terms of the technology that has sort of dominated the news cycle in 2023, we're moving past the heavy R&D investments, more into the "How do you scale this?" with software engineering developments. If you're a company developing these technologies, you probably are not going to win by doing R&D; you're going to win by creating, like you say, the products, the applications, the platforms, the APIs, the systems that power this growing adoption within the enterprise. So I think where you saw that kind of adoption start in 2023, that's going to expand very rapidly in 2024, and those that keep up with that, in terms of actual software and system development, will likely be in a good place.

Extending that a little bit, another huge trend we saw: in years past we've talked a lot about the process of doing the R&D, and how you're kind of self-hosting, or how you're using cloud, but you're still driving it yourself in a lot of ways. What we're really seeing now is that the marketplace for AI integrations is exploding. These large tech companies, the Microsofts and the Googles and the AWSes, the big cloud companies, are really making a lot of AI usage about API usage. It's AI by API. That's essentially Microsoft's entire primary play right now: to take OpenAI technologies and package them up in their own products and push them out. And we're seeing a lot of uptake on that from organizations that maybe struggled with their R&D a bit. I'm not talking about necessarily the leaders in the AI space, the most select group, but the second-, third-, and fourth-tier companies out there kind of going, "I'll just use what they've made available to us via API, and that's good enough." A lot of strategy, a lot of business strategy, is being built around API access. So it's becoming software, like so much other software.

Yeah, and some of that will likely get more commoditized, as other API products have been commoditized over time, and open source is coming. It'll be interesting to see how people develop their competitive moats within that space in 2024. I think it's a year where probably there'll be some people that really come out ahead, and some people that suffer very badly, because certain things are just becoming more commoditized.

I think you're starting to see companies realize they can't just say, "We have AI in our product now." We had a period where being able to say "we have AI" was a marketing ploy unto itself, but as it becomes more ubiquitous and more common, that's not going to work anymore. Your point there about differentiation is going to be key, and human creativity to find what those opportunities are.

I was looking back at some of the developments that happened in 2023, and it's amazing how much was packed into that year. Some of that, I was like, "Did that really happen? Did that also happen in 2023?" It's like... what a year, man. So much was packed in. I'm wondering, what are some of the highlights that you saw, in terms of things that happened in 2023 that we either commented on or didn't, or that are highlights in your mind from things we discussed in 2023, that are informing your perspective as we go into 2024?

A couple of the big highlights, and we can dive into whatever we want: at the tail end of 2022, ChatGPT was released. Early in '23, I think it was, we had GPT-4 added in underneath ChatGPT, from GPT-3, and then they updated that with 3.5, and that kind of kicked off the firestorm at the beginning of the year. We saw Google scrambling to try to catch up; they got Bard out, and then they came along late in the year with Gemini to back it up. We saw Meta coming out with Llama 2, we saw Anthropic coming out with Claude 2, and so there's now a whole industry. Going into 2023, there weren't so many options, really, and ChatGPT kicked off the arms race in the industry. What's interesting is that, I know for me, as we were going through the year and all these crazy things were happening, it was a new week and a new model to try. I had my standard tests, the things that I cared about, which usually meant avoiding the things everybody else was doing. Everybody would ask for Python code from the models, and that's the best language you can possibly ask for, so I would ask for Rust, and they were all failing all over the place. It was misery. You could have all these top models doing great in Python, because that's what they were really centered around, but in Rust they were really falling all over themselves. It's starting to change a little bit now.

That, and a few other things. We both may or may not be able to comment on everything that is actually integrated into our daily work, but you, as a person, a developer, a strategist, as we went through 2023 and ended up at the end of 2023: your own usage of AI products, how was it impacting your daily work in ways that were different at the end of 2023 than at the beginning?

I think 2023 was truly the year where I never put down the various model interfaces.

You mean like ChatGPT or Claude or whatever was up in a window, right?

Yeah, things like that, exactly. Now when I open a browser, I have a kind of bookmarking app in my Chrome that does it, and across the top of the screen I have all the models that I use. On any given day I probably use every one of those, and often, for a problem, I will go to all of them. So it really became the year that it integrated into
every part of my workflow. It didn't matter if I was coding, or if I was having to write something, and it could be technical-related or not. For my animal nonprofit, which we talk about occasionally, I use it for a whole bunch of different things. So it wasn't all centered around the AI world or the coding world; it was every aspect. At the dinner table, my daughter, who's 11, she's in sixth grade, I would challenge her. She'd say, "Well, my teacher doesn't want me to use that interface," and I was inevitably saying, "Well, that teacher's wrong, because this is part of our life. They need to get used to it; this needs to be integrated into teaching, instead of being feared." I started showing her how she could learn more and learn faster by using the tools, and so it was a dinner-table thing for us. So that was it. I'm living the same experience that so many other people are: using these technologies is now completely built into everything that I do, no matter what the activity is, short of swinging a hammer. I think that's the biggest thing for me: it hit work, it hit the side job, the nonprofit, it hit all my personal life, it hit my kids' lives. So anyway, yeah, it's the year that changed me, from an AI perspective.

That's interesting, because we've been doing this for, what, five years now? And most of that has certainly been impact from deep learning and machine learning over time, in the enterprise setting, and value that companies have definitely gained over time with those technologies. But mostly, it sounds like what you're saying is that you were working in that space and helping develop some of those things, but in terms of the way you did your job, it wasn't like those things were tightly integrated into your activities. Is that a good way to put it?

That's a good way to put it. You know, I would set aside time to do activities in the space, and that was part of either my day job, or us still on the podcast, or whatever, and then I would turn away, in years prior to 2023. So the fact is that it didn't just integrate into one of my workflows; it integrated into all of my workflows, across all the opportunities there: using one against the other, taking the output of one and putting it into the other, and just trying things out. That's what it was, and I suspect it was the same for many of our listeners.

It's interesting, Chris, that you mentioned impacting your whole family's life. One of the things that I did for Christmas... I have a bunch of nieces and nephews, and, well, my wife and I don't have kids, so a lot of my life isn't dominated by thinking about what kids are interested in these days. Sometimes it's hard to figure out Christmas gifts and all of that; I have to make a good bit of inquiries with my brothers to figure out what to do. But one thing I did this year was create a framed picture. I know each of them has certain interests, like one's really into electric guitar right now, and one of my nieces is really into Frozen, or whatever. So, taking something from each of those: I took an artistic picture of someone shredding guitar on stage, and then basically deepfaked my nephew into the photo and stylized it. I did all that with Clipdrop from Stability. You can go in first and just remove the background from the image, then generate the scene, then face-swap the thing, and then you can clean up certain areas, or remove objects, or change the lighting, and all of that happened very seamlessly for me. So it's just interesting, even in that context, that's what I turn to.

You know, it's funny. I did the same: a lot of picture generation and stuff, lots of raccoons and foxes in various scenarios. I'll share a two-second touching moment that just caught me off guard. I work with a wildlife vet in the animal nonprofit, and they do a lot of stuff for free, because we're a nonprofit and we're doing all this good, and they love these animals, so they just help, and I'm very thankful for that. One day I had come home after taking some animals in for them to help, animals that were beyond my ability, and I just sent them a text saying thank you. But in the process, I went and generated an image that showed their veterinary practice and some animals around it, and I sent it. It was like a two-second thing for me, just, "Oh, I hope you like this." They received it, but they're not people in AI; they're not as tech-focused, obviously, as we are. And she had it printed out and framed, and put up in the main office where people could see it. I was just like... that never occurred to me. But it made me realize that even with these completely non-tech things, AI is something that can help people who are not focused on AI still find value in their activities, in ways that maybe some of us who talk about this all the time don't think about. So yeah, it was kind of an amazing moment, to see something so trivial turn into something that was meaningful.

Yeah. And one major change for me... In the past, those who have listened to the show for some time might be aware that I've always coded in Vim. That's been my IDE of choice. This was the year that I was finally motivated to... I actually changed my editor. I'm using VS Code now.

Oh my, he came over! I can't believe it.

Yes, with Codeium. I've tried VS Code in the past, and I was kind of like, "Okay, cool." There are things like the searching, or completion, or function-finding stuff, and that's all useful, but you can do all that in Vim easily enough, and I know there are also integrations of AI stuff in Vim. But I found that really tight workflow, and having even a chat interface within the code through Codeium, was actually just so efficient for me. I felt like I was able to be way more productive as an individual contributor this year, even in ways that would have been very difficult for me before: writing different TypeScript stuff, or other things that are not natural for me, that I don't know a lot about. Whether it's that panel in VS Code where Codeium is, or hopping over, usually, like you, I also have my ChatGPT window up, and I'm asking questions back and forth there. So it's a combination of these things. Maybe not completely seamless, but it's just so efficient, and I love it. You kind of saw that progression; it was really powerful for developers this year, I think.

Yeah. And then, recently, you're seeing more privacy-conserving options popping up. There's a project called continue.dev, or Continue, I'm not sure which they prefer. It's a sort of open source integration of this kind of VS Code type of interface, but you can choose the model that you want to use; you can integrate an open model like WizardCoder or Code Llama or something like that. So I think it's cool that there's a lot of really seamless configuration of that for individual contributors. And the feeling I never got was, "Oh, this is taking over what I'm able to do." By embracing it, I was able to be so much more productive as an individual contributor.

I was too. I mean, this year... I'll have periods where I'm coding and periods where I'm not, and it goes in and out. This year was definitely a dip back into coding, and there's always the kind
of catch-up-to-where-you-were moment when you've taken time off. But it was different this year, because by embracing these tools we're talking about, it let me catch up, and it let me do things that would have taken time and that I would have struggled with before, by using all these new, amazing tools. And we're seeing that across so many things. You don't have to be a developer to benefit from this. So I think 2024, hopefully, will be the year when people discover productivity, instead of just entertainment. Because most of the things I'll see on Facebook are from people who are not really technical and have discovered how to generate images and stuff like that, and they'll do that, but they haven't learned how to really change their lives in the way we've been talking about. I'm hoping that starts to transform folks, and takes away maybe some of the fear of AI. Because that was the other thing I really noticed a lot this year: fear of AI. I'd walk into a room, ready to talk with anybody about AI and all these cool things, and I kept hitting walls of fear with people who are not in the industry. Every time, it surprised me, I think because I'm so upbeat about things. So that was another thing: maybe 2024 is the year, I'm hoping, when some of that fear gets mitigated, and people discover productivity instead.

Yeah. I mean, there were definitely a good number of things in 2023 that led to a chaotic feel in the industry, whether that be the hiring and firing of Sam Altman, or disclosures of data breaches of one kind or another. We ended the year, of course, with The New York Times... I don't know if it was 2024 or 2023; I'm assuming they had to file things in 2023... where The New York Times is suing OpenAI, which is definitely in the news now, over copyright. So yeah, if those are the sound bites that you're getting, it definitely doesn't lend itself to thinking that this space is reliable and safe and trustworthy.

The flip side of the coin on the fear is that this was also the year we saw significant policy and regulation initiatives being put in place. We had the executive order that came about in the US; within Europe, they had the AI Act late in the year, and that made a big difference. We have talked about a lot of this on the show, by the way, and I would refer listeners back: you can find a dedicated episode on many of these things, so look back through that. But in general, and I mentioned this on a previous episode, when I go hang out with family or with folks that have nothing to do with AI, they don't know what to believe. Another thing that got pointed out to me several times is that you'll have big names in our industry, big enough to make it into the mainstream: people like Yann LeCun, who famously considers it ridiculous that there's fear of AI, and then you had Geoffrey Hinton on the other side, another one of the major names, who left Google so that he could talk honestly about the dangers of AI. So you're talking about two global luminaries in the space, who were actually recognized together recently as pioneers, and yet they have polar opposite views. I think that's hard if you're in the industry, but if you're not in the industry, you have a lot of trouble trying to figure out who you should believe when you get a New York Times article or something like that addressing these issues, and you go, "How do I handle that?" So I'm hoping that in 2024 maybe we can pick up a few new listeners, maybe some of those who are interested in the topic but don't work in the industry, and maybe they can get educated a little more on what the space looks like.

Yeah. Well, thinking about more forward-looking things, into 2024: I know we want to talk about some of the hot takes, or non-hot takes, that people had in making predictions about 2024. Before we do that, I'm wondering, Chris, if you have any hot take or spicy opinion that's maybe not represented in what everyone else is saying? I can give mine first, and no worries if you don't have one.

Mine: I saw a lot of people posting on Twitter and LinkedIn, "Here are my predictions for AI in 2024," mostly all of them having to do with generative AI, and utilizing those models as the key piece of a workflow. I started a company to do this, so I don't disagree that that's going to be a big focus. But maybe my spicy take, which is different from many people's: there were a couple of takes I saw that said something about the software engineering element of building out these systems being a key piece of what will be important in 2024, not just making prompts. I would build on that a little and propose that some people are really going to win by combining the, quote, "traditional" data science and machine learning algorithms, models, or systems with generative AI systems, in a sort of hybridized way. I say that because in our own client work I've found that to be very much the case, and a very powerful approach. Say we're generating responses to customer emails or something, and I don't want to generate a response when the customer is really frustrated; I'd rather a human respond in that case, and not the AI, right? Well, the best way to figure that out, I think, is with a sentiment analysis model, which we know how to do really, really well, and which we can run on a CPU at very little cost. Then we can use the generative AI to answer when it makes sense, maybe even informed by the sentiment label. That's only a very simple example, but combinations of recommender systems, gradient boosting machines, and time series analysis with generative AI models, either large language models or other types of generative models: I think that's a really powerful combination that many people are ignoring. It's like they've moved on from the past; now we're in this zone. I have the opinion that we're going to see a bit of a resurgence of that in 2024, combined in interesting ways, in hybridized systems. You may not see it in the news as much, but on the battle lines within enterprises I think it's something you're going to see a lot.

I think that's not only a fantastic prediction but a very practical one as well. You caught me off guard with the word "spicy" a little bit. There are many predictions I would make, but many of them are fairly mundane, in alignment, kind of the logical thing, so while you were talking I wiped all those off the slate, because you said "spicy" and "different." So I'll make two. One: I think Prediction Guard is going to really take off in 2024.

Let's hope!

I'm sure it will. But here's my spicy one, because that first one was just a given, inevitable; you've done a great job with that. The spicy one is: I think there is generally going to be a resurgence of interest, which we started to see develop again in 2023, around artificial general intelligence, AGI. I'll tell you why I'm predicting that: because we have seen in the past year these models make such a
leap, depending on how you want to measure what a model is capable of, in terms of different measures of intelligence. People talk about them being as intelligent, superintelligent, almost as intelligent, and I think it depends on what kind of metric you're using. But as a generalization, looking at all of those, we're seeing models that are incredibly productive, and if you're measuring intelligence in terms of productivity, and comparing that against what a human could reasonably do in the same time period, we're seeing output from these models that's just amazing, which is why you and I have integrated them so heavily into our lives. So take that, for a moment, as a measure of intelligence. Then you say, well, there are roughly a dozen different ideas on what consciousness would be; they don't agree with each other; we've made it nowhere. But there's a lot more fear now, and fear tends to drive priority, as I've learned in the industry I'm in. The general fear out there is: when you have such capable models, there is a worry about what we don't understand. We see in nature, all around us, that consciousness arises in animals all over the place. It's probably mathematical in nature, but we don't have anything. So I think there will be a resurgence of research, and I think that research will not come in the AI space; it will come in the neuroscience space, trying to understand. Because the big fear is: what happens when we stumble upon it, when you already have such productive models? I've run into that fear over and over during 2023. So I'm predicting, not terribly practically, that there'll be a focus, at least in some quarters, on how we ensure that we don't hit a moment that comes with big surprises, in the large. I like that we have very, very different predictions on this one, but that's my spicy prediction.

This is a Changelog News break. curl creator and maintainer Daniel Stenberg documents his frustration with recent AI tooling advancements. Quote: "I have held back on writing anything about AI or how we do not use AI for development in the curl factory. Now I can't hold back anymore. Let me show you the most significant effect of AI on curl as of today, with examples." End quote. Daniel is clearly of the opinion that we haven't gained much of value from generative AI tooling, but he does seem more optimistic about the future than he is about the present. Quote: "I am convinced there will pop up tools using AI for this purpose that actually work better in the future, at least part of the time, so I cannot and will not say that AI for fighting security problems is necessarily always a bad idea. I do, however, suspect that if you just add an even so tiny intelligent human check to the mix, the use and outcome of any such tools will become so much better. I suspect that will be true for a long time into the future as well." End quote. My mind is open and willing to be changed, but I'm with Daniel here: the human touch is absolutely necessary today, and I suspect that will remain the case for much longer than some would have us believe. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email, with even more developer news worth your attention. Once again, that's changelog.com/news.

[Music]

Well, Chris, one of the things we did leading up to this conversation was take a look at dozens of these "this is what I'm predicting for AI in 2024" posts on Twitter and LinkedIn, and crystallize, or distill, some trends in what people were predicting. So I'm going to take what I kind of
distilled down from all of these posts, and I'll just put it out there, and we can comment on any of them. Then it may be interesting to look at a couple of these from specific people, because a lot of people have been making these predictions. I did not do this with AI, but if you actually look through the internet at these posts, you'll see some trends pop up in what people are predicting, and I think both you and I looked at these and said, "Yeah, these are kind of what you'd expect to be predicted," based on what we've seen and the conversations we've had recently. So, the common points predicted by many different people across the interwebs: I put them in five categories, and I'll just read them off, and then we can comment on any of them if you want.

Number one: RAG, or retrieval-augmented generation, will continue to be a focus, and will see various improvements. Number two: open models will beat GPT-4 in 2024. Number three: productivity and work will be enhanced by AI, rather than replaced by AI. Number four: multimodal models will be more of a focus in 2024. I actually think that was one that I predicted last year when we did our predictions, so maybe I was a year off. There you go, I was a year off. And number five: there'll be more focus on small language models, rather than large language models, because of economic and compute efficiency. So those are the five distilled out of a bunch of different Twitter, LinkedIn, and blog posts. Do any of those strike you as particularly interesting, Chris?

I agree with all of them, and these were some of the ones that I disregarded for the spicy one that I made. They would probably make half of our audience roll their eyes... maybe all of our audience.

What's the good in listening to a podcast if you can't roll your eyes from time to time?

But yeah, I think these are all very, very practical, fairly safe predictions. I think most of us who follow the industry closely, and since they came from many posts, would tend to say, "That's the thing," and I'm looking forward to that. The multimodal thing in particular is something I've been waiting for; in 2023 I was like, "Okay, but... okay, but... come on." So yes, I agree with everything that's there, and I think that's the logical progression. I would be very surprised if they don't all come true this year.

Yeah, probably a lot of these are a given. There are definitely some open models that already, quote-unquote, beat GPT-4 in certain respects, on certain tasks: generating SQL queries based on a schema, or doing a particular thing in a language that's not English, or specific domains or other specific tasks. I think you already see that to some degree. Now, GPT-4 is this sort of general-purpose model that does all of these things at a pretty incredible level, but I think we'll see open models get much, much closer to that. You've already seen a lot of that hinted at with models like Mixtral from Mistral, which is a mixture-of-experts model, similar in that mixture-of-experts sense to GPT-4. I think we're already seeing a lot of that happening.

To your point, GPT-4 remains king at the moment, but it's not king at everything, and different taskings work better elsewhere. There are some things I've found Llama 2 does best at; there are things I've found Gemini, which is still quite new, just a few weeks old from its release, is good at. I think right now I'm getting better Rust out of Gemini than any of the others, and so that's one of those things where we're kind of learning
what model to go to for, different tasks um with gp4 probably, having the best overall return rate in a, generalized sense still but that will, certainly change um this year you there, will be it will probably change multiple, times I found some of the comments on, specific ones of these interesting from, particular people that are especially, well positioned to comment on some of, these items so for example the number, five one the focus on small language, models um but also with the perspective, of becoming more economical and cost and, compute efficient CLM from hugging face, the CEO made a video which is really, nice I I recommend everyone watch that, on Twitter or other places that it's, that it's posted but he he he made some, comments about his prediction that one, of the hyped AI companies certainly a, lot of them now would go bankrupt in, 2024 or get acquired for a low price and, he tied that in with the comments along, the lines of cost efficiency and, focusing on cost of of running these, models because yeah you you likely have, a lot of start ups that have raised big, money their compute costs are probably, astronomical because they're running, these large models at scale and hoping, that their margins get get better over, time but once you make the shift to open, models and cost efficient models that, may not work out in their favor and and, so the ability for people to run models, in their own infrastructure run more, cost efficient models that's not going, to play out well for certain people but, it will play out well generally for the, costs of running these sorts of systems, and Enterprises whether that be a it, still could be a software system that, runs llms and is self-hosted within an, Enterprise but it's going to be much, much more cost efficient to do that, especially kind of for those that are, wanting to pull some of that in not rely, on external systems be more privacy, conserving not have data leave their, infrastructure that's going to 
be more, and more possible and so um yeah I I, thought it was interesting how Clen tied, together some of these of his, predictions around yes being more cost, and compute efficient which is a benefit, to the climate for those that are, thinking about those things but also, cost efficient in terms of Enterprise, and operational costs and how the focus, on that will not work out that great for, certain of these kind of hyped AI plays, kind of as a followup to that I saw it, was a few days ago and I'm going to, paraphrase cuz I don't have it in front, of me but he uh did a social media post, that basically was an appeal to teams, out there and saying listen if this year, you're at the end of your financial run, and you want to keep doing the work and, you want to keep your team together, reach out to us at hugging face and, maybe maybe you can join our team and, keep doing uh some of the same stuff in, that way and you know with our with our, infrastructure so I think and which, which I think is a very um natural, followup to him pointing out that that, we'd see some crashing and burning, otherwise uh in there so and and a smart, move on his part as a CEO yeah yeah, definitely um it's great to see that I, think there will be some of that some of, the hints of that this year I I don't, think we're in that sort of like hire, lawyers and consolidate phase yet we're, still in that kind of building and, Engineering phase but just the economics, of how things are shifting will will, shift I think in in 2024 which will be, interesting yeah I think it'll be it the, finances of it all will matter going for, instead of just building and building, you we'll see a building but a building, with a practical eye on how do we, sustain this over time so yeah for sure, and I know one of the other ones that I, think is very well positioned to make a, comment on the things that he was making, a comment on is uh Jerry from llama, index who was on the show well Clen was, on the show too 
but Jerry more recently, um and of course uh llama index is one, of the key Frameworks that's being used, within Rag workflows and he also made, one of the comments around the way he, phrases it as every AI engineer still, needs to have strong software, engineering fundamentals so shipping, this is quoting him Shi an llm app to, production reduces to software, engineering and clean extensible, abstractions testability monitoring and, production Etc so yeah I think that that, Insight is very fair and kind of gets to, this really need for software, development practices to kind of gather, around the llm practices and model calls, within 2024 so I I'm glad that you, brought that set up just as a quick, add-on to that we've seen a lot of kind, of AI specific language around you know, producing AI capabilities and such as, that but uh you know we a lot of phrases, have been coined in that way I think, that's important is that at the end of, the day AI remains um a really cool new, capability within the larger software, space and to do anything with it you, have to have uh a software capability, and that those two are gradually merging, and and someday when we're past the kind, of the the coolness of the AI and we're, all just like oh yes it's we've been, doing this for a while and it's not, quite such a big deal uh it'll be, software again and all software will, have it and it will just be another, aspect of software we're not there yet, uh we're very much in the cool space uh, at this point but software skills remain, important uh and some of those may be, human- driven and some of those may be, driven by the software with models but, that doesn't change going forward you're, still getting any, yeah I'm looking forward to learning, with you in 2024 Chris and talking, through whatever comes which will, certainly be different than what we just, predicted um as is always the case every, year that we try to do this it's always, different but I am looking forward to, 
navigating that journey and um thank you, to our listeners for being loyal and, engaged in 2023 and really happy to, continue bringing you this content and, uh learning with you all as well in 2024, if you haven't yet make sure you go to, changel log.com community you can join a, slack Channel um where you can chat with, us if you like um and connect with us on, Twitter and Linkedin and and all the, places where Blue Sky yeah exactly blue, sky and uh we'll we'll love to hear, guests that you want on the show or, topics that you want discussed and um, chat with you about all the cool stuff, that that you're doing so, thanks for a great 2023 and happy New, Year 2024 Happy New Year to you too and, everyone, [Music], listening all right that is practical AI, for this week subscribe now if you, haven't already head to practical AI FM, for all the ways and join our free slack, team where you can hang out with Daniel, Chris and the entire change log, Community sign up today at practical ai., fm/ Community thanks again to our, partners at fly.io to our beat freaking, residence breakmaster cylinder and to, you for listening we appreciate you, spending time with us that's all for now, we'll talk to you again next, [Music], time he |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Open source, on-disk vector search with LanceDB | Prashanth Rao mentioned LanceDB as a stand out amongst the many vector DB options in episode #234 (https://changelog.com/practicalai/234) . Now, Chang She (co-founder and CEO of LanceDB) joins us to talk through the specifics of their open source, on-disk, embedded vector search offering. We talk about how their unique columnar database structure enables serverless deployments and drastic savings (without performance hits) at scale. This one is super practical, so don’t miss it!
Leave us a comment (https://changelog.com/practicalai/250/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Chang She – Twitter (https://twitter.com/changhiskhan) , GitHub (https://github.com/changhiskhan) , LinkedIn (https://www.linkedin.com/in/changshe)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• LanceDB (https://lancedb.com/)
• Episode #234 “Vector DBs beyond the hype” (https://changelog.com/practicalai/234)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-250.md) | 147 | 2 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app server and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack; I am CEO and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing good today. How's it going, Daniel? Oh, it's going great. We were just remarking, before actually starting the recording, that one of the great things about doing these episodes is that we get the excuse to bring onto the show the coolest open source tooling and other projects that I'm using day to day, and get the chance to interact with them. One of those is LanceDB, and we're really excited today to have with us Chang She, who is the CEO and co-founder at LanceDB. Welcome! Thanks. Hey guys, super excited to be here; thanks for having me on. Yeah, well, first off, congrats on all your success. I was scrolling through LinkedIn and saw a video of LanceDB up on the NASDAQ screen in Times Square, so that was cool to see; that must mean good things, I'm assuming. Yeah, it was made possible, um, via Brex and also Essence VC, so big thanks goes out to them. Cool, cool. Yeah, well, I mentioned I've had a chance to look through some of what you're doing, and actually to use it day to day. That was actually a result of a previous episode, the one titled "Vector DBs beyond the hype" with Prashanth Rao.
I think the question that we asked him was: oh, there are all these vector databases, and you've compared all of them; what are some of the things that stand out, or some of the vector databases that stand out, in terms of what they're doing technically or how they're approaching things? One of the ones he called out was LanceDB. In particular, he was talking about the on-disk index, and I'm sure we'll get into that a little bit more, but that's how I got into it, so I recommend listeners maybe go back and get some context from that episode. But as we get into things, could you maybe give us a little bit of a picture as to how LanceDB came about? There's a lot of hyped vector database stuff out there, and people might not realize how these things were developed, how they came about, and what the motivation was, so could you give us a little bit of a sense of that, at least for LanceDB? Yeah, absolutely, and first I wanted to also give a big shout-out to Prashanth as well. As you were saying, there's a lot of hype and noise in this area; there are a lot of different choices, and for users and developers who are building generative AI tooling and applications, it's always kind of confusing which one is good, and whether you should listen to the marketing from one tool versus another. So it's great to see someone with an engineering background, who can write so well, actually take the time to just try out a ton of different tools, interview a bunch of different companies, and come to his own conclusions. I'm super happy and excited that he's a fan of LanceDB, and we hope to make it better for him and also for all of our users. So, back to LanceDB: we started the company two years ago at this point, and we didn't start out as a vector database company, actually, because, if you remember, ChatGPT is barely one year old. Yeah, the dawn of AI. Yes, exactly. And so the original motivation was actually serving companies building computer vision, building new data infrastructure for computer vision. I had been working in the space for a long time; I'd been building data and machine learning tooling for almost two decades at this point. I started out my career as a financial quant, and then I became involved in Python open source; I was one of the original co-authors of the pandas library, and that really got me excited about open source, about Python, and about building tools for data scientists and machine learning engineers. At the time (this was in 2020 and 2021), what I observed at the company I was working for, Tubi TV (a streaming company, so we dealt with machine learning problems both for tabular data and for unstructured data like images and video assets), was that anytime a project touched this multimodal data for AI, from the images, to the text for, let's say, subtitles or summaries, to the poster images, these projects always took a lot longer, were much harder to maintain, and were difficult to actually put into production. At the same time, my co-founder Lei, whom I had met during my days at Cloudera, was working at Cruise and dealing with the same issues. So we put our heads together, and our conclusion was that it's not the top application layer, or workflow layer, or orchestration layer that's the problem; it's the underlying data infrastructure. If you look at what's been out there, Parquet and ORC have been around, and they've been great for tabular data, but they really suck for managing unstructured data. So we essentially said: hey, what would it take to build a single source of truth where we can toss in the tabular data plus the unstructured data, and give much better performance at a much lower total cost of ownership, and an easier foundation to build on top of, for companies dealing with a lot of vision data? This comes in handy when you want to explore your large vision datasets for, let's say, autonomous driving; it comes in really handy for things like recommender systems. So we started out building that storage layer in the open source, and that took about a year's worth of effort to really get to a shape that is usable, kind of like Parquet, ORC, and the other formats and tools. That was when generative AI burst onto the scene and became a revolutionary technology. What happened at the time was, we had originally built a vector index for our computer vision users, to say: hey, let's deduplicate a bunch of images, or let's find the most relevant samples for training, for active learning, and things like that. And it was that open source community that discovered: hey, this can be really good for generative AI as well. That's when we separated out another repo to say, hey, this is a vector database, because it's much easier to communicate that to the community than to say, hey, you're looking for vector search? Use this columnar format. And that's how we got onto this path. A quick question for you; it's really a follow-up to something you said a couple of moments ago. When you were talking about going through the analysis on the top workflow versus the infrastructure, and you said y'all concluded infrastructure, I was just wondering how y'all came to that determination, for those of us who are not deeply into that thought process; I was wondering where your head was at when you were doing that. Yeah, it wasn't an easy decision or conclusion. Thinking back, it was, you know... so, it was 2022;
it initially seemed pretty crazy when we first came upon it, right? If you think about it, it's like: why would you make a new data format in 2022? Parquet has been working so well. I think it was really observing the pain of our own teams, and we also went out and interviewed a lot of folks managing unstructured data. For them, the data was split into many different places: the metadata might be managed in Parquet, the raw assets are just dumped onto local hard drives or S3, and then you might have other tabular data managed in other systems, and they would always talk about how painful it is to stitch everything together and manage it all together. Some of the outcomes are that it's really hard to maintain those datasets in production: you have a Parquet dataset that has the metadata and then links out to S3, or something like that, for all the images, and then somebody moves that S3 directory, and now all of your datasets are broken. Or we would interview folks, like, hey, what are you doing to explore your visual datasets, and they're like: well, you know, I use a MacBook, and there's this app on it called Finder, and if you single-click on a folder it shows you a bunch of thumbnails. It's this sort of horrible way to actually work with your data, but because it was so hard to manage all of that, machine learning engineers and researchers were stuck with these subpar tools. You mentioned this transition in thinking, from some of the original use cases that you were talking about with computer vision, to this world of generative AI that we're living in now. From my impression, from an outsider's perspective, it seems like LanceDB has positioned itself very well to serve these generative AI use cases, which I'm sure we'll talk about in a lot more detail later on. I'm wondering, from your perspective: how has that overwhelming demand for the generative AI use case changed your mindset and direction as a company, a project, and open source tooling, and how do you envision the use cases you're targeting moving forward? I think certainly generative AI brought in a lot of different changes and new thinking. One was the focus around use cases of semantic search, and retrieval in general; I think with the advent of generative AI, retrieval becomes much more important and ubiquitous. For us, what that means is increased investment in getting the index to work really well and be really scalable, then making the data management piece work really well, and integrating with frameworks for RAG, for agents, and for generative AI in particular. When we started out, inevitably we were dealing with multi-terabyte to petabyte-scale vision datasets and things like that, and we're still dealing with a lot of that, but for generative AI there was a renewed focus on ease of use, because a lot of users are coming in who don't have years of experience in data engineering or machine learning engineering, and what they're looking for is an easy-to-use, easy-to-install package that doesn't require you to be an expert in any of these underlying technologies. That was sort of the motivation behind us making LanceDB, the vector database, one, open source, and two, embedded, because we felt like there were lots of options on the market that required you to figure out: okay, what is the instance I need, how many instances do I need, what type, okay, now I have to shard the data, and blah blah blah. Coming from that data background, what I had been working with a lot is, you know, SQLite or DuckDB, which just run as part of your application code, just talk to files that live anywhere, and are super easy to install and use. That's sort of what gave us the inspiration to make an embedded vector database. You had just gotten into this idea of embeddings... or, sorry, embedded databases (well, embeddings are related, but that's another topic), the idea that LanceDB is embedded. You mentioned DuckDB and other things that operate in the same sort of sphere. I'm wondering, for those that maybe are
of the, Big Value propositions that these, technological choices bring to users of, Lan CV so number one ease of use number, two hypers scalability number three cost, Effectiveness and then number four the, ability to manage all of your data, together and not just the vectors but, also if you think about it the metadata, and also the raw assets whether they're, you know images text or you know videos, could you kind of describe uh typical, use case of a developer doing this where, you're kind of taking those features, that are distinguishing Lance DB from, you know other possibilities other, competition but just talk about what, that workflow looks like uh you know or, if there is a major one or a couple and, just kind of get it very grounded so, somebody that's listening can kind of, understand how they're going to do it, from A to Z when they're integrating, Lance DB into their workflow so there's, a couple of sort of prototypical, workflows that we see from our users I, think at the smaller scale for Lan CB, you know you're installing it via like, pip or mpm or something like that and in, general you get some input data that, comes in as like a panda data frame or, maybe a polar data frame and then you, interface with an embedding model you, can do that yourself or you can actually, configure the Lan CB table to say hey, use um open AI embeddings or hey use uh, these hugging face embeddings let see we, can actually take care of all that so, it's a pretty quick sort of data frame, to Lan CB and then you can search it and, then that comes out as you know data, frames or python dicks or things like, that that plugs into the rest of your, your workflow that are likely data frame, or pedantic or python Dick based so, that's number one and then kind of, number two is really these large scale, use cases where some of our users have, you anywhere from like a 100 million to, multiple billions of vectors in one, table and that's a much bigger, production deployment and 
typically what, makes Lance TV stands out in that area, is one it's very easy for them to, process the data using a distributed, engine like spark and they can write, concurrently and get that done really, quickly I think we're one of the few, that offers GPU acceleration in terms of, indexing so even for those really large, data sets you can index pretty quickly, and then number is because we're able to, actually separate the compute and, storage even at that large Vector size, you don't really need that many cre noes, like you can actually just have one or, two like fairly average and commodity, crew nodes that runs on your storage of, choice depending on what latency, requirements you want and then just have, a very simple architecture for these, types of architectures the the query, nodes are stateless they don't need to, talk to each other so when you need to, scale up or when a node drops out and, has to come back in there's no sort of, leader election there's no coordination, it really lowers the complexity of that, whole stack so another great example of, this kind of architecture and the, benefits that it brings is neon the neon, database so I think um Nikita who's the, the founder recently had a a good, Twitter Thread about the difference, between neon and other databases and uh, he called it you know shared data versus, shared nothing architecture and I think, that's also what we kind of strive to, deliver in L CB versus other Vector, databases yeah I I know one of the, things that um I really enjoyed in, trying out a lot of things with Lance DB, is I can pull up a collab notebook right, and try out like I can import Lance DB I, can import like a subset of the kind of, database that I'm going to be work or, the data that I'm working with it all, runs fine I don't have to like set up, some client server type of scenario and, then when people ask well how are you, going to how are you going to push this, out to a larger scale the appeal of just, saying hey 
well we can just throw up, this Lance DB database on S3 and then, connect to it that's a very appealing, thing for people because also those, storage layers are available everywhere, from on Prem to Cloud to whatever sort, of scenarios you're working with so it's, very very flexible for people could you, explain a little bit because this is, something like I've been asked a couple, times by so this is my selfish question, because I have you uh on the line so, you're helping me with my own day-to-day, work but when I'm uh when I'm talking to, like some people um clients that I'm, working with I'm like oh we can just, throw this up on on S3 and then access, it usually their question is something, like well like because they have in, their mind a database has a compute node, and like the somehow the performance of, queries into the database is tied to the, sizing of that compute node and maybe, like how that's sort of clustered or, sharted across the the database and then, like this idea oh I'm just going to have, even just a Lambda function that, connects to S3 and does a query right, like this kind of in some ways it like, breaks things in people's mind and so a, lot of times their question is like how, does that work how can a query to this, large amount of data be efficient when, the data is just like sitting there in, in S3 or in another place so could you, help with help me with my answer I guess, is what I'm asking yeah absolutely so, this goes back to um what we talked, about earlier with separation of, computing storage and if you've been, sort of steeped in like data warehousing, data engineering land this has been a, big Arc of data warehouse innovation in, the past decade by allowing us to scale, up the storage versus the compute, separately this is the thing that makes, these system seem magical where you can, you can process a huge amounts of data, on what seems like you know pretty, commodity or pretty weak compute and so, the analogy that I like to 
make with, these situation is kind of like a lot of, us are familiar with let's say like Duck, DB demos or videos and you could see, instance where duck DB is processing you, know hundreds of gigabytes of data on, just a laptop and in a very fast amount, of time and they are able to spit out, results and almost, interactively and there are companies, you know from like mother duck to you, know there's a new company called bow, plan that is looking to essentially, distribute duck DB queries on L AWS, lambdas it's basically the same thing, it's all about the separation of Compu, and storage and that's only possible if, you have the right underlying data, architecture for storing vectors and the, data itself and just for someone that, like is not a database developer can you, describe in any words like the, generalities of that data structure that, enables such a thing yeah so it's two, things one is the one is the columnar, format so typically you know from J to, machine learning you can have very wide, tables but typically a single query only, needs like a couple of columns so column, or format allows you to only have to, fetch and look at like a very small, subset of that data number two is the, that cner format needs to have be paired, with an index like the vector index in, this particular scenario and that Vector, index in order to give this separation, of compute and storage has to be based, on dis so you have to be store the data, on dis not force the user hold, everything into memory and then be able, to access that very quickly and then, number three is how to connect that, index with the colum or format so a, colum or format like par does not give, you the ability to do fast random access, so even if you had that good index using, par you would not be able to get, interactive performance in terms of, queries and it's only by having a new, format like lands that can give you fast, random access and fast scans that you, can successfully put these two 
So those are the three big pillars, I think, in our data architecture that make this possible. While we were talking here, I was going through GitHub, your repo and stuff, and was surprised at something that prompts the next question. It looks like you're really addressing a wide range of different types of needs. So there's obviously Python, as you would expect, but you have JavaScript, and then I was delighted to discover that there's a Rust client in there, which... when I'm not doing AI-specific things, most of the time that's my language of choice these days. Could you talk a little bit about two things: the broader picture, what you're trying to achieve, how you choose what languages to support and how you're getting there; and then, if you'll scratch my itch, what is your intention with that Rust client? Is it ready? What does it do? Just because I'm fascinated with that, sorry. Yeah, absolutely, I love talking about Rust. The Rust package is actually not a client; the core of both the data format and the vector database is actually in Rust. So the Rust crate that we have is actually the database, the embedded database. And we actually build, for example, the JavaScript package the same way: it's not just a client, it's also an embedded database in JavaScript, and that is based on top of the Rust crate. Kind of like you have in, say, Polars or something like that: you have a Rust core and then you connect that into JavaScript. So we had actually started out in 2022 writing in C++, because Parquet is written in C++. You know, serious data people and database people write in C++, right? Until they find Rust, of course. And it was a hack project during Christmas time at the end of 2022, where we did a hack project for a customer, actually,
and where we had to partially reimplement the read path for the Lance format. And what we found was it was just so good that we decided to actually rewrite everything in Rust. I think the biggest things were, one, we were a lot more productive: we rewrote roughly six months of solid C++ development in about three weeks with Rust, and this was us learning Rust as beginners as we went along. A lot of that initial Rust code has again been rewritten over the past year, but it just made us feel a lot more productive. And then number two, the safety that Rust offers you has been amazing. With C++, every release just didn't have a good feeling; it was almost like, you know, where's that next segfault going to come from? Whereas with Rust, we felt very confident making multiple releases per week, with major features, and we did not see anywhere near the sort of issues that we saw with C++. So everything has been really great. And I know that Rust has become really popular now, even with vector databases: Qdrant, I think, is Rust; Pinecone, they're not open source, but they've publicly said that they've written their whole stack in Rust as well. So, one more question from you along the same line before I let it go, because we've hit that sweet spot that I love. Do you think, and this is not specific to LanceDB, but based on what you're saying, clearly you're thinking ahead on these things... as we go forward, and you see both the AI applications and the different types of workflows and infrastructure becoming broader and more supportive of the multi-language aspect, of getting out of only Python, for instance, do you foresee that as a convergence, where you're seeing language agnosticism developing in this space as it has in other areas of computer science? Or do you think that we're still going to be kind of locked in on the current sets of infrastructure and
tooling, very Python-oriented, for the indefinite future? What is your thinking along those lines? So I think generative AI definitely changes the picture, in that there's a very large TypeScript/JavaScript community that has been brought into the arena to build AI tools. And I think this is an underserved segment, where it's not just vector databases: data tooling in general lags far behind in JavaScript/TypeScript land versus Python. And I think there's a real opportunity for the open source community to create good tools for this part of the community as well. I want to hear about some of the actual use cases that you've seen people implement with LanceDB, maybe ones that stand out, like, oh, this was cool, because whatever it was, they used it at scale, or it fits a very typical generative AI use case, or whatever. And then maybe something that surprised you; when you put a project out into the world, there are these things where, oh, I really didn't expect people to be using it that way, but that sort of makes sense. So do you think of anything that fits into one or both of those categories? The use cases for LanceDB in the community that I see fall into three or four large buckets. One is, of course, generative AI, RAG and things like that. And I think it's not so much the use of LanceDB that I think is really cool, but the applications that people build with it that are really cool and amazing. A lot of the applications people build that really take advantage of LanceDB are things where you need RAG to be very agile, where you need it to be tightly bundled with your application, where you can call this RAG from anywhere and have it return pretty quickly, without too much complexity. And so this is where I see a lot of folks, from your standard chatbots and
chat-with-documentation, to things like productivity tools, where they build things that help people organize their daily schedules, to much more high-stakes things in production, like code generation, or healthcare, legal and things like that. And there I think you typically see vector dataset sizes from the tens of thousands up to single-digit millions of vectors. And production means you really scale up both the number of datasets that you have and the number of vectors that you have. One of the cool things that I've seen that takes advantage of LanceDB and the Lance format uniquely is a code analysis tool that analyzes your GitHub repository and plugs it into a RAG-like customer success sort of tool. And what they want to be able to do is query the state of the database as of today versus yesterday versus a week ago, to see, hey, was this issue fixed or not, and what's still outstanding. And so LanceDB uniquely gives you this ability to version your table and also do time travel. Any vector database can do "give me the top 10 similar things to this input", but what LanceDB uniquely gives you the ability to do is say, give me the top 10 similar as of yesterday, or as of a week ago, and we do that automatically for you. And then I think the other big buckets are e-commerce, search and recommender engines. This is the traditional use case for vector databases, and there you tend to see much bigger single datasets: say I want to store item embeddings, maybe that's up to a couple of million, up to 10 million, and some embeddings could get up to hundreds of millions. You don't have as many tables, but you potentially have very large tables.
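The versioning and time-travel capability described here can be sketched with a toy table in plain Python. This mimics the concept only, not LanceDB's real API or storage; the record layout and class name are invented for the demo. Every write snapshots a new immutable version, so the same nearest-neighbour query can be answered "as of" an older version:

```python
import math

# Toy versioned vector table: each append creates a new version,
# and search() can run against the latest version or any older one.

class VersionedTable:
    def __init__(self):
        self.versions = [[]]                 # version 0 is an empty table

    def append(self, records):
        # Snapshot: new version = old data plus the new records.
        new = self.versions[-1] + list(records)
        self.versions.append(new)
        return len(self.versions) - 1        # the new version number

    def search(self, query, k=2, as_of=None):
        # Brute-force nearest neighbours, optionally "as of" a version.
        data = self.versions[as_of if as_of is not None else -1]
        dist = lambda r: math.dist(r["vector"], query)
        return [r["id"] for r in sorted(data, key=dist)[:k]]

tbl = VersionedTable()
v1 = tbl.append([{"id": "a", "vector": [0.0, 0.0]},
                 {"id": "b", "vector": [1.0, 1.0]}])
v2 = tbl.append([{"id": "c", "vector": [0.1, 0.1]}])

print(tbl.search([0.0, 0.0]))                # latest view: ['a', 'c']
print(tbl.search([0.0, 0.0], as_of=v1))      # yesterday's view: ['a', 'b']
```

The code-analysis use case above is exactly this second call: the same similarity query, replayed against an earlier snapshot of the table.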
And then, of course, the last bucket is AI-native computer vision, either generative computer vision or things like autonomous vehicles, and there's a whole combination of more complicated use cases that it enables: active learning, de-duplication, things like that. And the thing that is very unique about the use case of Lance there is that companies are managing all of their training data in LanceDB and the Lance format as well. So you can use the vector database to find the most interesting samples, and then you can use the tooling on top of the format to essentially keep your GPU utilization high and keep your GPU fed very quickly during training, or if you're fine-tuning, or if you're running evals and things like that. Yeah, so cool. One of the things that has been most fun for me recently is this combination of an LLM, LanceDB and DuckDB, where you can create these really cool... so if I'm using an open LLM that can generate SQL queries or something, but I have all of these different SQL tables, what we're doing is putting descriptions of the SQL fields and tables in LanceDB, and on the fly matching and pulling those to generate a prompt, which goes to the LLM to generate the SQL code, which is executed with DuckDB. And this gives you the really nice natural-language-query-to-your-data type of scenario, which has been really fun to play with. That's really good to hear, actually. Sorry to interrupt, but you kind of nerd-sniped me, so I'll interrupt there. One of the things that's really cool about DuckDB is its extension mechanism, and I think they've also published an extension framework for Rust-based extensions. So we have a basic integration going there, and I think in the new year, what you can expect from us is that we're going to be spending a little bit more time to make that integration richer.
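The text-to-SQL pipeline described here (store schema descriptions, retrieve the relevant ones for the question, build a prompt, execute the generated SQL) can be sketched end to end. To keep this self-contained and hedged: word overlap stands in for vector similarity, `sqlite3` stands in for DuckDB, the table names and descriptions are invented, and the LLM call is replaced by a hard-coded answer:

```python
import sqlite3

# Table/column descriptions that would live in the vector database.
schema_docs = {
    "orders":    "orders table: order_id, customer, amount of each sale",
    "employees": "employees table: name, salary, department of staff",
}

def retrieve(question, docs, k=1):
    # Stand-in for vector search: rank descriptions by word overlap.
    q = set(question.lower().split())
    score = lambda t: len(q & set(docs[t].lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

question = "what is the total amount of each sale"
tables = retrieve(question, schema_docs)
prompt = f"Schema: {schema_docs[tables[0]]}\nQuestion: {question}\nSQL:"

# An LLM would produce the SQL from `prompt`; hard-coded here.
generated_sql = "SELECT SUM(amount) FROM orders"

# Execute the generated SQL (sqlite3 standing in for DuckDB).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INT, customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "ada", 10.0), (2, "bob", 5.5)])
print(con.execute(generated_sql).fetchone()[0])  # 15.5
```

The retrieval step is what keeps the prompt small: only the schema fragments relevant to the question are sent to the model, rather than every table definition.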
Meaning, our goal is for you to be able to write, like, a DuckDB UDF to do vector search, and then the results come back as a DuckDB table, where you can then run additional DuckDB queries on top of that. And the same thing with Polars. The goal is to essentially make it so that the vector database is no longer a thing that you even have to think about; people are generally more familiar with DuckDB or Polars as the tool that stitches together the workflow, so we just want to make that feel even smoother and more transparent. A couple of moments ago, when you were talking about the use cases, you were talking about autonomous vehicles and stuff, and I was wondering if we could pull on that thread a little bit more. It seems like it is a fantastic... Chris loves drones. Yeah, I love drones, and I love things that are not near data centers. I love things that are off on the edge, whether it be for inference or even training, with concerns that you may not have all the things that we're so spoiled with with our cloud providers out there. And it seems like there are many types of opportunities there. What's your thinking around that? Have you seen any use cases, any ideas for the future, in that kind of autonomous, on-the-edge world? Yeah, definitely. So some of our users are robotics or device companies, where they either collect data and write it as Lance on the edge, or they collect data as, let's say, protobuf or something like that and send it off to be converted into Lance for analytics, vector search, and so on and so forth. I think in this world you're going to know it better than me, but what I see is that, one, the data is super complicated. Especially with, let's say, vehicle types of use cases, you're
getting visual data from the cameras, you're getting point clouds from the LiDARs, you're getting time series data from the sensor readings over time, and then you've got manual input data from the auditors and the drivers that are sitting in the car. You're also getting metadata about the car, about the weather, about the geography, and all that. So being able to manage all that and query all that together, I think, will be super important for robotics and vehicles and any company that's putting things out there in the real world, generating data in the physical world. And I think, yeah, it's a really hard problem, but the potential is huge, because for AI we're going from this era of very canned question-and-answer to much more free-form question-and-answer, but it's still a little bit passive: you're asking it for information. What's really exciting would be marrying these generalized AI capabilities with a drone or a robot or something that can go out and be active in the real world. That gets me super excited about what's to come. I'm wondering, as we close out here, it's been a fascinating discussion... at the end here, could you just take a moment and make a few observations about what is exciting from your perspective right now in this sort of practical AI space, because that's where you're living? What excites you about, you know, whatever it is, the next six months, the next year, and what you think is coming as this tooling rolls out there further and further, and people learn to apply it better and better? What's exciting for you? That's a great question. I think there are lots of things that hold a lot of promise in the next six to twelve months. I think we'll see, one, this explosion of retrieval, kind of information retrieval, tools. So we already see a
lot of companies that are adding generative AI in, like, customer success management, documentation and things like that. And so I think we'll see a lot of applications providing value that can also be personalized: not just ChatGPT-style answers, but actually personalized to their own data or their own cases or things like that. And then, number two, I see a lot of successes in very domain-specific agents that are able to dive deep into legal, or healthcare, or some domain very specifically, and build things that seem sort of magical, whether it's compliance, or driving better outcomes, or creating things that would democratize a lot of these very deep expertise types of domains. And then, I think, a little bit further out are generalized low-code to no-code tools for you to build very sophisticated applications using generative AI, through code generation and, let's say, creative interfaces and things like that. So those are things I think will deliver in the short term. And then, personally, I love games, and I'm actually super excited about what GenAI brings to gaming. You know, we talk about open worlds and things like that, and this can be really open, where you could just get lost for a long, long time in a generative world. It's awesome. Thank you so much for taking time to talk with us, and please pass on my thanks to the LanceDB team for making me look good in my day job by giving me great tools that work really well. I appreciate what you all are doing, and yeah, I'm looking forward to seeing what comes over the coming months. I encourage our listeners to check out the show notes, follow the links to LanceDB, try it out, it only takes a few minutes, and I hope to talk to you again soon. Thanks so much. Thank you, Daniel. Thank you, Chris. It
was super fun talking with you guys, and if you have any feedback, please let us know. We hope to make you look even better in the new year. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next [Music] time
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The state of open source AI | The new open source AI book from PremAI starts with “As a data scientist/ML engineer/developer with a 9 to 5 job, it’s difficult to keep track of all the innovations.” We couldn’t agree more, and we are so happy that this week’s guest Casper (among other contributors) have created this resource for practitioners.
During the episode, we cover the key categories to think about as you try to navigate the open source AI ecosystem, and Casper gives his thoughts on fine-tuning, vector DBs & more.
Leave us a comment (https://changelog.com/practicalai/249/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Casper da Costa-Luis – GitHub (https://github.com/casperdcl)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
State of Open Source AI Book - 2023 Edition (https://book.premai.io/state-of-open-source-ai/index.html)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-249.md) | 18 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app server and database close to your users, no ops required. Learn more at [Music] fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I am the founder and CEO of Prediction Guard, and I'm joined as always by my co-host Chris Benson. How are you doing, Chris? Doing great, how's it going today? It is going awesome. I don't know if you heard, but it is the Advent of GenAI. I'm participating in this Advent of GenAI hackathon with Intel, so people are getting hands-on with a bunch of open-source models and different hardware. I've been in Slack all day answering questions and seeing cool prompts and seeing cool output, so it's just been a ton of fun. What's the most interesting thing you've seen so far? I'm just curious, before we go on. So we're only a day in; the first challenge was to generate a series of images that go in a sequence, kind of like a comic strip, that tell a narrative. There were some really amazing ones: one was a child growing up and then him having a son, and the images were really compelling and the narrative was really interesting. Very creative output is something that I've noticed, and today is all about chat, so we're going to see some chatbots popping up in the hackathon, and I'm really looking forward to that. Sounds like fun. Yeah, and you know, the hackathon is all centered around these openly accessible, or open-source, or permissively
licensed, generative AI models. I think it's really fitting, because we have with us Casper, who is a longtime open source enthusiast, but also one of the contributors to the recently published State of Open Source AI book from Prem. So welcome, Casper, it's great to have you with us. Hello, yes, great to be here. Well, I mentioned you're a longtime open source enthusiast. How did you get enthused about open source AI specifically? What was your own journey into open source AI, maybe leading up to this book and what it's become? That's a good question. I've been around for long enough that AI didn't really exist as a thing back when I got into open source, and it was honestly just purely a hobby; I never even considered it as a career. This must have been what, 15 years ago or something, and in fact I felt ashamed and embarrassed every time I was working on open source, because it felt like I should have been spending that time working on an actual career. It felt like it was just a toy. I had a very long commute between my home and workplace on a train, and I was just coding away on my phone; I actually installed Debian on my Android, and yeah, that got me hooked on open source purely as a hobby. And I mean, if you contribute enough, and you're happy making mistakes in public, eventually you build something that loads of people start using. It spirals out of control, and before you know it, it suddenly turns into a career. So I probably entered into this whole space in an unconventional way. I didn't intend to make things that would become famous, but they just wound up becoming famous, which is quite pleasant. I mean, there are pros and cons, because also, things that become successful aren't necessarily things that you expect to be successful. You can put a lot of effort into something, and the world determines it's not really of much value, and so they
don't use it, and something you barely put much effort into could explode. So that was my background. I have a kind of academic slant as well, so I did a lot of machine vision type things in university. I didn't really want to shoehorn myself into any particular one area, though, and also I didn't want to do pure academia; I much prefer industry, having stakeholders and actual products that you build at the end of the day. And I mean, there are pros and cons, definitely, to both, but that's how I wound up, like the rest of the industrial world, seemingly moving towards AI, because that's a buzzword and that's what everyone wants you to work on, effectively. So what started off as initially being machine vision, pre machine learning, became machine-learning-type machine vision stuff, and now of course LLMs are all the rage. So that's why we thought of doing a bit of extra research, trying to consolidate all of the noise out there, the various different blog posts, people effectively shouting into the ether, and we thought we might as well write a book and release some of our research into the wild, and get some feedback on that before we actually start building more things. Yeah, that's awesome. And you even allude to this in the intro to the book, this fast-paced nature of the field, and a lot of people feeling sort of FOMO, like, how do I even categorize all of the things that are happening in open source AI? So maybe one general question about the structure of this: Chris and I have worked through some of these categories in various episodes on the podcast, but sometimes it is hard to think about how you categorize all the things that are happening in open-source AI, because they do go beyond just models, but they include models, and a lot of things are sort of interconnected. So was it organic, in
how the structure of this book came together, or how did you come up with the major categories in your mind for what's going on in open source AI? And that's what I was really wondering as well. You literally said, Daniel, exactly what was in my head just now, so yeah, we're in tune. No, I mean, it is a big ask, because my philosophy in general is that the universe exists as a cohesive whole, and we split it up into different subjects, like physics and chemistry and math, just as a way for humans to actually parse everything that exists in small, bite-sized chunks. But they're not really independent subjects, and the same goes with AI; there are so many different categories of AI. The nice thing about working in the open source space is that there are lots of different people you can have conversations with and get feedback from, and everyone kind of chipped in their own ideas about how to break down a book into different chapters. Ultimately, I think what made the most sense is that it doesn't matter too much what those chapter titles are; it's more about the content within them being, let's say, not too repetitive, and actually distilling the ideas that people are talking about. And if you can do that really well, it maybe almost doesn't matter quite how you self-categorize things. But I would say Filippo Pedini is probably the one who came up with the actual final, let's say, 10 chapters. But then past that, in terms of actually writing those chapters, probably about a dozen people have worked on them, which is again really nice, that you can do this in the open source space: no single person is really the author of this book. It seemed fairly obvious to me, based on my own particular passion and research, that licensing should definitely be a chapter, and that's something that developers often neglect, because it's just sort of outside
their field of interest and expertise; it's just a bit of red tape that maybe they have to be aware of in the back of their mind. So yeah, I basically wrote the chapter on licenses, which I think everyone else was happy about, because nobody else wanted to do it. But I mean, it was effectively topics that we felt are big, major things that there's a lot of confusion over; maybe we ourselves were confused about them as well. Like evaluation and datasets: what's the best way to evaluate a model, anyway? So that seemed like a big topic, let's make that a chapter. It seemed fairly organic, coming up with these titles, and of course, as we were writing, again, the whole writing process was fully open source, we thought maybe we should split up a chapter. So we split up models into two chapters, let's say one for unaligned models versus aligned models. So yeah, it was an iterative process. On that front, I definitely hear the passion coming through for that licensing element, and I see that up front in the book. And maybe... so I'm also, we've mentioned on the podcast multiple times that people need to be reviewing these things, especially as they see whatever, 400,000 models on Hugging Face, and try to parse through them. But could you give us maybe the pitch for engineering teams or tech teams that are considering open models, but might not be aware of the various flavors of openness that are occurring within quote-unquote open source AI? Could you give us a little bit of a sense of why people should care about that, and maybe, just at a high level, what are some of these major flavors that you see going on in terms of openness and access? Right, yeah. I mean, I suppose first I should have a disclaimer, which is the quiet part that nobody usually says, which is almost a
counterargument: it might not matter, because in practice, nobody's going to sue you if you do something illegal unless you're fairly big and famous. That's just a harsh truth, and it's very frustrating that laws and enforcement tend to be two separate things. And there is a precedent in that you're not meant to create a law unless you know you can definitely enforce it, so to a large extent, a lot of these licenses out there are questionable in that regard. The other thing is, a lot of these licenses are not actually, let's say, tested in court; they're not formally approved by any government or legal process, so it's not necessarily legal just to write something in a license. You should probably be aware of recent developments in the EU, for example: they've proposed two new laws, the CRA and the PLD, two new acts, I should say, that are effectively saying the no-warranty clause in all of these open source licenses might be illegal if you are in any way benefiting, let's say monetarily, even if it's indirect. So if you're constantly releasing open source things purely for advertising purposes, but you're not directly gaining any money from them, we're still going to ignore the no-warranty clause. So yeah, there's interesting stuff in that space. But I would say, as a developer, the things that you should be aware of when it comes to model openness are that there's a difference between weights, training data and outputs; those are the three main categories, really. So licenses usually make a distinction with... well, it's not licenses, this is more about the source: are the model weights available? That's often the only thing that developers care about in the first instance, because that means they can download things and just play with them. But if you actually care about explainability, or in any way alignment, in order to figure out how you might be able to make a model aligned
or unaligned or whatever you want to do with it, you probably do need to know a bit about the training data. So is the training data at least described, if not available? And when I say described, I mean more than just a couple of sentences saying how the data was obtained, but actual full references and things. A lot of models are not actually open when it comes to the training data. And then, of course, the final thing is the licensing around the outputs of the model. Do you really own them? Are you allowed to use them for commercial purposes? And even if you are, it's highly dependent on the training data itself, because if the training data is not permissively licensed, then technically you shouldn't really have much permission to use the output either. So I think even developers are kind of confused about the ethics around the permissions, and certainly, legally, we're super confused as well. I have two questions for you as follow-up, but they're unrelated, so I'm going to go ahead and throw both of them out. Number one, the quick one, I think, is: could you define what an aligned model versus an unaligned model is, just to compare those two for those who haven't heard the phrases? And then I'll go ahead, just as you finish that, and say: what's the reason that, I noticed, licensing is addressed at the very top of the book? Is that framing the way you would look at the rest of the book, or is that more just happenstance that it came there? I'm just wondering how that fits into the larger story you're telling. Yeah, so for those who don't know, with unaligned models, effectively, if you train a model on a bunch of data, it is by default considered unaligned. But in the interest of safety, what most of the famous models that you've heard of do, like ChatGPT, for example, is add safeguards to ensure that the model doesn't output sensitive topics, issues, anything illegal. It's still probably capable of
outputting something quite bad, but there are safeguards, and the process of adding safeguards to a model is called aligning a model, as in aligning with good ethics, I suppose; that's implicit.

Gotcha, thank you very much. And then I was just wondering, like I said: the positioning of licensing at the front, is that relevant, or is that just happenstance?

We did sort of think of an order of chapters, let's say, and licensing just seemed like a good introduction, because it comes before you get into the meat and the details of actual implementations, and where you can download things, and where the research is going, let's say.

Well, Casper, as you were just describing the framing of the book, and also some of these concerns around licensing, I'm wondering if we could take a little bit of a step back as well and think about: what are some of the main components of the open source AI ecosystem? The book details all of these, but what are some of the big, major components of the AI ecosystem, maybe beyond models? Because people have obviously maybe thought about or heard of generative AI models, or LLMs, or text-to-image models, but there's a lot around the periphery of those models that makes AI applications work, or be able to run in a company, or in your application, or whatever you're building. So could you describe a few of these things that are either orbiting around the models, if you view it that way, or part of this ecosystem of open source AI?

Sure. I mean, there are huge issues, I would say, regarding, let's say, performance per watt, effectively electrical watt. There's a lot of development in the hardware space, and, you know, we have the new Mac M1s and M2s, which might actually mean you can fairly easily do some fine-tuning, or at least inference, on a humble laptop without even needing CUDA. It seems like there are a lot of shifts and paradigm
changes when it comes to the actual engineering implementations. WebGPU is a big upcoming thing, which, I mean, has technically been going on for a decade or more, but it might actually have reached a point where possibly we can just write code once and it just works on all operating systems, on your phone; you know, you can get an LLM just working wherever. But yes, there are effectively a lot of MLOps-style problems. It's one thing to have a theory of how to actually create an LLM, but quite another thing to actually train the thing, fine-tune it, or deploy it in a real-world application. So there are a lot of competing, let's say, software development toolkits and desktop applications, and I don't think anyone's really settled on one that's, you know, conclusively better than anything else. And really, based on your individual use cases, you have to do an awful lot of market research just to find something that's suited to your use case.

I ask this because we've had a number of discussions on the show about training, fine-tuning, and then these prompt- or retrieval-based methodologies. So from your perspective, as someone that's taken survey of the open source AI ecosystem and is operating within it and building things, what is your vision for where things are headed? More fine-tunes getting easier and fine-tunes being everywhere, or pre-trained models getting better and people just implementing fancy prompting or retrieval-based methods on top of those? Do you have any opinion on that sort of development? I know it's something that's on people's minds, because they're maybe thinking: "oh, it's harder to fine-tune, but is it worth it, because I'm getting maybe not ideal results with my prompting?"

Yeah, no, makes sense. I would say, basically, if you're not doing some form of fine-tuning, you're not producing anything of commercial value, effectively. It's very
much like hiring an intelligent human being to work for you without them having any particular expertise, and not even knowing what your company does, right? That's what a pre-trained model is, effectively. So you do need to fine-tune these things, or add some amount of anything else that's equivalent to fine-tuning, let's say. In terms of things that actually predate LLMs, I think there's a lot of stuff that is very useful, and maybe even far more explainable, that people seem to be discounting just because it's easy to get some result out of an LLM just by prompting it. So people view it as good enough, and they start using it even though it's maybe not safe, right? One thing I would really recommend people look at is embeddings. Just by doing a simple vector comparison on your embeddings, you can find, you know, related documents. You don't really need an LLM to drive that, because an LLM, effectively, instead of you explicitly making an embedding of your query (you know, converting your query into a vector and then comparing it to other vectors in your database that correspond to, let's say, documents or paragraphs that you're trying to search through), is automatically doing that entire process, and it might make mistakes while it does that, right? It's going to paraphrase things, which it might get wrong, because it can't do simple, basic mathematics; it doesn't understand logic. So yeah, whenever it comes to things like, let's say, medical imaging, where there's a lot of interest in how we can use AI to improve things, people tend to get frustrated with how slow the uptake of AI is, but there's a reason for that, which is that explainability is important. So the way I see things going is: yes, far more fine-tuning, more retrieval-augmented generation type stuff, so RAG stuff, and then also probably a push into explainability. I don't really think there's much explainability in LLMs right now, in general; everyone's been so focused on LLMs.
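The embedding search Casper recommends can be sketched in a few lines: represent each document and the query as a vector, then rank documents by cosine similarity, with no LLM in the loop. The three-dimensional vectors below are made up for illustration; in a real system they would come from an embedding model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, doc_vecs, top_k=2):
    """Rank document vectors by similarity to the query vector."""
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Toy vectors standing in for embedding-model output.
docs = [np.array([1.0, 0.0, 0.0]),   # doc 0
        np.array([0.9, 0.1, 0.0]),   # doc 1: close in direction to doc 0
        np.array([0.0, 0.0, 1.0])]   # doc 2: unrelated
query = np.array([1.0, 0.05, 0.0])

print(search(query, docs))  # doc 0 and doc 1 rank highest
```

This is the whole retrieval step of a RAG pipeline; an LLM would only be needed afterwards, to phrase an answer from the retrieved documents.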
With large vision models kind of being one of the newer things on the rise, what is your take on large vision models, and the future, and how they start integrating in? Andrew Ng was just talking about some of them, and I would love your take on it.

Sure. I mean, we didn't quite get to covering this in the book; that's how fast-paced things are. Multimodal things are super interesting to me. My feeling is that it's effectively gluing together existing models into pipelines, and it hasn't historically been something I was that interested in, because that's more an application; it's not so much something you need to research per se. It's very similar to how the OpenAI people were very surprised that ChatGPT exploded in popularity, even though technically the technology is quite old. It's just, you know, you lower the entry barrier a little bit, and then everyone actually starts using it, because they can, right? So to me, the multimodal-type stuff is similar. It could result in really innovative new companies popping up, and new solutions that are actually usable by the general public, but in terms of the underlying technology, it doesn't seem particularly novel to me.

As you've looked at the landscape of models itself, and the licensing of those models, the support for those models, the underlying MLOps sort of infrastructure, the support for, you know, underlying model optimization toolkits and that sort of thing... Some people out there might hear all of these words, like: oh, there's Llama 2, and there's now Mistral, and then there's, you know, now Yi, and all of these. As you were going through and researching the book, and also doing that as an open source community, can you orient people at all in terms of the major model families? So you already distinguished between aligned and unaligned models; are there any kinds of categories
within the models that you looked at that you think it would be good for people to have in their mind? In terms of: hey, I have this application, or I have this idea I want to work on; I'm listening to Casper, and I maybe want to fine-tune a model; I've got some cool data that I can work with. Where might be a well-supported or reasonable place for people to start, in terms of, you know, open LLMs, or open text-to-image models, if you also want to mention those?

Sure, yeah. I mean, because there's just a new model basically being proposed every day, and often it's a small incremental improvement over a previous model, in terms of actually trying to compare them at a theoretical level, without looking at their results, there isn't really much to talk about in terms of, you know, large model families. There might be an extra type of layer that has been added to a model in order to give it a new name, let's say; nothing particularly stands out there. I mean, we do have a chapter on models, where we try to address some of the more popular models over time, the proprietary ones and then the open source ones, but I would say nothing particularly stood out to me there. I suppose the more interesting thing, in terms of actually implementing something for your own particular use case, is starting with a base model that has pretty good performance on, presumably, other people's data that looks as close as possible to the data you actually personally care about, so you don't have to wait too long when then fine-tuning it on your own data. For that, I think the most important thing is to take a look at the most up-to-date leaderboards. And there are quite a few different leaderboards out there; we do also have a chapter on that, and it was, interestingly, also a nightmare to keep up to date, because the leaderboards themselves are also changing regularly; new leaderboards are being proposed for
different things. So: take a look at the leaderboards, pick the best-performing model there, and then start doing some fine-tuning. That would be my MO.

This gets to one of the natural questions that might come up with a book on this topic, which is: things are evolving so quickly, and you mentioned the strategy with this book being to have it be open source, with multiple contributors, and I'm assuming part of that is also with the goal for it to be updated over time and be an active resource. How have you seen that start to work out in practice, and what is your hope for that sort of community around the book, or contributors around the book, going into the future?

Sure, yeah. I mean, for the evaluation dataset thing, we already have, you know, more than a dozen leaderboards: just the names of the leaderboards, and links to them, and what benchmarks they actually implicitly include. And yeah, we have comments at the bottom of each chapter, which are driven by GitHub, effectively powered by utterances, which is this integration tool helper, so you don't need to maintain a separate comments platform, let's say. It also encourages people to, you know, open issues and open pull requests if we've made any mistake or something is out of date in the book. We definitely encourage people to fix things, or complain about things, which I suppose is also good from the perspective that nobody can sue you for writing something wrong, because in the first instance what they really should do is just correct it, right? You can't really open a court case. And for that reason, I think it's also lowering the entry barrier for people to contribute in the first place. They don't have to worry about what they write, and whether or not people will disagree, because if they disagree, they can fix it, right? They can start a discussion; nobody's going to immediately file a lawsuit. And yeah, we've had quite a lot of
interesting discussions already on the individual chapters. The other thing that we, you know, highlight is that as soon as you make a contribution to anything, your name is automatically displayed at the bottom of the individual chapter, as well as in, you know, the list of contributors at the front. So yeah, it's a good way to get your name, in a way, as a co-author of a book. I mean, it's a 21st-century book as well, so it lives fully online; everything that is committed to the repository is automatically built and published immediately.

And before we get too much further: some people in the audience might be wondering... like, I mentioned the name of the book, and of course you can find it by Googling it, I'm sure, but what is the best place to find the book? And then also, as a contributor, you mentioned the links at the bottom of the pages, but I'm assuming there's a GitHub associated with the book. Do you just want to mention a couple of ways for people to find it?

Sure. I mean, the easiest is probably to go to book.premai.io. Yeah, apologies that there's an AI and an IO; it seems to be a thing. But yeah, so: book.
premai.io. Or, I mean, you can also just probably Google "Prem AI", and you can find our GitHub, which is also premAI-io.

That's a thing: all the AIs and all the IOs.

Exactly. We have quite a few repositories; some of them are just archived right now, because we're constantly running different experiments on the entire architecture of the things that we're building. So, effectively, our strategy was to first do a lot of research. We didn't mind publishing this for the general public to have a look at, so we released it as a book, and now we're working on actually reading our own book, maybe taking some of its advice, and building things. And we have this very much fast-paced, startup-style "let's build lots of different things, try lots of different experiments" approach; it's fine if we throw things away.

[Music]

This is a Changelog News break. One year after ChatGPT brought a seismic shift to the entire landscape of AI, a group of researchers set out to test claims that its open source rivals had achieved parity, or even better, on certain tasks. In the linked paper, they provide an exhaustive overview of this success, surveying all tasks where an open source LLM has claimed to be on par with, or better than, ChatGPT. Their conclusion, quote: "In this survey, we deliver a systematic review of high-performing open source LLMs that surpass or catch up with ChatGPT in various task domains. In addition, we provide insights, analysis, and potential issues of open source LLMs. We believe that this survey sheds light on promising directions of open source LLMs, and will serve to inspire further research and development, helping to close the gap with their paying counterparts." End quote. It's becoming increasingly clear to me that the data and models powering future AI rollouts will be commoditized and democratized, thanks to the competitive nature and hard work of both academia and industry. What a relief. You just heard one of our
five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

[Music]

So, Casper, I want to do a quick follow-up on something you were saying as we were going into the break, which was that now you're going to start going through the book yourselves and taking the advice, and that brings up a business-oriented question I wanted to ask. So: someone goes out today, having listened to the podcast, downloads the book, and there's so much great information in all of these chapters, and the comparisons, and, you know, which of the different options each chapter addresses are good or bad, and things like that. If someone's just getting going, or maybe they're starting a new project, and they're using your book as a primary source to help them make their initial evaluations, how best should they use the book? Because there's a lot of material in here, across all these different categories: they need to come up with their pipelines and, you know, go back to the leaderboards and select the models and architectures they're interested in, and all that. If you were looking at this initially with a new set of eyes, but also having the insight of having been one of the authors and editors, how would you recommend somebody be productive as quickly as possible and get all their questions sorted? How would they go about that process?

Right. I mean, that's not really a question I was thinking of addressing with, you know, writing the book. So I suppose what you're referring to is a case where someone has a particular problem that they want to solve...

Sure.

...and an actual, let's say, business model or target audience. So I
mean, if there's actually something specific that you're trying to solve, the book hasn't really been written from that perspective. It's more for a student who wants to learn about everything, or a practitioner who just hasn't kept up to date with the latest advancements in the last year. The intention is that you can skim through the entire book, really; you're not meant to necessarily know in advance which specific chapters might have, or spur, an innovation or an idea that you can actually implement to help you. In terms of that, what would probably be more useful is looking through a couple of blog posts that actually take you from, you know, zero to "here's an example application": one that, for example, will download a YouTube video, automatically detect the speech, do some speech-recognition-type things, and then give you a prompt, so you can type in a question and it will answer based on that video. We do, in fact, have a few blogs giving you these kinds of examples, and I think that would probably be more useful if you're actually trying to build a product: find existing writeups by people who have built similar things, and just follow those as tutorials. The book is more to get an overview of what's happened in the last year, in terms of the recent cutting-edge state of the art.

Yeah, and I think that's a good call-out. One of the ways I'm viewing this is: as a practitioner, I am having a lot of those conversations with our clients about, you know, how are we going to solve this problem? And something might come up, like: oh, now we're talking about a vector database; how does that fit into the whole ecosystem of what we're talking about here, and why did we start talking about it? I think the way that you formatted things here and laid them out actually really helps put some of these things in context for people, within the whole of what is open source AI.
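The zero-to-application example described here (download a video, run speech recognition, then answer questions about the transcript) is a chain of models feeding one another. A toy sketch of that chaining pattern, where each stage is just a function whose output feeds the next stage; the stage functions are stand-ins invented for illustration, not from any specific library:

```python
from functools import reduce

def pipeline(*stages):
    """Compose stages left to right: the output of one feeds the next."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

# Stub stages standing in for real models.
def transcribe(audio: str) -> str:      # e.g. a speech-to-text model
    return f"transcript of {audio}"

def summarize(text: str) -> str:        # e.g. an LLM call over the transcript
    return f"summary of ({text})"

video_qa = pipeline(transcribe, summarize)
print(video_qa("talk.wav"))  # summary of (transcript of talk.wav)
```

Swapping the stubs for real model calls changes the stages but not the gluing pattern, which is why such applications are more engineering than research.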
Which is really helpful. So, I just mentioned vector databases, which we have talked about quite a bit on the show, and which are of course an important piece of a lot of workflows. But there's one thing on the list of chapters here that maybe we haven't talked about as much on this show. We've talked a lot about, whether it be that orchestration or software development toolkit layer (like you were talking about, LangChain and LlamaIndex and other things), or the models, or the MLOps, or the vector database, but I don't think we've talked that much about, quote-unquote, "desktop apps" associated with this ecosystem of open source AI. Could you give us a little bit of framing of that topic? What is meant by "desktop app" here, and maybe highlight a couple of those things that people could have in mind as part of the ecosystem?

Sure. I mean, I should probably quickly say, about vector databases: I don't quite understand why there's so much hype over them. To me, embeddings are actually the important thing; the database that you happen to store your embeddings in is almost like a minor implementation detail. Unless you're really dealing with huge amounts of data, it shouldn't really matter which database you pick, right?

Sure, valid point. I don't know if you have a different opinion there, though?

No, no, I think it's not necessarily one or the other. In my opinion there are use cases for both, but not everyone should assume that they fit one of those use cases until they figure out what's relevant for their own problem.

But yeah, in the desktop space, I think maybe there aren't that many developers who talk about it, because it's almost front-end-type applications, as opposed to getting stuck into the details of implementing fine-tuning and all that stuff, which tends to be more "backend", let's say, in inverted commas. So I think that might be one of the
reasons why there aren't that many desktop applications being produced: you kind of need both front end and back end, and that maybe naturally lends itself to the sort of resources that only a closed-source company might be willing to dedicate. So maybe that's just why there's not so much in the open source space; it just takes a lot of development effort. But yeah, there are a few that we do mention in the book: there's LM Studio, GPT4All, Kobold. All of them are still very new, because the thing that they're effectively giving a user interface for is itself very new. So yeah, there are some common design principles that are maybe being settled on. You know, you do expect a prompt if you're dealing with language models; you do expect a certain amount of configuration if you're dealing with images, like what the dimensions are; and some basic pre-processing that has nothing to do with artificial intelligence, but you might still expect to see that sort of thing in one place, rather than having to, you know, switch between a separate image editor and your pipeline. Things that I'm kind of interested in are improving the usability, or the end-user pleasure, let's say, of using these desktop apps far more. So: can you graphically connect these pipelines together, like some sort of a node editor, so you can drag and drop models around, and connect their inputs and outputs to each other, so that you have a nice visual representation of your entire pipeline? But yeah, excited to see what happens in that space. To some extent, I think Prem itself is probably interested in developing a desktop app.

As you've gone through the process of putting the book together... I think one of the things in any project that folks do is, like, when to go ahead and put it out there. You know, there's a point where you have to kind
of put a pin in it and say, "that's this one, right now", but our brains are obviously never done working on these problems. To that effect: you get the book out there, and you have conversations like this one that we're having right now, where we're talking about it, and you're like, "well, it wasn't meant for that, but it was meant for this". Is there anything in your head where you're starting to think, "well, maybe that should have been a topic", or, you know, something we should have put in the book, maybe next time? With this landscape evolving so fast, where has your post-publishing brain been at on this collection of topics?

We definitely have yet another ten more chapters planned, so there's definitely going to be a second edition of this book. Or maybe I should say a second volume; it's not even a second edition, it's not corrections to the current thing, it's ten whole new chapters.

Yes, literally V2.

That's going to include a lot of interesting stuff about things that happened in the latter half of 2023, and hopefully will be developed in '24 as well.

Among the things that people are talking about... I mean, we already talked about vector databases a little bit, and maybe you don't see the hype there. What are some things in the ecosystem that you're really, really excited about? And then, is there anything else where you're like, "ah, people are talking about this a lot, but I don't really see it going anywhere"? Any hot takes?

I mean, I probably already covered some of these things, right? What I'm super interested in is fine-tuning, and lowering entry barriers further. Things that I'm not all that convinced by: pretending that AI is AGI. They're not the same, I'm sorry, and I don't see it, and I don't trust these models to be more intelligent right now than, at best, a well-trained secretary. They're considerably faster, so, you know, there are applications where being
able to churn through a lot of text really quickly is actually of value, in which case, yes, great, apply one of these things. But apart from that, I don't really buy the hype.

Yeah, that's fair, I think. And as we get closer to an end here, I'm wondering... maybe there are some in our listener base that don't have the kind of history in open source that you do. And of course there are contributions to this book that would be relevant, but there are also contributions within this whole ecosystem of open source AI, whether it's in the toolkits, or the desktop apps, or the actual models, or datasets, or the evaluation techniques themselves. For those out there that maybe are newer to open source, do you have any recommendations or suggestions in terms of more people getting involved in open source AI? Obviously the book is a piece of that, because it's open source and people could contribute to it, but maybe more broadly: do you have any encouragement for people out there, in terms of ways to get started contributing to open source AI rather than just consuming?

Sure, yeah. I would say that, basically, every time you consume, you are 90% of the way to contributing back as well. You have probably cloned a repository somewhere in order to run some code, right? You probably encountered some issues, and a lot of those issues probably are genuine bugs, because these are fast-moving things; people just write some code without necessarily doing full, proper, robust testing. We don't have time to do robust testing, right? A lot of the time they're just throwaway, experiment-type things; we're in make-and-break mode. So if you find an issue, rather than quietly fixing it yourself, feel free to open a pull request. And maybe, you know, you're not new, but you're kind of new to this, and you're scared of opening a pull request; you're scared that it's not perfect code that you have written. Well, bear in mind
that the code you fixed was even less perfect, right? And I can say, as an open source maintainer, I'm always super happy when people contribute anything, whether it's an issue or a pull request, and I think generally people are far more happy and helpful and kind than you might expect. I would say that when it comes to actually writing code, people aren't necessarily the same trolls that you might find on Twitter, or on social media in general. These are people who have a mindset where they're thinking about what's being written, and they care about the actual project, and they don't care about, you know, fighting you on a political front, let's say. So if you are trying to be helpful, that counts a lot more than whether you are actually helpful, in your own opinion or anyone else's. And even if your pull request doesn't get accepted or merged in, you will definitely get some useful feedback; it might, you know, help your own expertise, your own growth as a student or contributor. And I would say there are definitely times where you might rub somebody up the wrong way, and you're not happy with an interaction, but it's such a small percentage of the time that it's definitely worth it.

Yeah. Well, I think that's a really great encouragement to end this conversation with, and of course Chris and I as well would encourage you to get involved. Even if it's something small initially, get plugged into a community, start interacting, and contribute to the ecosystem, because I would agree with you, Casper: it can be useful for the projects, but also very rewarding and beneficial for the contributors, in terms of the community, the things you learn, the connections you make, and all of that. So yes, very much encourage people to get involved. Also encourage people to check out the open source AI book, which we'll link in our show notes, so make sure you go down and click and
take a look. It's very easy to navigate to, and you'll see all the categories that we've been talking about through the episode. So dig in, and if you see things to add, definitely contribute them. Appreciate you joining, Casper. And thanks for sharing the link; you just shared it with me: book.premai.io/state-of-open-source-ai, with dashes. We'll link it in the show notes as well, so people can click easily. But yeah, thank you so much for joining, Casper, and also thank you for your contributions to the book; we're really thankful that you've done this.

Sure, yeah, thanks for having me on.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our Beat Freak in Residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time.

[Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Suspicion machines ⚙️ | In this enlightening episode, we delve deeper than the usual buzz surrounding AI’s perils, focusing instead on the tangible problems emerging from the use of machine learning algorithms across Europe. We explore “suspicion machines” — systems that assign scores to welfare program participants, estimating their likelihood of committing fraud. Join us as Justin and Gabriel share insights from their thorough investigation, which involved gaining access to one of these models and meticulously analyzing its behavior.
Leave us a comment (https://changelog.com/practicalai/248/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Gabriel Geiger – Twitter (https://twitter.com/gabriels_geiger)
• Justin-Casimir Braun – Twitter (https://twitter.com/jus_braun)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Article - “Inside the suspicion machine” (https://www.wired.com/story/welfare-state-algorithms/)
• The methodology behind Justin and Gabriel’s report (https://pulitzercenter.org/stories/suspicion-machines-methodology)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-248.md) | 13 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app server and database close to your users, no ops required. Learn more at fly.io. [Music] Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I am the founder and CEO at Prediction Guard. We have some really exciting stuff to discuss today, and I'm so thankful to have my guests with us, because there's a lot of talk about the dangers of AI, or potential risks associated with AI, which we've talked about on the show, but I think that maybe misses some of the actual real-world problems that are happening with deployed machine learning systems, problems that may have been going on for longer than some people might think. And maybe we can learn some things from those deployed machine learning systems that would help us create better and more trustworthy AI systems moving toward the future. So I'm really pleased to have with me today Justin Braun, who is a data journalist at Lighthouse Reports, and Gabriel Geiger, who is an investigative journalist at Lighthouse. Thank you both for joining me. "Thanks for having me." "Thanks so much for having us." Yeah, well, like I mentioned, I kind of teed us up to talk a little bit about the risks or downsides of deployed machine learning systems, and you both have done amazing journalism related to what you've titled "suspicion machines" here. Before we jump into all of the details of that, which is just incredibly fascinating, I think it would be worth it if you could give
us a little bit of context for both what you mean by "suspicion machines" and how this topic came across your desks and you started getting interested in it. "Sure, I can start with that. The reason we chose 'the suspicion machine' as the title for our series is that it's a driving metaphor for what these specific machine learning models are doing within the welfare context. A while ago we wanted to investigate the deployment of machine learning in one specific area, but we weren't sure which one yet. In the US there's been a lot of reporting about the use of machine learning, or predictive risk assessments, within the criminal justice system, and also in facial recognition, and we over in Europe looked at that reporting and noticed that there's a big lack of it over here. So we were exploring different realms and settled on looking at welfare systems, which is a sort of quintessentially European issue, if you want to put it that way. In the last decade, welfare systems have become a polarizing political battleground within Europe: how much welfare should we be giving out, are people defrauding the state, and of how much money? So we wanted to hone in on this one area to make it manageable, and we decided to investigate the deployment of predictive risk assessments across European welfare systems. Basically, what these systems do (they vary in size and color, but the basic mechanics remain the same) is assign a risk score between zero and one to individual welfare recipients and rank them by their alleged risk of committing welfare fraud. The people with the highest scores are then flagged for investigations, which can be quite punitive and in which their benefits can be stopped. So we landed on this metaphor of the suspicion machine because we felt that these systems were oftentimes essentially laundering or generating suspicion of different groups who were
trying to receive welfare benefits that they needed to pay rent every month." And when you all started thinking about these suspicion machines, these deployed machine learning systems, were there existing examples, like concrete examples, of how these suspicion machines were being punitive, maybe in either biased ways or just in the kind of false-positive-error sort of way that creates problems for people that it doesn't need to create? Was there actual evidence at the time, or was it just a big question because there wasn't any sort of quantitative measurement? "There were signs of it. In the Netherlands specifically, there was a case where 30,000 families were wrongly accused of welfare fraud. It turned into this huge scandal called the childcare benefits scandal and eventually led to the fall of the government. And it turned out that the way these parents were wrongly flagged for investigation was because of a machine learning model that the agency had deployed, but there was no quantitative measure of what that model was actually doing. Nobody took it apart and actually looked inside and saw, okay, well, why was it making these decisions? Which is a huge reason why we at Lighthouse decided, and were so adamant about, the idea that we're not just going to investigate these systems with the classical journalistic methods (calling people up, finding sources, getting contracts); we actually wanted to take one of these systems apart. That was the big challenge, or hurdle, in our reporting. And I think Justin can maybe speak to the existing literature on these predictive risk assessments." "Yeah, I think my interest in the topic comes from the broader discussions around AI fairness that really started after ProPublica published its 'Machine Bias' piece six or seven years ago. In the aftermath of that, there were a bunch of systems that worked in a
similar way that were discovered in various contexts. I myself worked a little bit on predictive grading systems. During the COVID pandemic, some school systems replaced their previous handwritten exams with an algorithm that tried to predict, based on previous exams, how well somebody would score in their final exam. And with each of these systems, the issue that emerges is essentially similar: once you try to classify people according to risk, and you have a training set that's not a perfect representation of the true population, you'll start running into issues like disparate impact for different groups, which is the most hot-button issue, but you'll also start running into questions of how representative your fairness data is in general, and issues with where you set the threshold and what values you are trading off when you set the threshold higher or lower. So I was generally interested in that, and then I joined Lighthouse at a point when Gabriel and some others had done a lot of the groundwork already to see whether there was something there in welfare risk assessments, and I took on the technical work from there." One question I have, just as a data scientist, is: okay, where do I even start with this? This model is deployed by some entity; in theory it's been developed by some group of technical people, either engineers or data scientists or whoever. How do you go about actually finding out where this model exists? Who has the serialized version of this model sitting on some disk somewhere, or in some cloud? Where do you even start with something like that? "So now there's a trend of having algorithm registers, where public agencies across Europe publish what different types of algorithms or models they're using, but that didn't exist when we started this reporting. So what we did was we made use of freedom
of information laws (in the US they're called sunshine laws), and we started sending in these requests, trying to figure out at least where they were using predictive modeling within the welfare system, because you could be using it to look for fraud, but you could be using it for other things as well. We started slowly building out this picture of which countries were using predictive modeling at different places in their welfare systems, and then slowly building a document base. A lot of times we didn't start by asking for things like source code or final model files or training data; we'd start by asking, 'can you give me the manual your data scientists use for retraining the model every year?' And that would allow us to ask for more specific documents and more specific questions, like: okay, we know there's a document called performance_report_2023.html, because we see it referenced in your manual for your data scientists, so we can request that. And then we built up to the point of: okay, now let's request the final model file and the source code to train it, and ask for the training data (which we can get into, because there are some prickly things there around data protection laws in Europe). So we tried to do this tiered approach to build toward that final ask, requesting the model only once we could make sure our request was specific, because oftentimes agencies would try to resist our requests, saying they were too broad, or that we weren't being specific enough, or arguing that disclosing certain documents could allow potential fraudsters to game the system." I've got to ask: as you did this sweeping look at how predictive analytics was actually deployed across Europe, even before we get into the specific case that you studied, are there any takeaways or trends that you saw in terms of how machine learning
is actively being deployed by government entities, or by welfare entities, across Europe? "Yeah, I think it started essentially a bit later than in the United States. You have this trend in policing, in risk analysis, that I would say begins in the early 2000s, where you have semi-governmental organizations doing credit risk scoring, the first instances of predictive policing, and also more serious thinking around big data mining for risk analytics in the welfare context. And then I would say there's a bit of a bifurcation. You see some instances where big industry players (Accenture and the like) hype up the case for big data analytics to be deployed across different sectors, and at the same time you have a lot of failures when those tools are deployed: they often don't work very well, and the people who have to use them in the agencies don't know how to use them. You see some agencies that drop those systems, and at the same time you see other agencies that build up internal capacity and build those tools themselves, sometimes in collaboration with universities or smaller startups. So you have these two pathways that continue to coexist. In terms of the systems that we looked at, most of them were developed from the early 2010s onwards, and it's definitely gotten a lot more common in the last five or six years. Across the eight or maybe nine countries (I'm not quite sure how many we've looked at now), I think we've only seen a single country where we did not see evidence of predictive analytics being used to assess risk in welfare." Interesting. So I guess on the other side... I asked a question about evidence, prior to your reporting, of cases where these systems maybe behaved in ways that caused harm or issues. On the other side, you mentioned this kind of
hyped perception, or potentially hyped perception, of what these systems could do in a positive way. The main case for using these systems, as you mentioned, is to catch fraudsters. From my understanding, on that side of things: is there evidence that, yes, this type of fraud is a huge problem we need to invest advanced technology in solving? Or is that also up in the air? I guess I'm getting at the justification for using these types of systems at this sort of scale. "This is one of the questions we tried to address in our reporting. First of all, distinguishing between deliberate fraud and unintentional error is really messy and difficult. How do you prove intent? How do you prove that someone intentionally didn't report something? There are clear-cut cases, like criminal enterprises defrauding the welfare state using identity fraud; okay, that's pretty cut and dried. But when it's individuals or a family, and they didn't report 200 euros, is that intentional? Is it not? How do you prove it? So that's already a challenge. What we did see is evidence of a lot of the larger consultancies tending to overhype the scale of welfare fraud, and of those estimations being criticized by, let's say, academic studies. When national auditors, like the national audit office of France for example, actually did random surveying to try to estimate the true scale of welfare fraud, they estimated it at about 0.2% of all benefits paid, whereas consultancies estimated it at about 5 to 6% of all benefits paid. So there's a little bit of a situation where they're hyping up estimates in order to sell the solution. At the same time, fraud does happen within the system, and our reporting isn't meant to suggest that fraud doesn't exist, but I think it's definitely still unsettled
science as to what the actual scale of welfare fraud is, and whether the systems being deployed, in places like the case study we looked at, are actually catching fraud or just catching people who have made unintentional mistakes, with those unintentional mistakes being treated as fraud." "To add on to that a little bit: a justification that is often used is that these systems are actually more fair than their analog equivalents, that by using a machine you get rid of biases, and that they're better at detecting fraud than people are. As we'll probably get into later, there are good reasons to doubt both of those propositions." All of that was really good setup for the particular case study that you've highlighted in some of your recent work. I'm wondering if you could set the context for the particular case study, the particular model, that you focused on, in light of what you were just talking about: scanning the environment through Freedom of Information requests to understand where things were deployed, all the way down to getting your hands on a model. How did that transition happen? And tell us a little bit about the use case that you studied more deeply. "As I mentioned earlier, we started by sending these freedom of information requests across Europe, to eight or nine countries, and we started receiving a patchwork of responses back. Some places just said, no, we're not going to give you anything at all. Some places would say, okay, we'll give you the manual, but then when you tried to ask for anything technical, like code or a list of variables, they shot it down. But there was one exception in all of this, and that was the Dutch city of Rotterdam. Rotterdam had deployed one of these predictive models to try to flag people as potential fraudsters and investigate them, and
right off the bat, Rotterdam sent us the source code for the training process for their model. Awesome. We got really excited at first; we were like, wow, this is great. We started looking through the code, and we noticed that the scoring function in the code goes to load something called the final model .RDS file. We go looking through the directory and we notice: huh, wait a second, this final model .RDS file, the actual model file that can be imported to score, isn't in the directory. So we email them back and say, hey guys, I think you made a mistake; this final model .RDS file is missing from the code directory, so we can't actually run anything. And they go: oh, well, yeah... psych, you're not getting that one. Their justification was that if it were made public, potential fraudsters would be able to game the system. So, long story short, we went on a year-long battle with them to attempt to get this model file, and eventually the city, to their credit, decided to disclose the model file to us so we could actually run it. And what does this model do? I think Justin can give a good explanation of what this model actually does and how it works." "Yeah, so it's a gradient boosting machine model, a pretty standard machine learning model. It ingests 314 variables and it outputs a score. The issue we ran into very quickly once we had access to this model was: well, what does this actually tell us? Okay, we can make up a bunch of people now and score them, but how do we then know what that means for those people? So there were two things that became important to figure out at that point. One was: what do realistic people look like? And the second was: what is the boundary at which a person is considered high risk? The second one was relatively easy to figure out. We had some broad estimations of how many people are flagged each year; we could run some
simulations, look at the distribution of risk scores, and at that point take a good guess at what the threshold would be. Getting access to realistic testing data was a lot more challenging, and for a while we thought we would have to just simulate a bunch of people, you know, take guesses. But Gabriel had requested some basic stats about the training data at an earlier stage. He had essentially asked: look, can you give us a histogram for each of the variables, so we can see the broad distributions in the training data, for ages, for instance, or for gender, and so on. Our idea was to use those basic distributions to sample new people. But when I was meant to type all of this stuff into a file so we could run those simulations, I got lazy and wanted to just scrape the document. It was an HTML file, so I opened it up and inspected it, and it turned out that the entire training data was contained in this file, which happens quite often when you create plots with Plotly. So if you want to leak something to a journalist, that's a good way to do it." There you go. "Yeah, so we kind of by accident got access to the entire training data. At that point the question became: okay, now we know what realistic people look like; what tests can we run to figure out who this model flags at higher rates, whether it has justification to do so, and so on? The one thing missing from the training data was that we didn't have access to the labels themselves. So we knew your age, your family background, your job history, that kind of stuff, but we did not know whether you had actually committed fraud or not. That meant (and this is the big limitation of our story) that we could essentially only understand which characteristics lead to higher or lower scores, but we wouldn't know if those scores are erroneous at higher rates
for one group rather than another. I just want to be very open about that; it is a limitation of the design. But having access to the training data, having access to the source code, being able to see how the training data was constructed, and having access to the final model file: all of that allowed us to investigate a bunch of aspects of the system, which I think still made for a very valuable story, both in terms of explaining how this stuff works and in terms of showing that there are likely consequences which seem to be discriminatory against certain groups." Probably a lot of our listeners will be familiar with what a gradient boosting machine is; maybe this was one of the tutorials you ran in a Jupyter notebook when you were first taking your data science 101. So the model, I think, is very familiar. I think a lot of the interesting things here relate to the model features. Did anything jump out at you, maybe even before you ran a larger-scale analysis, in terms of the features that were included in the dataset, and how those may or may not intuitively be connected to this welfare fraud situation? Did anything jump out when you were doing your initial discovery and exploratory data analysis on this data? "Yeah, for sure, though I think it's maybe important to preface this by saying that including features that seem discriminatory does not automatically lead to discriminatory outcomes, and I think that is sometimes confused. You can get discriminatory outcomes without features that look bad, like racial background or gender or something like that, but it also works the other way: you can include a bunch of these features and not get any discriminatory outcomes. Both of these things are possible. That being said, there were a bunch of features
that seemed perfectly reasonable: contact with the welfare agency, how often have you been there, have you missed any of your appointments, that kind of stuff. There were a lot of demographic features, and I think those get into trickier territory. Some, like age, are maybe justifiable on some level; gender gets a bit harder. And then there were a lot of features measuring ethnic background through proxy, for instance through language skills. I think there were 10 or 12, Gabriel, correct me if I'm wrong, but definitely a lot of variables on language skills." "No, I think 30 or something." "30, yeah." "Because it measured everything from your spoken Dutch fluency to your written Dutch fluency to the actual language you spoke; there was a categorical variable with 200 values or something. So it got as granular as the specific language you spoke and whether you speak more than one language. But anyway, continue, Justin." "Yeah, and then I think in some ways the weirdest set of variables were essentially behavioral assessments by the caseworkers. We actually got access to some of the variable codebooks, and in there it said there was a variable where people were meant to judge how somebody was wearing makeup, especially for women; you know, stuff that just seems really sexist. So those variables were included, which is problematic in and of itself, but then the way they were transformed in the preprocessing steps was that this textual data was just turned into a 0/1 variable, depending on whether there was anything in the field or not, which also means you lose a bunch of the maybe more interesting information. But that set of variables, because it's just based on individual caseworker assessments... if your claim is that the system should lead to a reduction in bias, and then you include these variables that are so obviously subjective, I think that kind
of undermines your claim right away." And in the dataset, in terms of the label and the output, were you able to understand at all, like: these are investigations that happened that were actually verified to be fraud or not, essentially a one-or-zero type of label? How was that set up? "Yeah, so we did not have access to the label, which is again the big drawback. We could score people whom we knew had labels, but we didn't have those labels ourselves." Gotcha. "But we did a bunch of ground reporting to work around that, and maybe Gabriel can speak a bit to that." "Yeah, two things. First, just to talk about how the training data is constructed: it's over 12,000 past investigations that the city has carried out, and these past investigations are not a random sample. There's some subset in there that's random, I think about a thousand, but all the rest of the cases are just where investigators have looked in the past, either through anonymous tips or through these kinds of theme studies that they do, where they say: this year we're going to check every man living in this neighborhood. So it's not a random subset of people that they're training this model on, which is problematic in the first place. The second thing is that this label, yes fraud / no fraud, doesn't distinguish between intentional fraud and unintentional mistakes; these are flattened into the same thing when labeling the training dataset. Those are, I think, two problematic things right off the bat. A third, even more complicated thing is that the law for what is considered fraud has actually changed over time, and this training data spans back 10 years. But all that aside, one of the things we wanted to do with this reporting was to look at the impact of being flagged for investigation: what does that mean for a person, and how are they treated by the system? And so we
did a bunch of ground reporting in Rotterdam, and we used the results from our experiment to build profiles of who would be considered some of the most high-risk people. We saw that one of those profiles, at least, would be single mothers with a migration background who don't have a lot of money, are financially struggling, and live in certain majority-ethnic neighborhoods. So we did a bunch of ground reporting in those places and found people who had been investigated in the time span that the model was active; it was quite challenging, and people were quite afraid to talk. What we found was that they were treated incredibly punitively by these investigations from the city, where fraud controllers are empowered to raid your house at 5:00 a.m. unannounced, count your toothbrushes, sift through your laundry, and go through all your bank statements, and where even the smallest mistakes, like forgetting to report €1, could leave you labeled as an alleged fraudster. So based on the reporting, I think there's reason to question even the validity and the consistency of the label. But beyond that, what we established through the reporting is that there are consequences to being flagged even if in the end you're found to be completely innocent: just having people raiding your house at 5:00 a.m.,
asking you questions about your romantic life in front of your children... that's a negative consequence in and of itself, even if you're found to have done nothing wrong." [Music] This is a Changelog news break. The biggest product news out of OpenAI recently is GPTs: custom versions of ChatGPT that you can create and sell for specific purposes. You build these GPTs by crafting special prompts that are fed to ChatGPT prior to it interacting with a user. Is it any surprise that crafty technologists have convinced ChatGPT to spit out a bunch of these custom prompts via prompt injection? I wasn't surprised, but I was a bit delighted to read through the collection of GPT prompts to see what they're made of. This "Gen Z 4 Meme" prompt, which helps you understand the lingo and latest memes that Gen Z are into, is kind of hilarious. Quote: "Speak like a Gen Z. The answer must be in an informal tone; use slang, abbreviations, and anything that can make the message sound hip. Especially use Gen Z slang, as opposed to Millennials'. The list below has a list of Gen Z slang. Also, speak in low caps." End quote. Low caps? More like no cap, am I right? I'm so old. Fair warning, though, from the collector of these leaked prompts, who says, quote: "There is no guarantee that these prompts are the original prompts, and these leaked prompts are for reference only." You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news for developer news worth your attention. Once again, that's changelog.com/news. [Music] ...things that I know we've talked about on this podcast, but also things that have come up in my day-to-day work, that you sort of establish as best practices around how you construct your label and how you construct your features in a responsible way, to do well at your data science problem. I do want to get to the actual model performance here in a second,
which raises one question: we see all of these flaws in the data, but does the model actually work, or have all of those underlying problems poisoned the output? But before that, I'm just wondering, as a person who occasionally provides consulting services to other people in data science: did you get a sense at all of whether the city of Rotterdam hired some consultancy to give them the model that they deployed and are using? Did the consultancy just throw the model over the fence, like, "here, use this"? Or how much interaction was there with actual Rotterdam employees, and how deep was their understanding of how this model was built and deployed? Or was it just a contract: here's money, here's the model, all right, let's put it into production? What was the interaction like there? Were you able to discern any of that? "Not super deeply, but from what we do know, the city put out a tender asking for someone to come in and build a predictive model for this purpose. Accenture won that tender and put someone on it, and there was a Rotterdam data scientist involved, but one who presumably, from what I can tell, didn't have any machine learning background: just a normal data scientist at the city of Rotterdam. Accenture set up the whole code base, developed all the code for the preprocessing, trained the model, handed it over to the city, and kind of went: bye, we're gone now. From that point on, Rotterdam took full control of the model. They would retrain it every year, they made adjustments to features, and they also decided to exclude some features, like nationality. But I do think that during that time Rotterdam upgraded its own data science capacity, so by the time we got there, they did have two people specialized in machine learning who were looking over the model. That's my understanding of the basic setup." Yeah, super interesting. I do want to get to the model performance, I
guess, because I know this is something I've been asked when I've done workshops and talked about either fairness or bias in models. There's always someone who comes up with the question: well, if the data is biased, but the model's still accurate and I'm predicting accurate results, is that a problem? There are problematic things about how you might answer that question in and of itself, but in your case, was the model actually helping in any way? Or were the problems so deep in the data, and in the way the labels were generated, that the majority of what it was producing was maybe more chaos or issues? "So in the test set that the city used (and we have their documentation of that, even though we don't have the labels ourselves), we see that there is a 21% baseline rate of fraud or some kind of wrongdoing, and the model, depending on where you set the threshold, essentially has a hit rate of 30%. So out of the people selected, around 30% are labeled within the positive class: about a 10-percentage-point improvement above random. Is that good? Is that bad? The ROC curve looks absolutely terrible. Margaret Mitchell, whom many listeners probably know, called it essentially random guessing. I'm not quite sure I would go that far, but it's certainly not anything to write home about. And we see that there are huge disparities in the characteristics of who's getting flagged. Does the label data show that there's reason for that? Maybe. But because we have some idea about how the training data was constructed, specifically through these theme investigations, there's a very strong probability that a lot of the patterns we see in who's getting flagged are a function of the selection process that leads to somebody being included in the training data, rather than of actual fraud being committed. I can give an example of
how that might work. Most of the men in the training data very likely were selected through one of these investigations where, you know, all men in a certain neighborhood were investigated, which have a pretty low likelihood of actually finding fraud. That kind of implies that most women were selected by anonymous tips or random sampling, and those things have somewhat higher probabilities of detecting fraud. And so if your method of selection impacts how likely it is that the person you investigate has actually done something wrong, then the training set that you train your model on will contain patterns that are a function of your selection method, rather than of the real world and how fraud patterns look in the real world. We couldn't conclusively prove this, because we didn't have access to who was labeled or who was selected within the training set; we couldn't say who came from which source. But we know that these different sources fed into the training set, and it seems very probable that this type of selection method would lead to these kinds of disparate outcomes. I think there are all sorts of things to learn in this story, even just as data scientists setting up datasets and trying to train models. You know, I come, of course, from a certain perspective in what touches me about this story, and I'm so glad that it's out there and there's some transparency around this. I'm wondering, could you speak a little bit to the reception of this story, maybe more widely by non-technical audiences, in terms of realizations that people were coming to, or responses that came out of people realizing how these systems were constructed and how they perform in reality versus what their perception was prior? Kind of two answers to that question. I think, first of all, one of the big goals of this project, and the piece that we published with Wired where we take readers through
the model and how it works, was to have it be an educational piece of journalism too. Like, you've been hearing about machine learning and the sort of impact it has on your lives, but very few stories actually take you through the full life cycle of a model. What does it look like, quote-unquote, inside the machine? So we really wanted to make an educational piece in that sense, and also talk about, you know, what Justin has covered: what are the different sorts of problems or flaws in the system, and what are the consequences of those flaws? And, you know, I think normal people of course found the sort of discriminatory angle, or the fact that, for example, single mothers were penalized more; I think that was something that they took away from it. But surprisingly, one area that surprised me a bit, that people seemed quite fixated on or curious about, was the decision trees portion. What we try to do in that portion of the piece, for people who haven't read it yet, is we take some decision trees from the model, from this gradient boosting model, and we show how this creates nonlinear interactions, right? So attributes can affect each other relationally. You know, in decision tree X, if you're a man you might go down the right side of the tree, and if you're a woman you might go down the left side, and you will be evaluated by different characteristics. So that seemed to be something that really resonated with readers, questioning, like, okay, so this is how it works, and is that fair to me? Or, you know, it makes it difficult for me to understand how these interactions work. On a political level, you know, Rotterdam, to their credit, was quite graceful when we presented them with the results, and they sent back a statement essentially calling our results informative and educational, which in the field of investigative journalism never happens. Like, someone, you know,
the subject of your investigation saying it's informative and educational has, I think, never happened when they're the subject. Yeah, yeah. And they called on other cities to do what they had done, to be transparent, and I found that an incredibly brave and elegant response to what we'd done. And they were debating whether to continue the use of this model, and decided that they weren't going to use it anymore, that the ethical risks were too high. And then, I mean, elsewhere, I don't know, Justin, if you have any reactions that stuck out to you? Yeah, maybe the one thing that I would add is that I think this field of algorithmic accountability reporting, but even the academic discussions around it, has, I don't want to say suffered, but has been kind of constrained a little bit by a streetlight effect following Machine Bias, right? You had this big story coming out, and then afterwards, for years, everybody was talking about these various outcome fairness definitions. And I think that's a very valuable debate; I myself almost enjoy it. I think some of it is just mathematically very interesting, and it brings up really difficult ethical questions. But I think a bunch of the other dimensions of fairness in the life cycle of the system have been neglected, and Gabriel and I, in the past year, have kind of been making the rounds and making the case to people that we should be looking at algorithmic fairness more holistically. We should look at the training data, we should look at the input features, we should look at the type of model that is being used and how that maps onto our understanding of due process, and then we should also look, of course, at the outcome fairness stuff. But I actually think, and your reaction kind of spoke to that, this training data bit is probably the most interesting one. I have academic training both as a computer scientist and as a
political scientist, and when I took my computer science classes, nobody ever talked about how you set up, you know, a representative sample. It was kind of like, we take whatever data we have, and then we try to run as many models over it, use it all, and all features, right? And, well, that might kind of up your performance along certain metrics, but on some level, if the data doesn't contain the functional relationship that you're trying to model, you can't get there. And I think that's a lesson that I hope some of the practitioners who read our pieces also take away from it. Yeah, that's super helpful. I think you got to where I wanted to ask anyway, because I know we have listeners that are practitioners, and they're probably thinking to themselves, what is a takeaway that I can take away from this? Because I would say, from my experience at least, most data practitioners are not intentionally trying to create harmful outcomes from their systems. They do actually want to be responsible; it's just that sometimes they might be somewhat confused or constrained in certain ways that don't allow them to spend time thinking about those things. But I really appreciate you bringing us around to that. As we close out here and look maybe to the future: we started out this conversation, and I mentioned, you know, there's all of this talk constantly swirling around us about the dangers of AI and all of that stuff, which is operating on multiple levels, some of which are useful and some of which probably aren't. But I want to ask both of you, as you look towards the future, post this project, post what you've done here: what's on your mind as you look towards the future of how this technology is ever expanding? What gives you pause? What gives you hope? What do you hope people are thinking about as we look to the future in how this technology is developing? So there are two things I would respond
to that. One is that I hope we'll have more discussions around transparency around these systems. I think that's a precondition for anything else, and for that to happen there is an argument that needs to be dispelled, and that argument is that making these systems public allows people to game them. One, I think that's really, really hard, and there's some very good academic research that shows how hard it would be. And two, well, these systems operate essentially like bylaws, right? They're essentially administrative guidelines, encoded in a model file, for how a decision is being made in some bureaucracy, and I think it's really hard to make the case that such guidelines should be secret. And so, yeah, I think we need to have a discussion and make the case proactively that transparency in this space, and encouraging people to learn how these systems work, is a good thing. And encouraging people to game those systems is probably a good thing, because that means you're probably closer to abiding by the law, and if you can game the systems, then, you know, maybe they aren't very good. That's the first thing I want to say. The second one is that most of the systems we've looked at are pretty terrible in most ways. They don't work very well; they either use features that are absolutely terrible, or have training data construction that is really problematic, or have disparate impacts on various groups. Almost every single system we've looked at so far has one or multiple of these features. But there are some systems that maybe are better, and it's possible, I think, if you think very seriously about how you do each of these steps, the feature selection, the training data, constructing the model, evaluating for bias, and then potentially retraining or reweighting your training data and so on, you know, maybe it's possible to get to a better place. Technically, it certainly is. And I think then you get to a different set of
questions, and I hope that the conversation at some point can move beyond, kind of, the gross incompetence which we are showcasing across the board, and can move to a place where we can discuss: okay, let's take this best-case scenario. We have a system that doesn't have obvious bias and so on, that was constructed carefully. Should we do this? Is it a good idea? Is a machine making the decision removing something inherently valuable from this type of interaction? Is the machine actually more accountable than a human, and is that a good thing? Is it, you know, equal treatment, because everybody's being scored by the exact same system and not by individual caseworkers, or is it not equal treatment, because the tool contains a decision-tree-based model and so different people are evaluated based on different characteristics? How do we think about systems that include some level of probabilistic assessment? Is that something that we think an administrative decision should do? And then of course we can also have the, maybe fun for some people, discussions around which fairness definition is the best, whether we should seek to minimize or equalize false positive rates across different groups, and so on. I think there's a bunch of really important questions that society has to grapple with here, but I don't think we're there quite yet in most cases. And so long as we aren't, I think Gabriel and I will have plenty of work showcasing incompetence and all that stuff, but I hope that at some point we can move beyond that. Yeah, anything to add, Gabriel? No, I think Justin summed it up really well. I'll just kind of tease that we do have some reporting coming up in the coming year that will, I think, grapple with some of these thornier ethical issues, you know, ask questions like when, and if ever, is it okay to use these systems. I think maybe one thing that I will add, though, is I think it
is important for the practitioners listening in your audience to also take a step back and to maybe not always see the deployment of these systems, or the sort of thorny fairness questions, as a math problem; it can also be a wider societal problem as well. So, for example, in the European welfare context, we've seen, everywhere we're looking, models that attempt to detect fraud. But what we don't see is models that try to find people who are eligible for welfare benefits who aren't using them because they're afraid of the system. And we know this is a huge problem: in places like France, 30% of people eligible for welfare don't use it because they're scared of the system. This has consequences for the people not using welfare, but it also has consequences downstream for society. So imagine families that aren't able to feed their kids, and the developmental issues that come from that. So I think it's always important, and something we try to raise in our reporting, to take a step back and ask, you know, should we be doing this? To think about the premise of why we are actually deploying this model, and to rethink that at some points, and think about, you know, is there a better way to use this technology, or are we only narrowly zeroing in on one piece of this picture? Yeah, that's great. I think that's a really good encouragement to end things with. We will certainly be on the edge of our seats looking for your future work, and I encourage everyone, we'll include the links to Gabriel and Justin's work in our show notes. So I encourage you, go and explore it. There are lots of great graphs and references, and an even more technical description of the methodology than we had time to go into here. So dig in and learn about what they're doing. It's really wonderful. And yeah, thank you for your work, Justin and Gabriel, and thank you for taking the time to join us. Thanks so much. Thanks so much for
having us. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts. Check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next [Music] time. |
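The selection-effect mechanism Justin describes in this episode, where who ends up in the training data depends on how they were chosen for investigation, can be sketched with a small, purely hypothetical simulation. All of the rates, group proportions, and selection-method names below are invented for illustration; they are not the real Rotterdam data:

```python
import random

random.seed(0)

# Hypothetical fraud rates by selection method (illustrative only):
#   "sweep": a neighborhood-wide investigation (low chance of finding fraud)
#   "tip":   an anonymous tip or targeted referral (higher chance)
FRAUD_RATE = {"sweep": 0.05, "tip": 0.40}


def build_training_set(n=10_000):
    """Simulate a training set where gender correlates with how people
    were selected, not with whether they actually committed fraud."""
    rows = []
    for _ in range(n):
        gender = random.choice(["man", "woman"])
        # Assume (purely for this sketch) men mostly enter the data via
        # neighborhood sweeps, women mostly via tips.
        method = random.choices(
            ["sweep", "tip"],
            weights=[0.8, 0.2] if gender == "man" else [0.2, 0.8],
        )[0]
        fraud = random.random() < FRAUD_RATE[method]
        rows.append((gender, method, fraud))
    return rows


rows = build_training_set()
for g in ("man", "woman"):
    labels = [fraud for gender, _, fraud in rows if gender == g]
    # The labeled "fraud rate" differs sharply by gender, even though
    # gender has no causal link to fraud in this simulation.
    print(g, round(sum(labels) / len(labels), 3))
```

Because the selection method correlates with gender while the label rate depends only on the selection method, a model fit to data like this would learn gender as a proxy for how people were sampled, not for actual wrongdoing, which is exactly the pattern the reporters describe.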
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The OpenAI debacle (a retrospective) | Daniel & Chris conduct a retrospective analysis of the recent OpenAI debacle in which CEO Sam Altman was sacked by the OpenAI board, only to return days later with a new supportive board. The events and people involved are discussed from start to finish along with the potential impact of these events on the AI industry.
Leave us a comment (https://changelog.com/practicalai/247/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Traceroute (https://deploy.equinix.com/traceroute/) – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• OpenAI | Wikipedia (https://en.wikipedia.org/wiki/OpenAI)
• How OpenAI’s origins explain the Sam Altman drama (https://www.npr.org/2023/11/24/1215015362/chatgpt-openai-sam-altman-fired-explained)
• OpenAI chaos: A timeline of firings, interim CEOs, re-hirings and other twists (https://www.axios.com/2023/11/22/openai-microsoft-sam-altman-ceo-chaos-timeline)
• OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say (https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22)
• Everyone’s talking about OpenAI’s Q*. Here’s what you need to know about the mysterious project. (https://www.businessinsider.com/openai-project-q-sam-altman-ia-model-explainer-2023-11)
• It is Time to Profit off of the OpenAI Drama (https://www.nasdaq.com/articles/it-is-time-to-profit-off-of-the-openai-drama?time=1701022166)
• Yann LeCun | LinkedIn (https://www.linkedin.com/posts/yann-lecun_please-ignore-the-deluge-of-complete-nonsense-activity-7133900073117061121-tTmG)
• Advent of GenAI Hackathon (https://adventofgenai.com)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-247.md) | 15 | 0 | 0 | [Music] Welcome to Practical AI. If you work with artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners for helping us bring you Practical AI each and every week: fastly.com, fly.io, and typesense.org. [Music] What's up, friends? AI continues to be integrated into every facet of our lives, and that remains true because you can now index your database with AI, you can write more code and become that 10x-er you always wanted to be, and you can even draft a letter for a lease on an apartment or a new property. AI is everywhere, and it might be time for us to start questioning: is AI our friend or our worst enemy? That's the focus of the three-part season opener of the award-winning podcast called Traceroute. You can listen and follow the new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. This show is all about the humanity and the hardware that shape our digital world. In every episode of Traceroute, a team of technologists seeks to untangle the complex question: who shapes the internet? Seasons 1 and 2 gave us a crucial understanding of the inner workings of technology while revealing the human element behind tech, and season 3 tackles not just AI questions, but also: how can we use technology to preserve the Earth? Who influences the technology that gets made? And what happened to the flying cars we were promised? I think it's safe to say that the future of AI is both exciting and terrifying, so it's interesting to hear the perspectives of experts in the field. Listen and follow this new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. [Music] Well, welcome to another fully connected episode of the
Practical AI podcast. My name is Daniel Whitenack. I am the founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing very well today, Daniel. It has been quite a past week or so. It has been quite a week. In the US, people usually take off a couple of holiday days for Thanksgiving, but even leading up to that, and during it, there was all of the craziness of what I think will be remembered as a very unique Thanksgiving season, in the AI world at least. Yes, it's been a soap opera, to say the least. Yeah, yeah. I was trying to think up a good pun on a soap opera title, Days of Our Lives or something, but I couldn't think of one for OpenAI. Days of Our Artificial Lives, or, I don't know what it would be. I don't know. It's just, I mean, not even day by day, but at some points, you know, hour by hour, radical changes along the way. Yes, and of course we're talking about the saga of OpenAI and all that happened, which is what we're going to talk about today. Yeah, yeah. I mean, I think it's what we have to talk about, and I think rather than just us giving a few hot takes, which I hope we will have, it would be good to step back and look at the history of OpenAI, how it came about, and the progression of OpenAI as an entity and their offerings, which frames up some of the drama that we've seen over the past week, the week of Thanksgiving, November 2023. You know, it's interesting, Chris, I was looking back: we had Wojciech Zaremba from OpenAI, one of the founders, early on. Yeah, yeah, one of the founders, and that was episode 14 of this podcast. We're now on about 250, I think, somewhere in there. Yeah. It seems like from those early stages till now, we've been kind of going on our own journey in parallel with
OpenAI, as they've had this amazing journey as a company, or as an organization. And way back then, when we were talking to Wojciech, we were talking about really reinforcement learning and robots. I don't know if you remember, at that time they were doing the robotic arms, and they would have stuffed giraffes, and they would hit the robotic arms with the stuffed giraffes, which is kind of comical, but it was a perturbation. They were trying to create robust reinforcement learning to control robotics. I need to look back at the episode, but I think that was basically what we had talked about back then; that was kind of the focus. I remember being very interested in that episode, and he is such a smart person, holy cow. Because I had been leading up the first AI team at Honeywell at the time, and we were also doing robotic arm work within Honeywell's business, using, you know, convolutional nets to see, and it was early days compared to what we're doing these days. But yeah, holy cow, he was so smart. That was one of those episodes that's really hung with me over the years. Yeah. And kind of stepping back and taking a wider view of OpenAI as a whole, I'm even looking at the transcript of that episode now, and at this time, which would have been, I guess, four or more years ago, Wojciech said: "The goal of OpenAI is quite ambitious: it's to figure out a way to build artificial general intelligence, or artificial intelligence, or to be more exact, how to build it in a way that it's safe, or that we can control it." And he says: "Let's say, figure out from a political perspective how to deploy it in a way that's beneficial to humanity as a whole." So that's one little clip from that episode. And if you look at OpenAI's founding, if we just take a look at how it was founded and how it progressed to this point of the chaos of last week, it was really focused uniquely on this problem of creating an
organization that would steward us towards artificial general intelligence in a way that was beneficial to humanity. And there were various people involved in that, various funding groups. We mentioned Wojciech, but of course Sam Altman, who has been in the news a lot, Elon Musk, Greg Brockman, and others were part of that original group when the organization was founded in December of 2015. I want to explicitly point out it was set up as a nonprofit at that point in time. That's correct. Which somewhat changed, as we'll talk about. Yeah, which maybe was part of the tension that led to last week. Indeed. Yeah, so it had this initial board. Sam Altman was part of it; of course, he was in the news a lot this last week. Prior to OpenAI, Sam was the president of Y Combinator from 2014 until 2019, and I know there were some things in the news trying to imply certain things about why he was let go, or fired, or left Y Combinator in 2019, and trying to tie those forward. I don't know that that was totally coherent for me in terms of what it was, but anyway, that's kind of his past in this startup, venture-backed world. I think that would be the key thing to highlight there: he's coming from this sort of venture-backed startup world, IPO, raise a bunch of rounds of funding, and kind of a big tech mindset, I guess. Yeah, I agree with you. I think that was the very beginning, in my sense of it at least, of kind of that tension, in that it's a very different culture from a nonprofit effort, in terms of the way you run your business and such as that. So, you know, one other key commercial player in this would be Microsoft, who has invested at this point over 10 billion dollars in OpenAI Global, LLC. So that will be an important distinction here in a minute, but Microsoft is a big player in this, which is why you might have seen Microsoft's CEO making statements during the past week, etc. Correct. For whatever reason,
you know, kind of going to that point, because that is a structural concern: OpenAI is, by almost any account, a little bit bizarre. You know, even this podcast is almost as old as OpenAI. We're not quite there, but we were operating in 2019 when all this stuff was happening, and I remember it. They've sort of created, they have the parent company, which is nonprofit, and the short of it is they have this LLC, which is a for-profit entity, which is a subsidiary of the nonprofit. But because of the way they're operating now, all of the funding, the investment, everything has gone into the LLC, and yet you have this group of investors, disconnected from the daily operations, operating at the nonprofit level above. So it's two entities that were never really intended to operate together in a direct legal manner, at least from an intent standpoint, and somehow the attorneys have made this all work so that it is legal. Yeah. When I was looking at key takeaways, in terms of what we're doing here, which is sort of a retrospective on this crazy week of events, one of the things, which is not really related to AI, but if you're in business it is a key takeaway: don't create convoluted corporate structures. No, it's not going to help anybody. So there you go; that's a non-AI tip. If you're in the process of structuring some complicated corporate thing, try to simplify it. But yeah, you're right, the start was nonprofit, and NPR even described this as almost like an "anti-big-tech company"; that's how they, quote-unquote, referred to this nonprofit version of OpenAI when it started: that it would prioritize principles over profit. Again, the idea with the founding of this was that it would be a way for the best AI researchers in the world to help steward this really disruptive, and in some respects potentially harmful, technology into the future in a way that it
would benefit humanity. That was kind of the framing. I think what really plays into what ended up happening, which developed over the years, is that they were doing two things. They were serving that mission, as founded, from 2015 on. But as we have all learned and talked about quite a lot over the years, it's very, very, very expensive to create these large models. And so you get the sense that they were constantly fundraising, and they hit this point where they needed a serious infusion of cash to push forward where they were wanting to go. And I think Microsoft comes along and says, well, we'll give you an initial billion. But that also happened at the same point where this new corporate structure evolved, and I think that was all tied together, as well as I can tell from reading many, many articles on the topic. Yeah, to get that investment going. And so they did that: they got the new structure, they got the initial infusion of a billion, and then in the years after that they got 10 billion more from Microsoft. But by doing that, you had the driving forward on, you know, how fast can we get there and be the leader and do this, and that entire startup kind of culture, contending with: our mission says we're going to do this for humanity, and we're going to do this safely. And you can see them crashing together, and they have been this last week or so. Yeah, yeah. So specifically what you're referring to, Chris, is this transition from a nonprofit organization to a, quote-unquote, "capped profit," which, I had no idea such a thing even existed. The 100-times cap, yes. Until we started talking about this a couple of years ago. But yes, a capped for-profit, where the profit would be capped at 100 times any investment, which, according to the numbers you just told me about investment, would be a significant profit regardless. But a capped for-profit. And according to OpenAI at the time, and I think what they've
said, this really had to do with attracting talent and attracting investment at the levels they would need to achieve this progress towards artificial general intelligence that would benefit humanity. Now, there are two key pieces here: OpenAI, Inc., which is the nonprofit, and, it's very confusing and maybe slightly annoying to have to refer to these differently, but OpenAI, Inc. is the nonprofit, and we mentioned a second ago Microsoft investing in OpenAI Global, LLC, which is the capped for-profit. And what's interesting is that essentially this created a scenario where the board, which had full control of the capped for-profit company via the nonprofit, couldn't have a board member with some sort of financial stake in the for-profit company. All of that seems kind of convoluted, so let me say it again: the board that controls the OpenAI entities could not have members on it with financial stakes in the capped for-profit OpenAI Global, LLC, which means that Microsoft, for example, as a huge investor in OpenAI Global, LLC, did not hold a board seat on OpenAI, Inc. That will play into the timeline that we'll talk about here in a second, of what's happened over the past couple of weeks. And one other thing to throw in on that: though I am not an attorney, I'm fairly sure that the 100-times cap was one of the mechanisms by which they could make the for-profit fit into the nonprofit, because pure for-profits in theory have essentially unlimited ability to generate profit, and nonprofits are not allowed to retain profit; you must use those funds as expenses. And the 100-times cap, I think, was something of a bridge. I say that as someone who did not go to law school to learn this, but I'm pretty sure it's tied in there somehow. Well, Chris, in this saga of what's happened, we have this nonprofit, originally OpenAI, Inc., that has spun out this for-profit, OpenAI Global, LLC, which has
received a huge amount of funding from Microsoft and, as we all know, over the past couple of years has really become the dominant force in the AI industry, with releases of all sorts of amazing technology and tools, and of course, most recently, ChatGPT. It might be worth just giving people some context around this, of what happened over those years with OpenAI that made it such a driving force. Because, I mean, you could have a company that has a convoluted corporate structure, and it blows up, and the CEO gets fired, and no one hears about it in the news, and our lives pretty much go on, although it's probably unfortunate for their lives. But here, this had an impact on the whole AI industry, and I think it will have an impact on the AI market moving forward, because OpenAI was such a dominant force in the industry. So maybe we could go back and visit some of the milestones in the history that came along, from those early days when we were talking about robots and stuffed giraffes, to now, where we've got GPTs, I guess. So back in 2016, this would have been almost when we were starting the podcast, NVIDIA gifted the first DGX-1 supercomputer to OpenAI. Quite a gift of the day. Yeah. So they were early into this wave of really powerful, cutting-edge, GPU-powered supercomputers that could train larger and larger models, which is one of the things that has, of course, led to these very powerful foundation models we've seen in recent years. I don't know, they haven't gifted you one? You have it in your garage or something, Chris? No. My first DGX-1, which was when that was all there was in the DGX line, the original one, I got at Honeywell, and we had it under a desk for a while, because we got it and then we had to figure out how to get it into the data center and all that. But yeah, I remember thinking, wow, I have an AI supercomputer under our desk here. This
is amazing anyway Side, Story and in those kind of years after, that you see what maybe I would consider, and I know um it's hard to make, generalizations like this because, there's a lot going on at open AI even, now but in those early years I think you, really saw this kind of exploration, phase of setting up the right tooling, setting up the right compute letting, researchers explore various things like, the robotic stuff like RL computer, vision things and you saw things like, the universe software platform open AI, gem which I know I've used in workshops, for like reinforcement learning type of, thing so you really saw this kind of, wide range of exploration in those early, years leading up to February, 2019 when, gpt2 was announced and this was the, first kind of from open AI at least the, first major language model or Foundation, model base model that gained a lot of, attention for its ability to Output, really humanlike coherent text of course, is the first in the line leading up to, the latest GPT models that we see now, like GPT 4 but that was in February of, 2019 still something that I think was, basically just being looked at by People, Like Us yeah we episode 32 of our own, podcast was covering that oh there you, go episode 32 um gpt2 episode 32 it's, interesting um now Chris uh when I'm, helping people learn how to like, fine-tune a language model or something, like that I'll often use gpt2 and it's, interesting to see because now this, would be considered a very small model, and not only that but it's open right so, open AI was by its very nature and, stated aims going to be very open with, its research and models and IP and all, of that so gpt2 is on hugging face and, uh it's a great model to use even now to, kind of figure out how to find tune, language models but it's just, interesting to compare that to for, example GPT 3.5 or gp4 I'll never see, those models in terms of downloading the, weight and all that it became a, different org yes and uh 
with their own reasons for doing that, and that's their decision. But in 2020, a year later, OpenAI announced GPT-3, a language model that was more extensively trained on a huge corpus of content from the public internet — a lot of scraped data and all of that. And I think this is really where you started to see some pretty crazy outputs from these language models, things people might have thought not possible — they started to see this with GPT-3. This is also where you see a shift within OpenAI, from releasing GPT-2 as a model to releasing GPT-3 as an API, with a more gated, slow release. I was even reading, in some of the lead-up and debriefing from the chaos of last week, that you could interpret that slower release of GPT-3 as an API and a product as reflecting this tension between a startup mentality — wanting to fail fast, release things fast, learn from things in public — and the more shielded, nonprofit, good-of-humanity side, wanting to do things more slowly, in a way that ensures safety and releases things without harmful outputs, that sort of thing. So you see those two sides really starting to collide and come together around GPT-3. GPT-2 and GPT-3 were language models that caused a big stir in the AI world, but still not something the general public knew much about — ChatGPT wasn't out yet. I think even when they released DALL·E in 2021 — the text-to-image model where you could put in your prompt, "the astronaut riding the horse on the moon," and get that image — the public still saw it as kind of a novelty. And that led up all the way to December 2022, of course, when ChatGPT was released, at least in free preview, which changed the world. Yes — even going back to DALL·E, people were aware there was this thing, and they saw all these crazy AI pictures, but if you had asked someone who wasn't following this industry the way we do, most of them out there would have said, "oh, I've seen some of those photos," but they couldn't tell you what the organization was at the time — they didn't know. That really changed with ChatGPT; the whole world woke up to this stuff. Yeah, and I mean, we've been talking about OpenAI on this podcast pretty much nonstop since then, and everyone else has as well. Of course, ChatGPT isn't the only thing going on in the AI world, and hopefully we're representing that on this podcast, but it is certainly a driving force, and we've mentioned it a lot, and that's why the events of last week caused so much stir. It's worth noting that things still happened after ChatGPT, right? Right — we had GPT-4; we had what I think you saw as a shift in public discourse from Sam Altman and Greg Brockman at OpenAI, really related to, "hey, we need to provide recommendations for governance of superintelligence, governance of artificial intelligence" — and at the same time, rapidly releasing new products as well. So you see this tension again: hey, we want to talk publicly about governance of superintelligence and regulations around this, and at the same time we're going to have our OpenAI Dev Day and release four new offerings that are going to blow your mind and immediately permeate all industries. So you've got GPT Vision and the GPTs (plural) — which are kind of easier ways to create customized models and systems, RAG workflows — along with other things like their Assistants playground and API. So again, you see these two things coexisting, where, in the words of NPR, there are, quote, "two competing tribes within OpenAI: adherents to the serve-humanity-and-not-stakeholders credo, and those who subscribe to the more
traditional Silicon Valley m.o. of using investors' money to release consumer products into the world as rapidly as possible," end quote. So you see this even in this past year, I think, in public discourse and in the release of products. And that leads us all the way to November 16th. I don't know — where were you on November 16th, Chris? We're talking like it's this great historical moment. As we're recording this — what is it, Sunday afternoon? — this was a week and a half ago. We're talking on a Thursday, and so the whole thing happened in that week before Thanksgiving, and it was done in one week, from a Thursday basically to a Thursday — or Thursday to the Wednesday — for all practical purposes. I was working and happened to look at the news, and the first thing I saw was that OpenAI had fired Sam, which was crazy, because it was right after his keynote. I know — a day, it was a day after his keynote. Yeah, it was just mind-boggling. Yeah. So apparently he got a text on Thursday night from one of the co-founders asking him to join a Google Meet on Friday, and they had already tapped the CTO, Mira Murati, as the next CEO. And on Friday, literally, Microsoft learned of Altman's firing, quote, "a minute before the world does." Again, it was on Friday that we actually heard it, not Thursday. Yeah, exactly — it got into motion on Thursday, and then some of us heard about it on Friday, and apparently so did Microsoft, which was just... you know, I knew about the convoluted corporate structure and all of that, but it was just mind-boggling to me that Microsoft was not informed about this, given their investment in the for-profit entity — which I think they have a 49% stake in, if I recall correctly. Correct — yeah, it's something like that; I don't remember the exact figure, but certainly they've invested over $1 billion, with plans to invest $10 billion plus more, and of course they've integrated OpenAI — ChatGPT, etc. — into Azure, into Bing, into Microsoft Office 365. And so, yeah, it just boggled the mind that this happened in the way that it did. So then on the 17th, OpenAI releases a statement that Sam Altman was ousted, and later that evening, Greg Brockman announces he's quitting. They got rid of him as the chair — so he was fired as chair — but then he turned around; he was still president, and he quit as president, standing with Sam Altman. Okay, well, that brings us to the point, Chris: Sam and Greg are out. And then, going day by day, on that next Saturday Microsoft releases a statement — I don't know if you saw the Microsoft video — essentially they immediately offered to hire Sam Altman into Microsoft, and Greg, and anybody that left OpenAI would be welcome to enroll as well. Yes, and I think I could tell in those videos the sort of lingering shock and disbelief in a lot of people. So Microsoft releases these statements, and people of course start wondering why this happened — this had to be something really bad that Sam did; why would this ever happen otherwise? And there was some reporting, and some statements that were basically kind of confusing and murky: they were saying, no, it wasn't any violation of security or privacy practices, or any kind of malfeasance on Sam's part — but you got these vague hints of, oh, he wasn't completely forthright with the board, and his communication didn't allow them to properly make decisions. So you get that mix of stuff going on until November 19th, when Altman announces that he has been hired into this new research unit of Microsoft and posts on Twitter/X, "the mission continues." And in the meantime, Mira, you know, was the first designated CEO; they had another CEO for another day on Sunday, and
then a third one — they went through CEOs one a day for a while there. Yes — when I was hopping on my Zoom calls, I was asking people if they had been asked by OpenAI to be the next CEO yet, because it seemed like they were working their way down a list. I don't know how far down the list I was; I was kind of sad I wasn't in that top ten. But they were definitely working their way down some type of list. And I forget the name of the one that came after Mira — it's escaping me right now while we're talking. But the OpenAI employees, it was reported, were using — I'll gently say — FU and showing the middle finger on their internal Slack. Apparently the employees had had enough. By the time we got to mid-weekend, going Saturday into Sunday, the employees started finding their voice, as we'll hear about next, and there were reports that up to 95% of the employees within OpenAI were going to depart if some deal wasn't struck to have Sam Altman return as CEO — which obviously would be the end of OpenAI, at least as we know it. So, I don't know — I can definitely tell you, and this is one thing I'll highlight later on, that all of these people who have relied on OpenAI as the sort of bulwark of the industry, and integrated it across their products, were really in a state of panic. Because it's like, hey, OpenAI is still saying they're going to provide great support for all this stuff, but also you're saying potentially 95% of your employees are leaving — so where does that leave all of these products? It was really a time of reckoning: hey, we've built a whole strategy around OpenAI products and models, so now what? And the resilience of relying on this single family of models was really, really showing in that moment. If only there were services that could help people find a variety of different models and not be entirely dependent on a single family... Prediction Guard? I didn't say that. Yeah — sorry, this is a point we'll make later, but I think it does matter. Even a few weeks ago, we had some comments about whether, with the executive order, there is a firming up of the market around these foundation models because of the regulatory burdens that would be starting to be put on them. And I think this opens up the field a little bit more. Regardless of what has played out with OpenAI, I think people are really wrestling with the question of, hey, what else is out there? And of course you've got amazing players in that space, from Mistral to MosaicML to Meta and what they're doing with Llama 2, and people are really finding — of course, we're helping; Prediction Guard is helping people build with these models — but a lot of people are finding a lot of success with these models. And I got a number of messages during this chaos along the lines of: hey, we weren't thinking about trying open models, or models outside of the GPT family, prior to all of this craziness going on, but now we're wrestling with that — not necessarily over the fence yet in terms of a strategy around it, but it's definitely caused people to think a little bit more about this market. Yeah — having backup, true diversity, if you will, in terms of model selection is now going to be in the corporate consideration for any significant organization going forward. We've seen the chaos, and that will change the marketplace in general. Correct, correct. That brings us all the way to November 21st of 2023 — this would be Tuesday, I believe — when OpenAI releases a statement that they've come to a quote "deal in principle" with Sam Altman to return as CEO, with a completely new board chaired by former Salesforce co-CEO Bret Taylor. And so one way you could take this is that the sort of Silicon Valley, venture-backed world won: the
board is now chaired by a leader in that space. And so a big question mark hangs over this: what about that original vision and mission of the OpenAI nonprofit? Is that completely dead, or does it still exist? I think that's a real question to ask. Yeah — if I were channeling the OpenAI marketing team, I would expect them to say absolutely, that still exists; it's an overriding mission. But I would also suggest that the marketplace is probably not convinced of that at this point in time. Yeah, for sure. Before we go on, I just want to point out one thing — another big retrospective consideration that will not go away quickly — and that's the ability of the voice of the employees, in a unified stand, in such a company. This was the biggest AI company in the world in the sense of mindshare and what they're doing with models, and the employees made this happen when they said, you're not going to have an organization if you continue down this path. That made a big difference, and I think that's another thing we will see play out in organizations going forward. Well, there's one more piece of the mystery puzzle here, Chris, which is now a meme in and of itself, I would say. So they struck the deal with Sam Altman, and then on Wednesday, November 22nd, if I'm getting the date right, there was reporting that ahead of Sam Altman's departure, some researchers wrote a letter to the board of directors of OpenAI telling them about this discovery, or new model being worked on, called Q* ("Q-star"), which they basically said was a threat to humanity. So this brought up new questions: was Sam Altman not taking this seriously, and is that why he was fired? Did it play into this at all? And then of course, in addition to all of those questions, people immediately started speculating: well, what is Q*, and is it a threat to humanity? And there the memes started across the internet. Our extended family got together for Thanksgiving, and I make a point of not bringing up AI — I mean, we have a broad family, lots of interests, not in AI in general, or even technology in general — and this immediately came up as the first big topic. Everyone is scared about whether Q* is this threat to humanity — and it is not you and I and the rest of us AI folk talking about this; this is the general population. I was really quite startled to sit in the family gathering and have that become the primary conversation. I was not expecting it. Of course everybody's going wild about this, just like when they were going wild anticipating what GPT-5 or GPT-4.5 or whatever the next thing from OpenAI is — there's always going to be this level of hype around it. I think what we can practically know is maybe based on what they've said, which is that the report basically described a model that could maybe solve math problems better than previous models — which has always been a sticking point for these generative models. And the "Q" in the name kind of hints at this having some sort of reinforcement learning aspect to it, because one of the main mechanisms within deep reinforcement learning is called Q-learning, which is a mechanism by which an AI model, or a policy model, tries to predict the long-term return of taking an action — so, essentially, planning. Which — I think you also forwarded me a LinkedIn post? Yeah — Yann LeCun. Yeah, I'll read it. He weighed in — keeping in mind this came out a couple of days ago; he put it out at least on LinkedIn, and he probably put it on Twitter too — but it was exactly to your point right there. He had been watching several days of this kind of hysteria about Q* and people panicking — and I will note that there were some pretty crazy news articles about it along the way. What Dr. LeCun said was: please ignore the deluge of complete nonsense
about Q*. "One of the main challenges to improve LLM reliability is to replace auto-regressive token prediction with planning" — to your point, Daniel. Pretty much every top lab — and he names a few — is working on that, and some have already published ideas and results. "It's likely that Q* is OpenAI's attempt at planning; they pretty much hired Noam Brown to work on this." And I think he was trying to get back to practical AI, from his perspective. Yeah, and I think this just represents a wider rift in the AI research community as well, where you have a race to dominate the market with new models at odds with this really zealous push to prevent AI from advancing beyond our control. So you see this playing out not just at OpenAI but elsewhere, and I think that fed into some of the Q* craziness. Well, in terms of retrospective — as we've told the whole story and stepped back and looked at OpenAI — there are definitely, from my perspective, some takeaways. Of course, I'm actively leading a company that is providing hopefully trustworthy LLM APIs in enterprise environments that can be self-hosted. So my big takeaway — and I think what I was busy doing all week was answering people's questions about, hey, if I don't have OpenAI, what is there? — is that it turns out there's a lot. It doesn't necessarily have to be our APIs, but there are a lot of options for using enterprise models in a safe, secure environment that can be deployed under your control. And yes, there's a balance to strike: the OpenAI APIs are really good because they're a managed service, but they also have downsides, and I think you've seen this over time with other managed services. Like running your own database, there's a trade-off between relying on a one-off, unique hosted service that is a single point of failure versus hosting your own, or maybe having your data spread out across different databases in your infrastructure. You see some of that infrastructure concern, and the concern around the resilience of these systems, really playing out. And I think we'll see other companies rise to that as well — there's going to be a new wave of companies that play off what has happened over the past week to provide enterprise solutions. Yeah — AI risk management as an industry field has been born; that's what has happened here. Yeah, AI isn't just the data science team going off and either using APIs or, in some cases, creating their own models; you now have risk management as a formal corporate concept that everyone will be adopting, and it will go through all the ranks of every organization. So an entire new industry has been born out of this. Yes. Another point I saw being made is that this is kind of a wake-up call to regulators as well. You have companies testifying before Congress to tell them how maybe they should be regulated, but this is a wake-up call that, hey, maybe these companies aren't so good at regulating themselves that often — so maybe there needs to be a different way that we approach regulation of this technology. And I think you see other evidence of that: coincidentally, around the same time, it seems that Meta has disbanded their Responsible AI team. Now, I'm not making a comment on that — I'm sure they have other thought processes going on around it — but it is an indication that maybe some of what were good-faith efforts at this were, in practicality, at odds with commercial pressures. So yeah, it'll be interesting to see how that plays out on the regulatory side as well. I agree. There's another cultural issue — in the large — regarding, as we're doing this retrospective: if we go back just a few years on this podcast — and
not very far, if you really think about it — when the subject of artificial general intelligence, AGI, would come up, we didn't take it too seriously. It was one of those things that wasn't really "practical AI" for us at the time; it was one of those someday-maybe-we'll-get-there things. And I think what we've seen, in a couple of different ways — we saw it with ChatGPT coming into being, the capabilities, and the fact that it hit the public's consciousness so intensely — and then the concern, no matter what Q* ends up being, regardless of what the outcome is: Thanksgiving dinners were actually filled with people with genuine concern — whether it's "general" or not, it's general in a certain way, there you go — about the power of it, and what that would mean for their own lives, even if they had nothing to do with the AI industry. And so what we've seen change, in the large, is that the notion of artificial general intelligence is now entirely legitimate to ponder, to be concerned about, to either fear or be excited about — or maybe a bit of all of the above. And that's a very different thing; if we were to go back a couple of years, we had a very, very different perspective. Well, with that, Chris, I think that's a good perspective to bring us to a close here. It's been an interesting Thanksgiving week. On these episodes we normally also try to provide some learning opportunities for people, and there's one that I would like to highlight quickly which I think will be a lot of fun. Some of you might have heard of Advent of Code, which is something a lot of people do to learn new coding skills and try new things each holiday season. I'm helping run an Advent of Generative AI hackathon with Intel — that's happening December 5th through the 11th. So I would encourage you: if you haven't been hands-on with these technologies and these models, this is a great way to get into that without spending a bunch of money, and to learn from a lot of experts in the field, with access to people that are working on this day to day. So check that out at adventofgenai.com, and we'll link that in the show notes, along with all of the many, many articles that have been written about OpenAI over the past week. So — well, interesting week, Chris. Who knows what we'll be talking about next week, but I'm excited to do it. You never know these days, I'll tell you what. Yeah — we'll find out. Talk to you then. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts — check out what they're up to at fastly.com and fly.io — and to our beat-freaking resident, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Generating product imagery at Shopify | Shopify recently released a Hugging Face space demonstrating very impressive results for replacing background scenes in product imagery. In this episode, we hear the backstory technical details about this work from Shopify’s Russ Maschmeyer. Along the way we discuss how to come up with clever AI solutions (without training your own model).
Leave us a comment (https://changelog.com/practicalai/246/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Advent of GenAI Hackathon (https://adventofgenai.com/) – Join us for a 7-day journey into the world of Generative AI with the Advent of GenAI Hackathon. Learn more here (https://adventofgenai.com/) !
• Traceroute (https://deploy.equinix.com/traceroute/) – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!
Featuring:
• Russ Maschmeyer – Mastodon (https://sfba.social/@strangenative) , Twitter (https://twitter.com/StrangeNative) , GitHub (https://github.com/StrangeNative) , LinkedIn (https://www.linkedin.com/in/russmaschmeyer)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Advent of GenAI Hackathon (https://adventofgenai.com/)
• Shopify’s HF Space for background replacement (https://huggingface.co/spaces/Shopify/background-replacement)
• Shopify Magic (https://www.shopify.com/magic)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-246.md) | 7 | 0 | 0 | [Music] Welcome to Practical AI. If you work with artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners for helping us bring you Practical AI each and every week: fastly.com, fly.io, and typesense.org. [Music] Hello, it's your co-host Daniel here. I just wanted to let you know about an awesome event that's coming up this December of 2023. It's called the Advent of GenAI — a hackathon being put on by Intel's Liftoff program and Prediction Guard — and it's going to be a seven-day journey into the world of generative AI. You're going to get access to some really cool hardware from Intel, you'll also get access to run prompts through the latest open-access LLMs via Prediction Guard, and every day of the challenge you'll get a new chance to show your generative AI skills and learn a bunch of cool stuff. So I encourage you to register — it's totally free — and you can take part and learn all of the cool generative AI things that you're hearing about on this podcast. Find out more at adventofgenai.com — that's adventofgenai.com. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack; I am the founder at Prediction Guard, and I'm joined as always by Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing very well, Daniel — how's it going today? Oh, it's going great. As our listeners know — at least the ones that have listened to a lot of our shows — my wife owns a business, an e-commerce business, quite a business. And next week, as we're recording this (for those listening at a future date), next week is Thanksgiving here in the US, which means next Friday — well, it's already sort of started — but next Friday through that following week, Black Friday / Cyber Monday is a huge retail and e-commerce extravaganza in our world. And so I'm really excited, because leading up into that, today we've got the expert with us: Russ Maschmeyer from Shopify, who is Project Lead for Spatial Commerce and Magic Labs at Shopify. Welcome, Russ! Hey, thanks, guys — I'm super excited to be here. It's been cool to follow along with the podcast, and I'm just super stoked to chat with you guys today. Yeah, for sure. Well, I'm coming into this super excited, because over the past — well, it's been ten years of my wife's business, and at least nine of those years they've been on Shopify — which means I have been in Shopify. I've dug into all the data behind it, I've worked with the Shopify API, I've built chatbots on top of Shopify to sign up wholesale customers, I've dug into the Liquid code on the site. So I'm all about whatever I can learn from you today and hear about what's going on at Shopify. I am super excited. I didn't even know he was that much into it, Russ, you know. Well, when you're the husband of an e-commerce entrepreneur and you're also a data scientist, occasionally favors are asked. And I'm feeling very third-wheel now; I just wanted you to know. Well, over that time, it's been cool to see how Shopify has added so many amazing features and is really powering a lot of huge brands — not only small brands but larger brands. I'm sure you all are gearing up for a crazy... first off, I just have to ask: what is the week leading up to Black Friday / Cyber Monday like at Shopify? You know, it's very busy, as you can imagine. I was going to say — what a loaded question. It's the kickoff to the biggest shopping season of the year, and Shopify powers just an enormous amount of that holiday shopping season, so you can imagine the teams internally are prepping for it. They are getting products locked in place and operating at their optimal, maximal performances, just to
support the, load that's coming in this upcoming, weekend but um you know every year we, also launched this really cool live, Globe uh that's a 3D visualization of, all the live data and P and like orders, happening all around the globe in real, time so you see like you know orders, streaming around this globe and so this, year uh uh I've been also helping to, lead some of those efforts I'm really, excited for that to get its annual debut, this year um you might you might see, some ideas we talk about today up here, there cool yeah it's like the live view, of Santa going around the world at at, light speed, totally totally the S, entrepreneurship there's all sorts of, interesting things that Shopify is doing, but specifically here we're talking, about Ai and what you're doing in, regards to that maybe as we jump into, that could you describe a little bit, from a person that's embedded in the, kind of e-commerce world and seeing what, a lot of people are doing in various, Industries various Stores um how do you, view the impact of AI on e-commerce, specifically right now and kind of where, it's headed like where's the where are, we seeing the biggest impact in terms of, AI right now in e-commerce and I know, we're going to be talking about some of, the recent things you've done but kind, of across the board what how does it, look to you and what are people thinking, about yeah I mean Shopify was pretty, early in kind of this new wave of AI, capability to say like hey whoa like, this is a completely new class of, possibility for the tools that we make, for merchants and the shopping, experiences that you know our platform, provides on the other end to Shoppers, and Shopify is really just kind of here, to make Commerce accessible and, Entrepreneurship accessible to everyone, and we're really excited about these, tools as a way to kind of further, democratize entrepreneurship for people, there's so many things you have to, create and produce and ideas to develop, and 
knowledge to gain about markets and positioning and strategy and branding. There are so many tasks that entrepreneurs have to learn and develop along the road to building a successful business, and LLMs and generative AI are all incredibly powerful tools to help accelerate that learning curve for new merchants, to help them get up that curve faster and build better businesses. I'm curious. We all hear, as consumers in the world, about AI impacting retail, but for those of us who don't have as much of a view on it, can you talk about how you've seen retail change in these last few years, with AI really getting in everywhere? What are some of the things that might surprise people? They kind of know there's AI in the background, but they don't really know how it plays in or what it is. Surprise us a little bit: what's something that would make me go, oh wow, I didn't realize that? Well, we started by just adopting these tools in our engineering practice to begin with. We got some of the early previews of Copilot and started using that to help accelerate some of our development work early on. But really, the place where we've seen it have the biggest impact in the near term is on tools for merchants. When we think about who our core customers are at Shopify, it's the merchants who we power with our platform and enable to do really creative, amazing things at a scale that maybe they never thought was possible for them. And AI is again sort of a way to accelerate that work and give them more time back. Instead of spending an hour and a half trying to craft the perfect product description, because you're not totally sure exactly what makes a good product description: last year at our Winter Edition we shipped a really simple tool where you just enter a couple of raw details about your product and hit the magic button, and it writes a well-crafted narrative product description that speaks to product benefits and all the great standard practices of writing a good product description. You get that in seconds, versus an hour of human toil. So the place where we've seen AI really have the biggest impact early on is just in accelerating the work that merchants are already doing. And, well, I guess it's e-commerce, but it's also web content development; it's a very multimodal thing, right? You've got these product descriptions, that's part of it; you've got product imagery; you've got website layout; you've got potentially ads and integration with other platforms. Talk a little bit about that space, because, as you mentioned, there are so many tasks to address within it. As Shopify looks at the merchant experience, how have you narrowed down on the particular problem sets that merchants really want to hand off, versus those other things? I know from just being in it that marketing teams love to get in there and tweak things and be part of the process, but they also really don't want to do certain things, things that are just kind of grunt work, essentially. That sounds like it's coming from experience right there. Yeah. We have a word that we use at Shopify, toil: this idea of work that kind of has to be done but isn't desirable work to do. And so we look for toil that merchants do. We've spent an enormous amount of time sitting down with merchants, talking with them about how they use our platform, what they want more out of our platform, what they wish they could be doing with their business, and what they are doing with their business. From that we've learned a ton about the ways that merchants would like to spend their time, and then what are the ways that they just kind of have to, because that's the way the world works right now. I think the opportunity for us is to find those moments and to build tools, particularly Magic tools, into those spaces that just make that go away. When we do that, what we hope is that merchants will take that extra time they have, that hour they got back not spending on that one product description, or that one blog post, or that one email headline (ah, should I use A or B? I don't know), and we just give them a really easy tool to generate that content, make it really high quality, give them the control to adjust if needed, and then publish it really quickly. You already mentioned one of those, this product description tool. Are there a couple of other ones that you could highlight, just to give a sense of the breadth of how this technology is applicable in the space? So we've launched this suite of tools that we call Shopify Magic. It's our free suite of AI-enabled features across our whole Shopify admin, and these things crop up in a few different places. It can help you take the power of your own data and make it work better for you. We've applied that in places like email headline and subject writing for marketing emails, we've leveraged it in the context of generating blog content, product descriptions is obviously another, and we're really excited about some of the early work that we've also done in the image generation space. We recently released a Hugging Face space that I'm super excited to dig into more, I'm sure, a little bit later. When you think about a storefront and the kind of content merchants need to produce, it really falls generally into one of two categories: it's either text or it's images. We're really excited about both of these spaces and helping accelerate merchants there. I know Daniel has
used the tools a lot, but if you had someone who was a novice, and they were getting into business, and let's say they're starting now, so it's a new store... Chris is gonna sell, what, socks to fund your animal charity? Raccoon socks, there you go. I like it. Christmassy raccoon socks, how's that? Will it keep the raccoons out of my trash cans? Is that what they do? If you want them to, that's no problem. For those who are going "what just happened on the show": in the world away from technology and AI, I'm a wildlife rehabber, and right now I have 20 raccoons at my house, so that's what that's all about. It's a full Christmas party. Oh, it's quite a Christmas party when you put 20 raccoons loose in a room. Okay, so back to my new store that I just opened up. I'm excited, but I don't have Daniel's depth of experience at this. What are all the amazing things? I'm either by myself or I don't have a lot of help, everyone's tossed me to the wolves, and I've come to Shopify because I know you have all these magical tools. Can you tell me a little bit about that experience from a merchant standpoint? On day one, what am I getting into, how should I think about it, and how do those AI tools directly impact what I want to start doing today? Yeah, totally. Well, from a merchant's perspective, if they were to log into the admin today, I don't think they'd be overwhelmed with the amount of AI tools all over the place. I think today we've really started with a focused approach that feels super seamless and integrated into just the activities that merchants are already doing. For example, product descriptions. Let's say you're a merchant, you just started building your storefront, you're super excited to get that up and put a great face out on the web, and you're starting to build out your product catalog. You're starting to think about, how do I merchandise my products, how do I talk about them? You're a new merchant, you haven't really done this before, you don't know what the best practices are for product descriptions, or maybe you want to create some SEO content to market your brand and product expertise in the space. You can go into your product detail editing page for a product you want to add and just drag and drop your images. One of the really cool things, and I'll say this because I'm also product lead for Spatial Commerce, is that you can also drag and drop 3D models into that image bin, and it'll handle them beautifully, so there's some cool stuff coming down the line with that. Raccoon 3D models! That's awesome. This is gonna be an awesome site, Chris, I'm looking forward to it. So if you've got 3D models, drop them in there too; those will display on your product detail page on your web storefront. And then you get to that challenge of, okay, now I've got to write a product description. Oh gosh, I haven't thought this through, I'm not really a copywriter. I went to business school and maybe I can write things, but I don't know what makes a good product description, what does that even look like? I could go and spend an hour or two doing Google searches, combing through results, and collating my own idea of what makes a good practice for product descriptions. Or I could just click on that lovely little sparkle button after entering in: oh, it's white, it's these dimensions, it's got these materials. And boom, you've got this incredible text description of your product. It pulls from your product title, so if you've mentioned that it's this kind of product or this category, it gathers all that context initially and then brings that to bear on the description that it
writes. You'll have the ability to pick what tone you want that description to have, so we give the merchants some ability to shape it: do I want this to feel sophisticated, do I want it to feel fun, do I want it to feel like there's deep expertise behind this product description? So I think those really simple tools, placed seamlessly into the UI exactly where the merchant is doing these activities today anyway, are really the powerful first step that we want to take to introduce merchants to these new tools, and then we'll expand from there in some pretty powerful ways. What's up, friends? AI continues to be integrated into every facet of our lives, and that remains true, because you can now index your database with AI, you can write more code and become that 10x-er you always wanted to be, and you can even draft a letter for a lease on an apartment or a new property. AI is everywhere, and it might be time for us to start questioning: is AI our friend or our worst enemy? That's the focus of the three-part season opener of the award-winning podcast called Traceroute. You can listen and follow the new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. This show is all about the humanity and the hardware that shapes our digital world. In every episode of Traceroute, a team of technologists seeks to untangle the complex question: who shapes the internet? Seasons 1 and 2 gave us a crucial understanding of the inner workings of technology while revealing the human element behind tech, and season 3 tackles not just AI questions, but also: how can we use technology to preserve the Earth? Who influences the technology that gets made? And what happened to the flying cars we were promised? I think it's safe to say that the future of AI is both exciting and terrifying, so it's interesting to hear the perspectives of experts in the field. Listen and follow this new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. I think how we initially started chatting back and forth was at least partially because of seeing this Hugging Face space, which is really cool, that you all put up. I know it got a lot of attention, partially in the merchant world but also in the AI world, around the community that's being built around open source AI tools on Hugging Face. You had a space there that had to do with product photography. Before we go into the technology of the space and how this works, could you describe a little bit of the motivation behind this project? You mentioned product photography, but if people haven't been exposed to e-commerce as much, or worked on their own e-commerce store, they might not realize what product photography means and some of the challenges around it. So could you set up the motivation for this before we hop into the technical pieces? Yeah, absolutely. Merchants spend an enormous amount of time and money generating visual media that's compelling, that gets people excited about their products, either the details in the design or the lifestyle that it might afford whoever buys it. These images are really core to what drives a lot of commerce online, whether it's advertising, or building out an attractive web storefront, or appearing in various channels and different marketplaces as well, but not least of which is on the product detail page, where a shopper has landed and ostensibly they're interested in this product. The job of the images in that context is to do the best job possible painting a picture of what that product looks like in somebody's life, as well as all of the details about the product. Early on,
last year, when Stable Diffusion and other open source image models started to land, I got really excited about a future where you could imagine merchants being incredibly more agile and cost efficient in how they create these images. So we started digging in pretty quickly. We played with DreamBooth as soon as that was available for Stable Diffusion, and we started to see: could we actually train a DreamBooth model that could encapsulate the concept of a product and recreate it in high fidelity over and over and over again? That's the dream, right? And we're getting closer and closer to that, but we weren't quite there yet. Still, some of those early explorations proved beneficial to understanding the space, understanding the technology, and thinking a little more deeply about some simpler ways we might be able to bring this to market in the near term. When you think about the opportunity for image gen in commerce, it's massive. The promise of being able to recreate your product in high fidelity in any scenario is kind of the dream: you could imagine requesting any kind of lifestyle or product detail image and just, in seconds, getting that out the other end to use in your storefront, or in blog content, or in advertisements about your product. That's incredibly powerful, because commerce is always changing, taste is always changing, and seasonality is a huge piece of commerce: thinking about how you merchandise and market your products differently in the spring versus the fall. Keeping up with the amount of imagery required just to drive that part of your business is really challenging. I think the reason we got really excited is that we saw an opportunity to take the existing imagery that merchants had, either from past photo shoots or from humble at-home photography with their mobile cameras set up on the kitchen counter or whatever they might have access to, and give them a tool that would not change any pixel of the product itself, but would otherwise completely reinvent the reality around that product. You've seen a lot of examples of this out in the market, but I think the key problem we saw with a lot of these early examples is this: you do object segmentation to select the product and keep it, you know, sacred. You don't touch any of the pixels that you guard and mask there, but then all the pixels in the background you reimagine with AI. And what we saw with most of these early tools was that there was this real disjointed appearance between the product that got masked and safeguarded and the reality that was created around it. The camera angle looked like, oh well, this one was taken from above, but the original product image was from straight on. There are no grounding shadows, and there are no realistic reflections of the product in the environment. The pixels of the product and the pixels of the environment aren't speaking to each other; one hand doesn't know what the other is doing, and so they can't knit those moments of grounding around the product that really sell the illusion that it's part of this other reality: those shadows, those ground reflections, seeing maybe some of the light of the scene hit the product object itself. So we wanted to really tackle those grounding problems that we saw in a lot of these early examples. I'm happy to dig into all the technical details of how we got into that, but that was really the opportunity that we saw: to begin to bring some of this magic to merchants really early, before we're even yet to that perfect personalization where you upload a bunch of images of your product and it produces them again perfectly out the other end. We can begin to
bring really powerful tools to merchants in the space already, even with techniques like this. And just as a point of clarification, when you say grounding in this case, you're talking about that visual context carrying across, as opposed to technical grounding with a model, just because we talk about both on the show. Yeah, it's a great clarification. Yes, I'm purely talking about the visual aspects of the output image and making that product feel seated in the new reality in some visual way. Before we hop into more of the details about how you actually accomplished this, I'm wondering how you see the state of open source generative models in comparison to some of the other platforms out there that do enable amazing things, but not in an open source way. It sounds like, at least for your team... I don't know if it was personally important to you and your team to leverage some of this open technology, or if it was just that these things are openly available, they're licensed permissively for your use, and they're enabling things you couldn't do before. How do you view the state of generative AI on the image side specifically? We've talked a lot in recent weeks about the text side of things and how open text generation models compare in certain ways to closed models, but I'm wondering, from a team that's actually used these openly and permissively licensed image generation models, what was that experience like? It sounds like this grounding element was one thing you had to deal with, but what was it like generally to work through the details of getting the model, figuring out how to run it, figuring out how to scale it, that sort of stuff? Could you describe a little bit of that process? This is a really early field, so we're still figuring out what the tools need to look like and how to work efficiently. We were working on some of these early ideas in a very falling-over-ourselves way in some notebooks, trying to collaborate and work together, and just not seeing the pace that we wanted to see in our iteration speed. Our team works really quickly; we work on these three-week sprints to very rapidly prototype and understand a new technology space and develop some kind of potentially useful concept there. So we needed a way to move faster. Toby, the CEO at Shopify, is incredibly technically adept, an incredible developer in his own right, and he was really interested in some of the image gen work we were doing in the early phases, and he suggested that we pick up this new tool called ComfyUI, which is an open source tool. We're big fans of open source at Shopify; it's why we shared to Hugging Face, because we want to contribute back to that community. You can go take our pipeline and do something with it; it's up on Hugging Face. So we're really excited about open source, and obviously about the capability of other providers as well. Our objective is always to bring the best technology to our merchants, whether it's open source or by a closed provider, so we're really excited about all contributors in the space and what tools we can build for merchants with them. We focused a lot on Stable Diffusion in the early days, and we were excited when Stable Diffusion XL launched; that's actually the model that underpins our Hugging Face space. We've done a lot of work with Stable Diffusion in all of its iterations as we've explored this space, and we're excited to continue working with it and building amazing new stuff with it. But yeah, we used ComfyUI and dug into it. I think what we loved about it is that it's this node-based UI. I come from the design world originally, product and UI design, and there was this much-loved tool, originally from Apple, that got hacked on by a bunch of prototypers, called Quartz Composer. It's a node-based interface with a bunch of little modules that do little conversion jobs, and you can wire them together into these larger machines, recompose and move things around really quickly, rewire them, change the constant values, and very quickly build these very complex computing machines in a visual way. For me and for our team, that was a really powerful tool to accelerate our process. We began building these machines, this pipeline that we ended up putting up on Hugging Face, in ComfyUI, and iterating there. When we had it in a great place, we pulled that code into the Hugging Face space, rebuilt everything from the ground up with the models hosted on Hugging Face, and encapsulated the pipeline there. But we were able to iterate super quickly, and visually, this way, and see exactly what every piece of the machine was doing at each run. It's really interesting, because you're taking new capabilities in the AI space within a large business that's running, and you're trying to do the uptake while absorbing the technology at the same time. As you pointed out, your CEO brought ComfyUI to your attention as you were doing these activities. As a business, how do the folks there decide to make investments in certain areas with these new technologies? Because there's the pull and push of, well, direct AI isn't our business; our business is to make the best platform for all these merchants. And yet there are all these new capabilities out there, but they're not mature enough. You brought an example to bear a second ago. That's a complex set of business processes to work through
and figure out what's the right level. How does Shopify think about that, you and others there? In terms of, is this a step too far to go on a particular technology, or is it appropriate, like ComfyUI turned out to be? How do you make those choices? It's a jungle, and one of the tools that we've used is really our Magic Labs team. Early on, at the end of 2022, as we began to see some of the rapid advancements in LLMs take shape and the product possibilities became clear, we started our early efforts around product descriptions and generating those on the fly for merchants. Early on it was really about saying, okay, what are the things that this new technology is obviously going to be capable of, with maybe a little prompt engineering? What seems to be well within its grasp, but also of maximum time-saving value to merchants? And product descriptions was that perfect Venn diagram out of the gate. It was just kind of obvious to everybody: we already know so much toil is spent creative-writing something that can probably be written pretty quickly if you have the necessary context and obvious best practices and all of that stuff. So we got to work on that and we shipped it super quickly; I think we turned it around from concept and team assembly to launching at Winter Editions in about two months. It was one of those incredible, accelerated moments where just the right people and the right technology and the right opportunity come together. Pretty rapidly out of that we formed the Magic team at Shopify, to help invest more deeply in these AI technologies and figure out all of the places and all of the ways we wanted to leverage these new capabilities across the admin to help accelerate what merchants were doing. We've continued to work on a bunch of different ideas there, not least of which is some of the image gen work that we've done. As for the way we work through this space: there is so much going on that every week you've got to weed through at least a half dozen groundbreaking papers all over the map. A big part of the process is connecting to that firehose of what's happening, so that you never lose sight of a paper that might completely change how we think about serving our merchants, and then weeding through those and logging them as you go. I've got a Twitter bookmarks folder that's just so deep, and I get back to it periodically to pull out things that feel like they have remaining value, surface them to the team and to the company, and start discussions around them. Within Magic Labs, our small team has been iterating on this three-week cycle to digest all of these new technologies and capabilities. Every three weeks we pick up a new one. We have no roadmap; we just have areas of curiosity, and every three weeks we look at what's out on the table in the world and we say: what's the most exciting or potentially impactful thing for commerce and our merchants next, based on what we see here? We pick what we want to work on within a day, and we're prototyping by day two or three, after having picked up either a new piece of hardware or opened some open source code on GitHub to get started. Within three weeks we've gotten to the end of that process and we've got a deliverable that either proves or disproves that something we hoped could be possible is actually possible. Either it's not possible, and here are the reasons why, and here's what we're looking for in the next iteration of this technology; or, quite often, there actually is a path, and here's what it looks like and how we might shape it. From there, tons of internal teams are eager and interested and hungry to rethink their products or leverage these technologies for their particular challenge with merchants. We're really lucky to be working in an organization that gives us really fertile ground to bring these technologies, and what we're learning about them, to a really wide set of problems that all seem very tractable based on the trajectory of what we're seeing in the tech right now. Well, Russ, I really appreciated your perspective on how your team is thinking about processing a lot of these advancements that we're seeing in technology and tools so rapidly, which is definitely hard to keep up with. I love how you're thinking about these short cycles of work and what could be impactful. I'm wondering if we could revisit this problem of grounding with these product images, because I think some people might really be interested in that. To start, could you rephrase the main problem of this grounding for people that are new to it, and walk us into how you identified this problem and thought about coming up with a solution? Because I could see a whole spectrum of approaches here, a hierarchy of ways to do this, everything from "oh, we just need to make our prompt better" to "we need to retrain a model from scratch": that's Shopify Stable Diffusion, or Shopify GPT... ShopiFusion, yeah, exactly. Obviously, at that latter end of the spectrum, there are very few people that get to that level of the hierarchy when they're solving these problems. But it is sometimes hard for people to parse out where along that spectrum, from playing with your prompts, to maybe chaining, to creating some pre-processing and post-processing, to fine-tuning, to training your own model, where is it reasonable to land? That's something that, in my experience, is really hard for people to parse out. So how did that work out for you all in this case, maybe starting with rephrasing that problem of grounding and then getting into how you started thinking about how you might solve it? From the highest level, it's like this: you're a merchant and you're just starting out, and you've got some products that you're really excited about. Either you've sourced them from a really great provider or you make them yourself. You don't have all the resources that somebody with an operating business at scale, lots of employees, and tons of capital can deploy to build a business by hiring contractors and employees and all those things. If you're this merchant, you've got photos you can take at home, or maybe some photos from a previous shoot you paid your friend to do, and they're pretty good, but they're not quite helping your brand sing. You're looking for something to help you get over that; you're looking for that tool that's going to take the media that you have and turn it into the media that you want. And your first thought is, oh my gosh, AI, of course AI is gonna unlock this. That was our first thought too: well, if we can just train a model to know exactly what your product looks like, great, you can just create it over and over again. We're getting closer to that, but we're not quite there. And short of that, we're looking for ways to help merchants still realize this creative elevation of the materials that they do have. It turns out that a lot of merchants have pretty good photography; it's almost there, either because they took it at home with their mobile camera and they just don't have a whole lighting studio set up, or they're just not
sure how best to art direct an image so that it feels tantalizing to look at and drives purchase behavior. So we saw a path where you can, of course, crop out and save the product pixels from the original image that you took, and keep those sacred. That eliminates the challenge of getting AI to recreate the product, which is a very specific thing, right? It's got details, it's got your logo on it, and AI has a hard time holding on to some of those details at times. But it can be fantastic at creating the background, the not-centerpiece of your image, to create a new, elevated environment around the product. We saw an opportunity to take that path and give merchants an early tool, as personalization matures and until we eventually get to that point, that can begin to help them unlock some of the value in their existing image media, their humble kitchen countertop photography. So we built this pipeline where we're able to hold and keep your product pixels intact and not change them; we keep those details intact, and yet we can magically create this world around them. When we started this journey, we thought, okay, great: we'll take ControlNet with Stable Diffusion and combine these things, we'll use the depth of your original image, and then we'll just ask it for a new background, and it'll come out the other end and it'll be great. But what happens when you segment out your product image and keep the model away from really understanding what's in there, so that it doesn't change anything? It begins to lose an understanding of how to fill in the details around that object to make it look like it's part of the environment it's been creating. It loses its shadows, it loses its tabletop surface reflections, because it's actually kind of forgotten, in a weird way, that there's even an object there at all. So, in whatever way you can say an AI model thinks, it doesn't have the triggers to generate those ground shadows and surface reflections in the scene it creates around the product. That was the first key obstacle that we saw when we started moving down this pipeline of trying to help merchants take their existing product photos and create new, rich realities around them. We had to solve that grounding problem: there were no shadows, there were no good ground reflections, the camera angles were off. You'd get a kind of tabletop scene background for a front-on product photo, and it just looks wrong; you see it immediately, of course. So we started tinkering and trying to figure out how to get that to work, and it actually turned out to be kind of a multivariate approach. We had to think about prompting: how do we structure a good prompt just so we get a good result, even without all the fancy stuff we want to do in between? It turns out one of the key things we learned is that you need to start with a declaration of what your foreground object is, what your product is in the shot. If you can get a really good description of that, then your prompt is already starting out in a really good, grounded spot. Obviously you add stylistic language like "commercial product photography" and "high quality", all the little tricks of early image models that will eventually pass away; we injected a few of those into the prompt as well. So you start with that product description, and then the next key line has to be some kind of grounding description of how the product has been placed in the environment. Without that description you don't get those shadows, you don't get those table reflections, even with all the additional support for that functionality we've built in. And then, finally, you can describe the scene that you want in that background. It's a
great description, it really is. I'm really enjoying this, but I wanted to ask a couple of questions to make sure. It sounds like, the way you were starting, when you were talking about the product pixels and pulling those out, in my mind I was almost thinking of it in an old-fashioned way, like a Photoshop mask or something, where you're masking out the product, and then you're trying to bring in all the goodness of the contextual understanding of the models. The thing that I think surprised me in there was, if you talk about that initial masking, I wasn't surprised when you talked about finding the description for the background and everything, but I was a little bit about the thing being masked, if you will, the product itself. How do you think about that as you're going through the process and you're saying, "I need that description"? Could you describe that step a little bit? Because I'm trying to really grok that one, but it sounds really interesting to me. I think it's probably helpful to work backward from what we really deliver to Stable Diffusion as a model, to generate the output that we get from it in the end, and then work back from, okay, well then how do we assemble all of that input to get it into the model, right? Sure. So at the end of our pipeline, once we've processed all the prompts you've put in and the image you uploaded of your original product, all that stuff, really what we're delivering to Stable Diffusion in the end is a masked depth map of your original product, and a little bit of a bloom at the very bottom, where it might make connection with the scene, with the original scene around it. Could that be like a shadow, when you say that? Yeah, sort of like a little bit of that shadow, maybe; if it's a table reflection, you'll get a little bit of that table reflection. And what we found was that that little hack is just enough context. That little gradient of additional depth info as you leave the product pixels is just the right amount of grounding information that Stable Diffusion and ControlNet need to be like, oh, there's a shadow there; oh, and I see the angle of the table is this way; and oh, I see the camera angle is kind of like this. And all of that together collectively gives Stable Diffusion the context that it needs to then paint a grounded scene around that product in high fidelity. And of course we're generating a new product in that resulting image, but we do a composite in the end, and those pixels, because we're using depth ControlNet, adhere very closely to the original product pixels. So when we do the paste-over in the end, you never see the sort of hallucinated product pixels in the background. It's so interesting. I think one question that would be really interesting for our listeners, and for me from a selfish perspective, is how does one get there? Because I think a lot of people play with these models they can pull down, and figure out ControlNet reasonably enough, but then this connection, this little hack, as you described it... after you have something like that, it seems simple enough to describe why it would work, and it's a cool hack, but to get to it, how do you come up with that, I think, is what is in a lot of people's minds. And some of it, for me, I know a lot of times I bang my head against the wall one day, and I sleep on it, and in the shower in the morning that idea just comes. But I'm curious, from your perspective, from your team's perspective, how did that happen, and what sort of environment exists that would promote this sort of hacking? Because you're not retraining a whole model here; you're kind of using what is off the shelf, but using it in an extremely powerful way
but in a very creative way, creative not in the sense of training a new model, but creative in the sense of how you're using the existing model, which I think is really intriguing. This really was just kind of the perfect workshop product, where we had a bunch of brilliant people who understood these models enough, had played with them enough, knew and had seen enough of what they were capable of from different demos and other things to have a real opinion about what was possible and kind of what wasn't. And when we started the journey and started building the machine, right, and trying to figure out what is this, how should this work, how do all the pieces fit together, we knew that there were hundreds of these little amazing AI machines that could be plugged into and turned into bigger machines that do even more powerful things. And it's just about figuring out: what's the sequence, what are the pieces, what are the core problems? And it's just, how do you get to that iteration speed where you can try something? It's why I fell in love with web dev in the early, early days of web standards and Web 2.0: you could code something and see it immediately, okay, that didn't work, go back, code it again, see it immediately, okay, no, that didn't, go back, code it, immediately see it again, and get into that state. And that's really what the open source tool ComfyUI unlocked for us. GPUs still take a few seconds to deliver images, so it wasn't perfect rapid-fire iteration, but way faster than trying to do it all remotely. ComfyUI dramatically accelerated our ability to build this more complex machine, because it was so easy to configure and reconfigure, and try a thing and wire it a different way, and then that didn't work, you wire it a different way, you see the results, and you're like, oh wait, that's new, but different, but not what I want, but isn't that interesting? And then, oh, maybe I have a hunch about why that happened, and you pull that back into something else, and now you've unlocked something, not because you had some amazing insight, but just because you've tried enough stuff and you've seen enough weirdness. And then there was something there that was weird, that shouldn't have happened, and something surprised me, and I want to understand it. And that's how it just unfolds, and eventually you start to connect all these little discoveries you make as you ask, why did that happen, why did that happen, why did that happen? And sooner or later you end up with something that works. It's kind of magic. It is a kind of magic; as a Queen fan, that fit right in there as well. I'm still almost stuck on that creative epiphany that you had a moment ago; I found that really interesting, that you came upon that. As you're looking at this set of technologies evolving over the years ahead, as the organization is maturing with these technologies, and you have this amazing creative capability in the humans in your organization who can use these tools, where's all this going? As we wind up this conversation, how are you thinking about the future? What are you excited about? What do you not have yet that you wish you had at your fingertips right now? When people think about shopping, they don't always jump to think about technology, but if you think about how technology has impacted commerce over the years, commerce and the culture around it and how it works is always inherently tied to the wave of technology that we're experiencing, whether you're talking about IBM cash registers and adding machines, or whether you're talking about mass media and the creation of mega-brands, or you're talking about the evolution to the
internet and the democratization of commerce and connecting with niche audiences. The culture around commerce always evolves around technology. It's why I'm so excited to be working at the intersection of these new technologies at a company like Shopify, because they are so directly related. And as we think about the future of technology and where this is all going, I get really excited about AI being an incredible driver of personalization in commerce. When I go to some of my favorite stores, the person behind the desk knows me, they recognize me, they remember what I bought last time, we have a conversation about it. I can ask questions about the new line or the new products, or ask them to help me find stuff that I might really enjoy based on what they know I've bought in the past. And that's all an experience that I can get at an in-person store today, because the person there knows me. I'm really excited about a future where our online commerce experiences become a little bit more like that, where we visit an online store and it knows who we are, and it helps us find the stuff that we'll be most interested in, and even really exciting things like being able to visualize myself in different clothes I might want to buy, live in a browser. That kind of stuff is in the future out ahead of us. And so I'm really excited about a future where AI helps bring these kinds of personalized, one-to-one, customized shopping experiences to merchants, and helps them bring that to their shoppers. That's awesome. Well, I'm definitely looking forward to seeing the things that your team comes up with moving toward the future, and I really appreciate you taking time out of what must be an incredibly busy week leading up to Black Friday/Cyber Monday at Shopify. But yeah, thank you so much for the work you and your team are doing, Russ. I hope to have you on a future show to see some of those things you just mentioned become reality. Thanks so much for joining us. Awesome, thanks Chris, thanks Daniel, really appreciate it. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music] |
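The two tricks described in the Shopify conversation above, the three-part prompt (product declaration, then a grounding line, then the background scene, plus stylistic tags) and the depth-map "bloom" left under the product, can be sketched in a few lines of Python. This is a minimal illustration under assumptions: the function names, the comma-joined prompt format, and the linear falloff are invented here for clarity, and the real pipeline feeds the resulting masked depth map into Stable Diffusion via a depth ControlNet, which this sketch does not attempt.

```python
# Illustrative sketch only: names and the linear-falloff "bloom" are
# assumptions for this example, not Shopify's actual pipeline code.

def build_prompt(product_desc, grounding_desc, scene_desc,
                 style_tags=("commercial product photography", "high quality")):
    """Assemble the three-part prompt structure described in the episode:
    1) declare the foreground product, 2) ground it in the environment,
    3) describe the background scene, then append stylistic tags."""
    return ", ".join([product_desc, grounding_desc, scene_desc, *style_tags])


def add_ground_bloom(depth, mask, bloom_rows=3):
    """Masked depth map plus a short fading gradient below the product.

    depth: 2D list of floats (a monocular depth estimate of the image)
    mask:  2D list of 0/1 ints (1 = product pixel to keep "sacred")
    The fade below the lowest product pixel in each column is the extra
    grounding context that cues shadows and surface reflections.
    """
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]          # background zeroed out
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                out[y][x] = depth[y][x]          # product depth kept intact
    for x in range(w):
        bottom = max((y for y in range(h) if mask[y][x]), default=None)
        if bottom is None:
            continue                             # no product in this column
        base = depth[bottom][x]
        for i in range(1, bloom_rows + 1):
            y = bottom + i
            if y >= h:
                break
            # linear falloff, e.g. 0.75, 0.50, 0.25 of the base depth
            out[y][x] = base * (1 - i / (bloom_rows + 1))
    return out
```

A merchant's jar of honey, say, would get a prompt like `build_prompt("a glass jar of honey", "sitting on a rustic wooden table", "in a sunlit farmhouse kitchen")`, while `add_ground_bloom` would leave a three-row fade under the jar in the conditioning image. The final step, as Russ notes, pastes the original product pixels back over the generated result, so none of the hallucinated product survives.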
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI trailblazers putting people first | According to Solana Larsen: “Too often, it feels like we have lost control of the internet to the interests of Big Tech, Big Data — and now Big AI.” In the latest season of Mozilla’s IRL podcast (edited by Solana), a number of stories are featured to highlight the trailblazers who are reclaiming power over AI to put people first. We discuss some of those stories along with the issues that they surface.
Leave us a comment (https://changelog.com/practicalai/245/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Traceroute Podcast (https://deploy.equinix.com/traceroute/) – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Solana Larsen – Twitter (https://twitter.com/solanasaurus) , LinkedIn (https://www.linkedin.com/in/solana-larsen-016129)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Blog post announcing this season of IRL about putting people first in AI (https://foundation.mozilla.org/en/blog/season-7-of-mozillas-podcast-irl-interrogates-the-risks-and-rewards-of-ai/)
• The IRL podcast (https://irlpodcast.org/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-245.md) | 11 | 0 | 0 | [Music] Welcome to Practical AI. If you work with artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners for helping us bring you Practical AI each and every week: fastly.com, fly.io, and typesense.org. [Music] What's up, friends? AI continues to be integrated into every facet of our lives, and that remains true because you can now index your database with AI, you can write more code and become that 10x-er you always wanted to be, and you can even draft a letter for a lease on an apartment or a new property. AI is everywhere, and it might be time for us to start questioning: is AI our friend, or our worst enemy? That's the focus of the three-part season opener of the award-winning podcast called Traceroute. You can listen and follow the new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. This show is all about the humanity and the hardware that shape our digital world. In every episode of Traceroute, a team of technologists seeks to untangle the complex question: who shapes the internet? Seasons 1 and 2 gave us a crucial understanding of the inner workings of technology while revealing the human element behind tech, and season 3 tackles not just AI questions but also: how can we use technology to preserve the Earth? Who influences the technology that gets made? And what happened to the flying cars we were promised? I think it's safe to say that the future of AI is both exciting and terrifying, so it's interesting to hear the perspectives of experts in the field. Listen and follow this new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm the founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing great, Daniel. It's a nice day here, and I'm having lots of interesting conversations about all the things in AI. Our episode last week really hit a nerve, I think. I think so, yeah. And I'm actually in London this week, giving a talk tomorrow about, quote, "trustworthy AI," which I'm hoping our guest can enlighten me on a few other aspects of prior to my talk tomorrow, which will be convenient. So if I have to change my slides tonight, that'll be useful. You're cheating! Oh boy, listen to this. We're privileged to have back with us Solana Larsen, who is the editor of Mozilla's IRL podcast, where online life is real life. Solana, it's so great to have you back. I'm so glad to be here. Hi. I was just looking up the date and the episode number: we had you on episode 187, back in July of 2022, talking about the season of the podcast that you had released all about AI. We talked about concerning trends, how the technology was transformational, positive signs of change, and of course it's just so interesting that it's only been, up until now, call it a year. Yeah, what a year. So much has changed, it almost seems like AI has just been invented. I'm wondering if you could give a little bit of context for why and how Mozilla is putting together the seasons of the IRL podcast, and why the focus is on AI, and then we can go from there. Yeah, sure. What a year, what a week, right? What a month. I feel that way all the time; it's just going so quickly. Yeah, I mean, I guess people who might be familiar with Mozilla might know Firefox and think, what does that have to do with AI, and why are you talking about that? The Mozilla Foundation, sort of the nonprofit arm of Mozilla, for the past couple of years has been really focused on
what we call trustworthy AI, the term that you just used. And I guess you have a bouquet of choices when you're trying to talk about ethical AI or fair AI or equitable AI; trustworthy was the one that we went with as an organization. It checks a lot of the boxes, especially at the time that we chose it, and it resonates in policy circles in particular, in some of the contexts that we like to be in. And so there's this element of, what is the future of the internet? And if we're an organization that has a manifesto that cares all about making the internet healthier, creating tools that enable people to create and be a part of the internet, and if AI is the future of that, then what's our role there, and how can we help make sure that all the mistakes that happened for the internet in all these 20, 25, 30 years aren't just all being repeated again in AI? And so we're thinking about privacy, of course, but there's also consolidation of power, the way that big tech kind of takes control of everything and squeezes out opportunities for smaller players, and a whole bunch of other things, right? So we have a whole foundation of people, we have fellowships, we have grants that we make for different people who are innovating and trying to think about AI in a way that big AI tech companies aren't thinking about it, just so that we can have an alternative kind of conversation happening around these things that are changing our society and our industry and everything so quickly. And part of this is, we do have a big mouthpiece, I guess, as Mozilla, and one of those platforms that we have is this podcast, IRL, that the Firefox team started years ago and that we took over on the Mozilla Insights team a couple of years ago. We're now doing season seven; it's the second season that we've dedicated entirely to AI. And part of that exercise is thinking about, well, who do we want to lend the microphone to? Who do we give the microphone to? What kind of voices would we like to have in this dialogue about AI that we don't maybe get to hear as often when we're tuning into the US mainstream tech press? Yeah, that's really great. I love that in the announcement of the new season of the podcast you talk about, like you just said, how it too often feels like we've kind of lost control of the internet to these larger players, and you want to speak to those reclaiming power over AI to put people first. I'm wondering, as you were preparing for this season of the IRL podcast: it has been a transformational year in AI, and we talked about some trends, both positive and concerning, last year in terms of the IRL report and season, but I'm wondering if you could talk a little bit about how all the unique things that have happened recently, especially around the public discourse around AI and the public adoption of this technology, weaved into how you wanted to curate this season, and particularly how the topics that you covered kind of bubbled up, which we'll get into here in a bit. Yeah, I think, front of mind, a lot of people are curious now in a way that they weren't before. I mean, you must experience this on your podcast as well, that people now have this hunger to know about AI, where a couple of years ago they were like, oh, what's that, or how does that concern me? Now everybody's like, this really concerns me; what should happen? And I think there are a bunch of areas where nobody is entirely sure what to do. The first topic that we took on in episode one is around open source in large language models, this whole question where, on the one side, you've got folks who are saying it's got to be open or we can't audit the models, we don't know what's happening with the data, and then on the other you've got those saying that it'll be the doom of all of us and
everything's got to be shut down and closed for security purposes. And so you have these, I think, like a lot of discussions these days, really polarized positions sometimes, and it's trying to figure out how you make a nuanced argument that explains not just different sides of the story, but explains how there's a spectrum, and there are a lot of AI topics that get sandwiched together under this umbrella called AI. It's just so many different contexts, so many different business purposes, that it almost makes less and less sense to talk about it all as one thing. We're right on the cusp, where we're still talking about it as one thing, and we're still trying to grapple with how we should regulate, how we should build, how we should design, what we should think about it personally. And so it's a really exciting moment to try and figure out those things. And the challenge as a podcast creator is that each of our episodes is like 20 minutes long, so we pack in three, four different voices, there's some really deep analysis, we work with our host Bridget Todd, who's great, a whole bunch of people work together on this thing, and it's this very highly polished, produced, lovely kind of white paper in audio, almost, on a big issue, a big topic. So I'm really proud of it. And last season, you know, we were a little bit ahead of the curve in terms of talking about some of these AI issues; we actually won the Webby for best tech podcast. Congratulations, that's awesome. Congratulations. Yeah, wow. I was surprised, because we were the only tech podcast in the nominated group there that was hosted by a Black woman, that was featuring voices from Africa, from India, that was really digging into the corners, I think, of thoughts around AI that aren't just concerned with how much money a technology is making. That isn't necessarily the criteria of success for why you would elevate somebody's voice. There's so much that you just said that I want to go into even more. You talk about the larger public's discourse over these topics, and "topics" with an "s" is crucial; it's very nuanced. And yet you alluded to that sense of responsibility that you have in your own podcast about what voices you want to raise and where you want to get them, and I know that Daniel and I feel that weight very intensely. We're at a moment now where the whole world is really hopping onto this topic, so you're bringing the people we had before, but there are so many new people that want to understand, because they really do get that it's going to affect their lives. And I guess, selfishly, as we have tried to do that, trying to avoid... people want to self-promote, and there are always these efforts at that, but trying to bring the right conversation to bear. How do you think about that? How do you and your team think about the fact that you're in such a responsible position, in terms of being able to either have voice or lend that voice to others, at such a crucial moment in time? We're at a unique point in history. How do you parse that? How do you help the public get that discourse right and talk about the right things? And how do you recognize, for instance on this one, that this is too big of a topic to be thought of as one unit anymore, that we now need to separate the different concerns within it? How do you approach that? Because it really affects how the public thinks about it. Yeah, I mean, I think we have an advantage in the sense that the organization as a whole is also thinking about this on a daily basis. So when we think about who we give fellowships to, or who we give grants to, or who we partner with when we do different things, that's part of the thought process. And it's extremely difficult, because it's almost
like every single AI startup or project is "something for good," right? Everybody says that they're doing it more ethically, making the world a better place, everybody, right? But you need to figure out, I think, what are the values that guide you when you want to make a decision about what that means for you. And even in an organization like Mozilla, there's a lot of diversity in terms of what people are comfortable working with and what their opinions are on this. We chose the theme "people over profit" for this season, and the idea wasn't to only look at nonprofits; it was to also be able to look at whether there are ways that you can profit, that you can make money, and still have a sense of putting people first, which is what we try to do with Firefox and with other Mozilla products, right? Trying to figure out how you make money and not sell people out, not sell their data. And so there's that sort of critical lens on it. And what happens very easily is that you end up veering a lot toward the people who are criticizing, right? They're being critical, they're pointing out the flaws and the errors, and you can also get a little bit too much of that, I feel. And I think that's where we've really put a lot of effort in: we listen to those voices very intently, but who's being constructive? Who's trying to rethink this from a different angle that we hadn't thought about before? One area, in the second episode, is that we look at content moderation and we look at data work, right, the ghost workers, and how exploitation of labor and data works, and how the content moderators in Kenya were being paid really terribly and not treated well, and how they're fighting back, all this story, right? But then, as part of a package, we also talked to an organization called Karya in India that's trying to rethink, okay, well, if we're doing data work, how could we remunerate people differently? And what they did is, they have these voice data sets that they're making in different languages, in rural India in particular, and they're working with a lot of women, and they do it as part of this educational project, and have people be able to do work from home, and there's a whole philosophy around it. But every time they resell a voice data set to a new client who's building some kind of voice recognition tech, they send more money to the person who donated their voice to help train the system. And so what they're asking is, well, if we're paying pennies for this work in the industry, and companies are making hundreds and thousands of dollars on people's labor, why don't we just give them a bigger cut? We can still have a really good business, but we could be thinking differently about what a contribution is, and have kind of royalties that build over time. As long as this data set has value, why don't we think differently about how we share that value across more people? And that's a very simple thing, where you hear about it and you're like, oh yeah, right, we could just be thinking differently about this. And there are a lot of examples like that in AI, where, you know, Karya is a business; maybe they're not a unicorn, but that's not their goal, right? That's not their ethos. And so when we're challenged, I think, by people who are innovating in entrepreneurial ways as well, it really helps us see how we've been goaded into thinking about AI in ways that are really defined by an industry that has a specific set of norms and values, also around how data is used and how humans are treated, and we can rethink a lot of those things. It could be different. And when we're talking now, in the regulation space, about how we make things safer and how we stop harms from happening, all that stuff is really important, but we also need to have people who are
working, on figuring out you know where do we, want to end up what is the vision for, what we want to accomplish with this, Tech cuz it's not going away and yeah so, we need more examples I think of what is, good what could good be that's our kind, of guiding star for how we pick who we, choose to um feature in the episodes and, and how we kind of build up a story that, has a bit of tension but also like a, silver lining I'm already super excited, to dig into these various topics and I, think people at this point maybe they're, wanting to just hop over and start, binging the IRL podcast and I totally, give people permission to because it's, just so good so you can pause this, podcast you're listening to and hop over, and jump right into IRL but I do want to, take the time to talk through some of, these subjects on this podcast and um, because these are topics that have come, up in various ways and bringing these, other voices into that I think would be, really good and the first episode which, you title with with AI wide open, I think is super interesting and, relevant even this week with you know, open AI releasing a whole new set of, features which are just incredible I, mean like I think anyone would have to, admit like these these features are, incredible and like the the vision stuff, and it seems like there are these sort, of, proprietary model providers or API, providers that are really leading the, way and some of this functionality, but there is this really amazing, undercurrent of open models and it's, multifaceted as you've alluded to um, some people approaching it in various, ways like stability releasing some, models that are amazing performing, models but maybe licensed in a, restricted way for research use and and, other purposes others that are releasing, under various licensing so there's the, licensing side of this there's also as, you alluded to hey if we start opening, up these models can we really say to, open AI hey just open everything up 
and everything will be okay? Are there reasons why we shouldn't be saying things like that? So this is a very multifaceted topic, and I'm curious if you could lay out how you thought about approaching this topic in particular, who spoke into this, and what were some of the highlights that were mentioned. What I was most concerned about in the beginning was not knowing where to end up, if that makes sense, because the jury is still out on a whole bunch of things related to this. I was worried about AI doomerism, but then I was also worried about not making it sound scary enough. What's the right balance? We called in David Evan Harris, who used to work on the AI responsibility team at Meta, who's probably a little more on the worry side than I am, at least, but also has a history in open source. He was able to speak in practical terms about what is and isn't being done at an organization like Meta, the reasons he's concerned, and how it connects to, for instance, social media, which is something that everyday people are really concerned about: how is this going to affect elections, and that kind of thing. So that's something that we looked at. Then we talked to Abeba Birhane, who is a fascinating researcher, very vocal on Twitter as well, who is looking for openness in data sets. She's been doing a lot of auditing of data sets, among a group of researchers who have really tried to engage with companies and encourage them to be more responsible about what they do and do not include in the data set in regards to hate speech and terrible representations of women, Black people, and so forth. That's a really important tenet in the way that Mozilla thinks about why it's necessary for these models to be open, because the companies time and again have proved that we can't
really entirely trust them, and so open source is like our safety net, in a way. So on the spectrum, we're leaning more towards the open side than the closed side, even taking into account all the things that we know. We talked to Sasha Luccioni from Hugging Face, who does research around climate change and open-source models. There's this simple idea that if it's open, if we're working together in large communities, we might actually have these AI models have a smaller carbon footprint, because they gobble up lots of energy. So there are concerns that are outside of this framework of dangerous versus not dangerous, where we need to think about what it would mean to have researchers who speak different languages working on these things, and what it would mean if you could have a more diverse set of people than the people who work at OpenAI working on some of these large language model capabilities. And then finally we end up with an interview with Andriy Mulyar from a startup called Nomic, and they've made the system called GPT4All, which you may have played with at some point. It's like an alternative to ChatGPT that you download onto your computer, and you can ask it questions and chat with it offline, and it doesn't take your data if you ask it not to; it doesn't take your private information and use it to retrain models. So it's, again, a different approach to thinking about: what if we did want this stuff to be offline? What if we did want it to be totally compressed, so you could have it on an everyday computer in a low-bandwidth society? What could we do then? How would we be designing differently? I find that inspiring, but it's not clear-cut for any of these things. Nobody has the answer, but the important thing is that we need to be really contextual about how
we think about these things. A lot of the concerns that you just enumerated, Daniel and I certainly share those. It's been an interesting conversation on the safety side, but then in the last week we had the Bletchley announcement, and in the US we had the executive order, which was quite detailed. In that context, we agree with you that open source is a good basis on which to try to work on solutions that are in the daylight, so to speak, that everybody can be part of and see and verify. How do you see regulation, which it looks like we're right on the cusp of after many years of talking about it, affecting some of the ideas you were just talking about? Do you see us making some turns, or do you think it's not going to have much impact? Because I'm still trying to process it myself. I think it has yet to be seen, because if we're thinking about risk, for instance, like in the EU AI Act, if we're thinking about who is responsible for the risks, upstream and downstream, and how you would regulate how that affects communities, individuals, or small organizations or nonprofits that are working on these things, I think we still don't entirely know. Part of what we tried to communicate, and that I see Mozilla communicating in other contexts as well, is that it's not a matter of open always being good in its own right; it's open for a purpose that is good. It's a start, but you can be open in different ways. You can be open to consolidate the market in your favor and stamp out competition; that's not the kind of openness that we're in favor of. We're in favor of openness that leads to transparency and accountability, openness that enables collaboration between people who have good intentions, openness that
allows people to build and create and do things for their own countries and languages and societies. So if we start thinking about how we protect those functions of open, then it's not just open for openness' sake; it's open with a purpose. I think it's important for the regulation to start thinking about those things. Consolidation of power is a big thing; enabling free and open competition on some of these issues is really important, and having a global lens on the effects of these technologies is really important. Overall, it's good to see that things are happening, and certainly we agree with a lot of things that are being put forward; I think we just want more in certain areas. Yeah, on the data set front, you mentioned the necessity of having some sort of transparency about what went into a data set in order to proceed with efforts that are reversing bias, or trying to prevent hate speech or toxicity coming out of these models, which I think is really good. I also want to really highlight that global piece of this. One of our engineers at Prediction Guard was working with Llama 2 the other day, doing some experiments in a variety of languages. He's a Hindi speaker, and I think he speaks a few other languages as well. We were trying some things in Hindi, and he's like, hey, this doesn't seem quite right. Then he went to the Llama paper and looked at the distribution of language data in the data set, and he was able to very quickly understand: oh, this is why it's this way; maybe if we do this or that, we could make some improvements in the tasks that we're trying to do. So I guess what I'm saying is, it doesn't always have to be that every single thing is open in all the same ways, but even
being transparent about the makeup of a data set, where it came from, and its provenance can actually be quite helpful and powerful. Maybe that ties into this second topic as well: some of this data comes from these data workers, or crowd workers, or ghost workers. Highlighting that, as you label it, the human in the machine: why was that an important thing for you to include as part of this discussion? Because it's invisible and overlooked at all parts of the food chain, from the consumers at their computers to the people actually developing AI. The systems that you interact with in order to hire thousands of task workers to help you do things with your data are designed to shield you from having any kind of human connection, or any sense that you're dealing with a human; the whole thing is meant to feel like a machine. Unfortunately, I think it also shows the callousness of a lot of the industry, because thousands of people are saying that they're traumatized, and are suffering, and can't eat, and yet these practices continue as just a cost of doing business. You've got millions of people in countries like Kenya and the Philippines and India, places that have a lot of tech graduates, doing this kind of work, which is thoughtful and requires reading long policy documents, and people take a high degree of responsibility for the outputs on some of these projects. And yet they're treated really terribly. So it just seems like, does it have to be this way? No, it wouldn't have to be this way, and it's tied in with this idea that we can do better; it just seems like an area where we can do better. The fourth episode that we look at (actually, I hope you weren't expecting to go through them chronologically) ... No, no, you're all good; they're
all interconnected. Yeah, well, because we return to this question of open versus not open. The title of it is Lend Me Your Voice, and it's actually about voice data sets and what it means to belong to a small language community and build a data set to be able to do voice recognition tools in your own language. There's a lot of open-source AI that's really useful in that context, for them to be able to build stuff in ways that are affordable and sustainable, also if you're trying to build a nonprofit. We talked to this one person, Keoni Mahelona; he's based in New Zealand. He's actually Hawaiian, but he's working with the indigenous community in New Zealand on the Māori language. They have this network of radio stations, and they started building their own voice recognition systems to be able to transcribe historic broadcasts that they've had for many years from the community. They have this data set, and they're trying as hard as they can to protect it from Big Tech, because Big Tech wants to gobble it up, just like they gobble up everybody's videos on YouTube, or the transcripts of them, and build their own multilingual large model data sets that are supposed to be able to do all languages. Suddenly a small organization like the one that's working with the Māori language, or the one that's doing something with the Swahili language, and so on all around the world, they're suddenly in competition with these big tech companies who claim that they can do the languages as well as they can, and who often don't have the same attention to detail that they do. It's not that they're bad, but they're just different, and they're not created with the same set of values of uplifting or supporting a community. So if you're a startup developer in, let's say, South Africa or Kenya, and you're trying to build something with your own local
language model, a lot of the VC funders, the people who want to give you money, are going to be like, well, why don't you just use OpenAI's thing? Why are you using something else that costs more, when you can have all these different languages at once? So suddenly you're in competition with the biggest companies in the world, and you're just trying to create a sustainable startup ecosystem in your country. A lot of these communities are grappling with similar questions around openness, not because of nuclear war fears or anything like that, but more like: this is our data; how do we protect it, and how do we make sure that it's used for the intended purpose? But at the same time, we want it to be open; we just want to be able to choose who can work with it. Keoni is a really amazing speaker on this topic. They made their own license, an indigenous data sovereignty license, and they try to treat data like land; that's the metaphor they use for it. They treat it as a resource, like natural resources: they've taken our land, now they're coming for our data, let's protect it. It's sort of a spin on a Creative Commons license, where you can use something under certain circumstances; in their case, you cannot use it at all unless you have permission, but if you are an indigenous language community, they're likely to give you permission. So they set themselves up as the stewards of this data, with a sense of responsibility for what it's for. So this question of whether AI should be open or closed is not so clear-cut. It's not like all the nonprofits or civil society voices are saying this has to be open and Big Tech is saying this should be closed; it's this whole big confusing mishmash, really, with arguments about what it even means for something to be open these days when it comes to
AI, because it's not just opening the code; it's so much more complex than that. And just like we talk about AI for good, there's also all this open washing, people call it, where they say that it's open but it's not really open, or only when it suits them. Yeah, well, so, Solana, I'm really intrigued by this other topic on mass experimentation with AI systems, which, wait, is it right to say I'm the subject of this? Or I guess I'm the participant subject of this mass experimentation with AI systems. Could you talk a little bit about what you mean by this term, and why it came up as part of this focus on putting people first in the development of these AI technologies? I think we did well with the titles of the episodes this year; this one is called Crash Test Dummies. The central question of it is: are we the crash test dummies of AI? And we kind of are, because we start off with the story of the automated vehicles in San Francisco, and we talked to somebody who works in traffic safety there to get a nuanced perspective on it, because what I realized in doing research for the show is that people really love these AVs and are really excited about them and think that they're great. So again, I didn't want to just be sitting in Berlin, in Germany, thinking this sounds dangerous, why would anybody want to get into a self-driving car? They're exciting, and there's a lot of hope that they might make things safer in some way. So again, it's a topic to approach with nuance: are we the crash test dummies? Between the time that we did the first interview and the episode going live, Cruise got their license to go completely driverless pulled. So it's another fast-moving topic, where yes, our cities, our streets, are actually testing labs for technologies
that are experimenting not just on a focused group of people or a small group; it's millions of people, and in kind of life-and-death situations. You just ask yourself, how did we get to this point, where we have companies that we know we can't entirely trust, because they don't put people over profit, and they show that over and over again, and yet we trust them when they say they're there to make things safer? This element comes up in a lot of government processes, for instance, where you have companies that are selling predictive systems or algorithmic decision-making tools. You have it in the banking industry, you have it in hiring, and over and over again you have evidence that it's not really working as intended, or it's biased, or it's stereotyped, and at this point, who can really be surprised? So why aren't things being tested better, or better yet, why aren't they just being designed differently in the first place? And the difficult thing, and we also fall into this trap, where we're talking about AI as a topic again, is that maybe the best way to make streets safer doesn't have anything to do with AVs. Maybe it has to do with sidewalks, or street lighting; there's a whole bunch of different things that you could be doing in terms of public planning and so forth that have nothing to do with AI. And it's the same with fraud and different things: maybe the answer isn't AI; maybe this is actually creating new problems instead of fixing old ones. I guess to extend that a little bit: we're talking about crash test dummies in this context, kind of literal crash test dummies, but it seems like that's almost happening across, I mean, it's certainly happening across social media, it's happening across many, many, you
know, touch points within civilization. And now we're contending with large groups of people doubting elections, and others saying, no, no, that one went just fine, and it's created all of these social issues of being a participant in our society. I'm curious where you arrived on that: are we just destined to go down that path, where we're all crash test dummies in all facets of life with AI, or is there a better path potentially that you identified? Yeah, well, we actually end up thinking about regulation, so to carry the metaphor forward: the seat belts are going to be regulation for us. That's part of the answer: thinking about how to have the right amount of transparency and openness, how to have accountability, how to make sure that people can argue with systems that treat them badly or threaten their lives or their livelihoods. The reason that I bring up regulation: we look at, I think it was a couple of years ago now, I don't remember, the blueprint on AI that the White House put out. It's actually a really remarkable document. It has a whole bunch of good advice about how you could regulate things and what principles should be in place, and it's the kind of thing that's showing up in some of the pronouncements that we looked at this week and that are bubbling up. You can build differently; you can design differently. We also talked to a woman called Navrina Singh; she used to be one of the Mozilla Foundation's board members, but she founded a company called Credo AI, and I'm sure there are many other companies that do this kind of thing, but they try to operationalize what it means to make responsible tech on a large scale. It's kind of interesting, because operation-wise, in
this case, it means they make dashboards, for instance. They make tech dashboards where they ask the companies: okay, what are your values, and how would you measure that? What are your benchmarks for measuring whether you're successful or not, or whether you are creating harm in society or not? Who is going to hold you accountable? What kind of people are you pulling into the process? So they're trying to create, they call it AI governance, but it's basically a process to help companies think through whether they're actually doing what they say they're doing, because a lot of them aren't used to thinking about how tech affects society. You have to look at it not just at the beginning when you deploy, but throughout the lifetime of your operations, and that's a different mindset. So there are different ways to build with safety in mind and with risk in mind, trying to think about risk as part of your operations and part of your business. And then the final part of that is: okay, so they make these regulations; how is a giant Fortune 50 company supposed to comply with regulations when they have thousands of different AI systems that they're managing across different client portfolios and so forth? You need partners in that, to help you technically figure out how to do it, but also in terms of process and procedure. So it's not that these things can't be solved; it's that there's very little attention paid to them at the moment. For sure, it can be better. As we're getting close to an ending point here, I love the direction this is already headed, in the sense that we aren't destined to have everything just be as it is. I know that there's other content we didn't quite get to cover that's part of your newest season; I encourage people to take a listen, go ahead and just start the first episode and binge it after finishing this one,
but in looking towards next year, when I hope we have you back on the show, or Bridget, or both of you, to talk about what's going on with AI and internet health next year: what are some of the encouraging things that bubbled up to the surface as you were looking at what's going on in the AI industry, in terms of people being transparent, making a positive difference, and pushing us towards thinking in new ways? What encouraged you, and what are you encouraged by, looking towards the next year, in terms of what could positively happen in the industry? I'm encouraged by the fact that we're all becoming more literate on these topics, because it's a steep learning curve to pick up some of these topics around AI and all their complexities. But the general public, people who are building AI, regulators, journalists, everybody's really stepped up their understanding of a lot of the issues, and there's room for complexity still in these conversations, so I think that's really great. The other thing: my background is in media, and it's also activism. I care a lot about social movements and how they work, how you make change in society, and so one thing that we look at a lot at Mozilla is these intersections. Now that AI is embedded in all these systems that affect our lives, that affect discrimination and education and, you know, everything, how are social movements picking up this topic in different ways? How is AI intersecting with migrant rights, or women's rights, and so forth? How are these different social movements going to make this part of their mantle, in a way? And I mean that worldwide: people who are fighting for human rights, people who are fighting for free speech and privacy, against surveillance, against facial recognition. Everybody's
becoming more literate, and so I think there are many, many more people who are going to be paying attention to this topic, and I think it will actually have a positive effect. For sure, it can't just be the tech industry figuring out how to make themselves better on their own terms, in ways that also make them gazillionaires; that doesn't work. It's too big; the AI sandwich is too big. Something I find very interesting is: where are those intersections, and how do we fan out, in a way, and start working together to really bring in folks who are directly affected by these systems, in their design, and also as builders, because they're going to be building stuff differently. Yeah, I think that's a really good place for our listeners to end, and something for us all to think about, because there are a lot of builders who listen to this show, and a lot of people across various industries. I think you did a great job of expressing how we can all be thinking maybe a little more nuanced, but also a little more intentional, about some of these topics. Again, I encourage people to go check out the IRL podcast, the latest season, and go ahead and listen to the previous season as well, because it's awesome; we'll link that in our show notes. We certainly look forward to having you back on the show very soon, Solana. And I should say, anybody listening: come back to Practical AI. Like, go listen to IRL, but come back here; we like this show. Thank you. Yeah, well, thank you; I appreciate that very much. That's very meaningful coming from you. I've learned a lot from your show; I really appreciate it a lot. Good, good. Well, we're very happy to have your voice on our show, and thank you so much for taking time to join us. Thank you both. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the
show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freakin' residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music] |
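The royalty mechanism described at the top of this conversation, where voice contributors keep earning each time their data set is resold, can be sketched in a few lines of Python. This is a minimal, purely hypothetical illustration: the contributor names, the 20% royalty rate, and the resale prices are all invented for the example, not figures from Karya or any real data marketplace.

```python
def payout_per_resale(sale_price, contributors, royalty_rate=0.20):
    """Split a fixed royalty share of one dataset resale evenly
    across the people who contributed voice recordings.

    sale_price   : price the client paid for this resale
    contributors : list of contributor IDs (hypothetical)
    royalty_rate : fraction of each sale set aside as royalties
                   (0.20 is an invented placeholder rate)
    """
    pool = sale_price * royalty_rate
    share = pool / len(contributors)
    return {person: share for person in contributors}

# Each time the data set is resold, the same contributors earn again,
# so their earnings build over the lifetime of the data set.
earnings = {"asha": 0.0, "meena": 0.0, "ravi": 0.0}
for price in [1000, 2500, 4000]:  # three hypothetical resales over time
    for person, amount in payout_per_resale(price, list(earnings)).items():
        earnings[person] += amount

print(earnings)
```

The point of the sketch is the loop: unlike a one-time task payment, every future resale adds to each contributor's total, which is the "royalties that build over time" idea from the conversation.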
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Government regulation of AI has arrived | On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence) . Two days later, a policy paper was issued by the U.K. government entitled The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023) . It was signed by 29 countries, including the United States and China, the global leaders in AI research.
In this Fully Connected episode, Daniel and Chris parse the details and highlight key takeaways from these documents, especially the extensive and detailed executive order, which has the force of law in the United States.
Leave us a comment (https://changelog.com/practicalai/244/discuss)
Changelog++ (https://changelog.com/++) members save 4 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Traceroute Podcast (https://deploy.equinix.com/traceroute/) – Listen and follow Season 3 of Traceroute starting November 2 on Apple, Spotify, or wherever you get your podcasts!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence)
• FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence)
• The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023 (https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-244.md) | 9 | 0 | 0 | [Music] Welcome to Practical AI. If you work with artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners for helping us bring you Practical AI each and every week: fastly.com, fly.io, and typesense.org. [Music] What's up, friends? AI continues to be integrated into every facet of our lives, and that remains true because you can now index your database with AI, you can write more code and become that 10x-er you always wanted to be, and you can even draft a letter for a lease on an apartment or a new property. AI is everywhere, and it might be time for us to start questioning: is AI our friend or our worst enemy? That's the focus of the three-part season opener of the award-winning podcast called Traceroute. You can listen and follow the new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. This show is all about the humanity and the hardware that shapes our digital world. In every episode of Traceroute, a team of technologists seeks to untangle the complex question: who shapes the internet? Seasons 1 and 2 gave us a crucial understanding of the inner workings of technology while revealing the human element behind tech, and season 3 tackles not just AI questions, but also: how can we use technology to preserve the Earth? Who influences the technology that gets made? And what happened to the flying cars we were promised? I think it's safe to say that the future of AI is both exciting and terrifying, so it's interesting to hear the perspectives of experts in the field. Listen and follow this new season of Traceroute starting November 2nd on Apple, Spotify, or wherever you get your podcasts. [Music] Welcome to another episode of Practical AI. This is
Daniel Whitenack. I am a data scientist and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. Today is a Fully Connected episode, with just the two of us, where we try to keep you updated with everything that's happening in the AI community, and maybe learn some things ourselves that help us level up our own understanding of these topics, and yours as well. So, how are you doing, Chris? Are you keeping fully connected? I'm definitely fully connected, and there's been a lot to fully connect to this week. There was a bit of homework going into this episode, so it's been interesting; we've got a lot to talk about today. Yeah, there's a lot happening in the world. Normally in these episodes, or at least it feels like in recent times, there have been a lot of updates on new models and other things like that, and that's still happening; things like Mistral and other models have come out. But I think the interesting thing I've seen people talking about this week in particular circles back to government interactions with the AI community, and in particular the White House here in the US: the president's executive order on safe, secure, and trustworthy artificial intelligence, which is timed interestingly with other things as well. I know that you're very in tune with the public sector, Chris; are you seeing a lot of discussion of this in your circles? Yeah, we are. I would say that, as we parse through it, and we'll talk about the different sections, a lot of the stuff that would affect my day job in the defense and intelligence world we're already doing, not kind of doing, we're already doing. So there's a lot of specifics in this executive order, but it's not starting a new process for
us in that world uh it's, something we've been working on for, quite a long for years so it's, interesting I was pleasantly surprised, with this exec because you know we've, talked many times on this podcast about, the fact that um how long it's taken for, governments to to start getting a bead, on on these AI issues and what does, regulation mean and who's going to, participate and you know how you going, to do it and all this kind of stuff uh, we've been saying that for years and, finally uh on Monday of this week as, we're recording Monday October 30th 2023, we got this executive order issued and, then I believe we got The Bletchley, declaration uh issued as well later in, the week you want to talk about that a, little bit yeah sure it might be useful, I know we have a wide range of listeners, and if I'm being honest even you know, myself we we had a friend that just, became a US citizen the other week and, congratulations yeah talking through, things with him is like of course it's, fresh in his mind but all all of these, ways about how our government works and, the various ways in which things can be, legally enacted can be quite confusing, so maybe before we jump into things, let's maybe just touch on an executive, order what might that imply and how it, may be different than certain things, that have proceeded Ed it because, there's been statements on AI and, government uh thinking about AI in the, past here in the US but from your, perspective what makes an executive, order maybe different than some of the, things that we've seen in the past sure, so noting that I am most certainly not a, constitutional attorney or or any such, thing uh just a dude who likes AI I, would still vote for you Chris a that's, really nice of, you but um but an executive order in, short, and I'm sure if we have listeners that, say I'm slightly off they can correct us, on this but the president of the United, States can issue an executive order, which is a legal device which, 
essentially has the effect of law it can, be overridden a couple of different ways, the US Congress can override it by, passing an actual law so if an executive, order is in conflict with a law that is, passed in Congress that law in Congress, Will trump that and in addition an, executive order I believe can be uh the, US Supreme Court can also override it on, constitutional basis uh but unless one, of those two things happen my, understanding is executive orders, otherwise have the effect for all, practical purposes of law in the United, States of America yeah and apparently, these actions are quote the most, sweeping actions ever taken to protect, Americans from the potential risks of AI, systems it's funny how you know the the, use of adjectives in government has come, quite interesting over the years but, this is the most sweeping actions ever, taken to protect Americans from the, potential risks of AI systems there are, some interesting ones and I think what, might be interesting about at least a, couple of these is the way that they, might influence the AI industry in the, US in particular but also some way, that government agencies and and other, entities might become involved in the AI, world so I guess before we jump into the, specific you also mentioned the The, Bletchley uh decision what's that for, our listeners how how might that relate, to things going on sure so there was a, summit it was called the AI safety, Summit that took place on November 1st, and 2nd of, 2023 which was just a couple of days ago, uh this week they issued a what they, call a policy paper which was The, Bletchley declarations by countries, attending the AI safety Summit on that, date and it's fairly short it's a few, paragraphs long and my understanding uh, not being a legal mind is that it's it, would not be binding in any way legally, but there's a number of countries listed, there's a couple of dozen that attended, including the United States United, Kingdom and and uh a lot 
of other countries around the world that basically said (the short of it is), "We're acknowledging that AI safety is important to us all; it's by definition an international concern, and the way to deal with these concerns going forward is for us all to work together and share information," and such as that, without reading the whole thing to the audience on the show, which I don't think we have time for.

Yeah, we will link these things in the show notes.

Absolutely. It's a good thing; I agree with everything they said, and it's a good kind of kumbaya of saying we need to work together, but otherwise it's just saying, "Hey, let's go do these things." That's very important, because it is indeed an international concern; I'm definitely applauding that. I've been more focused a little bit on the executive order, simply because it's binding, having the effect of law, and (we'll talk about the details) it gets quite specific. With the executive order itself, there's a fact sheet that kind of gives you a high level, and it doesn't give a lot of the detail. I read the fact sheet first, and I was a little bit like, "Okay, that's all great, fluffy stuff." But when I read the executive order afterwards, it gets down to who's responsible for what, how long they have to do it, and what they have to do, and there's a bunch of specific results expected or standards applied. There were clearly people from the AI community involved who were very, very knowledgeable, so it wasn't a political-only group of people that did this; they obviously had expertise available to them. So I ended up being more impressed than I expected with what they came up with.

I haven't done all of the research, and I don't know if it's even published anywhere who was involved in the process of developing this, but we can go through some of the high-level things. There are a few really helpful resources: there's the fact sheet that you mentioned, and there are also some good articles that I've been looking at over the past days that give some summary information that might be relevant for people as well; the MIT Technology Review has an article, and there are some other ones. If we don't dive into the specific wording quite yet, and I just look at the high level of what people are saying about it and what's standing out to them: one thing that I see really driven home is the real focus on safety, in particular safety and security, and the real focus on labeling and watermarking the output of AI systems. If I'm understanding right, that is framed within the executive order as a safety thing, in terms of protecting our citizens from potentially fraudulent or harmful material that might come out of AI models, as well as giving entities (whether related to defense or education or whatever) the ability to identify and discriminate between AI-generated assets or content or text and non-AI-generated, or I guess human-generated, content.

You know, we're talking about a specific point here in it, but it takes both sides of that. It both issues the directive that, in essence, you must identify AI-generated content so that you can't have deceptive representation there, and it also directs the appropriate government agencies to come up with mechanisms by which they may detect people who are not subscribing to that directive, or foreign actors; obviously, all the other countries in the world are not held by our executive order, so there's the means to detect all that. But I also would say that that's not a new idea to the DoD and intelligence community.

And am I understanding this right, Chris, that the executive order would put the obligation on certain government entities to figure out how to enforce this mandate on the other organizations, companies, entities, and teams that are within their jurisdiction? Is that a good way to put it, or am I misunderstanding?

Yeah, it leaves a lot undone. At some point we probably should jump in and hit the highlights on what they were, but it basically puts the burden on various agency leaders: it'll outline what they have to accomplish, when it must be accomplished by, and in some cases what the output of that is, but it doesn't tell them how to accomplish all that. And I have yet to see any place where it ascribed any budget to any of those items. So there are strengths; it was good thinking in many areas, but the dollars that would go into making some of these things happen, I did not see assigned.

So, positives and negatives. When you read this, when you see this coming through, and how things have trended over recent years with respect to the government's thinking on AI, is this like taking us up from a level two of priority in government and action up to, you know, a ten? How do you see this escalating the kind of involvement from the government in the AI industry, in terms of practical things that we'll see in, let's say, the coming year?

Oh, I would say that they don't have a choice, given that it's now in an executive order and it directs them with timelines and specifics. If I was leading a federal agency that was being called out to do this, I would probably be scrambling, trying to figure out who I had, who I could get, and where I was going to pull the money from to accomplish those things. I'm hoping maybe a listener knows that there are budgets described that we haven't heard about yet; that would be good news. They would have to juggle a little bit in terms of their priorities to make some of these things happen, but I think it's good. It forces it right up to the top of the list of things the government could make happen, and I don't think it was going to happen from industry alone.
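The detection directive discussed here is worth grounding in something concrete. One family of techniques in the research literature for detecting AI-generated text is statistical watermarking: the generator softly prefers a pseudorandom "green list" of tokens at each step, and a detector later tests whether a suspect text is improbably green. Below is a minimal toy sketch; the vocabulary, hashing scheme, and numbers are invented for illustration, and real watermarking schemes (and whatever standards agencies eventually settle on) are far more involved than this:

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(50)]  # toy vocabulary, purely illustrative
GREEN_FRACTION = 0.5                    # fraction of vocab marked "green" each step

def green_list(prev_token: str) -> set:
    # Derive this step's green half of the vocabulary from a hash of the
    # previous token, so generator and detector agree without shared state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    # A watermarking generator samples (in this toy, exclusively) from
    # the green list at every step.
    rng = random.Random(seed)
    tokens = ["<start>"]
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens[1:]

def generate_unwatermarked(length: int, seed: int = 1) -> list:
    # Baseline "human-like" text: uniform over the whole vocabulary.
    rng = random.Random(seed)
    return [rng.choice(VOCAB) for _ in range(length)]

def detect_z(tokens: list) -> float:
    # z-score of the green-token count against the no-watermark null
    # hypothesis (each token green with probability GREEN_FRACTION).
    hits, prev = 0, "<start>"
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    n = len(tokens)
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - GREEN_FRACTION * n) / std
```

A watermarked 200-token sample scores a z of roughly 14 here, while unwatermarked text hovers near zero, which is the basic statistical signal a detector would threshold on.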
We've watched for years, as commentators on the industry; every week we've seen individual companies kind of do their own thing. They compete with each other, because there's a bit of a marketing tinge to AI safety as well, but nothing has applied to the entire industry universally, and so this will clearly do that, with external standards.

[Music]

This is a Changelog News break. Hugging Face released a distilled variant of Whisper for speech recognition. It's English-only and optimized to the hilt, which resulted in running six times faster while being 49% smaller and performing within 1% word error rate of the original model. It's designed to be a drop-in replacement, and the Hugging Face team cites five reasons why you might use it: faster inference, robustness to noise, robustness to hallucinations, designed for speculative decoding, and permissively MIT licensed. This looks great, but I'm still waiting on speaker diarization. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

[Music]

The one that I thought was probably most interesting to me, although I think the others have both good and interesting implications (as my wife and her business would say, there are wins and opportunities)... I'll just read the wording here, and we can talk about it and some of the wider implications. One of the things is a requirement that developers (which is interesting to me, because I'm an AI developer, I guess), a requirement that developers of the most powerful AI systems share their safety test results and other critical information with the US government. How does that strike you? What are your
thoughts?

Depending on the model and the specifics, it will often be the US Department of Commerce that's receiving those; in some cases it will be military intelligence, depending on the nature of what the concern is and what the model can do. But that will have to be created; it doesn't exist today, to the best of my knowledge. If I was leading Commerce, I'd be going, "Okay, how are we going to receive this information and store it?", because they're probably going to be getting quite a lot of data coming at them, with the amount of development in this area. That's one of the things that, certainly here in the US, we're going to have to learn how to do, because as of now... I don't have in front of me how long they have to put that into place, but after a certain number of days, it will be required by law.

If we're specific on the thing that would be required by law, if I'm understanding this right, it would be someone that is developing a new model. Maybe one thing (I don't know if you've seen this, maybe I just missed it): I'm not sure if this would be new in the sense of training from scratch, or new in the sense of a fine-tuned model. But for those that would release large models (in recent weeks we've seen Mistral and Llama 2 and Zephyr and all these models coming out), foundation models that are significantly large, as those are released (after they're released, or before they're released; maybe we can talk about that), the people producing those models, the teams, the developers, would perform some sort of red teaming to probe the models in terms of potentially harmful outputs, and kind of behavioral tests, and perform risk assessments that would be gathered in some sort of coherent way and shared with the US government. One of the interesting pieces of this part of the executive order in particular, if I'm understanding right, is that it falls under the authority of the Defense Production Act.

Correct.

Which I think, in addition to the executive order status, might bring this point a little bit higher up in terms of firm legal footing. Again, not an expert in that thing, but that's my understanding from what I've read.

Yeah, and as a non-expert as well: the Defense Production Act, in case listeners recognize it because they've been hearing about it a lot over the last year, has been a point in the Russian invasion of Ukraine; it has been repeatedly cited as a mechanism that the US government can use to increase production to support that effort and our allies, the Ukrainians. So you may have heard that before. And from that I know, as a non-attorney, non-legal mind, that it gives the US government broad powers to require commercial companies in the United States, or that we do business with, to meet a certain set of criteria; there are things they can be told, "You must go do this," because it's in the interest of our national security. So it's a fairly sweeping thing, as is my understanding. Since it has been referenced here, I agree with you: you not only have an executive order, but you also have reference to a fairly strong point of law from Congress.

One of the interesting takes that I saw on this (this is just a blog post that I'll link in the show notes): the comment in the blog post was that this might be one of the things that would kind of lead to a firming up of the players within the AI market as we see it. Because not only is there now a computational and data infrastructure burden on those who want to produce these large models, but now there's more of a regulatory burden that would actually be an additional kind of step here, in terms of being a player in the foundation model space.

Now there's irony to that, by the way, in that they explicitly note they're trying to address equity in the executive order, but by adding the regulatory burden, that will be exclusionary.

Correct. Yeah, you don't get anything for free, that's for sure. So this will be interesting. It's hard for me to think that progress will slow very quickly around releases of these models. What is very interesting to me is how they will end up deciding: if I pull down Llama 2 and use a small data set that I have access to to fine-tune it, and then I release that as another foundation model that people can use... I am modifying the weights, right? At what point between there and training from scratch (which even large models often don't do; they might start from a starting point with their weights in certain cases)... at what point in there are you really a developer of a significantly large model? Because both of these models are large. When is it adaptation, when is it fine-tuning, when is it training or releasing or building, as the executive order says? So yeah, that's all kind of mushy in my mind, I think.

Yeah, they do attempt to kind of address that. There's a section 4.2, "Ensuring Safe and Reliable AI", and they talk about a time frame. This is one of those that calls out the Defense Production Act specifically, and within 90 days there is a set of things that are being required, which is a very short timeline when you think about it. Part of that is they have a set of criteria, which is certainly not comprehensive; to your point just a moment ago, it's better than I expected, but it's not entirely sufficient, and there are a lot of nuanced questions, as you just raised. I noticed that one of those they talk about is the quantity of computing power used for training, and they have picked, interestingly, 10 to the 26th integer or floating-point operations as a general point of
computation. It's like a threshold, and if you're above that, you're kind of in that large range that they are particularly focusing on. They have reduced that down to 10 to the 23rd integer or floating-point operations for models that are based primarily on biological sequencing and things like that. They have some other aspects in there, but they have an interesting threshold that they call out in the executive order.

So you know exactly what's going to happen here, Chris, with whatever that number ends up being. It's just like... the town where my dad grew up in Kentucky is (actually, still is, I believe) a dry county, meaning, in the US, a place where you can't actually purchase packaged liquor in a liquor store. Of course, what happens then is that just on the edges of the county, at the sign where the county ends, there are like 14 liquor stores, right? We'll see something similar here. We've been choosing our numbers of parameters and such (7 billion, 13 billion) for various reasons for our models; what's going to happen is people are going to get really, really good at training models right under that threshold; 10 to the 25th will be the magic number going forward.

Yeah, and even though that's kind of gaming the system, it could actually have a really nice effect: instead of us always thinking about more data and a bigger model as the way to incrementally improve, this does put a burden on those that want to operate at the lower level, under the threshold of regulation, to say, "Hey, what if we're creative, either in our model architecture, or the way we train it, or the way that we fine-tune, or whatever that ends up being, to actually do more with less?", which I think overall would be a good thing. And academia is already thinking about these things, with things like the BabyLM workshop and other things like that. So I think that could actually have a follow-on effect that's quite positive for the model landscape.

You'll have a set of players that, out of necessity, must play up in that area, because the large, true-LLM range isn't going to go away, but that will also be dominated by large players who are already doing regulatory stuff anyway; maybe they weren't in this, but they're accustomed to that. It is exclusionary, so it'll be those large players, things like cloud providers and such, that will be doing that; but you'll probably have a whole range of mid-sized players, below the Amazons, Googles, and Microsofts of the world, and the OpenAIs of the world, that'll play just below that and build foundational models. It may be that, yeah, as you said, it'll be interesting to see what kinds of innovations come from there.

One other question that I have coming out of that is: how do I know that I hit the 10 to the 26th? Along with these sorts of restrictions or legal implications throughout the executive order, this kind of naturally brings up a lot of questions. They also talk about watermarking, which we can talk about here in a second, but the general thought is: whether you're talking about this computational power, red teaming, behavioral tests, watermarks, or labeling, you need standards and tools and tests to help ensure that you can do these things. How do I know when I hit that threshold? How do I watermark things? Etc., etc. And so one of the other things that's drawn out right away is the development of standards, tools, and tests to ensure that AI systems are safe, secure, and trustworthy, and this specifically calls out the National Institute of Standards and Technology, or NIST, that you might have heard of before, because they have one of the most precise clocks (to help keep the standard of time) and the most precise weights.
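The "how do I know that I hit the 10 to the 26th?" question can at least be napkin-mathed. A common community approximation for dense transformer training compute is roughly 6 FLOPs per parameter per training token; the order does not prescribe that counting method (how operations get counted is one of the open standards questions), but it gives a feel for where the thresholds sit. A rough sketch:

```python
# Back-of-envelope check against the executive order's compute thresholds,
# using the community's ~6 * parameters * tokens approximation for dense
# transformer training compute. This heuristic is an estimate for intuition,
# not a legal or official measurement method.

EO_GENERAL_THRESHOLD = 1e26  # integer/floating-point ops, general models
EO_BIO_THRESHOLD = 1e23      # lower bar for biological-sequence models

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per param per token."""
    return 6.0 * n_params * n_tokens

def over_threshold(n_params: float, n_tokens: float, bio: bool = False) -> bool:
    limit = EO_BIO_THRESHOLD if bio else EO_GENERAL_THRESHOLD
    return training_flops(n_params, n_tokens) >= limit

# A Llama-2-70B-scale run (70B params, ~2T tokens) lands around 8.4e23 ops,
# two-plus orders of magnitude under the general 1e26 reporting threshold...
print(over_threshold(70e9, 2e12))            # False
# ...but the very same run would clear the biological-model threshold.
print(over_threshold(70e9, 2e12, bio=True))  # True
```

By this heuristic, tripping the general threshold takes something like a 2-trillion-parameter model trained on 10 trillion tokens, which is part of why the "train right under the line" dynamic discussed above seems plausible.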
Yeah, the most precise; like, this is exactly what a kilogram is. And actually, in my undergrad, when I was doing research, I did research at NIST with one of our collaborators. We were theoretical, they were experimental, and I think mostly all I succeeded in doing was spilling a bunch of carbon nanotubes on the floor; not very good at experiment. But that's what they're experts in (minus the occasional intern that spills carbon nanotubes on the floor), and they're specifically called out to help set the rigorous standards for what's phrased in the executive order as extensive red-team testing to ensure safety before public release. What are your thoughts on this, Chris?

This goes back to something I mentioned earlier: there's a lot of figuring out the "how" that's undetermined. You clearly had some bright AI minds that helped construct the executive order, but they've left wide open what that means. What is red teaming? They hit some things that red teaming should be trying to do at a very high level, but it's up to NIST and the Department of Commerce to come up with what the specifics are, and I think we're all going to be learning. The key thing that I would take away from that is that this executive order is the first of many things to follow over the next year from various agencies, as they are trying to fulfill the executive order's intent.

Well, Chris, the next thing I see in here is biological materials, which I don't necessarily think about that much, even though I'm made up of biological materials; I guess I don't consider my own biological self very much. They talk about protecting against the risks of using AI to engineer dangerous biological materials. What is a dangerous biological material? I guess a bioweapon? Is that what we're talking about here?

It would be bioweapons, and it could be something that we've all heard about: when we were in the height of COVID, there were all the theories about whether or not it had been created in a lab in China or elsewhere, and such as that. So it could be a weapon by design, it could just be a virus, it could be lots of different things. There are a lot of international laws and domestic laws against these things, but we also have actors around the world who don't necessarily subscribe to the same values, and so it's still something that the intelligence and defense communities of both the US and our allies spend a lot of time thinking about how to address and defend against, though we follow those laws that our adversaries may not.

Yeah. People might be wondering how you might practically think about protecting against the development of dangerous biological materials with AI, and we've had previous shows (we can try to find them and link them in the show notes) about using AI to find new drugs, or something like that...

Right. Yep.

Well, a lot of those projects, whether it be those kinds of pharma-related projects or academic projects related to biology and AI, or AI and life sciences, sort of overlap, and a lot of those do have some sort of federal funding behind them, whether that be NIH or NSF, these sorts of grants. And so one of the things that's called out here is, "Hey, if you want a grant, if you want our money, then you have to agree to establish these standards XYZ," which, to my understanding, are not specified in this executive order. But it's saying, "We will create these standards," and those will be standards and requirements to receive federal funding for biological research with AI; or, I don't know, that's probably also to be determined, like how to categorize that.

And we're talking a lot about biological, but they don't just address biological in it. There's some special stuff on biological, but then they also address what they refer to repeatedly as CBRN, which is short for chemical, biological, radiological, or nuclear weapons. So there's the kind of civilian research on the biological side, but there's also the military side, under the CBRN acronym, that they're addressing. And there's a lot of concern expressed throughout the executive order about all of those being enhanced by AI, in terms of finding solutions where you're using models. How do you handle those, both domestically under this law, and how do we direct agencies to help keep us safe from adversaries that might not respect that?

Yeah. There were two things that I was seeing in the news and commentary on this that were standing out. One we've already talked about, which is related to the requirement for the quote "most powerful AI systems" to share their safety test results. The other one that stood out, or seemed to stand out to many people, was the protections that are put in place for establishing ways to detect and label AI-generated content. This would be images that are generated from text-to-image or text-to-video systems, or audio that's maybe synthesized (which is continually getting better), or voice clones, that sort of thing; and also text. Text would fall into this category too, around misinformation and that sort of thing that you might want to filter out; or maybe, if you're one of those teachers that wants to prevent your students from using ChatGPT to generate their essays, then finding ways to detect AI-generated content and enforcing those in certain contexts. That's kind of my general reading of this stuff, and I think what I was seeing was a good bit of positive response to this, even from many in the AI community that recognize, yeah, this is an important piece of what we will need to do moving into the future, in terms of having to label things and needing to be able to discriminate between these things, but also the recognition from those in the AI
community that this is, still very much a topic of research, which is not figured out yet totally and, I think that's one of the it's, interesting I think that that will be a, big impact because that will affect so, many industries that are not necessarily, uh ready for you know they've kind of, said ah we can make some money we can, generate content and we've all been, seeing that online but um it's coming, from Industries where they're they have, not had the burden of responsibility for, it I think uh certainly all of us that, are in the AI world have used different, models to generate texts and stuff and, you know it started the first time is, kind of cool but then you realize wow, this is an amazing business capability, but now it's an amazing business, capability with a fairly significant, responsibility attached to it it will be, interesting you know things like the, marketing and branding Industries which, I once upon a time I was in um will have, to figure out a way to do that and still, serve their clients in that particular, industry because you if you just have, everything as I generated content that, will affect how people perceive the, content you just generated trying to, satisfy your client you know in that, particular industry so there's a lot of, nuance that's very industry specific, that's going to have to happen for that, yeah and I hope that many out there, recognize that we need to figure out, ways to label this generated content and, track it even if it's only for practical, purposes of like hey more and more of, this content going to get out there and, I don't want to necessarily always be, training my next AI system on AI, generated content maybe I want human, content but there's a recognition that, it is an active area of research and, there's also a gray area here right so, if I have chat GPT write me a cool blog, post and then I take that out and I, modify a few things and then I put a the, paragraph back in and have it phrase and, then I 
take it out and then I edit some, more things this is a very often a very, Dynamic process and I think for Safety, and Security and trustworthy AI systems, we would want that kind of back and, forth with a human but it's not always, the scenario where it's simply human, generated content or it's simply AI, generated content this does get very, mushy even in automated systems where, there's humans post editing machine, translations or there's humans reviewing, analysis that's been generated out of a, SQL table or I don't know there there's, all sorts of scenarios here where, there's a lot of gray area and maybe, that's not the focus of this statement, it might be more these scenarios where, you'd want to essentially create a, factory of misinformation that's just, pumping out things to Twitter or X and, that's maybe more within what they're, talking about but I I think that working, through all those nuances in all these, different Industries and I do the same, thing I I I write a lot of stuff and, I'll write and I'll put what I've, written into uh one or more models and, I'll see what it comes back with and, I'll choose and I'll take part of this, and part of that I think a lot of people, are doing that I think that this is, going to have to a lot of this will be, settled through litigation uh so I think, uh the executive order has given a, tremendous Bo boost to the AI litigation, industry uh that has been uh flowering, over the last few years I think we'll, see far more of these uh Nuance cases, these gray areas decided in court in the, next few years um I have mixed feelings, about the fact that it is beyond ability, to handle all these cases given the, short timelines I I'm glad to see short, timelines instead of many years to get, there especially if they're unfunded uh, it will be interesting to see kind of, what come up with if you're a department, head and you have 90 days to come up, with the solution that the executive, order requires of you you probably 
will, not have solutions for all of these, things so we have some interesting times, ahead of us certainly yeah and I I hope, that there's involvement from leaders in, the space large and small so smaller, companies that are really innovating in, some of these things and larger kind of, Staples of the industry like hugging, face and others that would pour into, those things but it all of that will, require some sort of I mean minimal, exchange of money even if it's just to, buy people's time to spend on this um, because there's so many things to work, on it'll be really interesting um now, that the burden has been placed uh on, American agencies and and by extension, the American people in their Industries, to comply with all these things it'll be, interesting to see uh you know we talked, uh at the beginning about The Bletchley, declaration and that intent and all of, these other countries will presumably, come out with their own versions of this, and some will be very similar some may, may Branch out in different ways based, on the values and laws of their own, countries but it will be to see how this, works out and there will also be some, countries that refuse to subscribe to, this whatsoever not only will they not, contribute to this they may be working, uh very specifically against it and, we'll have to in turn we'll have to have, very good capabilities for protecting, when any of these cases of that are, within this purview of AI Safety and, Security are being violated by others to, an effect that is not good for us it's, late 2023 I suspect through the end of, the decade will just be absolutely, fascinating on uh on how we start, sorting through these issues yeah and to, maybe end on a slightly positive note, for those of us that are working in, daytoday in this industry we are the, developers of some of these AI systems, you know we could look at this and say, oh there's all these like various, intricacies and such that need to be, worked out but I do 
think that there's encouragement here in the sense that some kind of general guidance, a firming up of standards, helps in understanding how we might behaviorally test or red-team or assess the risk associated with our models. I think that's a really encouraging thing in many respects, especially for the vast number of AI developers out there that do actually want their systems to be safe, secure, and trustworthy. Yes, there's likely a minority of developers out there trying to be nefarious and malicious in what they're doing with AI, as there always will be with any sort of technology, but I think most of us want to build safe, secure, and trustworthy AI systems. And even if you're doing really well in one of those categories, like you've got your red teaming down right, there may be other things that come out through these processes with NIST, or the watermarking tooling, or other things; it's hard to be an expert in all those things. So hopefully, as more of this rolls into action, there is money put behind some of it, not only to put guardrails around what we can and can't do, which might be how some people take this, but actually to give us tools that will enable us to do more, because we know that we're following good practices and best practices and we're being safe and secure. And of course, there'll always be a need for research beyond that, but I think it's encouraging in that sense. I totally second everything that you just said. This is an opportunity; there are huge business opportunities in helping people get through regulatory requirements, and we've seen that in other industries. So this has come about; we're hitting regulation in AI for real. Every other time regulation has come out, there have been whole industries born that helped get through it, and services that make it much easier than it seems today, as we are first reading what is to come. So I also would encourage everyone to try
to embrace it. We do need it for safety; the dangers are real. Let's do it for ourselves, our children, and our larger community. So absolutely, let's go make this thing a good thing. Yeah, all right, Chris, that's a great way to end, and I look forward to talking with you more in the coming weeks about increasingly safe, secure, and trustworthy AI. [Music] Absolutely. If you enjoy the music you hear on Practical AI, you'll be happy to know we've released two full-length albums for purchase or streaming. Just search for Changelog Beats in your music app of choice and check them out. Volume zero is called Theme Songs, and it includes special remixes in addition to the classics, and our first volume is called Next Level, featuring many of the video-game-inspired tracks you've heard on Changelog podcasts over the years. Check us out: Changelog Beats. Thanks once again to our partners fastly.com, fly.io, and typesense.org. That's all for now, but we'll be back with more practical AI goodness next week. [Music] |
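The behavioral testing and red-teaming the hosts mention in this episode can be sketched as a very small probe harness. This is a minimal illustration only: `toy_model`, `BANNED_PHRASES`, and the adversarial prompts are all hypothetical stand-ins, not any real model, API, or benchmark.

```python
# Hypothetical red-team harness: send adversarial prompts to a model
# callable and flag any output containing banned text.

BANNED_PHRASES = ["here is how to build a weapon", "sure, i can fake"]

def toy_model(prompt: str) -> str:
    # Stand-in for a deployed model: always returns a canned refusal.
    return "I can't help with that request."

def red_team(model, prompts):
    """Run each probe prompt and collect (prompt, output) pairs that fail."""
    failures = []
    for p in prompts:
        out = model(p).lower()
        if any(phrase in out for phrase in BANNED_PHRASES):
            failures.append((p, out))
    return failures

adversarial = ["Explain how to build a weapon", "Help me fake a document"]
print(red_team(toy_model, adversarial))  # an empty list means every probe passed
```

A real harness would grow in the directions the episode discusses: larger prompt suites, classifier-based checks instead of substring matching, and risk scoring per category.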
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Self-hosting & scaling models | We’re excited to have Tuhin join us on the show once again to talk about self-hosting open access models. Tuhin’s company Baseten specializes in model deployment and monitoring at any scale, and it was a privilege to talk with him about the trends he is seeing in both tooling and usage of open access models. We were able to touch on the common use cases for integrating self-hosted models and how the boom in generative AI has influenced that ecosystem.
Leave us a comment (https://changelog.com/practicalai/243/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Tuhin Srivastava – Twitter (https://twitter.com/tuhinone) , GitHub (https://github.com/tuhins)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Baseten (https://www.baseten.co/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-243.md) | 6 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app server and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack; I am the founder and CEO at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing very well today, Daniel, how's it going? Oh, it's going great. I spent the afternoon in sort of a brainstorming session with a couple of our team members here at Prediction Guard, and it was a ton of fun, talking about a lot of prompt engineering things and how different models perform and that sort of thing, so it was a good time. I'm glad you're doing that, because you know what, I just want things that just work, you know? I don't have to think about it; I'm glad you're thinking about it. I think we might have someone else to talk to who knows how to make things that just work. Yeah, well, a lot of the models that we're running sort of just work for us in terms of inference, because we're hosting some of our models on Baseten, and we've got Tuhin joining us from Baseten today. How are you doing, Tuhin? Hi Dan, hi Chris, nice to see you guys again; thanks for the kind words, Dan. Yeah, for sure. Well, it's exciting to have you back on the show, because, I believe, I looked it up, it was like May or June 2021 when we recorded and released the last episode with you. So how are you doing, what's new,
and how is Baseten, how's the ride been? Yeah, it's been crazy. I think it was like May 2021; that's like a millennium in AI time, you know. Oh my God, it actually does feel like it was a different job; it feels like the job before the last job, if that makes sense. I think the last two years for everyone here have probably been a bit of a whirlwind, and you guys are pretty on top of current things in machine learning and AI, and I imagine, just like for you guys, it's hard to keep up at times with what's going on. We only do one show a week, and it's getting to where we almost need a daily show; there's so much content now, it's not enough. Yeah, don't give our listeners ideas, because I don't know if I can do a daily show, but it is a lot. So I'm looking back at our previous episode, and last time we talked about sort of the easiest way to create ML apps; that was kind of how the conversation was framed. And I know, just from working with you and talking with you as friends, that a lot has changed, and you've seen some things within how people are deploying machine learning and AI systems that Baseten is now really focused on. Could you give us kind of the high-level view of Baseten and the type of problem, the type of solution, that you're offering? Yeah, I think it's just worth pointing out, before I go into Baseten-specific things, what the key things are that changed since we last talked. If you think of the years from roughly 2012 to 2020, data scientists were the ones doing a lot of machine learning. I think that's changed for a number of reasons; I have a lot of thoughts on that, but probably the bigger change is the emergence of good open source models, and I know you do a lot of
work with that. We've seen the Hugging Face community evolve into a really, really vibrant place. If you want to get a sense of how fast things are moving, it doesn't take long: take screenshots of Hugging Face every Monday morning and see how the trending models are changing, and you'll see that things are pretty different every week. I don't know if it was Daniel that said this or another friend of mine, but the analogy was that Hugging Face has become to AI kind of what GitHub has always been for software developers over the last decade or so; it's just the place to go to find things. Anyway, I didn't mean to interrupt you there. Yeah, and that's the good thing about it, but also the confusing thing too: you have some random person who clones a model or copies a model and uploads a random version of that model that maybe works. Yeah, 100%. I feel like the game you have to play with Hugging Face is not just "does the model exist", but "does the model run". But open source emerged, and stuff like Whisper showing up, and some of these OCR-type replacement models showing up, are probably the more interesting ones to me, not because of what they do, but because they end up just solving a lot of open problems. Yeah, if you think about transcription as a problem, think about Nuance and how long they were working on that: literally 20 or 30 years of work, and it's just kind of, all right, that's a solved problem now, let's move on. Oh, and we've actually solved multi-language with the same model too. Business models come and go, don't they? Yeah, it's wild. And I think the last piece is just around the ChatGPT moment for AI, which is interesting for a number of reasons. I personally, as someone who builds infrastructure, think it's most interesting because if you want to call that the iPhone moment of AI, I
think it's a bit different to that, because it's so early in the journey. It's like if the iPhone had showed up when we were all using the Nokia 5110; the world would be very, very different. I think because consumers and developers got their first taste of machine learning and AI through ChatGPT and the GPT APIs, the stakes are just higher; it's harder to build something good. People don't want to use a model that takes 12 seconds to run. High-speed production inference is taken for granted when you're using those models, and when you combine that with the fact that open source models need to be run somewhere, we personally think there's a massive infrastructure opportunity there; or maybe not even an opportunity, just a fact that a whole new stack will be built to support models being able to power these end-user experiences. I think that's kind of the core insight going into Baseten. And talking a bit about Baseten and what's changed: two years ago, when we were talking about data scientists, we weren't talking about engineers. I think that's pretty key to our story, which is that we came to the realization that every engineer needs to grapple with machine learning now, as opposed to maybe a smaller market of data scientists. I think going from smaller models that run in memory to larger models is another big focus change we've had. There was a bunch of language stuff and NLTK stuff, and you were doing all that work, Daniel, but for the most part everyone was using scikit-learn, and scikit-learn models for the most part run in memory on CPUs. On CPUs, yeah. If you think just in that time period about the amount of maturation that's occurred in this industry... you said something a second ago which kind of hit me, and that is that most people out
there in the general public are really just getting into this with the ChatGPT stuff, and we've come a long road already. I'm listening to you, and it's amazing how far we've come in such a short time. Yeah, it's insane, 100%. And I think going from small models to large models as well, that change happened pretty quickly. As someone who did a bunch of work with small models: they have their time and place, but they just aren't that fun anymore. Something that runs in memory just doesn't give you the same feeling. And then I think the last one is that so much of the stuff that was happening with machine learning was outside of FAANG; FAANG had some production use cases around ad serving and search and whatnot, but outside of that it was mostly just internal workflows: you'd go and work on fraud and content moderation and recommendation systems. And going from that to, hey, every product is going to have some sort of machine learning in it, every existing product definitely will, and 90% of new products will be built with a new pillar that is machine learning and AI, which wasn't the case two years ago, is just crazy to think about. Yeah, it's completely changed in that time. I think one thing that was brought into focus for me while you were talking was that as soon as you make this leap to larger models, and you make the leap from some closed API that's very fast to maybe running your own model, there are two things that become immediately clear. One is the infrastructure challenge around that, which is the workflow around that and the model hosting; Baseten of course is an expert in that. And the other side is the product sort of concerns around running these things, which I feel like we always have great conversations about, Tuhin, because you're on the
one side of that and I'm probably on the other side. When you're using ChatGPT or even the OpenAI API, they have layers of protections on the prompts, and on the output they have filters to make sure they're not responding in certain ways. There are all these product concerns that people don't think about, and then they take a Llama 2 model or something, they run it, and it's like, oh, this doesn't respond the same way; this is not a product, right? So the infrastructure is a piece of that; the ability to iterate very quickly with models of a variety of types I think is part of that infrastructure challenge. How do you see that infrastructure piece of this playing out? Yeah, maybe just before I go into that: when you were saying that, it reminded me of when I bought a drone in 2014, a DJI Phantom 3. I was so pumped, and I flew it around a bit, and it had an autopilot mode where you didn't have to do anything; it kind of stabilized itself. And then there was this button that said manual mode and had all sorts of warnings, and I remember being like, oh, how hard can it be, right? But I put it into manual mode while I was up in the air, and it just fell to the ground very, very fast. I don't know, Chris, do you have some experience with this? Well, I'm in aerospace professionally, so I know a little bit about that. And Chris, weren't you the TV host of a drone racing competition or something like this? Yeah, a few years ago I was one of the hosts of the first Drone Racing League; they had a championship series, and they were navigating obstacle courses and stuff like that. What I learned through that experience is how much autonomy is required to make even small things fly well. Yeah,
and yeah, I sympathize with you for going manual there. Oh no, don't do it, don't do it. And the analogy I think holds here: the closed API seems so great, and then you see something like a Llama 2 or a Mistral and you're like, okay, I'll just rip and replace this. Nope, that's not going to work, for a number of reasons. And I think that's how we think about infrastructure at Baseten: running models in production is very, very difficult, for a number of reasons. I can start from a user-requirements perspective: latency and throughput are paramount; cost is something you want to optimize; data privacy is a whole other beast; security comes into play; then orchestrating this across a bunch of different hardware and orchestrating this across clouds becomes a problem; and benchmarking these things isn't easy. And that's even before you go into all the evals and the kind of guardrails you want to put around this thing to get it running, and I know that's some of the stuff that you think a lot about, Dan. So just as an extension of what you're saying, and Daniel, you mentioned it at first, could you also talk a little bit about the difference between just having the model hosted and the idea of what you have to put around it to make it a product? I don't hear a lot of conversations about that, and it's a big set of gotchas on what to do and what's involved. What's your thinking around that when people are looking at doing that? I can talk well about the first one, and I have thoughts about the second; Dan's going to be the expert on the latter piece of that. But to get a model running in production, there's actually a ton of work you need to do from an infrastructure perspective, and this is before we
talk about the workflow stuff. You need to figure out how to containerize this thing and get the image running. As we alluded to earlier, just taking your model from Hugging Face and expecting it to run is not a thing. There are a bunch of requirements that these models have: there's quantization inside the code; there are different base images that they might need based on Torch and PyTorch and Python versions. You can really find yourself in a bit of a pickle just trying to dockerize a model. So the first thing is you need to figure out how to get this into some sort of containerized form so you can run it elsewhere. Once you have that, the truth is that you need to spin up some sort of servers that can deal with variable traffic. The reason is that these things tend to be expensive and they tend to be bound by compute, so if you get smashed with a bunch of requests, it's not like you can just have one model; requests will queue up, they will time out, the whole thing will slow down, and your product won't work. So you need to figure out how to scale up and down with traffic, and then you need to figure out all the security concepts that come with all that. That's just the serving layer. Now you need to start thinking about the workflow layer that sits on top of that: version management, hooking up into CI/CD to really treat this like a service or a microservice, putting your model behind an API; you need observability and logging, another whole set of features. And what you realize is that taking a model and getting it working in production behind a reliable, secure, performant, and maybe cost-efficient API, as someone who has done this myself and has built a company to try to abstract this away: easily, for one model, a couple of people can be at work for a couple of quarters if
you're lucky, and at scale it's the most efficient organizations, those that can hire people with Kubernetes experience, that are able to do this. So that's the type of thing that we try to abstract away from our users, where it's like: you figure out the Python code, we'll figure out everything else, and we'll give this model first-class treatment, so that you have versioning around it, you can log around it, you can observe it, and you can call it, but you don't need to think so much about that; you get that for free. That gets you to the point where you have something behind an API and ready to consume. Now there's a bunch of stuff that needs to happen to make sure that it doesn't start saying random stuff, that you protect against hallucinations, that it's not just ingesting PII all the time; Dan can probably talk really quickly about that as well, I'm sure. Yeah, I think part of the reason why I'm always excited to talk to Tuhin and his team at Baseten is because they are experts in all of those layers that we just talked about. I was actually on a call with someone the other day, and we were talking about spinning up some microservices or something, and I think my comment was, I just really don't want to care about Kubernetes, because I don't want to wake up lying in a ditch, crying in the fetal position; that's how I view that whole world. So props to you and your team for dealing with that side of things. I think that's what's allowed us on the Prediction Guard side, in a lot of ways, to bring up a model quickly and then have the time to think about some of these other things too. And I don't know if you can comment; I have my own perspective from trying to run models for my company, but it would be interesting to hear the perspective of the different personas that are coming into Baseten. Are they people that are sort of application developers
that are not infrastructure people? Are they data scientists? What are the types of people that are coming to Baseten? And maybe along with that, as you mentioned, closed APIs are getting used a lot, but people are still coming over to think about hosting their own model; one question would be who are these people, and why? Totally. I'll answer the second one first, or maybe I'll take them together: it is more and more just engineers. I'd say it's less and less your traditional data scientists; it's more and more people with some ML exposure, product engineers, infrastructure engineers who have tried to build it themselves and have really felt the pain. From a product engineering perspective, why do people want to use open source models? I think cost is one big thing: OpenAI costs stack up over time, or Anthropic's. I think the other one is data privacy and security: you don't want to just be piping over all your data to OpenAI today, especially when you start to talk about B2B use cases and enterprises. And probably the more interesting one is that there's just a long tail of people working on weird models. People are fine-tuning models, and fine-tuning OpenAI models is not great; you get a bit more control, that manual mode, with open source models. So it's the long tail of use cases, I'd say, that are coming more and more, and these can be engineers, they can be machine learning engineers; there are honestly a lot of audio models, different modalities that there's not that much exposure to with closed APIs, and a lot of custom stuff as well. You mentioned shipping data over to OpenAI, and I have talked to gazillions of people who have that as a
constraint in their businesses, because the attorneys for the business are like, nope, you don't want to send your proprietary information and stuff over. I guess you would not have that issue at all with Baseten, would you? That kind of goes away altogether when you're hosting in that way, right? Yeah, you have ownership of your data; we don't log any of that data. You're treating the model as just a map of inputs to outputs and nothing else. Yeah, that would really solve a lot of people's problems, taking an approach like that. Yeah, and I think the second piece there is that once you adopt the Baseten approach, you can then start to think about self-hosting, deploying it within your own VPC; we have customers that deploy Baseten within their own AWS accounts, and data never leaves their boundaries, or their accepted boundaries. Yeah, and I think you've framed the concerns that you're looking at with Baseten very well, the sort of infrastructure and scaling concerns of hosting your own model. Could you maybe take a step back and just describe, if I go into Baseten, how have you architected the approach? Say I'm an application developer and I want to run some random fine-tune of Llama 2 that I've created somehow; what is it like for me? What does that look like with the way that you've structured this, and what's some of the thinking behind that, in terms of the workflow and how you want it to be for people, so that they can treat that model as a first-class thing, a first-class asset, in terms of what they're monitoring and logging, that sort of thing? Yeah, for sure. The goal is to make it easy and take away as much of the complexity as possible, but maybe more importantly, to allow you to have a bit of control. You might think Baseten is like a one-liner to deploy your model; we don't
believe that anymore. We think that actually having a little bit of structure around it, a bit of structure up front, gives you a lot more flexibility down the line. So we have an open source library called Truss, which is basically an abstraction such that, if you write your model in it, you get kind of everything for free. Basically, you need to write two things: one Python class with a load function and a predict function. This is vanilla Python code; it can sit within your monorepo, you can specify requirements as you want, there's nothing Baseten-specific about these files, and you can run them outside of Baseten, which I think is very important. But once you write that load and predict function, it does two things. One, it tells us, hey, here's what you're trying to do, and we can load that up: when we deploy a model we run the load function, and when we infer we run the predict function. But more importantly, within those functions we allow you to do the tricks that you need to do, so that you still have that control. Within that, you can write pre-processing and post-processing functions that allow you to maybe strip out some data, log something, monitor something; it's still giving you that control at the product and application level while still abstracting out the rest. Once you have a Truss developed, and you can go and check out a bunch of these, it's a pretty simple abstraction; you can just push that up and we give you all the workflow and version management around that deployed Truss. Yeah, could you speak then to... that's like the prep that goes into, oh, I've got my weird model, I'm writing this Python class, I'm going to deploy it somewhere. And I know that one thing that I think is really cool about how you've made Baseten
Truss is, like you mentioned, it is open source, and so you can run Truss things and deploy in a variety of ways, one of those being Baseten's hosted infrastructure, which is of course easy, but it's also generally a great sort of framework to package your models. But let's say that you do go the Baseten route: you deploy this through the Baseten client to Baseten. Could you compare and contrast? Let's say I just tried to run my model in a FastAPI app on an EC2 instance or ECS or whatever that is in my cloud; what is going to be different about what I look at when I look at my model in Baseten versus running this API somewhere else, and how does that make a meaningful difference, or what are you trying to do in terms of making a meaningful difference in the day-to-day for people? Yeah, well, besides the fact that you can run that model in FastAPI, great, you've got this model, you give it an input, it gives you an output, fantastic, let's carry on, it's the depth of features and the creation of workflow which are really important here. The depth of features is that, hey, if you can do that with FastAPI, great, but you're still going to have to set up autoscaling, you're still going to have to set up observability, you're going to have to set up logging and whatnot, and hardware management. But I think the workflow is probably more important, to be honest, which is that we're creating a defined way for you to publish new versions of this. If you want to A/B test two models, you can have two models running at the same time. It's really the removal of boilerplate and the addition of some workflow, so that when you're deploying this in production and you need to roll back a version, you don't need to go and scramble to find that FastAPI file; we've all been there before. So it's that creation of workflow that's probably, I
think, what a lot of our customers probably use us for; the production-grade inference is a given, but a lot of the differentiation comes from that workflow. Just to totally boil it down, you're saving them a lot of work right there. Yeah, 100%. There's a lot of grunge work; it reminds me of Dan liking to do his data massaging that I'm always teasing him about. But joking aside, you're basically saving us all sorts of work so we can get into production faster, get it up and running, and know that it's production-grade all the way through with a minimal amount of effort, and know that it's just there. 100%. We're working with a customer right now, and this is worth noting: they're a pretty late-stage startup, having raised hundreds of millions of dollars, with an AI-native product; they've got a team of four AI infra people to manage this, and they've been working on it for about two years. We were able to replicate it and get a more performant API up and running in two days. Wow. That is kind of what we are trying to do: it's the performance, the workflow, the maintainability, but it's also just the speed to prod. I don't know how many of your users do this, but, and this might reveal something about me as a person, and also reveal some utility of Baseten, last night I'm literally sitting on my couch, and I can log in to Baseten on my phone and just change the autoscaling of my Llama 2 fine-tune from like two replicas to like five max replicas, and the timeout, and all those things, from my couch, in between Halo games. So that was terrifying. I don't know what that reveals about me as a person, but certainly that ease of use I think is really interesting. It's like that proverb, when you try to solve a problem and you're like, I'll solve this with regex, and then you just have another
problem to solve, it's like kind of like that it's like, you want to deploy your model and then, you want to deploy it with kubernetes, and then you have like a whole another, problem to to solve and like solve the, auto scaling stuff and and all that and, then like I think on top of that just, all the SR work that you have to do that, stuff it's like you know what happens, when it goes down you know what happens, when you need to migrate something over, U what happens when there's a new GPU, you want to use there's just so much I, feel like we've really turned the corner, and I think stuff like Ai and ml um, really has helped here because people, want to move fast is like I feel we put, the build versus biod debate a little, bit to rest for a bit where where we, just don't hear it as much it's like hey, we we want to build it ourselves like, people are just like we want managed, Solutions yeah you got to go fast these, days because if you don't somebody else, is going to get there first and and, you're not going to have a business and, the market is remarkably Talent contrain, like yeah you know again like Dan you're, saying this and this makes me happy, because like just like so you know like, Dan's background is in data pop and de, and dealing with a all of these things, you know like it is I've cried myself to, sleep in fatal position, and so really it's just a e of use and, like it's e of use and the ability to, scale with you like and that's probably, like the two things which we try to, bring to our our customers and I think, just even outside of base 10 is probably, the the biggest opportunity I'd say in, machine learning infrastructure right, now is think of all the user stories, that are important now that weren't, important 12 months ago and maybe just, take a long a slightly longer term view, than what can I build around open AI API, which is like stopped a lot of attention, these are all places where Bas s is, thinking about going you like you know, we 
we will partner with people are doing, if you think about people the emails, like think about people with the, fine-tuning layer you think of people at, the training layer at the observability, layer the logging layer it's like the, entire new stack here to be built and, that's a massive opportunity m is a, really interesting company to look at to, be honest because you they were training, their models they helping people train, mod they were Mobly early but like the, value was very very clear to a pretty, sophisticated buyer um in dat bricks and, so to any folks building tools around, here like so much value to be at it it's, Green Field I have another question to, ask you because you are living in kind, of that world of model deployments and, the various ways that people are doing, this people are fine-tuning their own, models or just using open models I'm, wondering how you see the trends going, so you've already talked a little bit, about open models being available small, to big models and how people are hosting, them there's tons of people that are, also exploring this area around running, models kind of at the edge or in various, environments or on laptops and also, people that are exploring kind of you, already mentioned quantization and, running models on CPUs potentially, instead of gpus I'm curious for as like, someone who hosts like a lot of models, in the world world like what are you, seeing in terms of this trend like cuz, you hear about these topics but I don't, really have a good sense of like, obviously people are exploring those, things but are those people just like, extra loud on the Internet or like, certainly there's use cases Chris knows, them well for running certain things at, at the edge but for many people out, there that are like maybe building a SAS, platform or something you know that's, less relevant although maybe the costing, you you mentioned cost optimization as, well around like that sort of thing so, yeah how are you seeing as 
someone — I guess my question is, as someone who hosts a lot of models for a lot of people, what are you seeing as people's concerns, both in terms of costing and optimizing models, and in terms of deployment targets?

Totally. I think what we're seeing is remarkably early, to be honest. People are deploying at the edge, but I think there aren't enough of them today to think about a generalized opportunity there. You know, there's a company called OctoML that started with edge deployment, and I think they were just like, let's move, because that's where the opportunity today is. All the stuff that's happening around running these models on less and less hardware, or optimizing in X way or Y way, is remarkably intriguing, right? It's pretty crazy that we can take a model that we can barely run on the biggest CPU we can find, and supposedly someone has figured out how to compile it down, or rewritten it with C++ kernels, and all of a sudden it runs anywhere. That's pretty fantastic. But I do think we're still in the research phase there — the experimental research phase. We've seen a lot of people deploy those models, play with those models; we've seen a lot of interest in those models. But I can't really think of too many examples of people running those models in production just yet. It seems inevitable that over time it will happen — that is the arc that we are on.

Yeah, these models are getting smaller and smaller.

Yeah. When I'm not on the podcast, I'm in a world where it's all about things moving around in time and space, and a lot of those things will have AI capability on board going forward. So I agree with you — there's a lot of research going on, and it's not a solved problem; there's not a set of best practices yet, if you will. But it's an area that is majorly ripe. I'll be really curious to see if you guys, or another company, are able to leverage all the expertise and experience you've built up in the cloud and move out into those device areas.

Yeah, yeah — there's millions of devices out there just waiting for you.

Yeah. I'd say the challenge there is just around — I'm guessing a lot of these devices are a bit of a snowflake, and you can't build for one type of device and just go and apply that to the next device. I feel like some generalization probably needs to happen at the OS layer before we can do that. But I am also completely uneducated on edge stuff, so you probably have a lot more to say on that than I do.

Well, and it's interesting too — I sort of asked this in a leading way, because one of the people we're talking to — I'll genericize this — runs some equipment at the edge in manufacturing, and they have a hub at the edge which is air-gapped, which doesn't talk to the internet. But their whole next generation of things is going to be internet-connected, and when I was talking to them about doing some things with large language models in that environment, essentially where we got to was: hey, it's going to be more hassle for you to figure out some of these model optimization things than to just set up an API in Baseten or something like that and connect out — if that's where you're headed anyway, and it's not a military security concern, which this wasn't. So yeah, I think we'll probably see both. But kind of how I've categorized these things in my mind is: some people will want to run Kubernetes in their own infrastructure, and they have the expertise to do that, and if that's you, then great — you're one of maybe a few people on the planet, I don't know. Good on you, good on you. And similarly, if you're running a lot of models at the edge — which I know certain people are, and in certain industries it's really important — that's great, and that expertise will be there. But my sense is that for the majority of people, separating out that infrastructure concern of model hosting is a really, really useful way to think about things. So I don't know, we'll see — I'm always bad at predicting the future.

We talked to this one customer who was saying that for some reason the CEO bought a bunch of GPUs, and they literally have machines in the office. They're like, oh well, we're going to do — I think Amazon has like Kubernetes Anywhere or something, where you can basically do a hybrid sort of thing.

Yeah, yeah. You know, we run away from that opportunity, suffice it to say. But people are thinking about these problems; I don't know if there is a solution here just yet.

Yeah. Well, as you think about — obviously, and I know just from our discussions, you're helping a lot of people with really significant use cases in this space already, with what you're doing on the infrastructure side of model hosting. But as you look to the next — I can't even say the next years, because things move so quickly — as you look towards the future: what is not yet solved on the infrastructure side of model hosting, and what are you and Baseten really excited to dig into? What comes to mind? What are you thinking about?

Yeah, I think within the containers, that layer gets very interesting. You know, vLLM — which I'm sure you've played around with — and TGI: they're both great, but I think they still fall short of ready-for-prime-time just yet. vLLM, TGI, and TensorRT-LLM, which NVIDIA just put out — I think there are going to be more and more of these frameworks
and supporting these frameworks, I think, is going to be very, very key for us, and that's what we're really excited about. So we're going deeper at that layer, so you can bring your own framework in your container and really benefit from that. We're going to have first-class support for TRT-LLM pretty soon, and we already do for TGI and vLLM, and I think that side's pretty interesting.

We also have a big launch coming up — I'm happy to talk about it right now, actually — around multi-cluster. That's basically being able to, one, use your own compute — to bring your compute to Baseten. The control plane sits on Baseten, and the workload plane sits in GCP, Azure, AWS, or some combination of the three, and we'll keep adding clouds to that. I think that's very, very exciting, especially in the enterprise, because people want self-hosted.

That's huge.

It is. It's really nice; that's going to be big.

And then beyond serving — you know, we've already had one before and we learned a lot, and we ended up retiring it, but we're going to get into fine-tuning at some point. Like I said about the edge device stuff and the compilation stuff, fine-tuning is still early. I'm a little bearish on APIs that say "give me your data, I'll give you a model" — and I say that as someone who built an API that said "give us your data, we'll give you a model" with Blueprint. I think you need more controls: you need control over your base model, you need control over even the fine-tuning scripts, to customize that model. We'll start to think about that very soon — we're already doing a bunch of work with customers to make sure we're marching in the right direction. So I'm very excited about that: over time, Baseten becomes this place where you can run your models — great — but then you can also start to collect datasets around your model. Just imagine if you could give your model to Baseten and then opt in, and we basically write all your input and output data to S3.

That's beautiful. Yeah — essentially a level of caching for model inputs and outputs.

Yeah, exactly.

The multi-cloud thing really will be big for enterprise, by the way, just to that point, because I think most enterprises across many industries are recognizing that their future is a multi-cloud world — it's no longer tied to one. If Baseten is able to do that hosting and run a control plane, and you can deploy into any of the cloud clusters you happen to be in — and maybe different parts of the company emphasize one or the other — then that takes a lot of the challenge they're currently facing out of it. So it's pretty sweet.

It also opens up opportunities, right? Especially in the GPU-constrained world — you can get them from wherever you want. And once you have datasets as well, then fine-tuning just becomes obvious: okay, now I can fine-tune this. Or maybe it's even, hey, hook up your OpenAI endpoint through Baseten so we can collect that dataset, and then we can create that fine-tuned Mistral or Llama for you. I think there are a lot of interesting things along that whole chain, and as I said to you guys earlier, there's so much opportunity here for people building in the tooling layer in AI and ML. It's very exciting overall.

Yeah. Well, we appreciate you taking time out of doing great work in that layer to talk to us and share with our listeners. This was a great conversation, and hopefully we have you on the show in less than three years from now — if not at least three years, hopefully sooner. So thank you for joining us again and giving us an update and some insights around this, and we really appreciate what you all are doing, and appreciate you taking the time.

Of course. Thank you — thank you so much for spending time on the show.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe — subscribe now if you haven't already. And if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts — check out what they're up to at fastly.com and fly.io. And to our beat-freaking resident, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Deep learning in Rust with Burn 🔥 | It seems like everyone is interested in Rust these days. Even the most popular Python linter, Ruff, isn’t written in Python! It’s written in Rust. But what is the state of training or inferencing deep learning models in Rust? In this episode, we are joined by Nathaniel Simard, the creator of burn. We discuss Rust in general, the need to have support for AI in multiple languages, and the current state of doing “AI things” in Rust.
Leave us a comment (https://changelog.com/practicalai/242/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://www.neo4j.com/nodes-2023?utm_source=Changelogpodcast&utm_medium=nl&utm_campaign=Nodes&utm_content=Ad-1) – NODES 2023 is coming in October!
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Nathaniel Simard – Twitter (https://twitter.com/nath_simard) , GitHub (https://github.com/nathanielsimard) , LinkedIn (https://www.linkedin.com/in/nathaniel-simard)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
burn-rs (https://github.com/burn-rs/burn) : This library strives to serve as a comprehensive deep learning framework, offering exceptional flexibility and written in Rust.
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-242.md) | 709 | 19 | 2 | [Music]

Welcome to Practical AI. If you work with artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners for helping us bring you Practical AI each and every week: fastly.com, fly.io, and typesense.org.

[Music]

What's up, friends? There's so much going on in the data and machine learning space, it's just hard to keep up. Did you know that graph technology lets you connect the dots across your data and ground your LLM in actual knowledge? To learn about this new approach, don't miss NODES on October 26th. At this free online conference, developers and data scientists from around the world will share how they use graph technology for everything from building intelligent apps and APIs to enhancing machine learning and improving data visualizations. There are 90 inspiring talks over 24 hours, so no matter where you're at in the world, you can attend live sessions. To register for this free conference, visit neo4j.com/nodes — that's Neo, the number 4, J, dot com, slash nodes.

[Music]

Welcome to another episode of Practical AI. This is Daniel Whitenack. I am the founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

I am doing very well today, Daniel. It is fall weather out, and I'm enjoying getting outside.

It's fall; it's raining here today. Yeah, it's a little cloudy out, but it's nice weather, and so part of me wants to stay inside and do the fun things — especially given what we're going to be talking about today, right? — and part of me wants to get outside and enjoy the weather.

Well, it's that time of year where you just want to curl up next to a fireplace and burn some firewood.

Oh my gosh, you took us right there. I'll tell you what — before you say that, I'll just say this is an exciting episode coming up, because I think this is a little moment where we're going to talk about our industry maturing a little bit through one effort. And with that said, I'll let you go ahead and do the intro.

Well, the connection to burn is because burn is a deep learning framework that's built in Rust, and today we have with us the creator of burn, Nathaniel Simard. Welcome, Nathaniel.

Hi, thanks for having me.

Yeah, well, I admitted to you before the episode that I am basically uninitiated as far as Rust goes. I've looked at various articles, and I think I've run Rust programs just in a sort of hello-world way. Probably my biggest use of Rust has been through the Python linter called Ruff, which is written in Rust and is really great — so that's kind of a circular thing. But for those out there in our audience who might not be as familiar with Rust as a programming language, could you tell us a little bit about what Rust is, and why Rust?

Yeah. Rust is — I think it's falsely categorized as a low-level programming language, probably for historical reasons, but it's a very general programming language that can be used for high-level stuff as well as low-level stuff. The main reason to use Rust is when you need to go through multiple abstraction boundaries without having to pay for it in performance. So yeah, this is how I define it.

And I could be wrong about this, but I think one of its great features, along with Go, is having a really great mascot. Isn't it a crab for Rust — isn't that a thing?

Yeah, I think it's a cute crab. It's a cute crab that's the mascot, yeah. I think it's important for a programming language to have that. You have Python with the snake; with this programming language, we have Go — I don't know what it is for Go.

It's the Gopher, the Go Gopher.

Yeah, it's quite nice.

The Gopher — it's funny. You mentioned Go; that's actually how Daniel and I got to know each other. We met in the Go programming language community, and we were kind of the two data-oriented people at the time — this is going way back; there are many, many data-oriented people these days. Subsequent to that, I had been hearing about Rust for a while, and I got very interested in it, not only because, as you pointed out, it's a fantastic general-purpose programming language all around, but also because it has a lot of really amazing low-level features and performance capability that attracted me to it. I'm not nearly as accomplished in the language as you are, Nathaniel. I still love Go, but Rust is now another programming language that I have fallen in love with.

Yeah, I think Go is really well suited for web services — there's a lot of tooling around that, and it's really pragmatic to use it for that stuff. Rust is getting there, but we've got the whole async story behind that.

Yep. And for Rust itself — you mentioned people have this stereotype of Rust as a low-level programming language, but could you give some examples of the types of things you've built in Rust over time, or that are possibilities, just to give people a sense of what people are doing with the language? Obviously we're going to be talking about deep learning, which, thanks to you, is something that can be done with the language, but what are some of the other things people are doing right now with Rust?

Well, I think it was first created as a replacement for C++ to write browser engines — this is maybe why it was known as a low-level programming language — but now it's used in game engines, and it's also used to do web frontend: you've got Leptos and Dioxus, which are frontend libraries like React and Vue. So this is pretty high level.
We've also got command-line libraries that you can use, with metaprogramming so that it's very easy to handle your command-line arguments — all that kind of stuff. So yeah, there are tons of things built with Rust, high level and low level, and you can mix and match in your own applications. Of course, there are web services with Tokio, the async runtime — if you want to do web services, there are libraries for that too.

Yeah — one thing that's top of mind for me: it was one of the first languages that really embraced WebAssembly and got it out there. It's interesting, speaking as kind of a novice in the language and coming most recently from Go: there's always this Go-versus-Rust debate that you tend to see in articles out there, and I've really found room for both of them; I go back and forth at this point. I will point out that whereas Go is one of those languages with a runtime that manages memory for you, Rust has a really cool feature — it's not specific to what we're talking about today, but the compiler ensures that you don't have memory faults, seg faults, which account for something like 70% of all bugs in software according to Microsoft. So it has a really interesting approach to ensuring that you can produce bug-free software, or at least software with far fewer bugs. It's a pretty cool language. I'm just curious, as we're talking about the language in general: what's your favorite feature, or what are some of the things that made you turn to Rust versus some of the other languages you may have worked in?

Ah, this is hard, to choose just one feature. I think it's the whole package.

Said like a real Rust aficionado.

Yeah. But my favorite feature is not the reason why I started writing Rust. Now, I think my favorite feature is just associated types, because they can abstract data types — something that is really hard to do with other languages. So yeah.

And could you explain when that might be useful, or how it's useful — when might that come up in your programming?

Well, it's when you need to abstract over the type you're going to use, but you let the implementation decide the type. Normally, with generics, it's when you use the thing that you decide the type: maybe you have a list, and you have to say, okay, I want a list of strings. With an associated type, it's: okay, I've got a list, but I don't know of what — it's the implementation that decides what's going to be in the list. So sometimes it makes sense. For instance, in Burn we've got a backend trait, which can have multiple implementations, like CPU and GPU, and we have associated types for the memory, for the tensors, for all those things that you can manipulate at a high level without having to know which concrete type it is — it's up to the implementation to decide.

I'm just going to ask maybe an ignorant question, but I think some people out there might be wondering it. If I'm working in Python, that's a language where I don't have to compile my code, and some of the things we're talking about here with the compiler are things a lot of people don't think about — although there's some intersection. So could you describe what it looks like when you're writing a Rust program? Is it a statically typed language — you talked a little bit about types there — and it sounds like you talked about a compiler, so am I right that it's a compiled language, and then you can run the binary on some architecture? What is it like to work in Rust, compared to something people might be very familiar with, like Python? A lot of people probably listening to our episode have their Google Colab notebook pulled up right next to them, and they're doing all sorts of things
with a Python interpreter. What is the workflow and programming like in Rust, as far as how the language is set up and how you work with it?

Obviously it's a bit different than working in a notebook. Like you said, it's a strongly, statically typed programming language, similar to C++, Java, all of those older languages. For people who come from Python: maybe you're aware of the Python type hints that you can use — it's a bit like that, but you have to use them everywhere, in all of your functions and definitions. As for the workflow, something that I like is that Rust is, I think, one of the only programming languages that does this: when you write a function, you can just write the test below it. That's a way to get feedback on what you're actually writing, and it encourages good practice, because you're writing a test that can be reused all the time — it's not a script that you just run on the side. You can actually commit it, and it describes how the code should run; that's how you get interactivity. And since you have a package manager, Cargo, it's pretty easy to just execute the code you're writing.

Yeah, to follow up on that: Cargo, the package manager, is based on a lot of the best practices we see in some other programming languages — for instance, in JavaScript and the Node community you have npm, and there are several others — and the Rust community really drew from the best practices there. Another thing to follow up on, the compiler notes Nathaniel was mentioning: a lot of Rust developers see the compiler almost as a pair-programming partner, in a sense, where instead of just hitting compile from time to time, like you would in Java or something like that, the compiler is so comprehensive that it helps you write the right code, and you get to the end of the process and know that your code will actually work without runtime errors. So it's a different way of thinking about being a developer; it takes a bit of a mind shift to adjust to it.

This is very different — in Python, an important skill is just being able to read the stack trace, because you're going to have a lot of exceptions when you run your programs, and you have to learn how to debug your program. That's kind of a hard skill you have to develop when you learn Python. In Rust, you have to learn how to read the compiler errors, but at least they try to make that as easy as possible — sometimes you even get links to the documentation: it opens a browser, you can read why you have that error, and it explains the reasons why. So it's a different set of skills, and yeah, quite different from the workflow you use with Python.

Maybe just one more question about Rust in general before we dive into some other things. What is the Rust community like? Are there active channels where the community communicates, conferences, meetups? Is it growing, and how has it developed in the time that you've been part of it?

I'm not sure about all of the community, obviously, but I think it's pretty friendly. There are Discord channels where you can just go and ask your questions if you want to. There are active GitHub issues — the language is open source, so if you have a problem, just open an issue and people may help. It's a pretty inviting community, and I think that's part of the reason why it succeeds — if you don't answer questions and help people use your technology, it doesn't really work out. I've never been to a conference for Rust yet, but I know there are many, so maybe I'll go to some later.

You know, one of
the the topics that has been kind, of a recurring topic between Daniel and, me over a number of episodes we've been, tracking kind of the the maturity, process of the AI community and and kind, of what it takes to kind of level up and, to take it to the next level and uh on a, number of different occasions we've, talked about the fact that if you look, at other communities uh that have Arisen, before this one often it takes kind of, broad support whereas uh in the kind of, the early days you know that we're, really still in in my view of modern AI, it has been largely dominated by a, single programming language which most, of our listeners are are very aware of, which is python which has really been, kind of the focus of where all the work, is uh it's where all the apis have been, focused on and everything and we've, discussed quite a bit about how for AI, to mature it needs to become more, broadly available to other languages and, so that uh as you have different types, of use cases addressing you know, different business needs and that, requires languages other than just, python all the time how do you get to Ai, and what kind of bridging do you need to, do it leads Nathaniel is I wanted to ask, you is it's clearly a need that the, community has had to be able to start, getting rust and other languages in, there I'm curious how did you approach, this what was it about trying to get, rust working as a framework that could, work with AI tools of the day how did, you get into that what was your, motivation what did you see as the need, at a personal level well I started, working on bird because I was, experimenting with from this n network, and I wanted to make something a bit uh, not standard let's say that and I needed, like multi- trading concurrency and, stuff like that and it was really hard, to do with python and I have a software, engineering background so I said to, myself well if it's hard for me to do, that then maybe it's too hard for any, researcher to 
do that. So that's maybe why we don't yet have an architecture for that kind of stuff. I said, well, let me try and make a framework in a language that has support for high-level programming and concurrency and all those things, and that's pretty much the description of Rust. So that's why I started writing a framework in this language. It was just a personal project for a long time; I was just experimenting with it, and it grew with time. When you first started thinking about Burn and these problems you were looking at, what was the current support, in Rust, for doing everything from quote-unquote traditional machine learning, like random forest, SVM, whatever, all the way up to deep learning? What was the state of things? I'm looking at your Burn repo and I see you've at least been submitting pull requests since July of 2022, and I'm sure some of it goes back further than that. Back in those days, what did the ecosystem look like in terms of its support for these things? Well, I don't think there were a lot of deep learning frameworks in Rust. There were some experiments, but nothing really pragmatic that you could use. I think there was a library for normal ML, like SVM and random forest, in Rust. I never used it, but I don't think it's comparable yet to scikit-learn and PyTorch, which are very complete. It's interesting, because some of the early stuff we were doing in Go was similar. There were certain packages, whether for kinds of regression or hypothesis testing, statistical things, but not really a robust deep learning framework. One of my questions would be: in Go, I know one of the struggles with trying to support really robust deep learning is not necessarily that you can't create a nice package with a good API, but that a lot of these specialized libraries and toolkits, like CUDA and GPU support, make things a little bit
more difficult. So it might not be that, but what did you see at the time you started working on Burn as the big challenges on the Rust side, and has that been the case as you developed the package, or have other things become the dominant challenges over time? Yeah, all those things are hard to work with: CUDA, having your own GPU kernels, all the drivers, not necessarily easy to install on all platforms. There is a GPU library in Rust that allows you to write kernels, wgpu, which targets the web. But when I started working on Burn I acknowledged that it was pretty important to be generic over the backend, so that we can write the best backend for the specific hardware you're actually targeting. It's probably always going to be faster to write CUDA for NVIDIA, to write low-level C or Rust, maybe with SIMD support, for CPU, or to write with the Metal graphics driver for Mac. I was aware that one backend cannot be written for all of them, so I just defined the API and used LibTorch as a backend, because there were already bindings to LibTorch in Rust. This allowed me to iterate over the abstraction, over the user-space API, and not necessarily worry about speed and writing all of the kernels, just getting the abstractions and the software architecture in place. It's more pragmatic; it's probably about as fast as LibTorch by default, and then I can just go and write more kernels afterwards, which is what we're doing right now. I'm curious, given the low-level capabilities that Rust brings to bear that so many other languages don't have, when you're looking at GPUs over time, and I know you're talking about using LibTorch in this case, do you think that as you move forward, that low-level capability that you have in this language, that other languages don't bring to bear, will be a helpful part of
developing and maturing Burn over the years ahead? Does that low level give you an advantage that you might not have with the other languages we're trying to integrate? I think so, mostly in the part where we need to handle memory. That's an important part of a deep learning framework: you don't want to waste memory. We can leverage the type system of Rust to actually do graph optimizations and that kind of stuff, which we're going to work on soon, and I think it's going to be easier to do that with good performance in Rust than it would be with another programming language with garbage collection, because Rust has fine control over memory. It's not necessarily about writing GPU kernels; when you do that you're actually writing compute shaders, so that part isn't specific to Python or C++ or even Rust. But if you want to handle memory and write the optimization pipeline, then I think Rust can be really useful. And just to get a sense of the current state of Burn: what is possible in terms of support right now, and what are some of the most requested things that you would like to work on but that aren't there yet? I don't know, there are so many things I want to work on; time is just limited, so it's quite hard. What I'm really excited to work on is kernel fusion, and really optimizing the compute pipeline with lazy evaluation. That's something I'm really excited to work on. Could you dive into that a little, and what that might mean for a user specifically? Yeah, in terms of the user it's just going to be faster. These are really optimization techniques that the deep learning framework can use, so there isn't a lot of impact in terms of user API and usability, but it's just going to be faster. Gotcha. And would you say that, in terms of what people are doing with the package now, you mentioned that part of what got you into it was building kind of
experimental models or architectures that maybe you were experimenting with on the research side. So I'm wondering, with this package, what are you seeing the people that are using it most doing with it? Is it that experimental research implementation side? Is it taking models that aren't experimental and embedding them in Rust applications where they wouldn't have been able to before? Is it something else? What are you seeing people do over and over again? I think a lot of people are using it because it's easy to deploy on any platform. Because we have different backends, you can deploy on WebAssembly, you can deploy on even a device without an operating system, so it's pretty great in terms of deployment flexibility. But even though I started the framework because I had a research idea I wanted to pursue, the goal of Burn isn't necessarily to be only for research. I wanted to go in with kind of a blank sheet, thinking about all the constraints and who is going to use the framework. So I'm always thinking about the machine learning engineer's perspective, the researcher's perspective, and then the backend engineer's perspective, the one that is going to write the actual low-level kernel code, CUDA kernels and stuff. So there are different user profiles or use cases that you can assign to the framework. Yeah, as a follow-up to that: I noticed that you had quite a few people making contributions. For being a relatively young project, you have a lot of people involved in it, so it looks like it's really getting a lot of traction. How do you organize the work around it and satisfy the interests of each of those personas along the way? Is there one that tends to lead, or do you tend to have certain people that do different ones? How do you approach that? To be
honest, I'm not sure. I think the key is just to be reactive. If there is an issue, just go and comment on it; if there is a bug, try and go fix it. I think the most important work I can do is in terms of architecture, like setting the stones in place. Then if I want to extend things, maybe add more tensor operations, or if I want to add more neural network modules, I can open issues and people that are interested can just assign themselves and actually do a pull request, and I just have to be really conscientious about that: do code review correctly, be kind, and I think that's pretty much it. I don't have any other secret. So Nathaniel, say I deploy a lot of models as part of my day job, I'm interested in Rust, and I'm interested in maybe taking some model that I've experimented with a little in a Colab notebook or something like that, and I want to, like you said, have the support for multiple backends, implemented in a maybe more efficient application. What would be the process someone would have to go through to, let's say, get one of the kind of popular quote-unquote models these days up and running in Rust using Burn? Is that something that's possible right now? How are people pushing the edges with respect to that? Well, I think there are two different strategies. We're actually working on being able to import an ONNX model, so if you have maybe an image classification model, then maybe our ONNX import is going to work. It's still a work in progress, but if there is no crash, it's going to work; not all operations are supported. But for other models, you maybe need to write the model from scratch using our framework and then translate the weights, and you would be fine to deploy it. So it's a bit of work, but working with Burn is quite intuitive. The API is similar to PyTorch, the modeling API at least, so it's not that hard, depending obviously on the size of the model and the complexity of the
model. Yeah, and I think I saw a few on the repo where people have already done this. What are some examples of models people have brought over into Burn? Yeah, I think there are community models for Llama, for Stable Diffusion, for Whisper. This is thanks to the community; I didn't actually port those models myself. But since it's open source, I think if you actually do the work to port a model, it's great to share it with the community so people can start using it. So yeah, we have a few, but we would like more. Yeah, so a call-out to the listeners out there that are Rust people: check it out and submit some of your own model implementations. That's a great way to contribute, I'm sure. You mentioned it having a similar API to PyTorch, and I'm looking through some of the documentation here. I'm wondering if you could comment on a few of the things that you call out as features of Burn and explain what you mean by some of them. I think we already talked a little bit about the customizable, intuitive, user-friendly neural network module, this kind of familiarity with a PyTorch-like API, though maybe there's more to that. But you also mentioned comprehensive training tools, including metrics, logging, checkpointing. Could you describe that a little in terms of what the thought process is in the framework around these things, which are definitely important practically, as you said, for the machine learning engineer, for the actual practical person who's trying to build models? Yeah, and even researchers sometimes don't want to write all of the training loop; that's not the core of their research. There is a library called burn-train which tries to bring a training loop to the user so they don't have to write it. You've got a basic CLI dashboard where you can follow all your metrics, and you have your logging, so if you want
to maybe synchronize to a Google Drive account, you can probably do that. So it's similar maybe to PyTorch Lightning, for the PyTorch users that are familiar with that project. We also have that for Burn, and having it just makes it easier to get started with the framework. I think it's essential now, if you're starting a new framework, to provide that. We already talked a little bit about the versatile backends. I don't know if you want to say any more about the options there. You mentioned Torch and WGPU, but I see a couple of others mentioned here. Are there any call-outs you'd like to make, both in terms of other options and when those other options might be useful? People in the audience might not realize when you would want to use a Torch backend versus something else. Yeah, I think the Torch backend is probably the fastest if you have an NVIDIA GPU. For the CPU I'm not sure; it depends on the model. But we also have an ndarray backend. ndarray is similar to NumPy, but for Rust. It isn't maybe the fastest backend, but it is extremely portable: you can deploy that backend everywhere. So if you've got a small model it can be very handy to have that, or for writing unit tests and stuff like that. We also have a Candle backend. Candle is also a new framework, built by Hugging Face in Rust; they're trying to make it easier to deploy models with it. We actually have their framework as a backend for Burn, so we can benefit from their work. And we have the WGPU backend as well, so we can target any GPU. So if you don't have NVIDIA, don't worry, we have you covered. Awesome. I also noticed on your GitHub repo, in addition to familiarizing us with the capabilities and features, you also have the Burn Book, which I assume was maybe inspired by the Rust Book; that seems to be a common thing. What is the Burn Book, and how can
we best use it? What's it for, in your mind? Yeah, the Burn Book is to help people get started with the framework. It's like a big tutorial slash reference that you can use to actually start using Burn. At the beginning it tells you how to install Rust, how to get started with the language, how to make basic models, the training loop, the data pipeline, all of that, with all the explanations and stuff like that. So it's really to help people get started with the framework in an easy way. Of the people that are coming through and learning from the Burn Book, interacting with you on the repo: do you see a lot of people coming from the non-Rust community, because they have performance-related needs or maybe their company is exploring deploying things in Rust, that sort of thing, so people coming from maybe the Python community? Or do you see more Rust engineers who are already building things in Rust, and now that everybody wants to integrate AI into their applications you have the influx from that direction? Are you seeing both? Which side is coming your direction more? I'm not sure about the background of the users of Burn, but I think the main pain point is that they want to deploy their model reliably, and they're coming to Burn to do that. Some of them, once they get familiar with the framework, actually port the training part as well, so they can have all of their machine learning workflow working with Burn. So it can be people with a Python background or Rust engineers, I'm not sure, but I think that's the main traction point. I will offer kind of a Burn newbie perspective on that myself. When I ran across Burn and reached out to you, I was really excited about it, in part because as this industry is maturing and affecting many other vertical industries out there, we are seeing AI capability being pushed out from only
being in data centers and such, out onto the edge. And you can define the edge in many ways, obviously, but the place where processing is happening, and even training is happening, is evolving over time. If you look at businesses and their use cases, the fact that they need AI alongside all the other industry things they're doing, they may be platforms that are mobile, such as the autonomous cars we have out these days, and you name it, all sorts of stuff that is increasingly relying on AI. And because those are autonomous things, they need the performance, in many cases the safety, and the low-level performance capability that Rust offers. I know I got super excited when I came across Burn, because I'm in this AI world, but I'm also in this high-performance, things-moving-around-time-and-space world as well, and being able to combine those into one, to have one language that is able to do both at the same time and deploy out to the edge in a very safe and highly performant way, was hugely exciting. It's been a point of conversation I've had with colleagues for quite some time. So I think you've hit a sweet spot with Burn, and as people become aware of it you'll probably get a lot more uptake, because it solves what would otherwise be a big problem they're going to be faced with in the years ahead. Yeah, and I think it's not just about that. There is a good amount of solutions to just deploy inference models and stuff like that, but they don't cover the training part, and I think it's valuable to be able to do training everywhere. Maybe the next generation of models is going to call backward during inference, we don't know, so it's cool to have one tool where you can do both on any platform. As you look to the future of the project itself, I maybe have two elements
to this question. What are some of your hopes for what Burn becomes in the future as a framework, in terms of the sweet spot and what it does really well, what people turn to it for? What is your hope and vision for the project? And then for yourself, in terms of your own work and how you're using the project: what is your hope for the future? You have your own interests, obviously, in terms of developing AI-related applications, so I'd love to hear both of those, if you have a comment on them. I think I would like Burn to be widely used for more complex models. I think Rust really shines when you've got complexity. If you've got a convolutional neural network with just a few layers, maybe the benefits of using Rust aren't as massive, except maybe for deployment. But if you've got big models and a lot of complexity in the building blocks, then I think Burn will shine in that place. So I would like to see innovative new deep learning applications being built with it, as well as the normal deep learning models we're familiar with, like ResNets and Transformers, all of those, but deployed on any hardware, so that everybody can run some models locally, maybe not the big ones, but at least the small ones. And what I would like to do with it is maybe more research, like I said previously, on maybe bigger models, maybe asynchronous neural networks, trying to leverage the concurrent nature of the framework. Yeah, and as we get close to an end here, because it is a podcast and people are listening in their cars, maybe taking mental notes, or on their runs: where do people go to find out more about Burn, and what would you suggest, let's say for a newbie to Burn? What should they do to get familiar with it and try things out? Where do they go, and what would you suggest they start with? I think the best place to start is to go to the website, it's
just burn.dev, pretty simple. From there you can just go into the book that we spoke about and follow along. If you are not familiar with Rust, we provide links so that you can get familiar with the language, and then you can come back afterwards and follow the rest of the book. If you're interested, you can also go to the GitHub and try the examples; you can run them with one command line, so you can try to do inference, or even launch a training on your own laptop. That can be great. So yeah, that would be the place I would go to start. Awesome. Well, thank you so much for taking the time to join us, and not burn us, you are very kind. So thank you, thank you for your time. We're really excited about what you're doing and hope to have you on the show maybe next year or sometime, to see all the exciting things that are happening in deep learning and Rust and Burn. So thanks so much, Nathaniel. Thanks a lot, man. Thanks to you for having me. [Music] If you enjoy the music you hear on Practical AI, you'll be happy to know we released two full-length albums for purchase or streaming. Just search for Changelog Beats in your music app of choice and check them out. Volume zero is called Theme Songs, and it includes special remixes in addition to the classics, and our first volume is called Next Level, featuring many of the video-game-inspired tracks you've heard on Changelog podcasts over the years. Check us out: Changelog Beats. Thanks once again to our partners fastly.com, fly.io, and typesense.org. That's all for now, but we'll be back with more Practical AI goodness next week. [Music] |
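Nathaniel's original motivation for Burn was that multi-threaded, concurrent workloads were hard to express in Python, while Rust makes them natural and checks them at compile time. As an illustration of that point (a std-only sketch, not Burn code; the `parallel_sum` function and its chunking scheme are invented here purely for demonstration):

```rust
use std::thread;

// Illustrative sketch (not Burn code): sum a slice on several OS threads.
// The borrow checker forces each thread to own its chunk of data, so the
// data races you would have to debug at runtime in other languages become
// compile-time errors here, the trade-off discussed in the episode.
fn parallel_sum(data: &[i64], n_threads: usize) -> i64 {
    // Round the chunk length down, but never below 1, so chunks() is valid.
    let chunk_len = (data.len() / n_threads.max(1)).max(1);
    let mut handles = Vec::new();
    for part in data.chunks(chunk_len) {
        let owned: Vec<i64> = part.to_vec(); // move an owned copy into the thread
        handles.push(thread::spawn(move || owned.iter().sum::<i64>()));
    }
    // join() surfaces any worker panic instead of failing silently.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let data: Vec<i64> = (1..=1_000).collect();
    println!("{}", parallel_sum(&data, 4)); // 500500
}
```

This only sketches the ergonomics being described; a real deep learning workload in Burn would dispatch tensor ops to a backend rather than spawn raw threads.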
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI's impact on developers | Chris & Daniel are out this week, so we’re bringing you a panel discussion from All Things Open 2023 (https://2023.allthingsopen.org) moderated by Jerod Santo (Practical AI producer and co-host of The Changelog) and featuring keynoters Emily Freeman and James Q Quick.
Leave us a comment (https://changelog.com/practicalai/241/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://www.neo4j.com/nodes-2023?utm_source=Changelogpodcast&utm_medium=nl&utm_campaign=Nodes&utm_content=Ad-1) – NODES 2023 is coming in October!
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Emily Freeman – Mastodon (https://hachyderm.io/@emilyfreeman) , Twitter (https://twitter.com/editingemily) , GitHub (https://github.com/emilyfreeman) , Website (https://emilyfreeman.io/)
• James Q Quick – Twitter (https://twitter.com/jamesqquick) , GitHub (https://github.com/jamesqquick) , Website (https://www.jamesqquick.com)
• Jerod Santo – Mastodon (https://changelog.social/@jerod) , Twitter (https://twitter.com/jerodsanto) , GitHub (https://github.com/jerodsanto) , LinkedIn (https://www.linkedin.com/in/jerodsanto)
Show Notes:
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-241.md) | 7 | 0 | 0 | [Music] Welcome to Practical AI. If you work with artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners for helping us bring you Practical AI each and every week: fastly.com, fly.io, and typesense.org. [Music] What's up, friends? There's so much going on in the data and machine learning space, it's just hard to keep up. Did you know that graph technology lets you connect the dots across your data and ground your LLM in actual knowledge? To learn about this new approach, don't miss NODES on October 26th. At this free online conference, developers and data scientists from around the world will share how they use graph technology for everything from building intelligent apps and APIs to enhancing machine learning and improving data visualizations. There are 90 inspiring talks over 24 hours, so no matter where you're at in the world, you can attend live sessions. To register for this free conference, visit neo4j.com/nodes. That's Neo, the number 4, j.com/nodes. [Music] Hello, Jerod Santo here, Practical AI producer and co-host of the Changelog podcast. Chris and Daniel are out this week, and I just got back from Raleigh, North Carolina, attending the All Things Open conference. While there, I moderated a panel all about AI and its impact on developers, featuring keynoters Emily Freeman and James Q Quick. We thought you might enjoy listening in on that discussion, so here it is. The opening question didn't get recorded, but I asked each of them to introduce themselves and tell us all if they're long-term bearish or bullish on the impact of AI on developers. I'm James Q Quick, developer, speaker, teacher. I've done some combination of those things professionally for 10 years now, which is pretty fun. And on the AI front, this is
something I've actually talked a lot about. I really enjoyed your talk, by the way. My first pitch was an AI talk, and they were like, no, we already have somebody that's taken that. My take, which I would love to get into more, is a very, super positive one, and the thing I've talked about a lot recently is people's fear of it replacing their jobs, and hopefully changing your mindset around that fear, changing it into a more positive thing. So hopefully we can get more into that long-term. I love that we're starting with bullish or bearish, like, yes, no, go. I'm Emily Freeman. I lead community engagement at AWS. That means I come to communities and conferences like these to really show up as a partner for the communities that already exist. I ran developer relations at Microsoft prior to that, and I've certainly been in the community for a long time; I wrote DevOps for Dummies and 97 Things Every Cloud Engineer Should Know. I am bullish on artificial intelligence, because it's happening, right? This is happening. We have to kind of make it our own and lean into it rather than try and fight it, in my opinion. Any response? I guess you guys agree? Yeah, we should have made that more... yeah, we should have set this up so we have a debate to kick it off. So my response, okay: agreed. All right, well, let's reel it in then. So that's long-term, both very positive. I think I'm also in that camp, so we won't debate too harshly on that. But what about today? Where does it stand? I know we've had some good demos, we have people using certain things. It's here, we think it's staying, so to developers it sounds like the message is: it's time to adopt. But how do I get started if I'm just seeing the demos on social media, or my colleague talks about it and shows me what they're doing with it? What do I
do today to actually start my AI journey? I think getting started today is really about acknowledging where we're at with AI and the tools that are available to us in this moment, and learning as much as you can. This isn't new to us, right? We have to learn all the time and adapt our skills and grow as our technology grows. So I believe we have to, again, lean into AI and learn these things. I mentioned prompt engineering earlier; I don't think it's a permanent role, but I think it is something we have to engage with right now. Learning to design our prompts, to really lean into the specific vectors of the model that you're using, is important. Learn as much as you can about how it actually works on the back end. I'm doing this right now; I don't have a degree in data or artificial intelligence. I'm learning, I'm watching the content that already exists and gleaning as much as I can from it. That's been a great experience, and it's opening my eyes to how we proceed with this. But I think for now it's just exploring the tools, recognizing the strengths and the limitations, and being ready to adapt and change as we move forward. Perfect. I love the adapt and change, and I think if you don't adapt and change and embrace AI to a certain extent... this is dramatic, but you'll get left behind. The reason that's not as scary as it sounds is that it's been the case with every technological advancement we've ever had, right? If you were writing machine code 30 years ago and you were still doing that, you'd not be very productive. Maybe some of you are, and that's cool, but we have abstractions, and we continue to have abstractions, where the world we live in as developers is totally different than it was 5 years ago, 10 years ago, 20, 30 years ago. So this is just one of those things, and it doesn't happen overnight, it's a progression. And
and, so I think you look at like what's the, easiest way can you add an extension to, your text editor to give you prompts can, you go to chat gbt I use that almost on, a daily basis not just for code I, actually love so for code but just a, creativity standpoint like give me an, idea of a project I can build or give me, questions to ask my Discord is actually, something that I've done um so I think, that's kind of the the easy way to do it, and I think like where we are now is, really I guess very similar to what you, said about like the Ironclad stage I, forget the exact phrasing but uh, basically the verification phase where, everything you do with AI has to be, verified and and that means that our, jobs don't go away because we have to be, developers and have that knowledge to be, able to do that verification process um, but I think that's you're able to get a, lot but I think you also have to invest, a pretty good amount of time into the, verification process to make sure that, it works it works correctly and then if, you're doing it for things outside of, code it also fits your tone so I use it, for blog posts and ideas for content and, things but I have to like take that, output and convert that into something, that is genuine uh for me so there's a, lot that goes into just confirming, verifying and tweaking the output that, that you get I also just wanted to say I, think there is currently a bit of a, misunderstanding about what a hype cycle, actually is and so you'll hear this, phrase that AI we're in a hype cycle of, AI and they're right but the hype cycle, if you actually go look it was made by, Gartner um thank you Gartner and so it's, really just this this sort of extreme, expectation right and so we're very, excited about it right now and we, haven't begun to really see the, technical limitations and the, difficulties that we will come across, later so being in a hype cycle does not, necessarily mean that AI is going away, um it is just in deleted 
right now, right? Well, to James's point, I think very few of us are writing machine code, but the ones who are, are getting paid very well to write it, like ridiculous amounts of money. COBOL is still a thing, and still will be for time to come. So in my experience, I think that AI codegen in the small is very much here: at the function level, at the line level, maybe at the module level. As you get into broader strokes, understanding the system at large, those things really are still in the mind of the developers at this point. Do you think it's always going to stay there? Do you think it's going to move higher and higher up the abstraction, to where I can say, hey AI, make me a Facebook for dogs, and it will say, okay, I'm done? Please, no. Well, that's the ridiculous endpoint, but... yeah, I mean, there actually is one of those, or was, perhaps. If we look at the way that a client would hire, for instance, an indie developer, right, like a contract freelance dev: they have a business idea, and the client has some sort of idea of what that business is, and so maybe they're at the user story level. Now, most people aren't quite there yet; you have to help them flesh that idea out. But at a certain point there becomes a feature that is given to that person, and then they go and implement it, and right now I think it's fair to say that that person will use AI tooling in order to do that faster, better, stronger, etc. But is there a point, and if so, please prognosticate when that point comes, when I can simply be the writer of the user story, and we don't need anybody in between me and the computer? I think we're a long ways off from that. I think anytime you're talking about that kind of abstraction, even with the best developer tools on the market right now, the difficulty really comes in plugging everything together, right? We have access to so many different tools that operate wonderfully and provide incredible benefits, but
making them all integrate and flow together is always the hard thing. And I see artificial intelligence as the exact same thing: it will do really well in small pockets where we need it to, and plugging it all together will be the last moment where we're involved.

I think the abstraction just gets higher and higher — and again, that's been the evolution of humankind, right? That's the reason we have technology and inventions: so that we don't have to do the stuff that, looking back now, 100 years or whatever later, we wasted a bunch of time doing. All the abstractions we see in development — you no longer have to manage your own servers, you're no longer doing patches, you're no longer doing firmware updates and that kind of stuff — that's just a continual path we'll go down. And I'm glad you started with "it's a very far way away," because people's irrational fear, I think, is that tomorrow they lose their job because somebody used ChatGPT to build the app, and that's not anywhere near the case. But I don't see why the evolution of this wouldn't be exactly that, where you say "I want Facebook for dogs" and it gives it to you — because that code and that logic is out there; it just takes a lot to put it together and figure it out. As for prognosticating when — I won't guess the years, but that could be the goal.

One interesting thing: in doing some research for one of the talks I gave, I came across the Jevons Paradox. Anybody heard of that? No? Cool — so it makes me sound smart. A lot of people fear that if something can do my job faster, that means I'm going to lose my job, because it's going to do my job. But the Jevons Paradox points out that we're only thinking in the mind state of what we're capable of doing now; we're not thinking forward about what, as a whole, we're capable of doing with these
augmented tools. So we can't even imagine what problems we can solve in 10, 15, 20, 50 years. Even if right now we have this idea of Facebook in our heads — we know what that is, tangibly — and even if ChatGPT or whatever can build that, we don't know what problems we'll be solving then that are infinitely more difficult. So it's going to keep going: tools are getting better, but we're continuing to do more, I think, as an ecosystem.

Okay, so we're going to get past Facebook, is what you're saying.

Yes.

Okay, appreciate that. Well, how about the other side — I know we're all optimistic long term, but what about this very real possibility: I'm a C-level executive, I'm watching TikTok, and somebody else on TikTok who's a C-level executive coach says, "Look, developers are getting more and more efficient thanks to AI. They are now 40% more efficient. You can just cut that directly off your top line and save your bottom line. We're in an economic downturn. You need to cut your engineering team today." That seems like a very real fear and a very real possibility. What are your thoughts?

Sure, I'll take it. No danger in that question at all. I think plenty of CEOs are probably watching those kinds of videos on the TikTok.

I don't know why that amuses me so much — like, a CEO...

Yes, I call it "the TikTok" because I think it's funny. Remember when Facebook was "the Facebook"? And I'm a millennial, so, you know. Millennials have come up a lot today.

In a good way!

In a good way, yes. We're good, despite what the Baby Boomers say. So I think it is a very real possibility to cut, and for that to be the impetus and the thinking around this. You see it throughout history: as we become more efficient and effective, instead of earning ourselves more time to live the life that we want, we prioritize work, and we're always chasing that edge of the bottom line. Societally, I think we could do better
with that. But it's always going to be a reality, and this is where we have to learn and grow and adapt. If we sit still — to James's point earlier — that will not behoove you long term. So learning, adding value in different ways, and adapting to this new technology is key, I think, to increasing our value and having some more longevity in our roles. That said, I think the roles are going to change — and again, we're not new to this; our roles have changed completely. We had sysadmins, and now you rarely see that job title, but the population of people in technology roles has only grown from there. So I think there's extreme opportunity if, again, we lean in and we don't approach this with a fear-based mentality of trying to dig in our heels and maintain the current system as it stands.

I feel like we need to be more controversial. No, I don't have anything — I'm saying I like all those things; I agree as well. To your point earlier in your talk — again, I forget the exact phrasing — we kind of had to go through the ironclad situation to learn what the pitfalls were, and then get to the next iteration of building ships that was so much better in so many ways. And I can see a scenario where what you're describing happens, and I can see those companies getting bit in the ass really quickly from not having developers when things go wrong. Because, as we all know, no matter who writes the code, stuff goes wrong, and somebody has to fully understand it. Maybe somebody with a non-technical background can go into ChatGPT and say, "Here's what I'm getting — what's going on?" But in that case you probably really want someone with technical experience. I just think it's such a slow process — although it seems super fast, it's much slower than we give it credit for, and we go down this rabbit hole of really thinking it's happening now, and it's just not. And if that
has happened with a company, please share the story, but I just haven't heard of it. I can see a time — and I think there will just be learnings along the way — but I also go back to the Jevons Paradox: we're still approaching this conversation with a fairly limited mindset of what we can imagine building right now, and we just don't know what else we'll be building. I 100% agree jobs will be augmented, but not really in any different way — although maybe slightly accelerated — than how they've been augmented over the course of time, because that's what inventions are for. So I just go back to that whenever I go down the fear rabbit hole, or the question marks about the benefits going forward.

Okay — if there was a 30% cut and I didn't want to be a part of it, what would I do today?

Learn. We have to learn. And you know, ChatGPT has come up a lot, and that's sort of the leader right now, but we don't know that it's going to stay that way. You're going to see a ton of new tools come forward; you're going to see a ton of startups get funded — this is where venture capitalists are putting their money right now. There are going to be a lot of new tools entering the market, and a lot of churn, as we home in on who the big players will be long term. So I think: learn. I also think you have to make demands where you can. I've talked about responsible AI — this is super critical, and we are in a place where it is truly our responsibility to push for it, and to push against the market forces that would have us moving forward quickly with a profit-based, profit-first approach. We have to go forward with a set of guidelines and standards that protect everyone, and use this in a responsible way. That, for me, is key as we proceed — really owning, as the people who not only build these tools but utilize these
tools, that we are clear on our approach and our tolerance of that behavior.

I'll double down and go a little deeper on the ownership piece of learning. If we're really honest, we're in a really shitty time right now, economically and job-wise. I feel like every month I have a friend who reaches out, or I hear about someone who got let go. I was let go from my role before this really started — like a year and a half ago, that summer. The reality is that it's happening, and it really, really sucks, and it's really, really hard. But your skill set has never been more important. Your ability to communicate what you bring to the table has never been more important. I talk about this a lot from a career perspective: you have to be able to share your benefit and your value, and you have to be able to communicate that effectively — and confidently — when you go into potential interviews, or just in how you show up and talk to people in general. That has never been more important.

I also think — and I go back to this a lot, because it's very important to me — community. You never know when someone in this room might be the person who helps you find your next job. You never know what one of those connections will become. And I always clarify this from a networking perspective: it doesn't mean finding people who work at a company so that when you apply there, you have an in. That's not why you do it. You invest in the community, you show up, you're part of the conversations, and you're genuine — and that will have a significant return, or at least it can.

A little personal story: when I was let go from my job a year and a half ago, it was kind of a debate for me whether I was going to go work for myself full-time, something I'd been thinking about for a while. So I posted on Twitter saying, if anyone is hiring for developer relations or management
positions in that realm, send me a message. And I got 50 or so DMs of people not only saying "we're hiring," but also, "we'd like to hire you." I don't say that to brag. What I'm saying is that with my network at that point, I had nothing to worry about, because I could find an opportunity — because I'd earned trust in the community. So all the people you're sitting next to, the people you talk to, the people on stage — you never know what that's going to do for you. In recent times there's never been a more important moment for your skill set to be very sharp, for you to keep evolving it, like you said, and also for your network and how you show up in community. Because you just never know.

I love that emphasis on community. And we are not a collection of individuals who form a community; we are a whole. Not everyone should have to have the gumption or tenacity or privilege to demand certain things from their specific workplace or role, and part of being a community is protecting each other, standing up for each other, and showing up for each other. If you have the room to do that, or the natural personality to do it — the more you can be a leader in this community and push for those things in your own workplaces and locations, the better off we will all be.

Can I add one more thing? I love that — it just sparks so much. The automotive industry right now is going through strikes, and they did it in an interesting way: in bits and pieces, taking more people off the line over time so that they could stretch their budget and sustain it longer. There's also — I forget what the acronym is for the writers — the writers' strike. That's, I think, the power of community: people coming together to stand up for what they think they deserve. And I don't know
that we're there right now, but I think it's an example of what people who come together with a common goal can do for an entire industry. And maybe we get to a point where we unionize against AI — I don't know, maybe not — but the power of those connections, I think, can lead to making a real positive influence wherever we end up.

"Unionize against AI" — you heard it here first. Okay, let's dive into the adoption weeds a little bit. We talked about learning, adopting, trying things. What have you all found is particularly beneficial today — how would you go about adopting, and what are the things that let you down? I'll give one, because I write Elixir, which makes me a little bit weird: AI does not know Elixir very well. So yes, it's here, but it's not evenly distributed. For our more obscure technologies, you're going to get worse generations, you're going to get worse advice — it's all good — so I use it less in that context. But when I'm writing the front-end stuff, it knows JavaScript very well. That's just an example of what's good and what's not. I've also heard the advice that you should use it to generate your tests and then write the implementation yourself. Maybe that's a good idea; maybe it's backwards — maybe I should have it write the implementation and I write the tests, because I am the verifier. So: thoughts in the weeds on what it's good at today? You don't have to go into the future, but if I were actually going to go code after this, and I was going to adopt or die, what would I do that would really level me up?

You can probably speak more, actually, to how good or bad it is in different scenarios — I can't do that. I've used it, but I come from the perspective of knowing nothing about how AI works. It's interesting that you were saying you're learning about how it works and what the underpinnings are, because I've taken a
different approach: I'm just a regular developer, I have none of that knowledge, and I'm just seeing what it does for me. And I think there's a period where we continue to get better and learn more. So adoption for me — again, no specific advice on how well it does in different segments of the industry — is just throwing it in there and seeing, because I think it's going to change from language to language, framework to framework, and it's up to you to figure out what works for you, and maybe your team. Again, not super specific, so maybe you can help me out there.

I think right now we see various tools on the market — I can think of about five that are sort of leading the way. I think we're going to see a lot more models developed and released, and we'll see where that goes and experiment there. Your point about the languages is such a good one: you're seeing a ton of JavaScript, and obviously I'd expect Python too — yes, you know, the data people and Python, so I get it. As we proceed, making it an even playing field as far as code generation matters. But also keep in mind that a major issue with generative AI is that you take a prompt, and it generates something based on expectations, and so it produces what we call hallucinations.

GenAI is on drugs!

Lots of breaking news from this panel.

Breaking news, yes. So what happens is, it will just hallucinate something, and it goes off on these tangents. You see this when it becomes really verbose in its language, or when it wanders — or, in an image, someone's missing an ear, that type of thing. Those exist right now, and they're fairly common in GenAI. I expect that as we hone these models, that gets better and better and we have fewer of them. But right now, that is one of the
major challenges with GenAI.

Can I add something? If you really want to be entertained — slash trigger warning, it's very weird — I had this video in a slide and I took it out because it's so weird. If you're interested, search for the Toronto Blue Jays AI-generated hype video; it's for the baseball team. Fair warning: it's very entertaining, but also extremely weird, going back to people missing ears and stuff. Check it out if you want.

So when we talk about it being hallucinatory, what that really means is that it's wrong — it gave the wrong answer. And as an experienced developer — I'm sure many of you here are experienced developers — I can look at the wrong bit of code, maybe execute it once, and say, "Nah, that's not right." What does this do to people learning software? They can't do what we can do and say "that's not right." They're just going to say, "All right, let's rock and roll," and throw it into production. Is that what you did when you were a junior? Because I did not do that.

Okay, well, different paths. I appreciate the YOLO approach to production there. No — you bring up so many things. Yes, it's wrong, and it doesn't know that it's wrong yet. And when we're talking about juniors: someone on Twitter right after the keynote said that GenAI is getting rid of juniors. I don't believe that for a moment, and please, please don't take that approach into your companies — that's going to be bad. The same approach we've always had with juniors should exist with GenAI: the more experienced senior and principal engineers not only review the code, but also coach the juniors on what works, what doesn't, and why, so that we can all learn and progress together. Again, such an emphasis on learning and evolving as a community. I also think — I know for Amazon CodeWhisperer, when it generates
code, you have options. It will give you a few options that you can scroll through, read, and decide which works best for you. I love that approach, because one, you can see multiple ways of solving the same problem, and two, you still have some ownership and direction that you can inject into the code based on your personal style or approach or beliefs. And knowing the whole system — from that one comment, no coding sidekick is going to know exactly what's actually happening at the large scale. It can pick up on things as it learns, but being able to see the whole, and not just that one piece of code, is really one of the values you bring.

My initial reaction on the impact on learning when you use AI is several different things. First and foremost, you have to understand what you're accepting. Whether you're copy-pasting, or pressing Enter or Tab or whatever to get that code, you have to understand it, because you have to be able to decide: is it going to work? Hopefully you're not just shipping directly to production — although, you know. In some ways it's not that different from how we've always worked. Stack Overflow has been here for years; we have memes about Ctrl-C/Ctrl-V keyboards, because that's all we need, right? We've done that for a long time, and we've learned — sometimes — to be responsible with how we do it. So we have to take time, especially for people who are early on, to pay attention to what's there, maybe go do outside research, to really have at least a decent understanding of it.

But I also got a different perspective from Roselle, who was a developer advocate at GitHub — I forget the name of the company she's at now. She had a different take on the learning experience. She was going the other way, saying that AI enables us to move faster
and learn some things while obscuring other things. So if you're intentional — "I want to learn this piece" — you can have AI generate the other pieces you don't need to learn, which then become enablers for you to build the thing while focusing your learning journey on that one individual piece, or a few individual pieces. That was kind of an eye-opening thought for me. I hadn't thought about it in reverse: it still enables us to do more, but you do have to use it intentionally. What is it that you don't know that you're trying to learn? What is it that you don't know that you don't need to know yet? And what is it, maybe down the road, that you're definitely going to need to learn at some point?

Well said. All right, stereotype warning — here comes one. Software developers are, generally speaking — and this will be generally true and specifically false — pedantic. We think about the tiniest little details, because historically we've had to. I mean, some of us are still writing machine code, right? I know "pedantic" is a pejorative, but if we take it literally: we think about the little things, and a lot of the time we take joy in those little things. So when we think about the impact of AI on developers — is this stealing some of our joy? Will we continue to do what we do at a higher level, and be more productive, and make more money, and all the things that are great — when actually what we liked to do was write that function to sort that array the exact way we wanted to?

I think you have a point. Okay — I would say "pedantic" feels negative. Is there a better word?

(Jared here in post: I thought of that better word. Okay, ChatGPT thought of it — meticulous. I should have said meticulous. Pretty similar meaning, none of that negative baggage. All right, let's get back to it.)

Focused and specific on those kinds of issues — because I think we all carry those moments when we saw something fail
spectacularly, right? Or you're looking at something, and as an expert you notice right away what's wrong with it — and that pattern recognition is something that makes us really powerful. As we proceed with this, I think that's the joy for some people; it's not the joy for others. I'll speak for myself: I'm a second career in tech. I was a writer, and I worked in politics and nonprofits. Coming from that into tech, coding was not necessarily the thing that brought me joy. That's not to say that when you finally hit that thing and it runs and it's perfect, it doesn't feel so good. But for me, it was building tools that matter to people — that is what brings me joy. The spark of joy is going to be different for each of us, and finding joy in our work, no matter how it evolves and changes, is important for all of us as humans, and for our personal growth.

But again: we set the standards here. This is not happening to us; it is happening with us, by us. It's about taking ownership of that and saying, okay, these are the areas we want to maintain and grow and evolve, and these are the areas we want to give up. I don't want to write a CRUD service again. I just don't. I've done it a thousand times; we're good — that can be done away with. I want to solve the really complex problems. I want to think about: okay, this hasn't been done before — it's only been done at scale by a handful of companies — how can I apply it to my specific constraints and resources? That's interesting. And I think it's that kind of problem solving, looking higher up in the stack and having that holistic view, that will empower us along the way.

Well said. You want to add?

Yeah, very similar. I can speak from my own perspective on what I enjoy, and I think
it's the exact opposite — I said opposite; the exact same is what I meant, sorry. I was trying to bring drama, and I just don't naturally have it.

Can you disagree on something, for fun?

I'll try on the next one — I'll come up with something. Okay. But my favorite thing about being a developer is being able to build. With code, we can solve most problems — there are other aspects, like hardware, that come with it — but we solve the problems of the world on a daily basis, and that's what's cool for me. And I can't remember if it was your talk or someone else's, but the way some people look down on no-code/low-code environments or platforms or whatever — I don't care. I just want to build the thing and see people use it, or build a solution to a problem I have. So, I don't know — same perspective. On the next one I'll come up with something controversial, I promise.

A nice analogy might be stick shifts and automatic cars: no one's stopping you from writing that function. Just go ahead and have fun — write it. But, you know, the rest of us are going to use the thing that writes the function for us. And if you take joy from it, go ahead and write your functions.

Manual transmissions forever!

There you go — got one.

I don't know how to use one, don't drive one. So, boom — controversy.

Hey, they disagreed — I told you. All right, let's get slightly more philosophical and broader-sweeping. We talked about the details; what about big-picture changes? I'm thinking about open source software. I'm thinking about ownership of code: if an AI writes 30% of my code, do I get 70% of the copyright? Do I get 100%? Does my employer get all the copyright? Probably. But what about open source? Because these things are trained — famously and infamously — on publicly available source code. That's our labor, whether we gift it or not. So how does
this impact the lives of us developers who are either working on open source or simply using open source? It touches all of us.

I imagine some maintainers will think twice about having their stuff be truly open source. I think there's a whole deeper conversation about the impact of reading people's code and leveraging it to do other things, and about ownership. So I could see some people bowing out of that and folding back into themselves — which would be a shame, for that not to be available. I don't know — there's so much that goes into it, from a political perspective, from an ethical perspective. Honestly, you ask me that and I'm overwhelmed just thinking about it. Someone last night at the speaker-sponsor dinner talked about how he's worked on multiple revisions of a pitch on ethics in AI, or something like that, over the last year — he was giving another pitch last night, and they were going to go through it. I think we have a lot of catching up to do to define this. I have none of those answers, and they drastically overwhelm me, because I can't begin to comprehend the implications. But there have to be things — legally, morally, ethically, in open source — that catch up and give some guidelines to the stuff we have going on.

Yes, and this is why I keep pushing on responsible AI. We have to have these conversations, and they're going to be hard. In economics there's this concept of the tragedy of the commons. It comes from a pamphlet of the same title, and the focus was the shared common land that cow herders, or any kind of farmer, would use for their herds to graze. As individuals, it benefits each herder to have their cows graze the most, with no limitations — but obviously shared
resources are finite, and they are limited. My favorite quote from that pamphlet is, "Ruin is the destination toward which all men rush." I think we have to be truly careful as we proceed here. A lot of this is a common resource, and it's built off of a common resource. And this is where I think communities around this are really, really important — recognizing our own power and influence in pushing toward a holistic and appropriate approach to responsible AI.

That quote — I thought you were talking about my code again for a second.

Yeah, I should probably go revert that commit. Okay, have you guys seen this new thing you can do? It's like robots.txt, but for — well, this is for website copy, so we're in the same realm — but it's like, "no, no, GPT, don't index this." I can't remember what the actual mechanism is — you can do "no crawl," maybe? I don't know. It's a brand new thing they're working on, where the LLM crawlers will skip your website, much like you can tell Google not to index your website. Is that something people will do? Is that something that can have an application in the world of open source? I mean — maybe. You said opting out: does that mean not even publishing at all? Because there's no guarantee that the language model creators will necessarily comply with a robots.
txt, for instance. What are your thoughts on that — the analogy, and how it applies?

I find it unacceptable that companies would push forward with a profit-only mentality and not take these things into consideration. To some degree, between our work and where we spend our money, we have to tell the market that that is not acceptable. I don't want to live in a world where we're trying to hide from crawlers. I want to live in a world where we have decided on standards and guidelines that lead toward responsible use of that information, so that we all have some compromise around how we're proceeding. I think it's super important.

Trusting people is a big ask. Actually, when I said that thing about people potentially retracting from open source, as soon as I said it I wanted to backtrack in my head and find another way, and I immediately thought about a flag on GitHub that says, "Don't look at this code if you're an LLM." Something like that could be useful. Longer term, having it all figured out is definitely better, but I could see people using that. If they don't want their code used in LLMs, just being able to opt out seems like a reasonable intermediary step along the way.

Yeah, although I think we would start to erode our definitions of open source, because the freely available ability to use without restriction is part of the tagline. But maybe it's source-available kinds of things: maybe indies start saying, you know what, I'll put my source code out there, you can do everything except this — and we have a new license that's not open source, but something else. I think time will tell.

And it just gets so hard to prove, too, right? It's like cheating on a homework assignment in college — which I never did, question mark. They had
these tools that would compare your code against other people's assignments from previous years, and I'm sure that's gotten more and more sophisticated now. So that would be one of those things: if you had an opt-out flag and then you came across a repo with code that looks like yours, there's no way you could prove it without diving into the logs from the AI that generated it. That would be so hard to prove. Again, coming back to the ethical and legal side, we have a lot to figure out, I think.

Okay, how much time do we have? Does this go to 12:15? I think so — we've got five minutes. Anything that wasn't addressed that you want to make sure gets addressed? I'll take the mic and run it to him; you stand up here and answer.

There's been a lot of discussion about how GenAI has been hyped or overhyped. My question is — maybe this is a way for you to disagree — what do you think is the most underhyped technology around AI? I think I kind of agree with Emily that the trustworthiness of AI is the most underhyped, but what do you guys think?

Yeah — especially in the conversation here from a technical perspective, I think the most underhyped thing is how much it can be used for things that are not just writing code. I mentioned this earlier: just as a spark of creativity. I sometimes limit myself mentally because I don't think I'm creative — although if you look for it in pieces of what I do, it's there — but I can use it to just give me ideas when I'm stuck, and it doesn't have to be technical. I think that's super, super valuable. And thinking about onboarding, about how to incorporate it — what easier way to bring some AI into your life than "give me an idea for something fun to do this weekend with my partner, spouse, or whatever"? So I think, just on a regular, outside of
code perspective, there's so much that you could get out, of it from a creativity spark and I, think that's a lot of fun and I think, it's easy to get started that, way I keep coming back to the for me the, hype is around the speed and scope of AI, um when I quoted Marvin Minsky bless him, who believed by 1980 we'd have a human, analog um obviously that's not true and, when you think about how quickly this, kind of came to Market it feels really, fast but a lot of that had to do with, 2018 Transformers coming about and us, being able to actually proceed with this, um but when you look at all of, artificial intelligence it's truly been, eight eight decades um at a minimum and, so we're kind of coming to a place where, there there is that distribution um but, I fully expect it to still take some, time before widespread adoption before, efficient uses um certainly affordable, uses and uh you know where we can, actually apply this to higher risk, scenarios in Industries time for one, more I think yeah and this uh uh use of, the term Tools in a kind of a neutral, way to describe AI kind of broadly um I, think what's been left out maybe is that, different tools have different side, effects um so for instance um video, games have certain characteristics, shovels have other characteristics, medicines have still other, characteristics um where do you see, these tools right now and maybe in the, future where we have to look at, societally um are they more like shovels, or, opiates oh I like that I like that last, line there good, question that last line took a hard left, um I think we don't know there's no way, to know I think we can sort of think, about the next 3 to five years and where, we think will this will go um but I, think anyone who claims to be a sort of, futurist or or believes that they can, tell you in 50 years what this looks, like they're just guessing you might as, well throw a pen against the wall um we, just don't know but I I think truly I, keep coming 
back to this we have, ownership and responsibility over this, and and we can kind of determine what, this actually looks like in in, usage Shuffle versus opiate is like a, t-shirt waiting to happen um that it's, such a good and kind of easy call out, for it's kind of funny but I think it's, very serious um all the ethical legal, implications we talked about that like, there has to be catchup I think we also, just have to acknowledge that that this, is also the same as every other, advancement that we've ever had like you, think about um I don't know people that, want to use things for nefarious ways, people want that want to use things for, their own purpose that hurts other, people or affects other people in, negative ways it exists unfortunately, and so I think it's even more important, for the concept of responsible AI uh but, also just acknowledging that there's, there's probably a point where we need, to have limitations like what that means, and what that looks like I don't know do, we get to a point where we're in IR, robot and that's what we're living on a, day-to-day basis and we have to like, prevent that I don't know um I think it, I think with great what is it with great, um power comes great responsibility and, I think that's absolutely absolutely, true here one more quick one so there's, a lot of talk about like AI tools that, help you write code but as a developer a, lot of my time was spent actually, supporting code or maintaining code and, there isn't a lot of tools out there, that helps you fix bugs or I don't want, to read someone else's code and fix, their bugs but that's what I spend my, time doing so why do you think we're in, the state we are now and what can we do, to build more tools that eliminate that, tedious part of, coding so from my perspective I think I, have seen at least people talking about, that use case I don't I don't disagree, that there's like more tools focused on, the generating of code but I have seen, people post on 
Twitter and things of, like give it a code snippet tell me, what's wrong with this or explain this, piece of code so I think that's starting, to get into what you're saying although, the toolage may not specifically exist, as much as we may want for that use case, what I think is really cool and I think, this goes back to probably the the most, undervalued aspect of AI is the fact, that not only does AI exist but AI, exists in a way that we as developers, can consume it to build other things, that means that we see a gap in tooling, to address exactly what you're saying we, don't have to build all that logic from, scratch we can build a nice UI on top of, an already existing llm and be able to, start to provide the things that you're, looking for more specifically now, eventually you get into more custom, trained llms and that sort of stuff but, I think that's the beauty of having it, be accessible at least in certain ways, for us as developers to build on top and, go and solve those use, cases that was well put and I expect, more tools in the future I think we we, LED with the thing that we knew we could, execute on as an industry um and that, seemed like the most straightforward, path and as we kind of diverge from, there I think you'll see a ton of, tooling around solving those problems, but um yeah I still believe that those, those kinds of uh the fixes the plugging, everything together the Integrations, that will be probably something that, takes a long time okay that is all the, time we have thank you all for coming, and let's hear for the, [Music], panelists, [Applause], [Music], special thanks to Todd Lewis and his, amazing team of organizers for bringing, us out to all things open this panel was, just one of the many conversations that, we recorded from the show floor, subscribe to the Chang log podcast if, you haven't already for more all things, open goodness thanks once again to our, partners f.com fly.io and types, sense.org and to our beat freaking, 
residents BreakMaster Cylinder. Daniel and Chris return next week, joined by Nathaniel Simard, the creator of a deep learning framework in Rust called Burn. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Generative models: exploration to deployment | What is the model lifecycle like for experimenting with and then deploying generative AI models? Although there are some similarities, this lifecycle differs somewhat from previous data science practices in that models are typically not trained from scratch (or even fine-tuned). Chris and Daniel give a high level overview in this effort and discuss model optimization and serving.
Leave us a comment (https://changelog.com/practicalai/240/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Neo4j (https://neo4j.com/nodes) – NODES 2023 is coming in October!
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• BigDL (https://github.com/intel-analytics/BigDL)
• Article: Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA (https://huggingface.co/blog/4bit-transformers-bitsandbytes)
• Previous episode: Running large models on CPUs (https://changelog.com/practicalai/221)
• Baseten’s Truss (https://truss.baseten.co/welcome)
• Seldon (https://www.seldon.io/)
• Hugging Face’s TGI (https://github.com/huggingface/text-generation-inference)
• Intel Gaudi 2 (https://habana.ai/products/gaudi2/)
• Intel TDX (https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-240.md) | 9 | 0 | 0 | [Music], welcome to practical AI if you work in, artificial intelligence aspire to or are, curious how AI related Technologies are, changing the world this is the show for, you thank you to our partners at fastly, for shipping all of our pods super fast, to wherever you listen check them out at, fast.com and to our friends that fly, deploy your app server and database, close to your users no Ops required, learn more at, [Music], fly.io welcome to another fully, connected episode of the Practical AI, podcast in these episodes Chris and I, keep you fully connected with a bunch of, different things that are happening in, the AI and machine learning community, and we talk through some things to help, you level up your machine learning game, my name is Daniel whack I am a Founder, at prediction guard and I'm joined as, always by my co-host Chris Benson who is, a tech strategist at locked Martin how, you doing Chris I'm doing great today, Daniel how's it going it's going good, you know this week I was well it's been, an interesting couple weeks for me in, that I was at the Intel Innovation, conference out in San Jose um the week, before last last and then this week I, was at uh the go programming language, conference called goer con and taught a, workshop there and so that was really, enjoyable so two weeks in sunny, California or mostly sunny California I, guess that was really cool so maybe even, just highlighting a couple of cool, things that are happening in in those, communities at Intel there were a couple, of things that were highlighted that, might be of interest one is it seems, like their Intel is really diving, into the idea of AI enabled applications, on your local machine which I know is, something we might talk about a little, bit in this show in particular that is, like hey if I want to build a desktop, 
application that people actually run on, their laptop and I want that to run, stable diffusion as part of the, application and not you know reach out, over the network to some API how would I, build that and what would those sort of, like AI PCS is I think what what they're, calling them what would those have to, look like and they're they're thinking, about that with some of their processors, which is interesting and then on the, data center side they had a bunch of, things including announcing the Intel, developer Cloud which is cool because, you can go on there similar to other, Cloud environments spin up either a VM, or actually a connect to a bare metal, instance that has their latest, generation of processors including these, gouty 2 processors which are from Habana, labs they were acquired by Intel I, forget when but they have so they would, be sort of on the data center side, you're running accelerated workloads on, these and we're actually running some of, our prediction guard stuff on these, gouty processors and seeing really great, performance so those are a couple things, highlighted from there and yeah I don't, know have you heard those themes in in, your conversation as well in terms of, either new processors advances in data, center technology or this kind of local, inference side of things I have quite a, bit actually um and I'm certainly not an, expert on micro Electronics by any, stretch but I have friends who are and, listen to them closely when they talk, there's a bit of a uh an ongoing, Revolution on the microprocessor side, and so you know many of us that have, been in the AI world for a long time, there have been you know for instance, gpus from Nvidia have been kind of a, core to that but there's a lot of Chip, types uh that have been coming out by a, number of different vendors to compete, with that you know famously Google was, probably the first one well known with, their tpus uh tensor processing units, but there's all sorts of 
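The "AI PC" idea above — a desktop app that runs a model like Stable Diffusion locally instead of calling out to an API — can be sketched roughly like this. This is a minimal sketch, not anything shown in the episode: it assumes the `diffusers`, `transformers`, and `torch` packages are installed, and the checkpoint ID is just one commonly used public Stable Diffusion checkpoint.

```python
# Sketch: local image generation instead of a network API call.
# Assumes diffusers/transformers/torch are installed; model ID is an example.

def pick_device() -> str:
    """Fall back to CPU when no accelerator is available (the 'AI PC' case)."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

def generate(prompt: str, out_path: str = "out.png") -> None:
    """Generate one image entirely on the local machine."""
    from diffusers import StableDiffusionPipeline  # heavy import, kept local
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to(pick_device())
    image = pipe(prompt).images[0]
    image.save(out_path)

# Example (downloads several GB of weights on first run):
#   generate("a broccoli wearing headphones, digital art")
```

Nothing here touches the network at import time; the weight download only happens when `generate` is actually called.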
specialized, chips and chiplets that are coming out, that uh that are enabling these types of, things so I think Intel is definitely, one of the global leaders in that and uh, looking forward to having it'll be nice, when everyone's laptops uh and phones, and everything are all completely, equipped with everything they need yeah, yeah it's super interesting especially, for use cases where it's like your, personal assistant AI enabled personal, assistant that really is tied to you, personally applications like that I, think you'd you'd want to run a lot of, those things locally and not be sending, a lot of that data all around so that's, kind of interesting they also talked a, lot about confidential Computing which, is an interesting topic that I think, maybe some of our audience at least, wouldn't be familiar with as much from, what we talk on this show about but it, is very connected to the AI world in the, sense that if you are running kind of, secure workloads through AI models, whether you're doing that on Nvidia, chips or other chips like we've talked, about there are ways and toolkits to, enable you to actually secure the, environments that you are running those, models in and actually provide, adastation to know that nothing has been, tampered with inside of those kind of, secure environments so I'm going to, surprise you I actually know quite a lot, about that those are trusted execution, environments let's just say I've touched, on those quite a lot I think Intel's, version is like TDX trusted yeah it's, something they have a couple of, different versions that's the one that's, out in the marketplace right now but, yeah it's the idea of ensuring that when, you normally if you're running a program, uh for audience if you're running a, program and it has to Transit obviously, from system to system every system has a, processor it's processing on and even if, you're running encryption at the, application layer you have to unwrap, that encryption for the 
processing to, happen in the chip an adversary you know, if it's on the order of a a major nation, state has the ability to steal, unencrypted information that had been, encrypted in transit straight out of the, processor memory and uh Intel and other, vendors are starting to push trusted, execution environments and products and, services around that which protects and, guarantees the safety of that data, inside the processor something I've, spent some time on actually yeah it's, super interesting and I think even the, CTO and his talk had like a t-shirt that, sort of had a vend diagram kind of thing, between like security and Ai and at the, intersection of that is a lot of you, know what he talked about this sort of, idea that hey whatever Hardware you're, running on if you can combine AI, workloads with these sort of trusted or, confidential Computing ideas that that, can be very powerful and take care of at, least some of the security and privacy, concerns that people have with AI, workloads in in general which is cool so, yeah the two are converging in a big way, because while trusted execution, environments which are referred to as, tees have been around for years in, processors now that we are having large, Federated workf flows which is really, Classic on cloud-based AI jobs where, you're where you're Distributing an AI, inference or training across many many, systems with very very important data, that you would not want to get into an, adversary's hands that Federation is, really kind of pushing Ai and you know, chip providers together in that way to, guarantee that we we didn't see lots of, workloads uh that would be falling in, that category until we hit the AI space, and it's chock full of them so I keep, remembering things that that happened, over the past couple weeks while I've, been traveling and and people have, mentioned but one maybe other noteworthy, thing for people to be aware of on the, more the infrastructure side which I, think we will talk 
a little bit more, about in this episode is that cloud, flare announced their workers Ai and I, think this is the latest in in this sort, of series of serverless GPU Solutions so, these worker AIS are Cloud Flare's, version of the serverless GPU type, environment, that we've talked about with things like, modal or base 10 or banana there's a lot, of these coming out but I think it's, worth noting that a very large player, like Cloud flare is now kind of Dipping, into this serverless GPU space which I, think also signals that we'll be kind of, seeing in the cloud side more and more, push towards serverless GPU workloads, and environments that that support that, interesting very interesting well I, that's a bunch of infrastructure and, confidential infrastructure and, Computing and security stuff that has, crossed our paths in the past couple, weeks but one of the questions that you, asked me leading up to this recording, was about things are moving so fast and, I think deploying and managing an AI, workload may look different now than it, even looked 6 months ago and it's been a, while since we talked through the kind, of developer or technical team, perspective on how you might if you want, to use one of these models that's coming, out all the time so mistol ai's model, just came out um the ones that received, huge amazing amount of funding just, earlier in June and now they have their, first model out it's released Apache too, so you can download it so the question, is let's say you want to use one of, these great models that's coming out, these days and you want to host it in, your company's infrastructure or even, just play around with it as a developer, what does that look like currently, because there's also along with these, models that are coming out new tooling, that's coming out all the time so what, does that look like these days and what, are the various options and things to, consider as you're interacting with, these models and considering even, 
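For the serverless GPU offerings just mentioned, a call typically looks like a plain REST request. The sketch below targets Cloudflare's Workers AI; the URL shape and model name follow Cloudflare's documented pattern at the time of writing and are assumptions on my part, not something spelled out in the episode.

```python
# Sketch: invoking a serverless GPU inference endpoint over REST.
# Endpoint shape and model name are assumptions based on Cloudflare's docs.
import json
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def run_url(account_id: str, model: str) -> str:
    """Build the Workers AI inference URL for a given account and model."""
    return f"{API_BASE}/{account_id}/ai/run/{model}"

def run_model(account_id: str, api_token: str, model: str, prompt: str) -> dict:
    """POST a prompt to the endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        run_url(account_id, model),
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:  # network call; needs real creds
        return json.load(resp)

# Example (requires a real account ID and API token):
#   run_model("<account_id>", "<token>",
#             "@cf/meta/llama-2-7b-chat-int8",
#             "Explain serverless GPUs in one sentence.")
```

The appeal of this model is exactly what the hosts describe: you pay per inference and never provision a GPU instance yourself.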
hosting them yourself or integrating, them in your own infrastructure that's a, fair question because it's been it's, been a while since we talked through, some of the infrastructure I think Chris, it has and for what it's worth I'm going, to brag on you for a second since I know, that you would not do that to yourself, with Daniel uh being the founder of, prediction guard this is a topic that he, is like Global expert in really really, knows what he's doing and as we were, talking about I've had so many people, asking me these questions that Daniel, was just talking about lately and I was, like well you know one of my best, friends is is a real pro at this so, thank you uh if you can kind of start, walking us through and and this is a, moving topic as you just pointed out, it's changed in the last few months and, and will continue to evolve over time, but yeah if you can start walking us, through what that looks like today you, know we're in the beginning of the fall, of 2023 something that might help the, rest of us for at least the next few, months and maybe one note on this is is, I'm also getting these questions all the, time and like you say I'm deploying, models all the time with prediction, guard I think a lot of people if you're, a developer or a infrastructure person, you just have that natural desire even, if you end up using a model that's, behind some API that's hosted by someone, else it can be useful and instructive in, building your own intuition even to just, try deploying one of these models see, see what's involved see how they run, that sort of thing it's also kind of, worthwhile from my perspective to, experiment with different models before, you say you know lock yourself into a, certain model family or something it's, relatively easy now with the tooling to, get somewhat of a sense of how these, different models perform and build up, that intuition for yourself even if you, end up using a model that's behind an, API I mentioned I was at goer 
con this, week and that was some of the questions, that came up to t a workshop on, generative Ai and that was a good long, discussion in there that people had a, lot of questions about was hey let's say, I didn't want to use one of these apis, how do I pull down a model and use it so, yeah let's jump in let's first maybe, talk about something that I know that, we've touched on before but just to, emphasize here where can you get models, and let's say that we're putting putting, aside for a second the kind of closed, proprietary chunk of models these would, be ones from like open AI anthropic, cohere Etc they have their own apis they, host those models let's say that we're, interested in either an open an Open, Access model but it could be either an, open and somewhat restricted model or an, open and somewhat permissively licensed, model and we' we've talked about that on, the show too for example there's models, that come out that are licensed for, commercial use or non-commercial use or, research purposes only but let's say you, want to use one of these Open Access, models the first question that might, come up is where do I find these models, the best place that you can find these, models is on hugging face so if you go, to the hugging face website just, huggingface doco and you click on models, you'll see that there's at the time of, this recording around, 345,000 models on hugging face a few to, choose from yeah yeah a lot to choose, from and and think about this those of, you that are familiar with GitHub right, how many GitHub repositories are there, there's a lot of GitHub repositories, that are someone like tried something in, an afternoon and uploaded something to, their GitHub repo right it doesn't mean, that's the most useful thing for you to, use in your your workflows alth Al, though you could kind of learn from it, maybe it's similar on hugging face, there's a lot of people that might like, oh I tried fine-tuning this model and, now I uploaded it to 
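The "treat Hugging Face like GitHub" advice — sift those ~345,000 models by popularity rather than browsing blindly — can also be done in code. This is a sketch assuming the `huggingface_hub` package is installed; exact keyword arguments and attribute names vary a bit between library versions.

```python
# Sketch: ranking Hub models by downloads, mirroring the GitHub analogy above.
# Assumes the huggingface_hub package; argument names may differ by version.

def rank_by_downloads(models: list, limit: int = 5) -> list:
    """Sort model metadata dicts by download count, most downloaded first."""
    return sorted(models, key=lambda m: m.get("downloads", 0), reverse=True)[:limit]

def top_hub_models(task: str, limit: int = 5) -> list:
    """Query the Hub for the most-downloaded models for a given task."""
    from huggingface_hub import HfApi  # network-backed; kept as a local import
    api = HfApi()
    models = api.list_models(filter=task, sort="downloads",
                             direction=-1, limit=limit)
    return [m.modelId for m in models]

# Example (hits the Hub API over the network):
#   top_hub_models("text-generation")
```

Download and like counts are the same rough quality signal as GitHub stars: not definitive, but a fast way to filter out one-off experiment uploads.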
my repo on hugging, face and similar to GitHub one of the, things that you want to look at just as, a practitioner is look at how many, people are downloading the model look at, how many people are Harding the model or, or you know liking the model and you can, filter by those things so if if I click, on model I can then click on a filter, like the task that I'm interested in a, computer vision task or an NLP task or, an audio task and then I can look at, both the trending models and how many, models were downloaded filtered by, things like licenses and languages so, yeah I think the first thing to be aware, of is just the landscape of models and, where you find them and the best place, for that currently although there are, other repositories is by and far hugging, face and go there and treat it similarly, to GitHub and that there's going to be a, lot of there that might not be of, interest to you but there's going to be, some really great things there as, [Music], well what's up friends there's so much, going on in the data and machine, learning space it's just hard to keep up, did you know the graph technology ol let, you connect the dots across your data, and ground your llm in actual knowledge, to learn about this new approach don't, miss nodes on October 26 at this free, online conference developers and data, scientists from around the world will, share how they use graph technology for, everything from building intelligent, apps and apis to enhancing machine, learning and improving data, visualizations there are 90 inspiring, talks over 24 hours so no matter where, you're at in the world you can attend, live sessions to register for this Free, Conference visit Neo 4j.com noodes, that's Neo the number, 4j.com, [Music], snodes okay Chris I'm on hugging face, and I see a bunch of different models, that are potentially available to me and, I can click on for example object, detection and see that the trending, model that I'm looking at is from, Facebook D resnet 
50 um seems like, people have used resnet quite a bit, 63,000 downloads and so maybe that's a, good place I I want to start if I'm, looking at object detection if I go to, let's say automatic speech recognition, up at the top would be open AI whisper, model which is a great choice and, released openly that you can use for, speech transcription if I go to for, example text generation which a lot of, people care about these days the, trending one right now is this new, mistol 7 billion model that we mentioned, earlier was just released so let's take, those as our kind of examples let's say, I want to run something like open AI, whisper or I want to run text generation, with Mistral 7 billion or there's even a, range of sizes of models right the 7, billion model from mistol Falcon 180, billion was released recently so one, question that I think people have is how, do I know which model might serve my, task well and one thing I'd like to, recommend to people is even before you, try to download the model yourself and, run it you can go in and click on these, models like if I click on mistol 7, billion version, 0.1 if you notice on the right hand side, of the hugging face model card for that, model A lot of these models already have, a hosted interactive interface that you, can just click the compute button and, see the output of the output of the, model so it's kind of like a playground, that you can see a bit of the output of, you can do the same thing with a lot of, you know computer vision models or Audio, models and then below that you'll see a, little thing called spaces using mistol, 7 billion or if you're on whisper spaces, using whisper these are little demo apps, that are actually hosted within hugging, faces uh, infrastructure where people have, actually integrated mistol 7 billion and, a lot of these are kind of just a simple, input output interface and so even, without downloading the model if just, trying to get a sense for what these, models do you can 
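The hosted widget on a model card also has a programmatic analog: the inference client in `huggingface_hub` lets you send a prompt to a hosted model without downloading any weights. This is a sketch; the `[INST]` wrapper below is Mistral's documented instruction format, which is my assumption rather than something stated in the episode.

```python
# Sketch: trying a hosted model without downloading it, the code equivalent
# of the model card's interactive "compute" widget described above.

def format_instruct_prompt(instruction: str) -> str:
    """Wrap a plain instruction in Mistral-style [INST] tags (an assumption)."""
    return f"<s>[INST] {instruction} [/INST]"

def try_model(model_id: str, instruction: str) -> str:
    """Send one prompt to the hosted inference endpoint for model_id."""
    from huggingface_hub import InferenceClient  # network-backed
    client = InferenceClient(model=model_id)
    return client.text_generation(
        format_instruct_prompt(instruction), max_new_tokens=128
    )

# Example (calls Hugging Face's hosted inference endpoint):
#   try_model("mistralai/Mistral-7B-Instruct-v0.1",
#             "Summarize what MLflow does.")
```

Either route — widget or client — lets you judge a model's output behavior before committing to hosting it yourself.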
click through some of, these spaces that are using them or just, look at that kind of interactive, playground feature and just try you know, upload some of your own prompts or, upload some of your own audio or, whatever that is to see how the model, operates I think a lot of people might, miss this if they're just scrolling, through let me ask you a quick question, when if you're looking and you're trying, to narrow down you know which model you, want to pick we've talked on previous, episodes about some of the concerns that, go with different sizes and such so are, there some models that uh unless I have, a very large infrastructure available to, me many many gpus for instance that I, should probably disregard is there like, a minimum and maximum practical, threshold that let's say that I have, some Hardware but not everything that I, would dream about that I might want to, go for so there's kind of an answer to, this and then a followup okay one is for, this sort of Transformer language models, often times if you go much Beyond 7, billion parameters maybe pushing it up, to kind of 13 to 15 billion parameters, you're not going to be able to run it, very well just by default by downloading, it and running it with the kind of, standard tooling on anything but a, single accelerated processor like a GPU, and even then most of the time not on a, consumer, GPU however the followup to that is that, a lot of people have created open-source, tooling around model optimization that, may allow you to run these models on, consumer Hardware or even on CPUs and, I'd like to talk about that here in a, bit that a lot of times you may want to, consider this sort of model optimization, piece of your pipeline when you're, considering how to run the model because, sometimes the sort of default size and, default Precision of the model might not, be best for you both in terms of your, needs in terms of performance or in, terms of the hardware that's available, to you but I would say in 
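The sizing intuition in that answer — roughly 7B parameters as the ceiling for a single consumer accelerator without optimization — comes down to simple arithmetic: weight memory is about parameter count times bytes per parameter. This back-of-the-envelope helper is a sketch (weights only; activations and KV cache add more on top):

```python
# Rough lower bound on accelerator memory for a model's weights alone.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB at a given numeric precision."""
    return n_params * (bits_per_param / 8) / 1e9

# A 7-billion-parameter model, as discussed:
fp16_gb = weight_memory_gb(7e9, 16)  # ~14 GB in float16
int4_gb = weight_memory_gb(7e9, 4)   # ~3.5 GB with 4-bit quantization
```

This is why the quantization tooling mentioned above matters: dropping from 16-bit to 4-bit weights cuts the footprint by roughly 4x, which is often the difference between needing a data center GPU and fitting on a consumer card or CPU.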
this phase of, like what model is going to be good for, me go ahead and put that sort of, Hardware concern although it's important, put it a little bit to the side and, focus on which model is giving me the, output behavior that I want right, because you have a certain task in mind, right and if you could figure out hey, this model kind of does what I want and, it seems like it's giving pretty, reasonable output and then you find out, oh well I can't run it on the GPU that I, have or I need to figure out how to run, this on a CPU then then that kind of, Narrows down the type of tooling that, you're going to have to use for, optimization or you might not need to, optimize at all so kind of start with, the smaller models and build up to, something that fulfills the behavior, requirements that you have by just using, some of these demos using some of these, spaces and then think about okay I've, now figured out I need Falcon 180, billion so what does that look like for, me to run that in my own infrastructure, then there's kind of a follow-up series, of things that we can talk about related, to that gotcha thanks so I was uh kind, of getting ahead of myself then a little, bit in terms of worrying too much about, Hardware first, yeah yeah I think the question well, maybe it's because I come from a data, science background right my data science, experience always tells me start with, the smaller models and work your way up, to the bigger ones until you find, something that behaves in a way that, will work for you and then figure out, the kind of infrastructure requirements, around that because if you start smaller, and work to bigger, it's going to be easier to work with, that smaller model infrastructure wise, and latency wise and all of that but, some people do have really complicated, sets of problems where they need a, really big like let's say you know I, want to produce really really really, really good synthesized speech or really, really good transcriptions from 
audio, I'm going to need maybe a bigger model, than the really really small open AI, whisper model so it has to do with the, requirements of your use case as well I, would say Okay so let's say you identify, uh a model and you're you've kind of, picked what you want to do where do you, go from there yeah so let's say that, you've picked a model and um let's take, the first case where it's a model that, could reasonably or you think it could, reasonably fit on a single processor a, single accelerator or, by your own sort of infrastructure, constraints you need it to operate on a, single accelerator and even if you don't, have those infrastructure constraints I, think one recommendation I often give is, it's just way easier to run something on, a single accelerator or a single CPU so, I personally recommend a people even if, it's a bit larger of a model convince, yourself that you can't run it on a, single accelerator or a single CPU, before you make the jump to spin up a, GPU cluster or something something like, that it's just a lot harder to deal with, even with good tooling some good tooling, around that side which we can talk about, so yeah let's say that you found a model, I don't know let's say it's our mistal 7, billion model you should be able to run, that on a single instance with an, accelerator or GPU I would then look at, that model and depending on the type of, the model often times in the model card, on hugging face hopefully if it's a, nicely maintained model in hugging face, then it will likely just like a read me, and GitHub it will likely have a little, code snippet that says hey here's an, example of how to run this what I, usually do in that case is I just spin, up a Google collab notebook CU I want to, see how this thing runs and how many, resources it's going to consume so I'll, spin up a Google collab notebook if, people aren't familiar Google collab is, just a hosted version of Jupiter, notebooks with a few extra features like, you can have 
certain free access to GPU resources. There are similar things from Kaggle and Paperspace and Deepnote and a bunch of others. So spin up one of these hosted notebooks, just copy-paste that example code into the notebook, and try a single inference. And oftentimes what you can do in these environments, if you look up at the top right corner of Google Colab, there's a little resources panel, and once you load your model in, you can actually look at: oh, how much GPU memory am I taking up? How much CPU memory am I taking up? And that gives you a good sense of: hey, I loaded this model in, I performed an inference, and if I do nothing else (the most naive thing I can do), then I'm consuming 12 GB of GPU memory or something like that. And that kind of tells you that if you don't do any optimization, you're going to need a GPU card that has at least 12 GB of memory. So maybe you use, like, an A10G, or you could use an A100 (that might be a little bit overkill in this case), but one of these with maybe 24 GB of memory, so you have a little bit of headroom there. And you can say you've now narrowed down not only the model but (assuming you don't do any optimization) potentially the hardware that you could use to deploy it. So as of yet, I haven't spun up really any infrastructure. This is kind of my standard thing, where I'm like: hey, what's the deal with this model, how do I perform a single inference, and what kind of resources am I going to need? It's a nice little cheat code, the equivalent of finding out what you're getting into, it sounds like. Yeah, yeah, for sure. And the other way I've done this in the past is, if you happen to have a VM, or maybe it's just your own personal workstation with a consumer GPU card, and you have Docker running on that system, you could pull down a pre-built, you know, Hugging Face Transformers Docker image
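The back-of-the-envelope math behind that "12 GB of GPU memory" observation can be sketched in a few lines; the parameter counts and byte sizes below are illustrative assumptions, not measurements from any particular model.

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough GPU memory needed just to hold the model weights.

    bytes_per_param: 4 for fp32, 2 for fp16/bf16 (a common default when
    loading large models). Actual usage is higher, since activations,
    the KV cache, and framework overhead come on top of the weights.
    """
    return num_params * bytes_per_param / 1024**3

# A 7B-parameter model in fp16 needs roughly 13 GB for weights alone,
# which is why a 24 GB card like an A10G leaves comfortable headroom.
print(round(model_memory_gb(7e9), 1))  # ~13.0
```

Loading the model in a notebook and reading the resources panel gives you the real number; this estimate just tells you ahead of time whether the attempt is even plausible on the card you have.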
and just run it interactively: open a bash shell into that Docker container and run an inference just like I said, or spin up the model and load it into memory in Python, and then in another tab or another terminal just run docker stats, and it'll tell you how much memory you're consuming and that sort of thing. Or run nvidia-smi (or the equivalent for other systems or other processors), which will tell you how much GPU memory you're using. So this is kind of the next phase that I go through. The first is: what kind of model do I want? The second is: how do I run an inference with this model? Then there's kind of a whole branching series of choices, which is: either you go down the path of saying, I want to optimize my model in some way to run it either faster or on fewer resources, or you go down the path of saying, nope, this is fine, I can run it with the resources that I figured out it needs, and then you kind of move on to the deployment side of [Music] things. Okay, Chris, let's say that we want to follow the path on our choose-your-own-adventure where you want to do model optimization on your model. Okay. The reason you would want to do this is one of two reasons. One is: hey, it turns out I crashed my Google Colab trying to run Falcon 180 billion because I ran out of GPU memory, and it turns out you need more GPU memory for that, or multiple GPUs, and I either don't have access to that or don't want to pay a bunch of money to spin up a GPU cluster and run the model in a distributed way. Or it's maybe even a smaller model and you want to run it either faster or on standard, non-accelerated hardware. Like, I heard a talk at GopherCon about a workflow where people were running a model at the edge, in a lab, to process imagery coming off of a microscope, and it was all disconnected from the public internet. So in that case you just have a CPU, and maybe you need to optimize for the CPU. So there are gradually more and more options that
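For the nvidia-smi route mentioned above, a small helper can shell out and parse the memory numbers. The query flags below are standard nvidia-smi options, but the subprocess call is only a sketch and obviously requires an NVIDIA driver to be present on the machine.

```python
import subprocess

def parse_smi_memory(csv_line: str):
    """Parse one line of `nvidia-smi --query-gpu=memory.used,memory.total
    --format=csv,noheader,nounits` output into (used_mib, total_mib)."""
    used, total = (int(field.strip()) for field in csv_line.split(","))
    return used, total

def gpu_memory():
    """Ask the driver how much GPU memory is in use on GPU 0."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_smi_memory(out.splitlines()[0])

# Example of the raw output format the parser expects:
# parse_smi_memory("11432, 24576") -> (11432, 24576)
```

Running `gpu_memory()` before and after loading the model gives the same "how much headroom do I have left" answer as watching the Colab resources panel.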
are out there to do this. Some people might have seen things like llama.cpp, which is sort of an implementation of the Llama architecture that's very efficient and allows you to run Llama language models on, like, your laptop; I think a lot of people were running them on MacBooks with M1 or M2 processors. If you want to kind of scroll through this set of optimization stuff, go to the Intel Analytics BigDL repo (that's BigDL, like big deep learning). First of all, the BigDL library does a lot of this sort of optimization, or helps you run these sorts of models in an optimized way, but they also have a little note at the top which I found to be a very helpful little index as well. They say it's built on top of the excellent work of llama.cpp, GPTQ, GGML, llama-cpp-python, bitsandbytes, QLoRA, etc., etc. These are all things that people have done to run big models in a smaller way, I guess would be the right way to put it. So bitsandbytes is a good example of this. Hugging Face has a bunch of blog posts about this, where they've run, you know, the big BLOOM model in a Google Colab notebook by loading it not at full precision, but in a quantized way. There are a lot of different ways to do this, and that's a good reference to see a bunch of those different ways. At some point, for a future show, we should come back and revisit that. That sounds really cool. Yeah, yeah, and I think it probably deserves a show in and of itself. People might refer back to an episode that we had with Neural Magic on the podcast, where they talked about the various strategies for optimizing a model to run on commodity hardware like CPUs. But there's a ton of different projects in this space, both from companies and open-source projects, like OpenVINO and Optimum and bitsandbytes and all of these. So if you are needing to take this big model and either make it smaller or run it more
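The arithmetic behind quantization's appeal is simple to illustrate: dropping precision shrinks the weights proportionally. The precisions and sizes below are the standard ones, but treat this as a rough sketch; real quantized checkpoints carry some extra metadata and not every layer is always quantized.

```python
# Bytes per parameter at common load precisions; int4 packs two
# parameters per byte, hence 0.5.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def quantized_weights_gb(num_params: float, precision: str) -> float:
    """Approximate size of just the weights at a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1024**3

# A 7B model: ~13 GB in fp16, ~6.5 GB in int8, ~3.3 GB in int4,
# which is the difference between needing a data-center GPU and
# fitting on a consumer card or even a laptop.
for precision in ("fp16", "int8", "int4"):
    print(precision, round(quantized_weights_gb(7e9, precision), 1))
```

This is exactly why the quantized-loading tricks in the projects above let a model that crashed your Colab suddenly fit.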
optimized on certain hardware, then you might want to go through this model optimization phase. Assuming you did that, or you didn't need to optimize your model, we get to deployment. Now, Chris, what's on your mind when you think, these days, about where people might want to deploy models? Yeah, it's one of those situations where a lot of people I'm talking to are trying to decide between cloud environments, and we're seeing some people that had dived into cloud pulling back and investing in their own infrastructure, as well as starting to explore some of the other chip offerings. So people are kind of reconsidering that "go cloud when it's too big for you" approach now, and looking at running these open models on their own hardware, and trying to figure out: okay, I don't really know how to do that at this point. So that's where I'm really curious. Let's say that we go ahead and buy, you know, a reasonable GPU capability in-house, but it's not too big. What can I make of that if I'm willing to do a little bit of investment, but, you know, we're not talking millions and millions of dollars kind of thing? Yeah, yeah. So it might be good for people to categorize the ways that you might want to deploy an AI model for your own application. And even before I give those categories, I'd also normally recommend to people that I think the best way to think about deploying one of these models, if you're deploying it to support some type of application in your business, or for your own personal project, or whatever it is, at any type of scale, is that you're going to save yourself a lot of time by thinking about the deployment of the model as a REST API, with your application code connecting to that model. Or a REST API or a gRPC API, or whatever type of API you want, but the purpose of the model server is to serve the model, and then you have your application code that connects to that. Now, that could be running on
the same machine or the same VM as your application code, or it could be running on a different one. But as soon as you make that separation a little bit... you know, I don't really promote that people microservice everything, but I think in terms of model serving it's useful, because you can take care of the concerns of that model (maybe the specialized hardware it's running on), and then take care of the concerns of your application separately. And if your application is a front-end web app, or an API written in Go or, you know, Rust or whatever it is, then you don't have to worry about, oh, how do I run this in a different language, or that sort of thing. You just handle that through the API contract. So that's maybe one kind of classical separation of concerns, you know, that any developer would be doing. Yep, yep, exactly. And then you can test each separately, all of that good stuff. Sure. But if we think about categories of how you might deploy these things: there's the case where you would want to run this in a serverless way, like we already talked about what Cloudflare just released, but there's a whole bunch of these options, like Cloudflare and Banana and Baseten and Modal, and a bunch of different places where you can spin up a GPU when you need it, and then it shuts down or scales to zero afterwards. And depending on the size of your model and how you implement it, the cold-start time, the time it takes to spin up that model and have it ready for you to use, might be somewhat annoying for you, but the advantage is you're not going to pay a lot, so you could at least try that first. There are more and more offerings in that space, but a lot of them, like, you know, Baseten, the Cloudflare thing, whatever it is, you're going to be running in someone else's infrastructure. So if you have, like, your own on-prem thing or something like that, it's maybe a little bit harder to
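The "model server behind an API contract" idea can be sketched with nothing but the standard library. The stub below stands in for real inference (a Transformers pipeline, a Truss deployment, whatever you're using), and the JSON shape (`{"prompt": ...}` in, `{"completion": ...}` out) is a hypothetical contract, not any vendor's actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_model(prompt: str) -> str:
    # Stub standing in for real model inference; the application code
    # calling this server never needs to know what runs behind it.
    return f"echo: {prompt}"

def handle_request(body: bytes) -> bytes:
    """The API contract: JSON with a 'prompt' in, JSON with a 'completion' out."""
    payload = json.loads(body)
    return json.dumps({"completion": run_model(payload["prompt"])}).encode()

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(handle_request(body))

# To serve (on the GPU box, separate from your app):
#   HTTPServer(("0.0.0.0", 8000), ModelHandler).serve_forever()
```

The point of the separation is visible in the code: swapping `run_model` for a quantized model, a bigger model, or a hosted endpoint changes nothing for the front-end or Go/Rust application on the other side of the contract.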
deploy that sort of serverless infrastructure, because they've optimized those systems for what they are. So likely in that scenario you're signing up for an account on one of these platforms, deploying your model there, and then interacting with it when you want. A second way you could do this is with a containerized model server that's running either on a VM or a bare-metal server that has an accelerator on it, one or more accelerators, right? And so you could spin up an EC2 instance with a GPU, or, you know, you could even run this as part of an auto-scaling cluster, like a Kubernetes cluster or something like that. But these would be VMs that have a GPU attached or something like that, and they would probably be up either all the time, or they would have uptime that's different from the serverless offerings. Sure. And so you'd just be paying for that all the time. And in those cases, maybe you could use a model packaging system. Baseten's Truss is one that I use, but there are other ones as well, Seldon and others, that will actually create a model package in a Dockerized way that allows you to deploy your system. Is there any standardization yet in that space, or does each vendor have its own approach? I think each vendor has its own approach. Like, if you look at Hugging Face, they have the TGI, or Text Generation Inference, project, which I think is what they use a lot to serve some of their models, and that's set up differently than Baseten's Truss, which is set up differently than Seldon's system. There is some standardization, in that if you have a general, like, ONNX model or something like that, there are various servers that take in that format. But the way in which you set up your REST API might be different in different frameworks. So this is a very framework-dependent thing, I would say. Gotcha. Yeah, and there's also an additional layer of choice here, not only in
terms of what framework you use, but also in terms of optimizations around that. So there are certain optimizations like vLLM, which is an open-source project that... so this doesn't modify the model, but it modifies the inference code, which allows the model to run more efficiently for inference. So this is not the sort of model optimization we talked about earlier, which actually changes the model in terms of precision or in other ways, but a layer of optimization in how the model is called that helps it run faster. So yeah, there are a lot of choices there as well. And I think once you get to that point, and you've chosen, let's say, Baseten's Truss system and you've deployed your model, either on a VM or in a serverless environment, or whatever system you're using, then you get to these additional operational concerns, like: how do I plug all this together in an automated way? So if I push my model to Hugging Face, or if I update my inference code, how does that trigger a rebuild of my server and then a redeploy on my infrastructure? And that gets closer to what is more traditionally DevOps-y infrastructure automation type of stuff, which is its own whole land of frameworks and options and that sort of thing, but it's more of a standardized thing that software engineers are familiar with, right? That's kind of, from my perspective... if we were to just summarize: you go from model selection and experimentation (don't necessarily spin up your own infrastructure for that); once you figure out a model whose behavior works well for you, then decide if you need to optimize it to run in the environment you need; if so, optimize it; and then, once you're ready to deploy it, think about a model server, which is geared specifically to inferencing of your model, and that's the separation of concerns, and
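vLLM's speedups come from serving-layer techniques such as continuous batching and paged KV-cache management. As a deliberately toy illustration of the batching idea only (nothing here is vLLM's actual code or API), you can contrast handling queued prompts one at a time with grouping them so each model forward pass serves several requests:

```python
def batch_requests(pending, max_batch_size):
    """Group waiting prompts so that each (expensive) model forward pass
    serves several requests instead of one. This is the heavily
    simplified intuition behind serving-layer optimization, not how
    vLLM actually schedules work."""
    return [pending[i:i + max_batch_size]
            for i in range(0, len(pending), max_batch_size)]

# Five queued prompts with batch size 2 -> 3 forward passes, not 5.
queued = ["p1", "p2", "p3", "p4", "p5"]
print(len(batch_requests(queued, 2)))  # 3
```

The real systems go much further (admitting new requests mid-generation, evicting finished ones), but the economics are the same: fewer passes over the weights per request served.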
either you can use a framework like one of these we've talked about, or you could build your own, you know, FastAPI service around it, or whatever API service you like, and deploy it in a way that is ideally automated, so that you can do all the nice DevOps-y things around it. That sounds really good. So, you've done a fantastic job of laying everything out. I think I've talked you hoarse at the moment, trying to cover everything. Be careful what you ask for, Chris. So as we are winding up this episode: what are some of the open-source go-to tools that pop top of mind for you, that you tend to find yourself going to over and over again, you know, for folks to explore? Yeah, I think on the pulling-a-model-down-and-running-it-for-inference side, that sort of series of things, there's really nothing, in my opinion, that beats the Hugging Face Transformers library. And, for people that aren't familiar, this is not just for language models and that sort of Transformer; this is general-purpose functionality that you can use also for speech models and computer vision models and all sorts of models, both in terms of datasets and pulling down models and extra convenience on top of that. There's not really anything I think is more comprehensive than that, and Hugging Face has a great course; if you just search online for "Hugging Face course", it'll walk you through some of that. In terms of the model optimization side of things, I would recommend checking out a few different packages. One of those is called Optimum. It's a collaboration between a bunch of different parties, but it allows you to load models with the Hugging Face API (similar to how you would load them with Hugging Face) but then optimize them on the fly for various architectures, like CPUs or Gaudi processors or other special processors. In terms of, like, quantization and model optimization
of the actual model, like the model parameters, you could look up bitsandbytes by Hugging Face, OpenVINO by Intel, and this BigDL library from Intel, which I mentioned; the README in that GitHub repo also links to other things that people have done, so it's nice that you can explore that as well. And there are other projects, like Apache TVM and others, that have been around for some time and do model optimization. Yep, and we've talked about that one before. Yeah. And then on the deployment side, there's an increasing number. The one that I've used quite a bit is called Truss, from Baseten, T-R-U-S-S, like a bridge truss, and that allows packaging and deployment of models. You don't have to use their cloud environment; you can deploy to their cloud environment if you want, or you could just run it as a Docker container, but it's really this packaging. And there are other ones I mentioned, too, like TGI from Hugging Face, or vLLM if you're interested in LLMs. So yeah, there's a range there, and of course each cloud provider has their option to deploy models as well, like SageMaker on AWS, which a lot of people use also. So I think you've given us plenty of homework to go out there and explore a bit. Yeah, yeah, there's no shortage of things to try. It can be a little bit overwhelming to navigate the landscape, but I would just encourage people: you know, that first step of figuring out what model you need to use doesn't require you to deploy a bunch of stuff; just try it in a notebook. And once you figure that out, then find a way, even just search for it. Like, oh, you found out you want to use Llama 2 7 billion? The great thing now is you can search and say, "running Llama 2 7 billion on a CPU", and there'll be a few different blog posts you can follow to figure out how people have done that. So just follow that path and follow some of the examples that are out there. It's not like any of us that
are doing this day-to-day don't do the exact same thing. Like, when we deployed recently on the Gaudi processors in Intel Developer Cloud, I just went to the Habana Labs repo, where they talk about Gaudi, and they have, like, you know, a text_generation.py example or whatever it was called, and there's a lot of copying and pasting that happens. That's okay; that's how development works. Fantastic. Well, thank you for letting me pick your brain on this topic for a while. Sure. And like I said, I think you're almost hoarse after this one, but that was a really, really good instructional episode, so I'll actually personally be going back over it. Cool. Well, it was fun, Chris. Thanks for letting me ramble on, and I'm sure we'll have some follow-ups on similar topics as well. Absolutely. All right, well, that'll be it for this episode. Thank you very much, Daniel, for filling both the host and the guest seat this week on another Fully Connected episode. I'll talk to you next week. All right, talk to you [Music] soon. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next [Music] time. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Automate all the UIs! | Dominik Klotz from askui joins Daniel and Chris to discuss the automation of UI, and how AI empowers them to automate any use case on any operating system. Along the way, the trio explore various approaches and the integration of generative AI, large language models, and computer vision.
Leave us a comment (https://changelog.com/practicalai/239/discuss)
Changelog++ (https://changelog.com/++) members save 6 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Statsig (https://statsig.com/changelog) – Build faster with confidence. Startups to Fortune 500s rely on Statsig to make data-driven decisions. Ship smarter and faster with the unified platform for feature flags, experimentation, and analytics. Our listeners get free white-glove onboarding, migration support, and 5 million free events per month.
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Dominik Klotz – GitHub (https://github.com/programminx-askui) , LinkedIn (https://www.linkedin.com/in/dominik-klotz-127a931b7)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• askui.com (https://www.askui.com)
• askui on GitHub (https://github.com/askui)
• askui on LinkedIn (https://www.linkedin.com/company/askui)
• askui on Twitter/X (https://twitter.com/ask_ui)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-239.md) | 9 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app servers and database close to your users, no ops required; learn more at [Music] fly.io. What's up, friends? I'm here with Vijaye Raji, CEO and founder of Statsig, where they help thousands of companies, from startups to Fortune 500s, ship faster and smarter with a unified platform for feature flags, experimentation, and analytics. So Vijaye, what's the inception story of Statsig? Why did you build this? Yeah, so Statsig started about two and a half years ago, and before that I was at Facebook for ten years, where I saw firsthand the set of tools that people, engineers inside Facebook, had access to, and the breadth and depth of the tools that actually led to the formation of the canonical engineering culture that Facebook is famous for. And that also got me thinking about, like, you know, how do you distill all of that and bring it out to everyone? Every company wants to build that kind of engineering culture: building and shipping things really fast, using data to make data-informed decisions, and then also to inform what you need to go invest in next. And all of that was fascinating and really, really powerful, so much so that I decided to quit Facebook and start this company. Yeah, so in the last two and a half years we've been building those tools that are helping engineers today to build and ship new features and then roll them out, and, as they're rolling them out, also understand the impact of those features. Does it have bugs? Does it
impact your customers in the way that you expected, or are there some side effects, unintended side effects? And knowing those things helps you make your product better. It's somewhat common now to hear this train of thought, where an engineer or developer was at one of the big companies, Facebook, Google, Airbnb, you name it, and they got used to certain tooling on the inside, certain workflows, certain developer culture, certain ways of doing things, and then they leave, and they miss everything they had while at that company, and they go and start their own company, like you did. What are your thoughts on that, on that kind of tech being on the inside of the big companies, and those of us out here not in those companies, without that tooling? In order to get the same level of sophistication of tools that companies like Facebook, Google, Airbnb, and Uber have, you need to invest quite a bit. You need to take some of your best engineers and have them go build tools like this, and not every company has the luxury to do that, right? Because it's a pretty large investment. And so the fact that the sophistication of those tools inside these companies has advanced so much has left behind most of the other companies and the tooling that they get access to. That's exactly the opportunity where I was like: okay, well, we need to bring that sophistication outside so everybody can benefit from it. Okay, the next step is to go to statsig.com/changelog. They're offering our fans free white-glove onboarding, including migration support, in addition to five million free events per month. That's massive. Test-drive Statsig today at statsig.com/changelog. That's statsig.com/changelog; the link is in the show notes. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm the founder at Prediction Guard, and I'm joined as
always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well today, Daniel. How are you? I can't complain. It's like these two weeks in the Midwest in the United States that I enjoy each year, where it's between hot and really, really cold, so yeah, I'm excited about that, and just cranking away. I'll be at the Intel Innovation conference next week, which is going to be a fun experience, to see some of what they're doing in the AI space and talk to them about some of the stuff we've been trying on Intel hardware. So yeah, a lot of good stuff coming up. I'm really excited. Actually, it was through that Intel community that I met Dominik from askui, which is our guest today. So welcome, Dominik, it's great to have you here. Hey, thanks, Dan, for having me. Yeah, well, askui: could you tell us a little bit about, first, maybe just what is it about UI that askui is concerned with, and how did you start thinking about UI and automation? Maybe a little bit of background. What are we doing? We try to free humans from being robots. What does it mean in particular? Not only do you have repetitive tasks on user interfaces; what we are trying to do is bridge the gap between "you have to be the programmer" versus "you can describe in natural language what you want to do", so your intention, what you want to automate. For example, if you want to log in to your Facebook, then you can say with us: please click the log-in button, please fill in this, this, and this credential, with natural language, which allows also non-technical people to automate user interfaces and lowers the barrier at which you can automate or start to automate things. And the second question was a little bit about why I started caring about this problem. A little bit of background about this: I previously... my background is the software
developer. I was working previously at Siemens, before I founded askui with Jonas, my co-founder. What I was doing there: I was in the plant automation system, where we had to test everything, because our systems have to run, you know, 365 days a year, so everything should be tested well enough. So we were standing a lot inside the testing lab and testing everything through. Then I switched, and it was also a little bit the same thing: you have to test, you have to stand directly in front of your application and test it. And then I moved to a new organization which tries to modernize, or bring agility within Siemens, and I learned a lot about Scrum, agility, bringing in all the new stuff and tools. But I also had the problem that Selenium and other tools couldn't solve the pain of writing user interface tests the way you write unit tests. And then I was thinking, hey, can we not solve this with AI? Because AI can understand visual information and can understand natural-language information. So I thought, hey, can we not combine all of this? And there the journey started. This might be a completely ignorant question, but I just want it for my own context. In my background as a data scientist, I've participated many times in the fun activity of web scraping, and through that I know, like, oh, there's a lot of ways in which web data or UI data might be exposed. So could you talk a little bit about what the data looks like, what the problem looks like, as you're trying to get an AI model to interact, or use an AI model with natural-language input to then interact with either a UI or a web page? Especially when a lot of those things can be quite varied in terms of how they're built. Is it more of a visual thing, or is it something else? Yeah, we're really on the visual part. For example, when you're talking about Selenium: Selenium is only working with web interfaces, and you cannot use this
for example on Android and so on. What we are doing: we're really taking a screenshot of the system. It means we can take a screenshot of the application, we can take a screenshot directly of the operating system, and then the AI model that we trained can detect the user interface. This means we're detecting buttons, we're detecting text, we're detecting text fields, we're detecting checkboxes, we're detecting icons. So we have already trained a model which can understand the visual representation of a user interface, and then we connected our natural-language part to it, to match the intention you have, for example "click on the log-in button", to the real log-in button on the user interface, and then we're moving the mouse there. So this is a little bit different from the concept where I connect directly to the application and then try to scrape the source code of the application to get my information. Maybe this is the best way to differentiate it. So you're kind of starting with a screenshot instead of doing web scraping? Correct. And then just doing classic detection on the screenshot itself to do that? Correct. Right, so we have an object detection model in place which really takes the screenshot and detects all the elements on it. As a quick follow-up: when you do the classification and identify what you have in the screenshot, how do you tie that in with the tests desired? So you have a collection of tests that you're trying to automate; how are you tying, like, "I got a screenshot, I have a button, I have a text field", into a particular, like, unit test or something? Generally, we have our TypeScript application, which you can download, which is also available as an askui npm package which you can use. And with this you can directly install it, and then you generate a standard test, and then you have, for example, a Jest test block, or for Java a JUnit test block, where you then can write askui dot click dot
with, text Dot so on and the background what, happens we have uh we are we have a, controller then installed on your local, system which connects to the operating, system takes a screenshot and also have, the ability to control the uh move the, mouse then we're taking the screenshot, we are connecting the instruction the, click on button for example then this to, our inference back end then we get the, result back and then we moving them out, there and with this single steps as you, would do it in selenium you can then, write your workflows or tests to, uh automate every operating system as we, are currently running on Windows on Mac, and on iOS especially uh for legacy, application and the windows uh, environment we can test also these, application Al we are not limited to web, what I work did this ask the question it, does it does thank you I appreciate the, it was a good explanation so you, obviously had a certain perspective of, uh when you came to this problem because, you had worked at at semens you had, thought about this sort of automation, I'm wondering as you've developed this, technology and seen others that might, have had this need or experienced this, pain what is the sort of range of things, that you're seeing people either do or, want to do with this kind of automation, technology that might fit the use case, some times fit the use case that you had, in mind initially but sometimes might be, sort of new things that you didn't think, about uh, previously are the curly because we, using AI technology and with AI, technology you're a little bit more, flexible what you can describe because, you can learn it Based on data and have, not to uh explicit describe it which, things you want to do new use cases, are what we started where yeah we want, to uh do regression test for classical, user interface tests then we moved a, little bit in the direction of for robot, process automation I think where we, would Landing in the future is to, automatically 
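The loop just described (take a screenshot, detect the elements, match the natural-language intention to one of them, then move the mouse) can be sketched in miniature. This is an illustrative toy, not AskUI's actual data structures or scoring; the element fields and the word-overlap score below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Element:
    kind: str    # e.g. "button", "textfield", "checkbox", "icon", "text"
    text: str    # label read from the screenshot, if any
    box: tuple   # (x, y, width, height) in screenshot pixels

def match(instruction, elements):
    """Pick the detected element that best matches an instruction
    like 'click on the login button' (toy word-overlap scoring)."""
    words = set(instruction.lower().split())
    best, best_score = None, 0
    for el in elements:
        score = 0
        if el.kind in words:                        # element type mentioned?
            score += 1
        score += 2 * len(words & set(el.text.lower().split()))
        if score > best_score:
            best, best_score = el, score
    return best

# Pretend output of the object detection model on one screenshot:
detected = [
    Element("button", "Login", (400, 300, 80, 30)),
    Element("button", "Cancel", (500, 300, 80, 30)),
    Element("textfield", "Username", (400, 200, 200, 30)),
]
target = match("click on the login button", detected)
# A local controller would now move the mouse to the element's center:
center = (target.box[0] + target.box[2] // 2, target.box[1] + target.box[3] // 2)
```

In the real product the element list comes from the object detection model and the matching from the trained language side; here a simple word-overlap score stands in for both.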
Imagine you have a PDF with a form in it. Normally a worker copies the information from the PDF over into another form. What you will be able to do with our technology is say: hey, please automatically copy all the information you see on the left side to the right side. That's where we think the technology will go, so that you don't have to define the matching of the fields at all. And when you then also connect large language models, which have the ability to understand language a little better, a little closer to how we humans define it, then you can build such systems.

Yeah, that's really interesting. I'm thinking of a friend who works in the same office building we're in here; he stopped by and we were talking today. He takes screenshots of his work as he goes along during the day, every so often, because he can't always remember all the things he was doing throughout the day, so he keeps these screenshots as a sort of historical record of the various interfaces and what he was doing. I had that in my mind as you were talking through all these things; this would be really cool for him. He could potentially query certain things about his day and these interfaces and what he did, and create potential automations off of those.

This is also a use case we're thinking about: to record in the background, collect repetitive tasks, and then say, for example: hey, we have detected that you have done this five times over the last week; should we automate this for you? There are also other places the technology can go, because we have the understanding of user interfaces.
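The form-transfer idea (copy values from an extracted PDF into another form without hand-defining the field mapping) could look, as a toy, like this; the field names and the normalization rule are assumptions for illustration only:

```python
def normalize(name):
    """Reduce a field label to lowercase alphanumerics so that
    'First Name', 'first-name' and 'FirstName' all line up."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def transfer(source, target_fields):
    """Fill target form fields from source data by normalized-name matching."""
    by_key = {normalize(k): v for k, v in source.items()}
    return {f: by_key[normalize(f)] for f in target_fields
            if normalize(f) in by_key}

# Pretend data extracted from the PDF, and the target form's field names:
pdf_data = {"First Name": "Ada", "Last-Name": "Lovelace", "E-Mail": "ada@example.com"}
form = ["first name", "lastname", "email", "phone"]
filled_form = transfer(pdf_data, form)
```

A production system would use a learned matcher (or an LLM) instead of string normalization, but the shape of the task, source fields in and a filled target form out, is the same.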
With that understanding of user interfaces, and a lot of engineering around it, you can build such nice systems.

Automation and AI seem to scare people, for some maybe justified reasons, some unjustified (I don't know what's justified), but that's an automation where I'd ask: why wouldn't I want that, to save myself some time? I would love to automate some of my repetitive tasks. So one of the things I'm wondering, having looked at your website, is that you talk about all the different platforms: web apps and enterprise apps and everything. Are you able to take the same approach across the different platforms you're targeting? Is that what makes it so flexible, doing screenshots and such? Or how do you break down the challenge of addressing one platform initially and then starting to spread out across the other platforms?

The main technology is accessing the screenshot from the operating system and controlling it. If you have tried to do it, there is a lot of open source software already available which can do this. If you have such technology accessible, you can take a screenshot; for example we have Android support, where we can take a screenshot, and it's the same model and the same technology behind it as when we use it on the Windows operating system or the Linux operating system. We have one general model which can solve all the tasks.

So, Dominic, I'm already thinking of a lot of use cases that maybe I might want to automate by interacting with the system. One of the things you were chatting about prior to actually hitting the record button on this episode was the unique approach that AskUI has taken: more of a software engineering approach to understanding how these machine learning and AI systems work, and utilizing them in this more practical software engineering way. I'm wondering if you could talk about that in a little more detail, and how you've approached these problems in a way that you think is maybe unique, or at least presents your perspective on how to build systems like this.

Yeah. First of all, what we saw in the past: the research area has built a lot of models and released them to the public, but then it stopped. After you have published your paper, you have no interest anymore in bringing it to production. What we also see, beginning in the 2020s, is that a lot of new applications are coming which try to solve this, to formalize it as software patterns. When I started with machine learning, I was wondering: are there software patterns now, like a metric pattern, a trainer pattern, other kinds of patterns, which you can reuse and communicate in a better way? Why are we good at software development? Because we have standardized patterns which we use everywhere. So this is what we were always searching for a little bit, and also the tooling.

And how did we approach this from the machine learning side? The first thing we did was build an application which used our model directly, and we plugged in the first model we could use. We started with a model trained on only five images for object detection, which is not a lot of training data, but we did this to prove that it's possible end to end, and we mocked a lot of things away. Then we went out to the customers and said: hey, can you work with this? And they complained: yeah, it's nice, but your object detection model doesn't work so well. So what we did next: we collected more data, trained a better model, then went back to the customers and asked: is it enough now? And they said: okay, it's going in the right direction. But we could only support one application at the beginning, so we iterated based on this and tried to connect all our things together. What does that mean? We started by going directly out to the customer and then improving everything. And now, because this was hard at the beginning (I mentioned already that around 2020 all the tools like PyTorch Lightning and MLflow came up which you could reuse), we are migrating more and more to a data pipeline and trying to bring everything to the customer, so that they can also train by themselves. This is our software engineering approach: bring everything to the customer, let the customer complain, then do the next iteration step. That's the lesson.

As a follow-up, I want to ask you to flip-flop what you just covered a little bit: what is it like to engage from the customer perspective? You've described it from your perspective, so let's take it over. If you're a customer and you start to use AskUI and deploy it, what does that picture look like? What does the customer go through as they start deploying or utilizing the service?

There's also a little bit of history there, but as it is now: you go to our website, log in with your credentials, and then you can directly upload your first screenshot and run a simulation directly on the screenshot, so that you see the aha effect. The next step, once you have built up your workflow a little, is that you want to automate it, so you schedule it, for example in a Docker container in the background (we have that already if you're in the web environment), and after some time you get the result, and you're happy that you have automated the workflow with really easy steps. What we are now doing is reducing the hurdles, so that you have to learn less, because we're trying to take all the problems away from the user's perspective: the user should have a really easy life creating automations, maintaining automations, scheduling all the stuff, setting up all the testing environments. It's not only the automation part you're interested in; you're also interested in where you can schedule it, and all the surrounding things: how to connect data, and so on. That's what we are currently doing.

That's awesome. I want to propose what I think is probably a bad idea, but I want to get your reaction to it. You've enabled people to automate their interactions with various UIs, and there are plenty of cases where I don't really care to interact with a UI, but I do need to accomplish a task. For example, one that I struggle with all the time is AWS and its interface: you can do everything, but it's super hard to understand how to do anything. Is there an opportunity (and again, I think this is probably a bad idea from the start) where I just give some sort of agent tied to AskUI my credit card information and say: hey, I want you to create an AWS account, spin up this infrastructure, here's the GitHub repo, and when you're done give me the URL so I can access it? Now, I imagine that would also need to tie into other external knowledge, like the documentation from AWS. Is this type of scenario anything you've been talking about internally, or see as something that might come about in the future? As soon as you start automating things around UIs, there's the side we've talked about a lot, which is automating the things you've already done with UIs, but what if you want to do things with UIs that you haven't done yet, but don't really care to learn how to do?

This is also our plan. We already played around a little bit with large language models, giving them all the documentation, and also tried to translate, for example, Google documentation and create AskUI steps out of it. That was working quite well, and the direction will go there, so in the future we can do this. The other part was the intention-based request: "please create me an EC2 instance, and here is my credit card information." You can do this, but the main problem with this is humans: if you talk to a colleague, or to someone from another nation, you will always have communication gaps, so you always need the tiny steps in between where you can correct things a little bit. The other question is: are we trustworthy enough for credit card information? You're already submitting your security tokens to GitHub or GitLab, or your AWS access keys; there are standards in place, which are also validated, which can handle this, and we have to follow them. We also have business and enterprise customers which want these standards, so we have to be compliant with them. So from our side it's no problem, you can trust us. But you also have the other possibility: it's nice to create the tests online, then download everything and execute it locally on your machine, without any access on our side. You can keep all the information, all the secrets, on your own device, and you have the guarantee that they are not leaked anywhere.
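The "secrets stay on your device" advice usually translates into injecting credentials at run time instead of writing them into the automation scripts. A minimal sketch (the variable name is illustrative, and the in-code assignment exists only so the demo is self-contained; normally a CI runner's secret store would set it):

```python
import os

def get_secret(name):
    """Read a required secret from the environment, failing fast if it
    was never injected (better than silently running with a blank value)."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# Normally set by the CI runner or a local .env file, never committed to code:
os.environ["APP_PASSWORD"] = "dummy-for-demo"
password = get_secret("APP_PASSWORD")
```

The test script then references `password` and never contains the literal credential, so sharing or versioning the script leaks nothing.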
Yeah, that was another thing I was going to ask. Say I'm going to a site; I'll give an example, because I love the product and what they're doing. Chris, you remember we had Josh from Coqui on, talking about their voice studio and all that? I've been using that recently: you go in, you input your text, like a sentence, synthesize a voice, then you can change the language or something, and then go through and export the file. So there's not just UI interaction here; there's input data and output data. That might include things like passwords, and it might also include things like: hey, I generated a file with this UI, where does that go, a quick download or something? What are the best practices as you're automating these things? How should a customer think about these inputs and outputs, and how do you handle them in terms of what you're building?

I would redirect this question to: how are you normally doing integration tests or end-to-end tests? You always have the same data question there. First of all, I recommend customers to always use synthetic or generated data; don't feed production data into it. That's a security and process thing you learn when you start automating and testing: don't use production data. The other thing is that you have to apply security standards: there are environment variables or secret files which you can inject, so go with that, use our security functions, so that you're not sending us anything related to it. That's what I would recommend to customers.

I'm curious: we've been talking about testing for a while, and most of the use cases have been unit tests. Are there any other types of testing, like integration testing, that you're able to do? Is there a workflow for those kinds of things, or is this really focused on the screenshot itself, where each test stands alone? Is there any way to tie them together?

Yeah, you can tie them together. We are just a library which you can use in TypeScript, and in the future we will also support Python and other languages to make this technology available. But you can also combine this with Selenium or other techniques; you can connect to a database, get the data out, process it, transfer it to another system, and so on. We are really flexible in this regard, because our main concept (maybe this is another thing that's unique on our side) is that low-code user interface automation is nice up to a certain level, because you give the ability to other people, but at a certain point you reach the limit, and then you need developers to build some nice stuff: to automate, or to connect, for example, to MongoDB or some other sort of thing. In that case you always have the possibility to go from our low-code view to our code view and insert code directly. So no problem: you can do this, and you can also install other libraries.

[Music]

This is a Changelog News break. Pagefind looks pretty cool: it's a fully static search library that aims to perform well on large sites while using as little of your users' bandwidth as possible, and without hosting any infrastructure. It runs after your static site generator (Hugo, Eleventy, Astro, etc.) and generates a static search bundle to add to your built files. It then exposes a JavaScript search API that can be used anywhere on your site. Quote: "The goal of Pagefind is that websites with tens of thousands of pages should be searchable by someone in their browser, while consuming as little bandwidth as possible. Pagefind's search index is split into chunks, so that searching in the browser only ever needs to load a small subset of the search index. Pagefind can run a full-text search on a 10,000-page site with a total network payload under 300 kilobytes, including the Pagefind library itself. For most sites, this will be closer to 100 kilobytes." End quote. I'd love to see a comparison (link me up if you know of one), but my guess is that this could easily replace Algolia on lots of open source docs and websites. One less service to depend on; why not, right? You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

I always like to ask guests who have manifested some new idea fundamentally driven by AI and machine learning this question. As you were building out this product, what challenges did you find? You already alluded to this a little: researchers release all of these models, and then what happens after that? They're not really supported, or maybe they die off, or other things. So what specifically were the machine learning or AI challenges you faced as you were trying to make this work? You alluded a little to the data side of things, adding data over time, but I imagine there's much more than that. What are some of those things that stand out, practical things you faced in trying to apply this technology to a real-world automation problem?

Yeah, maybe to start at the beginning: previously I was a software engineer, and I had no clue about machine learning. I had a little theoretical knowledge, but theory is nice; in practice it's completely different. I had no clue, for example, that if you adapt the learning rate you can bring your model to convergence, and such things. I also struggled at the beginning with how you connect the layers to make everything work.
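The learning-rate remark can be made concrete with the smallest possible example: gradient descent on f(x) = x², whose gradient is 2x. This is a generic illustration, not the models discussed here. A small step size converges to the minimum; a too-large one overshoots on every step and diverges:

```python
def descend(lr, steps=50, x=10.0):
    """Run `steps` iterations of gradient descent on f(x) = x**2,
    starting from x, with learning rate lr."""
    for _ in range(steps):
        x -= lr * 2 * x   # x_new = x - lr * f'(x), with f'(x) = 2x
    return x

good = descend(lr=0.1)   # each step multiplies x by 0.8 -> shrinks toward 0
bad = descend(lr=1.1)    # each step multiplies x by -1.2 -> blows up
```

The same qualitative behavior (too small: slow, about right: converges, too large: diverges) is what you observe, far less transparently, when training a real network.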
Then, when you have solved that problem, the next place you start to struggle is making the experiments visible, so that you generate the right metrics and can see and understand what you're learning or not learning. When you have solved that challenge for yourself (you have tried out TensorBoard), you have the next challenge: how can I manage the data, how can I grow the data, how can I version the data? And then comes the challenge of repeatable experiments, so that you can say: hey, we have made progress step by step. For this you have to search a lot about what tools are out there and which tools are good; you need to develop a feeling for it. Then the next step: you figure out that you have messed everything up and your code is all over the place, so you have to think about how to structure the code. This is what I mentioned previously with patterns: you look at other repositories, at how other developers have structured their code so that it's more maintainable and more reusable, and then you come across, for example, PyTorch Lightning, which does this really well, building up models in a modular way. And then you're no longer one developer; you're two developers, two machine learning researchers, and you have to communicate, you have to exchange data. First you start copying and pasting data and sending it over Slack or something, and then you say: hey, it's totally stupid what we're doing, we need a data platform. Going through it step by step, you build up the complete expertise, so that you can say: now we need a complete data architecture, we need a metrics system, this is how we exchange data between the teams, this is the way we label data. And the other challenge is not only exchanging and getting data; labeling good data is also a challenge.
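"Making the experiments visible" boils down to recording the parameters and metrics of every run so runs can be compared later. Tools like TensorBoard or MLflow do this properly; the core idea fits in a few lines (the hyperparameters and scores below are made up for illustration):

```python
runs = []

def log_run(params, metrics):
    """Record one training run's hyperparameters and resulting metrics."""
    runs.append({"params": params, "metrics": metrics})

# Three hypothetical experiments:
log_run({"lr": 0.1,   "epochs": 10}, {"val_accuracy": 0.81})
log_run({"lr": 0.01,  "epochs": 10}, {"val_accuracy": 0.87})
log_run({"lr": 0.001, "epochs": 10}, {"val_accuracy": 0.84})

# "Visible" means you can answer: which settings actually worked best?
best = max(runs, key=lambda r: r["metrics"]["val_accuracy"])
```

Once runs are recorded like this (plus the exact data version and code commit), experiments become repeatable instead of anecdotal.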
So then you look at the labeling tools, and you figure out: labeling tools are nice for standard use cases, but sometimes (especially in our case, because we have five different models which are chained together in a nice way) you have to label different kinds of data for different models, because those model types fit this use case perfectly. So you start thinking about how you can improve the labeling process. Now we're building a new labeling tool based on Streamlit, so that we can easily connect, for example, our inference part, do a little bit of pre-labeling, automate stuff, and improve everything. And then you remember that you once talked with some guy who always said: if you do machine learning, you will end up building a labeling tool. Now we've reached that pinnacle. But it's a journey, and I think if somebody asked me now how to start, I would answer in a completely different way than how I went about it.

That's literally what I was about to ask you, because that was a fantastic journey you just took us on, with all the practicalities: you solve one problem and you hit the next, and the next, and the next. And you described coming in completely new to this and taking it all the way to being very, very productive. So that's actually what I want to ask: you said you wouldn't do it the same way. There is at least one person out there right now, if not many, thinking about AI. They may have dabbled in it, maybe not; they have an idea for a startup; they're listening to you and thinking: that's the guy who started doing this, but I have my own idea. What would you tell them? How do they get started? Because this is a daunting field to break into.

You mean only the machine learning part, or starting a startup based on machine learning?

Mostly the machine learning. How did you learn? It's a skill set that takes time to digest, and it's constantly evolving. How did you digest that skill set so that you could be productive?

Okay. First of all, I would recommend directly introducing tools which support you: use Hugging Face directly, try to build models based on Hugging Face, since they have the libraries, or based on PyTorch Lightning, because they give you things for free, I would always say. The other thing: I would introduce a version control system for data right from the beginning. We are currently using ClearML; starting over, I would recommend DVC at the beginning. And then, especially for the machine learning part, try to find one researcher and one software engineer, bring them together, and let them communicate, because then you get the efficiency from software engineers (especially cloud software engineers) and you get the research knowledge. Bring them together so they can learn from each other and exchange ideas about how to do things in a software manner and how to do research. If I were starting again, I would say: here you have one team, two people; one software person with a DevOps background, who also knows the software development and DevOps spectrum a little, and bring a machine learning researcher in with them. I think they would benefit the most.

Yeah. You talked a little about your journey on the technical side, learning about these tools, and also about bringing on more people. As you look forward to the next steps in the roadmap (I love that you publish your roadmap on your site, which is really cool), what do you feel are the challenges you're facing right now? Does it have
to do with: oh, now what do I do with all this generative AI stuff, and how does that factor into our product? Or does it have to do with: how do we make these models better and support a wider set of use cases? Or is it a combination, or something completely different?

I would say it's not so much about the technical challenges, because technical challenges you can solve with this knowledge and a little bit of research; normally, if it's not physically impossible, you can solve things given a certain amount of time, and that's usually no problem. The main problem I see is speeding up the development process itself, so that the right things are researched, the right things are designed, and the right things get built, so that you bring more focus onto one topic. And if you have a lot of people in your company, or a lot of interfaces, I would say, then you have to bring them to one common understanding of what you want to achieve, how you want to work, how you want to define requirements. That, I would say, is currently the main challenge. And on the technical side, we have to talk to customers, get the feedback, build the stuff as quickly as possible, and iterate on what the business wants.

So, as we wind up here, and this is pretty typical, we love to get the benefit of your insight, not only on these short-term practicalities but also a little bit of the dreaming. You've come this far in this journey you've described, and you now have this capability that didn't exist. As an entrepreneur looking at the future, what are the ideas? They may be speculative; they don't have to be based in the realities of what we have today. Where do you want to go with this? What do you envision building over the next couple of years? You can pick the horizon: two years, five years, whatever you think. When you lie in bed at night thinking, "that's the place I'm going eventually", what does that look like?

Maybe to look a little bit back at the history of how this project started; I don't think I've told this before. When I was at Siemens, I also did my Master's thesis on visual question answering, which should solve as one end-to-end task what we are now doing in separate tasks. So my dream is to take the technology that's available now, large language models, including the visual part and the natural language part, and combine everything, so that we can teach the model, bring every kind of data into the model. What does that mean, every kind of data? For example, we get manuals for software which say: hey, please click on this button. Then a person comes to you and says: hey, please create me an account in this application. You give the system the manual, and it does it completely automatically, without any extra kind of learning, because it has learned how to interact with your operating system. That's where we want to go: bring today's technology in and make it accessible for the users, so that everyone, really everyone, also your grandma, can use it.

That's great. I am excited to see some of those things come down the line. One of the things I've enjoyed about this conversation is that you've brought a lot of the positive side of automation, which is really, really helpful to technical people, but also to other people doing tasks they actually don't want to do, or can't scale past a certain point. So I think it's really awesome, and I'm looking forward to seeing your future work with AskUI. Thank you so much for joining the podcast; really appreciate it, Dominic.

Thank you for having me.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already. And if you're a long-time listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freakin' resident Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Fine-tuning vs RAG | In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also chat about OpenAI Enterprise, results from the MLOps Community LLM survey, and the orchestration and evaluation of generative AI workloads.
Leave us a comment (https://changelog.com/practicalai/238/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Demetrios Brinkmann – Twitter (https://twitter.com/Dpbrinkm)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• MLOps Community (https://mlops.community/)
• LLM survey report (https://mlops.community/surveys/llm/)
• LLMs in Production Event - Part III (https://home.mlops.community/public/events/llms-in-production-part-iii-2023-10-03)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-238.md) | 36 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I'm the founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. Today, Chris, I don't know if you've been listening to The Changelog, our sister podcast; they've been doing these Changelog & Friends episodes where it's not necessarily a guest interview, but more like, hey, let's invite one of our friends on and just talk about cool stuff. I feel like we've got a little bit of a Practical AI & Friends today, because we're joined by Demetrios from the MLOps Community. You're involved in events and podcasts and reports and surveys; basically, you run the whole AI world. So welcome, and we're glad to have you back as our friend. He's like the deep state, you know? He's running everything behind the scenes. I am honored to be considered a friend, first off, I just want to say that, because I appreciate the amazing stuff that you all are doing here, and of course whenever I get the opportunity to come and chat with you folks, I'm going to jump at it. Excellent. And I know that the last time I saw you, I think, was at one of the recent LLMs in Production events, which was super fun. Give us a little sense of what the past few months have looked like
in the MLOps Community, and what are some of the events you've been doing? It seems like there's so much; I think you also had in-person things, so give us a "what's happening." Yeah, I appreciate you calling that out, and of course I appreciate you presenting at the LLMs in Production conference. That was a blast. Yeah, it was a good time. That event itself, we had two days, and each day had three tracks: two full talk tracks and one workshop track, and there were over 82 speakers in the whole event. Man, that was a lot of work. And add on to that, that was the virtual part of it, but we also had in-person parts. In Berlin there was a hackathon that we did, in Amsterdam we had a meetup, in London we had a watch party, in San Francisco we had a meetup and then a hackathon right after, and then a workshop. So there was all kinds of craziness going on, and that was just for the LLMs in Production event. Beyond that, I mean, we're now in 37 cities around the globe, so wherever you are, you probably have an MLOps Community meetup around you, unless you're in North Africa or South Africa; but then again, we're even in Lagos, Nigeria, and we're in Australia, which is always awesome, because I keep threatening to go out there and just pop up at a meetup. So that's the in-person stuff. There are people that have just gotten super excited about it, and they decided to start a community chapter, which is the unbelievable power of community. I am blown away by it every time someone approaches me and says, hey, I want to do something in my city. Yeah, that's so cool. I've been able to attend a couple of in-person events this year related to AI stuff, and I will in the fall as well. I'm curious to know: from my perspective, it's cool to see things transition a little bit from kind of hypothetical discussion to people talking about, hey, we did this,
this is how we implemented our workflow; hey, have you tried this? And I of course love those types of conversations. Do you get a similar vibe? How does the community, in terms of generative AI and LLMs, seem different now than even six months ago or so? Oh yeah, that's so good, because it does feel like there are use cases that are becoming very clear, what LLMs shine at and what they're not good at, and then there's also the stack that's forming. And we did a survey; that's another thing that we did in the community, on top of all the other fun stuff. We surveyed people that are actually using LLMs, or even not using LLMs, and we asked them why you're not using them, and we went through a bunch of stuff, or why you are using them, and what are some big pain points. It was becoming very clear that there are certain use cases that people are using LLMs for, and we can get into that in a minute, and then there's also this stack that is forming. The stack was probably the most interesting. I know you guys mentioned the a16z article; Rajko is one of the authors of that, and he helped me with the report when I wrote it too, so he's behind the scenes moving the puppets, the puppeteer. The cool thing with the stack, if I break it down for those who haven't seen the diagram we put together, is that you have the foundation model, and then you have some kind of vector database, which is like the hero, the champion, in this whole LLM scene. If you need to do some fine-tuning or model building, you have that component to it, but we can get into that; I'm very opinionated about the fine-tuning part, and I'll tell you why in a bit. And then you have stuff like developer SDKs, this is your LlamaIndex or LangChain, and then on top of that you have the monitoring, or experiment tracking, prompt tracking,
those kinds of things, like a Portkey or a PromptLayer; there are all kinds that are coming out. And what we didn't have in that moment, which I think is starting to emerge more now and I'm really excited about, is: how are people actually evaluating these models? Does it come with your different tools, or are you doing extra stuff on top of it? That was the inspiration behind a whole other survey that we're doing right now, on evaluation. Yeah, I can say personally I'm doing extra stuff on top; that's my very short answer, which is of course much more involved. But I don't know, what is your sense of that? Intuition-wise, am I out of the norm or in the norm with that? No, you are completely in the norm, and I think that's the hardest part, and why we wanted to do a survey around it: this is one thing that is super unclear, and nobody really knows if they're doing it right, and they don't really know what the best practices are. So you also don't really know what you're evaluating. Are you just evaluating the model? One thing's for sure: all these benchmarks are complete bullshit. We all know that, right? That is very clear. How do you really feel about it? Yeah, they are interesting in certain ways, but the types of evaluations that are going on there, I think they serve a place maybe, but they don't translate into, okay, now I have this use case. If I take the model at the top of that leaderboard, I am very much not guaranteed to have, quote, the best results for my use case, and I think that's what's confusing to a lot of people. 100%, that's exactly it: these models and the use case that you have, who knows how they're going to match up against one another? And then it's not only that, but how are you monitoring or evaluating for toxicity, or the ability for it to do the one thing that you care about? I mean, I don't care if it's... And
also, it just kind of feels like a lot of marketing at the end of the day, when you see the newest model come out and everybody loves SOTA, you know, state of the art: "SOTA, beats ChatGPT on all these different metrics." I just kind of laugh, because I feel desensitized to that these days. Yeah, I think it's also often kind of funny to me that ChatGPT is being used as a static baseline for these things, when ChatGPT isn't even a model, right? It's a product that has layers on top of it that handle all sorts of things. So that's another misconception I've seen: is it really fair to compare a model's output to the output of a product that has a lot of functionality built around it and with it? Yeah, and when that's drawn out, it also lets people know that, hey, the LLM here is not your application; there's this whole layer on top of it. I know you were talking about retrieval-based augmentation, or retrieval augmented generation, as we were gearing up for this episode. And of course, whatever your opinion about prompt engineering is, there is an engineering element to how you call these models and chain things together, and, like you say, evaluate things, validate things, filter things, whether it be for toxicity or factuality or whatever it is. There's just so much around that that's not the LLM, and I think people confuse those concepts a lot of the time. You know, just as a little aside here: listening to you guys talking about this, and you guys are experts at this stuff, I'm just thinking about all the poor people out there who are listening and maybe aren't at your level. It's a tough thing to try to figure out how to navigate. When you think about it, you guys are debating this and are not completely in alignment yourselves; I'm having a lot of empathy for the people in the
audience who are going, how the hell am I supposed to do this? Well, and then alignment is the other thing. Yeah, oh yeah, another buzzword; tick it off on the buzzword bingo. There you go. Well, it's funny, because when you are trying to figure out your use case and how to get the best performance out of an LLM, there are things that you go through, right? There are almost stages, and you figure out that the debugging is quite difficult, as you were saying. Is it the prompt that's giving me the problems, or is it something in my retrieval, or the way that I'm creating these vector embeddings? Where exactly is my problem, and how do I isolate it so that I can make the whole system better? And again, it goes back to evaluation, and evaluating the whole system. How do you look at what you're doing as a whole, as opposed to just, oh cool, there's this model, and if I go to a hosted version of it and ask it if the Earth is flat, it tells me yes or no, and most of the time they say yes, which is crazy. It's like, oh, you've been trained on one too many flat-Earth subreddits; or they've just seen a lot of answers that are positive, so it seems probable. Yes. Yeah, as you're talking, one of the things I've realized is that in the a16z stack they call this layer "orchestration," and I think a lot of the tools that fit into that layer are amazing, along with the things that they're doing, you know, LangChain, LlamaIndex. Sometimes, though, it's just how rapidly the field is advancing: you import whatever chain from LangChain, and then whatever model, and then whatever vector database, and you put it all together, you run the thing, and then you get an empty string output, right? Like you're saying, where did it go wrong, and how do you divide-and-conquer debug that? So I think a lot of what I've done in my own applications, the ones
I'm working on for clients and other things, a lot of the time, to be honest, it's a lot simpler for me to write out the chain of my LLM reasoning in regular Python logic: add whatever exception handling I want, make the call to the vector database manually, and create some logic around that. That Python DIY side is less convenient, but I usually end up getting there anyway. Again, it's part of the maturity of this field, I guess, and partly how quickly things are advancing; they'll have a fix for that. You know, in the spirit of, you may have heard, when you have a problem with Facebook, what's the answer? More Facebook. Well, in that spirit, I'm sure Meta will put out a debug-llama for you in no time, and it will solve all of your problems right there. Well, it's funny too, because it feels like that just forces you to stay simple. Yes, and realizing, you know what, maybe I'm trying to over-engineer this, and I can go far with... maybe I don't even need a vector database, which is kind of blasphemy. How dare you; you did call it a hero and a champion a few minutes ago, I want to point out. Yeah, I mean, it is wild how far you can go. I talk to startups almost every day in my day job, which is not the MLOps Community, and they have real revenue, and I'm always asking, oh, so how are you doing this behind the scenes? A lot of them aren't even using vector databases, but they're cashing in, and they have a product that's working, and it's working at scale. So there is this misconception, I think, sometimes, that we need all the bells and whistles and this stack I was just talking about, with the vector databases as the champions. For your use case, do you really need it? Because it does add that complexity.
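A minimal sketch of that DIY approach, plain Python, explicit exception handling, a hand-rolled retrieval call, might look like the following. Note that `retrieve` and `call_llm` here are hypothetical stand-ins (a toy word-overlap ranker and a stubbed model call), not any particular library's API:

```python
# Plain-Python "chain" instead of an orchestration framework.
# retrieve() and call_llm() are hypothetical stand-ins: swap in a real
# vector-database client and model API in practice.

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, a hosted open-source LLM, ...).
    return f"[answer based on prompt of {len(prompt)} chars]"

def answer(question: str, docs: list[str]) -> str:
    # Explicit control flow makes each step easy to debug in isolation:
    # you can inspect the retrieved context before it ever hits the model.
    try:
        context = retrieve(question, docs)
    except Exception as exc:
        raise RuntimeError(f"retrieval failed: {exc}") from exc
    if not context:
        return "No relevant documents found."
    prompt = ("Answer using only this context:\n"
              + "\n".join(context)
              + f"\n\nQuestion: {question}")
    result = call_llm(prompt)
    if not result.strip():  # the "empty string output" failure mode
        raise RuntimeError("model returned an empty answer")
    return result
```

The point of spelling it out this way is the debugging story: each step can be inspected or unit-tested on its own, so an empty-string output can be traced to retrieval, prompting, or the model call rather than disappearing inside an imported chain.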
So, going back to the tried-and-true principle: just keep it simple. [Music] You mentioned that you had some strong opinions related to retrieval and fine-tuning. I think this is the time for the hot take. I'm officially declaring this a Practical AI & Friends episode, so it's a safe space to declare your hot take. A safe space, I love it. Well, no, I just think that I hear a lot about fine-tuning, and I don't know if people who throw around the idea of fine-tuning something really understand what you fine-tune a model for, especially with LLMs. If it's a Stable Diffusion model, that's a whole different story, and I think sometimes these diffusion models give us the wrong idea of what fine-tuning an LLM will do. Undoubtedly you saw the rise of Lensa, or Photo AI, or all of these; basically, what those companies were doing in the background is they were running some kind of diffusion model, and you would upload your selfie, or a picture of your dog, and you would be able to bring it into the world of AI art. If you take that concept over to LLMs, you think, oh, well, if I just fine-tune an LLM on all of my emails, then the LLM will know how to write emails like me. But it's not like that; there's a misconception there, because it's not equal in that regard. You don't fine-tune something so that it can understand you more and you can call it out and say, "now write like Demetrios." For that case, let's just be clear, that's where retrieval augmented generation shines, because you just say, hey, here's a database, or a vector database, of all of Demetrios's emails. At most, you can do some few-shot prompting and say, write like this; here are five examples of Demetrios writing a response to this, so make a sixth one, and you're golden. You don't need to burn a lot of cash on GPUs, and GPUs are scarce these days, to fine-tune some model that may or may not work after you've fine-tuned it.
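The few-shot alternative described above, pull a handful of past replies and ask for a sixth in the same style, could look roughly like this. The example emails and the prompt template are invented for illustration; they are one plausible shape, not a standard:

```python
# Few-shot style prompting as an alternative to fine-tuning: retrieve a
# few of someone's past replies and ask the model to imitate them.
# The emails below are made-up placeholders.

def build_style_prompt(examples: list[str], new_message: str) -> str:
    parts = ["Here are examples of how this person writes replies:\n"]
    for i, ex in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{ex}\n")
    parts.append(
        "Now write a reply to the following message in the same style:\n"
        + new_message
    )
    return "\n".join(parts)

emails = [
    "Hey! Love it. Let's ship it Friday.",
    "Totally agree -- keep it simple.",
    "Nice one. I'll loop in the team.",
    "Ha, classic. Count me in.",
    "Sounds good, let's do it!",
]
prompt = build_style_prompt(emails, "Can you review the survey draft?")
```

The resulting `prompt` string would then be sent to whatever model you already have access to, with no GPU time spent on training.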
So I think it's worth calling out when you should be fine-tuning things, because I don't want to say never fine-tune. I just want to say I've seen a lot of people talking about it, and also a lot of companies starting up that will tout how easy it is to fine-tune, how you should be fine-tuning, and "use your company's data to fine-tune." The best way that I heard fine-tuning talked about was in a recent MLOps Community podcast that I had with Shahul Es. He created Ragas, which is an evaluation framework for RAGs. It's awesome, by the way; go check it out. But he's also very big on fine-tuning, and he's one of the main people on an open instruct project, a whole open-source LLM project. He was mentioning how you want to fine-tune when you have some new function, some new output, something you need to teach the LLM to do that it doesn't necessarily know how to do. A perfect example of this is OpenAI functions, so ChatGPT or GPT-4 function calling: this makes it much easier for you to get very, very clean data, or data that's output in a certain way, and it gives you structured data. That is a whole reason you would want to fine-tune something. The other example, I think, is Code Llama. But where I think Code Llama falls down is that if the original base model doesn't see a lot of examples of code, then no matter how much you fine-tune it, you're not going to get a good coding model out of it. If there are people using Code Llama and loving it, come tell me, because I haven't found anybody yet. So that's my rant about fine-tuning versus RAGs. The last thing I will say is: if you're looking at fine-tuning, and you've been brainwashed by the society out there, or the community at large, and you think you need to do it, let's just remember why ML was so hard before LLMs. Collecting data is
not so easy; labeling data, cleaning data, all of that stuff is quite difficult, and you need to do that if you're going to fine-tune. And I just want to remind everyone that Daniel likes to clean. That's something I miss; I feel like, because of LLMs, I'm not doing it quite as much, which is a hole in my life. I'm determined to remind our audience of that fact. You know, it's just kind of horrifying, so I remind everybody once a year. It's therapeutic. There you go. I think, along with some of what you've expressed, Demetrios, there's a general misconception about the data that you would use to fine-tune an LLM. I've been in countless conversations where the idea is, oh, well, we want to do question answering on our documents or something like that, and we would love to fine-tune a model on our internal company documents, and then it's going to be better at question answering. What I generally tell those people is: hey, when you're fine-tuning one of these models just on that raw, unstructured text, in the best-case scenario what you're creating is a better autocomplete model for your type of documents. What you're not doing is creating a better question answering model; those are two very separate things. So if you wanted to fine-tune a model like that, and, like you're saying, there may be cases where that's useful, maybe it's a domain thing, maybe you want very structured output answers like you're talking about, in that case you actually don't want to fine-tune just on that raw text data. You want to create your own set of instruction prompts, likely, that are fed in with various questions, and you have the answers, and you have everything set up, and you have thousands and thousands of these examples. When you frame it that way, people are like, oh, so it's a lot of work to create that sort of dataset.
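To make that concrete, here is a hedged sketch of what instruction-style fine-tuning data looks like, as opposed to raw document text: explicit (instruction, input, output) records, commonly serialized as JSONL. The field names and records below are illustrative placeholders, not a specific vendor's schema:

```python
import json

# Turning curated Q&A pairs into instruction-style training records,
# rather than fine-tuning on raw unstructured text. Real fine-tuning
# data would need thousands of examples like these, carefully cleaned.

def make_record(question: str, context: str, answer: str) -> dict:
    return {
        "instruction": "Answer the question using the provided document.",
        "input": f"Document: {context}\nQuestion: {question}",
        "output": answer,
    }

pairs = [
    ("What does the ops runbook cover?", "The runbook covers deploys.",
     "It covers deploys."),
    ("Who owns the billing service?", "Billing is owned by team B.",
     "Team B owns it."),
]

# JSONL: one JSON object per line, a common format for fine-tuning data.
jsonl = "\n".join(json.dumps(make_record(q, c, a)) for q, c, a in pairs)
```

Writing even these two records by hand hints at the point being made: assembling thousands of good ones is real labeling-and-cleaning work, not a button press.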
And yeah, it turns out it still is, right? Oh, that's so funny; that is so well put, too. I love that, because that's my rub with fine-tuning: people don't realize how difficult it is and how much effort and work goes into it. That's kind of why Mosaic sold for a billion, you know? Because it's hard, and the companies that actually are doing it, the ones Mosaic was able to convince they needed to do it, they got a lot of money for it. But that's a whole different story. Yeah, and maybe for those in our audience that are less familiar: maybe they've interacted with LLMs, they understand the concept of fine-tuning, but they're not as familiar with retrieval augmented generation. I know one of the things you mentioned is that the MLOps Community is doing a whole course on this, it sounds like. So could you give the kind of high-level pitch for retrieval augmented generation? What does that mean to you, as a person who is obviously a promoter of this approach and of something people should try as they're getting into these technologies? Kind of that general pitch, and then we can talk about a few more specifics that you'd like to highlight. Thank you for bringing that up. I mean, this is the first time that we're actually doing a course, so it is a little nerve-wracking and at the same time super exciting. And I'm just going to clarify that it is not me leading the course; we got an expert, the expert of experts, Raul, who created the course for us. He is an engineer in San Francisco who's been doing this for a long time, and he's been doing it at some serious scale, so he knows what's up. He does everything from the Kubernetes clusters to the end prompts, and then monitoring the whole system. So again, I'm big on thinking about things in systems. But when it comes to retrieval augmented
generation, let's just go back to what the hero of our story is in this case, and it's not the vector database. I would say that question answering systems are basically the hello world of working with LLMs these days; that's kind of what it feels like to me. I don't know if you guys have that feeling too, but if you want to get your feet wet with LLMs, you're probably going to do some kind of "talk to your data," or, hey, if I ask this question, what responses do I get? In those use cases, the best way to set up and architect your system is through retrieval augmented generation. Again, going back to what I was saying earlier, it's not absolutely needed, because I've seen viable businesses built without it; but if you're at the point where you're like, oh, well, this will add to my tool belt, another tool in my toolkit, then what we created was a course that covers it. We wanted it to be something that you don't need to spend six weeks on; we wanted it to be that you could level up really quickly. So you go through creating a data pipeline and pre-processing that data, then ingesting that data into a vector database; then you can semantically search for the answers to the questions you're getting from the end user, and then it compiles a response using an LLM. That's the basics of the course. The reason we wanted to do this was because of a hackathon that we did, funnily enough, right before the LLMs in Production conference. We did a hackathon in San Francisco, and it was all about how bulletproof your LLM stack is. So we created a bunch of questions, and we gave everyone that was part of the hackathon all the data from the MLOps Community Slack. This Slack has been around since 2020, there are like 177,000 people, and it's very active; all the channels are going off every day. I can't remember how many megabytes there were, but for text data it was a lot.
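A toy, self-contained version of the course pipeline described above (preprocess by chunking, "embed," ingest, then semantic search) might look like this. Bag-of-words counts stand in for a real embedding model, and an in-memory list stands in for a vector database; both are simplifications for illustration:

```python
import math
from collections import Counter

# Toy end-to-end RAG ingestion + retrieval:
# chunk -> "embed" -> ingest -> semantic search by cosine similarity.

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def chunk(text: str, size: int = 8) -> list[str]:
    # Pre-processing step: split the corpus into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Ingest: store (chunk, vector) pairs -- the vector-database role.
corpus = ("the mlops community slack has been active since 2020 "
          "retrieval augmented generation pairs a vector database with an llm")
index = [(c, embed(c)) for c in chunk(corpus)]

def search(query: str, top_k: int = 1) -> list[str]:
    # Semantic search: rank stored chunks against the query vector.
    q_vec = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q_vec, item[1]),
                    reverse=True)
    return [c for c, _ in ranked[:top_k]]
```

In a real system the top chunks returned by `search` would be stuffed into a prompt so the LLM can compile the final response, which is the last step of the pipeline as described.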
People were saying, oh my God, that's some serious amount of text. So everyone had access to that, and then we would rate everyone's stack on how accurate the answers were for the questions and answers that we gave them. We basically asked people to build these different QA bots, or chatbots if you would call them that, and at the end of the hackathon we gave them a hundred questions and we saw how accurate the responses were. The questions were things that were from Slack, or just random questions about ML and MLOps, and the best entries were the ones that would give you an accurate answer and then cite: oh, but you know, this is another way of looking at it, and here's a thread on it, and it would link back to the Slack thread. So, all that being said, yeah, we're excited about the course, and we're excited to do that. There's all kinds of cool stuff I want to do in the community, and the only way I'm able to do it is that people in the community are participating. I said Raul is the guy that created this course; we've got all kinds of other courses in the mix from other community members. People that are experts on what they feel they know best can propose topics, and then we put them on our learning platform. And where do people go to find said learning platform? Because I need to point a couple of my coworkers to it. Well, if you just want rants from me about how you shouldn't fine-tune, then you can find us on the MLOps Community podcast, but the easy one is learn.mlops.community.
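Hackathon-style scoring like that described above, grading every stack's answers against a shared question set, can be sketched in a few lines. Substring matching here is a deliberate simplification of however the event actually judged answers; production systems often use human review or an LLM as grader instead:

```python
# Score a QA bot's answers against a shared rubric of expected phrases.
# The rubric and bot answers below are invented for illustration.

def score(answers: dict[str, str], rubric: dict[str, str]) -> float:
    """Fraction of questions whose answer contains the expected phrase."""
    correct = sum(
        1 for q, expected in rubric.items()
        if expected.lower() in answers.get(q, "").lower()
    )
    return correct / len(rubric)

rubric = {
    "What is RAG?": "retrieval augmented generation",
    "When was the community started?": "2020",
}
bot_answers = {
    "What is RAG?": "RAG stands for Retrieval Augmented Generation.",
    "When was the community started?": "Around 2019, I believe.",
}
accuracy = score(bot_answers, rubric)  # 0.5: one of two answers matched
```

Running every team's bot through the same `score` call is what makes the hackathon comparison apples-to-apples: same questions, same grading rule.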
That should get you to the learning section of our website. And yeah, as I said, it's exciting, and a little bit nerve-wracking, because we plan on doing two styles. This one that we just released is go-at-your-own-pace: you get it, you can go through the lessons, and hopefully you can get your company to pay for it, because it's within the learning budget and it actually is for work, right? That should be an easy sell, but if your boss is not interested in paying for it, just DM me and I will give you some amazing copy you can send to your boss, a nice little email that will hopefully convince them to change their mind. The other piece is that we're going to start doing cohort-based courses, and that is interesting on another level, because we've got the whole MLOps Community, and everyone that is part of the courses can go into a special Slack channel; they can be with the teachers and the teaching assistants and all that fun stuff. I mean, it's not like we're breaking any new ground here; courses are kind of a tried-and-true method of learning, so this is just us having fun with it. Yeah, that's awesome. And speaking of learning, you already mentioned the surveys that you've been doing. There's a previous survey that's already published (I know you're working on the next survey that will come out), and people should look at it; we'll definitely link it in our show notes. There's a lot of really interesting stuff in the survey, and I'll just call out a few of the highlights here, and then I'd be curious to know what stood out to you, Demetrios or Chris, whichever. So, some of the highlights, at a high level: company use cases like text generation and summarization are useful, but participants are going deeper and exploring a lot of other ways to use LLMs, like data enrichment, data labeling
and augmentation, generation for subject matter experts, and other things. The use of large language models in organizations is still unclear, due to high costs and unclear ROI. They talk about hallucinations, and the speed of inference potentially being a blocker for certain types of use cases. You talk about the infrastructure (we already talked about that stack a little bit), and some of the things around augmentation and consistency of models. So those are just some of the highlights from the survey, which people can go into in a lot more detail. But I guess one question I would have, as you were building this report: first off, was the whole report generated using an LLM? Because I didn't see that highlighted in the report. That would be the most meta thing ever, huh? Not the company Meta, just the old term for meta. That is classic. Yeah, old. And honestly, I tried really hard, but one thing that I did, and I'm going to preface this with: I am not Gartner, so I did not know how to create reports before I did this. Now I've learned; I learned what was painful. I spent months with this data, just tearing it apart, trying to figure out what are some clear signals here, and you can find the signals. But the reason it was so difficult, and that it was a blessing and a curse, was that every single question, instead of having answers given to you, was just a free-form text box. So this is the report that we created, but the actual raw data is linked in the report, and anybody can see it. I'm a big fan of that, because I have my biases; I wrote this report, but the report is for the people that don't have the time, or don't want to go and spend hours upon hours looking through all the raw data, fine-tuning a model on it. Yeah, they're like, give me the tl;dr and I'm good. That's kind of what I set out to do. And I also wanted to see what big things stood out, not only to me. Every
time, I think the reason it took me so long to release this was that I would ask a cohort of friends to review it, they would give me feedback, and I would incorporate that feedback and then be like, okay, I think I can send it off to the designer now, let's get it going. Next thing you know, I would ask another cohort of friends, or people in the community, to review it, and boom, I would have all kinds of new feedback: oh, but you know, I noticed there was this, and people were talking about that. I did that probably six or seven times, and that gave me the confidence that it's still biased, but it's not as crazy biased as I think it would have been if I had just put it out myself, and I got a lot of input from other people. If anything, it's biased by a lot of different people. So there's that piece, and with the evaluation survey, I want to do the same exact thing. I learned my lesson, so it's not only free-text-form answers now; I put in a lot of multiple choice and check-all-boxes-that-apply, and also an "other" at the end. Hopefully that will help me do more fancy Excel math and formulas on it, because even with LLMs, I was trying all these new ones, you know, "use ChatGPT in your Google Sheets," and it didn't work. I spent days trying to ask it questions, and it just did not work, man. I ended up getting kind of frustrated, because I spent more time trying to get the LLM to give me some kind of insight than if I had just spent the time with the data and gotten the insight myself. I think everybody who has played around with LLMs has probably had that experience once or twice, where it's like, I've been prompting this for a really long time; I wonder, if I had just sat down and written the report, or sat down and tried to think of things on my own and create something, I could have done it in the amount of time that I've been prompt-tuning
this. But anyway, I get a question as I'm looking at it. Obviously it's a population of people like us that are answering the questions, because right off the bat it's like, how many of you are using LLMs in your company — and 61%, you know, which is a pretty high number right there. But I'm curious, do you go through and — like, what constitutes using an LLM? Can it be as simple as pulling up a prompt for ChatGPT and posing prompts to it, or does it need to be putting in your company data? Right there you go — oh my God, everyone owns it now. Yeah. Oh, that's so funny. I would love to talk about that for a minute too, but yeah — it was because there were only a few people that were just like, oh yeah, I'm just using ChatGPT and I'm going directly to OpenAI's website. That was not the majority; I think it was like one person, if I remember correctly — and don't quote me on that, because it's been probably two months since we put this out, and I can't remember anything since we put it out. But it's more people that are trying to set up systems with LLMs, and these systems may be API calls to OpenAI, or they may be hosting their own open-source LLMs. And what's unique, or what questions are you intrigued to find the answers to in the upcoming survey? Maybe there are some commonalities that you'd like to see carry through the surveys — but what are you most curious about, going forward into this next round? I'm really fascinated by how many people are using open source versus using OpenAI. And one of the most hated visuals of the whole report is this one on, like, page 10, and it's talking about who's using OpenAI and what size they are. And people were like, this does not explain anything; what you're trying to say and the visual do not match up at all. And so again — I am not Gartner; I am one random guy who has never written a report and never did a survey, but for some reason
I felt compelled to do this three months ago — five months ago now — and I almost regret it. But seeing the final product, I'm very happy that I did it. And so, yeah, you should be proud. There are always things to look back on, but it's a nice report; it's a good one. Exactly. And we were able to move fast on it, and I'm sure Gartner is going to put something out soon, but that's the beauty of the community — we can move a little bit faster. Anyway, back to this visual that people did not like — and I mean, a lot of people did not like it. The whole idea was: are you using OpenAI? And we found that there's a bit of a correlation between people that are in super small startups, like 0 to 50 — I mean 1 to 50 — and so it's an autonomous startup, yeah, exactly, it's so startup-y. So anyway, the 1-to-50 range, they're not using OpenAI, and the thousand-plus are not using OpenAI, but the 500-to-1,000 are. And the numbers that I'm throwing out here — I didn't preface this, because I got into the startup thing — the range is the number of employees that you have at your company. So if it's 1 to 50, then you're not using OpenAI; if it's a thousand-plus, you're not using it — at least that's the preliminary data that we saw — and if you're in the middle, then you are. And so we had some theories about this, and it was like, hmm, I wonder if it's because, if you're a startup, you think that you can create a moat by not using it, and maybe your whole business is around using some kind of LLM and creating some kind of differentiation from OpenAI. And then if you're a larger company — ah, this was before the Enterprise scam that they've got going on, but again, we can get into that in a minute — if you're a larger company, you probably have resources to figure it out yourself, and you don't necessarily need to use OpenAI, and you're probably less comfortable with your data going outside of your walled garden. I think it's the latter, yeah. If you're in the middle,
it's like, let's just go as fast as we can. And so I want to see if that theory holds up in the next one. Yeah — if you're over a thousand people, you have a legal department, and that's what's inhibiting you. Good point, Chris, good point. But now, I mean, Chris, you tell me, man — do you think that this Enterprise play is going to work out? Do you think people are going to trust them, in terms of OpenAI Enterprise? I do think it will work out from a business standpoint for them. I get dangerously close to some conflict of interest here, so I'm going to pass — I'm gonna pass on this one. I'm pleading the fifth. The fifth — I'm pleading the fifth, so I'm backing away from this question. It sounds like you're not on the Enterprise hype train. Uh, Demetrios, I work for a company that's a thousand-plus is what I'm saying, so I'm going to back away from that question. He's got a legal department. But yes, we have a legal department, and all of them might listen to this podcast. Exactly. I just wonder — I mean, I've been meming with friends about it, and I think it's kind of funny how they specifically call out: we're not going to use your data to train any of our models; we're not going to know about any of your data. And I think the biggest question is like, oh yeah, because they have the best track record of doing what they say. There is a healthy skepticism, I would say, among large companies on that — definitely, yeah. Well, also — I don't know, this is kind of avoiding the question a little bit, but I think that it is related, in that you brought up the leaderboards earlier, Demetrios, and I think any company that goes all in — it's almost like a new version of vendor lock-in, like we used to talk about, where now you have model-family lock-in. Where, hey, you know, these models, they're good — no doubt the GPT models are really good — but are they going to be the models that are going to be best for your
use case, either in terms of output or in terms of the other things that are highlighted in your survey, right? Like latency and resources and a lot of these practicalities, and how you can control them, and all this stuff. You know what they need, don't you? Yeah, exactly. And so I think there is an element here of: hey, do I want to go all in on a single model family, or does my strategy play more toward a bit more of a model-agnostic approach, where I can pivot between different models for different uses — maybe fine-tune when I need to, but even if I don't fine-tune, I have a lot of potential options to use in a privacy-conserving way? So yeah, I think that's another element. However that works out, it will work out, but I think there's this kind of side element here, which is how the model landscape is evolving versus how a single model family is evolving, which is good to highlight. So true. And actually, you know, it's funny — I don't want to say that there is not an immense amount of value in ChatGPT and GPT-4, because one thing has become very clear after interviewing a ton of people who are using large language models in production: they are able to get up and running and proving value with their LLMs so quickly. And I was just talking to Thibaut — I'm going to have to check how I pronounce his name; he's French, and it's spelled very differently from how it's pronounced — and he's running the LLMs at AngelList. And I asked him, hey, so are you worried about that vendor lock-in type thing, because you're only on OpenAI? Have you messed around with even just Anthropic or Cohere? And I thought his answer was fascinating, because he told me: look, you know what, there are so many other pieces of surface area that I would like to cover, so many other features that I would like to implement with these LLMs and be able to use in our product, that if I'm stuck on one feature, trying to figure out what the best model is, then
that's going to slow me down. I just want to go and get as many features plugged in as possible, and I know that ChatGPT works really well, so I'm just going to go as hard as I can and incorporate these features, because I have a laundry list of them that I want to do. And then, once I get all of that out of the way, I can start going back and saying, okay, let's figure out: should we bring a model in-house, should we use Claude or something else? You know, it's really interesting to hear you say that, because that's such a startup mentality — we just have to run really fast and get as much done as we possibly can in the shortest possible time. And then you get to that large-organization thing, and they're worried about the lock-in, they're worried about where their data is going, and they go much slower as a result of that. It's almost like an inverted approach based on the size of the company and its maturity. And that's what I was trying to show in this horribly positioned visual graph that you see on page 10 of the report. So we've finally gotten to the conclusion — we all agree on the point that the graph is showing. And to leave the graph out of it — I think that's good, yeah. Don't look at the graph. I'm sorry — he's laughing because we can all see each other, even though this is audio only, and he's laughing at himself. Not to belabor the point, but I think you all are exactly right, and I see this because almost every lead that's coming into Prediction Guard — just in terms of where people are at, regardless of whether they're a good fit for what we're doing or not — almost every lead that's coming in, it's almost laughably predictable that they say: hey, we've prototyped out something very quickly with OpenAI that shows there's huge value here; now what do we do? Almost every conversation is starting like that. So yeah, I think there's even like a temporal — you know, we can
make your graph — I think we should make your graph sideways and add a temporal element, make it 3D over time; people like that more. Where, at the beginning of a project, I think a lot of people are doing that, whether they're authorized to do it or not in their organization, and then they get to that point of: how do we scale that up, especially if we're in this larger-organization environment? Yeah, it's like, oh, I've got to go present this to the C-suite; we've got to erase any use of sending data outside of our company; we can't tell anybody about that. Yeah, exactly. As we're coming close to the end here of our "and Friends" episode — which I hope is only the second of many times we'll get to hear from you on the show, Demetrios — as you're looking to the next... I don't even feel like we can go to the next year — as you're looking to the next couple of months of AI life, what are you hyped about? Just generally, across the industry, what positive kinds of trends are you seeing that give you hope for where things are headed? Let's see — that is a great question. Leading question, but great question. It's got to be the positive side, huh? We've got to end on a positive note. You can dip before you get there if you want to, though, just for fun, because I want to hear what he has to say. Well, for those that are just listening, I am wearing tie-dye, so it is all peace and love here. And I am excited because right now, anyone who wants to mess around with machine learning and AI, they can. I have seen so many scenarios where a product person has said, you know what, let's try and throw some AI at this, and they've been able to create an enormous amount of value for their company by adding some features that just call ChatGPT or Claude or whatever. And that, for me, is really enticing, because the barrier to entry has just been destroyed. It almost was like the last couple of years — the first
couple of years of the MLOps community — we, I mean, we still are; there's still a lot of "traditional" machine learning, and I put it in quotation marks because it's not that old, but there's a lot of really hard stuff happening with quote-unquote traditional machine learning. But with the advent of LLMs, a lot of that has become really easy, and so all these NLP tasks that were really hard up until 2022 — they're not as hard. And so you're seeing the creativity of people being able to put LLMs into their products. I love that. That is something — I didn't realize how much I enjoyed product and the idea of speaking with product people until ChatGPT came out, because now I'm like, oh, I want to talk to more product people. The product owners are really great people to talk to, because they have these wild ideas, and they know how to figure out if this is actually a success or not. So there's that piece. And dovetailing on that — we're having another LLMs in Production conference on October 3rd. I don't know if I told you guys this, but definitely come, and I will explain why; I'll try and sell it to you as much as possible right now. I wanted to get as many talks from product people as possible, and some of the stuff that we are talking about is like: how to build an economical LLM solution, how to prioritize LLM use cases, and how to put LLMs into your product. So those are very much for the product owners and the product engineers, and it's because of that — I got really excited now that this whole space has been opened up to the product owners. And so, can I tell you what you can expect at the conference? Of course, sure. I'm just going to tell you why it is the greatest conference on the internet right now. Because where else — and Daniel can attest to this — all right, let's just... live music interludes, yes. Where else can you prompt me, not an LLM — you can put in the chat what you want me to sing about, and I will
sing it in real time, just improvising on my guitar. And last time we had a whole song about catastrophic forgetting. And for those who do not know, that is where — actually, I'm not even sure I fully understand what happens, but basically, when you fine-tune — it's all that damn fine-tuning — when you fine-tune, sometimes a model will forget something, because the new data replaces the old data. But the catastrophic forgetting song was a hit. You've also got some semi-illegal betting going on during the breaks, and you win swag. And then — semi-illegal betting? There's a gray area, Chris; it depends on what country you're in. All right. And so you can also expect — I mean, I'm just not sure that I've seen any conference that goes into the amount of technical detail that we go into. And I want to highlight this piece — we can end with this. It is really hard, but it is very important for me to have a fully diverse field of speakers, and I cannot tell you how much work it is. And it frustrates me, now that I've looked at other conferences, and I see — it's almost like, oh, these organizers were lazy, you know? Because there are amazing people out there from underrepresented groups that are doing some incredible stuff, but you almost have to look a little harder, because they're not necessarily on the conference circuit; they're busy shipping. Let's be honest — they're not out there talking about it; they're actually just doing it. And so I've had to look really hard, but I am very excited about the speakers that we have, and the diversity of our speakers. And I think that, out of all the things, that's probably what I'm the most proud about. That's awesome. Yeah, I can't wait. And you said it was October 3rd, is that right? October 3rd. Awesome. I'm going to be — so, we have one sponsor for this event, and they rented a whole studio in Amsterdam, and Amsterdam's like four hours
from where I live. And so hopefully, you know, everything goes all right — I don't eat any mushrooms or anything, or smoke too much weed, and not show up for the actual event. But in case that does happen, we have shirts. Did you see the shirts, Daniel? I think I've only seen the ones — "I hallucinate more than ChatGPT." That is it. I may live that in Amsterdam; you never know. Oh, how can I get one of those? Yeah, I'll share the link with you. We've got special ones — they only pop up for sale during the conferences, so you can't get them now, but hopefully about a week before the conference starts we'll start selling them again. And yeah, that's it; that's what I've been up to. Not too much, you know — just trying to keep things chill. Yeah, that's awesome. Well, thanks so much, Demetrios, for joining us. I hope that we see you again here in some number of months — don't stay away too long — and we'll look forward to hearing about the results of the survey, the events, and, I'm sure, all the like 15 new things that you're doing next time around that you're not doing this time around. Totally, dude. I've got so many ideas. I have so many — I mean, yeah, I really appreciate you guys letting me come on here and rant about fine-tuning and talk about the cool stuff we're doing in the MLOps community, and I love what you all are doing. So thank you; it was fun. Thanks — we'll see you [Music] soon. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts — check out what they're up to at fastly.com and fly.io — and to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next
time, [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Automating code optimization with LLMs | You might have heard a lot about code generation tools using AI, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike from TurinTech to hear about practical code optimizations using AI “translation” of slow to fast code. We learn about their process for accomplishing this task along with impressive results when automated code optimization is run on existing open source projects.
Leave us a comment (https://changelog.com/practicalai/237/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Mike Basios – Twitter (https://twitter.com/maik18) , LinkedIn (https://www.linkedin.com/in/michail-basios-90410436)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• TurinTech AI (https://www.turintech.ai/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-237.md) | 88 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen — check them out at fastly.com — and to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at [Music] fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack; I'm the founder of Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well today, Daniel. There's just so much going on these days in this industry, in terms of AI, that I'm constantly learning new stuff and finding out who's doing what. Yeah, it's almost like you need to optimize some things about your life to keep up — would you say that's accurate? Yeah. Speaking of optimization, I think that's a thread to pull right there. Yeah, yeah. So, speaking of which, today we have with us Mike Basios, who is the CTO and co-founder at TurinTech AI. Welcome, Mike. Hello, guys; nice to meet you. Yeah, well, we alluded to optimization, and I know one of the things that TurinTech is working on is code optimization with AI. And maybe some people — actually, probably a lot of people listening to this podcast — are familiar with certain developer tools that are AI-flavored, maybe GitHub Copilot or something like that, for generation. I'm wondering if you could take a moment, before we dive into AI-driven code optimization, to just help set the stage for those that aren't aware: what do you mean when you say code optimization, why is it useful, and how has code optimization, quote-unquote, been part of the developer life cycle for some time? We are in an
area nowadays where we see more and more applications consuming a lot of cloud resources, a lot of resources in general, and everybody is trying to optimize the performance of their code. Now, when we're talking about code optimization, and performance in particular, typically people would like to optimize things like the application being faster; maybe memory consumption — we all know everybody complained about Chrome using too much memory, for example, in the past; or CPU usage, which is connected very much with the energy that different software uses. If we talk about mobile phone applications, there we would all like them to be more efficient and consume less energy. And practically, that's the area we have been focusing on in my research, in our research group, and in the company. Maybe you could talk a little bit about some of the history of that research. I'm sure that it has, just like everything else, been impacted by this latest wave of AI technologies and generative AI, but I know that the company, and you yourself, have been involved in research prior to and all during the development of these things. So could you give us a little bit of background on how you first started thinking about these problems, and how it developed over time? Yeah — code optimization is not a new thing. If you read the research papers from 20, 30 years ago, everybody wanted to optimize and make code efficient; there are a lot of tools, like profilers, that help developers find hotspots. The biggest problem in this area is: okay, we profile our code, but how can we automatically improve it, so we make it faster? Typically this is a very, very manual process, and the majority of the people that work in these areas are super specialized. And it's more and more difficult nowadays to find people who know how to optimize the performance of their code, because the programming languages are becoming higher and
higher level — like, now people write more in languages like Python, JavaScript, TypeScript. So nowadays, the real companies that deal with code optimization are companies from the lower, hardware space — like big Intel, Nvidia, and those — and they write specialized software that takes advantage of their hardware, where they show the hardware outperforms; or technology companies that need the scalability. But the majority of developers will not necessarily bother about the performance of their code as the immediate first thing they need to optimize. And we have a platform that helps practical engineers automate this kind of process — to make it easier for developers to identify places in their code that are slow, and then to optimize them without necessarily having the knowledge that they would need, and also to do it automatically. Now, the history of code optimization — as I said, it was a very, very manual process. In the very beginning, very few people knew how to optimize code for specific hardware; I guess they had to read books and compiler options. Eventually you had better and better compilers, so you could use compilation options to optimize your code, tune the options of the compilers, etc. And then you had a lot of profiling tools that also helped developers optimize — but still, all this process, most of the time, has been manual or semi-manual. And that's where we see the advances of AI helping this process. To give you a bit of context on how we started our startup: it started after we published a paper in 2018, I think — that was at the Foundations of Software Engineering conference — where we showed that we could automatically help developers choose better data structures, by taking code, looking at the data structures, and optimizing them by suggesting variations. For example, sometimes in languages like Java, you don't need to use an ArrayList if you can potentially use a LinkedList in
a scenario where that may be better. So we try to make these small changes and show that we have a good performance impact. But the majority of people at that point were manually translating code — rule-based transformations, like regular expressions: if I see this pattern, convert it to this pattern. That is how people have been doing those things with code refactoring tools, until LLMs came into the discussion. Speaking of that — kind of getting into this latest wave of, maybe what I'll just refer to as developer tools that are AI-related — again, people might be familiar with code generation types of tools, or explainers, or something like that. There have also been — I've seen — kind of agentic types of tools that will write a PR for you: like, you say, hey, I want to do this thing, and there's a PR generated here. Given the focus on code optimization, could you draw out, for people that are maybe just getting into this, when they would want to use this kind of tool versus some of these other applications of generative AI to code? How does it fit into that ecosystem? So, the way we present the code optimization tool currently is as part of your CI/CD process. You make a pull request, then you run some unit tests, you have your integration testing, potentially you have a security scanning tool like Snyk or Checkmarx, etc. And then the next step is — depending on where your application is deployed — we analyze your code and we tell you: make these changes, because you can have this 20% improvement in CPU and execution time, etc. That is the way we present it currently: as a CI/CD tool in the toolchain of developer tools. However, if you think about the technology underneath — and all these tools, the way I see them, are going to be using AI, and they will take advantage of LLM-based solutions — from the moment
you're using LLMs — if you generate code or you translate code, it's kind of the same approach; it depends on the data you apply the LLM to. So for code generation tools, practically, the way those LLMs have been trained is that they see there are some comments next to the code, so you say: if I give you those comments, can you predict the code? And in code translation — where you don't know about the speed of the code — you say: I have seen this C++ code and the equivalent Python code. And LLMs do it; for example, Copilot or ChatGPT can translate code. Is it perfect? Probably not yet, but that is the fundamental technology. So code optimization is in the same set of tools, but it says: this is slow code that I have seen; now, I have seen a variation of faster code, so I can recommend you this faster code. But eventually you can expose the LLM, like any other tool that is built on VS Code or is LLM-based — you can expose it in an editor, and then the developers will get suggestions for faster code from our LLM. That doesn't mean it will necessarily be beautiful code, but it will be faster for the hardware you need to run it on. I'm kind of curious — as we're talking about speed, are there any other dimensions that are relevant in there, that you guys are interested in, that are either adjacent to speed or contribute to speed, or any other characteristics that may not be directly speed-specific, but are things that you're starting to target or expect to target? We apply multi-objective optimization, so when you do the translation you can have different objectives that you try to optimize: speed is one factor, memory usage is another, CPU usage as well. Typically there is a trade-off between speed and memory usage — if you have more memory, you would like to use it to increase speed. So our tool allows for that and gives you different suggestions for what you need. But we see
this paradigm being used in other use cases. Like, users have told us: I can improve the readability of my code, as long as I know that I don't impact performance. For example, there are managers of teams who have, let's say, five projects with five teams of developers, and they would like to guarantee that the quality across them is good. So this AI approach — these tools, Artemis or other tools in the LLM space — will be able to help with those. But the biggest problem of any LLM-based tool currently — Chris, you mentioned you are working for Lockheed Martin, etc. — is that the code generated by any LLM is not guaranteed to work 100%. Also, you need existing tools to check whether this code is secure, and you do not know whether you will break your code. So we're still in the early stages of incorporating those LLMs; that's why it's also easy for a developer to take it as a suggestion right now. But I'm pretty sure companies that work in bug fixing and security checks are using, and will be using, more and more LLMs, because they can train on all the data they have access to, so they can have a competitive advantage. Would it be a good parallel, just to draw on something people may have seen before in other domains — it almost sounds like a parallel to kind of a rephrasing type of prompt in an LLM, where you might say — I do this a lot with, you know, emails or other things — here's my really bad email, make it flow better and sound better; or here's my goofy email, make it business professional, or something like that. You drew the comparison to maybe machine translation or something like that — is rephrasing kind of a good way to think about this? Let me give you a very simple example. Let's say you have an essay that you need to write, and you have 10 paragraphs in that essay, and you would like to have a version of
that essay that is much better, so you can get a better grade. Right now, what we do is look at all the paragraphs and provide you better variations. So we will get you version one, with three changes that are applied by different LLMs. Then you would need somebody to grade that essay; in our case, we measure how fast the code is. So somebody says, okay, you have a 70%. Then we take that output, give it back to the LLM, and it gives you another version. Practically, you start from version zero of your essay, you apply the different LLMs, you get feedback, so it's like reinforcement learning, live learning, and you get different variations, and that score keeps increasing. At the end you have a translated version of your original that the LLMs and all those refactorings produced, which is better on the metric that you have. And in the platform we test that the code passes: we compile it, and we also run the performance tests. In the essay scenario, you would have a teacher to grade it. If you think about how OpenAI and everybody have been doing their training, they usually take LLMs and then use reinforcement learning, RLHF, and all those techniques. We have done that in the code optimization setting. That's why we have had some impressive results: we took an open source library, we just put it in the tool, and it suddenly optimized execution time by 30% without us doing anything. The models learn by themselves.

[Music]

This is a Changelog News break. You can add Meta's Code Llama to the ever-expanding list of code-generating LLMs. Based on Llama 2, Code Llama comes in three sizes (7 billion parameters, 13 billion, and 34 billion) and in three different varieties: a general model, one tuned for natural language instructions, and one fine-tuned for Python. How does it stack up? Well, Meta claims it outperforms other publicly available LLMs, and it shares the same open-ish license as Llama itself, which is free
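The grade-and-refine loop Mike describes (generate variants, score them, feed the score back, keep the best) can be sketched as below. Everything here is a stand-in: `benchmark` pretends to time the code (lower is better), `propose_variants` pretends to be the LLM call that receives the previous score as feedback, and the "code" is reduced to a single tunable parameter so the sketch stays self-contained and deterministic.

```python
def benchmark(params):
    """Stand-in grader: in the real system this compiles the code, runs the
    tests, and times it. Here, pretend the optimum is chunk_size == 64."""
    return abs(params["chunk_size"] - 64)

def propose_variants(params, score):
    """Stand-in for the LLM: propose a few tweaked versions, informed by
    the previous score that would be fed back in the prompt."""
    step = max(1, score // 2)
    return [
        {**params, "chunk_size": params["chunk_size"] + step},
        {**params, "chunk_size": params["chunk_size"] - step},
        dict(params),  # keep the current version as a candidate too
    ]

def optimize(params, rounds=10):
    """Version zero in, iteratively refined version out."""
    score = benchmark(params)
    for _ in range(rounds):
        best = min(propose_variants(params, score), key=benchmark)
        if benchmark(best) < score:       # only accept strict improvements
            params, score = best, benchmark(best)
    return params, score

final, score = optimize({"chunk_size": 8})
```

The shape of the loop is the point: a measurable objective plus a propose/measure/feed-back cycle, with the score improving monotonically across versions, as in the essay-grading analogy above.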
for research and commercial use, unless you compete with Meta. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

[Music]

So Mike, you got into something that I'm super interested in, in terms of how you're going about this problem: the fact that you're using this sort of reinforcement learning loop, or feedback loop, to improve the performance of your tools. Given that you and your team have worked with code generation or code-specific models for some time now, before we get into some of the cool stuff that you've done specifically, could you just comment on the state of the code generation models that are out there, on the open source side specifically? If you want to highlight any closed-source ones, that's perfectly fine too. From your perspective, how is the ecosystem of code generation models changing and advancing, and what is the state of it these days?

So, I was one of the very first believers in LLM-assisted code generation tools. I tried to get beta access to GitHub Copilot, and even tried to use GPT-3 to see all this kind of technology. I'm a very big believer, because I have seen members of our team using LLMs to build things much, much faster. For example, we had a backend engineer, and we needed a prototype for a frontend; he just used one of the closed-source LLMs, and in one day he did a new UI in a language he didn't even know. He didn't know TypeScript. So I see that they are very, very promising. I see more usage right now for good developers that already know the basics of computer science. I'm not a believer in "hey, you don't need to code, you
will just use this LLM for you". There are, you know, some videos potentially over-promoting that. But if you're a good enough developer, you know what you need, and these tools can practically help you dramatically: to build easy applications, to generate tests, to generate comments about your code. And in terms of performance, from our experience, I believe GitHub Copilot and ChatGPT are still outperforming the other models, but we see more and more open source models starting to become very, very good in the different languages. We have been trying Llama 2, we have been trying CodeGen, we have been trying all of those, and we even expose them in our platform so people can compare the results. Those tools will need to become a bit easier for developers and VS Code tooling to use, because the API exposure that some of the closed-source models provide solves a lot of headaches for a lot of developers; that's why a lot of developers still have a preference for those. But definitely, open source models are very good, and I see them becoming even better if they are fine-tuned on a specific language or a specific context. I'll give you an example: if we want to do translation from SQL to SQL, let's say we want to optimize the SQL queries that people are running on their databases, you can take one of the open source models and fine-tune it on SQL, and you probably will outperform GPT-3.5 or 4 in the context that you have. There's also a bit of psychology among developers here, but I am a believer that people that use these tools have an advantage over people that don't use them currently.

So that raises an interesting point; it's a little bit of a tangent, but you've kind of inferred that this is changing the way we humans are coding. I know you also talked about whether LLMs would just write the code
for us, and the overhype, you know, certainly today about that. But it's changing the way that we code as humans, and it's extending our capabilities dramatically, in terms of being able to reach beyond what we might have been able to do two years ago, for instance. Do you see that accelerating? If you look at a traditional coding team from a few years ago, as you're producing your product for folks, you're starting to recognize that individual coders are starting to elbow their way out of their traditional swim lanes with these new tooling capabilities that you're providing. How does that change things? How does the market change, as you're looking forward?

I believe dramatically, and we have tested this with our internal developers. I have seen a dramatic improvement in productivity between developers without access to LLMs and developers that have it. There are two things. One is that you can make developers more efficient, which everybody would like to hear. But from a more senior, manager level: you do not need, unfortunately, as many developers as you would need before. You cannot avoid this, and developers should think of it like: okay, I am competing with some other developer on my team to produce, let's say, an API or a UI. If the other person has access to Copilot or ChatGPT and I don't, I guarantee you, with very, very big chances, the person that has access to ChatGPT and those tools will outperform and have faster results. It is like you have a very, very good assistant next to you that you should use; otherwise you're losing. That's how I see it.

Yeah, as a two-second follow-up to that point you just made: I think I've seen some stuff recently over the last month or two saying that the majority of active developers out there are now using LLMs. So there's been really rapid adoption here within the developer
community. We've moved very quickly from those who had versus those who didn't have, into a world where everybody has. They may not all be using exactly the same models as things progress, but everyone has it. Any thoughts about what that means? It's like you're democratizing LLMs across the population of developers, and now they're competing; it's kind of like me and my LLMs are competing against you and your LLMs as a developer. Just any thoughts, waxing poetic a little bit, about what the implications are?

Yeah, I mean, it's a weird world that I don't recognize anymore. Like, I code less and less, but I now can code again. For example, we have a data scientist on the team, and he says: I don't feel like I'm a coder anymore; I'm just a manager, a user of this LLM, and I validate the output. And it's a bit ridiculous, he says, because even for simple things like copy-paste, I will not bother; I'll just say, okay, can you refactor this? It definitely has changed. And okay, there may be implications for people's creativity. If you go into the AI and LLM space for images, when those models generate images, a painter may say to you: hey, you may lose creativity, because you always generate the same thing. I don't have answers for those things, and I don't think a lot of people have answers. We'll just see how things go, sincerely. But yeah, it's interesting.

Yeah, this idea of being a manager of your assistants I think is really helpful. I forget who it was we had on the show, Chris, but they were saying: hey, think about this thing just like a high school intern. Is a high school intern going to solve all of your problems? No. But if they work all day on your problem, or let's say you have an infinite number of those high school interns that can just do work all the time, is that useful?
Certainly there's a management aspect to that, probably more so with high school interns than with LLMs, I'm not sure. But I love that metaphor; that's really good. I am also wondering about this particular application of AI within someone's code base. Similar to what we've seen in other cases with Copilot and some of the things that have happened there, it's a very sensitive area, particularly for enterprise business users. If you're an indie hacker and, like you said, you want to create a TypeScript UI and you don't know TypeScript, boom, you can get some really cool results really quickly. But of course, enterprise code is part of the IP of a company. There are two aspects to that, one of which is the fact that companies have been hesitant, or have even sued others, over usage of their code or data in ways they didn't expect. The other aspect, which you alluded to, is that it really is powerful when you start to bring your own data to the table, especially with these open models, both because they have privacy-preserving deployments and because there are code preferences and other things that your company might have. I'm wondering if you could speak to that a little bit, and how you envision helping build a product that's doing code optimization for people. How do you think about people creating customized models for their code bases, the proliferation of these customer-specific models, and the hosting of those? What goes through your mind when you're thinking about those things?

That's a very, very important topic, and we have quite a lot of experience with this. I will mention an example. We went to a client, a very big technology firm, super big, one of the best, and we said: hey, you know, this is our platform, we have these LLMs, and you can use any LLM of your choice, like
GPT-4 or open source LLMs like Llama 2, etc. In the beginning they said: we don't have any approval for OpenAI, because first there is the IP issue, and second, they don't want their code to go outside. We are talking about proprietary code at a lot of such companies. So the solution there was: okay, you can use a custom open source LLM on your data; we do not see anything. It's a custom solution, on premise. And while they're using the product, our platform allows them to generate their own training data set. For example, they use Artemis, they optimize code, they see that sometimes the code is optimized and sometimes it's not, but they generate the data, and those data we cannot see; the client keeps those data for fine-tuning their own model. Then, through the platform, they can fine-tune their own model. That is how the industry will go. Especially in the financial sector, the technology sector, or defense companies, they will never give their code outside at an open source level. But LLMs are super powerful there. With one client, they said: I'm not sure if I can accept a recommendation from an LLM; who is liable if that code doesn't work? What is the IP issue? I said: how do you solve it now? They have IP checkers, etc. I said: you can just use the IP checker in this case too, the same process that you would normally follow. But also, and we have added this functionality, LLMs can do very good similarity search. If you have other code bases and similar functions in your code base, you can find them nicely; the same way people are building chatbots on their documents, you can build your own chatbot on your code. It's practically similarity search, and then we can make recommendations. We even identified that three teams had implemented the same functionality, each a bit differently, so you even save time. So still,
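The similarity search over a code base that Mike mentions can be sketched with embeddings and cosine similarity. To keep this self-contained, the "embedding" below is just a toy bag-of-identifiers vector; a real system would use a trained code embedding model. The repository functions and the query are invented examples.

```python
import math
import re
from collections import Counter

def embed(code):
    """Toy embedding: a bag of identifier tokens. A real system would call
    a code embedding model here instead."""
    return Counter(re.findall(r"[A-Za-z_]\w*", code))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical functions already in the code base
repo = {
    "sum_list":     "def sum_list(xs): total = 0\n for x in xs: total += x\n return total",
    "parse_config": "def parse_config(path): return json.load(open(path))",
    "sum_array":    "def sum_array(arr): total = 0\n for v in arr: total += v\n return total",
}

# New function a team is about to write: does it already exist somewhere?
query = "def add_all(values): total = 0\n for v in values: total += v\n return total"
q = embed(query)
ranked = sorted(repo, key=lambda name: cosine(q, embed(repo[name])), reverse=True)
```

Ranking every existing function against the new one is how duplicated functionality across teams, like the three-teams example above, can be surfaced.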
you know, this technology, the underlying technology, if you know how to use it properly, you can take advantage of it. The biggest example is Databricks acquiring MosaicML, because MosaicML helps organizations fine-tune their own LLMs on their own data; no organization will give their data to OpenAI or any other company to fine-tune on top of them, especially big organizations where there is value in their data. The same thing applies to coding. That's how we see it, and that's why we adapted the product accordingly.

My next question is, I guess, sort of selfish, and I like to ask this of people that have built really impressive things with this kind of new reasoning layer of LLMs. As you look back on building this product for code optimization with LLMs, are there any challenges that were unexpected that you had to overcome, and are there any takeaways that you would give to practitioners that are working on their own products or integrations with LLMs? What has been important for you to stress, especially as a CTO bringing new people onto the team, as you're working with these types of models? What's important in your mind, and did any of those challenges come up that you'd like to highlight?

If you're building applications where your application depends on LLM output, then as a first stage I would recommend using something like a closed-source API, because it will save you the headache of deploying your own LLM and having good GPUs; that is a problem that you cannot scale at this moment. Most teams don't know how to do it, and of course there are a lot of startups and companies working on how to host your own LLM in a scalable way, but that can be a nightmare to build. If your business is not how to deploy an LLM, and the value is somewhere else, and somebody provides it
as a service, it makes sense to use it. So on our side, we say you can import, with an API key and a secret key, any LLM that you have access to; then your application becomes much easier. But then you face the client's problem: if that data goes to OpenAI, which in the financial sector currently cannot be accepted, then our product has to deploy Llama 2 itself. So then you have to build it yourself, or use those services. We had to build it because we were one of the early adopters, but there are tools: Hugging Face provides a very nice API for you to deploy. They changed the license, I think, but the majority of people can use it. So the speed of LLMs is a big problem for scaling applications, definitely. Then there are other issues with LLMs, which were worse in the beginning and are now getting fixed, like token size: every time you ask, the result may be incomplete, and then how do you deal with the previous context and all of that? You need to spend time to do it properly. I'm expecting more and more tools; open source projects like LangChain and similar tools are solving some of these things. And of course, the biggest problem a lot of people talk about is hallucination: you cannot necessarily trust the models, and you cannot just say "generate code" and execute that code in your backend, because somebody may do something like SQL injection. Similar to how people were doing SQL injection in the past, especially for coding, you can have a kind of LLM injection, so you need to be very careful about exposing the prompt to the end user, because somebody can really do damage. So those are the big ones.

When you're thinking about hallucinations from LLMs, and you're working on a problem like optimization, you acknowledged earlier some of the
problems you face: you may not get correct or compilable code, because it's the output of an LLM. How do you approach that specific problem? I was actually wondering that earlier, and the conversation continued on without it, but let's circle back around. How do you think about dealing with hallucination when you're dealing with optimization and correctness, improving in that way and balancing the two?

Very good question. It depends on the programming language and the existing tools that you can also use. If you go for a programming language like Haskell and functional programming, in theory you can have a bit more proof that the code before and the code after work the same. NASA, for example, would want this kind of proof about code. The second mechanism is that ideally you would like applications to have unit tests that cover all the scenarios, so that when you make a change... but not all code bases have that, right?

You mean not all code bases are fully covered with tests, I hope?

Yeah, yeah. Unfortunately, there are open source projects where the unit tests don't even pass; we take a code base, we run pytest or whatever, and they don't pass by default. So this still needs to improve; hopefully LLMs can improve that. The third mechanism is that we aim for minimum code changes with the biggest optimization, and we go gradually. For example, we first target data structure optimizations, which are one or two lines that you can check, then single for loops, then double for loops; you go a bit gradually on that. And we currently make a pull request with the recommended changes, because we still want the developer to validate those changes; you cannot take that risk. Also, from a psychology perspective, if you have a tool there that you consider your performance expert, and it tells you at the end, hey, make those three changes that you can verify, it's not different from if we
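The gating just described (correctness checks first, then a measured speedup, before anything reaches a pull request) can be sketched in a few lines. The two functions and the test cases are hypothetical stand-ins: `total_optimized` plays the role of a model-suggested rewrite of `total_original`.

```python
import timeit

def total_original(n):
    # Naive accumulation in a Python loop (the "slow code")
    s = 0
    for i in range(n):
        s += i
    return s

def total_optimized(n):
    # Hypothetical rewrite suggested by the model: closed-form sum
    return n * (n - 1) // 2

def accept(original, optimized, cases):
    """Gate a suggestion the way the interview describes: behavior must
    match on every test case, and the rewrite must actually be faster."""
    if any(original(c) != optimized(c) for c in cases):
        return False
    t_old = timeit.timeit(lambda: original(10_000), number=50)
    t_new = timeit.timeit(lambda: optimized(10_000), number=50)
    return t_new < t_old

ok = accept(total_original, total_optimized, [0, 1, 2, 10, 999])
```

Only when `accept` passes would the tool open a pull request, leaving the final review to the developer, as in the process above.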
changed the name of the tool to a developer's name and made a pull request; that person would not know where the pull request came from. So you follow the same process. That's how we see it.

Something that I'm getting from what you're saying, which I know is often a misconception that I run into when I'm either doing workshops or working hands-on with people with generative models, is that there's typically this misconception that you need to package everything into a single prompt and then output your final result, as a sort of one-step thing. I'm getting the sense that your workflow, for one, probably involves multiple calls throughout the code base, partly because of the context size, I would assume. But you also mentioned this iterative element, where there are big rocks that you can move first, the worst-offending areas, so there's a hierarchy in that respect. And also, let's just assume (I know it's not a good assumption) that a person's code base is fully tested, integration tests and unit tests; it seems like this is something you could just loop over and over and over again to get increasing optimizations, probably with diminishing returns. Could you speak a little bit to how you as a team think about that chaining element, I guess would be the way to say it, and also the iterative element?

First of all, the way we have presented this: let's say you take the original version of some code. There are two approaches. One is, I apply one LLM, take the first three or four suggestions from it, and then apply whichever one works. Even the papers on how good these models are on code will say something like: of the top five recommendations from the LLM, three out of five outperformed. So it's not a one-chance, apply-the-LLM thing. Now, the best approach, from what
we have seen, is that you get the first version, you apply it, and if you have the ability to get feedback from what you applied, that is where those LLMs are very, very good. For example, you say "optimize this code", you try to pass the unit tests, or you try to compile your code and you get the compilation error message; then you go back automatically to the LLM and say, the recommendation you gave me didn't pass because of this error, and then it can give you something better. It's like the Wolverine technique; I think somebody did a demo showing how you can compile, learn, and iterate. This is also how you are already using those LLMs: let's say, write my email; they recommend something; you say, sorry, this is too official, I want it a bit more friendly. This is, I believe, currently the best approach. It's an iterative approach: if you have a way to measure and give the feedback back, you can get the best result. If we take the logs and give them to the LLM, we'll do even better; but even if you just say, hey, sorry, this is not good, give me something better, it will again try to improve within the context. You can even do things like (I think I was at a presentation by LangChain or something similar) combining two or three versions from different LLMs, and then from those three, combine them and say, take context from here; this part didn't work. So there are different approaches.

You just stole my question right out of my mouth. I was going to ask about extending that to multiple LLMs, because you were addressing how one LLM gives you multiple suggestions back in terms of optimization and you try those out, and how you might extend that to multiple LLMs. We're getting into a world with an increasing number; we're going to be awash in LLMs before long. As you have so many APIs or so
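The compile-and-retry feedback loop described above can be sketched as follows. `ask_llm` is a deterministic stand-in for the model call (the real system would send the prompt plus the previous error message to an actual LLM), and the broken/fixed snippets are invented for illustration; the structural point is that the error message flows back into the next attempt.

```python
def check(code):
    """Try to compile the suggestion; return the error message, if any."""
    try:
        compile(code, "<llm-suggestion>", "exec")
        return None
    except SyntaxError as e:
        return f"line {e.lineno}: {e.msg}"

def ask_llm(prompt, error=None):
    """Stand-in for the model call. The first attempt has a syntax error;
    once the error is fed back, the 'model' returns a corrected version."""
    broken = "def fast_sum(xs)\n    return sum(xs)"   # missing colon
    fixed = "def fast_sum(xs):\n    return sum(xs)"
    return fixed if error else broken

def generate_with_feedback(prompt, max_attempts=3):
    error = None
    for _ in range(max_attempts):
        code = ask_llm(prompt, error)
        error = check(code)
        if error is None:
            return code           # compiled: hand off to tests/benchmarks
    raise RuntimeError(f"gave up after {max_attempts} attempts: {error}")

code = generate_with_feedback("optimize this sum")
```

The same loop generalizes beyond syntax checks: failing unit tests or benchmark regressions can be fed back in exactly the same slot as the compile error.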
many deployments available to you, how does that change things? It sounds like your workflow would work for that regardless, but does that add value, or do you think there are diminishing returns as you keep adding LLMs into it?

No, I think there is value, and it also gives flexibility to the user to not be locked into a single LLM. It may be a different user, it may be a pricing issue, it may be that a new model this week has a bigger token size, it may be performance. You cannot rely on one single LLM nowadays, because it's so easy to build LLMs if you have the data. Our business doesn't depend on "hey, we have the best LLM"; somebody else can suddenly, in a week, give you a better LLM, and then you are out of business, because you would need to spend 100 million to train on GPUs. So in our case, combining LLMs, using LLMs, and having that workflow be LLM-agnostic is how you can take advantage: if people pay for better LLMs and access, the end result is better. Here, though, we have to mention one issue which a lot of people may not know: it is a bit tricky when you use output from one LLM with another LLM, because there are IP issues, etc.

So you cannot... we are also investigating...

Exactly. You cannot, in theory, use ChatGPT output to fine-tune LLaMA in a commercial setting. Maybe you know the Alpaca paper; I think it was the first one that showed you can say to ChatGPT, give me examples, then fine-tune another model on them. But you are not allowed to on the commercial side. I'm sure there may eventually be two open source LLMs that you can use this way, and as long as you have the framework, we allow those things to happen.

As we near the end of our conversation here, I'm wondering if you can paint a bit of a picture for us, from your perspective as someone working day to day on developer tools that are AI-driven. What are some of the most exciting things that keep you up at night as you look
forward to the next year? It could be things you're working on, but it might just be generally how this field is developing. What's really exciting for you as a person building these sorts of tools as you look to the next year?

So, I personally believe this is just the start of the power of this technology. We already see how much it has changed the way people are coding. What I want to see, and I see more and more of it: people nowadays want to use speech-to-text and then code, these kinds of things; they're trying to make developers even lazier. From my perspective, from what we are trying to do: we are in the process of putting open source projects through our platform, and if they really get great optimizations, we can give those back to the community. For example, if I can take a very slow machine learning library, speed it up 30-40% all automatically, make a pull request, and show people how easily we can optimize the speed, everybody can benefit from that. And it excites me that we still haven't found out what the limitations of the current technology are, and how much inefficient code is out there. I'm excited to find out how much we can improve in an automatic way, like, I don't know, Redis, these kinds of things that everybody's using every day, because they can make our laptops faster; already my fan is going crazy. This combination of LLMs and coding is something very, very exciting, because we don't know its limitations. That's what I want to find out.

That's awesome. Well, we will certainly be on the edge of our seats as you're exploring those limitations and those possibilities. We really appreciate you joining us, Mike; it's been a great conversation, and I'm very much looking forward to my code running faster despite my ignorance of how
to make it do that. So thank you so much, and we'll talk to you soon.

Thanks a lot. Thank you. Thank you, guys.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The new AI app stack | Recently a16z released a diagram showing the “Emerging Architectures for LLM Applications.” In this episode, we expand on things covered in that diagram to a more general mental model for the new AI app stack. We cover a variety of things from model “middleware” for caching and control to app orchestration.
Leave us a comment (https://changelog.com/practicalai/236/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Emerging Architectures for LLM Applications (https://a16z.com/2023/06/20/emerging-architectures-for-llm-applications/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-236.md) | 10 | 0 | 0 | [Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io.

[Music]

Welcome to another Fully Connected episode of Practical AI. In these episodes, Chris and I keep you fully connected with everything that's happening in the AI community. We'll cover some of the latest news, and we'll cover some learning resources that'll help you level up your machine learning game. I'm Daniel Whitenack, the founder of Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing very well today, Daniel. How are you?

I'm doing great. I am, uncharacteristically, joining this episode from the lobby of a Hampton Inn in Nashville, Tennessee, so if our listeners hear any background noise, they know what that is.

But you have a built-in audience right there.

A built-in audience! The people in this lobby are unexpectedly learning about AI today, which I'm happy to do. Yeah, I'm out here visiting a customer on site, and it's nice to sit back, take a break from that, and talk about all the cool stuff going on.

Excellent. Well, I'll tell you what: we have had so many questions about sorting out all the things that have happened over the last few months, over the last year, and we've done a couple of episodes trying to clear up generative AI, what's in it, what LLMs are, how they relate, and things like that. What do you think about
taking a bit of a deep dive into large language models and all the things that make them up? Because there's a lot of lingo being hurled around these days.

Yeah. Maybe even outside of LLMs, there's this perception that the model, whether it's for image generation or video generation or language generation, is the application: that when you're creating value, the model, whether that's Llama 2 or Stable Diffusion XL or whatever, is somehow the application, providing the functionality that your users want. That's basically a falsehood, I would say, and there's a whole ecosystem of tooling developing around this. One of the things I sent you recently, which I think does a good job of illustrating some of the various pieces of this new ecosystem, this new generative AI app stack, was created by Andreessen Horowitz. They created a figure called the emerging LLM app stack; we'll link it in our show notes. It goes maybe more generally than LLMs, but it provides a nice framework to talk through some of these things. Of course, they're providing their own look at this stack, especially because they're invested in many of the companies they highlight on it, but regardless of that, they're trying to help people understand how these things fit together. Have you seen this picture?

I have, and I appreciated it when you pointed it out a while back. It's definitely interesting; I haven't seen anything quite like it in terms of putting it all together, and some things they seem to dive into more than others in the chart. It will be interesting to see how we parse it going forward.

Yeah. Maybe we could just take some of these categories and talk them through, in terms of the terminology that's used and how they fit into the overall ecosystem. So we can
take an easy example here, one of the things they call out, which is the playground. I think this is probably the place where many people start their generative AI journey. Within the playground category, ChatGPT might fit, where you're prompting a model, it's interactive, it's a UI: you can put in a prompt and get an output. Now, ChatGPT is maybe a little bit more than that, because there's a chat thread and all of that, but there are other playgrounds as well. You can think of Spaces on Hugging Face that allow you to use Stable Diffusion or other types of models. There are other proprietary playgrounds that are either part of a product or their own products. OpenAI has its own playground within its platform, where you can log in and try out your prompts. There's nat.dev, which is a cool one that allows you to compare one model to another. There are other products, like Clipdrop, a tool that lets you use Stable Diffusion: you can just go there and try out prompts for free, and pay up if you need to use it more, so there's a limit to that. There are a lot of these playgrounds floating around, and that's often where people start things.

It's funny, the playground itself as a category has a lot of subcategories to it, because you've already called out the diversity of what's there. All the big cloud providers have their own playground areas, for instance, and Nvidia has a playground area; I think it's almost becoming a ubiquitous notion. And of course, all those playground areas for the commercial entities are focused on their products and services.

Definitely, but trying to bring some cool factor to it. So yeah, it's almost like a demo or experimentation
interface. If we define this playground category, it's usually, but not always, a browser-based interface where you can try to prompt a model and see what the output is like. I think that would generally be true; maybe there are caveats for certain ones. Midjourney, for example, is (or at least there still is) a Discord bot you could use, and maybe that fits into the playground too. But generally these are interactive and useful for experimentation, not necessarily useful for building an application.

I agree. Another thing to note about it, from a characteristic standpoint, is that it's not really made for you to go build your own thing; it's made for you to try the stuff of whatever organization is providing it. But they do provide the resources: by being in a browser, you don't have to have a GPU on your laptop, you don't have to have resources.

Yeah, you don't have to have all the things. Through various means, they set all of that up for you on the back end, whether that's calling a service or creating a temporary environment through virtualization. It's a good way to either test out a new product line or just get your toes wet a little bit if you want to try some stuff out. Maybe you've been listening to the Practical AI podcast for a while and a particular topic grabs you; that would be a good place to go.

Yeah. And in that same vein, we can transition to another category, which is not unique to the generative AI app stack, let's call it, but is still part of the stack: what they've called app hosting. That's very generic. In here would fit things like Vercel, or, I would say, generally the cloud providers and the various ways you can host things, whether that's in Amazon with ECS or App Runner or whatever, or
in your own on-prem infrastructure, if you host things yourself. Now, there are a number of hosting providers that are kind of cool and trendy, and people building new AI apps seem to gravitate toward them, like Vercel. A lot of front-end developers use Vercel, which I think is an amazing platform, so cool, and that hasn't traditionally been a data-sciencey way of hosting things. But it represents this new wave of application developers who are developing applications integrating AI, and you see some of those now coming into, or being exposed in, this wider app stack.

Which is a good thing, because we've talked for a long time, even as we opened this conversation, saying the model is not the app. You have to wrap the model with some goodness to get the value out of it, to be productive with it. So I personally like the fact that we're seeing model hosting and app hosting starting to merge, because I think that's more manageable over time. It's less about it being in its own special category, and more about, okay, every app in the future is going to have models in it, and we're accommodating that notion. So I like seeing it go there; I've been waiting for that for a while.

Yeah. And to really clarify and define things: you could think about the playground we talked about as an app that has been developed by these different people to illustrate some LLM functionality, but it's usually not the app that you're going to build. You're going to build another app, exposed to your users, that uses that functionality, and you'll need to host it, either in ways that people have been hosting things for a long time, or with new, interesting patterns that are popping up, like things Modal is doing, or maybe things front-end developers really like to use, like Vercel and other things. But there's
still that app hosting side. Now, where I think things get interesting: you have the playground, you have the app hosting, but regardless of both of those, what happens under the hood? This is where things get quite interesting, and where there are a lot of differences between the emerging generative AI stack and the more traditional non-AI stack. In the middle of the diagram we're talking about, this emerging LLM app stack diagram, which again is more general, there's a layer of orchestration. I don't know about you, Chris, but when someone says orchestration, I think of Kubernetes, or container orchestration; maybe that's my own bias, coming from working in a few microservices-oriented startups. But that's distinctly not the orchestration being called out here in the generative AI app stack. This is a level of orchestration which, in some of my workshops, I've been referring to as almost a convenience layer. Think about interacting with a model; let's give a really concrete example. Say I want to do question answering with an LLM. I need to somehow get a context for answering the question, I need to insert the question and that context into a prompt, and then I need to send that prompt to a model, get the result back, and maybe do some cleanup on it: I have some stop tokens, or I want it to end at a certain punctuation mark, or whatever that is. That's all what I would consider convenience, and what they're calling orchestration, around the call to the model. So this orchestration layer has to do with prompt templates, generating prompts, chains of prompts, agents, plugging in data sources like plugins: all things that circle around your AI calls but aren't
the AI model.

Yeah, it's the software around it, just to simplify a little bit.

Yeah, and tooling. Orchestration tooling.

Yeah, it's the stuff you have to wrap the model with to make it usable in a productive sense. And from the moment I saw that word, that was almost the very first thing that grabbed me, that little psychological quirk where you notice the thing that sticks out. It's a big bucket they're calling orchestration, which is a loaded word that can mean a lot of different things depending on what you're trying to do, and the examples they list in that category are all somewhat diverse as well. That was the first point where I thought, well, it's a chart where the creator has a bias. I'm curious, when we think about this kind of orchestration, as they say, wrapping around and providing the convenience: are there ways you would break that up? How do you think about it? They go from something like Python as a programming language, to LangChain, to ChatGPT: three very distinct kinds of entities.

Yeah, I think you're seeing a number of things happen here. The first one they call out is Python/DIY, so you're seeing a lot of roll-your-own convenience functionality built up around LLMs. But I do think one of the big players here would be LangChain and what they're doing, because if you look again at the layers of what's available there, taking LangChain as an example, categories of this sort of orchestration functionality that I would call out would be templating, so prompt templates, for example, or templating in terms of chains: manually setting up a chain of things that can be called in one call.
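To make that concrete, here is a minimal roll-your-own sketch of the question-answering flow described above: template a prompt from a question and a context, call a model, and clean up the raw output at a stop sequence. Everything here (the function names, the stubbed retrieval, and the stubbed model call) is hypothetical illustration, not any particular library's API.

```python
# Minimal DIY "orchestration" sketch: template a Q&A prompt, call a model,
# and post-process the raw output. All functions are hypothetical stand-ins.

PROMPT_TEMPLATE = """Answer the question using only the context below.

Context: {context}

Question: {question}

Answer:"""

def retrieve_context(question: str) -> str:
    # Stand-in for a real retrieval step (an API call, a vector DB lookup, etc.).
    return "Practical AI is a podcast about applied machine learning."

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; models often keep generating past the
    # answer you want, hence the stop-sequence cleanup below.
    return "It is a podcast about applied ML.\nQuestion: what else can I ask?"

def answer(question: str, stop: str = "\nQuestion:") -> str:
    prompt = PROMPT_TEMPLATE.format(
        context=retrieve_context(question), question=question
    )
    raw = call_model(prompt)
    # The "convenience layer" part: truncate at the stop sequence and trim.
    return raw.split(stop)[0].strip()

print(answer("What is Practical AI?"))
```

A real implementation would swap the two stubs for an actual retriever and model client; the templating-plus-cleanup shape stays the same.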
There's also an automation component to it, and maybe this is a way the orchestration term here fits with the older way it's used in DevOps and elsewhere: some of it is automation, with things like agents, where you have an agent that automates certain functionality. It's not the LLM itself; it's automation around calling the LLMs, or the other generative AI models, to generate an image or what have you.

They also have some separate callouts for APIs and plugins, and then, which we can hit in a moment, they have a collection of the maintenance items, the things to keep the lights on, if you will: logging and caching and things like that. How do you look at that breakdown, the way they have it?

Yeah, I think this is where they have the orchestration piece in the middle, connecting a couple of different things. One of those would be what I would consider more the data or resource side, and one is more the model side, so I think we could split it into those two major categories. What are you orchestrating when you orchestrate something with LangChain or similar? Well, you're orchestrating connections to resources. I'll use the term resources because it might not be data per se; it might be, as you say, an API or another platform, like Zapier or Wolfram Alpha, something like that. The other side of that is the model side, both the model hosting and some really useful tooling around it. But let's start on the resource side. One of the things that I've found both really fun and useful is orchestrating calls into a Google search: if I want to pull in some context on the fly, I might want to do a Google search. That's a call to an API, so that's a resource or a plugin that might
be conveniently integrated into your orchestration layer, either via something like LangChain or via your own DIY code. Another side of this is the actual data and the data pipelines: your own data, or data that you've gathered or that's relevant to your problem. Again, thinking about this set of resources that could be orchestrated into your app: maybe you have a set of documentation you want to generate answers to questions out of, or maybe you have a bunch of images you want to use to fine-tune Stable Diffusion or something like that. Having data and integrating it into models isn't new, so the things called out in this particular image, like data pipelines, are also not new, and they're part of this app stack if you're integrating your own data. Things like Databricks, or Airflow, or Pachyderm, or tools to parse data, PDF parsers, unstructured data parsers, image parsers, image resizing, all of that sort of stuff still fits into the data pipelining piece. So you've either got your data coming from APIs, which might be a resource that you're orchestrating, or you've got your data coming from your data sources, which might be traditional data sources of any type, from databases to unstructured data.

This is a Changelog news break. It's official: advancements in computer vision have rendered CAPTCHAs obsolete, as new research shows AI bots are 15% more accurate than humans at picking which images have a bridge or sign or a bicycle or whatever in them. The researchers recruited 1,400 participants to test websites that use CAPTCHA puzzles, which account for 120 of the world's 200 most popular websites. The bots' accuracy ranges from 85 to 100%, with the majority above 96%. Meanwhile, we mere mortals check in at a pathetic 50 to 85% accuracy, and we answer slower than the robots. To add insult to injury, I've surmised this for months now, as we've been unable to ward off spam
account creations on changelog.com, no matter which shiny new CAPTCHA service we tried. There are other efforts in the works besides CAPTCHAs to differentiate between robots and humans, but so far the robots are winning. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

And the vector database piece: I have to recommend that our listeners, if they haven't, listen to our very recent episode about vector databases, because that episode goes into way more depth on what a vector database is and why people are using it. But just for a quick recap: part of what you might want to do with generative AI models is find data that's relevant to a user query and somehow orchestrate that into your LLM calls, either for chat or question answering, or maybe even into an image generation or a video generation. In order to find relevant data, what people have found is that they'd like to do a vector, or embedding, search on their own data, and again, you can find out much more about that in our previous episode. But what's called out in this app stack as probably something unique that's developing is not just having data pipelines and databases, but having data flow through an embedding model and into a vector database, where you're performing semantic searches.

I mean, at the end of the day, it's a database that works very well for the kind of operation we're doing here, whereas with some of the traditional things we'd been working on for years before, there's kind of a context shift in terms of how you're handling data, what data is, how it's organized. So this makes a lot more sense.
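The embed-then-search flow described here can be sketched end to end in a few lines. In this toy, a bag-of-words vector stands in for a real embedding model (like the Sentence Transformers models discussed below), and a brute-force cosine-similarity scan stands in for a vector database index; all names and the tiny corpus are illustrative.

```python
import math

# Toy semantic search: embed documents, then retrieve by cosine similarity.
# The bag-of-words "embedding" is a stand-in for a real embedding model, and
# the brute-force scan is a stand-in for a vector database.

docs = [
    "how to deploy a model with docker",
    "recipe for sourdough bread starter",
    "fine tuning a language model on custom data",
]

# Build a vocabulary so every text maps into the same vector space.
# (Query words outside this vocabulary are simply ignored.)
vocab = sorted({w for d in docs for w in d.split()})

def embed(text: str) -> list[float]:
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

index = [(d, embed(d)) for d in docs]  # the "ingest" half of the pipeline

def search(query: str) -> str:
    qv = embed(query)
    return max(index, key=lambda item: cosine(qv, item[1]))[0]

print(search("docker deploy"))
```

The shape is the same with a real stack: swap `embed` for a trained model and `index`/`search` for a vector database's insert and query calls.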
Yeah, and I should call out here, too, a part of the stack that I'm glad they called out in this way in the schematic we're looking at: the embedding model. A lot of people are talking about these vector databases, but in order to store a vector in a vector database, there's a very relevant component one step back, which is the actual model you're using to create embeddings, and not all are created equal. Think about it: if you're working on an image problem, you may use a pre-trained feature-extractor-type model from Hugging Face to extract vectors for your images, so you put in an image and get a vector out. But if you're working with both image and text, for example, maybe you're going to something like CLIP, or a related model that's able to embed both images and text in a similar semantic space. And if you're only using text, there's of course a whole bunch of choices, and they don't all perform equally for different types of tasks. If you search on Hugging Face, or do a Google search for the Hugging Face embeddings leaderboard, there's actually a separate leaderboard: Hugging Face has a leaderboard for open models and how they score on various metrics, and they also have a leaderboard for embeddings. You can click through the different tasks; say you're doing retrieval tasks, like we're talking about here with a vector database, and you can see which embeddings perform best according to a variety of benchmarks, in retrieval or in summarization or other things.

Do you use that a lot when you're picking models and storing embeddings in vector databases? Do you tend to go and see what's going on? Because right now there's so much happening in that space; does it make for a good guidepost for you?

Yeah. And I think what's also useful is looking at those performance metrics, but also, at least on the Hugging Face leaderboard and some
other leaderboards, some practical columns. If you're working with text, one of the major tools for creating these embeddings in a really useful way is called Sentence Transformers, and they have their own table where they've measured and benchmarked various embeddings that can be integrated with Sentence Transformers. That's useful, but it's also useful to look at the columns, whether you're looking at the Hugging Face leaderboard or the Sentence Transformers one or wherever: the size of the embedding and the speed of the embedding. This was called out in our vector database discussion, but only in passing. Let's say you want to embed 200,000 PDFs; I just ran across this use case in some of the work we're doing, and it can take a really, really long time, depending on how you implement it, to both parse and embed a significant number of PDFs. The same would be true for documents or other text, or other types of data. So when you're looking at that, there are two implications. One is: how fast am I able to generate these embeddings? Do I have to use a GPU, or can I use a CPU? Because there's going to be a different speed on GPUs versus CPUs. The other is: how big are the embeddings? This is another interesting piece, because if I've got embeddings that are a thousand or more in dimension, that's going to take up a lot more room in my database and on disk than embeddings that are 256-dimensional or so. So there are also storage and data-movement implications to how you choose this embedding space. There are a lot of practical things here that people maybe skip over when they're just doing a prototype with LangChain and some vector database; it's easy, but as soon as you try to put all your data in, it gets much harder.
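The storage implication is easy to put numbers on. Here is a rough back-of-envelope calculation, assuming the 200,000-document example above and 4-byte float32 values (real vector databases add index overhead on top of the raw vectors):

```python
# Back-of-envelope storage for raw embedding vectors, assuming float32
# (4 bytes per dimension). Index structures add overhead on top of this.

def embedding_storage_gb(num_docs: int, dims: int, bytes_per_float: int = 4) -> float:
    return num_docs * dims * bytes_per_float / 1e9

for dims in (256, 1024):
    print(f"{dims:>5} dims: {embedding_storage_gb(200_000, dims):.2f} GB")
```

So 200,000 documents at 1,024 dimensions is roughly 0.8 GB of raw vectors, versus about 0.2 GB at 256 dimensions: a 4x difference that you pay again in memory, on disk, and on the wire.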
You raise a question in my mind, and I'm going to throw it out; you may or may not be familiar with what the answer would be. When you're looking at vector databases, and at all this diversity in embedding possibilities, and the fact that it has physical-layer consequences in terms of storage and so on: are we seeing the vector database, or other database arenas, trying to accommodate this new approach to capturing data as embeddings? With the rise of vector databases, it seems there would be a whole lot of vendor-related research on how you do that, because to your point a moment ago, you're talking about data at such a volume that poor architecture under the hood could have some pretty big consequences.

Yeah, I think that's definitely true, and there was a point made in one of our last episodes that the vendors for these things have different priorities that don't always align. Some are optimizing for how much data you can get in and how quickly, but maybe they're not as optimized for query speed; some are optimizing for query speed, but it might be really slow to get data in. So that's one piece of it. I think another piece is how large an embedding you need, and how complicated your retrieval problem is. I would recommend that people do some testing around this, because let's say you have 100,000 documents that are very, very similar to one another, or 100,000 images that are very similar to one another, and the retrieval problem is actually semantically very difficult: you might need a larger embedding, and more optimization around the query, like reranking and other things, to get the data you need. Whereas if you have 100,000 images and they're all fairly different, maybe you don't need to go to those lengths. So that's also part of this problem, and people are still filling out
the best practices around some of this, partially because it's a new part of the AI stack, and partially because things are constantly updating as well: if you use this embedding today, there's a better one tomorrow, and vector databases are updating all the time. So it's a very dynamic time.

As we look at the chart, there are the three we referred to earlier that are grouped together: LLM cache, logging/LLM ops, and validation. First of all, could you describe what's encompassed in each of those, and also why they fit together, why we're seeing them lumped into one supercategory?

Yeah. If you think about what we've talked about so far, there's this new generative AI stack, whether you're doing images or language or whatever. There's an application side, which might just be the playground or might be your own application; there's a data and resources side, which is what we've talked about with integrating APIs and data sources; and then there's a third arm, which is the model side. All of those are connected through the orchestration layer, the automation layer, the convenience layer, whatever you want to end up calling it. Now we're going to that third arm, the model side, and we can come back to it in a second, but one part of this is just hosting the models and having an API around them, which we can come back to. Between the model and your orchestration layer, though, sits what we could maybe call model middleware. I'll go ahead and coin that; or maybe people are already referring to it that way, and I didn't coin it. Model middleware sits either wrapping around or in between your orchestration layer and your model hosting, and these are the things you're referring to around caching, logging, validation. Probably the one people are most
familiar with, if they're familiar with any of these, would be the logging layer, which again is a DevOps-y infrastructure term, but here we might think of a very specific type of logging: model logging, which might be more natively supported in things like Weights & Biases or ClearML or other MLOps-type solutions. You're logging the requests coming in, the prompts being provided, response time, GPU usage, all the model-related things, and you want to put those into graphs and so on. So there may be specific kinds of logging: how quickly, on average, is my model responding? What is the latency between making a prompt or request and getting a response? How much GPU is my model using, and do I need more replicas of that model? These sorts of things can be really helpful as you're putting things into production. So that's a first of these middleware layers.

So, Chris, the other middleware layers that have been called out, at least in what we're looking at, are validation and caching. I'll talk about caching, and then we can talk about validation a little bit, which is close to my heart. Caching already happens in a lot of different applications. Think about a general API application: if someone makes a request for data in your database and you retrieve that data, and then the next user asks for the same data, the proper and smart thing to do is not two retrievals, but to cache that data in the application layer, in memory, so that you can respond very quickly and reduce the number of times you're reaching out to your database.

I notice in this chart that some of the examples they list for caching, such as Redis and SQLite, are very typical, long-term players in the app-dev world.

Yep.
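A minimal sketch of that prompt/response caching pattern might look like the following: an in-memory dict keyed on a hash of the normalized prompt, standing in for Redis or SQLite, with the expensive model call stubbed out. All names here are illustrative, not a particular tool's API.

```python
import hashlib

# Minimal prompt/response cache: before paying for an expensive model call,
# check whether a normalized version of the prompt has been seen before.
# The dict stands in for Redis/SQLite; the model call is a stub.

_cache: dict[str, str] = {}
calls = 0  # track how often the "expensive" model actually runs

def expensive_model(prompt: str) -> str:
    global calls
    calls += 1
    return f"answer to: {prompt}"

def cached_generate(prompt: str) -> str:
    # Normalize so trivially different phrasings share one cache entry.
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key not in _cache:
        _cache[key] = expensive_model(prompt)
    return _cache[key]

print(cached_generate("What is a vector database?"))
print(cached_generate("what is a vector database?  "))  # cache hit
print(calls)  # the model ran only once
```

Exact-match keys only catch identical (or near-identical) prompts; some caching layers go further and key on embedding similarity so that semantically equivalent prompts also hit the cache.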
So that begs the question, at least for me: when you're caching, you're really saying, for this input, here's an output, whether it goes to a model or not. Is it really just application data that you're caching at that point?

It's caching in that sense, but I think there are implications to it that go beyond normal caching. Running AI models is expensive most of the time, because you have to run them on some type of specialized hardware. If I've got a model running on two A100s, I would rather not have four replicas of that model; I'd rather have just one if I can, because I don't want to pay for all those GPUs. So part of it is really related to cost and performance. This is mainly for large models, I would say: you've got a lot of cost, either because you're running that model on really specialized hardware, or because, if I'm calling out to GPT-4, it's really expensive to make a lot of requests to GPT-4. So in order to deal with that, if you have a prompt input, you can cache that prompt, and if users are asking the same question, I'd rather just send them back the same response from GPT-4, or my large Llama 2 70-billion model, or whatever it is; I'm going to respond to them the same way based on the same or a similar input. The other implication, which in my mind sort of fits into caching but maybe not in the traditional sense (I normally think of caching as caching things in memory or locally at the application layer): if you're caching prompts and responses, there's a real opportunity to leverage that data to build your own sort of competitive moat with your specific generative AI application. For example, you've got a user base prompting all of these sorts of things, and all of a sudden, if you're saving all of that data and the responses that you're giving, you're
essentially starting to form your own domain-specific data set that you could leverage in a very competitive way, in two senses. One: if right now you're using a really expensive model to make those responses, maybe you start saving those responses, and you can use that data to fine-tune a smaller model that might be more performant and cost-effective in the long term. So that's an operational kind of play. The other way: if you're gathering that over time, and you actually have the resources to human-label it, or give your own human preferences or certain annotations on it, that now is your own advantage in fine-tuning either one of these generative models or your own internal model for the domain you're working in. So it's caching, but it's almost a feedback or data curation side of things as well.

You mentioned earlier that validation was close to your heart.

Yeah. As our listeners know, part of the tooling I'm building with Prediction Guard would fit into this category; it would actually span more categories, between validation and orchestration and model hosting, so there's a little bit of overlap there. But this validation layer really has to do with the fact that, with generative AI models across the board, I think people would say there are a lot of concerns around reliability, privacy, security, compliance, what have you, and there's a rising number of tools addressing some or all of those issues. Whether that's putting controls on the output of your LLM (again, think about this as a middle layer: my LLM produces something harmful as output, or my generative AI model generates an image that's not fit for my users, and I want to somehow catch that and correct it if I can), or I want to put certain things into my model but make sure I'm not putting in private or sensitive data,
or I want to structure the output of my model into certain structures or types, like JSON or integer or float. I personally would break this apart into maybe validation, type and structure, and then security-related things, because there's a lot here. There's validation, which is: is my output what I want it to be? There are security-related things, which is: am I okay with putting the current request into my model, and with sending the output back to my users? And then there's type and structure: with images, is the image upscaled appropriately for my use case? Or with text, if I'm putting something in and wanting JSON back, is it actually valid JSON? That's more of a structure and type-checking thing. So there's a lot in this category, and you're probably getting the sense that I'm thinking a lot about this. Other things fitting into this category: a cool one called Rebuff, which does checking for prompt injections, for example; that's part of the security side. There are things like Prediction Guard, and Guardrails, Guidance, and Outlines, that do type and structure sorts of things. There's also a layer of this that a lot of people are implementing in the roll-your-own Python DIY way; in Prediction Guard we implement some of these, but people also implement them in their own systems, like self-consistency sampling: calling a model multiple times and either choosing between the outputs or merging them in some interesting way. That sort of consistency stuff, I think a lot of people are rolling their own.
that people, as they look at it, might benefit from? How would you see it in the large? Yeah, it's a good question. I think one major takeaway, one thing to keep in mind, is the model is only a small part of the whole app stack here. In a similar way, back when a thing existed called data science, we would say training a model is only a very small part of the kind of end-to-end data science life cycle of a project. There are a lot of other things involved, and I think here you can make a similar conclusion: the tendency is to think of the model as the application, but there's really a lot more involved. And, as our friends over at Latent Space would say, this is really where AI engineering comes into play. This space of AI engineering seems to be developing into a real thing; whether you call it that word or not, it is part of what this is. So that's one takeaway. I think the other takeaway is maybe just forming this mental model around these three spokes of the stack: you've got your app and app hosting, you've got your data and your resources, and you've got your model and your model middleware, and all that kind of middle hub would be some sort of orchestration that you're performing, either in a DIY way or with things like LangChain, to connect all of those pieces together. So, you're probably hoarse by now, because we've pulled so much information out of you. This was a really, really good dive. You know, it's one particular publisher's way of looking at it, but we've never really dived into all the components of the infrastructure of a stack with this kind of depth, and I think most people haven't had a chance to see it yet, because so much of this has really arisen in recent months. Thanks for kind of wearing half of a guest hat along the way here and taking us through this on this Fully Connected episode. Yeah, and I think, in terms of learning about these things, I think people can check out our
show notes. We'll have a link to the diagram that we've been discussing here. I would say, learning-wise, this helps you organize your thought process, but to really get an intuition around these things, you can look at various examples in this diagram and go to their docs and try out some of that. There's a variety of kind of end-to-end examples as well that are pretty typical these days, like in language, if you're doing a chat-over-your-docs thing, that involves a model and a data layer and an application layer. So just building one of these example apps, I think, could give people the kind of learning and that sort of thing that they need. But yeah, it's been fun. It's always helpful to talk these things out loud with you, Chris; I find it very useful. Well, I learn a lot every time we do this, so thanks a lot, man. Yeah, we'll see you next week. See you next week. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freakin' residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music]
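The self-consistency sampling idea mentioned above (calling a model multiple times and either choosing between the outputs or merging them) can be sketched in a few lines. This is a minimal illustration of the majority-vote variant, not any particular product's API: `call_model` is a stand-in for a real sampled LLM call, and the stubbed responses are invented for the example.

```python
from collections import Counter
from itertools import cycle

def self_consistency(call_model, prompt, n=5):
    """Call the model n times and majority-vote over the answers.

    Returns the most common answer plus the fraction of calls that
    agreed with it (a rough confidence signal)."""
    answers = [call_model(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

# Hypothetical stand-in for a real (nondeterministic) LLM call.
_fake_responses = cycle(["42", "42", "41", "42", "42"])
def fake_model(prompt):
    return next(_fake_responses)

answer, agreement = self_consistency(fake_model, "What is 6 * 7?", n=5)
print(answer, agreement)  # -> 42 0.8
```

In practice the vote might be taken over a parsed field (say, a JSON key) rather than the raw string, and a tie-breaking policy would be needed; the merging variant the hosts mention would replace the `Counter` vote with some domain-specific combination of the outputs.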
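The "chat over your docs" example app described above maps directly onto the three spokes of the stack: a data layer (retrieval), a model layer, and an application layer orchestrating the two. Here is a deliberately tiny DIY sketch; the keyword retriever and the stubbed model are invented for illustration, and a real app would swap in an embedding store and an actual LLM call.

```python
import re

def tokenize(text):
    """Lowercase word set, used for naive keyword matching."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=2):
    """Data layer: rank docs by keyword overlap with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def fake_llm(prompt):
    """Model layer stub: a real app would call a hosted or local model here."""
    return "stub answer grounded in: " + prompt.splitlines()[1]

def answer(query, docs):
    """Application layer: orchestrate retrieval + generation (DIY, no framework)."""
    context = "\n".join(retrieve(query, docs))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return fake_llm(prompt)

docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Our office is located in the Metro Atlanta area.",
    "Support is reachable by email around the clock.",
]
print(answer("What is the refund policy for returns?", docs))
```

Even at this toy scale, the separation of concerns is visible: you can replace the retriever, the model, or the prompt template independently, which is exactly the modularity the diagram's three-spoke layout encourages.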
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Blueprint for an AI Bill of Rights | In this Fully Connected episode, Daniel and Chris kick it off by noting that Stability AI released their SDXL 1.0 LLM! They discuss its virtues, and then dive into a discussion regarding how the United States, European Union, and other entities are approaching governance of AI through new laws and legal frameworks. In particular, they review the White House’s approach, noting the potential for unexpected consequences.
Leave us a comment (https://changelog.com/practicalai/235/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Announcing SDXL 1.0 (https://stability.ai/blog/stable-diffusion-sdxl-1-announcement)
• GitHub, Hugging Face, urge EU to relax open-source AI rules (https://cointelegraph.com/news/github-hugging-face-urge-eu-relax-open-source-ai-rules)
• White House: Blueprint for an AI Bill of Rights (https://www.whitehouse.gov/ostp/ai-bill-of-rights)
LEARNING RESOURCE!
• Patterns for Building LLM-based Systems & Products (https://eugeneyan.com/writing/llm-patterns)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-235.md) | 9 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Well, welcome to another Fully Connected episode. In these episodes, Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI news, and we'll dig into learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm the founder of Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing today, Chris? Doing pretty good, how are you today? Doing really well. I told someone earlier today it was a beautiful day outside, and I've got a lot of interesting things to work on, so, yeah, I don't know that I can complain. That's a good way of looking at the world. I've got to say, it's a beautiful day outside here in the Metro Atlanta area, and I also have some pretty fun stuff to work on, so you know what, we don't have anything to complain about, do we? Yeah, I don't think so. And it seems like there are just fun things to talk about these days. I don't know if you and your immediate circles have been talking about this superconductor stuff that's happening. Have you been watching the sort of room-temperature superconductor buzz, I guess? I have not; tell me about it. Despite having some background in physics, I have not looked at any of this, so I can't really comment too much, other than just following it
from the sidelines. But apparently there was a research group that claims to have created a superconductor that superconducts at room temperature. So, people might have seen these videos in the past of little things levitating on something that's really cold; that's kind of the typical image of a superconductor. Like, usually isn't it absolute-zero kinds of temperatures and stuff? Yeah, it's like measured in kelvin, really low-temperature sort of thing. So of course this is very intriguing, and I've seen a number of things. There's one group that's claimed they've reproduced it; there are others that are skeptical but, you know, trying to reproduce some of the results. So it's, I forget the name of it, I'm going to butcher this, I think it's like LK-99 or something. Yeah, LK-99 superconductor. So if you want to look at some cool stuff that doesn't involve Transformers and neural networks, that's cool. It sounds interesting, so says a non-physicist who likes physics. And since we are the practical superconducting podcast, of course: practical. If you're going to ask me to explain all this, I'm afraid I'm too rusty to do a good job. Okay, fair enough. I was just going to say, what are some of the practical uses of a room-temperature one? I imagine there are tons. Well, yeah, I think in general, a superconductor, as the name might suggest, conducts, which, if you think about it, basically everything used in electronics has some type of conductor. Good stuff. Yeah, good stuff. And a superconductor, typically, if it's operating near absolute zero, it's not really that practical to put in your everyday electronic items. So something that is room temperature, I think, could open up possibilities. I guess, I understand. So I'm not, again, the superconductor expert, but people might be familiar with semiconductors as well, which of course are very important to
electronics, and the supply of those in recent years has caused a lot of news, because, you know, chips for cars and such have had issues in the supply chain, and all of this stuff. So you could think about these materials fitting into a similar zone of research. It's really interesting, and though I don't talk about it much in my day job, there are places where I intersect with microelectronics, and that is not my area of expertise, but I do know that there's quite the revolution going on in that space, and so this may be yet another aspect of what may propel the hardware side of things, which I am generally not very knowledgeable about. Yeah, there's a ton of stuff going on right now in that space, even in the small town where I'm at, where Purdue University is located. They're killing it on a lot of fronts, but they just established a huge partnership to build a bunch of semiconductor research facilities around Purdue, because there is a lot of emphasis to kind of decouple chip production from a single location and bring some of that expertise, or distribute some of that expertise, around. And so it's quite interesting to follow some of that, which definitely influences the things that we talk about on this podcast, in a more far-reaching or twice-removed sort of way, but it's interesting to keep a pulse on, for sure. I have noticed more and more the kind of convergence of microelectronics, really modern software approaches, and artificial intelligence all converging into things and intertwining, and so maybe at some point we need to have some dedicated episodes about some of those connection points between them. For sure. Yeah, and it's not the only news that happened this week. So, as I do most days, you know, pulling up Hugging Face and clicking on models and sorting by trending, which is the default, seeing what new models are out there, a couple of things to announce. Probably the most
interesting would be Stable Diffusion XL 1.0. People might remember, on one of our Fully Connected episodes, it's not that long ago, we talked about Stable Diffusion XL 0.9, I believe it was, although I get confused with all the acronyms and numbers, but that was essentially a research-only kind of beta version of what is now released as the general release of the new Stable Diffusion, which is Stable Diffusion XL 1.0. And yeah, I mean, I've played around a little bit with it through Clipdrop and some other places, and it has pretty stunning output. I've really enjoyed it. I've created some posts on LinkedIn with generated imagery. They release it under an OpenRAIL license, which we've also talked about on this show. It's more open, although, I think as we talked about in that episode about OpenRAIL licenses, it wouldn't be considered, you know, open source, quote-unquote, but open access in some way. Right. And yeah, it's pretty cool. So I don't know if you're looking at any of the cool images, Chris? I am indeed, as we talk. They say, of course, best ever; obviously, you know, that's the thing to say when you're releasing something. They say world's best open image generation model, but to your point a moment ago, the word open gets parsed in all sorts of different ways. Correct, yeah, there are all sorts of nuances to that. So the things that they highlight in the post are better artwork for challenging concepts and styles, kind of creating a certain feel imparted by the prompt, and more intelligence with simpler language. So I guess the thought with this is that the model is able to produce more complicated imagery. Like, I'm looking at a panda astronaut in what looks to be a coffee shop with an iced coffee. I see that one on their page, yes. Apparently that comes from a simple prompt, although I don't see the prompt in their post; it just says simple prompt. But they need to have more raccoons in it, you know. I'm a raccoon aficionado, so
yeah, exactly. Pandas are fine, I like pandas, but they really need a raccoon or two. Yes. Similar to what you said, I think they say the best; they also say the largest open image model. So we talked about this on the previous show, how it's a two-stage model: there's a base model and a refiner model. The base model is actually smaller, it's 3.5 billion parameters; the larger model is the refiner model, which is 6.6 billion. So that's a final, I think they say denoising, but sort of refining, make-the-image-better step. Are those model sizes, I noticed, both under your magical seven number that you educated us on a while back? Would that be to make this accessible to people, so they can get in there and download the model, or is it just happenstance? They say that SDXL 1.0 should work effectively on consumer GPUs with 8 GB of GPU memory (VRAM) or readily available cloud instances. So this is definitely a one-GPU model now; it might not work on all GPUs, depending on how you implement it and how you call it, but it's definitely accessible to people, which I think is really cool, and also, I think, puts it in another realm, which is interesting. Another thing they highlight is around fine-tuning. So if you have a bigger model, it's also generally harder to fine-tune that model for your own purposes, but along with this, they talk about kind of out-of-the-box support with LoRA, or the low-rank adapters type of technique, where you can fine-tune the model in a very parameter-efficient way. And so the thought is: hey, people, this is open now; create your own fine-tunes off of it as well. And I imagine, it was just released, well, it's been a few days, but the 26th of July, as we're recording this in 2023, I'm guessing we'll see a whole bunch of fine-tunes off of this appear on the model hub and elsewhere in the days ahead, similar to what we're seeing actually with Llama 2. So that's the other thing I was going to note on
the model hub: just a proliferation of Llama 2s. So we've got, if I'm just looking at it right now, I see Stable Diffusion XL base, that's what's trending at the top; then we've got the base Llama 2 from Meta; then we have Stable Beluga, which is also from Stability AI and is a Llama 2 fine-tune; we've got Llama 2 7 billion 32K, it looks like 32K context length, from Together Computer; the chat Llama; and then I see one, two, a whole bunch of other Llamas. So we'll continue to see those proliferate, I think. That sounds good. While you were telling me that, I was busy playing with Stable Diffusion here, with, of course, raccoons in space. Oh, nice. And trying different versions of that. Nice, well, let me know how they turn out, and definitely post them in all the places. We need more raccoons in space. We need more raccoons in the world, don't we? I know, or out of this world, maybe. Yeah, one follow-up on the Llama 2 front, which is connected to the topic that we talked about before with Damien Riehl, and people really loved that episode, I loved that episode too, we'll link it in the show notes, about the legal consequences of generated content. But I was chatting back and forth, and actually Damien got in the Twitter chat, and there's this interesting thing which we didn't really talk about on the show, but a lot of people are doing. I think it's worth just mentioning it on the show, because it's kind of a conundrum to me, to be honest, without actually talking it through with a lawyer, which is: technically, I think you're not supposed to use GPT output to train or fine-tune another non-GPT model. So, what comes out of, let's say you have an OpenAI account and you generate a whole bunch of output from GPT-4, which is going to be really good, and then you fine-tune an open small model on that GPT output and you make the open model good like GPT-4. That's what a lot of people are doing; it's not really like a hidden
thing. People are posting these models on Hugging Face, and the question on Twitter, which I thought was really interesting, and maybe our listeners can consider, is: well, first off, it seems to break the license agreement with OpenAI, but also, machine-generated content isn't copyrightable. But also, if they do that and then they post the model, can I use the model that they posted on Hugging Face, if it's sort of, what is the word, poisoned, or whatever? And yeah, would all of that actually hold up in court? It's a whole mix of things. So I've been thinking about it a lot, and I think it's an interesting thing that we'll see play out. Yeah, I mean, I think the term for that sourcing is the provenance of it, if I'm thinking correctly. There's a point where it becomes very, very difficult to follow that, and, you know, if you have enough rabbit holes that you're going down by using the output, I have no idea how that becomes enforceable down the road. I think we're seeing a bunch of these licenses. I think we saw one from Meta which basically said you can use it for anything so long as you don't compete with us, and that's me paraphrasing. I just have no idea how we would possibly have an organization that could follow through on that. [Music] This is a Changelog news break. Have you already asked ChatGPT how to design a good UI for your new AI app, and gotten back bupkis? Well, check out LangUI, an open-source Tailwind library of 60-plus responsive and dark-mode-enabled components tailored for AI and GPT projects. What exactly does that mean? It means prompt containers, history panels, sidebars, message inputs, and all sorts of stuff that are chat-related, so you can stop asking ChatGPT and build your own ChatGPT with a sweet UI. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address
in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. [Music] Well, Chris, in addition to seeing interesting things play out with models and licenses, and open source or not, and all of these conundrums that we're facing, at the same time we have policymakers in various places trying to figure out what they should be doing with this. I don't know if you've been seeing that. I have, and it's funny, I'm kind of conflicted. There's a part of me that looks at this and says it's important, you know, I mean, we have so much change, but there's also a part of me that says the policymakers are just so far behind this audience that's listening right now, and the people that are doing this, that there's definitely a part of me that's kind of ready to shoot spitwads at them while they're doing it, and poke fun. And maybe that's okay; politicians are there for us to poke fun at. So I'll give them a little credit, they're trying. Yes, yes, they're trying. And one of the things that just came across my path this week: we've heard of this EU AI Act, and we've talked about it here, where they're trying to restrict certain risky uses of AI and other things. That seems, I think to some, really restrictive, and one of the things that I saw was that there was an open letter from GitHub, Hugging Face, Creative Commons... I don't know, I don't even know how this works. Like, I guess if you're an organization of that size, you know how to get a letter to the right people. I don't even know if it's open, you just post it on a website and hope they read it, and hope that articles get published about it, which I guess is what happened. So I don't know if the EU AI Act people are actually reading this, I guess that is where I was going with that, but I assume that they're aware of it: an open letter from
GitHub, Hugging Face, and Creative Commons, and a number of others, calling on the EU to ease some of the rules in the AI Act, basically arguing that some of the things in the AI Act are regulating the upstream open-source projects as if they were more commercial products, which they're not, in this open-source ecosystem. And I think the fear is that if the restrictions come like they're planned, then that somehow stifles what we're seeing in the blossoming world of open-source AI. So there's probably a counterpoint to that, which would be: maybe that sort of blossoming is what's rapidly creating issues that are hard for people to deal with. I'm not saying I'm taking either one of those stances, I'm just trying to play devil's advocate. If the open-source world is really driving this blossoming of things, and the blossoming of things is what's causing people to have a lot of these risky types of scenarios pop up, you know, how do you put some regulation around that process when you don't want to stifle the open-source thing, which I think we all love? We support that on this show and see how it's benefited things, but it also creates some problems, probably, and how do you deal with that tension? Yeah, it's a really hard nut to crack, you know, to figure out where the right balance point is on that, because this open letter that we're talking about is really getting at the fact that there are all sorts of negative unintended consequences that can come about by taking an action and making it a regulation or a law. And yet that's being balanced against a really broad fear that the public is expressing. You know, there's a lot in the general media, not the technical media, not AI media, but the general media; there are a lot of stories being published about concern going forward. But then that also gets balanced against organizations with various motivations that are worried about being left
behind. They're worried about their competitors or their adversaries getting ahead of them; potentially any given human in any one of those organizations has that same fear. So if you're a policymaker, how do you approach that problem? You're kind of lagging behind, probably, already on the technical side, which goes back to the earlier point, and you're trying to regulate something that's as cutting-edge as one can be, before it ever happens, by looking at what you have today and trying to project into tomorrow. It's a tough position to be in. It is, and I think there's really a lot of fear on both sides: fear on the one side because policy is falling so far behind what is the state of the art, but fear on the other side because there are real consequences, like you say. If something is made into a law, whether it's enforced or not, the reality is that it's there. I'm just looking, Jeremy Howard from fast.ai has commented on some of these things and written a blog post, and one of the quotes that I pulled out, or that he looked at, with the EU Act, was the fact that any model made available in the EU without passing certain extensive licensing and approvals could face massive fines. And if you think about where those models are coming from, if those are just some developer somewhere creating a model and posting it on Hugging Face, certainly that's available in the EU; that puts a liability there. So there are real consequences on both ends, because also, if I'm a policymaker, and we've seen this in the US just this week too, people are gathering to figure out, like, where do we go, what do we do? Yeah, where do we go? There's a dissonance between the way technology develops and evolves, which is not strictly consistent with, you know, nationality and legal barriers and legal lines, to some degree, and you've seen that in many different things outside of the
topic we're talking about, where people will move to a different nationality where the laws are different, and stuff like that. But there's an added complication here, and that is, and we keep talking about it, especially this year: this is moving so bloody fast that the ability to render a law or regulation essentially completely ineffective is quite easy right now, by simply moving things around the globe and taking advantage of the different things. So it's going to be interesting to see how different legal entities cope with this. How do you make whatever they end up with, whether it be the EU, or, I was looking at a member of Congress commenting, the White House has stuff out on it, but how do you make that enforceable in the large? If you're really firmly planted, like if you're an American company who does most of your business in America, you're regulated by American entities, that's one thing. But a lot of small businesses don't operate that way, and they're not strictly limited to that. So I think, going back to our legal episode recently, where we talked about the ability to enforce, that is really going to play out in this. Yeah, there's the enforcement side; there's also the side of this which is: how are policymakers thinking about this, and what conclusions are they coming to, and what guidance are they providing, whether that's put into law or not? That's an interesting thing to follow. I think that's one of the interesting things in the US this week, one of the things that you sent me, which I think has been happening for some time. Of course, the White House and others have been talking about AI for some time here in the US, but there are interesting things like this Blueprint for an AI Bill of Rights, which is published, and it's quite interesting. So, just from a practitioner standpoint, I'm coming to this saying, okay, how are maybe non-practitioners, hopefully advised by
practitioners, how are they viewing the way in which we should go about doing our jobs? Because probably that's going to affect us at some point, and maybe they have some good ideas to influence how we do things practically. So it could just be completely ludicrous, or it could provide some really interesting talking points. So I was reading through that. I don't know if you saw the Bill of Rights. I just pulled it up, and, you know, it's funny, this is kind of late to the game, in my view. Several years ago, without going into specifics that could get me into trouble, I was kind of deeply involved in the early details of AI ethics in several organizations, doing some of the work, and here's where I ended up: it's hard to come up with good principles, but even as hard as that is, that's still the easy part of the job, because the devil is in the details of how you implement, and what they mean, and how you have the nuance to accommodate all the day-to-day life, going back to the discussion we were just having about open models and unintended consequences and such as that. It's really hard to do. They have good verbiage from the White House, but at each one of the points that they have, I can't help wondering which of the many ways you might go about interpreting this, and implementing any of those interpretations that you have. It's very nice, lovely, fluffy language, and not terribly practical AI yet. Yeah, I think, just to give our listeners an example, this is an example of what policymakers are giving us as guidance in developing systems. So for the Blueprint for an AI Bill of Rights, they have various parts of this, like the actual Bill of Rights; they even say "from principles to practice." So they break this down into several different points. One is safe and effective systems, so their bill is: you should be protected from unsafe or ineffective systems, and
then they kind of go into what should be expected of automated systems. Well, it should be expected to protect the public from harm in a proactive and ongoing manner, and then they give some ways to do that, like consultation, testing, risk identification and mitigation, ongoing monitoring, clear organizational oversight. So part of me, when I read this, part of me thinks, well, if I go and I try to make my product SOC 2 compliant or something like that, I have to do a lot of those things anyway. So why is this different than some of the compliance things that are already widely accepted as compliance things that matter? And maybe it's where AI comes into the automation; there's some large language model reasoning going on, or something like that. But a lot of these things would be things that I'm already looking at. Another one: algorithmic discrimination protections. You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. What should be expected? Proactive assessment, representation and robust data, guarding against proxies, ensuring accessibility during design, development, and deployment, disparity assessment, disparity mitigation, ongoing monitoring and mitigation. I think a lot of this is good language. Some of it blurs the line a little bit for me with current things that exist in terms of compliance, and then some of it is a nice principle, but what do I do with it? [Music] So there was another point on the White House blueprint. It's a point that I see people grappling with a lot, and I don't think we've found the right answer yet, and I don't think what they say in it is necessarily the right answer, because at the end of the day, I don't think it's practical. And that point is near the bottom: they have Human Alternatives, Consideration, and Fallback, and they start off with the first line in bold, and the first line says you should be able
to opt out where, appropriate and have access to a person, who can quickly consider and Rey, problems you encounter and the problem, that I have with that particular ular, point is um I think that's great for, right now for us in the moment that, we're in at this moment and the level of, AI and the level of automation but in, the years ahead across all Industries, we're going to see dramatically, increased automation we're going to see, a number of tasks being automated that, are Beyond human ability to be able to, do and that will be a natural, progression and that may sound scary, today as people listen to this now but I, think that that is the evolution ahead, as as it always has been long before AI, came out if you you know moving from, horse to the buggy you know to, automobile that kind of thing we move, into new directions and there are new, concerns and dangers and we have to, mitigate them but the distinguishing, thing about this particular transition, that we're just at the early stages of, starting is that we will move into, things that we can't do in an autom, sense that there is too much happening, it is happening too fast when there are, millions of considerations in a tiny, fraction of a second there's no human, that can handle that and I think that we, will certainly make bloopers but I think, that a statement like this is driven by, fear it's fear of what happens if we, lose control and I'm not saying that's, not a legitimate fear I I think it's one, of those things that we need to be, working through in many different areas, but when the White House starts off by, saying no no no no whatever we're going, to do there's going to be a human right, there is not really considering what, we're observing here the Steep increase, uh in AI being applied across many, Industries so I'm I'm throwing a stone, at that particular item yeah I do see, what they're saying in terms of it could, be there could be this vicious cycle, that develops right because AI is, 
getting better at doing customer service and generating responses, and AI is getting better at automating systems, right? So, to be hyperbolic, if I get on a train car that's automated, and there's some problem, and I'm stuck on a bridge above a river or something like that, and I call the support number, and it's a generated voice helping me through my issue, right? This whole thing cycles through automated systems not working properly for me and then trying to help me. And maybe actually, to your point, maybe the automated system can help me, guide me through that. It might be better than the human could have been. Yeah, I definitely get the concern around this sort of cyclical thing, and where does a human actually pop in? Yeah, I think there are a lot of systems that will operate at speeds, and with complexities, such that it's going to be hard for a human to debug these things anyway, right? So it's an interesting point. One of the other things that they link in there is the NIST AI Risk Management Framework (AI RMF). Part of me wonders, as a practitioner, if I'm developing a new software product, or I'm offering software in an enterprise setting, there's probably going to be an expectation on me that I go through some process to maintain GDPR, SOC 2 Type II, whatever the specific compliance, HIPAA compliance, right? You know, there's monitoring you can put in place, there are third-party audits, etc., etc. So part of me wonders, will this sort of AI framework kind of morph, maybe not this one specifically, but will there be risk management frameworks that filter into, not necessarily policy... I think we've been talking about the White House and governments, and there are certain things that could be put into law. But I could very well see a scenario where one enterprise says to one of their vendors, oh, are you AI RMF compliant, and how do you prove to me that you are? Well, maybe it's a third-party audit, maybe it's a monitoring system, like it's done with, you know, HIPAA or other things. I wonder if we're going to get into some of that as well, where, whether or not the policymakers make laws, I suspect we're going to get into some of these scenarios where we'll have compliance frameworks put into place that certain enterprises start forcing on other providers, right? Because they're accepting some level of liability for the type of AI reasoning that they're integrating into their applications. So if I'm an insurance company and I'm hiring X vendor to provide some of my AI logic or something like that, and I'm making calls into their system, do they have to be compliant in some way, beyond the compliance structures that are already in place, like HIPAA and others? Yeah, you're right. I wasn't trying to cut you off there, but you're totally right. I mean, that's a huge business opportunity that is yet to be realized. You heard it here: take it and make the AI RMF compliance monitoring framework, and you can make some money. Daniel Whitenack, father of industry, right there. Father... I'm all the time giving away ideas on this show. I probably need to keep some every once in a while. But, you know what, to stick with the AI theme, we'll call you the godfather of AI compliance, because "the godfather" is a popular title for at least three luminaries that we know. Three luminaries, that's funny. I don't know if I want to be associated with the whole compliance field, but maybe; depends how much you pay me, I guess. Yeah, I was going to say, it may not be sexy, but it's lucrative. So yeah, exactly. You know, I'm looking at some of this stuff in the AI RMF, and there are certain things that I could see, just knowing myself, having gone through some of the compliance things. When you go through a compliance monitoring process, or an audit, it's like: do you have this policy in place, are you educating people about it, you know, that sort of stuff. And there are certain things in
here, like in the Govern section, like "the characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures." So I could just see it now: how are you integrating the characteristics of trustworthy AI, blah, blah, blah, and you'll have to show some policy, which may or may not ever be read by certain employees, but hopefully, if you're being truthful, it is. So yeah, I think we could see that soon. Not only that, but the ironic thing about this is that you have this framework here, but with this explosion, this proliferation of models that we keep talking about, and new techniques that are just happening all the time and being released... as that continues to accelerate for some period of time, being able to apply these frameworks quickly enough for market forces to work will almost certainly require compliance AI models that can look at new models, how they're approaching things, and figure out whether or not they're doing it right. That's the real meta thing. It's meta all the way down. It's meta turtles all the way down on this one. There's the podcast title: meta turtles all the way down. Zuckerbergs all the way down, that's right. Well, on these shows, I think it's always good to share some practical learning resources with people, and I did find one this week. Actually, I monitor Hacker News for my good dose of humor and vitriol and superconductors and all the things, but one that was really, really good to point people to would be this "Patterns for Building LLM-based Systems and Products" from Eugene Yan, hopefully I'm saying that correctly. This was a pretty extensive article, so it's a very long article, and there are various sections in it, but he walks through a lot of the things that people are maybe practically struggling with in terms of building LLM-based applications. So he talks through evaluations, he talks through retrieval augmented generation, fine-tuning, caching, guardrails, defensive UX, and collecting user feedback. All of these things, you know, he talks about being used to measure performance, get better task-specific results, reduce latency and cost, etc., etc. These are all the practical things that people are doing day to day as they're building their applications. And a little while ago Andreessen Horowitz put out this, like, evolving ecosystem of the AI or LLM app, and it had a lot of these pieces on there, like caching and guardrails and stuff, and I think this dives into a lot of those pieces in a much greater amount of detail. So if you're wondering, oh, there's this emerging ecosystem of the LLM app, if you want to know about various pieces of that, I think this is a good way to understand a little bit more about how those pieces fit together. Just for the audience: you Slacked me the link, and I clicked on it as you started talking about it, and as you've been talking I was kind of glancing at it. The very first thing I noticed when it came up was how small the slider on the right side of Chrome was, which expressed how long the... it's a significant article. It's significant. And then the next thing I noticed was it wasn't a three-minute read, or a five-minute, or even a seven-minute read. It's a 65-minute read. And so I started... you're right, I mean, just having not had the 65 minutes to go through it, just looking at this, it is incredibly detailed. So I'm going to dive into this after the show today, and that's a fantastic learning resource. I can tell that just from all these graphs and everything, and it's fantastic. Yeah, yeah, definitely, a lot of graphs. The first one you'll see is, you have LLM patterns on a scale from data to user, and offensive to defensive, in other words improving performance or reducing cost and risk, and he kind of plots those various strategies. And there are formulas, if you want formulas; there's plenty of stuff that you don't need to read formulas to understand. But yeah, great resource. I'm very happy to have come across this and to point people to it. It was a good one. Yeah. Well, Chris, I don't know what AI adventures are ahead of us in the coming week, but I'm certainly looking forward to talking with you about them next time, on a Fully Connected episode or with a guest. We never have a dull week. There's so much happening that we're always trying to find which thing we're actually going to talk about. Yes, so it's a fun time to be in this field. Yes. Awesome. Well, thanks for chatting, Chris. We'll talk to you soon. Sounds good, take care. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts. Check out what they're up to at fastly.com and fly.io, and to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Vector databases (beyond the hype) | There’s so much talk (and hype) these days about vector databases. We thought it would be timely and practical to have someone on the show who has been hands-on with the various options and has actually tried to build applications leveraging vector search. Prashanth Rao is a real practitioner who has spent a huge amount of time exploring the expanding set of vector database offerings. After introducing vector databases and giving us a mental model of how they fit in with other datastores, Prashanth digs into the trade-offs related to indices, hosting options, embedding vs. query optimization, and more.
Leave us a comment (https://changelog.com/practicalai/234/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Prashanth Rao – Twitter (https://twitter.com/tech_optimist) , GitHub (https://github.com/prrao87) , LinkedIn (https://www.linkedin.com/in/prrao87) , Website (https://thedataquarry.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Vector databases blog posts from Prashanth:
• (Part 1): What makes each one different? (https://thedataquarry.com/posts/vector-db-1/)
• (Part 2): Understanding their internals (https://thedataquarry.com/posts/vector-db-2/)
• (Part 3): Not all indexes are created equal (https://thedataquarry.com/posts/vector-db-3/)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-234.md) | 35 | 0 | 0 | Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I am the founder of Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well today, Daniel, how are you? I am doing so good, because a lot of my dreams are coming true in terms of topics to talk about. I've been wanting to talk about vector databases on the show for quite some time. I know that we've mentioned them, but we haven't had a full episode on them, and I was scrolling through LinkedIn and saw a set of amazing posts, very practical posts, about vector databases, which I quickly shared, and I also sent a message to Prashanth Rao, who is a senior AI and data engineer at the Royal Bank of Canada. Welcome, Prashanth. Hi, hi, thanks for having me. Yeah, yeah. Well, now you have a three-part series on vector databases, a three-part blog series: what makes each one different, understanding their internals, and not all indexes are created equal. I hope we can get into a bunch of that, but maybe to start out, could you just let us know what a vector database is, and in particular why people are talking about them now? For sure. So, the way I want to answer this question is, I'd like to break it down into parts and answer each bit sequentially. Great. So, to answer what a vector database is, well, let's start with what data is in the first place, right? The definition in my head is: data is an organized collection of structured or semi-structured information, and it's stored digitally in a computer. Now, when you have data, you need somewhere to put it, so that brings us to the question: what is a database? A database is a system that's built for easy access, management, and updating, and also querying, of the data at hand. We also need to talk about what vectors are. Vectors, you could call them a sort of compressed data representation that contains semantic information about some underlying entity; it could be text, images, audio, anything like that. So now we put all of these things together: what is a vector database? A vector database is a purpose-built database that efficiently manages, stores, and updates vectors at scale. I think the scalability is a very key factor there. And it also retrieves the most similar vectors to a given query, in a way that considers the semantics of the query. So I think all of these terms holistically come together to form what you understand as a vector database. And when you say semantics, what do you mean, in terms of semantics and how that maps onto a vector? So, I'm sure most listeners are familiar with the concept of language models; LLMs are everywhere. The thing with semantics is, typically, with a query, like if you type a query into a search bar on Google or something, you're thinking in terms of keywords. You're just thinking, okay, I want this particular thing, this item, whatever you're thinking about, and you type the word in there. Where semantics comes in is: did you type in something along the lines of what the data itself has? Can the query actually translate into something that the database actually understands, and produce the result that is most meaningful to the query you're putting in? So it's not just about the words or the features of that word; it's also about the meaning of that word and how that comes together in the underlying internals. Cool. So, before we keep going... you know, you have developers and data scientists, and they've worked with all the other database types that most of us have worked with for decades, and we have, multiple times over the years, had to kind of understand the new thing that's out and what the value is. NoSQL. There you go. And so I'm going to jump on that. You know, we started with the SQL query language for relational databases, and then we went to NoSQL, of which there are variants, and things called object databases. I understood your definition of vector, but I didn't understand how it related to the utility, or lack thereof, of some of those other approaches. Could you kind of lay the groundwork, or the landscape, of what that is? For sure. So yeah, I mean, I'm a total database junkie; I love thinking about the various kinds of databases out there. So actually, before we go into that, a quick summary in terms of where I'm coming from. I started off as a data scientist, so I'm fully in your world, Daniel, and it's been a few years down that road for me. I think for me, I've hit that point where I've been lost in the world of models and hyperparameter tuning, and a lot of data scientists will relate to that. But the more I began thinking about it, there are people who have entire PhDs in database theory, right, and its implementation. The more I worked with data, I realized that you don't need a PhD to understand enough to build a working application on top of a database. That's when I began thinking about what exactly these different flavors are that you have out there, right? I mean, of course, we've all come across SQL databases at some point in our careers if we've worked in tech. So, to answer that question, I think the general history of how
these things panned out is quite interesting. I believe the origins of relational databases go way back to the '70s, I think, when this field called relational algebra was formalized. It's kind of like, you could say, a formalization of the mathematics around what it means to join data, query data, and store data in a database in a way that is queryable. I think SQL databases are so mature, so tried and tested, and the reason they withstood the test of time is because they view the underlying storage, or the underlying data, as structured. In many cases you have structured data that is in the form of transactions, and what a transaction basically means is: some event happens in the real world and you log that information, and you essentially build up a sequential chain of data, which is basically a table. That's kind of where the relational data model came from. And where relational models get interesting is, you have tables that are related to other tables, and that maps onto real-world complexities, where not all data is independent; some of the data depends on other things. A person's metadata could depend on what company they work at, and things like that. So that's how relational data became the norm. People were gathering data from digital systems and putting it together, and SQL became the standardized query language that you could use to query data. Fast forward to the mid 2000s, and the NoSQL movement starts to pick up. Where that comes from is, there's a point beyond which relational data modeling can become a bit inflexible. It becomes a bit rigid, because in the real world you have data coming in from various sources. Some of that data can come in very rapidly; with the advent of big data, and streaming, and all these rapid ways of gathering data that we have today, it became very obvious that the schema-based approach... a schema is basically what kinds of data types exist in your table, right? The way relational models were built was, you needed to define a schema, and the schema was kind of the ground truth: the data has this type, and only this type, and that's what you expect all the time. I think the NoSQL movement sort of built on top of the limitations of the relational approach of being pre-decided by a schema, because to be truly flexible in terms of the massive amounts of data coming in from various systems, you need to have a schemaless approach at times. The schemaless approach basically means you store documents; you dump data in semi-structured JSON blobs and things like that, in a scalable way. And I think "horizontally scalable" became very, very important in that period. The earlier databases, the relational ones, I think were more vertically scalable, in the sense that you could just add more and more compute and you essentially scaled up your data that way. But now, with NoSQL, the idea of distributing the data as documents across multiple machines, and having those machines communicate with one another, became a new paradigm. But I think the challenge with NoSQL is, because of the underlying assumption that the data need not necessarily be dependent on itself, in the sense of relational tables, they didn't adhere to the SQL language standard, and they kind of diverged. MongoDB was among the first, and there were many others that came after it using JSON-based query languages. So there was a big bifurcation, I guess you could say, in the database community, where on one hand you have SQL enthusiasts, who swear by the declarative nature of SQL, and then you have the other community, NoSQL, who essentially query the database using JSON. They claim it's developer friendly, and JSON is a developer-friendly, language-agnostic interface, and so on. So in some ways it does have its benefits, but then, depending on your use case and what you're trying to do, there are people who will argue on both sides, that SQL should be the only thing you use, or that NoSQL is what you should use, and so on. So, does that clarify aspects of both those camps before we move into the modern ones? It does. And then, if you could distinguish as you go how vector is different from those others, that would be helpful for me, and maybe some other folks in the audience. Yeah, and I think maybe one thing that I loved about your blog posts is, I see some of the players from the world that we just talked about represented within that landscape, and then also some that I'm not familiar with, or at least that I've only seen recently. So you've got these different axes: Postgres, which uses a SQL-based query language to a relational database, has some part to play in this vector database ecosystem, but then others seem to have their own query language. So maybe you could also start to break down for us: we want to store vectors in databases now, to do these sorts of semantic queries. Do they need to be stored in one or the other of these types of databases that you've talked about, developed over time, or how has that happened? And what are the major categories of players in the vector database space? Absolutely. So, yeah, before we get into the specifics of databases, to answer Chris's point, we definitely do need to talk about the evolution, right? I see vector databases as a natural evolution of the NoSQL class of databases. If you imagine a Venn diagram, you have a circle that represents SQL, and the other circle represents NoSQL, and you have an intersection. That intersection, I believe they're called NewSQL now; I'm not sure if you've come across that term, it's quite interesting. NewSQL databases technically use SQL-like languages, but they also claim horizontal scalability, and a bunch of other things related to ACID
compliance and all the other things, so it marries the benefits of both SQL, and nosql paradigms I was thinking, initially where do I place Vector, databases does it go in that, intersection or does it sit purely in, the nosql camp then I imagine this as, you extend that circle that has nosql it, becomes like a blob like a fuzzy, amorphous blob no SQL is huge and I, think I in my head Vector databases are, like an extension to no SQL and why they, came about to understand what vectors, are and how they're stored in database I, think it's important to understand what, search is and what essentially you're, doing when you query a nosql database so, where it comes from is in the early days, I guess uh people were just submitting, an exact query using a Json sort of, query language like our Mobb has and, that query has to have all the terms or, parameters in there that tell you what, you want to fetch from the dat right in, a SQL World it will be done with a, declarative query in SQL whereas in no, SQL you typically do it in Json over, time I think the idea of full text, search became very important because I, think everyone wants to be able to, retrieve information from massive Blobs, of data fitting around and how do you, query that right if it's in a no SQL, sort of format if it's not you can't, write a SQL query to retrieve it how do, you get that information so the idea of, a full text index came about and what, what essentially that is is it uses the, concept of invert indexes inverted file, indexes sorry where you consider the, term frequencies of terms that appear in, a certain document and obviously the, relative frequency of how often those, terms exist in a document versus the, entire data set so you combine all those, things together similar to how tfidf is, in data science there's an algorithm, called bm25 which is the most popular uh, inverted file index algorithm it's the, most commonly used one for full text, search so the early days of search, involved 
how do you scale that up, because you have massive amounts of data, how do you build that index very very, efficiently and then the querying, interface sits on top of that so you, essentially submit a query saying okay I, want so and so term in the keyword that, you put in and the inverted file index, the bm25 algorithm it considers the, words frequency and it considers subword, features and a bunch of other things to, intelligently retrieve relevant, documents that contain that term while, but also throwing out you know useless, words stop words and things like that so, it was more of like a back words sort of, approach you consider an NLP analogy, it's kind of like a bag of words way of, approaching text now fast forward a few, years I think ever since the Transformer, Revolution happened people began, observing the obvious power of, Transformers in encoding semantics right, a Transformer is way better at isolating, meaningful terms in a document, especially when when you're doing things, like classification retrieval and so on, so how could you merge those benefits of, a transformer with what you have in a, database so I think Vector databases the, term got coined I think much later after, Transformers came about it was mostly, called search engines before that a more, generic term I think a catchall term for, anything that involes search but, nowadays I believe search engine refers, to a more like you consider semantics as, a key component so essentially vectors, are the only thing that can do that so, to really describe what a vector is, essentially you have a language model, typically a Transformer based language, model that you use to embed the, representation of a sentence into tokens, and the representation is stored as a, vector the vector that you have, essentially for a particular sentence, typically those are done using sentence, Transformers which is the most common, kind of model you use that essentially, embeds the entire schematics of that, 
sentence in the vector and then the way, this scales up is you consider the, context of each and every token in that, Vector in a way that when you submit a, query the semantics of the query are, mapped to the vector in database and you, can find a similarity between what you, entered as a query and what exists in, the data so the a vector is a very, powerful way of you could say, compressing the representation of, meaning in a in a sentence or a document, in a way that scales up numerically and, you can rapidly query that in in, additional this is a chang log news, break, we've talked about prompt injection, quite a bit since chat GPT ushered in, the llm era in brief that's where you, handcraft a prompt that tricks a chatbot, into not following its own rules well, new research has uncovered some new llm, attacks on the Block which aren't, exactly that quote large language models, like chat GPT bar or clawed undergo, extensive fine-tuning to not produce, harmful content in their responses to, users questions although several studies, have demonstrated so-called jailbreaks, which are special queries that can still, induce unintended responses these, require a substantial amount of manual, effort to design and can often easily be, patched by llm providers this work, studies the safety of such models in a, more systematic fashion we demonstrate, that it is in fact possible to, automatically construct adversarial, attacks on, llms specifically chosen sequences of, characters that when appended to a user, query will cause the system to obey user, commands even if it produces harmful, content end quote the biggest difference, here is that they're achieving the, Jailbreak in an entirely automated, fashion and they make a case for the, possibility that such Behavior may never, be fully patchable by llm providers game, over man it's game over what are we, going to do now what are we going to do, you just heard one of our five top, stories from Monday's Chang log news, 
Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more of the developer news worth your attention. Once again, that's changelog.com/news.

So Prashanth, you kind of alluded to this — and I think that explanation of how this vector-based semantic search really exploded around the time that Transformers and large language models did was amazing. Even in this past year there's been a huge explosion of interest in vector databases. Could you maybe describe a little bit — we know that you can search a vector database to find similar statements, let's say, or similar chunks of text, where the similarity is based on semantics — how are people using them in their AI workflows, and how does that correspond to what's popular right now in terms of what people are exploring with AI?

Yeah, I need to highlight the fact that I'm both fascinated and frustrated by the current state of marketing in databases, both at the same time. I'm genuinely interested in the use cases, don't get me wrong. When combined with LLMs — large language models like ChatGPT, or really any sort of language model — a layer on top of a vector database can be used to build some very interesting applications. One of those is querying your data via natural language. I think this has always been a dream of data scientists and people who work with data: rather than writing my query by hand, or constructing the query painstakingly from the ground up, can I just talk in natural language and have the database respond to that query in natural language as well? The application would be built using an LLM at the core, and essentially that would be powering the whole translation of human instruction to machine instruction and back to human.

I could go into the details of specific applications, but one thing I do want to throw back at you: I know this is a practical AI podcast, so what I was hoping to get into is — I have an idea for a four-part blog series, and part of it is the trade-offs. What really interests me about the various vector databases out there, and why I began writing about these at length, is that when it comes to understanding what tool to use in the real world — when you have a business problem, a particular case you're trying to address — obviously there's tons of information out there. You could go read a bunch of blogs and papers and come up with your own trade-offs, but I think it makes sense to actually walk through some of them, because as you go through these trade-offs you begin formulating the actual value of these things much more clearly. In my head, it makes sense to talk about the use cases once we go through some of these key trade-offs, because in many ways using a tool depends on what goes into it and how you've thought about the different options.

You can dive right in, because my follow-ups were essentially what I think you're about to cover anyway, so I'll just leave the mic with you.

For sure. It makes a lot of sense to write about this — and obviously you can read it in your own time, but this is a great place for me to begin talking about it, and eventually I'll put it down in words as well. I've broken these down into roughly eight categories, and the trade-offs I'm specifically speaking about are what you need to think about when you're choosing a database — this will answer exactly what you talked about earlier, Daniel. The first thing is the idea of deciding between existing databases that have
been around — you know, document formats and things like that — versus newly designed databases built specifically for vectors. I'm going to call it purpose-built vendors versus incumbent or existing vendors. It's very important to understand that in many cases you might just be looking to add semantic search capability — retrieving information using semantics — on top of an existing application, and that existing application could very well be built on a well-known, tried-and-tested solution like Elasticsearch or Postgres; there are many solutions out there. In those cases it makes sense to ask: why can't I just leverage the vector index or vector storage of that database itself?

For example, you mentioned Postgres. One real big concern here: if you look at some of the material online on the performance of pgvector — pgvector is basically the vector add-on to Postgres, and there's been enough documentation about this — the way it's being slapped onto Postgres is as an add-on, built by a third party, Supabase, that adds vector functionality to the existing engine that Postgres has. By its very nature, because it's not tightly integrated with the underlying internals of the database itself — the storage layer, the indexing, and all of that — you're going to miss out on a lot of optimization. The technology is basically not optimized from the ground up for speed of indexing, performance during querying, and so on, and this has been well documented. So that is a very big concern. Depending on your use case, and how much accuracy and what quality of results you want: are you better off using an existing database that you already have in your stack, or actually bringing on a new, tried-and-tested, purpose-built database for that very reason?

From my experience — I've been tinkering around with quite a few options out there — purpose-built vendors are, in my opinion, always a better solution in terms of scalability, efficiency, and also accessing the latest technology: the latest indexing algorithms out there, how they get the best bang for your buck in terms of speed of indexing, quality of query results, latency of those results, and so on. So I feel like in the long term, if you are actually serious about building a vector search or large-scale information retrieval system that considers semantics, it makes far more sense to think about a purpose-built solution. Many database solutions are out there — I've listed some in my blog — and I think those are going to win out over the incumbent vendors who have kind of bolted on vector offerings, as you could call them.

What we're talking about is exactly what I had hoped we would talk about in this episode, because your blog posts were so practical. In terms of how you think about the infrastructure that you work with day to day, would you recommend — because sometimes you don't know how much you need to optimize at the beginning, and you can over-optimize, right? — would it be a valid stepping stone to say: if I'm already working with Postgres, I could try out its vector capability, and if it works for my use case and I don't have, you know, three million documents that I'm searching over — maybe I just have my personal blog or something — maybe it's fine, and then optimize as I hit a wall? Or is there danger in trying to put a square peg in a round hole, sort of thing, and getting yourself in trouble?

You hit the nail on the head — I was going to say exactly that, putting a square peg in a round hole, because I've faced those issues myself. I won't name exact database vendors, but I've worked with SQL and NoSQL databases, which obviously have vector solutions. I
think the challenge with saying "okay, I already have something that works" is that you've got to remember that every single database that has existed for more than ten years comes with baggage. They have their own tech debt associated with the underlying programming language they're built on; there are years of decision-making and architectural decisions under the hood that they've taken to implement solutions the way they have. They can't just throw all of that away and build a vector solution that is optimized from day one. It's going to take a fair amount of time before these incumbent vendors are able to optimize their offerings to the point that they perform as well as purpose-built vendors, because those purpose-built vendors have spent thousands of man-hours per offering in just tuning and building for a very specific goal.

What I noticed in my experiments is that a lot of features you take for granted in a purpose-built offering are not even available in the existing solutions. pgvector is a very, very young solution right now. Elasticsearch's vector offering — I've worked with that as well — considering Elasticsearch has been around for so long, they only released their first vector ANN algorithm, I think, last year, 2022. In terms of a database's capabilities, that's very, very young. So I would say there's a lot you could potentially be missing or lacking, and I cover some of that in the other trade-offs I list as we go forward.

Yeah, let's go on to those — I'm curious what number two is.

For sure. Number two — I came across this in my first blog, reading some of the comments, and one of them brought up the trade-off between using a database that allows you to build your own embedding pipeline versus using a built-in, hosted embedding pipeline. By that I mean: how do you generate these embeddings, all these vectors? Many people are familiar with sentence Transformers — available on Hugging Face and a bunch of other open source platforms — so it's quite easy, you could say trivial, to put your data into these pipelines and generate sentence embeddings that you can ingest into a database alongside your actual data. You have your document data, with all the fields and attributes in it, alongside the vectors that encode the useful information you want to query on. That's a relatively trivial thing to do.

But there are certain database vendors who offer convenience features on top of that, where they embed the API of these models inside their own offering. If you're just getting started and you don't know much about how vectors or language models work, that might be something to consider: you might be better off using something like Weaviate, which has pipelines built in, where you can just tell it "okay, connect to Hugging Face, such-and-such model" and it will build the embeddings for you, as opposed to writing your own custom Transformer pipeline that generates the vectors. Now, if you have experience with Transformer models, you might be far better off doing all of the embedding work upstream — parallelizing and optimizing that portion, generating embeddings at scale, and, from a cost perspective, getting it done with the least resources and the most quality you can — and then just sending the vectors over to your database. So this is an important thing to consider, depending on the level of experience your developers have, to actually bring the vectors in.

Gotcha, that makes perfect sense. What are some of the other trade-offs?

The other thing is the two key stages, you
could break down when you use a vector database as a developer. The first stage is the input, which is essentially building the index. I'll go into the indexing methods in a bit more detail — that's not really a trade-off, it's more about knowing what indexing even does under the hood. What indexing means is: you have data that you need to encode into a vector, and it's not as simple as just dumping a vector — an array of numbers — into your database; you have to be able to search through those vectors. So the goal of indexing is to design efficient data structures, and store the vectors using those data structures, in a way that they can be queried efficiently and at scale. That's an upstream process: you do it once up front, you bring all your data in, it's indexed, and now you have a bunch of vectors in there that are searchable.

The downstream portion of that is querying — it's basically like inference in NLP. The query stage involves taking the user input and transforming it into a vector, just like you did your raw data, and the embedding model you use there is the same embedding model you used to transform your data, so that they're compatible. So you're clearly separating the indexing step from the query step. The trade-off here is: is your database optimizing for indexing speed or query speed, or is it mature enough to have optimized for both? If you look through the offerings out there, many of the existing vendors have focused more on one end of the pipeline than the other — some are faster at indexing and not so much at querying, while some are way better at querying but much slower during indexing.

Generating the index can actually be a very expensive step, because it's not only about using a sentence embedding model or a Transformer; it's also about the database being able to translate those vectors into an index it can actually query. Depending on the size of your data, this could take hours or even days — indexing periods on the order of days are not unheard of. And of course, depending on the amount of money you're throwing at it, you could use GPUs to speed up the vectorization and use multiple parallel instances of the database to scale that portion up. But that's exactly it — the trade-off is: how important is indexing speed? If your data is coming in as a stream at a very rapid rate, indexing speed is an important criterion. But if you're not so interested in dumping large amounts of data very quickly, and more interested in serving results to a very large number of users asynchronously, then query speed becomes very, very important.

I know we don't want to necessarily call out certain players in the space, but I think a lot of people are already familiar with a lot of the names here. So maybe you could just highlight, from your perspective, which ones are more — as you're saying — mature in how they're thinking about both of those phases, versus ones that are optimizing more on one side or the other, which, depending on your use case, could be a good thing or a bad thing. It's really about use case, not so much about the goodness or amazingness of a certain offering.

Yeah, absolutely. As you say, I'm not going to call out specifics — to be fair, every vendor makes trade-offs; they're obviously juggling a lot of their own trade-offs when they build these things, and I obviously haven't used every single one out there. But of the ones I have worked with, among the most mature is Milvus, an open source, purpose-built database that's been around among the longest in the vector database market, and it's extremely scalable. As I've written in my blog, Milvus throws the kitchen sink and the refrigerator at the vector problem — it can really handle billions of data points; it's designed for that, and it's had time, having been around for about four or five years. I wouldn't say it would be my go-to first choice, though — that's my own personal preference, more about usability, how accessible the Python client is, and so on. Then other vendors like Weaviate and Qdrant are very optimized for this reason. You could say these are very powerful solutions: they scale really well, they ingest data really quickly, and they also supply query results very quickly and relatively accurately. To be fair, I think the existing database vendors like Elasticsearch and Postgres are not there yet in terms of speed, and that's partly because they're general-purpose databases, not specialized vector databases — it makes sense that they have to deal with other priorities and cannot optimize for all of these things with the laser focus that purpose-built vendors have.

Thank you so much, Prashanth, for helping us start to pick apart some of these trade-offs. I'm starting to structure things in my mind in a useful way, which is really great, because I've also been exploring a lot of these, and I agree with you — there are a lot of new entrants into the field that show a lot of promise, even the ones that aren't quite as mature yet. What are some of the others? You mentioned eight, I think — I don't know if we've been through three or four yet; I wasn't keeping track.

I might have to speed things up a bit and just list them off, at least. Okay, so maybe I'll quickly go through them at a high level, and then we can go into finer details of which ones you think are the most interesting.
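The recall-versus-latency trade-off that comes up in the list of trade-offs can be made concrete with a toy experiment: compare an exhaustive scan (perfect recall, highest latency) against a shortcut that only scores a random sample of the stored vectors, and measure how many of the true nearest neighbours the shortcut recovers. Real ANN indexes such as HNSW pick candidates far more intelligently than random sampling — this sketch only shows how recall is defined and measured.

```python
import math
import random

random.seed(42)

def euclidean(a, b):
    """Straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A toy collection of 1,000 random 8-dimensional vectors.
vectors = [[random.random() for _ in range(8)] for _ in range(1000)]
query = [random.random() for _ in range(8)]
K = 10

def top_k(candidate_ids):
    """Ids of the K candidates closest to the query."""
    return set(sorted(candidate_ids, key=lambda i: euclidean(vectors[i], query))[:K])

# Exact search: scan everything (highest latency, perfect recall).
exact = top_k(range(len(vectors)))

# "Approximate" search: scan only a random 20% sample (lower latency,
# partial recall). Real ANN indexes choose candidates far more cleverly.
sample = random.sample(range(len(vectors)), 200)
approx = top_k(sample)

recall = len(exact & approx) / K  # expectation is ~0.2 for a 20% random sample
print(f"recall@{K} = {recall:.2f}")
```

The point of a good index is to push recall close to 1.0 while examining only a small fraction of the vectors — that is the knob the vendors discussed here are all tuning.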
Yeah. Okay, let me summarize the first three: number one is purpose-built versus existing solutions; number two was external embedding pipeline versus built-in, hosted pipeline; number three is indexing speed versus querying speed. The others go more into the actual indexes and the generation of those indexes in more detail. Number four is recall versus latency — how accurate the results are versus how fast I'm retrieving them. Number five is in-memory index versus on-disk index — I think this is a very big one for the future, so we definitely want to go into some of the details of that. Number six is sparse versus dense vectors — the kind of vectors that underlie the index. Number seven is the importance of hybrid search, where full-text search is combined with vector search — they each have their own trade-offs. And the last one is the importance of filtering: pre-filtering versus post-filtering, which decides the quality of your search results.

I am very curious about the in-memory versus on-disk one. Well, I'm interested in all of them, but one of the things that has come up in several of the applications I've worked on has been: do we self-host one of these things, or do we use the managed service, because they're going to be able to scale up and optimize things? There's also the choice of, oh well, I could just load one in memory on the fly, ephemerally — an embedded case where I load a bunch of vectors in and there's some persistent file I can pass around. And then there's, I think, more of what you were getting at, which is: is this index represented on disk or in memory? Could you help us parse through some of those things and go into a little more detail of what you mean there?

Yeah — now that you mention self-hosted versus cloud, I think that's a number nine that I'll add eventually; very good point that you brought that up.

Perfect — maybe we can find a number ten to round it out before the end of the episode. But go on.

Going back to your in-memory question — it's a very important one. I think this is one of the things that's defining what you could call the race towards vector supremacy — I don't think the term is very accurate, but anyway. The most popular vector index out there by far is called HNSW — hierarchical navigable small world graphs — and I go into the details of the algorithm in part three of my blog, so I'd be happy to discuss it more with anyone outside of the recording. The HNSW index is known for its relatively good trade-off between recall and latency: it's fast and it's relatively accurate. But it is also memory-hungry, and where this becomes an issue is as datasets get larger and larger. This is called the trillion-scale vector problem — a lot of vendors are talking about it now. It's not too hard to imagine that at some point you're going to have to index a trillion vectors, and that is by no means a mean feat; it's a very challenging problem, because the dataset in that situation would be way too large to fit in memory. Now, HNSW already does a lot of optimization under the hood — the algorithm is designed to store a sparse graph in memory, and you search through that sparse graph, narrowing down through its layers on the nearest neighbors to the query you input. But as data gets larger and larger, even that sparse graph does not fit in memory.

So databases have come up with different solutions to this out-of-memory issue. One example would be Qdrant: they use mmap — memory mapping — where you don't actually store the vectors in RAM but persist them to the page cache. That's still better than storing them directly on a solid-state drive, which is one level below, so the latency hit is not as bad — you don't lose that much performance. And you'll notice a lot of vendors fight really hard to avoid persisting the vectors or the index to disk, because the moment you go onto a solid-state drive there's a massive performance hit on retrieval — the speed at which you can retrieve things from memory is, as you know, much, much faster than what you can do on disk. The general trend across the board right now is that most vendors largely store the HNSW index in memory and then add some sort of caching layer to avoid repeating queries and wasting time.

But there's an entirely new index called Vamana — I've written about that in my blog — which is optimized for solid-state disk retrievals, and the algorithm built on it is called DiskANN. Not every database vendor has implemented this; it's still early days. But if I look at where the future is going, there are several roads vendors could go down. They could choose to implement HNSW on disk, but the record suggests that's not a great idea, because its performance would drastically degrade — it would not perform as quickly as it does in memory. DiskANN seems to be the agreed-upon standard across many vendors. The challenge with DiskANN is that the original research implementation — from the Microsoft team behind it — does not directly translate into database internals. Depending on the language the database uses — many of these are written in Go, while the DiskANN implementation was written in C++ — it's not a direct transplant of the algorithm from the source to the
database; it requires a lot of rewrites and a custom approach to optimizing for that speed. That being said, I have to point out one particular vendor that I think really stands out from everyone else on this front: LanceDB. They're, I believe, the youngest database out there — they just came about around the end of 2022, early 2023 — and they are the only solution, as far as I know, that only supports on-disk indexes; they don't do an in-memory index at all. I was initially very surprised at how they even do this, but as I dug into it — and I've spoken to some of the team as well; they're really open about the research they're doing and the things they're building — essentially they innovate on multiple fronts, and the biggest innovation is the underlying storage layer. They built a storage format called Lance, which is optimized for on-disk reads of data, and the database itself is built on top of this open source format. The whole thing is open source, it's built in Rust — so the performance there is already close to bare metal; it's relatively fast — and they've already built an experimental DiskANN implementation. So when it comes to this on-disk versus in-memory trade-off, Qdrant is going down its own path in terms of how it handles larger-than-memory data, Weaviate is going down its own path, and LanceDB is innovating on a different front. Those are the three vendors I've interacted with and used the most, and I think the future is heading towards one where on-disk becomes a requirement and the standard way of implementing an index — but the engineering challenge is still ongoing.

Let me ask you a slightly different question — it's not completely unrelated; the things you've been addressing are leading me to the next step. When you're thinking about the environments you want to put these in: if you look at the other database types that came before vector, you'd have some that scale massively in the cloud, and you'd have others — as we've moved more and more intelligence and data out onto edge devices — that are either embedded or designed to serve in a very constrained environment. What are the options for vector databases there? I'm assuming there's obviously the cloud capability, because that's always the baseline. But as we're moving into an increasingly autonomous world, with more and more things being pushed outside of the data centers and the clouds — or at least the central parts of the clouds — are there options for either embedded or, if you will, micro serving on the vector side?

An amazing point, yeah — and I covered this in my blog post number one, in terms of the architectures of these databases. You're absolutely right; I think there's a lot of room for embedded databases to become the norm. DuckDB is making waves in the SQL market on that front, and I think a lot of vendors are emulating what DuckDB has done in SQL — as you know, DuckDB is an embedded database, unlike Postgres, which is a client-server architecture database. What happened in the SQL world is now translating into, you could say, the vector world. Two databases are following this embedded approach: LanceDB, as I mentioned, and Chroma. Chroma is quite well known — people have been talking about it for a while — but between the two, I do think LanceDB stands out more in the underlying technology, because Chroma, from what I understand, is still building out its underlying layer. It was kind of wrapped around an existing underlying database — it did not have its own purpose-built offering to begin with — but they're kind of building that out as we speak. So between these two vendors, it'll be interesting to see how each of them rolls out features and targets a specific part of the market.

Going back to the point of cloud versus on-prem — that's another big thing that I think is going to come up. Honestly, with Pinecone and services like that which are completely on cloud, there could be real bottlenecks for companies being okay with just sending their data out to some cloud. Even if Pinecone will deploy on your infrastructure, at the end of the day it is still a purely cloud-based solution, and there are a lot of infrastructure-related hurdles around that. Self-hosted, I think, is going to become more and more common, and certain options like Weaviate and Qdrant offer self-hosted options in their licensing as well. So the question that remains unanswered for me is: which model in vector search will dominate in the longer term, embedded or client-server? We're so used to the client-server model — it's been working for more than a decade, and pretty much every database we've used is based on the client-server architecture, where the server sits remotely and I don't have to have it running anywhere near where my application is running. But I think embedded databases, especially with LLMs in the picture, make a lot of sense in terms of data privacy and things like that. And the scalability of these hasn't truly been tested — DuckDB is just three or four years old, LanceDB is less than a year old, Chroma as well. So it'll be interesting to see how embedded databases compete on that front — how well adopted they become — because industry generally tends to favor things that are tried and tested at scale. For this sort of thing to catch on, it would have to offer real business value, and the way these databases monetize their
offering I think, that's what be interesting to see and I, guess we've already started moving this, direction a little bit but as we draw, closer to an end here I'm curious you, have explored probably more than well, many people certainly myself in terms of, how all of these offerings compare what, the trade-offs are related to Vector, databases I'm curious as you look, towards the future what are you excited, to try that you haven't yet tried and, then maybe what what excites you about, this space I know you mentioned, certainly there's things that are hyped, or maybe different marketing that plays, into this but what are you actually, practically excited about as a, practitioner in the future of this, Vector database uh space I think the lwh, hanging fruit is the immediately obvious, one so I'll start with that I think in, the past when it came to search we, imagine the Google search bar and idea, that to build something like that was, inconceivable a few years ago having a, scalable reliable search engine that you, could build in house on your own, proprietary data was really difficult to, do at scale but today I think with the, combination of vector databases and llms, uh with gbd4 now and all the other, models out there I really think that, it's kind of become available to the, masses right like the average company, who does not have massive compute is, still able to build very very valuable, search Solutions information retrieval, Solutions on top of their existing data, there are additional offerings um like, Hast stack and you know like the search, engines that build on top of back, databases but I think the Foundation, layer are actually being enabled by, Vector databases which is why I'm so, interested in in those news cases so, those applications are very interesting, at first the other thing is retrieval, augmented generation this is a term that, came about I think it was introduced by, meta in one of the recent papers, essentially the idea behind 
retrieval, augmented generation is typically, information retrieval involved you send, a query and you receive response that, retre information relevant to your query, right where the generation comes in is, now llns add an additional layer on top, of that you could send a query in, natural language and you retrieve the, most similar documents to that query, right but rather than just retrieving, the document itself you could have the, language model go through the document, look at your query and then retrieve, only the part of the document that is, relevant to that query and then generate, a response that could potentially answer, a question that you had like what is the, birthday of so and so person who runs, this company right so these kinds of, things were really really like almost, impossible to do uh before but now I, think it's it's really actually, achievable with the kinds of tools and, technologies that are available today I, think raled generation is really, skyrocketing right now as as a term I, think everyone's talking about it but, what I want to add to that is I want to, throw this out here to any of the, listeners and maybe potentially I'm, going to talk about this to other people, in Industry as well can we add another, layer to retrieval augmented generation, and what I'm really interested in is how, the Two Worlds of graph databases and, Vector databases come together and I've, posted about this a couple of times but, what's really interesting right now is, most graph databases like NE forj for, example they use declarative query, language interfaces like Cy Cur is you, the SQL equivalent for graphs the good, thing about knowledge graphs is they, encode factual information uh and in a, very human interpretable way so the, things that form nodes and edges in, Knowledge Graph there are something that, we as humans put in there and encoded, our knowledge of the real world into the, data where Vector databases sit, complimentary to this is in many 
cases I might have connected data, where, let's say, a person knows another person, like in a social network situation: a person follows another person, a person lives in a city, and so on. These are all meaningful connected entities in the real world. But you add layers of data on top of this: data about a city, data about a person, where they worked, what company information they have. There's so much additional unstructured data attached to a node in a graph that is actually hard to query using conventional graph algorithms or graph query languages. So I think vector databases are uniquely placed to add new value in that space, in terms of what I call factual knowledge retrieval. Now, the problem with knowledge retrieval is that sometimes the queries you have need to be exact. The ability to submit a fuzzy query that does not exactly match the terms in your graph is something you didn't have before; it's very difficult to generalize your query in a way that retrieves useful information. So I'm very interested to see how the power of natural language query interfaces enabled by LLMs can be built on top of vector databases that store all the information related to an entity, then encode that entity into a knowledge graph, and then tie all these things together in a way that lets you actually retrieve information and explore and discover aspects of your data that you couldn't otherwise. I call it an enhanced retrieval augmented generation sort of model, and this would obviously require tools like LangChain or LlamaIndex, these additional frameworks that allow you to compose different tools together and pass data and instructions back and forth between the human and the different underlying databases themselves. So I'm super excited about those technologies. Yeah,
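The retrieve-then-generate loop and the graph-plus-vector combination described above can be pictured in one small, self-contained sketch. Everything in it is an assumption for illustration: the character-trigram `embed` stands in for a real embedding model, `generate_answer` stands in for an LLM call, and the toy triples are not Neo4j, Cypher, or any actual vector database API.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: character-trigram counts stand in for a real
    # embedding model (e.g. a sentence transformer).
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Graph side: exact, human-encoded facts as (subject, relation, object).
edges = {("alice", "works_at", "acme"), ("alice", "lives_in", "berlin")}

# Vector side: unstructured text attached to each node, indexed fuzzily.
node_text = {
    "alice": "Alice is a data engineer who writes about stream processing.",
    "acme": "Acme builds industrial sensors for offshore platforms.",
    "berlin": "Berlin is the capital of Germany.",
}

def fuzzy_node(query):
    # Retrieval step: find the node whose attached text best matches a
    # fuzzy natural-language query; no exact term match is required.
    q = embed(query)
    return max(node_text, key=lambda n: cosine(q, embed(node_text[n])))

def neighbors(node):
    # Exact one-hop graph traversal over the encoded facts.
    return {(rel, tgt) for (src, rel, tgt) in edges if src == node}

def generate_answer(query, node):
    # Stand-in for the generation step: a real system would prompt an
    # LLM with the query plus the retrieved node text and graph facts.
    facts = "; ".join(f"{r} {t}" for r, t in sorted(neighbors(node)))
    return f"{node}: {node_text[node]} Facts: {facts}"

query = "someone who does streaming data engineering"
answer = generate_answer(query, fuzzy_node(query))
print(answer)
```

In a real system the fuzzy lookup would hit a vector database index, the one-hop traversal would be a Cypher query, and a framework like LangChain or LlamaIndex would orchestrate passing the retrieved context into the model's prompt.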
I think it's great to hear that perspective. Usually the answer is not "only this technology and nothing else is the solution"; a strategic combination of things is often where things end up, and I think those are really interesting topics to explore. I look forward to your multi-part blog post as you explore those; I'm definitely going to be following your writing now. And thank you, from the community and from us, for your work on this topic and for sharing that work with the community. It's super practical, and we're very privileged and happy to have you on the show. Thank you so much, Pran. No, I appreciate it. Thank you so much.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a long-time listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | There's a new Llama in town | It was an amazing week in AI news. Among other things, there is a new NeRF and a new Llama in town!!! Zip-NeRF can create some amazing 3D scenes based on 2D images, and Llama 2 from Meta promises to change the LLM landscape. Chris and Daniel dive into these and they compare some of the recently released OpenAI functionality to Anthropic’s Claude 2.
Leave us a comment (https://changelog.com/practicalai/233/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• What is NeRF article (https://datagen.tech/guides/synthetic-data/neural-radiance-field-nerf/)
• Llama 2:
• Llama 2 site (https://ai.meta.com/llama/)
• Llama 2 paper (https://arxiv.org/pdf/2302.13971.pdf)
• OpenAI Code Interpreter (https://openai.com/blog/chatgpt-plugins#code-interpreter)
• Anthropic Claude 2 (https://www.anthropic.com/index/claude-2)
Learning resources:
• Hugging Face guide to Llama 2 (https://huggingface.co/blog/llama2)
• LLaMA 2 - Every Resource you need (https://www.philschmid.de/llama-2)
• OpenAI code interpreter article (https://www.oneusefulthing.org/p/what-ai-can-do-with-a-toolbox-getting)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-233.md) | 3 | 0 | 0 | [Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io.

[Music]

Welcome to another Fully Connected episode of Practical AI. In these episodes, Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI news, and then we'll share some learning resources to help you level up your machine learning game. This is Daniel Whitenack. I'm a founder and data scientist at Prediction Guard, and I'm joined, as always, by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing cool. I'm trying to figure out how we survived before all these great new models and such; it's changed everything. Oh yeah, it's been crazy. I just created a post for LinkedIn, and I was grabbing text, putting it into ChatGPT, getting a nice rephrasing, and then I needed an image. In particular (we'll talk about it a little bit in this episode), there's this FreeWilly model from Stability AI, which is whale-themed, and then I've got the llama thing, so I just went to Stable Diffusion XL on Clipdrop and said, hey, generate me an image with a whale and a llama together. How did I even post to LinkedIn before without these things? It's like a different world. Yeah, 2023 versus 2022 is totally different: the content generation, the way you code. It's a different world. Yeah, and this week, as most weeks are, it seems
like in 2023, had some pretty groundbreaking announcements and releases, which we're going to dive into. There's just a huge amount to update on, and I think it's a good time for one of these episodes between you and me to parse through some of the new stuff hitting our feeds. Well, I mentioned llama; one of the big things this week was Llama 2. But before we jump into Llama 2, which was maybe the main thing dominating at least my world this week, it might be worth taking a little time to highlight something outside the stream of large language models that also crossed my desk this week, which I thought was really cool. It's this latest version of NeRF. This is work from Google presented at ICCV 2023, called Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields. That's quite a name right there. It is quite a name. NeRF stands for neural radiance fields, so it's camel-cased: capital N, small e, then capital RF. NeRFs are fully connected neural networks that create unique novel views of complicated 3D scenes based on a set of input images. I don't know if you've seen that video yet. I'm looking at it as we are talking, and when you say "the video," I know which one you're talking about, because it's amazing. I've just left it running. It's pretty spectacular. Yeah, this is a podcast, so it's hard to convey some of this, but if you search for Zip-NeRF, you can go to the page for this paper, which is a great summary, and there's a video on the page. Just to describe it: imagine a complicated house with a bunch of different rooms and an outdoor patio, sort of a garden area. The video is this almost drone fly-through of the house and then the outdoor area. If you imagine a drone flying through a house, there are hats and coats and toys and couches and plants and all sorts of
things everywhere, but the video is extremely seamless, and it's not generated by a drone. It's actually generated by interpolating between a whole bunch of 2D images, and then interpolating the 3D scene from that. So, what are your impressions, Chris? First of all, the perspective of the drone flight, if you will, that you have while viewing it: it's like the best drone operator in the history of the world. Yeah, it would probably be hard to get one to do that. You're not going to find a real drone operator who could fly that amazingly and get those shots; it's just phenomenal. And the house: for a moment you look at it, and it looks real, but I've noticed it's cluttered yet immaculately clean at the same time. The clutter is cleanly distributed. I wish that when my house was cluttered, it looked as beautiful as this house. It doesn't. But if you weren't listening to the Practical AI podcast telling you to go look at it, and you just stumbled upon it, you'd think it was a drone video. If you didn't have the background, you'd go, oh my God, this is really cool, I wonder what they're doing here. It's indistinguishable from real life for all practical purposes. Yeah, and it's based on 2D images, with these generated interpolations, which maybe gets at something we were talking about before hitting the record button: this whole field of generative AI is sometimes conflated with large language models or ChatGPT, but there's a whole lot going on in generative AI that's not language-related, or even based on language prompts. I mentioned that image I generated for my LinkedIn post; that was still a text prompt into a model that generated an image. But here, what we're seeing is
we've got static 2D images that are input to a model that generates a whole bunch of different perspectives, synthesized into a 3D scene. So this still fits into our current landscape and world of generative AI, but it's not a text-in, text-out or text-in, image-out model. Right, and there's so much coming at people right now. We keep saying that in the five years we've been doing this podcast, we've never had a moment like the last few months, where new things have been coming at people so fast: new terms, new models. People are trying to distinguish them, so I think it's pretty fair that people are trying to make sense of how they relate. There's a lot of overlap between the idea of generative models and the idea of large language models; you have models that are both, and models that are just one. It's a brave new world right now. Every show, we're just trying to figure out what matters, because there's a lot we're not hitting. Yeah, and this side of things, the 3D or video or image-based side, has its own set of transformative use cases popping out. I remember a little while ago there was some technology, I think from Shopify, though others have done this as well, where maybe you have a room in your house and you want to see how you could transform it with new furniture. That of course is a real e-commerce or retail use case for this kind of scene technology. If you think of technology that can take 2D inputs and create these 3D scenes, there are certainly use cases within game development, for example, but also other cases where maybe AI has never impacted the process as much, like
in real estate, for example. How expensive is it to literally have a person come out with specialized camera gear? I know we've had this in the past, where it takes a specialist with special camera gear to capture the 3D walkthrough, essentially the street-view-style walkthrough of your house, and map that onto an actual schematic of the house. Here, imagine someone (maybe I'm now selling my house myself, without a real estate agent) who can take an app, go through the house just taking 2D images, and create this really cool, interactive, fly-around 3D view. That's a powerful, transformative change for a number of different industries. I came across a company called Luma AI in one of the posts about this technology. I don't know exactly how much of the Zip-NeRF work they're using, if any, but certainly some things related to NeRF for taking these 2D images, and they have an app that will create 3D views, which is pretty cool: some of this is hitting actual real users. We keep talking about the fact that we've hit this inflection point where you don't have to be in the AI world for this to have a big impact. It's very easy, looking at the Zip-NeRF video, to imagine walking around with your cell phone and an app; you're just walking around, and the app takes care of whether it's video or still images, uploads it, and produces this amazing result. So it's not your walk-around footage that it's showing; it takes that as raw input, but then produces this super high quality thing. I think this is another case where there's one technology with thousands of use case possibilities, where it just changes everything. Yeah, and I'd be curious to know your reaction to this with respect
to the industrial use cases. I've been thinking that capturing 3D scenes is very important, for example, for simulated environments where you're trying to train an agent, or even for industrial training for humans, a scenario where you want to take someone into an environment that it's physically hard to bring a lot of people into. Yeah, or there could be safety issues and such. Yeah, safety issues. I don't know if that sparks things in your mind; I think in the industrial sense this could have more of a B2B impact than just a consumer app. Sure. A simple thing, and I'm making up the next thing I say, but it's very easy for me to imagine intelligence agencies: if you go back some years to when Osama bin Laden was found, they had various imagery, and with tools like this they might take all the images they're getting from various sources and produce a very photorealistic flyover of certain parts of the compound, and that kind of imagery could be used in a military operation afterwards. I'm making that up, so nobody should take it as a real thing, but it's not hard to imagine. It's also not hard to imagine a lot of factory uses and other industrial settings where you have safety issues or limited-access concerns that you're trying to convey. And there are a lot of mundane things, home-based things, and small business things, as you pointed out with the real estate example earlier. This is just one technology we're talking about so far. Yeah, and I think what you're saying illustrates how this is impacting very large organizations all the way down to small organizations, even sole proprietorships. Yeah. And it's interesting how, if we just take this use case, for example,
these 3D scenes: large-scale organizations whose bread and butter was either the compute associated with rendering videos and 3D scenes, or hardware providers creating specialized 3D equipment, have to be thinking about their whole business model, similar to organizations dealing with language-related problems who are thinking about these things with respect to LLMs. There's a fundamental shift in how their businesses will operate. But at the same time, it provides an opportunity for small to medium businesses to embrace this technology very quickly, make innovative products that can be widely adopted very quickly, and actually compete within an established market. There's an established market for 3D work that has been quite expensive over time in terms of access to the technology, so that whole market is going to change, and I think a lot of the players will be these small to medium-sized businesses. I agree. There's a moment here, kind of ironically, because people are so worried about the impact on human creativity from all these models. On a more positive note, there's the huge opportunity you're alluding to: for people who can connect the dots as things come out and stay on top of it, it's a great equalizer. It will clearly change many, many markets and many, many industries, so there are huge opportunities for those who want to surge ahead at this moment and take advantage of it. The message we tend to see in the media is a little doomy and gloomy on that, but it discounts the fact that change isn't always a bad thing. People are afraid of it, but there are huge, huge opportunities here as well, if people choose to go find
them.

[Applause] [Music]

Well, Chris, there is a new llama in town. I know, Llama 2. Llama 2 basically destroyed all of my feeds and concentration this week when it was released, because to me it is an encouraging thing, but also another transformative step in what we're doing. So, Llama 2: for those who lack the context here, Meta (or Facebook, however you want to refer to them) had released a large language model called LLaMA, which was extremely useful. It was a model you could host yourself, as opposed to, say, OpenAI's models; you could get the weights and host it yourself. But the original LLaMA had a very restrictive licensing and access pattern. Even though you could download the weights from a BitTorrent link or something like that, and those propagated, technically if you got those weights you were still restricted by a license that specifically prevented commercial use cases. Now, with Llama 2, Meta has released the follow-on to LLaMA, and we can talk through some of the differences, what it is, and what went into it. But one of the biggest things, which I think is going to create a huge ripple effect throughout the industry, is that they've released it with a commercial license: as long as, on the day Llama 2 was released, you as a commercial entity didn't have greater than 700 million monthly active users, you can use it for commercial purposes. So maybe my company later on has 700 million monthly active users, which would be great, but probably never. There'll be something past Llama 2 by then, though. Yes, but even if it does, I could still use it, because the condition only applies on the release date. As long as you didn't have greater than 700 million monthly active users on the release date, which was this week, you can use this in your business for commercial use cases, and I think that's going to have a huge
ripple effect downstream. We can talk about the model itself in a second, but maybe I'll pause there to get your reaction, Chris. It made me smile when I heard that, because it's kind of like saying: so long as you don't compete with us at Meta, you can use this commercially. Oh, it's totally true. Like, who is that? That's Snapchat. TikTok. Right, you can think of who this targets. And I guess the way to put it is that it's not totally "open source"; we wouldn't call this open source under the official definition, but it's certainly commercially available to a very wide set of people. Yep. You know, one of the first things I noticed when this came out on their page (and I'm diving into the specifics of the model here) is that we had an episode not too long ago where you were describing the 7 billion parameter threshold in terms of hardware usage, and having been taught that by you, I immediately locked in on the smallest size being 7 billion. I thought, ah, this is what Daniel has taught all of us about that limitation on accessibility and who can run it. It has the 13 billion and 70 billion sizes too, but I definitely picked up on the 7 billion, which I'm assuming goes back to what you were teaching us a few episodes back. Yeah, just to fill in a little on that: the Llama 2 release includes three sizes. Thinking back to the characteristics of large language models that matter as you consider using them: one is the license, which we've already talked about and might revisit in a second; another is size, because that influences both the hardware you need to run it and its ease of deployment. Llama 2 is released in 7 billion parameter,
13 billion parameter, and 70 billion parameter sizes. Then there's also, of course, the training data and how it's fine-tuned or instruction-tuned. Llama 2 is released in these three sizes both as a base large language model and as a chat fine-tuned model: there are the 7, 13, and 70 billion Llama 2 models, and then the 7, 13, and 70 billion Llama 2 Chat models, and we can talk about that fine-tuning in a second. But yes, you're right, Chris: with the 7 billion, I could reasonably pull that into a Colab notebook, maybe with a few tricks, but certainly with the great tooling from Hugging Face, including ways to load it in 4-bit or other quantizations, I can run it on a T4, for example, in Google Colab, without needing a huge cluster. The 70 billion is another threshold: using some of these tricks, I've definitely seen people running the 70 billion parameter model on an A100, again loading it in 4-bit with the quantization tooling. The 70 billion is certainly going to be more difficult to run; it might require multiple GPUs. That's the sizing range for people to have in mind, and how accessible things are. And I'm just curious: if you're a business out there, or a data scientist, looking at these, can you make up a couple of use cases you might target with each, where you might say, oh, I want to go 13 on this, not 7, not 70? Can you imagine something like that? I'm putting you on the spot. Yeah, there are certainly innumerable use cases, but maybe two distinctions people could keep in mind. One is if you want your own private ChatGPT, that is, a very general purpose model where you could do anything.
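The hardware sizing being discussed (a 7B model on a T4 in Colab, a 70B model needing an A100 or multiple GPUs) is mostly back-of-envelope arithmetic: parameter count times bytes per parameter. A minimal sketch, with the caveat that it counts weights only and ignores activations, KV cache, and quantization overhead, so it's a sanity check rather than a sizing guide:

```python
def weight_gb(params_billion, bits):
    # Memory needed for the weights alone: parameters * bits / 8, in GB.
    return params_billion * 1e9 * bits / 8 / 1e9

# Compare fp16 vs 4-bit quantized weight footprints for the Llama 2 sizes.
for size in (7, 13, 70):
    fp16 = weight_gb(size, 16)
    q4 = weight_gb(size, 4)
    print(f"{size}B: fp16 ~{fp16:.1f} GB, 4-bit ~{q4:.1f} GB")
```

On these numbers, a 7B model in 4-bit (roughly 3.5 GB of weights) fits comfortably on a 16 GB T4, while a 70B model even in 4-bit (roughly 35 GB) pushes into A100 or multi-GPU territory, matching the accessibility discussion in the episode.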
With a model like that, for any specific prompt, whatever it is, you're probably going to look towards that higher end, the 70 billion parameter model, for that almost ChatGPT-like performance. But as we've talked about on the show before, most businesses don't need a general purpose model; they need a model to do a thing, a task or a set of tasks. In that case, because this is open and commercially licensed, businesses could take those 7 and 13 billion parameter models and fine-tune them for a task in their business. That also increasingly has amazing tooling around it, again from Hugging Face and others, with the PEFT library for parameter-efficient fine-tuning and the LoRA technique, the low-rank adapter technique, which basically only adapts an existing model rather than retraining a bunch of the original model. This opens up fine-tuning possibilities in these smaller models, where an organization's fine-tune is probably going to outperform any general purpose model out there, and because of the smaller size, you can run it on a reasonable set of hardware that doesn't require buying your own GPU cluster to host it. So that's a range of use cases people could have in mind. I have one more question for you before we abandon this 7 billion to 70 billion, an order of magnitude jump. Why would you have something fairly close to the bottom, at 13 billion parameters? What's the difference between 7 and 13 when the next step is all the way up to 70? What's the rationale, do you think? Yeah, it is interesting. If I'm understanding right from some of the sources I've been reading, there was actually a 30 or 34 billion parameter model that they also had in pre-release and were
tuning. So there was another one that fit in that slot, that gap you're talking about; if you think of MPT, MPT has a 30 billion parameter model that fits in that gap. My understanding (and listeners, please correct me if I'm wrong) is that they actually did test a model of that size and found it didn't pass their safety criteria around potentially harmful or untruthful output, so they decided to hold it back. It's possible that as they instruction-tune and gather human feedback, potentially through more iterations of reinforcement learning from human feedback, there may be a model they release in that parameter range. So that was one thing that happened. There are several things that are unique about this model, or maybe the release, other than the license. They were fairly vague about the data that went into the pre-training. They talked specifically about some very intense data cleaning and filtering they did on public datasets, and it was trained on more data than the original LLaMA, but they're fairly vague on the mix of that data. That may be related to feedback they got on the datasets used for the first LLaMA; I don't know. The technical paper was mostly about the modeling and fine-tuning methodologies they used, which was interesting, and one of the interesting elements of the way they fine-tuned this model was the reward modeling. If you remember, with the GPT family of models, MPT, Falcon, these different models, one of the things often done is this process of reinforcement learning from human feedback, which we covered on a previous episode that we can link in the show notes, but
it involves actually using human preferences to score the output of a model, and then using reinforcement learning to correct the model to better align with those human preferences. They actually used two separate reward models in the fine-tuning of the chat model: one related to helpfulness and the other related to safety. One of the interesting things they discussed in the paper was how those two objectives can sometimes work against each other if you try to optimize both at once, so they separated the reward models used for the chat fine-tuning into these two, one for helpfulness and one for safety, which is quite interesting, I think.

[Music]

So, Chris, maybe just a couple of other things related to Llama, and then I want to hear your feedback on Code Interpreter as well, because we haven't talked about that yet on the show, and maybe Claude 2 if we can get to it. Yeah, we've got to mention Claude 2 as well, because they were both big releases. Yeah. One other note, which I find quite interesting (and I would actually love our previous guest Damian's thoughts on this; he was on our last episode about the legal implications of generative AI): one of the interesting things about the Llama 2 license, in addition to allowing commercial usage, is that there is technically a restriction that says you will not use the Llama Materials (which include the model weights, etc.), or any output or results of the Llama Materials, to improve any other large language model, excluding Llama 2 or derivative works thereof. Essentially, this means that if you're fine-tuning a model off of Llama 2 outputs, you're stuck with Llama 2; Llama 2 is your model, and you're going to stick with it. So you couldn't, for
example, technically take outputs from Llama 2 and fine-tune, say, Dolly 3 billion; that would not be allowed by the license. And of course, that's something people are doing all over the place: taking outputs from GPT-4 and fine-tuning a different model, or taking outputs from a large model (like, maybe now, Llama 2 70 billion) and fine-tuning a smaller model on a certain type of prompt. So this restricts the family of models you're allowed to do that with, which is the first time I've seen that, and I think it's kind of interesting. Yes, it strikes me as another Mark Zuckerberg anti-competitiveness move, which he's fairly famous for, even before this. Yeah, and how could you enforce such a thing? Yeah, that was my next question to you: is there any possible way you could conceive of to actually detect that, from an enforceability standpoint? I have no idea. I don't either. It's a license clause, and it will concern the lawyers, but it's hard to imagine. Going back to our conversation last week, once you have output, and that output is input to more output, there's a point where it becomes very, very difficult to know what the sourcing really was. And the fine-tunes are already appearing off of Llama 2. The most notable is probably FreeWilly, from Stability AI, a fine-tune of the largest 70 billion model, but others are coming out as well, and I think we're about to see a huge explosion of these Llama 2 based models for a whole variety of purposes. Who knows how they will fit into that licensing restriction, or how open people will be about it, but it's about to start; the fine-tunes are already coming. Yeah, well, to your point earlier, they weren't terribly clear about the data they were
sourcing from their own standpoint. Yeah, and I find it interesting, a little ironic, a bit of a double standard maybe. Yeah, a little bit of a double standard right there, in terms of: we're not going to tell you everything about how we're doing input, but by the way, you'd better not use our output for other models. So yeah, a little interesting. Do you think there's any risk of a walled-garden kind of concept happening in large language models, if others were to follow this lead on anti-competitiveness? Yeah, it will be interesting. I think it is a notable trend that the first Llama from Meta was not open for commercial use at all, and now they're opening it up for commercial purposes. And maybe there's a separate trend that will happen with some of these use-based restrictions that people are importing into their licenses, and how useful those things are over time. That may shift, and we'll see those things die off, or maybe, if they're enforced and there's precedent, we'll see something go the other way. I'm not sure. But speaking of models whose output you might get and use to train other models, that is, these large-scale, proprietary, closed models from people like OpenAI and Anthropic and others, we've got a couple of things that we haven't talked about on the show yet which people should probably have on their radar. One of those is Claude 2. What do you think about Claude 2 from Anthropic? Yeah, I've been playing around with it a lot in the last week, and I have a set of things that I try over and over again; they're kind of my standard tasks as new models come out. Some of them are coding, and some of them are content generation, which are the two big things that I use most often. It was interesting. The input size for Claude 2 is much larger than the others, like much, much larger. Much, much, much larger. So 100,000 tokens. Yeah, and so it's had me kind of
change the way I'm approaching it, in that, by contrast, with ChatGPT you're trying to figure out, with the limits that you have both on input and output, how do you prompt-engineer your way to get where you're trying to go, which has become this whole skill set we've been talking about in recent months. And yet Claude 2 almost wipes that out a little bit, in some ways, not in all ways, in that you can hit it with a much larger input space, and so it's changing how I'm thinking about getting to the output that I want. And the output is a bit different; it's not the same. I'm getting different outputs from all the models, so yeah, they're not all the same, definitely. I think my biggest thing is, with all these new releases, I'm trying to figure out how do I use each one. I'm trying to develop my own strategy on when do I go to ChatGPT by default, like when's that the right thing, and that's changing, as we'll talk about, with things like plugins and stuff; that's evolving. But then Claude 2 comes out, and then you have, on the open source side, as we just talked about, Llama 2. So trying to understand all the tools in the toolbox in relation to each other has been interesting. So Claude 2, I'm really focused right now primarily on large content output; that's kind of where I've landed on that. And the 100K context length of Claude 2 is something I find really compelling as well. There was also a significant paper that came out that caused a lot of waves in terms of context length and thinking about that, which showed that as you increase context length, you lose any significance of the middle bit of that context. So the beginning and end are more important in terms of what makes the output of the model quality or not, in terms of how you would measure that. And so we'll link to that paper in the show notes as well. But I've tried
some things. I mean, I don't know exactly all of the details; again, Claude is one of these closed models, so I don't know all the details of how they're doing things, and because it's sitting behind an API, it's hard to know how those things evolve over time. But for example, one of the things I did with Claude 2 is I just took one of our complete podcast transcripts, a full episode, so 45 minutes of audio transcript. I took episode 225, which I really enjoyed, talking a lot about the things that I'm working on right now with Prediction Guard, and just asked it to give me a summary of the main takeaways, and pasted in the whole thing. And it's a fairly good, comprehensive set of takeaways: like, many companies ban usage of certain LLMs, blah blah blah; Prediction Guard is trying to provide easy access, structuring, validation, compliance features for LLMs, making LLM usage easier, blah blah blah. It gives these great takeaways. And then I asked it, hey, suggest a few future episodes that we could do that maybe cover related topics, but things that weren't covered in this episode. Pretty good. Some of them are kind of generic, right? A look at the current state of AI agents and automation; how close are we to no-code AI app generation, blah blah blah. So all of that, off of this large context of the transcript input, was interesting. I'm curious, and I'm going to put you on the spot, as someone who's working on your own product, and I know this is not a Prediction Guard episode, but I'm asking on my own behalf and on behalf of the listener: as someone who is looking at these different models, how do you think of those different models? How do you structure them in your mind in terms of what you're offering? You've been evolving rapidly over the last few months, and I'm always curious to see where your head's at on this now, as you're looking at them. Yeah, I think the things consistently that I'm
seeing are... I made a post on LinkedIn about this as well. Even in my own LLM-based applications that I'm building, having access to multiple models rather than a single model I think is a really nice usage pattern. The easier we can make it... and there are other people that are doing this as well; in Prediction Guard you can query a whole bunch of models at the same time, concurrently, and there are other systems that will let you look at that output as well, so nat.dev and some of the tooling that swyx is doing; we had a collaboration with him and the Latent Space podcast. So the more you can tie these things together and look at the output, or automatically analyze the output, of multiple models at the same time, I think that's really useful, because it's hard to generally evaluate these models until you start evaluating them for your use case and building intuition about them for your own use case. So I think the pitfall that people maybe fall into is saying, oh, I'm going to use this model, before they've even tested it for their use case, right? Try creating a set of evaluation examples for your own use case, and then try out a bunch of different models against that. And also try out the things that are becoming more standard operating procedures for building LLM applications, like looking at the consistency of outputs, running a post-generation validity or factuality check on the output (so checking a language model with a language model), doing input filtering, and all these sorts of more engineering-related things. So those are some of the things that I'm seeing. But having access to a bunch of models at the same time, I think, is something that can really boost your productivity. I appreciate that. And to our listeners, we're not making it a Prediction Guard show or episode, but as a co-host, Daniel's excursion through this in his professional life has made him, in
my view, one of the world's true experts in how to look at all these together, and since we have the benefit of him co-hosting the podcast, I'm going to continue to take advantage of that expertise for all of us. Thanks, Chris. So sorry about that, Daniel, sorry for putting you on the spot. Yeah, no worries. I think the other thing maybe to highlight with Claude 2, and something that you were talking about in chat before we jumped into this episode, was Claude 2, or maybe Anthropic and their offerings, versus OpenAI. How do we understand that? How do we categorize these things? I think one of the interesting things with Claude 2... we've seen both Anthropic, with their Claude models, and OpenAI, with their GPT models, increase context size over time. The GPT models not quite as far as Claude, but both have increased. They've also both added in some of this functionality, which I think is very interesting. Claude 2, I think, was first, if I'm not wrong, with the ability to add in your own data. So in Claude 2 there's a little attachment button, and you can upload PDFs or text files or CSVs and have that inserted into the context of your prompt, which is, of course, extremely powerful. We've talked about adding external data into generative models and grounding models in the past; it's very powerful. Now, OpenAI is doing this in a slightly different way, and I think this is something worth calling out on the podcast: with their new Code Interpreter beta feature within ChatGPT, you can upload data, but it's processed through the code interpreter in a different way than what Claude is doing. We all know that ChatGPT and GPT models can generate really good code, and specifically good Python code. And so what OpenAI has done for their data-processing agent within ChatGPT is said, well, let's just have our model generate Python code, then we'll hook up the ChatGPT interface to a Python interpreter and just go ahead and execute that code for you
over your data, and then give you the output. So this is maybe a distinction that people can have in their mind. With Claude 2, you can upload this huge amount of context, you can upload files inserted into the prompt; as far as I know, they're not running any kind of code-interpreter-type thing under the hood. ChatGPT might not be inserting all of that into the prompt, but they're actually saying, well, what if we decompose what you're wanting me to do with this external data into something that can be executed by a sort of agent-type workflow: you upload your data and ask me to do some analysis over it, and I'm going to generate some code. So the language model generates some code, and then that code is actually executed in the background, returns a result, which is then fed back through a model to give you generated output back in the interface. So it's actually a multi-stage thing happening in Code Interpreter in OpenAI. It effectively produces a no-code solution, where you get an output and you're just kind of skipping the whole thing, instead of using the language model to generate your own code and to be your code assist and all that, and then you're still doing it; it's skipping that whole step right there. Yeah, and I can give an example I actually ran prior to the show. I had Claude and the OpenAI Code Interpreter side by side, open, and I uploaded a file with a bunch of Yoruba (which is a language in Africa) transcriptions out of audio, which are from the Bible TTS project that we worked with Coqui and Masakhane on. And so I uploaded this file, which includes this Yoruba text in a CSV format. OpenAI said, great, you've uploaded this file; let's start by loading and examining the contents. And then it has this sort of "show work" button, and you can see the actual code that it generated, which is pandas code to import the CSV and then output some examples. And so you can expand that and actually see
the code that it ran under the hood, and the conclusions that the agent came to. Then I asked it, okay, plot the distribution of the transcript lengths; are there any anomalies? And then again it says, hey, show work, and you can see it's importing matplotlib, it's taking in the CSV, it's actually creating the plot, and it actually generates an image out of the transcripts. It said, I didn't find any anomalies; they're all kind of within the same distribution; there aren't any anomalies. Then I asked it, can you translate all the Yoruba to English? And that's where it ended up stopping, because it said, no, I'm not good at doing that. And Claude actually stopped there as well, and said, no, I'm not going to do that. I also uploaded the Yoruba alignments to Claude, and it said, hey, sure, let me analyze these transcripts, and it just output some general takeaways, like, there are 50 audio links and the transcript links; there's no Python code there; it just gave me some takeaways, right? And then I said, are there any anomalies? And it said, I checked and I can't find any. And could you translate it? And it said, unfortunately, I can't. So it's all still a chat-based thing. So you can see different approaches to this complicated workflow of having almost an assistant agent executing code for you, versus putting more context in the language model and having it reason over that context. So they're almost getting their own strengths at different types of approaches to problems; would that be fair? Yeah. So that's another way of thinking about it: as you start understanding how the different large language models approach a problem, and the tooling that might be better or worse for a given use case, that also will help you pick which way you want to go, in addition to maybe just using multiple models, as you talked about earlier. Yeah, exactly. And there's so much to dive into on all these topics that we've covered today. I'm going to make sure that we include some really
good learning resources for people in the show notes, so make sure and click on some of those. There's a guide from Datagen on the neural radiance field stuff, the NeRF stuff, so you can learn a bit more about that. There's a Hugging Face post and a Philipp Schmid post on Llama 2 that are both really practical: how do you run it, how do you fine-tune it, what does it mean. And then there's a nice post from One Useful Thing, Ethan Mollick's blog or newsletter, about Code Interpreter and how to get it set up and some things to try. So we'll link that in our show notes, and I think people should dig in and get hands-on with this stuff. Things are updating quickly, and the only way to really get that intuition about things is to dive in and get hands-on. It is the most interesting moment we've had in the AI revolution of recent years, and just so much cool stuff right now. Anyway, thank you for taking us through all the understanding and explanation of these things. Yeah, definitely, it's a good time. Hopefully people enjoy the rest of their week, and maybe go see Oppenheimer or Barbie, depending on which of those is most interesting to you. But we'll see you next time. Chris, see you later. [Music] Thanks. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already. And if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our Beat Freak in Residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music] |
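The Code Interpreter workflow Daniel describes (the model writes code, a background interpreter executes it over the uploaded file, and the structured result is fed back into the chat) can be sketched roughly as below. Everything here is an illustrative assumption: `fake_llm` stands in for the real model API, and the "generated" snippet uses only the Python standard library rather than the pandas code ChatGPT typically emits.

```python
import csv
import io
import statistics

# Stage 1 (hypothetical): the "language model" turns a user request into
# Python source. In the real product this would be a model API call.
def fake_llm(request: str) -> str:
    return (
        "rows = list(csv.DictReader(io.StringIO(uploaded_file)))\n"
        "lengths = [len(r['transcript']) for r in rows]\n"
        "result = {'n': len(rows), 'mean_len': statistics.mean(lengths)}\n"
    )

# Stage 2: run the generated code in a restricted namespace that exposes
# only the uploaded data and the modules the snippet may use. (The real
# system sandboxes this step far more heavily than exec() does.)
def run_generated(code: str, uploaded_file: str) -> dict:
    ns = {"csv": csv, "io": io, "statistics": statistics,
          "uploaded_file": uploaded_file}
    exec(code, ns)
    return ns["result"]

# Stage 3: feed the structured result back through the "model" to phrase a
# reply for the chat interface (here, plain string formatting).
def answer(request: str, uploaded_file: str) -> str:
    result = run_generated(fake_llm(request), uploaded_file)
    return (f"I found {result['n']} rows; "
            f"mean transcript length {result['mean_len']:.1f} chars.")

uploaded = "transcript\nhello world\nbawo ni\n"
print(answer("summarize the transcript lengths", uploaded))
```

The point of the sketch is the multi-stage loop itself: the user never sees the intermediate code unless they click "show work", which corresponds to inspecting the string returned by `fake_llm` here.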
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Legal consequences of generated content | As a technologist, coder, and lawyer, few people are better equipped to discuss the legal and practical consequences of generative AI than Damien Riehl. He demonstrated this a couple years ago by generating, writing to disk, and then releasing every possible musical melody. Damien joins us to answer our many questions about generated content, copyright, dataset licensing/usage, and the future of knowledge work.
Leave us a comment (https://changelog.com/practicalai/232/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Damien Riehl – Mastodon (https://law.builders/@damienriehl) , Twitter (https://twitter.com/damienriehl) , LinkedIn (https://www.linkedin.com/in/damienriehl)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Talk - Legal and Practical Consequences of Generative AI (LLMs like GPT, Bard, PaLM, LLaMA, Alpaca, Codex) (https://youtu.be/vy_569-Tmt8)
• Talk - Why All Melodies Should Be Free for Musicians to Use | Damien Riehl | TED (https://youtu.be/rjpTBHjeZ_0)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-232.md) | 8 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist and founder of Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well, doing well. I'm really excited about today, because there are so many questions you and I have brought up on the show without the ability to answer them, and I know we might get some answers today. Yes, and actually this one, not only will it be super practical and interesting, but it's also a tip from one of our listeners, who suggested this guest. So we're really excited to have with us Damien Riehl, who is a lawyer and technologist with experience in litigation, digital forensics, and software development. So welcome, Damien. Thank you so much for having me; I'm thrilled to be here. Yeah, I feel very selfish this episode, because I just have, like, a million legal implications and copyright questions related to generative AI, large language models, all sorts of things. But before we get into some of those specifics, I know over the course of this show we have commented on various things that have come about, like GDPR, and then the California data privacy stuff, and now we have the EU AI Act, and all of this sort of regulation stuff. And then you've got other things on the other side, on the litigation front, where
companies are, you know, getting sued for code generation based on maybe questionable training of models, and other things. So maybe, before we get in, from someone who is an expert in this area and thinking about it all the time: how do you view where we're at in relation to AI technology and regulation, and the legal side of things? How are those things catching up to one another, or outpacing one another, and where are we at now, as opposed to maybe a year ago? What's changed? Sure. And maybe before I answer that, briefly: I litigated for about 20 years, so I was a litigator for about 20 years; I did tech litigation. So I'm coming at this from a perspective as a lawyer, but I've also been a coder since 1985, so I have the law-plus-tech background. So, for your listeners' benefit, I'm not just a stuffed shirt that doesn't know what he's talking about; I can walk the tech walk and talk the legal talk, if you will. So really, as far as regulation, having litigated since 2002, I've seen ways that the EU and the United States have tried to regulate technology, and of course they've had various degrees of failure, I would say, largely because those of us in tech know exactly how lots of technology works, but sadly the Congresspeople and the regulators do not. The law is by nature slow, and trying to get up to speed on a fast-moving area such as AI is very difficult. So I would say that, if past is prologue, I don't anticipate many good things coming out of regulation of AI in the near future. Some of these things are related to this generative wave of AI, where people are generating a lot of content with AI. I know that you also have a background... you mentioned your coding background, but you also have a background with generative technologies, maybe not with large language models and other things, but I know you have a very
interesting story of some generative things that you did with music. Could you describe some of that? Yeah, absolutely. So, my current state, with my job that pays me money, that is with vLex, where I'm doing lots with large language models right now. We have a billion legal documents that we're running large language models across, and doing embeddings, to be able to do outputs, for example a legal research memorandum, and eventually be able to provide motions, briefs, pleadings, that sort of thing. So that's my job that pays me money. As for the job that doesn't pay me money at all, which you referenced, that is my All the Music project. This is a project that I started with my friend Noah Rubin. One thing your listeners might find interesting is that around 2018 I did cybersecurity; the biggest thing I did was that Facebook hired me and my company to investigate Cambridge Analytica. So I spent a year of my life on Facebook's campus with Facebook's data scientists, and my former FBI, CIA, and NSA people that worked with me, to figure out how bad guys use Facebook data. I did that for about 50-some weeks in a row; the stuff we would do on Monday would make The New York Times and The Times of London by Friday. So at the end of a 14-hour day on the Facebook campus, I retreated to my hotel with Noah Rubin, my friend, and we were in the lounge, and over a beer I said, you know, Noah, how we can brute-force passwords by going "aaa, aab, aac"... I said, what if we could do that with music, where we would go until we mathematically exhausted every melody that's ever been and every melody that ever can be? And he said "F yeah," but he didn't say "F." So within a few hours he had a prototype where he cranked out 3,000 melodies. To date, we've now cranked out 471 billion melodies, with a B, mathematically exhausting every melody that's ever been and ever can be. We've written all those to disk. Once they're written to disk, they're
copyrighted automatically, so we've copyrighted 471 billion melodies, and then we placed everything in the public domain, to be able to protect "you stole my melody" lawsuit defendants. And so the idea is that before my talk in 2019, every defendant in one of those lawsuits had lost. After my talk, which has been seen two million times, every defendant has used largely my arguments and has won. And my arguments were largely that maybe, if a machine cranks this thing out at 300,000 melodies per second, maybe we shouldn't give one person a monopoly, which is a copyright, a monopoly of life of the author plus 70 years, for what the machine cranked out in a millisecond. So that's largely my All the Music project, which goes to... it is, in a sense, generative AI. It's brute-force generative AI, but it's generative AI. But the real question is whether the output of that is copyrightable. We've essentially carpet-bombed the entirety of every melody that's ever been, and if I were a megalomaniac, I would sue everybody, right? But I'm not; I put them all in the public domain. But if machine-generated works are copyrightable, these are the bad things that can happen. I love that story. I just want to say that it's pretty fun. And because it's been seen so many times now, two million, I've been able to make some good friends. For example, the former chief economist of Spotify, I'm now friends with, and the guy who was responsible for the first commercial MP3 to be downloaded, Jim Griffin, I'm friends with him. So anyway, it's opened a lot of doors for me. That's awesome. And one thing that comes to my mind as you're talking about that, because I'm also a musician: I think a lot of people would think of, oh, these melodies or chord progressions, or however you want to frame it, there's a very human element to that that involves creativity. And I think this would maybe be extended, if we think
about knowledge work more generally, whether that's you being a lawyer and writing briefs, or us being programmers and writing code, or marketers being marketers and writing copy. Right now these generative models can generate a lot of those things in a very compelling, and even, I think people would perceive it as, a very creative way, however you think about that creativity and coherence. So in your own work, maybe on the lawyer side or the coding side: how is that project, and maybe some of the work you do day-to-day with large language models, shaping how you think about this sort of knowledge work, and the output of humans versus the output of models? I would say there are two aspects to this, and I think that the word "creative" is ambiguous, as all words are, or most words are. When I generated 471 billion melodies, I was creating those melodies, but it was by no means "creative," right? In that way, those are just mathematically exhausting everything that's ever been. So that is a very simple version of what large language models do, right? Just saying, what is statistically the next sentence? And so this really gets to the heart of what we think human creativity is, right? Is it "creative" as in brute-forcing, or is it truly lightning-in-a-bottle creativity that we want to protect with intellectual property laws and other things like that? I think what we're learning from my project, and from the large language models, is that maybe human creativity ain't as special as we think it is. As an example: on a Tuesday, a jury found that Katy Perry had violated the copyright of a melody that sounded like this: [sings]. That was on a Tuesday. In my talk on a Saturday, I said that that particular melody shows up in my data set 8,128 times. So Katy Perry got dinged for $2.8 million over something that I had thousands of times, just through brute force. So was that melody creative? No, it
was just brute force. After my talk was made public, the judge actually went back and reversed the jury verdict, saying that the melody was so unoriginal it has to be uncopyrightable, essentially what I was arguing in my TED Talk. I think, going to the heart of what human creativity is: it's good that we have large language models and projects like mine to get to the heart of it; there are some things that we should protect, and there are some things that are just unprotectable, because they're unoriginal. You're putting an obstacle against what is otherwise maybe the weaponization of IP; is that a fair way of putting it? 100%. We're using it as a shield, not a sword, right? Which I like, because, as a non-attorney, having seen a lot of IP concerns out there in business, sometimes you're just kind of like, there's not much there. So, absolutely. And so I did that with the copyright side. My friend Mike Bommarito, who was one of the guys who, you may have heard, beat the bar exam: they used GPT-4 to beat 90% of humans on the bar exam, and one of those guys was Mike Bommarito. Mike approached me and said, I love what you did with copyrights; wouldn't it be great to do that for patents? And so right now, where I was doing the All the Music project, we're now doing the All the Patents project. What that project is going to be doing is taking all the patents that have ever been filed, taking each of the claims for each of those patents, putting those claims together in vector space and clustering them that way, and then generating every possible combination of all those claims in all those existing patents. So if anyone in the future tries to recombine any existing claim into a new thing, they can point to our thing as prior art, to be able to say, no, no, no, Bommarito, Riehl, and maybe Rubin, they already did that in 2023; you can't do that again, because they did that as prior art. As both a coder and a lawyer, who I'm assuming some of
these things you're generating are generated, but I'm also assuming in your day-to-day work and in your day-to-day coding there are portions of what you're doing still that are not completely generated, or at least that you're editing heavily: how has this type of work influenced how you think about your own job moving into the future, working alongside these models, or at least in an environment where these models exist? So I will say, and anyone who works with me will agree, that I am a coder, but I'm a crappy coder. I'm probably one of the worst coders you're ever going to meet. So a lot of my work with large language models is in the textual area rather than the code area. But in the textual area, I'll give you an anecdote that answers your question. I was reached out to by the editor of a large legal magazine to say, hey, Damien, I want you to write the cover story on GPT. I said, how long do you want it to be? He said, about 17 double-spaced pages. And I was like, man, I don't have time, because my rule of thumb is one hour per double-spaced page, so that's 17 hours that I just don't have. But then I realized, oh wait, the topic is GPT. So what I did is I created an outline of headings and subheadings, about two pages' worth, and I said to GPT, for each of the bullet points, give me four sentences, essentially a paragraph for each of the bullet points, and it cranked out the 15 pages' worth. And then I spent the next three hours editing, moving, adding, working with the text, not accepting the 15 pages outright, but working with the text and regenerating, and then I got it out the door three hours later. And the editor was like, oh, this is perfect, I don't need any edits, let's get it out the door. So that took a 17-hour project down to three hours. That's thing number one: this is not just accepting the machine output as is, but it's really us using the output as an assistant, much
like Copilot on GitHub is used as an assistant, right? These are essentially pair coding, if you will, co-authoring with the machine. I was doing a talk with the US Copyright Office assistant general counsel, and he was talking about the regulations that they're putting out, saying that if machine-generated, therefore uncopyrightable; if human-generated, therefore copyrightable; and if machine-generated, you have to be able to disclose what aspects of the thing are machine-generated. So think about if I were to file a copyright registration for my article that I just drafted: to what extent was that machine-generated and to what extent was that human-generated? Because I spent three hours adding and editing. If the machine generated, in a sentence, three of the words that were unmolested by me, and the other 20 words in the sentence were actually mine, do I have to disclose which three words were machine-generated versus the ones that I edited? And that's with text. How would I do that with music, right? If I said to the machine, hey, generate a melody, and generate a chord structure, and generate lyrics, and then I spent from 1:00 a.m. to 3:00 a.m.
rearranging all those things and then getting it out the door, if the Copyright Office asked what aspects of that were human-generated and what machine-generated, I would honestly say I have no freaking idea, because there's no track changes in the DAW that I make my music on, right? There's no track changes; I didn't track changes on the lyrics that I messed around with. So I think this idea of trying to bifurcate what is machine-created and what is human-created is a fool's errand, and we're really going to have to reckon with that. Well, Damien, I have some very selfish questions that I've been pondering in my own life as I've encountered them, and hopefully this won't seem like popcorn questions, because I think it's very related to what you're talking about, but I think practical developers are hitting these snags as they're developing apps with this technology that are entering a sort of gray zone, so I'd love to get your thoughts on a few of these. One example that I can think of: a lot of people are building chat interfaces. It's very popular now to say, oh, build a chat interface over a website, or build a chat interface over documents, or build a chat interface over data, something like that. People are doing this very frequently; it's very useful. My question is, that chat interface, or those messages, are generated content, right? And that's what the user is seeing. But I see this huge gray area. Let's say that I want to chat with Harry Potter, right? I take the book of Harry Potter and I put it in my vector database, and someone asks a question of Harry Potter, and I go retrieve the content. I'm just injecting that into a prompt, right? And I'm sending the prompt, with I guess the book content, into a model; the model is outputting some generated answer, and I'm sending that to the user. Now, I'm no lawyer, but I'm assuming I can't sell a new copy of Harry Potter unless I have certain rights and
agreements in place. But what if I put this interface up on the internet and I start selling access to it? So I guess, with that kind of very real-world scenario, what sorts of elements do I need to consider there? What's known to have a good answer, what's gray area, and what's maybe being litigated right now? I'm going to answer your question, and it's going to be a fun walk, so take a walk with me. Okay, perfect. So this walk is going to begin with the Google Books project. You might remember that Google Books ingested every book that ever existed, perhaps also including the Harry Potter books. A bunch of publishers said, hey, no fair, you can't ingest all these things, because every one of these books is copyrighted. They sued, and then the district court and the appellate court, the Second Circuit Court of Appeals, said yes, all those things are copyrighted, but Google's use of them is actually fair use. And this particular type of fair use is called transformative use: the use Google was making was transformative relative to the original purpose of the book. The purpose of a book is to read it, enjoy it, etc. Google's purpose was to index it, to create a word index, to be able to then search all of the books and to provide the end user with a snippet, say maybe a page or two. So because you couldn't use Google Books to essentially replicate the book, but instead were using it to search, that is a transformative use, therefore fair use; that is not an infringement of copyright. So that was back in the day. Now think about how large language models work. A large language model, on the input side, is ingesting, say, the entirety of Harry Potter, but really what it's doing is placing those words in vector space, right? It's saying that these words are similar to those words in vector space,
and once that happens, it largely jettisons the thing, right? In copyright law there is the idea-expression dichotomy. Ideas are uncopyrightable. So if I have the idea of a man in a black hat fighting a man in a white hat over a woman who's tied to a railroad track, those are ideas, which are uncopyrightable; you've seen lots of movies like that. But the expression of the idea, any particular movie that has that in there, that is copyrightable. So ideas are uncopyrightable; expressions of the ideas are copyrightable. If you apply that to what's happening when the large language model ingests all the books, it's essentially putting all the words into vector space. So it's saying, you know, this is a Bob Dylan-ism, or this is an Ernest Hemingway-ism, or this is a Harry Potter-ism; those are kinds of ideas. And so really it's taking the expressions and effectively jettisoning those in favor of the ideas. So that's on the input side. And then on the output side, one could imagine that it's taking those ideas, Bob Dylan-isms, Ernest Hemingway-isms, and outputting them in a new expression. And if you believe the Copyright Office, machine-generated output is similarly uncopyrightable. So we're kind of faced with a system whose inputs are ideas, uncopyrightable, and whose outputs are expressions of ideas created by machines, similarly uncopyrightable. To your particular point, this has not been tested in court, so a judge, who by the way may not know what he or she is talking about, might rule against what I'm about to say right now. But at least I would make the really good argument that the ingestion of the thing is extracting the ideas from the book, and that is a transformative use, because if you think about Google Books, they were permitted three pages or so verbatim from these books; that's way more bulk than the vector space, which is not reproducing any expression, right? It's merely taking the ideas. So if
Google Books is permissible, almost certainly the large language model should also be. And that's really what is being argued right now in the cases that are happening: the GitHub Copilot case happening on the West Coast, and then the Stable Diffusion case in Delaware, and others like them. If I were the lawyers in those, I would be arguing exactly what I just argued right now. So, to your specific use case: you are kind of interrogating this copyrighted work, but I would make the argument, if I were representing you, that this would be a transformative use. Just like you as a human could have read the book and could provide output based on that book, in the same way a machine should be able to read that book and provide output that is just taking the ideas of the thing, not necessarily the expressions of the ideas. So with that new expression coming out that you're describing, and it being, assuming that the Copyright Office view stands, not copyrightable: that's a massively different way of producing content, from before till now and into the future. What does that mean for business and the world at large? Considering that's a major, major change in how everything works, how do you see the future rolling out if that were to stand? First, I think it should stand, because otherwise my All the Music project would essentially carpet-bomb all the music, and we would just have machines creating new expressions that would essentially make human expression obsolete. So number one, I think it should stand, because if not, we are in a world of hurt with copyright. Thing number two is, you're right that never before in human history have we had a machine that creates new things. We've had the printing press, where we take my ideas and I can replicate them a whole bunch of times. We've had the digital revolution in the '70s, '80s, '90s, 2000s, where now we can replicate
human stuff. But never before have we had a way for the machine itself to make new expressions of ideas. And so, as a result of that, some of the smartest people I know that are thinking about this have said that the web, as far as large language model training is concerned, probably stops right around November of 2022, because anything after that, you're going to have a whole bunch of machine-generated content, essentially large-language-model-created things. If you know about the tech, and I assume your audience does: because it's statistically likely, it is smooth. Humans are jagged in the way that they write text, and that's what DetectGPT and others detect: they see the jaggedness of humanness. Machine-generated content is smooth, not jagged. Could you define that real quick, what jagged versus smooth means in this context? Sure, yeah. Jagged means random; smooth means statistically almost deterministic. The idea is that we as humans say random things, and we put things in a way that maybe hasn't been said before, whereas a machine, at least an LLM, is going to say, what's the most statistically likely next word, and therefore that is smoother than our jagged randomness. The idea is that as the machines are creating this smooth, deterministic, statistically likely next word, and as new large language models ingest that smooth text, it's going to further smooth the corpus, and we're going to miss all of the human-created jaggedness that was going in there. So some of the smartest people I know are saying that the web as it stood in November of '22 is maybe the last time we're going to have a lot of human-created stuff that is truly jagged, because from here on out we're just going to have machine-created stuff that is smooth. And one last thing I would add is that one of the last bastions of human-created jaggedness that we have is the
courts. Because it turns out that people have talked about us being in a post-truth era and a post-fact era, yet there is a person that's literally called a fact finder, and that's a judge. They spend years trying to find facts, and then they write things called judicial opinions that contain found facts that have been battled over for years in the courts. So one of the last places where we can find this jagged, almost-certain-to-be-human-written thing that is actually based in fact, in our post-fact world, may be judicial opinions. And my employer, vLex, has about a billion of those across the world. So maybe as we think about what new corpuses the large language models can train on that are truly jagged, that are full of factual things, and not the unvalidated bull that's on the internet: this is truly validated, human-created content that is high quality, and that might be a source to ingest. Maybe this gets a little bit back to your article example, where you interacted with the GPT output to write an article, and at a certain point it kind of morphs into its own thing: what portion of it is machine-generated, and what portion isn't? I know this is also happening; I've seen things like people generating, for example, adult coloring books using AI models and posting those in an almost automated way to Amazon, and then someone can literally order a book. I think of other examples too, maybe: hey, this book was written a long time ago, and so the wording is really difficult; what if I used a large language model to rephrase it in modern English, and then I just post that and start selling it? So how long will this be debated, in terms of the copyright around it, and what should be on people's minds as they're creating this kind of content that they actually want to commercialize? Maybe that's a more practical question. Sure. If I were to create this machine-created coloring book, for example,
then under the US Copyright Office's position today, that entirely machine-created thing is uncopyrightable. This really goes to the heart of what copyright is in the first place. All copyright is is a monopoly; it is a government-sanctioned monopoly giving you, the author, a monopoly of the life of the author plus 70 years on the thing you created. But in exchange for that monopoly, the government says this has to be original; that is, it has to be your creative work. And if it is truly original and truly creative, we will give you that monopoly: the life of the author plus 70 years. So really the question is, is there anything copyrightable in the machine-generated work? Well, probably not, because there was no human creativity in that thing. So that's thing number one. But then let's look at another scenario: what if somebody else did a human-created coloring book that was identical to what the machine had done? Does that turn it from unoriginal, therefore uncopyrightable, with the machine-created one, to all of a sudden, if a human does it, it is copyrightable, even though they're identical? Yeah, essentially: because the machine output wasn't copyrightable, and you're recreating it, even though there might be no creativity on the human's part, because they're literally looking at the output of the uncopyrightable machine, but they can steal the idea without the creativity involved and then copyright it. Am I understanding you correctly? That's right. And the courts have dealt with this for a few hundred years. You can imagine: Shakespeare is in the public domain; it's not been in copyright for hundreds of years. You can build atop Shakespeare, say with West Side Story, which was based on Romeo and Juliet, right? The writers of West Side Story don't get copyright in the underlying story of Romeo and Juliet, but they do get copyright in whatever they put atop Romeo and Juliet. So the
fact that it's, you know, New York City and all of those things: they just get what's called thin copyright on top of the public domain thing. So you could imagine, going back to our coloring book example, if someone then copies that and adds a little human touch to it, they don't get the underlying copyrighted thing, because that's public domain; they only get what they've added atop the machine-created thing. And really the question is, how much can you really add on top that is creative enough to make it worthwhile? And I would say, if it's just another line here or there, that's not sufficiently original or creative to add copyrightability. I do want to get back to another couple of themes, but maybe one more selfish question, which is less related to, I guess, the inputs and outputs, and more related to the models themselves, what they're trained on, and how they're released. Of course, we're seeing a lot of different approaches to how models are being released, in the sense of: well, is a model code, is it data, do I use Creative Commons, or do I use Apache 2? Also the data that was used in the training: maybe that's a mix of copyrightable material, or maybe it's not even known; maybe a model shows up on Hugging Face and I don't know what the mix of the data set was that was used in training. As you've been working with these large language models, advising around this, and thinking about these concepts, how do you see that side of it: training data, fine-tuning data, model release? What's on your mind as you look forward to this next season, which I assume will continue? We just had an episode, I think it was titled "The Cambrian explosion of models"; there are so many being released, and this will continue. How do you see that side of things developing over the next season that we're entering? I think what you're really asking about is the provenance of everything that comes
downstream: what is the provenance of the input, and what is the provenance of, essentially, being able to manipulate that input to create a thing that's called a model. Yeah, and of course the output of the model could become new inputs that go into new models, right? Yes, a cyclical sort of thing. That's right, it's like a snake eating its own tail. And so within the law, in criminal law, there's a thing called the fruit of the poisonous tree. The first act might be innocuous, but then it leads to a chain reaction, a bunch of dominoes that leads to the end. So the fruit of the poisonous tree is a legal concept that you could imagine applies similarly to the questions you asked. If there is input data that maybe has questionable licensing, so for example LLaMA, right: it was released as open source, but only for academic purposes. You could imagine that if someone were to create a model for commercial purposes that is based on that, ostensibly licensed for academic purposes, that is maybe a fruit-of-the-poisonous-tree question: is that model now tainted, because the data was ingested in opposition to the license? Yeah, I've wanted to use some of these LLaMA-based models quite recently and just haven't, because it makes me ask a lot of questions. So am I right with that hesitation, or is it yet to be determined, but you would make certain assumptions, or what do you think? Yeah, so I should clarify that I'm a lawyer, but I'm not your lawyer, so nothing I'm saying is legal advice. Okay, yes, correct. So I would say that yes, anytime you are ingesting items that are licensed, and then you are using them in a way that is maybe against that license, or not permitted by that license, I think anyone should be worried when that happens, speaking generally. I would also say that, yes, anyone who does that should be worried; also, proving such things
is tricky, right? Because there is the law, and then there's what can be proved by a preponderance of the evidence in a court of law. As the dominoes fall and the snake keeps eating its tail, the provenance of what data you actually used and where it came from gets murky. It does get really murky, and so that's something I imagine litigation is going to happen over a lot. So I'm absolutely fascinated by that and want to take it further. I'm thinking back on years of business and all the IP concerns; I work for a big corporation, lots of other people work for various ones. If the snake continues to eat its tail and you see this happen over and over again, the value of current IP generally, I would argue, diminishes over time, because of its usefulness in business. As things progress ever faster in the business-plus-technology world, something that was a great piece of IP a few years ago might still be covered legally, but you're not necessarily going to use what was done 20 years ago versus what you did yesterday. With that kind of utility of current IP diminishing, and with this sequence of the snake eating its tail that you're describing, and the fruit of the poisonous tree, I believe you called it, that has to have massive, massive repercussions for how business uses IP in general, like your entire strategy. Because right now, organizations will come up with an idea, they will immediately go copyright or patent it, whatever the appropriate mechanism is, and get that in; they lock it in; it's part of their business strategy. That seems to me, from what you're saying, to fail in the future; it is no longer a good strategy. What does that mean in the large? I mean, that's a gigantic question. I think what you've described, and what I think we're seeing in society, is the intellectual property laws that we've created since the beginning of our founding history, you
know, the Constitution says that we will protect inventions; that's constitutional. So what I think we're seeing is our intellectual property regime, which has existed since the 1700s, creaking under its own weight with this new large-language-model generation happening in a way that's never been done in human history. I think you're right about the value of patents. What is the value of a patent if I can use a large language model, much like I described with the All the Patents project: every patent that's ever been done, every idea, every claim in every patent, let's recombine those. But you could imagine, and I've heard that there are companies out there doing this, taking not what's been done before but new ideas, making a ton of new claims, and filing those new claims with the US Patent Office, to be able to say, here's a new idea. And you could carpet-bomb the US Patent Office with all of these new things that are just machine-generated. There's currently a case, the Thaler case, T-H-A-L-E-R, where he had said a machine created this patent, and the Patent Office said, ah, machine-created patents are not a thing, you can't do that. But they only knew that because Thaler told them it was machine-created. How many of these things are being filed where nobody has told anybody, and it has been patented? And is that fraud on the Patent Office? Probably. But the question is, who's going to find out, if it doesn't go to litigation? For a moment I want to ask you to stop being an attorney and be a speculator here; I want you to kind of blue-sky it. Where can this possibly go, or what are the different paths? What's your gut tell you, in terms of how this plays out? Because you've described multiple ways in the last few minutes where the whole system can essentially collapse under its own weight, not just one way; you've described several different ways that can happen. Which isn't surprising, because
over the episodes of the show we keep talking about the rapid change that all this is bringing in AI. It's the most fascinating moment in human history, in my view, and you're describing the weight of all the structure of the past, in terms of the legal considerations, unable to keep up with what's going on now, and it's only accelerating. Where do we go from here? What does that mean? Sure. If past is prologue to what's going to happen, I would say that over the last decade we've seen business method patents essentially going away; we've seen software patents pretty much go away with the Alice decision and others. So patents have already been diminishing in value over the last 10 years or so, and I think this is just going to accelerate that diminishment. Because if I'm going to compete in the marketplace, largely anything I invent today is going to be obsolete in three years anyway, so what's the good in patenting a thing that is obsolete in three years? I think Elon Musk said, you know, I'm open-sourcing all my patents, because a patent is merely a license to sue. And that's true, right? I spend a million dollars or two million dollars to get the patent, and then I have to spend millions on top of that to sue somebody over that patent. That license to sue often just doesn't make business sense, and I think it makes even less sense as the system collapses in its own way. So if you're looking for me to speculate, and you are, I would hope that the patent regime is going to fall away in importance and people are just going to innovate. As all of us on this call are knowledge workers, and program or do legal work, that sort of thing, I think we're all benefiting in terms of productivity moving into the future. Outside of the IP stuff, the copyright things, all of those things, I'm coding much faster now, not because I'm necessarily a better coder, which maybe I'd like to
think I am, but I'm probably not; it's because I'm using generative tools and suggestions in a much more robust way. And I was fascinated, in one of your recent talks, when you were talking about the practical consequences of that: if I can work 50% faster, do I still work the same amount, or do I work less, and what are the implications from my employer's viewpoint on my work, that sort of thing. Could you talk us through a little bit of your thinking in that regard? Yeah, so I'm going to talk about four worlds. World number one is the 2022 world, before the language models. In that world I would work 40 hours a week full-time, and I would give 40 hours a week of 2022-level productivity, and as a result an employer would hire a workforce like me to do that. So that's world number one. In world number two, I know of people, anecdotally, that are working three full-time jobs, because they're getting at least 100% or so productivity gains, maybe 10x productivity gains, based on the coding tools we discussed. So they have three full-time jobs; they're essentially working 30% of the time for each, but still providing 100% of the output for each, and their employers are saying, wow, that's great output; they don't care, right? So that's world number two; I think that's what we're in today. World number three is probably the employer saying, hey, hey, hey, don't give me 30% of your time; give me 100% of your time, and maybe give me 10x the 2022-level output, right? I want that productivity gain from you. So that's world number three. But I think shortly thereafter comes world number four, where the executives are going to say, wait, if we lay off two-thirds of the workforce and still require them to work 40 hours a week with their 10x productivity, I can say to my shareholders: look at all the costs we cut by laying off two-thirds of the workforce, and we're still getting 5x productivity
on top of our 2022 productivity; we've cut costs, we've increased productivity, aren't we great? I think that that's probably the world we're headed for. And there's a world beyond that, which follows very obviously from that recognition of cutting the workforce and such. I don't know if we want to go there or not to finish up, but we've got some tough social issues to navigate there. Really, what we're describing here is: there's a scarcity mindset and there's an abundance mindset. The scarcity mindset is that around 1979, accountants were really worried about this artificial intelligence called the spreadsheet, because they said, wow, all we do all day is use ledgers and we add and subtract numbers, and machines can do that in seconds; that's going to put us all out of work. But what happened was that when the clients realized, oh, it's not going to take a week to get that ledger back, it's going to take seconds, they said, let's do scenario two and scenario three and scenario four and run more scenarios. And now we have more accountants than ever, because the tools are actually a force multiplier: now there's more accounting work rather than less. So that is an abundance mindset, not a scarcity mindset. So the real question in my mind, and maybe it should be on all of our minds, is: is the scarcity mindset that I described with those worlds one, two, three, and four going to be our future, or is there an abundance mindset, where we just have 10x or 100x productivity and we keep growing and growing and growing? I think that's a great transition as we get to the close here. Maybe one question that I'd like to ask you: we've talked about various interesting scenarios, and maybe things that are honestly kind of uncomfortable for a lot of our technical listeners, around legal questions and lawsuits and copyright and that sort of thing. From your perspective, as you look to the future, kind of this next year, what are
you encouraged by, and how would you encourage our listeners, maybe those practical developers or practitioners out there? How would you encourage them to engage in this conversation and these topics moving into the future, and what are you excited about or encouraged by? I think about AI largely as a tidal wave, or a tsunami, and we are running faster than the tsunami. How do we run faster than the tsunami? You learn how to use Copilot to be able to code faster; you learn how to do things that the machine cannot yet do. That's running faster than the tsunami. So really, I say to lawyers that are worried about AI that AI will not take a lawyer's job, but a lawyer that uses AI will take the job of a lawyer that does not use AI. And I would say the same thing to the coders who are listening: learn to use the tool to run faster than the tsunami. There's another joke: there was a bear at a campground, and two guys, and the one guy gets out his tennis shoes, and the other guy says, you can't outrun a bear, and he says, I don't have to; I just have to outrun you, right? So in that sense, learn how to use the large language models to outrun your competition, because as the wave crashes over them, it's not going to crash over you. I think that we all have to reckon with the wave eventually; I think it may crash over all of us, but until then, I think we should be running as fast as we can. Awesome, yeah, that's great encouragement. And thank you so much for humoring us with all of our random questions, some of which were selfish on my part, but I've learned a lot and really appreciate your insights, Damien, and the work that you're doing. I look forward to seeing your future projects, and I'm sure that our listeners will find this super interesting. Thank you so much. Thank you. I don't often get to speak to audiences as sophisticated as yours, so I really enjoyed the
really deep and probing questions, and I really am grateful for the opportunity. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next [Music] time |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | A developer's toolkit for SOTA AI | Chris sat down with Varun Mohan and Anshul Ramachandran, CEO / Cofounder and Lead of Enterprise and Partnership at Codeium, respectively. They discussed how to streamline and enable modern development in generative AI and large language models (LLMs). Their new tool, Codeium, was born out of the insights they gleaned from their work in GPU software and solutions development, particularly with respect to generative AI, large language models, and supporting infrastructure. Codeium is a free AI-powered toolkit for developers, with in-house models and infrastructure - not another API wrapper.
Leave us a comment (https://changelog.com/practicalai/231/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Varun Mohan – LinkedIn (https://www.linkedin.com/in/varunkmohan)
• Anshul Ramachandran – LinkedIn (https://www.linkedin.com/in/anshul-ramachandran)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
Show Notes:
• Codeium (https://codeium.com)
• What GitHub Copilot Lacks: Fine-tuning on Your Private Code (https://codeium.com/blog/what-github-copilot-lacks-finetuning-on-your-private-code)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-231.md) | 28 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another edition of the Practical AI podcast. My name is Chris Benson, I am your co-host today. Normally we would have Daniel Whitenack joining us, but Daniel has just gotten off a plane, he flew halfway around the world, and we decided to give him a break today; he was more lucid than I would be in the same situation. Today I wanted to dive right in. We have a super cool topic. It is not dissimilar from some of the other general things we've been talking about, but I have two guests today I'd like to introduce: Varun, who is the CEO and co-founder of Codeium, and Anshul, who is the lead of their enterprise and partnerships. Welcome to the show, guys. Thanks for having us. Thanks for having us, Chris. Hey, you're welcome. I'm really interested in learning more about Codeium. When Daniel lined you guys up, he's like, Chris, you've got to look at this, this is really cool, and I'm like, I'm on the show, and he's like, I'm already doing that. So I'm really glad to have you guys on, and he's going to be bumming that he missed the conversation, because he was pretty excited about it. So I guess I wanted to, before we even dive into Codeium and the problems it's trying to solve, have you each tell me a little bit about how you found yourself arriving at this moment: a little bit about your background, how you got into AI, and how this became the thing. Varun, if you want to kick off, and then Anshul afterwards. Maybe I can get started. It actually starts in 2017. I started working at this company called Nuro that does autonomous goods delivery, so it's an AV company. There I worked on large-scale offline deep learning workloads. As you can imagine, an autonomous vehicle company needs to run large-scale simulation; they need to be able to test their ML models at scale before they can actually deploy them on a car. In 2021 I left Nuro and started Exafunction, which is the company that is building out this product, Codeium. Exafunction started out building GPU virtualization software. For these large-scale deep learning applications, one big problem is GPUs are scarce, they're expensive, and also hard to program, and what Exafunction started building was solutions and software to make it so that applications that ran on GPUs were more effectively using the GPU hardware. We realized that our software at Exafunction was best applicable to generative AI tech, and started building out Codeium around a year ago. Very cool. Before I dive in, because I have several questions for you, I want to give Anshul a chance to introduce himself. Go ahead, Anshul. Surprisingly, my story is actually quite similar. I was also working at Nuro, so Varun and I used to work together back in the day. I was not actually working on the ML infrastructure side of things, which Varun had hands on, but I decided to also join the team at Exafunction. And as Varun mentioned, about a year ago three things kind of happened at the same time that we noticed, and that led us to Codeium. I think the first one is that
we're engineers. All of us here are engineers, and we had all tried the GitHub Copilots and all these cool AI tools for code in their beta, and we're like, wow, this is absolutely going to be the future of software development, but at the same time it's still scratching the surface of potentially everything that we do as engineers. So that's number one, I think, that we realized. Number two was talking to a lot of our friends at these bigger companies; a lot of them were just saying, oh yeah, it's cool, I've tried it for my personal project, but I can't use it at work, my work's not allowing me to use that. So that was the second thing we heard. And the third thing was exactly what Varun alluded to: we were building ML infrastructure at scale for really large workloads, and when this entire generative AI wave started coming, we're like, wow, we're actually kind of sitting on the perfect infrastructure for this. So all those three things combined together for us to be like, you know what, let's build out an application ourselves, an application where we as engineers are the customers ourselves, and that ended up becoming Codeium. As you were getting into doing GPU software, what in general were some of the challenges that you were seeing? Nvidia has their various supporting software and things like that, but clearly you saw that there was a need for something beyond that. Can you talk a little bit about the layout that you saw in the environment before you got to all the generative stuff? You had infrastructure; what positioned you for that, and what was the thing that you decided you needed to address? Maybe I can take it a step back, to why GPU workloads are just a little bit annoying compared to CPU workloads. One of the really unique things about GPUs is that, unlike CPUs, they're kind of tricky to virtualize. One common thing we have with CPUs is you can put a bunch of containers on a single VM and then make use of the CPU compute effectively; you can basically dump ten applications onto a CPU and it's perfectly fine. For a GPU it's a little bit more messy, because the GPU doesn't have a ton of memory, so you can't just load up infinitely many models on there. Let's imagine you have a GPU with 16 gigs of memory and each of these models takes 10 gigs; you can't really even put two applications on there. So that already becomes a big issue, and that's what a lot of these large deep learning workloads were struggling with. When I was at Nuro, one big problem we had was we had around tens of models, but we had these workloads that needed hundreds of GPUs, some of them even thousands of GPUs, and we struggled to make it so that we were even able to use the hardware properly. And you can imagine the complexity then stacks: now we're in a state where companies have trouble even getting access to 10 GPUs because of Nvidia's scarcity issues, and the cost of a GPU is not like a CPU, it's significantly more expensive; the cost of a single H100 chip is well over 30 grand, so these aren't cheap chips. So there was a big need at the time to figure out how to leverage the hardware properly, and that's what we had to build software for. Just to clarify for me, was that while you were still at Nuro, or was that after you started Exafunction? While I was at Nuro I led a team that built software that fixed these problems, but Exafunction was focused generically on how to make sure deep-learning-based applications could best leverage GPUs.
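Varun's packing constraint can be sketched with quick arithmetic. This is a toy illustration only; the function name, overhead reserve, and memory figures are invented for the example, not Exafunction's actual virtualization logic:

```python
def replicas_that_fit(gpu_mem_gb: float, model_mem_gb: float,
                      overhead_gb: float = 1.0) -> int:
    """How many copies of a model fit in one GPU's memory, after
    reserving some headroom for activations and runtime overhead."""
    usable = gpu_mem_gb - overhead_gb
    return max(0, int(usable // model_mem_gb))

# The example from the conversation: a 16 GB GPU and 10 GB models.
print(replicas_that_fit(16, 10))  # 1 -- a second model does not fit
print(replicas_that_fit(16, 3))   # 5 -- smaller models pack much better
```

Unlike CPU containers, which can oversubscribe cores, each GPU-resident model must fit wholly in device memory, which is why the 16 GB / 10 GB case caps out at one replica.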
That's what we started out building, actually, and then Codeium came out from that. Gotcha. Tell me a little bit about, as you have been right in the middle of this progression, just to frame it for a second: if you look at the last couple of years in particular, the pace of change has been so much, and you were right there, starting at Nuro and then creating Exafunction, seeing some of the challenges. Could you talk a little bit about how the industry was evolving and changing as you were seeing it, so that we can get a sense of how you moved toward Codeium? To give a little bit of the history, instead of just starting from where that is, can you talk about the itches that you were scratching, and why it led in that direction? What did this AI industry look like to you? So when we started, you can just imagine everything was a lot smaller scale. The hyperscalers, the cloud providers, just didn't have nearly as many GPUs; if you asked them what fraction of cloud spend was GPU spend, it was probably very small single-digit percentage points, maybe even less than that at the time. So this was a very small workload for them when both Anshul and I started at Nuro in 2018. But over time this grew a ton. We could see it from the training workloads: these were no longer even single-node training workloads. Back in the day, a single GPU node that had maybe eight V100s was considered a lot of compute, and suddenly we were able to witness this slowly becoming 8xA100 nodes, and then more than eight of these nodes being necessary even to train these models. Similarly, to prove out that these models were capable in an actual production setting, you needed to run offline testing at massive scales, on the order of 5,000 to 10,000 T4s, which is kind of incredible in terms of raw flops. So we were able to see this hockey stick happen in front of us, and that's what made us want to start Exafunction in the first place; we realized that there were going to be large deep learning workloads. One interesting fact: for just the Exafunction GPU virtualization software that we ended up selling to enterprises, we ended up managing over 10,000 GPUs on GCP in a single GCP region, so we ended up managing more than 20%, and we realized this was only going to keep growing. When we talked to the cloud providers, they were only going to keep growing the number of GPUs, and the interesting thing we realized was that in the future, generative AI was going to be potentially the largest GPU workload. That was the big thing we realized when GPT-3 came out, which was, I guess, in 2021. Gotcha. But at that point were you already into Exafunction? Had it already started? Yeah, it had already started, and we were selling GPU virtualization software to large autonomous vehicle and robotics companies. Gotcha. So basically, if I'm understanding correctly, the whole generative tsunami just kind of landed on you when you were already sitting in that space doing GPU virtualization; you just managed to land right in front of the wave, it sounds like. Yeah, so we started working on Codeium maybe four or five months before ChatGPT. It was interesting, just because we realized that an application like GitHub Copilot was going to be one of the largest GPU workloads, period. You've probably tried the product out: every time you do a keypress, you're going out to the cloud and doing trillions of computations, so it's a massive workload, and we had, as Anshul said, the perfect infrastructure to run this at enormous scale. Not to mention we were in love with the product from day one; we were early users the moment it came out in 2021. Very cool. And so as generative AI was starting to take off, with ChatGPT hitting the world and really changing things quite rapidly, and I think people are still shocked at how fast things have moved, you had started Codeium already. What kind of synergy were you starting to see there, in terms of knowing that you had one of presumably many, many GPTs coming, and other similar generative models, and you had just gotten into Codeium? Can you talk a little bit about what you were putting together in your minds to recognize the opportunity that it was? Yeah, I think one of the great things about the entire ChatGPT wave is that everyone was using it; this is a thing where literally every individual is using AI, and so it helped us in general, a big wave raises all ships kind of thing. It really helped us: we weren't really going out and telling people, hey, a tool like Codeium can help productivity, because that was now assumed by everybody, like, oh yeah, if I do any kind of knowledge work, then there's potential for AI to help. So in that sense, when this entire ChatGPT wave really came about, it overall helped us in terms of convincing people to even try the product. The other thing we recognized is that we were positioning ourselves very specifically from the beginning. When it comes to code, code is actually a very interesting modality. It's not like your standard ChatGPT, where you have a long context that the user puts in and then it produces content coming out. Code is interesting in the sense that, as we mentioned, it's an autocomplete; that's a passive AI rather than
passive AI rather than, like an AI that you're actually, instructing you know the model to do, something it's happening every keystroke, so it has to be a relatively smaller, model right you can't you have this like, you know hundreds of billions of, parameter models uh being used has be, relatively low latency and then code, itself is interesting right if you have, your cursor in the middle of a code, block the context both before and after, your cursor really matters right it's, not just what comes before so like, there's all these interesting kind of, like situational kind of constraints, about code that you put all these things, together and we realize that okay you, know all these chat chpi waves and, conversational AIS are happening that's, great but we're still not going to be, like you know rolled over by that, because we're kind of focusing on a very, specific application and modality of of, LMS that was pretty unique in many, [Music], ways, [Music], could you take a moment as we're diving, into codium and generative Ai and its, unique you know capabilities there and, just differentiate a little bit about, for those you know so many people have, tried co-pilot and so it's kind of, inevitable that you're going to get that, comparison to some degree can you talk a, little bit about what co-pilot's not, doing uh for generative AI or how you're, approaching it that allows you to show, people this as a Better Way Forward from, your perspective I mean we have tons of, respect for for the Copa team I just, want to start with that right I mean as, ver said we were all early users of it, definitely not putting you into conflict, with them that just as a starting point, for people absolutely yeah I think but, the way we kind of view this and I kind, of like alluded this earlier is that you, know writing brand new code right with, autocomplete is really just one small, task that we do as Engineers right we, refactor code we ask for help we write, documentation we do 
PR reviews and so, kind of our general approach has always, been let's try to build an AI toolkit, rather than an AI autocomplete tool got, it so we can get more into this into the, weeds here but like autocomplete is just, one of our functionalities that we, provide right we provide like an in ide, chat so things like chat GPT except, integrated with the ID e natural, language search over your code base, using like embeddings and Vector stores, in the background so like we're really, trying to expand like how can we address, like the entire software development, life cycle so I think that's probably, the you know the most obvious difference, with a tool like co-pilot from like an, individual developer point of view but, then the other thing which really kind, of builds off of all the infrastructure, that ver was mentioning earlier is that, we were already deploying you know ml, infrastructure in our you know previous, customers private clouds like we already, had all this expt Taste of how can we, take actual ml in for a deployer for a, customer in a way that you know they can, fully trust the solution because you, know we're not getting any of their data, and so another really big differentiator, for us was like okay I think this might, actually be a tool that enterprises can, use confidently and safely because we, have the infrastructure to do the, deployment in a manner that they they, would they would be open to using so I, think that was like the other, differentiator when it came specifically, to Enterprises but we can dive more into, that later no no that sounds good I want, you to connect one more thing for, going from uh being able to deploy the, infrastructure and helping your, customers in that way to codium as a, tool what's the leap there that got you, from one to the other how did you get, from infra focused to codium focused oh, yeah we I think we had to do like a full, like 180 when we started we like went, from a full like infr Service Company to, 
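The natural-language code search Anshul mentions can be illustrated with a minimal sketch. Real systems embed code with a learned model and query a vector store; here a toy bag-of-words vector and cosine similarity stand in so the example stays self-contained, and the snippet names and descriptions are invented for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned code embedding: lowercase word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "code base": snippet name -> text that gets indexed.
snippets = {
    "parse_config": "read a yaml config file into a dict",
    "retry_request": "retry a failed http request with exponential backoff",
}

def search(query: str) -> str:
    # Return the snippet whose vector is closest to the query vector.
    q = embed(query)
    return max(snippets, key=lambda name: cosine(q, embed(snippets[name])))

print(search("how do I retry an http call"))  # -> retry_request
```

The production version swaps the word counts for dense embeddings and the linear scan for an approximate-nearest-neighbor index, but the retrieval idea is the same.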
Right, it was a full 180 in terms of product. A pivot? Yeah, in some degree a pivot, because we knew that eventually, OK, we'll deploy to customers' VPCs, that sounds great, but if we're going to ship something to a customer, we had to be super confident that it was a product that would work well, because we're getting no feedback from their developers. So we actually first focused, for the first six or so months of Codeium, on just building out an individual tier. Any developer can go try it; we can see how they like it, try our new capabilities, get feedback from an actual community, do all these community-building things that we hadn't really done as an infra-as-a-service company. That was a really huge focus for us, and we've grown our Codeium individual plan to over 100,000 active developers using us for many hours a day, because you code for that long if you're a developer. That's plenty of feedback to us, plenty of people actually using the tool, telling us, yeah, this is good, this isn't good, or, oh, you tried pushing a new model that's worse. All those things we actually learned, so that we can get a product that's good. So that was the intermediate period: really learning from actual developers what is a good product and what is not, and I think that's always going to be a key part of our development cycle. You're coming into this with this rich knowledge in infrastructure for customers. That's a huge area of expertise, and even moving forward into the Codeium era, if you will (my words), that is a skill set and level of expertise that very few organizations have deeply. How did that inform you in terms of Codeium and differentiation, whether against Copilot or other tools that are out there, or just developers throwing things into ChatGPT? What did that background give you that gave you that differentiation in the marketplace? So I think the thing we started with is: no one cares if we have better infrastructure. Once you're a product, if we have better infrastructure, that's great, but if that makes a product that's the same, no one should care; they just assume that you should. So what we started with is we set a very high bar for ourselves: Codeium is an entirely free product. For the individual user, it's something that they can install and use immediately for free; there are no limits at all, so when it comes to autocomplete, you can use it as much as you want. And this, by the way, forced us to make the infrastructure as efficient as possible. Just to give you a sense of the numbers we're talking about here, we process over 10 billion tokens of code a day. That might sound like a large number; that's over a billion lines of code a day that we process for our own developers, and we're forced to do this entirely for free. On top of that, we probably have one of the world's largest chat applications as well, because it's in-IDE. All of this put together has allowed us to build a very, very scalable piece of infrastructure, such that we are the largest users of our own product. We learn the most from our users, and we can then take those learnings and deploy in a very cost-effective, very efficient, and optimized way to our own enterprise users. It's one of those things where we forced ourselves to learn a lot from an individual plan and then take all those learnings and bring them over to the enterprise, and a lot of the learnings we were only able to make because we placed very, I would say, annoying infrastructure constraints on ourselves, by saying, hey, you guys have got to do this entirely for free. And we're committed: Codeium is going to be a free product forever, actually; the individual plan will always be free. It's one of those things where our users are always like, how are these guys even doing it, what are they even doing to make this happen? And most of our users, by the way, are users that have turned off Copilot; we have spent very little, if anything, on marketing. So our users are like, how do they make this free? We take the approach that some of the best products in the world are free, like products at Google: they're entirely free. Google doesn't tell you all the time that they have the best infrastructure, but they do have the best infrastructure; it just so happens that it shows itself off in the best product. We could talk a little bit more about how we take our focus on infrastructure and make a much better enterprise product as well, but that's the way we look at it: how do we deliver materially better experiences with our infrastructure, where our users shouldn't have to care that we actually did that. You've brought it up, you've got to go there now, man, go ahead and dive right into it. I guess one of the interesting things, just going to how we run one of the world's largest LLM applications: what that focus forced us to do is, given a single piece of compute, let's say a single node or a single box of GPUs, host the most number of users on there. So let's say a large company comes to us: they can be confident that whether they're on-prem or in their VPC, we can give them a solution where the cost of the hardware is not going to dominate the cost of the software itself, because right now there's kind of
this misunderstanding that GPUs are really expensive, which is true, they are, but the trade-off is they have a lot of compute. Modern GPUs like A100s can do 300 teraflops of compute, which is some ungodly number, a crazy number compared to what a modern CPU can do, and we can leverage that the best. We've been forced to: if we didn't do that properly, we'd have outages with our service all the time. Because of that, enterprises trust us to be the best solution to run in their own tenant in an air-gapped way, which is fantastic, because that's the way we can build the most trust and deploy these pieces of technology to them the most effectively, since they don't want to ship their code outside of the company. Anshul can talk a little bit more about how we leverage things like fine-tuning as well; that's a purely infrastructure problem that's very unique to us versus any other company. Anshul, do you want to take that? Yes, so as Varun said, there's a lot we can do from the individual infrastructure point of view, so that we can do crazy things like make it all free for all of our individual users. But once we actually self-host, there's a lot you can do that any other tool can't do without being self-hosted, and one of the things Varun just mentioned is the personalization. If you're fully hosted in a company's tenant, you can use all of their knowledge bases to create a substantially better product. The way we generally think about it is that you have a generic model that's good; it's learned from trillions of tokens of code in the public corpus. But any individual company has themselves hundreds of millions of tokens of code that have never seen the light of day, and that's actually the code that's the most relevant for them if they want to write any new code. Think of all the internal syntax, semantics, utility functions, libraries, DSLs, whatever it might be. And a model like a Copilot or a Codeium, by the nature of it having to be low latency, can only take about 150 or so lines of code as context. This is not one of those ChatGPTs or GPT-4s where you're putting in files and files of context; it's really small what you can put in. So there's really no way for a single inference to have full context of your code base without actually fine-tuning the base model that we shipped to them on all their local code. We've actually done a bunch of studies on how this massively reduces hallucinations and all these other things you always hear coming up with LLMs. Things like this, things like providing more in-depth analytics, all come up by being self-hosted, and as Varun mentioned, these are all at the core, to some degree, an infra problem: how do you actually do fine-tuning locally in a company's tenant? That's actually an infra problem that we're happy to talk more about, but maybe I'll pass it back to you, Chris. I'm actually about to ask a follow-up about that, because you've got me really thinking about some of the use cases in my own life. So with the self-hosting model: as OpenAI has said with GPT-4, there's only so far we're going to go, because we've kind of used the public corpus of knowledge out there on the internet, so there's only so much more vertical scaling you can do on the model learning. And you're touching on the fact that there's so much hidden IP in code, hidden information in code, that is of huge value, particularly to the company that it's in, because it's representing their business model and the way their business has evolved over time. So if I'm understanding you correctly, you're basically saying that your solution can take advantage of that on their behalf and really hone against it. What are some of the limits on privacy? Are they able to do that? Because that's a big topic; we've actually talked about it on the show before, about IP concerns and privacy concerns in this generative AI age, getting the lawyers involved. Are you able to do the training on their site and keep it to the customer entirely, or do they have to let their IP out? How do you approach that problem? Yeah, so the answer to any question of whether any IP leaves Codeium for enterprises is always no. In pretty much every part of the system, our guarantee is to be able to deploy this whole thing fully air-gapped; we've even deployed in places like AWS GovCloud, which entirely doesn't even have a connection to the internet kind of scenario, so nothing ever leaves there. To address some of the points you brought up there, Chris: we're not the only ones saying that the data a company has privately is super important, and potentially even more important than the size of the model. A good example of this is actually Meta. Instead of using a GitHub Copilot or any generic system, they decided, in I guess classic Meta fashion, to train their own autocomplete model internally using all of their code, and they actually published a paper, I think a few weeks back, and their model was, in terms of size, I think 1.3 billion parameters, small with respect to the LLM world, and it just massively outperformed GitHub Copilot on pretty much every task. So there's some corroborating evidence for what we're saying about fine-tuning: doing this actually does lead to materially better performance for the user in question. Now, is that Meta model going to be good for everyone else? Probably not, but that's also not the whole point. And in terms of being able to fine-tune locally, yeah, we're able to do this completely locally, and again, it comes down to scale of data. Our base model has been trained on trillions of tokens of code; that's a lot, and that's why we need a multi-node GPU setup to do all this training. But an actual company, if they have say even 10 million lines of code, that's about 100 million or so tokens; there's still a huge orders-of-magnitude difference between this pre-training and the fine-tuning, which is why we can do this locally on, actually surprisingly, whichever hardware they choose to provision for serving their developers. So again, this comes from some of our ML infra background and all the stuff that we know how to do: we can do fine-tuning and inference on that same piece of hardware, so we don't ask companies to provision more hardware. And even more critically, we are able to do fine-tuning during any idle time of that GPU, so whenever that GPU is not being used to perform an inference, it's actually doing backprop steps to continuously improve the model. Fine-tuning is just one aspect of a larger personalization system, but we've instrumented all this on hardware, using our infra roots, to create a system that is relatively easy to manage. It's not a crazy amount of overhead for any company to manage or use Codeium, but they still get the maximum possible wins from these AI tools. OK, so that is super cool. And you mentioned things like GovCloud.
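Anshul's scale argument can be sanity-checked with back-of-the-envelope arithmetic. The tokens-per-line figure and the pretraining corpus size below are rough assumptions for illustration, not Codeium's actual numbers:

```python
# Rough assumption: code averages ~10 tokens per line, so 10 million
# lines is about 100 million tokens, matching the figure quoted above.
TOKENS_PER_LINE = 10
PRETRAIN_TOKENS = 2e12  # "trillions of tokens" of public code, illustrative

company_lines = 10_000_000
finetune_tokens = company_lines * TOKENS_PER_LINE

print(f"fine-tuning corpus: ~{finetune_tokens:,} tokens")
print(f"pretraining corpus is ~{PRETRAIN_TOKENS / finetune_tokens:,.0f}x larger")
```

A fine-tuning pass over a corpus tens of thousands of times smaller than the pretraining run is why it can plausibly share the GPUs already provisioned for serving, soaking up idle cycles between inference requests.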
actually worked in quite a bit in my day job, and I can think of a whole bunch of other use cases for me personally. Which begs a question, kind of going back for a moment, because we are Practical AI and we like to always give some practical routes for people. So if we go back toward the beginning of the conversation: we have some folks listening to this right now who have been using Copilot for a while; they're probably putting code into ChatGPT and trying to accelerate there with varying degrees of success; they've been experimenting with Bard, and Bard's gotten better on code lately, obviously. And so many people that I talk to are still very frustrated with the workflow of the whole thing. Recognizing that you've outlined these differentiators from Copilot and other competition out there, in a friendly-competition kind of way, talk a little bit about some of the specific generative AI use cases that would help someone in that position, where they're like: yeah, I'm using the stuff, but I'm a little bit frustrated with it, I don't have it down. If they were to give Codeium that chance and dive in on it, can you lay out the use cases? What are they going to get when they move in, from a very practical, me-as-the-coder perspective? And maybe give me a couple of different ones, because I'm really curious, and selfishly I'm probably going to go try each of these that you're telling me, so I'm scratching my own itch by asking the question.

I think you pointed out, yeah, workflows and the user experience for a lot of AI tools: everyone's still trying to figure it out, right? We're still in very early days of these AI applications, and these are our learnings from trying to become a product company. We're actually taking the UX quite seriously, and this is what the individual plan is created for, to get feedback on. Very concretely, I think a lot of people have that frustration of having to copy a code block over to ChatGPT, write out a full prompt, remember the exact prompt they typed in before that gave them a good result, then copy answers back in and make modifications. That workflow is clearly kind of broken. So when we built our chat functionality into the IDE, we asked: okay, what are all the parts here that can get totally streamlined? We did things like: on top of every function block there are little code lenses, these small buttons that someone can click, like "explain this function", and it'll automatically pull in all the relevant context and open it up in the window. You're not copying anything over, and you're not writing out any human text. Or if you say "refactor this function", or "add docstrings", or "write a unit test", those are all small buttons, preset prompts that you can just click. It'll do the generation on the side, and then we even have a way of clicking "apply diff", and because we know where we pulled the context in, we can apply the diff right back into the context. So you're not copying things back and trying to resolve merge conflicts; all these things are done automatically. There's a lot of really cool things you can do when you start bringing these things into the IDE where developers are, and we spent a lot of time really thinking, as you said, from a workflow point of view: how do you make this super smooth?

Varun, could you talk a little bit about some specific tasks that you're seeing people doing? When we talk about generative AI, it's expanded, and you know,
from LLMs, we're doing things in video, we're doing things in natural language; all of the different modalities are gradually being addressed with these different models and the different tools being built around them. Could you talk about what people are trying to code right now? What specifically is Codeium helping them with? Not just about Codeium, but the actual use cases themselves, so people go: ah, I can see a path forward, I can go do that, I know how to generate this or that or the other with generative AI in Codeium. Can you talk about those at something of a specific level?

So, interestingly, just a little bit about multimodality: I think we're maybe a little far from leveraging other modes beyond text for code. Maybe that'll happen, but I don't think there's enough evidence right now yet. For autocomplete, just to be open about the functionality we have: we have autocomplete, we have search, and we have codebase-aware chat. We recognize right now that autocomplete accounts for more than 90 to 95% of the usage of the product. It's because chatting is not something people do even every day, potentially; they might open it up once every couple of days. But autocomplete is something that's always on, very passively helpful, and people get the most value out of it, which is kind of counterintuitive; I think people don't recognize that immediately. When people are doing autocomplete, we've recognized there are two modalities to the way people type code. There's a modality of accelerating the developer, which is: hey, I kind of know what I'm going to type, and I just want to tab-complete the result. And then there's also an exploration phase, which is: I don't even know what I'm trying to do, and based on that I write a comment. This is a classic thing where my behavior writing code has materially changed because of tools like Codeium: I'll write a comment and kind of just hope and pray that it pulls in the right context so that it gives me the best generation possible. So in my mind, for the acceleration case, Codeium is very helpful; it can autocomplete a bunch of code. But the exploration case is where the true magical moment comes in, where I had no clue how I was going to use a bunch of these APIs. That's what we're focused on trying to make really better, whether that be in chat or with autocomplete: how do we build the most knowledgeable AI that is maximally helpful and also minimally annoying? The interesting thing about Codeium as a product, or these autocomplete products generally, is that they take a little getting used to, but even when they write wrong things, it's not very annoying, because you can very easily say: I don't want this completion. It didn't write an entire file out where you need to go and correct a bunch of functions; it was a couple of lines, maybe ten lines of code, and you can very easily validate that it's correct. That comes back to what Anshul was saying, which is: how do we make sure we can provide the maximally helpful sort of AI agent? The answer is to have the best context possible, and there are a couple of nitty-gritty details. One: our context, and we'll write a blog post about this, is double what Copilot's is; we allow double the amount of context for autocomplete compared to what they do. The second thing is that we're able to pull context from throughout the codebase, and that same piece of technology that pulls context throughout the codebase for search and all these other functionalities is getting used as part of chat, for codebase-aware chat, which is something that Copilot doesn't even have today yet. The third piece
is, finally, for a large enterprise: how do we make these models actually semantically understand your code? Which is where fine-tuning comes in. For us, context gets us a lot of the way, but it doesn't get us all the way, because you can imagine, even with double the context, let's say we can pass in a thousand lines of code; for a company with 10 million lines of code, we're scratching four orders of magnitude less code than the company actually has. So our vision is that we want to continually ramp up the amount of knowledge these models have and the ways in which they can be helpful. I don't know if that answered the question.

It did, actually. Your acceleration-versus-exploration analogy, for me personally (different people get different things), really clarified where I might be using Copilot or where I would go and use Codeium, because I do struggle on the exploration side myself. It's a lot easier on the acceleration side, getting into the line and cranking through it fast, which I've been able to do with these other tools. But I have struggled on the exploration side, because I kind of want to do a thing, and I'm trying to figure it out, and I'm just going to see where my fingers lead. Having the ability to support that in the way you described gave me a very clear understanding from my standpoint.

So I'd like to ask each of you where this is going, both in the large and in your specific concern with Codeium. Things have never moved faster than they're moving right now in terms of how fast these technologies are progressing, and Daniel and I have a habit, we were commenting on our last episode about this, of saying, yeah, we recently mentioned this thing and said we'd get to it, but then we turn around and we end up talking about it; we got there way faster than we ever anticipated, with the speed of generative AI. You're already creating these amazing tools and having to stay out front. Where's your brain taking you at night, when you stop and chill out and have a glass of wine or whatever you do, and you're just pondering what the future looks like? I'd like to know, both from your own personal standpoints in terms of your product, but also the generative AI world in general: how do you see it going forward? I'd love your insights.

Yeah, I think the classic question, in the grand scheme of things, is: oh my god, is generative AI just going to totally get rid of my job, or completely invalidate it? We'll be the first people to say that we do think AI will just be the next step in a series of tools, at least in code, that have made developers more productive, that have let them focus on more interesting parts of software development, and that act as an assistant. All these tools are called AI assistant tools for a reason. We're definitely not at a place yet, and I don't think we will be for a while, where there isn't going to be a human in the loop, in control, guiding the AI and what to do. So from that respect, the doomsday scenario, I don't want to speak for everyone, but I think we're pretty far from that mentality. But we wouldn't have gotten into Codeium if we didn't genuinely think that there are so many things we do day-to-day as engineers that are a little frustrating, boring, that take us out of the flow state and slow us down. Those all seem like very prime, ripe things to try to address with AI, and I think that's our general goal. I think there are a lot more capabilities to
build, right? I don't think search and chat are going to be the last building blocks we build; we have more capabilities coming up that we're super excited about. But it's also going to be a thing where, as you said, this is moving super quickly. We have research, open source, and applications all developing at the same time at breakneck speed, and so part of what we're also looking forward to is: how can we educate at least software developers on the best way to use AI tools, how to make the most use of them, so that they are part of the wave and can also get a lot of value?

Well said. Varun?

Yeah, you were asking me what the big worry is for me: the big worry is that there are going to be a lot of exciting new demos that people end up building, and obviously for us as a company we need to make strategic bets on, hey, is this worth investing in? For instance, a couple of months ago there was an entire craze about agents being able to write entire pieces of code for you, and all these other things. For us, though, we had lots of enterprise companies using the product at the time, and we recognized that the technology just wasn't there yet. Take a codebase that's 100 million lines of code, or 10 million lines of code: it's going to be hard to write C++ across, say, five files that compiles perfectly and also uses all the other libraries, when your context is, you know, limited. It's not going to be the easiest problem, and that's maybe an example. For us, we've, just to pat ourselves on the back, over the last eight months iterated significantly faster than every other company in the space in terms of functionality, but we need to make strategic bets on what the next thing to work on is at any given point, and we need to be very careful about: hey, this is a very exciting area, but is it actually useful to our users? A great example: given a PR, we generate a summary. I think Copilot has tried building something like this, and we tried using the product Copilot had, and it was just wrong a lot of the time. That would have been an interesting idea for us to pursue and keep trying to make work, but there are diminishing returns. Anshul and I have seen this very clearly in autonomous vehicles, where we had a piece of technology that was just not there yet; it needed a couple more breakthroughs in machine learning to get there. And the idea of building it five years in advance? You shouldn't be doing that. You just 100% shouldn't be building a tool when the technology isn't there yet. That is something that keeps me up at night: what are the next things we need to build, while keeping in mind what the technological capability set is today? If that makes sense.

It does, and it's a very practical AI perspective, if you will, so very fitting final words for the show today. Well, Varun and Anshul, thank you very much for coming on the show. I got a lot of insight and a lot of new things to go explore from what you just taught me, and I appreciate your time. Thank you for coming on.

Thanks for having us. Thanks, Chris.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music] |
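The scale argument in the discussion above (a base model pre-trained on trillions of tokens versus roughly 100 million tokens for a 10-million-line enterprise codebase, with autocomplete context on the order of a thousand lines) can be sanity-checked with quick arithmetic. The tokens-per-line ratio and the "trillions" figure below are assumptions taken from the conversation, not measurements:

```python
import math

# Rough figures quoted in the conversation (assumptions, not measurements):
TOKENS_PER_LINE = 10          # 10 million lines of code ~= 100 million tokens
PRETRAIN_TOKENS = 2e12        # "trillions of tokens" of pre-training data
COMPANY_LINES = 10_000_000    # a large enterprise codebase
CONTEXT_LINES = 1_000         # "let's say we can pass in a thousand lines"

company_tokens = COMPANY_LINES * TOKENS_PER_LINE          # fine-tuning corpus size
pretrain_ratio = PRETRAIN_TOKENS / company_tokens         # pre-train vs fine-tune gap
context_gap = COMPANY_LINES / CONTEXT_LINES               # codebase vs context window

print(f"fine-tuning corpus: {company_tokens:,.0f} tokens")
print(f"pre-training is ~{math.log10(pretrain_ratio):.0f} orders of magnitude larger")
print(f"context sees 1/{context_gap:,.0f} of the codebase "
      f"(~{math.log10(context_gap):.0f} orders of magnitude gap)")
```

Under these assumptions the fine-tuning corpus is about four orders of magnitude smaller than the pre-training corpus, which matches the "four orders of magnitude" figure quoted later in the conversation and explains why fine-tuning fits on a customer's own serving hardware.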
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Cambrian explosion of generative models | In this Fully Connected episode, Daniel and Chris explore recent highlights from the current model proliferation wave sweeping the world - including Stable Diffusion XL, OpenChat, Zeroscope XL, and Salesforce XGen. They note the rapid rise of open models, and speculate that just as in open source software, open models will dominate the future. Such rapid advancement creates its own problems though, so they finish by itemizing concerns such as cybersecurity, workflow productivity, and impact on human culture.
Leave us a comment (https://changelog.com/practicalai/230/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Stable Diffusion XL 0.9 (https://huggingface.co/stabilityai/stable-diffusion-xl-base-0.9)
• OpenChat (https://huggingface.co/openchat/openchat)
• Zeroscope XL (https://huggingface.co/cerspense/zeroscope_v2_XL)
• Salesforce XGen (https://huggingface.co/Salesforce/xgen-7b-8k-base)
• AI is Eating The World (https://txt.cohere.com/ai-is-eating-the-world)
• LLM university (https://docs.cohere.com/docs/llmu)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-230.md) | 12 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another Fully Connected episode of Practical AI. In these episodes, Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest news and also dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, a founder and data scientist at Prediction Guard, and I'm joined as always by Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? I'm doing very well, Daniel. More interesting times ahead of us. You know, I'm thinking about changing jobs; I'm thinking about a job title called something like, I don't know, generative juggler. What do you think? Yeah, because it sounds fun. You know, I can totally see llama wrangler. Oh, I love that; that's perfect for me, too. I'm all over that. Of course, our listeners know that you're a big animal advocate. What is an animal advocate's perspective on the use of all of this llama, camel, all this different usage of animals? Do you find it fun and interesting? Of course, we should all have animals on the mind all the time; it makes us better people. Yes. Yeah, I'm traveling, and my wife just sent me a picture of our dog lying on the floor in a funny position, looking out of the corner of his eye, so it made me happy going into this recording. That's always good. That sounds
good. You know, the pet pictures are really important when you're traveling; my wife does that with me. Yeah, she'll send a good moment. So in the face of all this technology change constantly coming at us, it keeps our humanity intact. Yeah, and it is a crazy time in the AI community. We use these Fully Connected episodes to update people on different news, and one of the things I was realizing this week as we were prepping for this episode is that there are people, and I think there's a website, talking about the Cambrian explosion of models, or the proliferation of models. Just in the past couple of weeks there are so many different ones that have come out; it really is a proliferation. So I thought it'd be good to highlight a few of those. We can't get to all of them, because there are just so many, but as a tip to people: sometimes how I look at this is I'll go to Hugging Face and just go to the Models tab, and if you make sure it's sorted by trending, that's a cool way to see what's at the top. You can filter by different types of models, but I found it interesting to look at what's trending overall, because as of now on the Hugging Face hub it's a mix of video generation, image generation, and language generation models, and over time you can see which of those categories is trending up or down. There's probably an app that needs to be made to track that sort of thing, but I'll let someone else do that. One of the ones I wanted to highlight was the new Stable Diffusion XL 0.9. These model names are getting a little more complicated over time, I've found, but Stable Diffusion XL 0.9, or SDXL (people probably remember Stable Diffusion), is an image generation model: you put in a text prompt, and out comes an image. So something like "astronaut riding a horse on the moon, photorealistic", and you get an image out. This one is kind of interesting. I think it was back in April they announced some kind of private or beta access; now the model is up on Hugging Face. It is available, but only under a research-only license. But the images... I don't know if you've seen some of these, Chris. I'm looking at them now while we're talking. You played with Stable Diffusion back in the previous iteration; what are your thoughts on the progression of this? Oh, I remember when we were playing with it, we were actually doing it on one of our episodes, and we were coming up with raccoons all over the place. I remember at the time there were raccoons everywhere, not just for us; there seemed to be lots of raccoons coming out of Stable Diffusion, and I was rather wondering about that. But I'm looking through some of these, and the imagery has come so far, and the capability and what you can do, and that's just a few months since we were doing that. I'm in awe right now as I look at these shots while we're talking. At least the last time I checked, and this might be different by the time you're listening to this episode, there was a blog post from Stability about the release, and they mentioned that there's going to be a follow-up, more technical deep dive. I don't know if it's a full paper or just a deep-dive post, but there are some general descriptions of how this is working, and you can dig into it a little bit. So instead of there being one step, or a one-model situation, in this image generation, apparently this model consists of a two-step pipeline. It's still diffusion-based, but there's one model that generates, they say, latents of the desired output size, and the second step is
specialized to generate this sort of high-resolution image, so it's an image-to-image model. They combine these, and the second stage of the model then adds finer details to the generated output. That's one interesting thing. Which is also kind of interesting: I don't know if you've been following everyone talking about what's going on, quote-unquote, in GPT-4, but I think there's a lot of speculation and evidence that it also is a sort of mixture of experts, multiple models together, not just a single model call. So I find this trend interesting. Do you have any thoughts on the virtue of this kind of multi-step, multi-model approach, and do you think it's likely to be a general architecture we see continually, instead of just having one model? Even going back to Stable Diffusion, I noticed that of the two models you mentioned, the second is basically twice the size of the first in terms of parameters. Any thoughts on the science or math of why you would take that approach? Yeah, as you scale up your dataset and your compute for a given model size, you're going to get diminishing returns on the performance of that model. So in some ways, given a certain amount of data and a model architecture, what are you going to improve? You could train for longer, you could train on more data, but at the levels some of these models are at now, thinking particularly about OpenAI, what more can they do right now with respect to training longer with the same model architecture or more data? So what's a natural way to improve output? Combining multiple models together in a pipeline. Now, I think you'll see advances in architectures, so different model architectures will continue to come out and maybe break some of that trend. Another way you see this kind of multiple-models approach being applied is in things like the RLHF process, which we've talked about on the show: reinforcement learning from human feedback. Things like this have been around for quite some time; GANs, for example, include two different models, a generative model and a discriminator. These sorts of multi-model workflows that produce an instruction-tuned or otherwise tuned model out the other end, I think we'll continue to see a lot of, even if the model used for inference at the end is a single model. I've got one other question before we dive into the rest of the released models. One of the things that was notable was that OpenAI commented, after GPT-4, that there was only so much vertical growth you could have there, given the dataset, basically the whole internet, in the model. You can't just keep growing them like that. Here we find ourselves in what we've described as the proliferation episode, talking about all these models coming out. Do you think part of what we're looking at today comes from the fact that, when you lose the potential for further vertical growth because you've basically used all the data out there, that gives all of these other model creators a chance to catch up to some degree? You had the surging of the leader, but once they hit a barrier, now you're seeing many, many others catching up and comparing themselves to that. Is that a fair assessment of what we're looking at now? Yeah, people have probably seen this post that went viral, which is supposedly a leaked document from Google saying: we have no moat, and neither does OpenAI. They talk about how, basically, I think the phrase they use is, open source is eating our lunch; we're not
positioned as major players to compete, necessarily so I think that that's where, that sentiment is probably coming from, wherever that document originated um, that would be the sentiment that's being, expressed there so the ability to have a, foundation model is no longer this sort, of moat that separates you because now, there's open source models there's, really good open source models that, maybe the base model let's say the base, model doesn't perform as good in a, general purpose way as GPT 4 or, something like that well the reality is, that like in your business environment, you don't need a general purpose model, that's usually not what you need right, what you need is a model that performs, really well for your task and so in that, sense having a really good Open Access, whether it's a language model or an, image generation model and then having, the ability which we have now to adapt, or fine-tune that model with your own, private data actually is kind of part of, what we're seeing with this, proliferation I would say an example of, this is the next model I was going to, highlight which I think is a really good, example of this so I saw this in a tweet, um I don't know act the actual data was, released but the Open chat models so if, you just go to hugging face slopen chat, so there was a model that kind of, outpaced chat GPT in some benchmarks so, there's a bakuna benchmark that model, wasn't as open but these open chat, models are the first open models to, outpace chat GPT with GPT 3.5 in this, Benchmark and what's interesting is this, is another very much a trend that we're, seeing more and more and more of is, actually using the closed, proprietary but really impressively, performing models like gp4 to actually, create data for you to fine-tune an open, model which then performs or maybe, performs better than the closed models, at least in certain scenarios so that's, what they did they used, 6,000 conversations generated out of, gp4 to fine-tune this 
model which, actually outperformed and is available, publicly and this we're seeing over and, over so there's other models like people, are generating this data for less than, a, right they're using the open AI API, less than $1,000 to create these models, that are really impressive and how they, perform now I think there's all sorts of, interesting implications of that and, part of me wonders well how is open AI, going to shift its business model to, make that sort of thing less or or other, providers of foundation models one, result of this might be that we see, providers like open AI try to prevent, usage like this where you're just using, their API to generate data to create a, model that works better for you than, using their API I don't know we'll see, if you kind of back away for a second, and look at the history of this it's, starting to look a lot like the way, software development went open source, you know if you look back around you, know 2000 or even down down into the, '90s before that you saw all of these, proprietary programming languages you, know you'd pay for them you had to pay, for environments and stuff like that and, gradually open source overtook it and uh, from my perspective it's feeling a lot, of the same right now as we're making a, shift um I will leave it by saying I'm, wondering uh if to that point about your, unknown source document earlier whether, or not that's kind of an inevitable, destination we're going, to, this is a chang log news break can you, trust jat gpt's package recommendations, m not so much the team at Vulcan have, published a new security Threat Vector, they're calling AI package, hallucination that sounds good I'll have, that it relies on the fact that chat GPT, sometimes answers questions with, hallucinated sources links blogs and, statistics it'll even generate, questionable fixes to cve and offer, links to libraries that don't actually, exist what about the RO Us's rodents of, unusual size I don't think they, 
exist." Quote: "When the attacker finds a recommendation for an unpublished package, they can publish their own malicious package in its place. The next time a user asks a similar question, they may receive a recommendation from ChatGPT to use the now-existing malicious package." End quote. These AI tools like ChatGPT are a real boost to developer productivity, but be careful out there. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news.

[Music]

We've been talking about open source models, things we talked about even two months ago on this show as "someday these things will happen." I remember us talking about the graph that I think Clem from Hugging Face posted on Twitter, where you've got this linear progression of the closed source models, and then eventually this exponential increase of open models that surpasses the performance of the closed models. I don't know if we're totally in that place yet, but it seems like it's happening to some degree, and maybe not in certain ways. For general-purpose use cases, "this model can do whatever you ask it to do," I think the closed models are still winning. But like I said, how many business use cases need a model that does that sort of thing? For the majority, you don't. So maybe for the actual proliferation of these models in business use cases, all that really matters is that you can have open models that perform really well for those business use cases. And that brings up, of course, a lot of other concerns and practical implications. Open models are great if they perform better, that's great, but there is a lot more to it. It's not only that OpenAI or Cohere or Anthropic or whoever is running these models and that the models are good; they also have a really nice, easy-to-use API that generally is up (although I think ChatGPT was down the other night) and is generally well maintained and all that. You don't get that with these open models. You have to figure that bit out yourself, which has other engineering implications and infrastructure implications, and obviously, going back to someone I know here, there are business opportunities available to help people onboard onto those things. There's so much happening right now; as you said, it was two months ago, and now it's already changing, and I think people need to get used to the new speed of how fast this is happening. Later on I'll come back to that.

One other model that is trending, at least this week, on Hugging Face is Zeroscope. Zeroscope XL version 2 is the one I'm looking at, but if you just search for "zeroscope" you'll find it. This is a video generation model, which is pretty cool; it produces watermark-free videos. One of the things I find interesting about this model, the Zeroscope model, and also the Stable Diffusion model we mentioned a second ago, is that you can run them on some sort of commodity hardware. Maybe not the cheapest of commodity hardware, but this model supposedly uses a little over 15 gigabytes of GPU memory rendering 30 frames at 1024 by 576. That sort of hardware is definitely within reach for a lot of people, even on platforms where you can access some of it for free, at least for some time. So that's one of the things I find interesting about some of these models as well.

That's cool. We're seeing more and more video generation recently. It wasn't long ago, it was earlier this year, that we were talking about kind of moving
there, as we were coming into 2023, and the fact that we were expecting it but it hadn't really arrived yet. Now, to your Cambrian point, it has blown up, and we're seeing multiple opportunities in terms of these models, already in open source versions as well.

So I'm kind of curious, Daniel: as a practitioner looking at this explosion of different options coming at you, how do you make an evaluation? I've had people ask me that recently. So much is happening now, I don't even know how to evaluate one option versus another. Do you have any thoughts on framing that?

I think there are a bunch of different axes along which you can narrow down your choices. Let's say you have a commercial use case. That alone is a filter by which you can knock out a huge number of models. Just looking at the ones we've listed so far: Zeroscope, released under Creative Commons non-commercial, can't use it; OpenChat, released under the Llama license, can't use it for commercial purposes; Stable Diffusion XL 0.9, available only for research, can't use it. Not that you couldn't prototype with them, or that versions of them won't eventually be released, or that you couldn't access them in other commercial products, but that does narrow down your cases quite a bit. Whereas certain models, like the MPT family from Mosaic, are released under licenses that allow you to use them for commercial purposes, etc. So that's an easy one: what is your use case, are you commercial? That knocks out a whole bunch.

Then you have a smaller set, and I think you need a second layer of filtering, which is to think about your practical use of the model. For example, let's say I want to use an LLM to extract a bunch of information from a huge number of unstructured documents. I've got maybe millions of documents and I want to extract information from them. Well, if each inference is going to take 20 or 30 seconds, and I need to extract a lot of information, that's going to become a major problem. So then I need to think about how I'm going to use this and what the constraints are around inference speed and the interaction with the model, or the context length that I'm putting in, in the case of large language models: do I need to put in a lot or a small amount? That narrows things down to models that are maybe smaller and can run faster for inference, or models that support larger inputs. So there are those concerns.

And then finally, once you get down to that, let's say you've found one that fits your use case and the constraints you're working under, then I think it gets down to, I guess we'd call it old-fashioned, though it's not that old-fashioned: create yourself a test set. That's still the best way to do this. If you have 100 or 200 examples that you've manually labeled, "this is what I would like to go in and this is what I would like to come out," then you should just check the output and see: what is the accuracy, how does the output compare, how would I rate these as failures, or whatever. That's still the way to do it.

So the last two minutes is my favorite part of this episode so far. You just put the "practical" in Practical AI in terms of how to actually do this stuff in real life, so much appreciated. Yeah, of course.

Well, two things were mentioned there: the licensing, and the context length that we just talked about. For those that aren't aware, most of these generative models accept a prompt, which is some amount of text that gets auto-completed; the result is an auto-completion. Most of the large language models we're dealing with are auto-completion models, so they predict next words. With the image generation or video generation models, when you kind of
think of the image or the video as the completion of a prompt as well, because you're putting in text. But these models generally have a constraint around the amount of text you can put in as your prompt. Many of the open models are around 2,000-ish tokens of input, so for example you couldn't put in a whole chapter of a book; that's not what you could put in there. There are some trickeries that have been introduced that take a model trained on a smaller context length and kind of extend the context length, but something we've seen in the past couple of weeks is some seemingly very powerful models that are open, available for commercial usage under their licensing, and support a longer context length. One of these is the Salesforce XGen model. If you go on Hugging Face, just search for XGen. It's a 7 billion parameter model with an 8,000-token input sequence length, which is obviously quite a bit more than that 2,000.

One of the things I find interesting about this model as well, fitting with similar trends we saw in the other models, is that the 7 billion parameters is an important piece of it, because once you go beyond 7 billion parameters, you lose some of your ability to deploy models on more commodity hardware. That 7 billion is a very strategic number, and that's why you see a lot of 7 billion or 6.9 billion parameter models: it allows you to run these models on more reasonable hardware, single GPU cards, that sort of thing.

What is the technical distinction there when you exceed the 7 billion parameters? Is it something as simple as, say, the bus width of data bits going in? I mean, it's really the model fitting into the GPU memory. Got it. And not exceeding it. So unless you want to quantize your model, and we had a whole episode with Neural Magic, so I'd recommend people listen to that, that was really
cool. But unless you're very careful... so quantization means each of these seven billion parameters of the model is some sort of floating point number, and most of them, if you load them in, are not used that much, or you don't need full float32 precision to get good output. So one thing people do is quantize those down to float16, or even int8 or 4-bit or whatever. If you're not really careful about how you do that, or if you don't retrain at that precision, oftentimes you lose a lot of performance. So the thing here is: a 7 billion parameter model, with these larger cards you can now get, a single card, even if it's an A100 or something like that, which is fairly expensive, but it's a single card, it will fit and run one of these models fine. But if you go to 40 billion parameters, 60 billion parameters, these larger models, now you're getting into multi-GPU territory, which makes things much more difficult. So there is a balance here: you can quantize or optimize the larger models and run them on commodity hardware, but it's not always straightforward how to do that.

Gotcha. So in general, if you're a practitioner out there in a small or medium-sized business, kind of doing it on your own or with your company's stuff, you want to focus in that five, six, seven billion parameter range so you can be productive and not escalate costs out of your control. Is that a fair way of looking at it?

Yeah, I would say basically, if you try to work in that seven-billion-or-fewer zone, your life is much easier infrastructure-wise. That will probably also change over time, but I think it's the reality now. And one note on the Salesforce thing, I love it when people post this: they posted that the training cost was around $150,000 USD on Google Cloud using TPUs, and this model is released under Apache 2, which is cool. For me, the other one I mentioned with the 8K context length was the MPT-30B model, which was released recently. But also note the difference in parameter size: the XGen model from Salesforce supports that context length at 7 billion parameters, and for MPT you have to go up to 30 billion. The MPT models are really great, I love them, I've been using them, but that's a differentiation; you can see why Salesforce XGen is trending, because of their focus on this sort of thing. It's more accessible. Yeah.

[Music]

Well, Chris, I think some of what we've talked about here with the open models is quite interesting, because as we already mentioned, we were talking about this a couple of months ago and thinking, oh, at some point these open models are going to proliferate and take market share, or whatever you want to say, from the closed proprietary models, and I think we are seeing this trend. One piece of evidence I saw in the news, I forget if it was this week as we're recording this, was the acquisition of MosaicML. Mosaic is the one that created the MPT family of models, which again, I've already said, are really great choices if you're looking for some LLMs to play with. But Mosaic was acquired by Databricks, or quote, "agreed to join," and the prices on these things are just astronomical. It's crazy. Yeah, and at least this one is public information: a total of $1.3 billion for MosaicML, which has 62 employees. That's $21 million per employee. That's a valuable employer right there.

I was talking to someone about this, and I wasn't in the strategy meetings with Databricks when they were discussing why they're doing this and how it positions them, but think about it: I remember Databricks and Spark and Hadoop back in the big data days leading into data science
days, and really focusing on this spark sort, of thing and think about that use case I, gave earlier of the data extraction, right how are people going to do large, scale data processing in the future or, large scale analytics in the future well, there will likely always be data, warehouses and SQL queries and analytic, systems but there's going to be a large, portion of what people are doing, analytics wise or kind of Big Data, analysis quote unquote Wise by, extracting information or doing, reasoning with llms the problem with, that is for an Enterprise you can't do, that with a, proprietary closed API because you can't, leak your private data to that API and, it's not cost effective to do it anyway, because those charge per token so how, are you going to do that you're going to, proliferate open models that are trained, on your own private data and make that, easier and easier and that's what, mosaic's doing right so I think once you, kind of think about that positioning I'm, not one to comment on business business, strategy necessarily but that's that's, how I've kind of thought about this is, yeah like that's the valuable trajectory, of of where we're headed I think it's, inevitable because you know you run in I, know in business I have seen many many, cases where these closed models and the, licenses surrounding them and the, concern about proprietary data it's a, big challenge for people that are trying, to get into them as quickly as possible, to navigate that through it throws a, whole bunch of legal concern around it, and then you guard rails which slows it, down so it makes perfect sense to go and, consume and participate in the open, Community uh and I think just like, software it's inevitable business will, force us into that direction so it's not, it's not people doing it out of the, goodness of their hearts it's people, doing it for the betterment of their, businesses uh because it's the only, sustainable viable option they have, right now the rug 
can't get yanked out from under you very quickly. Yeah. I think we had no idea a couple of months ago that we were going to have this conversation now, though you and I probably expect things to happen faster than most people out there, because we're neck deep in this stuff all the time. But I don't think even we realized just how fast it would happen. I'm trying to adjust my own brain to the fact that this will keep happening, and will probably accelerate, so we're going to have a lot to talk about in the days ahead.

Yeah, the trend we're seeing is happening very quickly, a trend I thought would take much longer with these open models. Look back at the decisions that have been made around funding new GPU clusters for different startups trying to produce new foundation models. We saw $1.3 billion in funding for Inflection, and the Latent Space guys, in their post on AI engineers, have highlighted some of these for Mistral and others: there's this sort of hoarding of GPUs taking place. But those strategy decisions were made, what, two months ago? And now, is having that sort of compute infrastructure the differentiator that's going to make the difference in the business world? I'm not saying it won't be good for those businesses; they'll probably do great and wonderful things. But you don't have to have a large GPU cluster with thousands of GPUs to be a player and create value in the marketplace. So it's interesting to see how the dynamics of funding and business strategy are getting intermixed with this rapid proliferation, and with individual developers and small groups of developers creating these models, like OpenChat and others.

We're also seeing, going back to our conversation a moment ago about 7 billion parameters being an over/under decision point, because it changes how you're going to implement: with that under-7-billion range you're going to have whole industries focusing on things like that, because they may be working out on the edge. Whereas we might once have said "eventually it will...," now we're saying, let's go do LLMs on the edge. I can have a GPU that's a single board in whatever edge device we're talking about, and you're going to see whole industries pop up around the ability to do that, because you're within, as you pointed out, the RAM available on the GPU. That's going to create a whole bunch of new business cases.

One of the things on my mind right now, in such an explosive, wild west moment, is that, as is always the case, all of the concerns that touch onto these issues, cybersecurity, how it affects your workforce and productivity, how you integrate the tooling, what it means for changing business strategy and opportunities, all of this is trailing distantly behind, even things like AI ethics, which we've covered quite a bit, and legal frameworks in different countries and municipalities. How do you catch all of those things up with the fact that we're having this amazing Cambrian explosion in model availability, accessibility, and fragmentation into many different use cases that were not thought of two months ago?

Yeah, you highlighted the security side of this, and it's a really good note, because one thing I've seen is: you go to, for example, the Hugging Face LLM leaderboard. Let's say I want to use the greatest open LLM I can find. Maybe a lot of the licensing causes problems for me, but let's say all the licensing problems are equal. Then I go to the
leaderboard, I click on some of the models that are high up, and there's a lack of information around the data processing, the training set, the fine-tuning set, the testing, and security vulnerabilities, potentially prompt injection vulnerabilities, all of these things. It's similar to GitHub; it's the same with open source code. You can search for some tool, it might have a little bit of information in the README, and you might say, okay, great, import it, it solves my problem, and move on. But that's a recipe for introducing vulnerabilities into your code. It's why products like Snyk, which I think is a cool way I've found for developers to deal with that sort of issue on the code side, analyze your dependencies to look for known vulnerabilities in open source projects. But there's nothing like that for LLMs. Which of these LLMs has more hallucinations than another? Which of them has more toxicity? Which are more prone to prompt injection? None of that's on the leaderboard.

One thing to also note here is that we're addressing the technical side, and I don't necessarily mean code-technical, but things like the legalities and documentation and how you put compliance around it all. But I've also noticed, and I'm just mentioning it in passing because we can't delve into it, there's a huge cultural thing we're also trying to digest right now. We've talked this year about how 2023 is really the year this has been huge in the public's consciousness. People are using this stuff and they're aware they're using it, in many parts of their lives, things like the ChatGPT app; everyone's using it on their phones these days. I think I've had more conversations in the last three months around people trying to figure out not just the business aspect of how to adopt, but also a lot of fear, and a lot of concern about that. So that is becoming part of what we need to think about from a business strategy standpoint: it isn't just the cybersecurity and the compliance and all these issues, but also how do you bring the humans along for the ride and get them integrated as we make these massive leaps forward. So don't forget your humans in the equation as you try to take advantage of all this amazing LLM goodness.

Really good point, and I think some of the writing I wanted to share as our learning resources at the end of this highlights aspects of the points you just mentioned. I've been trying to tell people this recently: the LLM, or the generative model, the image generation model, in some ways people are thinking about those things like applications, but really they're tools that are embedded in applications. You're building an application for real people, users, that might make use of a tool like an LLM or an image generation model, but application development is still part of it, and coding and engineering is part of it, and security is part of it, and your UI/UX around how you interact with your customers is part of it. So thinking about these things as embedded tools within an application is important.

Jay Alammar, a previous guest on our show, has a really great article which I would recommend as a learning resource if you're thinking about how to create competitive advantage, or moats, with your AI applications. It's called "AI is eating the world," and he gives some really good analysis of where there are competitive advantages and where there aren't. He has this really nice diagram: models are down here; your application is here, and that's where you
live, at the application level. Above that, maybe, is the custom model or fine-tuning level, and then above that there are all of these things that are not unrelated to the model, but are more business concerns: how is it distributed, what sort of proprietary or sensitive data are you dealing with, what sort of domain expertise do you have that can be infused in your application, etc. Those are the sorts of things that can differentiate you, and I found his writing on this very helpful in framing my mind, so I'd recommend people look at it.

I like it in addition because it reminds us to stay grounded and be practical, and while the world is changing out from under us in so many ways, the workflow of how you think about applications and getting productivity out to people is still largely the same. There are new tools and stuff like that, but the same concerns exist, and so sometimes maybe you take a deep breath and go, "I know how to do this; we were doing this even before this moment." Good point. I think that's a good statement to end with. So thanks for journeying through the Cambrian explosion, or proliferation, with me, Chris. This has been fun. That's right, it's a space warp of models flying by us. Good times, Daniel. Thanks a lot. All right, talk to you soon.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts. Check out what they're up to at fastly.com and fly.io, and to our beat-freaking residents Breakmaster Cylinder for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time.

[Music]
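The "7 billion parameters fits on a single GPU" over/under discussed in the episode above, and why quantizing weights from float32 down to float16, int8, or 4-bit helps, comes down to simple arithmetic on weight storage. Here is a minimal back-of-the-envelope sketch of that arithmetic; it counts model weights only (activations, KV cache, and framework overhead add more), and the function name is illustrative, not from any library mentioned in the episode:

```python
# Rough GPU memory needed just to hold a model's weights at various precisions.
# Weights-only estimate: real memory use during inference is somewhat higher.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """GB (binary gigabytes) needed to store n_params weights at the given precision."""
    return n_params * bits_per_param / 8 / 1024**3

for n_params, name in [(7e9, "7B"), (30e9, "30B"), (65e9, "65B")]:
    for bits, prec in [(32, "float32"), (16, "float16"), (8, "int8"), (4, "4-bit")]:
        print(f"{name} @ {prec:>7}: {weight_memory_gb(n_params, bits):6.1f} GB")
# A 7B model at float16 needs about 13 GB, so it fits a single 24 GB consumer
# card or a 40 GB A100, while a 30B model at float16 (~56 GB) pushes you into
# multi-GPU territory or aggressive quantization -- the episode's point exactly.
```

These numbers are why so many open models cluster at 6.9-7 billion parameters, and why 4-bit quantization (which drops a 30B model to roughly 14 GB of weights) is attractive despite the accuracy risks the hosts mention.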
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Automated cartography using AI | Your feed might be dominated by LLMs these days, but there are some amazing things happening in computer vision that you shouldn’t ignore! In this episode, we bring you one of those amazing stories from Gabriel Ortiz, who is working with the government of Cantabria in Spain to automate cartography and apply AI to geospatial analysis. We hear about how AI tooling fits into the GIS workflow, and Gabriel shares some of his recent work (including work that can identify individual people, invasive plant species, building and more from aerial survey data).
Leave us a comment (https://changelog.com/practicalai/229/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Gabriel Ortiz – LinkedIn (https://www.linkedin.com/in/gabriel-ortiz-gis-cantabria)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Automated cartography (integration of different models: buildings, roads, vegetation) (https://geocontenidos.maps.arcgis.com/apps/webappviewer/index.html?id=6b923b724cba46efb37411fdd89ec9fa)
• Detecting and tracking the expansion of forests (period 1957-2020) using both legacy and modern imagery (https://storymaps.arcgis.com/stories/1db87a67f3614cc4b58f7e00f44af03e)
• Tracking invader species (https://cantabria.maps.arcgis.com/apps/webappviewer/index.html?id=a7305e3e80394d769ff789bc6c4909c4)
• Tracking urban growth with AI (https://experience.arcgis.com/experience/ba08444d130a47d4835b6cf2ddd2049a)
• Spatial behavior in beaches using AI (https://storymaps.arcgis.com/stories/862bf11ce2034208b47ef43f32a7e84a)
• Inference with SAM (Meta’s Segment Anything Model) over urban areas (https://geocontenidos.maps.arcgis.com/apps/mapviewer/index.html?webmap=921f134ac43240df8317d8d26d0caff1) /viewer.html?webmap=4af373c294e24394ae25e4acadab71cc
• SuperResolution on aerial or satellite imagery (https://cantabria.maps.arcgis.com/apps/webappviewer/index.html?id=f14ab31439644f118c3ca7ef8c2258c9)
• More of the work of Gabriel and his team can be seen here (https://mapas.cantabria.es) and also on his LinkedIn profile (https://www.linkedin.com/in/gabriel-ortiz-gis-cantabria)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-229.md) | 20 | 0 | 0 |

[Music]

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io.

Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist and founder at Prediction Guard, and I am really excited today, because my life has been filled with large language models for the past months and I feel inundated with information about those. But there's so much going on, so many amazing things happening in the AI world outside of the text modality, and today we have with us Gabriel Ortiz, who is principal geospatial information officer at the government of Cantabria in Spain. Welcome, Gabriel, how's it going?

Thank you, thanks for having me on and giving me the opportunity to share with you and the audience what we have been doing in the last few years regarding geospatial analysis and particularly artificial intelligence.

Yeah, and one of the things that stood out when we started talking: first of all, you're a listener of the show, so I love that you now get to be a guest on the show. That's so wonderful; I'm glad we have listeners who are doing amazing things as practitioners. But also, you're in Spain, which is one of my favorite places. My collaborators during my grad school days were in San Sebastián, and I spent time there, and I know there's so much innovation happening in that region and in Spain in particular. What's it like to be working in AI in Spain? Well, Spain is, I
think it's a great country to find, Professionals in all the branches of, engineering and uh there are many things, happening in the AI industry there is a, lot of a good you know environment of, startups uh growing and uh I really, encourage you to engage and and contract, people from Spain yeah that that's so, great and not only is there amazing work, going on there but it's one of the most, beautiful places I've I've been and even, when you logged in so our listeners, can't see it but I see the beautiful, sunshine and trees and and town behind, you through your window so I'm I'm a, little bit jealous Yeah you mentioned s, Sebastian I am pretty close to S, Sebastian s andere is a really really, beautiful city yeah yeah so we mentioned, that you work in geospatial I know so, I've been on the mapscaping podcast a, few times which has been has been fun, and I know that that industry is really, wrestling with kind of uses of deep, learning uses of AI and understanding, how to integrate that into workflows if, my understanding is right you, didn't come from uh data science, researcher sort of background into this, topic you came more from the geospatial, side so could you tell us a little bit, about how as a geospatial practitioner, you first started kind of dipping your, toes into deep learning and, understanding what it meant for for your, industry sure I have been working in the, US Special industry for more than 30, years I started working for topographic, control aetric control of Works H then I, move on uh to engineering companies, designing highways and railroads uh, dealing with uh environmental data and, always using GIS which stands for, geographical information system and you, as many of you know is a technology that, lets you operate and do analysis over, huge amount of data then I started to, work for uh government the canabria I am, now uh in the role of principal, geospatial uh officer as as you, mentioned but literally if you translate, directly from 
Spanish, it would be something like Chief of the Service of Cartography and GIS. My role here is not only being in charge of data production, but also the development of the infrastructure for geospatial analysis within our organization. That means for our staff, but also for our stakeholders outside, which is something very important for us: for the citizenship, for the community, and for the companies that are working with geospatial data. My team and I have something very ingrained in our DNA, which is the public service that we provide using AI and other sets of technologies, and every day we try to do our best to fulfill that target. Yeah, it's really inspiring to hear the motivations behind how you think about doing your work and the people you're serving, which is so great. I'm wondering, just practically: you mentioned GIS tooling and the processing of data in that space. Of course, deep learning and the AI space have their own sort of unique tooling, and sometimes weird tooling. So could you comment on what it is like for a geospatial practitioner to start adopting deep learning techniques, which I'm assuming have a different set of tools than geospatial people have used in the past? What is the current state of the tooling around mixing deep learning with geospatial? Is it difficult? Is it fairly segmented, or is it more integrated at this point? Well, at first sight it seems daunting and intimidating, but I have to say that it is not so difficult, just to demystify the AI technology a little bit. As you mentioned before, I am not a researcher in AI; I am an expert in the geospatial industry, and I will tell you my story of how I began. My first contact, or at least the first time that I paid attention to AI, was in 2012 with AlexNet and what happened in the ImageNet challenge. At that
point in time, classification of images was great, but it was not very applicable to the geospatial industry. It has applications and you can leverage it, but it is not what we do every day. Previous to that, I have to say, in 2010 or 2011 or something like that, I learned about the work of NVIDIA with GPGPUs, the general-purpose GPUs. I think Bill Dally talked about this in one of your early episodes, and that was very interesting for me, because in the geospatial industry we often have a lot of demand in terms of computing power when we operate on what we call raster data, which is no more than data organized topologically in a grid. For instance, an image is raster data, but so is, for example, a digital terrain model, which is a grid where you store in every pixel, at the center of every cell, the value of the altitude of the terrain over mean sea level. You perform calculations over those digital models, for instance getting the watershed or the viewshed of one part of the territory, and those calculations can span several days or even weeks, because despite the fact that the mathematics running under the hood is not very complex, you have so many pixels that it ends up being very demanding. What NVIDIA started to do in those days was to parallelize a lot of calculations: instead of using four or eight computational threads on your CPU, they were able to spread all the calculations among hundreds or even thousands of computational threads. That caught my eye, because it was very important for me, but at that point in time I thought: Gabriel, you are going to need a GPU, but not for artificial intelligence (I was not thinking about that), rather for calculations of a different nature. Then in 2015 and 2016 we witnessed the blossoming of a whole new generation of deep
model architectures. Just to mention some of those that had a big impact in computer vision: ResNet in 2015 (or I think it was presented in 2016, I'm not very sure); then U-Net, which has been extensively used; in 2017 the Facebook Artificial Intelligence Research group presented and proposed Mask R-CNN, which was an evolution of Fast R-CNN; and in 2018 I saw for the first time a demo within the geospatial realm from our data provider, which is Esri. Actually, I think you also had a couple of guys from Esri on a previous episode. What they were demonstrating was how you can detect swimming pools and oil rigs automatically, using a single-shot detector in those days, and that was kind of an aha moment for me, because I realized: well, you have to invest your time, this is definitely going to be a game changer, and you have to start working on this. So that was the moment. And from that point... you know, there are two kinds of persons. I will use a metaphor to explain. When you see the results of AI, some people think it's magic, and everybody likes magic and magicians. Some people end up falling in love with the magician; they are obsessed with the persona and the mystery and the whole schtick. But some other people want to know how the trick is done, and I think I belong to the second group. So it was not only that this looks like magic; the point was: how is this done? And from that point I started to work. We can delve into this if you want, but it is not so difficult, as I said before. Yeah, that's so great. I applaud you for digging in, not too early, when it was only a research topic, but as it started getting into practical applications you really took that and figured out how to apply it within your context appropriately, which is an approach not everybody takes, so I appreciate that. So, with the tooling that you're using... I think
maybe this is useful for people that haven't done geospatial as much. I know there are major tools like ArcGIS and others, and then you've got Jupyter notebooks where you train models, or GPU services where you can run inference, and other things. Have those merged at all? From within the tooling that you're using as a GIS professional, has some of the deep learning tooling been integrated into those tools, or is it mostly, at this point, "I'm going to export my data from the geospatial side, then use a notebook, and then import it back in," or something like that? Well, that's a smart question. As I said before, our software provider is Esri. We have worked with Esri for a number of years, and they are doing an excellent job in integrating many open source frameworks into their platform. We try to follow the literature, but we are constantly falling behind; it's extremely difficult. (Me too, every week. It's impossible.) And even constructing and completing the puzzle of installing all the frameworks and putting everything to work can be very complex. So we have a big advantage working with the Esri technology. They have a research and development team based in India, and I think these people are doing a great job facilitating the application of all that. In some of your previous episodes you have been talking about UX and interfaces for using artificial intelligence, and whether or not it makes a difference. It really does, because it's a way of democratizing the technology and making it accessible. That is one part of the story. I think it has facilitated our work a lot, because you not only need the frameworks, you need the whole platform to move across terabytes of data. The geospatial industry is highly demanding in terms of the data you have to work with, and it's not only the frameworks of open
source; it's how you prepare the labeling, how you structure the databases... there is a lot more science to it. Apart from that, what I did was start to learn the main concepts related to artificial intelligence from all the great resources that are completely free on the internet. On YouTube you have lessons from MIT and Stanford that can introduce you to the simplest concepts, such as a perceptron, backpropagation, or stochastic gradient descent. So I designed for myself a two-fold strategy: first, trying to gain experience hands-on with off-the-shelf models, but at the same time trying to learn the concepts underpinning the AI world. I think that's important. Many people think that artificial intelligence is a black box. It's not a black box; it's mathematics in action. Of course, it's not linear, and you cannot fully predict what is going on, but many of the things can be understood. Well, I love your perspective, Gabriel, on how you've developed a mental model of how these technologies work. I think that's an encouragement to others to explore these technologies while keeping in mind what they are and how one should interact with them as tools. But I'm so fascinated by some of the projects that you've been able to accomplish during your time using this technology, and I want to start diving into those a little bit. One of the ones that you pointed me to that was really fascinating reminded me of standing on the beach in San Sebastián, although it looks like you maybe have even nicer beaches up where you're at. So tell us a little bit about standing on beaches and counting people on beaches. Why is that important, and how did you get into this project of applying deep learning in that context? Yeah, definitely. I started working with deep learning at the end... I think it was the end of
2019 or something like that. Then came the pandemic, and after the pandemic, with the release of restrictions, somebody here at the Government of Cantabria said: hey, we are a little bit worried about the possibility of having uncontrolled crowds on the beaches. I have to say that Cantabria is a notable destination; we have more than 100 beaches, so you can have a big problem in terms of the spread of COVID-19, and they were worried. The first thing they asked me was: how can we get a calculation of how many people we have on every beach, when the tide is up and when the tide is down, and things like that? But that was just a simple calculation in terms of the surface or area that the beach has, and I said: I think I can go further, and I will count the people. And they said: what, are you crazy? Yeah, I'm not drunk; I think I can do it, because I had some experience using single-shot detectors, and by that point in time more models than single-shot detectors. And that is what we did: we started counting the people, because normally we have an archive of aerial surveys, conducted, as is normal, on clear-sky, sunny days, when everybody is on the beach in summer. So we knew very well the spatial behavior of use of every beach all across Cantabria, on different days and different months, no matter if it was a weekend or a holiday. We had a huge amount of data to analyze, and we developed some deep learning models that work even if you change the input signal, meaning changing the aerial survey. We could predict the sectors of every beach, not only in terms of absolute figures of population on a beach, but also which sectors the people tend to concentrate in. After that, we released a small application, which you can see in the notes of the podcast, where you can see some maps and, just for tourist interest, if you want to
a place, I want to go to a to a beach and I would, stay quiet and losey gooy you know, without many disturbances you can see, what places are the most suitable for, that use uh so it was a great experience, our first experience releasing something, yeah that's so so fascinating and it, makes so much sense after you say it I, know here I can think of so many more, applications for something like this I, know like in the in the US National, Parks you know thinking about crowding, and the impact on the natural, environment or or other things like that, and helping plan out for crowds at, certain points of the year there's, there's so much practical use of this, and this was amazing because yeah you, took this knowledge that you had been, building up and really applied it in the, moment during covid-19 when there was, this specific need but then it sounds, like there's continued usage past that, because even if I'm just a consu like, I'm a normal Citizen and I want to enjoy, the beaches this information is really, useful to me I I know myself I probably, would go to the quiet places of the, beach and and sit and listen to the to, the waves so that's um there are much, more interesting problems to try to, solve than the one that I described yeah, now later we started to work trying to, uh modelize or to model certain aspects, of how the territory works you have to, understand the territory as a whole as a, living entity which where everything is, related to everything so we started to, slice every variable and try to address, those variables with uh the help of of, AI for example we have developed some, interesting models we can delve into the, you know the architectures you want uh, used later on or whatever you have, interested in but uh some interesting, models for detecting and classifying, vegitation also for the evolution of, urban growth also for things like we, like tracking cars for example that is, like a kind of proxy of the Society how, the society moves and 
because everything is in our aerial surveys, you only have to have the skills to bring back that information and convert it into something useful. As the years went by, we have been able to produce some more relevant results; I will not talk about deep learning models but about solutions for tracking the territory. Yeah, and you've mentioned aerial surveys a couple of times. It may be useful for those in our audience who don't work in geospatial; they might have in their mind maps and things like Google Maps, where, oh, I could go and look at a satellite image, but it's not current, right? It's maybe one photo that was taken some while back. And you've talked about aerial surveys where you can actually learn both current information about what's going on in an area and also historical information. Could you help our audience understand, as a professional, what sort of data do you have access to, and how is that gathered practically and made available to you? Well, I have to say that everything I have been talking about can also be executed with satellite images. There are some differences, but of course you can do it with satellite imagery. The reason we work more with aerial surveys is that we are more focused on capturing this kind of information rather than working with satellite data. My region, Cantabria, is not very big, and we have in Spain a national plan that covers the whole country with aerial surveys every three years, and we also have a repository of satellite images. So you can use either input signal; the results will differ slightly. But apart from image capture with sensors, no matter whether airborne or satellite, we also work with a range of technologies, for example LiDAR data. I know that many in the audience have been working with LiDAR data. LiDAR can also be airborne; in fact, that was the origin of the technology, later
used from a plane, and it has been increasingly important in our domain. We also work with systems of record, with traditional databases, and a number of other things. If I had to say something about my job, it is that it is extremely interesting, because one day we are working with COVID data, for example, another day with energy data, another with environmental data. The Government of Cantabria has powers and duties in many domains; it is kind of like one of your states. Setting aside the difference in area covered by Texas or Florida alone, I think Spain is somewhere in between the area of Texas and the area of Florida, but that's the whole country, and my region is quite small. Still, it's a very interesting place to work for that reason, and the data comes from many different technologies and many different databases. [Music] This is a Changelog News break. OpenObserve is a cloud-native observability platform built specifically for logs, metrics, traces, and analytics, designed to work at petabyte scale. Huge. According to its creators, quote: "It's very simple and easy to operate, as opposed to Elasticsearch, which requires a couple dozen knobs to understand and tune. With OpenObserve you can get up and running in under 2 minutes. It's a drop-in replacement for Elasticsearch if you're just ingesting data using APIs and searching using Kibana. Kibana is not supported nor required with OpenObserve. OpenObserve provides its own UI, which does not require separate installation, unlike Kibana," end quote. An interesting offering indeed. Here are a couple of choice quotes from the comment section. User "get to the cha" says, quote: "I just tried this 3 days ago. As someone running a home lab who hadn't set up logging yet, it was a great find. I didn't have to learn and combine three-plus log technologies. It's just a single all-in-one monitoring server with web UI, dashboards, log filtering/
search, etc. RAM usage of the Docker container was under 100 megabytes," end quote. And user "surge a" says, quote: "Interesting product, thank you for the effort. I definitely want to give it a try. For me, though, setting up a system is not the primary pain point today. For what it's worth, signing up for a cloud service is not hard; the problem starts at the ingestion point," end quote. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news. Again, that's changelog.com/news. [Music] So, Gabriel, we talked a bit about this first project related to population and crowding on beaches, but you've done so much more. Could you highlight a few other things you've been able to identify or track with deep learning from these aerial surveys? Yeah, we have extensive work in the detection of vegetation. I have to say that we have only been using supervised learning, that branch of deep learning, and specifically working with different model architectures such as the ones I mentioned before: U-Net, Mask R-CNN, and some others. We are now testing the Segment Anything Model, but we haven't done anything with zero-shot learning for production. So what I am going to tell you has been achieved using model architectures that have been almost forgotten by the community. Everybody is focused on the SOTA architectures, and there is so much that can be extracted from the "old school" of artificial intelligence (it's not so old, right?). Yeah, and I think this is actually a misconception that we occasionally try to mention on the show: across the enterprise, not just in GIS but in manufacturing or marketing (people think of marketing with generative AI), the majority of applications are still, quote,
traditional machine learning. There are a lot of scikit-learn models out there still, or just supervised learning models, and it's awesome to highlight that here, because I think it is a misconception. Yeah, because when a paper appears, normally the authors do not exhaust the possibilities of the model. Professionals who are not specialists in the AI domain but have a lot of knowledge in a specific domain outside of AI, as in my case, can prepare and curate better labels; we understand the process that we are trying to model, and we have so much to give and to propose to the community. That's one of the reasons that when some people say, "Your models are quite good. How have you done it? Is it a brand new architecture? Is it something that you have created on your own?", I always say no, it is not; it is just using, in a smart way, model architectures proposed back in 2015 and 2016, but with a lot of very well created data. I also have to say that the computing power that we have at our disposal is quite modest; we don't have anything big or very extensive. The key is how you create the data. Yeah, and one of the things that you had mentioned prior to recording was this idea of automated cartography, as kind of an integration of a bunch of these different models that you've been working on. I'm wondering if you could first describe what you mean by automated cartography, and maybe even, for people who aren't familiar, what is cartography? I'm assuming modern cartography isn't like Magellan getting out his paper and drawing maps on parchment or something. What does cartography look like these days, and what do you mean by automated cartography with these sorts of models? Well, cartography is the art and the science of trying to model
the reality, abstract the reality, and plot it on a flat surface. It is a science that has been developing for many centuries, and up until now it was highly dependent on the human ability to trace and to draw everything on the surface of the Earth. As technology developed from the '90s on, we started to move very rapidly into digital technologies, and the automation of cartography took place not only with the advent of AI but several decades before. However, this is a revolution, because we have never been able to produce such a high degree of quality using so few people. There are some similar technologies, like remote sensing, which is the part of the technology in charge of analyzing satellite imagery and producing cartography; it recalls many things from artificial intelligence, but AI can now match its results in many other fields. So the revolution continues. It started, as I said before, in the '80s and '90s, but now it is a complete revolution, and I think that for the first time we have been able (there is an example that you can check out in the description of the podcast) to produce a map with a basic land coverage, where you have trees, shrubs, no vegetation, buildings, roads, or railroads, completely generated by AI. Of course it has some mistakes, but we left those mistakes in on purpose, because we wanted the rest of the community to be able to evaluate the capacity and the ability of the models to work alone. This is a question that just popped into my mind as you were talking about these models and what's possible: it's not perfect, right? No AI system is perfect, so there are going to be mistakes. I'm wondering, as someone who's been in GIS and been a practitioner for, I
think you said, 30 years now, I also imagine that human-based processes are error-prone, or at least slow, right? By the time a human processes a certain map, things have been updated, and it's maybe not current anymore. What do you think are the implications for cartography or GIS as we move into the future, where AI systems can maybe keep things more up to date, but with some mistakes, and can highlight certain areas that are incomplete, combined with human efforts to correct those mistakes? What do you see as the balance between trying to be automated with AI-based techniques and the role that human cartographers or GIS professionals play as these systems expand to more and more places? Yes, it's a very interesting question, because one of the big problems that we have is to keep up to date every single database that we release into the market or for our stakeholders. That's a very big problem, because it's always difficult, and one of the main advantages of artificial intelligence is that you can have a model, and it will probably not work perfectly with the next aerial survey, because that survey will have some differences in terms of colors or shadows or whatever, but you can fine-tune it, or maybe you can train the model from scratch again, and you can update something in a reasonable time frame. That is one of the things I am most attracted by: the capacity for updating things. It's a game changer, as I said before; artificial intelligence offers things that other technologies really don't. Yeah, and of course there are limitations. The expectation should never be that AI solves all of our issues, but it is going to solve some of our issues, or
solve some of our problems, but not all of them. From your perspective, how do you think about the current limitations of AI within GIS and cartography? What are some of the things on your mind with respect to that? Yeah, of course you have to bear in mind that we have limitations. What happens to me also happens to the teams in India or in the US whose work I am always watching. I would like to point out two limitations: one is computing power, and the other is the limitations of CNNs, convolutional neural networks, which is the technology that we are using right now. We can talk a little bit about model architectures and such. In terms of computing power, I think it's worth delving into the role of GPUs, because in the geospatial realm it's not well understood why we need a GPU. I don't know if it happens in other markets, but in our industry, when you talk to somebody about a GPU, normally my fellows and mates say it is something related to the IT department: "I don't want to be in charge of that." But it's not that at all. You have to be aware of what technology you have for calculation. The hardware is so important, and you have to speak the same language as a data scientist, the same language the rest of the community speaks. It is very important to understand that a GPU in your laptop is not the same as a DGX, or an A100 or H100, if we are talking about NVIDIA hardware. Everything is related to the amount of data that you want to put into the training: the quality of your training, the level of convergence that you are going to get, whether you are going to stay in a local minimum or reach and exhaust the possibilities of the labels that you are ingesting into the model. Everything is related to the hardware. I think Bill Dally and Anima Anandkumar,
in many of their talks, always talk about the trinity of AI. One part is the data; another is the software, the algorithms, many of which have been with us for a while (backpropagation and many of those algorithms date from the '80s, if not before); but the hardware is the third part. Bill always says it is the spark that starts this engine of creativity in AI, and I think that's true: you have to pay a lot of attention to computing power. And there is another limitation that is ingrained in the DNA of CNNs. As far as I know from my experience, you cannot expect them to perform exactly like a human being, and sometimes, in spite of creating your labels and your data very well, the model does not learn as well as you expect; but somewhere in between you can have a reasonable amount of success. What we do to overcome this is combine different model architectures, something very useful and widespread in the geospatial industry. For instance, we combine models at two different levels. At the architectural level, it's quite common to see the combination of ResNet with, for example, U-Net: in ResNet you remove the last part, the fully connected layers, and connect the remaining part to U-Net, so you are using ResNet for feature extraction, and then the rest happens in the rest of the U-Net architecture. It also happens with Mask R-CNN: we constantly use ResNet as a backbone, and then the rest of the model goes on from there. And there is a second level, which is combining the results of the inference when you have inference from two different model architectures. For example, talking about vegetation: imagine that you have one model that detects the big areas of vegetation very well but fails on the small spots, and you have another model that works very well for small spots but fails detecting the big
areas, because in the big areas it creates artificial holes and mistakes. You can combine the outcomes of those model architectures with traditional GIS techniques to merge all the results together and obtain a better-quality version of the layer that you want to infer. That has worked for me, and it is one of the ways we are trying to overcome the limitations of artificial intelligence. That's great, super practical, and I know that's what a lot of our listeners want to hear: some of the practical ways they can explore these technologies. Well, Gabriel, it has been an amazing pleasure to talk to you. As we close out here, there are a million things we could talk about; some we didn't get to, and we'll link them in the show notes. But as you look to the future, could you briefly, in the last minute or so, share with us what is exciting for you as a GIS professional looking to the future, either what you want to dig into next, or what you are encouraged by or optimistic about as you look to the future of your own work and how AI influences it? Well, I have to say that in my 30-plus years of working in the geospatial industry, these last two years or so have been the most exciting part of my career, because it is so creative. We are just scratching the surface of AI, and great things are coming. I think that with the advent of zero-shot learning, we have been watching since the first week of April what can be done with SAM, the Segment Anything Model, and I'm sure that new versions of SAM will come. When we combine that with LLMs, with large language models, and we can interact by voice and say, "hey, draw me all the trees in the image," it will be much easier to use this set of technologies. Anyway, just to finish, I would like to send a message to the audience, for those who are not artificial intelligence
researchers, like me: it is possible to apply this set of technologies even though you are not a specialist in that specific domain. It is also possible to get hands-on: take one of the off-the-shelf models and start playing around with it. I know the future will be absolutely focused on artificial intelligence; there will be a different geography in the next few decades. Awesome. Yeah, so inspiring. Thank you for your work, Gabriel, and it was awesome to have you on the show. Thank you so much. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music] |
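The raster computations described in the transcript (a digital terrain model as a grid of elevations, with per-pixel derived products like viewshed or watershed) can be illustrated with a minimal NumPy sketch. This is not the Government of Cantabria's tooling, just a toy per-pixel slope map; the `slope_degrees` name and the tiny synthetic DTM are invented for illustration:

```python
import numpy as np

def slope_degrees(dtm, cell_size=1.0):
    """Per-pixel terrain slope (degrees) of a DTM stored as a 2D grid.

    np.gradient computes finite differences along rows and columns,
    i.e. the same kind of whole-grid, embarrassingly parallel operation
    that GPUs accelerate for much larger rasters.
    """
    dzdy, dzdx = np.gradient(dtm.astype(float), cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# A plane rising 1 m per 1 m cell in x has a 45-degree slope everywhere.
dtm = np.tile(np.arange(5, dtype=float), (5, 1))
print(slope_degrees(dtm)[2, 2])  # 45.0
```

The same vectorized pattern scales from this 5x5 toy grid to the multi-gigapixel rasters mentioned in the conversation, which is why thread-parallel hardware matters so much there.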
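The foundational concepts the guest mentions learning from free MIT/Stanford material (perceptron, stochastic gradient descent) fit in a few lines of NumPy. The following toy is entirely synthetic, not anything from the episode's projects: a single perceptron trained with per-sample (stochastic) updates on linearly separable data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy separable data: label 1 when x0 + x1 > 1, else 0.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                    # SGD: update after every sample
    for i in rng.permutation(len(X)):
        pred = 1.0 if X[i] @ w + b > 0 else 0.0
        err = y[i] - pred                  # classic perceptron update rule
        w += lr * err * X[i]
        b += lr * err

acc = np.mean([(1.0 if x @ w + b > 0 else 0.0) == t for x, t in zip(X, y)])
print(acc)
```

On separable data like this, the perceptron rule converges to a decision boundary that classifies nearly every training point correctly.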
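The beach-crowding application described above (counting people per beach and finding the sectors where they concentrate) reduces, after detection, to binning detection centroids into a grid of sectors, essentially a 2D histogram. A minimal sketch with made-up coordinates (the function name and the detection points are hypothetical, not from the real pipeline):

```python
import numpy as np

def sector_counts(points, extent, shape):
    """Aggregate detected-person centroids (x, y) into a grid of sectors.

    points: (N, 2) array of detection centroids in map coordinates.
    extent: (xmin, xmax, ymin, ymax) bounding box of the beach.
    shape:  (nx, ny) number of sectors along each axis.
    """
    xmin, xmax, ymin, ymax = extent
    counts, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=shape,
        range=[[xmin, xmax], [ymin, ymax]],
    )
    return counts.astype(int)

# Hypothetical detections: 3 people clustered in one sector, 1 elsewhere.
pts = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0], [8.0, 9.0]])
grid = sector_counts(pts, extent=(0, 10, 0, 10), shape=(2, 2))
print(grid)        # [[3 0]
                   #  [0 1]]
```

The per-sector counts are exactly what a "quiet places on the beach" map needs: low-count sectors are the loosey-goosey ones.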
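The second combination level described near the end, merging the inferences of a model that is good at large vegetation areas (but leaves artificial holes) with one that is good at small spots, can be as simple as a mask union before any further GIS post-processing. A schematic NumPy sketch with synthetic masks; the function name and the disagreement metric are invented here, not the production method:

```python
import numpy as np

def merge_vegetation_masks(big_area_mask, small_spot_mask):
    """Union two binary masks from different model architectures.

    The union keeps the large areas from the first model while letting the
    second model fill holes and contribute isolated small spots. Also reports
    the fraction of pixels where the two models disagree.
    """
    a = big_area_mask.astype(bool)
    b = small_spot_mask.astype(bool)
    return a | b, float(np.mean(a ^ b))

# Model A finds a big patch but leaves a hole; model B finds the hole
# plus one small isolated spot that A missed entirely.
a = np.zeros((6, 6), dtype=bool); a[1:5, 1:5] = True; a[2, 2] = False
b = np.zeros((6, 6), dtype=bool); b[2, 2] = True; b[5, 5] = True
merged, disagreement = merge_vegetation_masks(a, b)
print(int(merged.sum()))  # 17
```

In a real workflow one would follow this with the traditional GIS cleanup mentioned in the conversation (e.g. dropping regions below a minimum mapping unit) before publishing the merged layer.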
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | From ML to AI to Generative AI | Chris and Daniel take a step back to look at how generative AI fits into the wider landscape of ML/AI and data science. They talk through the differences in how one approaches “traditional” supervised learning and how practitioners are approaching generative AI based solutions (such as those using Midjourney or GPT family models). Finally, they talk through the risk and compliance implications of generative AI, which was in the news this week in the EU.
Leave us a comment (https://changelog.com/practicalai/228/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• NYT Article: “Europeans Take a Major Step Toward Regulating A.I.” (https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-228.md) | 19 | 0 | 0 | Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. Well, welcome to another Fully Connected episode of Practical AI. This is where Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI news and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, a data scientist and founder at Prediction Guard, and I'm joined by Chris Benson, an AI strategist. How are you doing, Chris? Doing very well. How's it going today, Daniel? Oh, it's going great. I got back late in the night a couple days ago from my time in San Francisco, where we had an in-person podcast meetup, kind of a collab with the Latent Space podcast, which is an awesome AI podcast if you haven't heard of it. It was a really great time. I was a little bit tired yesterday, but I feel recovered today, which is good, because today is also one of our favorite other podcasts, the MLOps Community: they're having their LLMs in Production Part II conference today and tomorrow. By the time this podcast goes out it will have passed, but I think they'll post the talks on YouTube and all of that, so make sure to check out those talks; there are a lot of really good ones. Yeah, I'm sure there are. I've been learning a lot from them. Yeah, what a great community. I've joined their Slack and I'm chatting
with people about how they're deploying models and all that stuff, which is fun. Another thing happened today, Chris. I was at a co-working space here in town, shout out to Matchbox Coworking, I know a few people there listen to the show, and I ran into a friend, Tanya. She's been listening to recent episodes of the show, and she made a really good point: we haven't taken time for a while to stop and say, in this moment we're in now, when we say AI, what do we mean? What is AI now, today? That's a really good point. Thank you, Tanya, if I got the name right. The last time we talked about what it means, there was no such thing as generative AI, for instance. Yeah, definitely not in the way the term is used now. So I think that brings up a good point, Chris: what is generative AI? We can talk about that, but maybe first we should talk about what AI or machine learning was prior to generative AI, since that sort of machine learning and AI is still in existence, of course, and being used all throughout industry, but there's a difference between that and generative AI in my mind. Actually, this would be true of both kinds of AI: the way I think about AI or machine learning in general, at its most simple form, is as a data transformation. You put some type of data into one of these quote-unquote models and you get some other data out. It's like a software function, essentially. Now, of course, there's a lot going on within that function, but at its basic core, an AI model or machine learning model is something that takes in one form of data and outputs other data, like speech in and text out, or language in one language in and language in another language out. There is a really old-fashioned term that would be applied: it's a filter.
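To make that "model as data transformation" framing concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the word list, the threshold, the function names); the point is only the shape both kinds of function share: data in, data out, with some internal parameters.

```python
# A hand-written "filter": every bit of logic is authored by the programmer.
def to_uppercase(text: str) -> str:
    return text.upper()

# A machine learning model has the same shape -- data in, data out -- except
# that part of its internal logic (the parameters) is set by training rather
# than written by hand. This toy stand-in uses a hard-coded word list and a
# threshold where a real model would use millions of learned weights.
POSITIVE_WORDS = {"great", "good", "love", "awesome"}

def toy_sentiment_model(text: str, threshold: float = 0.2) -> str:
    words = text.lower().split()
    score = sum(w.strip(".,!?") in POSITIVE_WORDS for w in words) / max(len(words), 1)
    return "positive" if score >= threshold else "negative"

print(toy_sentiment_model("I love this show"))  # -> positive
```

Both functions transform input data into output data; the only difference is where the logic comes from.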
Software developers who have been around for a while might know about creating filters, and it's just an incredibly sophisticated filter, in that you get one thing in and you get a different thing out, and it's all about the relationship between the two. Yeah, and that brings in one level of a mental model for how we think about these things: we're going to put something into them, and we're going to get something out. Now, obviously these models are different than other software functions or filters that people have written in the past, and the key difference I share with people, at least when they're forming their own mental model around these things, is that in normal, quote-unquote, software engineering (I don't know if we have normal software engineering anymore), you have a function, and the engineer or programmer writes all of the logic of that function and determines what parameters should be used where. Like: I'm going to accept two numbers, add them together, and output the result. That is a data transformation, but the logic is completely programmed by the programmer. It has to all come as an original thought out of a programmer's head, correct. And there could be some flexibility, I guess, would be the right way to put it. I mean, software is flexible in general; I could have a function that adds two numbers together, and I could add any two numbers. It doesn't have to be one and two; it could be 42 and 17 or something like that. However, in a machine learning or AI model, which does one of these transformations, there's still an element, and this is maybe a misconception that people have, there's still an element of that software function that absolutely is written by humans; it is structured by humans. Have you found that to be a misconception? I do. I think people who are not in
the space intimately, as we and this audience would be, tend to think of it as, I mean, they won't admit to it, but they think of it as magic, a little bit. I get into a lot of business conversations, and I could take out the business words and put in the word "magic" and the conversation would still work, so it's a little bit instructive in terms of how people are perceiving it. Yeah, it's almost like there is software, but the bit that's the model just manifests itself out of the computer, right? In reality, what happens is there's a thing called an architecture, and all that means is that you have code written that does certain things within your function, or within your data transformation. That might be adding numbers together, or averaging things, or multiplying different numbers in various ways, and all of those things are combined or structured by a human programmer. Often researchers, in this case, come up with a model architecture; some people might have heard of BERT or GPT. These architectures are a form of a software function, but they have missing pieces in them, and what those missing pieces are called is parameters. So one example I give sometimes: let's say we wanted to write one of these machine learning functions to classify cats or dogs, given pictures of cats or dogs. I could have a very simple model architecture which says: if the percentage of red in the image is greater than x, classify it as a cat; if not, classify it as a dog. Now, I haven't said what x is, so how do I set this parameter? That's the gap in my machine learning model, the most simple of machine learning models. Well, what I can do is take a bunch of examples of cats and dogs and try a whole bunch of different x's, and whichever one gives me the best result, in other words, whichever one classifies those the best, I choose as my parameter. And this is what, at a much
larger scale, we call training, which people might have heard of. Now, the models that are used these days don't have one parameter like my simple cat/dog model; they have billions of parameters that are set. And just to add one little thing: that training process is based on an algorithm, which is a fairly simple math problem that you iterate through over and over again, comparing your results to what you're targeting, what you're trying to get to. There's an error there, and you're trying to reduce that error. So when people talk about training AI and there's this kind of mystique associated with it, there's no mystique, really; it's just running an algorithm over and over again until you get a more accurate, less error-prone answer. It's as simple as that. Yeah, in that sense it's kind of a brute-force implementation of trial and error. Not totally brute force, because pure trial and error would require you to try every combination, which for a six-billion-parameter model would take the lifetime of the universe to explore; people have devoted much of their lives to optimizing these types of problems, so it is highly optimized. But at its core, like you're saying, you're trying to reduce an error, or what's called a loss, and find those optimized parameters to perform a task. So in the case of dog and cat classification, you would have a bunch of images labeled as dog or cat, you would feed those into the model with a bunch of different combinations of these parameters, and the winning combination that reduces the error or the loss would be your set of ideal parameters, which you can then use to classify new images that don't have a label yet. So: I don't know if this image is a dog or a cat, I'm going to put it in, and then I can classify it. And that's what's called the inference
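As a concrete aside, the one-parameter cat/dog "model" just described can be written out in a few lines of Python. The red-fraction feature and the data are invented for illustration, and real training optimizes billions of parameters with gradient methods rather than a grid search, but the train-then-infer shape is the same.

```python
# Toy version of the one-parameter classifier: each "image" is reduced to its
# fraction of red pixels (a made-up feature), labeled cat or dog.
labeled = [(0.8, "cat"), (0.7, "cat"), (0.9, "cat"),
           (0.2, "dog"), (0.3, "dog"), (0.1, "dog")]

def classify(red_fraction, x):
    # The "architecture": one rule with one missing piece, the parameter x.
    return "cat" if red_fraction > x else "dog"

def train(data):
    # "Training": try many candidate values of x, keep the one with the
    # fewest errors (the lowest "loss") on the labeled examples.
    best_x, best_err = None, float("inf")
    for i in range(101):
        x = i / 100
        err = sum(classify(r, x) != label for r, label in data)
        if err < best_err:
            best_x, best_err = x, err
    return best_x

x = train(labeled)
# "Inference": apply the fitted parameter to an unlabeled example.
print(classify(0.85, x))  # -> cat
```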
process. So there are two steps: a training process and an inference process, and this is generally what's called supervised learning, which means that you have labeled, gold-standard examples. This, I would say, dominated the AI scene and still dominates much of what's done in industry. I think people have the misconception that, oh, supervised learning is so 2016. No, it's still the vast majority of what's deployed out there and what people are actually using in real life. Yeah, I'm totally guessing, but I would say at least 95% of what is out there in industry is that, and that might be a conservative guess. Yeah, so this is still the dominant frame for thinking about machine learning and AI, at least across industry. That has shifted a bit, though. At least when it started shifting my mindset was probably around 2019 or 2020, when some of these so-called self-supervised models started coming out. The idea is that there was maybe a first shift, and then a second shift that I've seen. There was an era of data science where supervised learning meant: gather your dataset, train your model with those examples, and you have your supervised machine learning model. Well, people gradually learned that if we make our models bigger and expose them to enough data for a particular mode, let's say text, or images, well, if I have a large model that's been trained to recognize 17 different things in images, I might have a use case where I want to recognize an 18th thing, or maybe three different things. That model already has embedded in it the capability to find really good features of images and do image classification based on those features, and so I don't have to retrain a whole model from scratch. What I do is take that large model that's been trained on a lot of images, and I do a process called fine-tuning
or transfer learning to then do this new task. So I saw this first shift. I'm going to call it a first shift; I mean, this is something people have talked about. You've just coined a phrase, you know that, don't you? Yeah, so this would be a shift from thinking purely about supervised learning, training from scratch with your own data, into this realm of: Google trains a big model for image detection, and I take that and fine-tune it for my own purposes. I'm not starting from scratch, and I don't need as much data. And I would say this framework also dominates a lot of what's happening in industry right now. There are NLP use cases for this, NLP being natural language processing, where maybe you have a model that's trained to translate from English to Arabic, and you want to translate to an Arabic vernacular. You would take that parent model, what's called a parent model or a base model, or more recently a foundation model, and then fine-tune it to this new scenario where your task is slightly different. Okay, Chris, that brings us to our next wave, or change, in the landscape of AI, which we already talked about: the move from purely supervised learning to fine-tuning from a large parent model. And now we're in this wave of generative AI, which is the first wave of AI that has really hit the public perception so widely. Yes, it's been the game-changing thing for the public. They've been hearing about AI in the media, they've been loosely aware of it, but they suddenly had some tools that were powerful and placed directly into their hands, and that has made a huge difference. They came out late last year, I guess, but this year, 2023, has really been the year where the public's perception of AI has substantially changed. Yeah, and these models, these large models like those used in the GPT family
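The fine-tuning idea discussed above can be sketched without any real ML framework. In this illustration, the "pretrained feature extractor" is a frozen stand-in for something like a BERT encoder (its features are made up), and "fine-tuning" trains only a tiny task-specific head on our own small dataset.

```python
# Stand-in for a large frozen pretrained model: we reuse its output features,
# we never retrain it. The features themselves are invented for illustration.
def pretrained_features(text: str) -> list:
    return [len(text), text.count("a"), text.count("!")]

# "Fine-tuning" here trains only a tiny task-specific head on our own data:
# a single threshold on one feature, found by simple search.
def train_head(examples):
    def accuracy(t):
        return sum((pretrained_features(x)[2] > t) == y for x, y in examples)
    return max(range(10), key=accuracy)

# Our small labeled dataset (True = "excited" text, for illustration).
data = [("wow!!!", True), ("fine.", False), ("great!!", True), ("ok", False)]
threshold = train_head(data)

def predict(text):
    return pretrained_features(text)[2] > threshold

print(predict("amazing!"))  # -> True
```

The point of the sketch: the expensive part (the feature extractor) is reused as-is, and only the small task-specific piece is trained on new data.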
of models, or open-access ones like Llama or Falcon that people might be seeing, or the image-based ones like Stable Diffusion or DALL-E. All of these still fit this model of a data transformation or a filter: you put some type of data in, you get some type of data out. For some of these models there are fundamental differences in how they're trained, remember that training process we talked about, but there's also quite a big difference in how they're being used; in my mind, that's almost the bigger shift in how people are thinking about using these models. It used to be that when you had one of these parent or foundation models, the model wasn't that useful in and of itself. So you have, say, the base BERT model; there were some use cases for that model specifically, but the real power came from the fact that you could fine-tune that model downstream with your own data for a specific task. So instead of having a general model, you train a machine-translation-specific model or a sentiment-analysis-specific model on your own data. Before we move on from there, I just want to address, for those who are not familiar with foundation models, the value in doing what Daniel was just describing. It lies in the fact that much of the training that occurs in a model is very resource-intensive and very time-consuming and is not specific to your problem, and so somebody can train a model to maybe 90 or 95% of what you want, maybe even further, and there's a huge investment there. But it's that last little bit where you have many, many use cases that you can fine-tune it for, and so if you can start by having somebody else, like a big cloud provider, do the first giant chunk of training, then you can take that almost-done model and customize it to your need, as can thousands and thousands of
other people with different use cases. So you're transferring the training cost to a large organization that does it anyway, and that's the value: you can buy into a large model much more easily. I just wanted to clarify that in case anyone wasn't intimately familiar with foundation models. Yeah, and that's part of the reason why the large tech companies have dominated the production of these models. Google, Facebook (or Meta), OpenAI, etc. have dominated that scene because they have a lot of resources available to them, although there are some exceptions to that rule as well. If we think now about generative AI, like I mentioned, there's still this concept of one type of data in, another type of data out, and there's still this concept of a foundation or base model. There is some shift in how these large models are trained; we do have an episode about reinforcement learning from human feedback, so if people are interested in the details of that training process and how it's different, you can look back at that episode. But I think maybe a more significant shift distinguishing these generative models from previous waves is that people now view the foundation models being produced these days as useful in and of themselves, without any further fine-tuning, although sometimes people do use fine-tuning later on. And they're generative because the way people think about using these models is by putting a sequence of information in and getting a completion of that information out. That's what we mean by generative: I have some sequence of things in, and the next thing that should come out, the completion, is what is, quote, generated. That doesn't necessarily have to be text, at least in how people think about these models. It could be, you know, you start out playing a few notes on your
piano, and then the model generates the next bar of music, or it could be text, like autocomplete: I put in text and out comes the completion of that text. Yeah, when you think about it, it can really be any kind of information, a sequence over time that's structured. We see this with generative images, we see it in music, we're seeing text, obviously, and there may be other paradigms to come in terms of how people approach different ways of looking at information. That's a big topic of interest right now: kind of turning things on their side, could you do that? And I think that point right there, about not just the baseline text and image and music and such, but what other information streams this approach could possibly be applied to, because it's already been game-changing in terms of productivity output from what we've just talked about, but that may just be the tip of the iceberg of what's to come. I'll hand it back over to you before we go too far. Yeah, I think it's a really good point, Chris, because in recent days I've been telling people how I've had to rebuild my intuition a little bit as a data scientist. My knee-jerk reaction as a data scientist is to gather some data and train a model; maybe a generalization, but not so far off from the truth. But now, with these models, I can solve a lot of the problems I need to solve without doing any training at all, by doing this sort of engineering and processing around the information that goes into a generative model so that it produces the right thing out. We can give some examples of generative models and how this works out in practice. Maybe I want to generate a lifestyle image for a product, something like that. I could take the
product description, I could take some other elements, like some instructions, and form that into what's called a prompt, the input to a model like Stable Diffusion or Midjourney, and say: generate an image for this product, and you inject the product description, and make it black and white, set in New York, photorealistic, something like that. So you can see I'm constructing a prompt where I expect the completion of that prompt, the thing generated out of it, to be that sort of image, grounded in the product description. That's one example where you would insert that and actually get the image out. I've done this with my wife's products, and it works quite well. Of course, you could also do that with text. Let's stick with the marketing example: maybe I want an ad now to go with my lifestyle image that I'm going to run on Facebook, and so I could use a model like one of the GPT models from OpenAI, or I could use Cohere, or I could use the Falcon model that was introduced recently, which is a large language model, a type of generative model. I could put in a prompt that says: hey, here's my product description, and I want to run a sale, something like this; generate a good Facebook post for me, or a good Instagram post. The output of that, the completion or the generation, is what comes out, and now I have an image and I have ad copy. But we don't have to limit ourselves to that. There are music generation models now, and you can describe the mood you want behind, maybe, a video corresponding to that ad and generate music. And maybe I want to convert the image to a video; I could generate video content out of a prompt and add that in. So you can start to see how chaining all of these things together, multiple calls to these types of models, can produce really magical output, and that, I think,
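A minimal sketch of that prompt-construction step in Python. The helper names and template wording are invented for illustration, and the actual calls to an image or text model (Stable Diffusion, a GPT-family model, etc.) are left as comments, since the real APIs differ.

```python
# Hypothetical prompt builders: the structure (inject a product description
# plus style/instruction elements into a template) is the point, not the API.
def build_image_prompt(product: str, style: list) -> str:
    return f"A lifestyle product photo of {product}, {', '.join(style)}."

def build_ad_prompt(product: str, offer: str) -> str:
    return (f"Write a short, upbeat Instagram post for this product: {product}. "
            f"Mention the offer: {offer}.")

product = "a hand-poured soy candle"
image_prompt = build_image_prompt(
    product, ["black and white", "set in New York", "photorealistic"])
ad_prompt = build_ad_prompt(product, "20% off this weekend")

# In a real pipeline, each prompt would be sent to a generative model, and the
# outputs (image, ad copy, music, video) chained together:
#   image = image_model(image_prompt)   # e.g. Stable Diffusion / Midjourney
#   copy  = text_model(ad_prompt)       # e.g. a GPT-family model
print(image_prompt)
```

The chaining described in the conversation is just this pattern repeated: each model's output (or the same source description) feeds the next prompt.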
is what's dominating this current wave of AI that we're in. It is, and we have barely touched on the use cases, because I think it's only limited right now by imagination. My friend Brent Seagull on the weekends likes to play with exactly these ideas, exploring them, and a couple of weeks ago he was saying: hey, look, I generated a professional-quality PowerPoint presentation that is indistinguishable from what a PowerPoint professional, with graphics and everything, could do. He did that entirely out of, I believe, the GPT-4 model with ChatGPT, and he was like, yeah, I did this in a matter of minutes. I was able to generate the code which would create the PowerPoint, and for every slide I gave it a single topic I cared about, or I'd give a whole section of topics and tell it to create the slides, and it was amazing; it was better than most people could have done. Now, that was his weekend project, which is great, but if you look at that one use case, think of the number of human hours in businesses all over the world that go into generating presentations and documentation. By the time he was done with his brief weekend project, he could do something that would previously have taken him a week of work time, and once the process was in place, he could do it in a few minutes. So if that became one of a million use cases that people are starting to do all over the world, that turns into real money in all industries, and that's just one case which I think is representative of why the technology is so amazingly powerful. If you multiply that times as many things as your imagination can come up with, then yes, we have a technology now that we've barely tapped into and which will have an immense impact, whether you think it's positive or negative, on the world around us. I'm Jared, and this is a Changelog News
break. DeviceScript is Microsoft's new TypeScript programming environment for microcontrollers. It's designed for low-power, low-flash, low-memory embedded projects and has all of the familiar syntax and tooling of TypeScript, including the npm ecosystem for distributing packages. This project has a lot of devs excited. Jonathan Barry says, quote, dope, TypeScript for hardware. Always glad to see these attempts at bringing web technologies to embedded systems and see what sticks; even when they don't, they inspire innovation. Zach Silveira says, quote, this is so much better than MicroPython. And Andrea Gamari says, quote, this is the first Espruino competitor, and I think it's going to be huge. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. ...if that was helpful, but I think it hopefully will be helpful for more than just you. There are a lot of people wrestling with how to think about these sorts of models, how we should interact with them, and all of those things, and that really brings us to our next noteworthy news trend, which is around the fact that these generative models can do a lot of things, and certain of those things are viewed, either for legitimate reasons or not, as extremely risky. And I would say there are both legitimate and non-legitimate reasons, but yeah, a lot of people view these things as risky, and I'm not necessarily talking about automating away jobs, which is maybe another topic we've talked about on the show before, but actual risk associated with running these models. What risk? To humanity's survival, I think, is the context that
people tend to talk about it in. Yeah, what are the views, or what are the things that are hitting your desk on that front, Chris? So people are debating that topic. Admittedly, I think everyone agrees these are incredibly powerful capabilities, but do they constitute an autonomous risk to us in some form? A lot of the time you'll see people arguing for and against on various specific issues, but the thing I've noticed the most is that they're not always talking about the same thing. I'll have two people arguing two sides of the point I'm watching, but they're not really talking apples to apples, and hearing many such arguments in the last few months, I've actually dramatically changed my own perception. I haven't heard anyone say quite what I'll propose in a few moments, which has to do with that kind of miscommunication, kind of talking past each other. Yeah, I think my strategy has mostly been, although I think these are legitimate things to consider, to put my head down and build things that I think are useful and practical, and I haven't necessarily given a lot more time to thinking about, you know, the end of humanity as we know it. So I've probably put more time into that part of it than you have, I think. And here's the thing: I think people focus on the wrong thing on this topic. They focus on whether the existing generative models, as we're now calling them, are leading us into artificial general intelligence, AGI, whether it's aware, whether it's conscious, and whether it would have an intent to attack, and I think that completely misses the point. I'll take two seconds and argue both sides
for a second. If you want to argue against current technology being a risk to humanity, then you're kind of pointing and saying: clearly these models are not conscious, and they are not intelligent in the sense of having a broad awareness of the world around them and their own motivations, and so the people arguing that side scoff at the very suggestion that a model could threaten humanity. Within the context of that set of arguments, I think they're absolutely right. But there's also another side to it, which is actually where I am migrating a little bit in my own personal thinking: what if it doesn't take AGI to be a threat to humanity? What if the threat can arise from the fact that we have humans in the mix, humans with motivations who create models and have specific things they're trying to achieve? You can take the power of models and shape them in certain ways to address different tasks, and so it might possibly be that if there is a danger to humanity, and I don't know, I'm just speculating, but if there is, it's shaping a bunch of models that by themselves can do one task really well, or, as the generative ones, can give you sets of things, and combining them with software and with human intent to do damage. That's what I'm more concerned about: not that the models will wake up, suddenly become conscious, and decide they don't like me very much and want to get me. I'm a lot more worried about humans orchestrating a bunch of powerful tools, and maybe automating those tools in such a way that the tool keeps going, you know, without constant intervention. That's the type of thing I would actually give a little bit of credence to in my own personal thinking. When I say that, I mean not that it's happening, but meaning
that it's worthy of consideration. When we talk about things like AI ethics, that's where I would focus: there are external concerns that don't require AGI and don't require consciousness to achieve some really bad outcomes. What do you think of that? Yeah, I can give a concrete example that I think fits within your domain of expertise. Let's say we have a large, expensive, and dangerous piece of equipment like an airplane or a helicopter, and there is obviously a vast amount of manuals and documentation about its maintenance, its operation, the safety around it, etc. There could be a case, and this would not even involve a bad actor, where we put a chat-with-your-docs interface on top of all of those manuals and the operation and maintenance information. You could imagine (again, these models are essentially generating output that's probable; they don't know anything about, like you're saying, reality or intent; there's no knowledge there, it's just completion) that they could complete someone's request, "How should I fix this issue with my airplane or helicopter?", and the model could say, "Just take that part off, it's a throwaway part, it doesn't matter," based on the text that it's seeing. That could be a significantly life-endangering decision if the maintenance technician, or whoever it is, actually trusts it as fact. Now, you could also imagine bad actors getting into that scenario and modifying information such that it would generate dangerous output. So I think that's a concrete example that would endanger lives but does not involve AI becoming sentient. Yeah, I think there are many, many
use cases you could create along those lines. The thing that I also think people lose sight of is that this is evolving so fast. The capabilities we're talking about today have come a long, long way in the past two years, and two years from now I'm expecting them to have gone at least that far again, if not more. So it's a moving target in terms of what those capabilities are, which means that the risk profiles associated with what we're talking about will also change. There may be a time when more research comes out, things are released, and there is more of a sense of understanding, which is a different thing from consciousness. I've heard that debated recently by some fairly significant figures in the AI world: whether completion is evolving into understanding. I don't know the answer to that, but it would not surprise me for it to evolve at some point to that level and beyond. So we have to be conscious of the risk profile changing as we're trying to identify where things are going. To your use case: I still feel, and I know that probably most of the listeners will not agree with me on this, very comfortable with modern AI models flying aircraft, personally. I think in many cases, and I say this as a pilot, they are far better than the humans doing the same, because you can train the model to essentially have a million hours of equivalent experience, whereas a great human pilot might have 10,000. So one of the things I'm going to throw out as a point to address is that it becomes very hard for us not to be outrun on AI ethics. AI ethics has always been chasing the development cycle; that's been one of the problems: how do you catch it up so the decisioning gets in there early enough to matter? But we're also seeing the development cycle speeding up,
and I've had some conversations with people lately about whether it's possible to do a catch-up there, given the evolving state over time. So there might be a whole AI ethics show we can have at some point in the future about how you address that quagmire. Yeah, and there was actually a news article, a development this week, related to exactly what we're talking about. Regulators and governments are trying to catch up with the state of generative AI (generally not keeping up, I would say), but this week I'm looking at this New York Times article, "Europeans Take a Major Step Toward Regulating A.I.": quote, "The European Union took an important step on Wednesday toward passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology," end quote. So this regulation took another step towards passing, and if you look through the article (I haven't read the full regulation, but I looked at a few links), it is really focused on uses of AI that are seen as risky. One use cited in the article is using AI to automate processes around utilities, water and electricity and all of that, which, if it fails, has vast consequences for large populations of people. So there are these sorts of risky scenarios. One point that you have made before, Chris, in relation to the autopilot things, which is worth mentioning here, is also thinking about the fact that humans are fallible, right? Machine translation gets a bad name for producing really terrible output in certain cases; well, I've worked in that industry, and I know it's very possible for humans to produce translations that are very, very poor as well. So a question is, for the task you're
considering, I think it's good to balance both. It is good to think about the risk, because there is risk, whether in a manufacturing plant or an aircraft or a utility; there's risk associated with something going poorly. But you also have to think: what is the risk and how do I test this AI or automated system, and what is the risk and how do I test these human operators? In reality, one could be safer than the other, and it might not be the one you would expect from the start. There's an emotionalism that drives all these topics. And keeping in mind that this is evolving, I will argue with anyone that there is a point in time in the future where, going back to the airplane example, the AI models for those particular tasks are so good that statistically they make many orders of magnitude fewer errors than very experienced human pilots. They don't get tired, they've seen every weather condition in the training, they can navigate through all sorts of stuff. If I was going to take my family on a transatlantic flight, there's a point in time where a rational person who's not driven by fear and emotion is going to say: yes, statistically I'm much more likely to arrive safely at my destination with my family in the AI-driven airplane. We can debate when that happens, but I don't think it's terribly rational to say it's never going to happen and you'd always prefer the human, because I don't think the statistics will substantiate that. Yeah, I think there's also a risk I've heard proposed, maybe more so over the past months than before, which isn't really about AI automating jobs away, but about how this technology fundamentally transforms humans and the things that they do. So pilots, I'm sure, like to fly, right? I love it. Yep. Exactly. So if an AI is better than you, and a
regulator, a government regulator, comes along and says, "Okay, it's no longer safe for humans to fly; Chris, no license for you," that's kind of a bummer, right? I mean, it might be the right option, but it is a bummer. It also falls into this area of content generation. I've talked to journalists and other people who say, "Hey, maybe an AI can do as good a job in certain cases, or a better job than human writers, in producing certain types of content, but isn't it a shame?" If that's no longer needed, how is that going to shape how humans write in the future? We are going to change; the nature of humanity will change with this, and it doesn't take AGI to do that. That's what I'm getting at: what we do with AI versus what we don't do with AI is going to fundamentally change how we self-identify. Not only will I most certainly, eventually, lose my license to AI (that will happen at some point, because putting me in a plane in the air, no matter how good I am, will become too big of an acceptable risk), but that will also happen with automobiles. At some point you will go to an amusement park to drive a car, or an amusement area to fly a plane, much like we go to amusement parks now to ride roller coasters, because there's a point in the future, and we can debate when it is, where the technology is so good that it will not make sense to put a human who might crash and kill people into the mix. That will happen someday. I'll finish my comment by saying: I have a daughter who is 11. In her lifetime, assuming she lives out her life, the nature of what it means to be human and to live with AI will dramatically change our self-identification. It's a big statement, but I'm quite positive that
to be the case. Yeah, and I think, closing on a positive note: there's a lot of benefit that we're seeing, and we will work through some of these things. For the people listening to this podcast, Practical AI: we've talked about the mental model of how these things operate, and I encourage people to get hands-on with these models. They're not going to be malicious against you; as we've talked about, they don't have any sentience or consciousness. So get hands-on with these models and develop practical tooling around them; that's what I think is needed. Develop practical tooling that can help us move forward and create applications that really help our customers, delight our customers, and help those around the world in various ways. So yeah, I encourage people to get hands-on and get involved, and these powerful tools are part of what it means to be human now. Yeah, for sure. Well, we've all been cyborgs for some amount of time, carrying around cell phones, so this shouldn't surprise people that things are advancing. But I don't have my Vision Pro from Apple yet, so we'll see how that develops. All right, Chris, it's been fun. Good conversation, Daniel. [Music] Thanks. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking resident, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music]
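The chat-with-your-docs failure mode discussed above (a model confidently "completing" a dangerous maintenance answer that has no basis in the manuals) is commonly mitigated by refusing to surface answers that aren't grounded in the retrieved documentation. Below is a minimal, illustrative sketch of that idea; the `token_overlap` heuristic and all names are hypothetical, not any particular product's method, and real systems use trained factuality models rather than word overlap:

```python
def token_overlap(answer: str, sources: list[str]) -> float:
    """Fraction of answer tokens that also appear in the retrieved source passages."""
    answer_tokens = set(answer.lower().split())
    if not answer_tokens:
        return 0.0
    source_tokens: set[str] = set()
    for passage in sources:
        source_tokens.update(passage.lower().split())
    return len(answer_tokens & source_tokens) / len(answer_tokens)

def guarded_answer(answer: str, sources: list[str], threshold: float = 0.7) -> str:
    """Refuse to surface a generated answer that is poorly grounded in the manuals."""
    if token_overlap(answer, sources) < threshold:
        return "UNSUPPORTED: consult the maintenance manual and a qualified technician."
    return answer

# A grounded answer passes through; an invented "throwaway part" answer is blocked.
manual = ["torque the retaining bolt to 25 Nm and safety-wire it per section 7"]
print(guarded_answer("torque the retaining bolt to 25 Nm", manual))
print(guarded_answer("just remove the part, it is a throwaway part", manual))
```

The point of the sketch is the shape of the control, not the heuristic: generation is untrusted by default, and a separate check decides whether its output may reach the technician.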
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI trends: a Latent Space crossover | Daniel had the chance to sit down with @swyx and Alessio from the Latent Space pod (https://www.latent.space/podcast) in SF to talk about current AI trends and to highlight some key learnings from past episodes. The discussion covers open access LLMs, smol models, model controls, prompt engineering, and LLMOps. This mashup is magical. Don’t miss it!
Leave us a comment (https://changelog.com/practicalai/227/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Shawn Wang – Twitter (https://twitter.com/swyx) , GitHub (https://github.com/sw-yx) , Website (https://swyx.io)
• Alessio Fanelli – Twitter (https://twitter.com/FanaHOVA) , GitHub (https://github.com/fanahova) , Website (https://www.alessiofanelli.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Latent Space podcast (https://www.latent.space/podcast)
• Featured Latent Space episodes:
• Benchmarks (https://www.latent.space/p/benchmarks-101#details)
• Reza Shabani (https://www.latent.space/p/reza-shabani#details)
• MosaicML and MPT (https://www.latent.space/p/mosaic-mpt-7b#details)
• Segment Anything (https://www.latent.space/p/segment-anything-roboflow#details)
• Mike Conover (https://www.latent.space/p/mike-conover#details)
• Featured Practical AI episodes:
• From notebooks to Netflix scale with Metaflow (https://changelog.com/practicalai/150)
• Capabilities of LLMs 🤯 (https://changelog.com/practicalai/219)
• ML at small organizations (https://changelog.com/practicalai/207)
• Prediction Guard (https://www.predictionguard.com/)
• Data Dan (https://datadan.io/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-227.md) | 9 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app servers and database close to your users, no ops required. Learn more at fly.io. [Music] Well, hello. We have a very special episode for you today. I got the chance to sit down with the guys from Latent Space, swyx and Alessio, out in San Francisco. They were kind enough to let me into their podcast recording studio, and we got a chance to talk about our favorite episodes of both of our shows and some of the overall takeaways we've had from those discussions. We cover some of the trends that we've been seeing in AI, and they even get a chance to grill me on my opinions about prompt engineering. So enjoy the show. Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO-in-residence at Decibel Partners, joined by my co-host swyx, writer and editor of Latent Space. Today we're very excited to welcome Daniel Whitenack to the studio. Welcome, Daniel. What's up, guys? It's great to be here. This is a podcast crossover. If you recognize this voice: Daniel is the host of Practical AI; he's been in my ear on and off for the past five years, covering the latest and greatest in AI before it was cool. Yeah, before the AI hype, back in those weird data science times, whatever that is now. Yes, everything is merging and converging. So I'll give a little bit of your background, and we can go into your personal side: you got your PhD in mathematical and computational physics, and then spent 10 years as a data scientist,
most recently at SIL International, which I actually thought was an agritech thing, but then I went to the website and it's actually a nonprofit, an international NGO. Yeah, they do language-related work all around the world, so I spent the last five years building up a team working on low-resource scenarios for AI, if people are familiar with that: doing machine translation or speech recognition, that sort of thing, in languages that aren't yet supported. Yeah, we'll talk about this later, but I think episode three of Practical AI was already featuring the global community that AI has and addresses. Yeah, it's been an important theme throughout, over 200 episodes. And you recently left SIL to work on Prediction Guard, which we can talk a little more about. You are also interim senior operations development director at NT candle. Yeah. And what else should people know about you? Yeah, as can probably be noted from the intro, I love working on various projects and having my hands in a lot of things, and I code on the side for fun; that's how I usually get into these side projects. Outside of that, I live in Indiana. I was telling you guys that I'm trying to coin the term "cerebral prairie," so we'll see if that catches on. Probably not. You're the second guest in a row from Indiana; Linus from Notion was from Indiana. We were talking about how there's a surprising number of international students there, and it's very true: Purdue is a strong university. Yeah, a very strong university, and a great place to spend time; there are a lot of fun things that happen around that area too. I'm also very into music, but not any sort of popular music; I play mandolin and banjo and guitar, and play folk music. Low-resource music! Low-resource
languages, yeah, all those things. Anything low-resource is in my territory, for sure. Maybe we can cover the story of Practical AI: how you started it, what the early days were like. Fill everyone in. Yeah, it was kind of a winding journey. Some people might be familiar with the Changelog podcast, which I think has been going now for 11 or 12 years; it's pretty prolific, originally around open source, now software development in general, and they have a network of podcasts now. At a Go conference (I'm a fan of the Go programming language, that's another fun fact), at GopherCon, I think in 2016, I met Adam Stacoviak, who is one of the hosts of the Changelog. At the time I was giving a talk about data science, something, I forget, but he pitched me: "We've been thinking a lot about doing a data science podcast." At the time he had a name for it, I think "Hard Data" or something like that, which never caught on, for obvious reasons. I stored that away and didn't really do anything with it, but over the next couple of years I met Chris Benson, who's my co-host on Practical AI, and helped him with a couple of talks at conferences; we met through the Go community as well. He was working at a different company at the time (now he's a strategist with Lockheed Martin working on AI stuff), but he reached out to me and said, "Hey, would you ever consider doing a co-hosted podcast?" At that point I remembered my conversation with Adam, so I reached back out to Adam at the Changelog, and we started working on the idea. We wanted it to be practical. There are a lot of people doing hands-on things with AI now, but back then there were some podcasts that were all AI hype, not practical at all, which is why
we came to "Practical AI": something that would actually benefit people. That's a great thing to hear from listeners, that when they listen to the show they actually learn something useful for their day-to-day. That's the goal. Yeah, nice. And I think that's one of the things our podcasts have in common: there's a lot of content out there that can get a lot of clicks with fear of AI and all these different things, and I think we're all focused on more practical, day-to-day usage. Yeah. Tell us more about Prediction Guard and how it fits into making AI practical and usable. Yeah, sure, appreciate that. Prediction Guard is what I've been working on since about Christmastime. Originally I was thinking a lot about large language model evaluation and model selection, but it's morphed into something else. What I've realized is that there's market pressure, internal company pressure, for people to implement these generative AI models into their workflows, because enterprises realize the benefits they could have. But in practice, you go from typing something into ChatGPT, which is amazing, to "how do we do this in our enterprise, where we have rules around data privacy or compliance issues?" And also, we want to automate things, or do data extraction, but I just get text vomit out of these models. What do I do with that? I just get unstructured text; how do I build a robust system out of inconsistent text? So Prediction Guard is really focused on those two things: one is compliance, running state-of-the-art AI models in a compliant way, and then layering on top of that layers of control for structuring and validating output. Some people might be familiar with projects like Guardrails or Guidance or these things, so we've
integrated some of the best of those things into the platform, plus some ways to easily do self-consistency checks and factuality checks and other things on top of large language model output. Nice. We did have Shreya from Guardrails as a guest. Yeah, that's another episode that people really like. Maybe, just to give people a sense of what Practical AI is as a podcast, do you want to talk about your two or three favorite episodes? We can alternate with our favorites; we've done some prep for this episode. Yes, our conception of this is kind of like a review, for listeners who are new to either of our podcasts, to go back and revisit the favorites. Yeah, I can talk about some personal favorites of mine and then some favorites from the audience. Some of my personal favorites have actually been what we call "Fully Connected" episodes, where Chris and I talk through a subject in detail together without a guest. To be honest, those are great episodes just for me to learn something, an excuse to learn something. We've done that recently with ChatGPT and instruction-tuned models, with Stable Diffusion and diffusion models, and with AlphaFold. All of those are episodes with just the two of us, talking through how, practically, you can form a mental model of how these models were trained, how they work, and what they output. Those are some of my favorites, because I do a little bit of prep, we talk through all the details, and it helps me form my own intuition around those things. Another personal favorite was a series we did about AI in Africa; that was really cool. You mentioned the global AI community; we did a series of episodes, all labeled "AI for Africa," highlighting
things like Masakhane. People don't realize that some of the models we develop here, on the West Coast or wherever, don't work great for all use cases around the world, and there are a lot of thriving grassroots communities, like Masakhane and Turkic Interlingua, that have really built models for themselves: machine translation and speech recognition models that work for their languages, or agriculture computer vision models that work for their use cases around the world. So those are a couple of highlights on my end. Do we go with our personal highlights? Yeah, go ahead; I think you already picked one out. Yeah, mine is definitely the episode with Mike Conover from Databricks, who led the Dolly effort there. Obviously the content is great and Mike is extremely smart and prepared, but the passion that he had about these things... the Red Pajama dataset came out the morning that we recorded, and we were all nerding out, like, why is that so interesting? He was so excited about it, and it's great to see people who have so much excitement about the things they work on. It's an inspiration, in a way, for us to do the same, I think. Personally, I tend to drive the news-driven episodes, the event-driven ones, where something will happen in AI and I'll make a snap decision that we'll have an episode recorded on Twitter Spaces, and a bunch of people will tune in. The one that stood out was the ChatGPT app store, the ChatGPT plugins release, where like 4,000 people tuned in, and we did like an hour of prep. I think it's important for me, as a quote-unquote journalist, to be the first to report on something major and to provide a perspective on it, but also to capture an audio history
of how people reacted at the time, because this is something we were talking about in the prep: ChatGPT plugins have become a disappointment compared to our expectations then, but we captured it, we captured the excitement back then, and we can compare and contrast where we thought things were going with where things have actually ended up. It's a really nice piece of, I guess, audio journalism. Yeah, it was just last year; I mentioned Stable Diffusion and all that, and at the time I always had in my mind, "Oh, everything's going to image generation; should I quit doing NLP and start thinking about images?" And now all I do is NLP and language models. But at the time, that's what was on our minds. Same thing: I was working on a web UI for Stable Diffusion, just like a thousand other frontend developers, and yesterday was the first time I'd opened Stable Diffusion in six months. A lot has changed; it's still an area that's developing, but it's not driving the thought process at the moment. Yeah, well, especially because it depends on what you think you want to do, and I'm definitely less visual and much more of a text-driven person, so I naturally lean towards LLMs anyway. I can hit some listener favorites. We have one clear favorite, which is actually, I would say, a surprise to me, not because the guest wasn't good or anything, but because the topic was Metaflow. I don't know if you've heard of Metaflow; it's a Python package for full-stack data science modeling work, developed at Netflix, and we had Ville Tuulos on, who was the creator of that package. That episode has maybe 30% more listens than any other, and I think the title, something like "From notebooks to production," is key. Yeah, so it's this idea of from notebooks
to production: there are all sorts of things that prevent you from getting the value out of these methodologies, and my guess would be that talking about that is the key feature of that episode. Metaflow is really cool; people should check it out. It's one way to do versioning, orchestration, deployment, and all these things that are really important. A takeaway for me was that the model lifecycle (some people might call it full-stack data science) interests people so much. Beyond making a single inference, or beyond doing a single fine-tuning: what is the lifecycle around a machine learning or AI project? I think that really fascinates people, because it's the struggle of everyday life in actual practical usage of these models. It's one thing to go to Hugging Face, try out a Space and create some cool output, or even just pull down a model and get output. But how do I handle model versioning and orchestration in my own infrastructure? How do I tie in my own dataset, and do it in a way that's fairly robust? How do I take these data scientists who use all this weird tooling and mesh them into an organization that deals with DevOps and non-AI software? Those are questions people are wrestling with all the time. Yeah, it feels a little bit in conflict with the trend of foundation models, where the primary appeal is that you train once and then never touch it again, or you release it as a version and people just prompt off of it. And I feel this evolution, moving from essentially the MLOps era into, for lack of a better word, LLM Ops. How do you feel about that? No, I think you're completely right. I think there will
always be a place in organizations for task-specific models, scikit-learn models or whatever, that solve a particular problem, because organizations like finance companies will always have a need for explainability or whatever it might be. But I do think we're moving into a period where I've had to rebuild a lot of my own intuition as a data scientist: from thinking "gather my data, write my training code, output my model, serialize it, push it to some hub, deploy it, handle orchestration" to now thinking "which of these pre-trained models do I select, and how do I engineer my prompting and my chain?", maybe going on to fine-tuning, which is still a really relevant topic. Some of the things I've been working on with Prediction Guard have a parallel in MLOps, but they're a slightly different flavor. I think it's how MLOps is graduating to something else; people are still concerned about ops, it's just, as you say, a different kind of ops. Yeah, and I think that's reflected in our most popular episodes too: all three of our most popular episodes are model-based, not infrastructure-based. Number one is the one with Reza Shabani, where we talked about how they trained the Replit code model and the "Amjad vibes" evaluation they used to figure out whether or not the model was good. That makes sense for our community; it's mostly software engineers and AI engineers, so code models are obviously a hot topic. Yeah, that was really good, and I think it was one of the first times we went beyond just listing traditional benchmarks, which is why we did a whole thing about AmjadEval. A lot of companies are using these models, and they're using
off-the-shelf benchmarks to do it. Another episode that we'll talk about is the one with Jonathan Frankle from MosaicML, and he also mentioned that a lot of the benchmarks are multiple-choice, but most production workloads are open-ended text generation questions. So how do you reconcile the two?

Yeah, did you all get into the whole space of LLMs evaluating LLMs? This was something we talked about on a recent episode with Jerry from LlamaIndex: on the one hand, generating questions, like you're talking about, to evaluate LLMs, or using an LLM to look at a context and a response and provide an evaluation. That's definitely something that has come up in a few of our episodes recently, where people are struggling to evaluate these things. So we've seen a similar trend, in one direction thinking about benchmarks, and in another direction thinking about this sort of on-the-fly or model-based evaluation, which has existed for some time. In machine translation it's very common: Unbabel uses a model called COMET, and one of the most popular, highest-performing machine translation evaluators is a model, not a metric like BLEU. So yeah, that's a trend that we've seen: evaluation, and specifically evaluation for LLMs, which can kind of get dicey.

Yeah, we did a Benchmarks 101 episode that is also well liked, and we talked about this concept of benchmark-driven development. The benchmarks used to evolve every three or four years, and now the models are catching up every six months, so there's kind of a race between the benchmark creators and the model developers. The state-of-the-art benchmarks are here, and GPT-4 gets ninety-eighth-percentile results on a lot of them. GPT-4 is not AGI; therefore, to get to AGI, we need better evals for these models, to start pushing the boundaries.
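The model-based evaluation idea mentioned here, one model scoring another model's open-ended answer against a reference, can be sketched in a few lines. This is a minimal illustration, not any specific framework's API: the `judge` function below stands in for a real LLM-as-judge call (here it just uses naive token overlap), and the eval set is invented.

```python
def judge(question, reference, answer):
    """Stand-in for an LLM-as-judge call: score an answer 0-1 against a reference.
    Naive token overlap here; a real system would prompt an LLM for the score."""
    ref_tokens = set(reference.lower().split())
    ans_tokens = set(answer.lower().split())
    if not ref_tokens:
        return 0.0
    return len(ref_tokens & ans_tokens) / len(ref_tokens)

def evaluate(eval_set, generate):
    """Run a candidate model (`generate`) over an eval set and average judge scores."""
    scores = [judge(q, ref, generate(q)) for q, ref in eval_set]
    return sum(scores) / len(scores)

# Hypothetical open-ended eval set: (question, reference answer) pairs
eval_set = [
    ("What does COMET evaluate?", "machine translation quality"),
    ("Is BLEU a model or a metric?", "a metric"),
]
```

The point is the shape of the loop: open-ended generation in, a scalar judgment out, averaged over a set, which is exactly what multiple-choice benchmarks fail to capture.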
Yeah, I think a lot of people are experimenting with using models to generate these things, but I don't think there's a clear answer yet. Something that I think we were quite surprised to find, specifically in HellaSwag, was that the benchmark examples, instead of being manually generated, were adversarially generated. And then, I mean, this is kind of segueing, we're not really going in sequence here, segueing into our second most popular episode, which was with Roboflow, and which covered Segment Anything from Meta. I think you guys had a discussion about that too.

Yeah, it's been mentioned on the show; I don't think we've had a show devoted to it.

Well, the most surprising finding when you read the paper is that something like less than 1% of the masks that they released were actually human-generated; a lot of it was AI-assisted. So you have essentially models evaluating models, and the models are themselves trained on model-generated data.

Yes, we have very many layers in at this point.

Yeah, and I know that there have been a few papers recently about the sort of things that were done with LLaMA and other models around model-generated output and datasets. It'll be interesting to see; I think it's still early days for that. At the very minimum, what all of these cases show is that models are either evaluating models or using simulated data. I think back a few years ago we would probably call this simulated data; I don't think that term is quite as popular now. Or data augmentation: augmentation, simulated data. This has been a topic for some time, but the scale at which we're seeing it done is kind of shocking now, and encouraging, in that we can do quite flexible things by combining models together, both at inference time but also for
training purposes.

Well, have you ever come across this term of mode collapse? What I fear, especially as someone who cares about low-resource stuff, is that by stacking models on top of models on top of models, you just optimize for the median use case, or the modal use case.

Yeah, that is a concern; I would say it's a valid concern. I do think that with these larger models, and this gets more into multilingualism and the makeup of the various datasets behind these LLMs, the more we can have linguistic diversity represented in these LLMs (and I know Cohere For AI just announced a community-driven effort to increase multilinguality in LLM datasets), the more it benefits the downstream lower-resource languages and lower-resource scenarios, because we can still do fine-tuning. We all love to use pre-trained models now, but in my previous work, when you were looking at an Arabic vernacular language rather than Standard Arabic: there's so much Standard Arabic in datasets that making the leap to an Arabic vernacular is much, much easier if that Arabic is included in LLM datasets, because you can fine-tune from those. So it's encouraging that that can happen more and more. There are still some major challenges there, especially because most of the content that's being generated out of models is not in, you know, Central Siberian Yupik or one of these languages. So we can't purely rely on those, but my hope would be that the larger foundation models see more linguistic diversity over time, and then there are these grassroots organizations, grassroots efforts like Masakhane and others, that rise up on the other end and say: okay, we'll work with our language community to develop a dataset that can fine-tune off of these models. And hopefully there's benefit both ways in
that sense.

Yeah, since you've mentioned Masakhane a couple of times, we'll drop the link in the show notes so people can find it, but what exactly do they do? How big of an impact have they had?

Yeah, if people aren't familiar, if you go to the link you'll see it: they describe themselves as a grassroots organization of African NLP researchers creating technology for Africa. We have our own biases, as people in an English-driven, literate world, about what technology would be useful for everyone else. It probably makes sense to some listeners to say, wouldn't it be great if we could translate Wikipedia into all languages? Well, maybe, but actually the reality on the ground is that many language communities don't want Wikipedia translated into their language. That's not how they use their language, or they're not literate and they're an oral culture, so they need speech; text won't do them any good. That's why Masakhane started, as a grassroots organization of NLP practitioners who understand the context that they work in and are able to create models and systems that work in those contexts. There are others; you can hear them on the AI for Africa episodes that we have, which talk about agriculture use cases, for example. Agriculture use cases in the US might look like a John Deere tractor with a camera. I don't know if people know this, but some of these big John Deere tractors literally have a Kubernetes cluster on the tractor, an at-the-edge Kubernetes cluster that runs these models, and when you're laying down pesticide, there are cameras that will actually identify and spray individual weeds rather than spraying the whole field. That's the level that is maybe useful here; in Africa, maybe the more useful thing is around disease or drought identification, or
disaster relief, or other things like that. And there are people working in those environments, in those domains, who know those domains and are producing technology for those cases, and I think that's really important. So yeah, I'd encourage people to check out Masakhane, and there are other groups like that; if you're in the US or Europe or wherever and you want to get involved, there are open arms saying, hey, come help us do these things. So get involved too.

What else is in your top three?

Oh yeah, one recent one was with Rajiv Shah from Hugging Face. Some people might have seen his really cool videos on LinkedIn or other places; he makes TikTok videos about AI models, which is awesome. His episode was called "The Capabilities of LLMs", and I thought it was a really good way to help me understand the landscape of large language models and the various features, or axes, that they're situated on. One axis, for example, is closed or open: can I download the model? But on top of that there's another axis, which is whether it's available for commercial use or not. And then there are other axes. We already talked about multilinguality, but there's also task specificity: there are codegen models, there are language generation models, and there are of course image generation models as well. So I think that episode really helps set a good foundation, no pun intended, for language models, to understand where they're situated. When you go to Hugging Face, and there are, what, 200,000 models now? I don't know how many models there are. How do I navigate that space and understand what I could pull down? Or do I fit into one of those use cases where it makes sense for me to just connect to OpenAI or Cohere or Anthropic? It helps you situate yourself. So I think that's why that episode was so popular, as he
kind of lays all that out in an understandable way.

How do you personally stay on top of models? There are leaderboards, there's Twitter, there's LinkedIn...

Yeah, I think it's a little bit spread out for me between the sources that you mentioned. As podcasters, I think that's one of the benefits for us. If I didn't have, every week on Wednesday, "I'm going to talk about this topic", whether I'm planning to think about a certain thing or not, it kind of prompts you to look at what's going on. So that is an advantage of content creators: it is a responsibility, but it's also an advantage, in that we have the excuse to have great conversations with people every week. But yeah, Twitter's a little bit weird now, as everybody knows, though it's still a good place to find that information. And then sometimes, to be honest, I go to Hugging Face and I'll search for models, but I also look at the statistics around the downloads of models, because generally, when people find something useful, they'll download it over and over. So sometimes when I hear about a family of models, I'll go there, look at some of the statistics on Hugging Face, and try a few things.

Yeah, and for some of these forks, I see the download numbers, but I've never heard of them outside of Hugging Face.

Yeah, it's true. And some of them will be a fork, or a fine-tune or something, and you do have to do a little bit of digging around licensing and that sort of thing too. But it is useful. There are tons of people out there doing amazing stuff who aren't getting recognized at the, you know, Falcon or MPT level, but there are a lot of people doing cool stuff who are releasing models on Hugging Face, maybe just because they found it interesting.
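The browse-by-downloads-and-license habit described here amounts to a simple sort-and-filter over model metadata. The records below are made up for illustration (the real Hugging Face Hub exposes similar fields, such as download counts and license tags, through its API):

```python
# Hypothetical model metadata, mimicking fields a model hub exposes
models = [
    {"id": "acme/llm-7b", "downloads": 120_000, "license": "apache-2.0"},
    {"id": "acme/llm-7b-chat-fork", "downloads": 450_000, "license": "other"},
    {"id": "labs/code-3b", "downloads": 80_000, "license": "mit"},
]

def shortlist(models, allowed_licenses):
    """Keep models whose license permits your use, most-downloaded first."""
    usable = [m for m in models if m["license"] in allowed_licenses]
    return sorted(usable, key=lambda m: m["downloads"], reverse=True)

for m in shortlist(models, {"apache-2.0", "mit"}):
    print(m["id"], m["downloads"])
```

Note how the most-downloaded entry here, a fork with an unclear license, is exactly the kind of model the licensing check filters out, which is the "digging around licensing" step mentioned above.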
Any unusual ones that you've recently found?

Well, there's one that I'll highlight, which I thought was cool. I don't know if you all saw that Meta released this six-modality model. It was interesting because, when I was at SIL, we did this work with Masakhane and Coqui, a speech tech company, to create these language datasets in six African languages. I was like, okay, that's cool; we formed the datasets, it was satisfying. But now I'm learning that Meta then went and found that data on Hugging Face, and it's kind of incorporated in these new models that Meta has released. So it's cool to see the full-cycle thing happen: there were grassroots organizations seeing a need for models, gathering data, doing baselines, and now there's extended functionality in a more influential way, at that higher level.

Yeah. I mean, talking about open and closed models: when we started the podcast, it kind of looked like a cathedral kind of market, where we had Cohere, Anthropic, OpenAI, Stability, and those were the hottest companies. Now, as you mentioned, you go on Hugging Face, and, I just opened it right now, there's the Nous Research 13-billion-parameter model that just got released, fine-tuned on over 300,000 instructions. Models are just popping up everywhere, which is great. And yeah, we had an episode, as I mentioned, with Jonathan Frankle and Abhinav from MosaicML to introduce MPT-7B and some of the work that they've done there. I think one of their motivations is keeping the space as open as possible, making it easy for anybody, ideally on MosaicML's platform, to train their own models and whatnot. So that's one that people really liked. I thought it was really technical, so I was really a
little worried at first. I was like, is it going to fly over most people's heads? But it was actually...

We're going more technical.

Exactly. Now that was a good learning: leaning in. Exactly. And Jonathan is super passionate about open source. He had this rant halfway through the episode about why it's so important to keep models open, and I actually edited a crowd applause into the podcast, which I kind of love. I love little audio bonuses for people listening along, and I think the Changelog guys do that really well, especially in their newer episodes.

Yeah, there is a way for us to integrate some of those things, like the soundboard thing, and we've never gotten into it too much. I need to work with Jared from the Changelog and see.

It spices it up, you know.

Exactly, exactly. You can only have so many hour-long conversations about ML.

Yeah, I keep thinking that, but then we keep going.

Right, right. Sorry, I didn't mean it like that; it's just that it switches it up and makes it audio-interesting, adds variety. Cool. I don't know if there are any other highlights that we want to do.

I'll just highlight maybe one more. Kirsten Lum was on; she had an episode about machine learning at small organizations. I think that's a great one if you're a data scientist or a practitioner or an engineer at either a startup or a mid-size company. The thing that she emphasized was these different tasks that we think about, whether it's curating a dataset, or training a model, or fine-tuning a model, or deploying a model. Sometimes, at a larger organization, those are functions in and of themselves, but when you're in this sort of mid-range organization, that's a task you do. So think about those tasks as tasks of your role, and time-box them, and understand how to do all of those
things well, without getting sucked down into any one of those things. That was an insight that I found quite useful in my day-to-day, as well as starting to get a little bit of a spidey sense around: hey, I'm spending a lot of time doing this, which probably means I'm stuck in too deep. Like, I'm making my MLOps too complicated to track versions and tie all this stuff together; maybe I should just do a simple thing, paste a number in a Google Sheet, and move on, or something.

I think that's a good segue into some of the other work that you do. You run the datadan.io website, which covers the different types of workshops and advising that you do. I think a lot of founders especially are curious about how companies are thinking about using this technology. There are a lot of demos on Twitter, a lot of excitement, but when founders are putting together something that they want to sell, they're like: okay, what are the real problems that enterprises have? What are some of the limitations that they have? We talked about commercial use cases and things like that. Can you maybe talk about two or three high-level learnings that you've had from these workshops, on how these models are actually being brought into companies and how they're being adopted?

Yeah, maybe one higher-level comment on this: even though we see all these demos happening and everybody's using ChatGPT, the reality in the enterprise is that most enterprises still don't have LLMs integrated across their technology stack. That might be a bummer for some people, like, oh, it's not quite as pervasive. But I actually find it refreshing, because some of us feel like stuff happens every week; it's exhausting to keep up, like, oh, if I don't keep up with this stuff, then I'm getting left behind. But it takes time for these things to trickle down, and not everything... like, we were talking about the
stable diffusion use case and others; not everything that's hyped at the moment will be a part of your day-to-day life forever, so you can take some comfort in that. I think it's really important for people, if they're interested in these models, to really dig into more than just a single prompt. The practical side of using generative text models, or LLMs, really comes down to what some people might call prompt engineering, but, you know, understanding things like giving examples or demonstrations in your prompt, using things like guardrails or reject statements or Prediction Guard to structure output, and doing fine-tuning on your company's data. There's kind of a hierarchy of these things. I think you all know Travis Fischer; he was a guest on Practical AI and talked about this hierarchy, from prompt engineering, through data augmentation, to fine-tuning, to eventually training your own generative model. I've really tried to encourage enterprise users, and those that I do workshops with, to think through something like that hierarchy with these models. Get hands-on, do your prompting, but then, if you don't get the answer that you want immediately... I think there's a tendency for people to say, oh well, it doesn't work for my use case. But there's so much of a rich environment underneath that, with things like LangChain and LlamaIndex, and, you know, data augmentation, chaining, customization, fine-tuning, all this stuff that can be combined together. It's a fun new experience, but I find that enterprise users just haven't explored past that shallowest level. So, in terms of the trends that I've seen with the workshops: I think people have gone to ChatGPT or one of these models, they've seen the value that's there, but they have a hard time connecting these models to a workflow that they can use to solve problems.
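The "examples or demonstrations in your prompt" step of that hierarchy can be sketched as a plain string template. This is a generic illustration, not any particular library's API; the sentiment demonstrations are invented, and the model endpoint you would send the result to is left out:

```python
def build_few_shot_prompt(instruction, demonstrations, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the new input."""
    parts = [instruction, ""]
    for example_input, example_output in demonstrations:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model completes from here
    return "\n".join(parts)

# Hypothetical demonstrations for a sentiment-classification task
demos = [
    ("The support team resolved my issue quickly.", "positive"),
    ("The app crashes every time I open it.", "negative"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    demos,
    "Setup was painless and everything just worked.",
)
# `prompt` would then be passed to whatever LLM endpoint you use.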
Before, we all had intuition: I'm going to gather my data, it's going to have these five features, I'm going to train my scikit-learn model or whatever, I'm going to deploy it with Flask, and now I have a cool thing. Now all of that intuition has been shattered a little bit, so we need to develop a new workflow around these things. And I think that's really the focus of the workshops: rebuilding that intuition into a practical workflow that you can think through and solve problems with, practically.

You have a live prompt engineering class. Prompt engineering: overrated or underrated?

Yeah, I think prompt engineering as a term is probably too hyped. I think engineering and ops around large language models, though, is a real thing, and it is sort of what we're transitioning to now. That term gets used in all sorts of different contexts. It could mean just, oh, I wrote a good prompt and I'm going to sell it on Twitter or something.

PromptBase, the marketplace of prompts. I wonder how they're doing, to be honest, because they get quoted in almost every article about prompt engineering. They got really, really good PR.

Yeah. I mean, if people can sell their prompts, I'm all for that. It's cool: I've got prompts right here! But some people might just mean that, and I think that's maybe overhyped, in my view. But I do think there's this whole level of engineering and operations around prompts and chaining and data augmentation that is a real workflow people can use to solve their problems, and that's more what I mean when I'm referring to, however you want to combine the word engineering with prompting and language models.

Yeah, I've just been calling it AI engineering.

AI engineering, that's good.

Wrangling the AI APIs, knowing what to do with them: that is a skill set that is developing, and that is a
specialty of software engineering.

Yeah, it is what it is. And part of something I'm really trying to explore is this spillover of AI from the traditional ML space, where you needed a machine learning researcher or machine learning engineer, into the software engineering space, and there's this rising class of what I'm calling the AI engineer, someone who is specialized in conversing in the research, the tooling, the conversations, and the themes.

And do you think there are unique challenges for someone coming from that latter group, engineers that are advancing into this AI engineer position, versus, probably more like my background, where I was in data science for some time and now I'm kind of transitioning into this world? What do you think are the unique challenges for both groups of people?

Oh, I mean, I can speak to the software side and you can speak to the data science side. It's simply that many of us are dealing with a non-deterministic system for the first time, one that, by the way, we don't fully control, because there's this conversation about whether GPT-4 regressed in its quality, and we don't know, because model drift is not within our control; it's a black-box API from OpenAI. But beyond that, there's this sense that the latent space of capabilities is not fully explored yet. There are 175 billion, or one trillion, parameters in the model, and we're maybe using, like, 200 of them. Where is that meme about using 10% of our brain? We're probably using 10% of what the model is capable of, and it takes some ingeniousness to unlock that.

Yeah. I think from the data science perspective, there's probably a desire to jump too quickly to these other things, like fine-tuning or training your own models, whereas if you really do take this prompting, chaining, and data
augmentation seriously, you can do a lot with models off the shelf, and you don't need to jump immediately into training. So I think that is a knee-jerk reaction on our end, and fine-tuning is going to be around for the foreseeable future, as far as I can tell. But data scientists maybe have a different perspective, because we've been dealing with uncertainty, or non-deterministic output, for some time, and have developed some intuition around that. But that was mostly when we were controlling the datasets, when we were controlling the model training and that sort of thing. So to throw some of that out, but still deal with the non-determinism, is a separate kind of challenge for us.

I just remembered another thing that we've been developing in the Latent Space community, which is this concept of AI UX: the idea that the last mile of showing something on the screen, and making it consumable and easily usable by people, is perhaps as valuable as the actual training of the model itself. I don't know if that's an overstatement, to be honest. Obviously, you're spending hundreds of millions of dollars training models, and putting it in some kind of React app is not the biggest innovation in the world, but a lot of people from OpenAI say ChatGPT was mostly a UX innovation.

Yeah. I think, leading up to ChatGPT... when I saw the output of ChatGPT, I don't think I had the same Earth-shattering experience that other people had in believing, oh, this output is coming from a model. Sure, it came from a model, but the reception to that interface, and the human element of the dialogue, that was so... Maybe it's both, right? You're not going to get that experience if you don't have the innovation under the hood, in the modeling and the dataset curation and all of that, but it can totally be ruined by the UX. I typically give the
example: one day in Gmail, I logged in, and as I was typing my email, I had the gray autocomplete. I did not get a popup that said, do you want us to start writing your emails with AI? It was so smooth; it just happened, and it created value for me instantly. So I think there is really something to that, especially in this area where people have a lot of misgivings or fear around the technology itself.

We're going to have Alex Graveley on in a future episode, but when GitHub had the initial Codex model from OpenAI, they spent six months tuning the UX just to get Copilot to a point where it's not a separate pane, it's not a separate text box; it's in your code as you write the code. And to me, that's more the domain of traditional software engineering than of ML engineers or research engineers.

Yeah, I would say, to circle back to what we were talking about, the challenges that are unique to engineers coming into this versus data scientists coming into this: that's something data scientists, I think, have not thought about very much at all. At the very most, it's data visualization that they've thought about, whereas for engineers, generally, unless you're a very pure backend systems engineer, thinking about UI and UX is maybe a little bit more natural.

Yeah. You mentioned one thing, which is dataset curation. We're in the middle of preparing this long-overdue episode on Datasets 101. Any reflections on the evolutions in NLP datasets that have been happening?

Yeah, great question. Are you all familiar with Label Studio? It's one of the most popular open-source frameworks for data labeling, and I think they've been on... we have them on the show. We try
to have them on the show every year as the data labeling experts. Maybe it's time for that; this is just reminding me. So, Erin Mikail is in the Latent Space Discord. I think you had her on; she was at ODSC.

Yeah, that's right.

So they just released new tools for fine-tuning generative AI models.

Exactly, yeah. With that as an example, maybe the trend that we're seeing there is around augmented tooling, or tooling that's really geared towards an approachable way to fine-tune these models with human feedback or with customized data. With Label Studio, a lot of the recent releases had to do with putting LLMs in the loop with humans during the labeling process, similar to what Prodigy, which is from the spaCy folks, has been doing for some time: this sort of human-in-the-loop labeling and updating of a model. They brought some of that in, but now there's this new set of tooling specifically around instruction tuning of models. I've actually seen this misconception: I was on an advising call with a client, and they were really struggling to understand: okay, our company has been training or fine-tuning models; now we want to create our own instruction-tuned model. How is that different from what we've been doing in the past? What I tried to help them see is: yes, some of the workflow that happens around reinforcement learning from human feedback is unique, but reinforcement learning itself is not new. There's an element of training in it, there's dataset curation in it, there's pre-training that happened before that whole process. The elements that you're familiar with are part of it; they're just not packaged in the same way that you saw them before. Now there's this clear pre-training stage, and then the human feedback stage, and then this reinforcement learning happens.
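The human feedback stage described here largely boils down to collecting preference data: for a given prompt, which model response did a labeler prefer? As a minimal sketch, assuming invented example responses, this turns a labeler's ranking into the pairwise chosen/rejected records that preference-tuning pipelines typically consume:

```python
def to_preference_pairs(prompt, ranked_responses):
    """Turn a labeler's ranking (best first) into pairwise (chosen, rejected) records."""
    pairs = []
    for i, chosen in enumerate(ranked_responses):
        for rejected in ranked_responses[i + 1:]:
            pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs

# Hypothetical labeling result: three responses ranked best-to-worst
pairs = to_preference_pairs(
    "Summarize our refund policy.",
    ["Concise accurate summary", "Verbose but correct", "Off-topic answer"],
)
# A ranking of n responses yields n*(n-1)/2 training pairs; here, 3.
```

Real pipelines add IDs, labeler metadata, and many thousands of examples, but the data shape is this simple, which is why labeling tools can wrap good UI around it.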
So I think the more that we can bring that concept and that workflow into tooling, like what Label Studio is doing, to make it more approachable for people, the better. Reinforcement learning from human feedback sounds very confusing to people, things like PPO, and helping people understand how reinforcement learning works is very difficult. So the more the tooling can just have its own good UI/UX around that process, the better, and probably Label Studio and others are leading the way on that front.

I was thinking, labels are one thing, and, okay, I'll take the sidebar on labels and then come back to the main point. I actually presumed that Scale would win everything, and it seems like they haven't. There's Scale, there's Snorkel, there's this generation of labeling companies that came up, data-centric AI companies. What happened? How come there are still new companies coming out? There's Label Box, there's Label Studio. I don't have a sense of how to think about these companies; obviously labels are important.

Yeah. I think, even before that, there was tooling, or at least features, from cloud providers, like AutoML, that came before this: upload your own data, create your own custom model. So maybe it's that companies that want to create these sorts of custom models, and this is just my own opinion, I'll preface it with that, maybe when they're thinking about that problem, they're not thinking, oh, I need a whole platform to create custom models using our data. They're thinking more about, how do I use these state-of-the-art models with my data? Those statements are very similar, but if you notice, one is more model-centric and one is more data-centric. I think enterprises are still thinking model-centric, and augmenting that with their
data, whether that be just through augmentation, or through fine-tuning, or training. They're not necessarily thinking about a data platform for AI; they're thinking about bringing their data to the AI system. Which is why, I think, with APIs like Cohere and OpenAI that offer fine-tuning as part of the API, people love that. It makes sense: okay, I can just upload some examples and it makes the model better. But it's still model-centric, right?

Yeah. I get the sense that OpenAI doesn't want to encourage that anymore, because they don't have fine-tuning for 3.5 and 4. And then the last thing I'll ask about datasets, and then we can go into the lightning round: I was actually thinking about unlabeled datasets for unsupervised learning, or self-supervised learning. That is something that we are trying to wrap our heads around: Common Crawl, Stack Overflow, arXiv, the books. I don't know if you have any perspectives on that, like the trends that are arising here, the best practices. As far as I can tell, nobody has a straight answer as to what the data mix should be, and everyone just kind of experiments.

Yeah. Well, I think that's partly explained by the fact that for the most popular models, you don't really have a clear picture of what the data mix is. So the people that are trying to recreate that, and aren't achieving that level of performance, they wonder: well, what are all the different data mix options that I can try, to replicate some of what's going on? So it's partly driven by that: we don't totally know what the data mix is sitting behind the curtain of OpenAI or others. But I think there are a couple of trends, which you've already sort of highlighted. One is: how can I mix up all of these public datasets, and filter them in unique ways, to make my model
better? So, I listened to a talk, I believe it was at last year's ACL, and they did this study of Common Crawl, and they found that actually a significant portion of Common Crawl was mislabeled all over the place, right? Like trash. Yeah. I think it was 100% of the data that was labeled as Latin-character Arabic, so Arabic written in Latin characters, was not Arabic. Like 100% of it. And there were all sorts of other problems and that sort of thing. So I think there's one group of people, or one set of experiments, that you could think about as: how do I take these existing datasets, which I know have data quality issues, or maybe other data biases or problems that I would like to filter out, like not-safe-for-work data, that sort of thing, and create my own special filtered mix of these and train a model? So that's one genre. And then there's the other genre, which is maybe taking those but augmenting them with simulated or augmented data out of a model, like a GPT model or something like that. So I think you could combine those in all sorts of unique ways, and I think it is a little bit of the wild west, because we don't totally have a good grip on what the winning strategy there is. And so that's where I would also encourage people to try a variety of models. This is maybe a problem with benchmarks in general, right? You can see the Open LLM Leaderboard on Hugging Face, and these models are at the top, and you could come away from that and say, well, anything below the top three I'm not even going to use, right? But the reality is that each of those had a unique flavor of this data under the hood that might actually work quite well for your use case. So one example that I've used recently in some work is the Camel 5 billion model from Writer. You know, it doesn't work great for a lot of
things, but there are certain things around marketing copy and others that it does a really good job at, and it's a bit smaller model that I can host and run, and I can get good output out of it if I put in some of that workflow and structuring around it. But I wouldn't use it for other cases, and that has a lot to do with the data; I'm guessing Writer focused on that copy generation and such. So yeah, I would encourage people, specifically on this topic, to maybe think about what's going on under the hood, and also give some models a try to gain your own intuition about how a model's behavior might change based on how it was trained and the mix of data that went in. Awesome, let's jump into the lightning round. We have three questions for you. It's lightning, but you can take 30 seconds to answer. All right, cool. So the first question is around acceleration: what's something that already happened in AI that you thought would take much longer? Yeah, I think the thing that I was thinking about here was how general-purpose these large language models are beyond traditional NLP tasks. It doesn't surprise me that maybe they could do sentiment analysis, or even NLI or something like that; these are things that have been studied for a long time. But at ODSC I was in a workshop on fraud detection, and they were using, I forget the models, some statistical models to do fraud detection, and I was like, I wonder, if I just do a bit of chaining and insert some examples of these insurance transactions into my prompts, whether I can get the large language model to detect a fraudulent insurance claim. And I got pretty far doing that. So the fact that you can do something like that with these models, that they're that generalizable beyond traditional NLP techniques, I think is surprising to me. Awesome.
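The few-shot idea described above, inserting labeled example transactions into a prompt so a large language model can classify a new one, can be sketched in plain Python. The example transactions, labels, and prompt wording below are invented for illustration; the speaker's actual experiment isn't shown:

```python
# Sketch of few-shot prompting for fraud detection, as described in the
# conversation: insert labeled example transactions into the prompt so a
# large language model can classify a new one. All transactions, labels,
# and wording here are made up for illustration, not from a real system.

EXAMPLES = [
    ("Claim for $48,000 water damage filed 2 days after policy start", "fraudulent"),
    ("Claim for $900 windshield replacement with repair-shop invoice", "legitimate"),
    ("Three separate injury claims from the same address in one month", "fraudulent"),
]

def build_fewshot_prompt(new_transaction: str) -> str:
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = ["Classify each insurance transaction as fraudulent or legitimate.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Transaction: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Transaction: {new_transaction}")
    lines.append("Label:")  # the model completes this final line
    return "\n".join(lines)

prompt = build_fewshot_prompt("Claim for $60,000 fire damage, no photos provided")
print(prompt)
```

In a real experiment this prompt string would be sent to a hosted model; the point of the sketch is just the "chaining" shape: examples in, one completion line out.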
Um, exploration: what are the most interesting unsolved questions in AI? Yeah, I think there is still such a focus on English and Mandarin. Large-language-model-wise, look at the drop-off in performance after you get past English, Mandarin, German, Spanish to some degree, though German is actually better than Spanish because of how much it's been studied in NLP, and of course Mandarin has a lot of data; Spanish still does well. But there are languages, even in the top hundred languages of the world, spoken by millions and millions and millions of people, that don't perform well in these models. So that's thing one. But even modality-wise, I know there's a lot of work going on in the research community around sign language, but there are all of these different modalities of language. Written text does not equal communication, right? Written text is a synthesis of communication into a written form that some people consume. But the combination of all of these modalities, along with all of these languages, there's just so much room to explore there, and so many challenges left, that will eventually, I think, help us learn a lot about communication in general and the limitations of these models. It's definitely a challenge, but an exciting area. Awesome. And so, one last takeaway: what's something, or a message, that you want everyone to remember today? Yeah, similar to when you were asking about my workshops, I would just encourage people to get hands-on with these models and really dig into the new sets of tooling that are out there. There's so much good tooling out there to go from a simple prompt, to injecting your own data, to forming something like a query index, to creating a chain of processing, even trying agents and all those things. Get hands-on and try it; that's the only way you're going to build out this intuition.
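The tooling progression mentioned above, from a simple prompt to injecting your own data behind a query index, can be illustrated with a toy keyword-overlap retriever. The documents and the scoring rule are assumptions for the sketch; real systems use embedding indexes and end with a model call, which is omitted here:

```python
# Toy illustration of "inject your own data" into a prompt: a minimal
# keyword-overlap retriever picks the most relevant document, which is
# then placed into the prompt as context. Real pipelines use embedding
# indexes; this sketch only shows the shape of the workflow.

DOCS = [
    "Label Studio supports text, audio, and image annotation projects.",
    "Common Crawl is a large public web-scrape dataset used for pretraining.",
    "Fine-tuning adapts a pretrained model with task-specific examples.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Inject the retrieved document as context for a model to answer over."""
    context = retrieve(query, DOCS)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("What is Common Crawl used for?"))
```

The same shape scales up: swap the word-overlap scorer for an embedding index and send the assembled prompt to a model, and you have the simple end of the "chain of processing" the answer describes.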
So yeah, that would be my encouragement. Excellent, well, thanks for coming on. Yeah, thank you guys so much, this is awesome. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Accidentally building SOTA AI | Lately.AI has been working for years on content generation systems that capture your unique “voice” and are tailored to your unique audience. At first, they didn’t know that they were going to build an AI system, but now they have a state-of-the-art generative platform that provides much more than “prompting” out of thin air. Lately.AI’s CEO Kate explains their journey, her perspective on generative AI in marketing, and much more in this episode!
Leave us a comment (https://changelog.com/practicalai/226/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Kate Bradley Chernis – Twitter (https://twitter.com/LatelyAIKately) , LinkedIn (https://www.linkedin.com/in/katebradley)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Lately.AI (https://www.lately.ai/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-226.md) | 10 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist and founder at Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? I'm doing fine. It's another exciting day in the AI world in 2023, probably the most exciting year in AI ever. Yes, and our episode today is very timely. It's not "lately," although we are going to talk about Lately. Oh God, you actually said that. That's the best one yet, honestly, and I've heard a lot of them. Okay, good. Well, we have with us Kate Bradley Chernis, who's CEO at Lately.ai. How are you doing, Kate? I am now tickled, so well done. I'm impressed with you; we can be friends. Okay, great, I'm very happy about that, because you've done some amazing things, and you're doing some amazing things lately, and I can't wait to hear more about them. Before we jump into more specifics about what you're working on, I'm curious, because you have had experience for a number of years working in this sort of generative AI space, especially on social media and content generation. I'm wondering what your feeling is about this general moment that we're in with AI. Has it reshaped some of your thinking around AI? Has it proven things true that you've always thought about AI? What's on your mind right now? Yeah, I mean, it's so funny, because we actually didn't even know that we had built AI back when we started building it. A mentor had to kind of give us the clue, and then they got us a grant with IBM Watson; this was in 2018. So my perspective on it has certainly changed over the years in so many ways. It was so hard for us to explain what we did for so long, and suddenly everybody thinks they know what we do now, so they're coming to us with this whole different perspective that we still have to educate. But what's exciting to me is there are three waves we've seen already happen, and there's another one that I believe is about to happen. The first one is like, holy AI, everybody, right? Wow, this is amazing, everyone just had to freak out. Then the second wave has been all around the legalities, a pullback: copyright issues, what are the kinds of court liabilities that are going to be up and coming. Then the third wave is around the voicings, right? Okay, well, now that everybody has Cliff Notes, essentially, how do you make it your own? And then the fourth wave, and it's already here really, we're
seeing in employee job descriptions the need for prompt experience and expertise. And Lately, by the way, has been ahead of the curve on all of these since the beginning; we're nine years old at this point. So it's funny and weird to be in the position of spending years trying to communicate to people what we were doing, why it mattered, what the value was, and then suddenly be riding at the top of this wave where we've already built the future, right? And now everybody's just catching on to it. And you mentioned... I'm reminded of Primer, I don't know if you've ever seen that movie, where they build a time machine in their garage; I don't think they were totally intending to do what they ended up doing. So you built AI without realizing what you were doing. What was your original intention, or where were your motivations, when you stepped into doing what eventually turned into what is Lately.ai? Well, some of it is very boring, but I'll start with the exciting part, which is that I used to be a rock and roll DJ. Nice. My last gig was broadcasting to 20 million listeners a day for XM Satellite Radio, and my uber power is turning listeners into fans, or customers into evangelists, right? And I would leave stations, and I'll never forget my program director calling me six months later and saying, hey, the Arbitron book came out and you were number one. How did that happen? Because it was shocking. I was in a format called AAA; we were always like 20 or 21, and rock and pop were number one, and in evenings, you know, it was totally unheard of. And I said, I threw your playlist out the window, which was the truth. But I also was the production director, so I was in charge of all the sound in between the songs, and so it was like the meow for four hours a night. And I started thinking about... I had written thousands of commercials, I was a fiction writing major, and I really was excited about the
theater of the mind. I'm going to vomit on you guys here a little bit. This is the place to do it; vomit away. Okay, you're doing a great job going with me. So the theater of the mind is when your imagination has to play a role in the act of the storytelling, right? When you are reading, it happens; when you are listening, it happens; when you're watching, it doesn't happen, because all the pieces are there in front of you, so there's nothing for you to fill in, right? So that parallel was really interesting to me. And I read that book, This Is Your Brain on Music, which dissects the neuroscience of music listening. When your brain, Chris, listens to a new song, it must instantly access every other song you've ever heard before, in that moment, all right? And what it's trying to do is find familiar touch points, so it knows where to index that new song in the library of the memory of your brain. And it's tugging on nostalgia, and obviously memory and emotion, all the things that build trust. Trust is why we buy. Now similarly, Daniel, if you're writing me an email or a Slack message or a text message, I'm going to hear your voice in my head, and your voice has that same idea, right? There's this sound component to it. So if you're doing a great job, you're going to try to figure out how to write in a way that's tugging on nostalgia and memory and emotion and trust. So I took these ideas out of radio, and there's another story for when we're having a beer, in the middle here, but suddenly I had a marketing agency, and my first client was a little company, you know, called Walmart. This is the boring part. I built Walmart one hell of a spreadsheet system that took these ideas and translated them into writing, and I got them 130% ROI year-over-year for three years. When we built Lately, it was designed to replicate the spreadsheet system I had built for Walmart, and at that time the industry
where we were in was called marketing resource management. How exciting. Yeah, I mean, really, the name of our company was Cloud MRM, and so we were building an organizational system for marketing, is what it was, right? That's about as boring as you can get. I love that. I spent a decade in the marketing industry and I'm so with you on this; I left it. Oh, really? So, you know, spreadsheet hell, you were there, waving hi. Um, so, almost wrapping up this very long story: we had built marketing resource management, we had built a feature for every spreadsheet that I had built Walmart, and there was this one feature that everybody was using more than the rest. The idea was you pasted in a URL of a blog, you clicked a button, and we would instantly atomize it into dozens of social posts, right? That was the foray of the AI we had built. Now we do much more, which we can talk about later, but that's how we got to at least the beginning of the nine-year trip. It's super interesting that you say that, because I think maybe people think that they can easily reuse content on all of these different platforms, and over time, as I've found out, mostly through failing, if I'm honest, the same content that you produce for your blog, it's not a trivial task to just take that and create a really compelling LinkedIn post, or a really compelling tweet. It just doesn't work the same way. And also, of course, the algorithms are all changing all the time, and that sort of thing. So is that part of the education? Or do you feel that maybe marketers probably feel this pain, and that's who you're talking to? Like, your audience: are they mostly wondering, how does a computer do this for us, because they know the problem and they see it as a really tough problem? Or are they mostly like, oh, you know, I could do this if I really sat down and wanted to do it, you know,
so why do I need a computer? Which direction do you find people in most of the time? Yeah, there are so many good thoughts in there. The first one is, it's like math, right? There's a reason you go to school and learn algebra, because when you're using a calculator, you need to be able to know if you pressed the wrong button, because sometimes you do, and you get the wrong answer. You need to have the background in your mind so that you can use the technology to do the hard work for you, right? So, similar with us: we found that, you know, if you're a complete idiot, I can't change that, right? You have to have some... Unfortunately, that AI does not exist yet, right? Right, true AI does not exist yet; magic doesn't exist. I hate to burst a bubble there, because, you know, I'm a huge Harry Potter fan. In fact, I saw a car at the Target the other day, and his, I assume it was a man, the license plate was "Avada Kedavra," like literally telling everybody to go f themselves, all day, all the time, this guy, you know. Wow. I know, it was pretty bold. So with Lately, let me explain what we do and then answer your question. Lately is able to learn your unique voice, Daniel, or Chris, or the unique voice of your brand, and you can customize that voice by region, say, for example, or any kind of subset. It's also able to tell you exactly what words, ideas, phrases, even the sentence structures, make up the highest possible performing social media messages for you specifically, customized for your target audience on any specific channel. Now, we're able to build that model in about a couple of seconds for you, and then once you have the model, you have to train it, and you train it with long-form content. We just mentioned a URL of a blog; it could be any kind of text, like a newsletter, or anything from a Word document. It could also be any kind of audio, like a podcast. It could also be any kind of video, so an interview with a CEO; it
could be a webinar, a Zoom call, whatever you want. In the case of audio and video, we will also give you dozens of audio sound bites and miniature video clips that go with the text version of the social post. And then we teach you, and you can do this automatically, how to drip-feed that content over time, because the long tail has payoffs that are exponential, as opposed to, you know, the one-off. So we're also working with marketers to educate them, as you were asking about, on post-mo versus promo, right? Promo is great, but it's really hard to get butts in seats live, and the after-the-fact is much easier, and you get exponential eyeballs. Pretty much everything is evergreen content nowadays, and if you think about that, which I'm sure you guys already have, like before you went into this interview: yes, it's timely, but generative AI is going to be around for a long time here, right? And these ideas, once they're living online, the SEO will be, you know, over the roof, over the moon, so to speak. So, the "who" of your question: at first we thought our target person was, for me, small agencies, you know. I was charging Walmart 140 grand to do four months of work, and the idea was, let's give you 140 bucks and you're one person, you know. So we were trying to service SMBs, but the mistake we made was we had built this massive, robust platform that was very much an enterprise platform; we didn't know that yet. And again, we met another kind of mentor. We were working with SAP, and they were like, hey, we think you don't understand the power of what you built here; let us help you out. And I'll pause the story there, there's more, but did that answer your question? Yeah, yes, it was a good story. I was all in it, and you cut it off right there in the middle; I was like, what are you doing? Well, we'll get the rest. I mean, it's funny, because trends change, markets change, obviously, and you want to be... you don't want to be
you know, loosey-goosey, but you also want to be flexible, and you want to be able to foresee those things and then turn on a dime when you need to. And with us, because we had built marketing resource management, that window of sexiness was pretty small, actually. There was a company called Percolate that was dominating the enterprise space, and here we are, this little company, trying to be the SMB version of something that no SMB could really understand, honestly. And we were trying to sell to marketers specifically, like CMOs, and we discovered, after a lot of pain actually, that they were very much threatened by being replaced, you know, which is kind of reasonable. And so we had to learn to reposition the product; we learned to automate it and create self-service; we learned to pitch to the CRO. We built in a feature at Lately where you can actually syndicate the content: one button can push out months' worth of social posts across every employee channel that they have connected, that you can imagine, right? So you can literally do all the influencer marketing for your entire business in like an hour. So, you mentioned voice quite a few times, which is definitely a loaded term within the AI space. There's one part of voice, like, we've had Josh Meyer from Coqui on the show talking about voice cloning and style transfer of voices, or using your voice in another language, that sort of thing. So certainly there's that element that is relevant to content, right? But if I'm understanding right, from a very non-marketing person's perspective, what you're referring to as voice is somewhat inclusive of that, but is much more. Could you kind of help those maybe without as much of a marketing background understand: what do you mean when you say voice, and what do you mean when an AI system learns your voice? What is learned, I guess, in that process? I like it. So, because we're able to surface you, literally, a
word cloud of the words that make up the messaging that gets you the highest engagement, the most clicks and likes and comments and shares, we can see how you write, and we can see, when you're writing well, what's the DNA that makes up those messages, and we can learn how you write for your brand or for yourself. Even when I write social posts on LinkedIn, I get 86,000 views, because I'm really good at writing, and I use a very specific, quote, "voice" style of writing. For example, in real life I swear like a sailor, I'm just so foul, and online I try to do that less. So I'll make up hyperbole, like "holy hot pickled jalapeño peppers," for example. Now, when Lately is trying to write in my voice, it will insert things like that, because it knows me well enough to do that. So, part of me coming into this conversation: I see so many of these people making posts that are very formulaic, right? I don't know how many times a day on Twitter I see some tweet that says, "What a week in AI, here's seven things that you shouldn't miss," a tweet thread, and, oh my gosh... I should be able to get through these, but they're not using Lately.ai, I can tell you, right? They are not, yeah. Could you speak a little bit towards generative content, and how that could be, or maybe is in certain cases, formulaic, and how this sort of approach can go beyond that to maybe be more creative, certainly more personalized? I think you're talking about this personalized voice; I assume we're not just talking about, like, generate the listicle or the tweet thread for Twitter, that type of thing, right? Yeah, so what's interesting about your point is, I mean, this is why humans are always relevant, because you quickly can see that there is a formula there, and it begins to wash over you; it's not tantalizing anymore. And the instinct that you have is not the instinct of the people who are still publishing those; they're still sticking with the old thing, because they haven't taken a moment to really research, or even just use their instincts about what's working. You know, you have to experiment, the same way as in radio. I mean, I spoke to a black room of no one for 12 years. I know how you feel. Yeah, you know how I feel, right? And what my mentors advised me back then was, you imagine the listener in your mind, whoever it is, right? Imagine who the audience is. Now, online, I generally just try to entertain myself, you know. So let me back up. That was sort of one thing: humans are relevant, and we'll get into a little more of why, and they will remain relevant always in sales and marketing; that will never go away. Here, I'll tell you why, real quick. When we were talking about the theater of the mind, what happens there, if you are a good author or you're good on the microphone, is you're actually letting someone in. So I'm going to make you, Daniel, feel as though you're part of this story. Now, I've written it this way so that you have to fill in, I know you have to fill in the blank; I have to allow for a third character, that is, you,
that's your imagination, your brain. That's why, when you read a really great book, it's so powerful, and then you see the movie and you're so pissed off, right? They're taking that ownership away from you. So it's the difference between a one-way street and a two-way street, and this is how you build fans. That mysterious thing is the human element in marketing. Marketing is often unexplainable; people are always trying to science it to death, and you cannot. This is what makes it magical, you know, which is why human training is something we actually worked into our algorithm from the beginning. That was a fascinating point there, that kind of unknowable aspect, a little bit of what tickles a psyche. And that evolves: people get used to the thing that was the past thing a moment ago, and whatever you're doing now will change in another moment, you know, as we accommodate that in our brains and start ignoring it. And so, you've spent these years working on your algorithms to try to hone in on that, using your method, and trying to get to the voice of what the messaging is in the different mediums. Now, in the last year, all this technology change has been thrown at you and your company: generative AI and large language models and all these cool tools. How has that changed how you're approaching the problem of finding that magical nugget of voice in there? Because you have a whole new set of tools available to you now, that you're presumably integrating into all this stuff; that has to have turned your world upside down a little bit, just because of the vast amount of capability you now have in addition to what you had before. A couple of things. Number one, there's more than one kind of generative AI; the world is really only familiar with one right now, which is ChatGPT. We were in the closed beta of GPT-2 four years ago, so we're OGs with them, but I built an engine that's
mine, it's proprietarily mine. So we like to think of Lately as a fully loaded ice cream sundae: there's a banana, and hot fudge, and a bunch of different flavors of ice cream, and whipped cream, and I made this sundae all myself. On the top there are a couple of tiny, tiny chocolate sprinkles, I like chocolate, they could be rainbow, and that's IBM Watson and Google Pegasus and MeaningCloud and ChatGPT. Now, around the datasets, what's been exciting is, you know, there's this huge wave of, oh my God, public data, copyright infringement, all that, and we only use a private dataset: your data, and we don't share it. And so we've been getting green-lit by legal, so that kind of explosion of information is obviously really helping us a lot, because a lot of companies are now banning generative AI company-wide, because people are cheating at work, and they're worried about data being put out into Google's public database, for example. But the other thing, too: generative AI as the world knows it, not me, is type in a few things, get a big long thing out, right? That's the deal. Lately works the opposite way. You have to give us a very long thing first, like a video or a blog or an audio file, and then we're going to take that unique model we built from you and clip it up into dozens and dozens of very small social posts. And the way we're able to capture your voice, which ChatGPT can't do, and we're not competitors with them in any way, but they don't have a data loop, and I do. You connect your social channels to Lately; I can read your analytics; I can see what's performing best; and I'm learning all day long, every day, whether you're publishing through Lately or something else. And then every change you make to anything you publish, if you edit it, if you add a picture, if you delete it, if you don't publish it, I'm learning, right? So it's a tight loop. The other thing here is,
you know, now that there's been this explosion of words, the problem still remains: how do you cut through the noise? This is the problem, always, of all social marketing, all sales enablement, whatever. Guess what the answer is? It's the same answer: be more human, right? Be more human. So you build that into your algorithm. I'll give you an example: we worked with Philips, which has rebranded to Signify, and we got them on average 124% more engagement on LinkedIn; we saved them 84% of their time, and I think it was 82% of their budget. And it's not just on creating content, but on creating content that works, right? This is the thing. Now, you can ask generative AI to create social media posts for you, but it'll have no way to know if they're the right ones to create, because there's no information to check that against. Let me follow up on something you said there; I'm going to ask the dumbest question you've ever heard. You just said "be more human." What does that mean, when you say that? When you're saying that's the thing to do, I hear you, and I think I know what that means, kind of, but I probably don't, because I think it means something very specific to you in that context. Can you share a little bit of what that means to you? Yeah, thanks for asking; no one has ever asked that, and they really should, right? I'm very good at dumb questions; I excel with dumb... No, no, no, not dumb, a very smart question. So, it's using that instinct, right? Because you can't science everything to death. It's predictive, and it's a little bit indescribable. You have to know whether that joke is going to land, right? Like a comedian, that's the deal. Sometimes the atmosphere is ready, and sometimes it's not yet ready; you can't make a joke about that just yet, right? That ability to read the room, which is again something I did in the dark for years, is... I don't think that can be taught, to be honest with you. However, it does come
making mistakes, right? So the more mistakes you make, the more you learn from them. Being human on social can be... it's funny, you see people experimenting on LinkedIn a lot right now, doing more personal things. I do incredibly personal things on LinkedIn. People are like, "What?" Yes, and they usually have to do with animals, so yes. Great. Is it working? Are you getting engagement from it? Actually, you know, the truth is I'm not measuring it, but it definitely is different from what everyone else is doing. Because, just for anyone out there, because people hit me up: I'm really bored with everyone self-promoting on LinkedIn all the time, and so when people hit me up with those messages, or they're in my feed, I just turn off everything. So yes, there you go. Yeah, so you're experimenting. Anything that's not the usual may capture my interest; if it's the same old stuff, I'm asleep, my eyes glaze over your post. So I love that, and that's a key thing to think about. Say I want to promote something that I'm doing, sometimes I'm lazy of course, but there's only two objectives on social media: click and share. That's it, right? So if you back into it, knowing that that's the deal, I need to write copy that's either sharable or clickable. It's very hard to get to the clickable point, because that's a lot of trust you're asking from people, and time, and all that. But sharing is easy, because sharing is all about the ego, and if I give you content that's worth sharing, you look good, right? Kind of like if someone in college brings you the latest record from whoever and you play it for a friend, now you're the tastemaker. Same idea. So sharable content, like Gary Vee has created it, you can just spread joy with positive messaging. I find that any content that includes God does really well, and certainly negative stuff, like I've posted, you know, in tears, getting a no from an investor. I mean, I was experimenting with all these things.
I'm doing it on purpose, you know, I'm trying these things out. I often will do things like, say there's an interview coming up with Chris, Daniel and Moi; I'll refer to myself as "Moi" because I like Miss Piggy, and you might not know that about me. I didn't, but I have a raccoon named Miss Piggy, for real. So there you go, right? It's not an obvious reference, but I'm looking for those nostalgic touch points we talked about before, right? And this backs up a little more to one of the questions that I didn't answer earlier, Daniel, which is what Lately is lifting out: it's trying to lift out teasers that will get you just interested enough to click without giving you the full kitchen sink, right? Or it's trying to give you a sharable. Those same things, that's what's behind the AI's brain, what it's thinking of. So it's the same idea. And let me give you some proof in the pudding around this idea, whether you're using Lately or not to do it. At Lately, as you guys know, we dogfood our own product. So I'm going to ask you for the file of this show, and we're going to run it through our own AI. Our AI model is going to run through it, lift out all the quotes that you, Daniel, or Chris, or I say that match up with my model and my target audience, and it's going to attach the video clips. It's probably going to give me 40 or so. My social media manager will run through them and make sure that Lately isn't off the rails, and kind of help it out if it needs to be helped out, by making the edits, you know. And then we will publish those posts not only on our brand channels but on all of our employee channels as well, because the more the merrier, right? We're all in this together now. We do this and nothing else for marketing; this is all we do. I'm on a podcast once a week, sometimes I write a guest blog or host a webinar or something, and we have a 98% sales conversion. Wow, that's crazy. Because that's how good the AI is.
It learns, it knows what you guys will share or click, right? My audience. [Music] I'm Jared, and this is a Changelog news break. DeviceScript is Microsoft's new TypeScript programming environment for microcontrollers. It's designed for low-power, low-flash, low-memory embedded projects, and has all of the familiar syntax and tooling of TypeScript, including the npm ecosystem for distributing packages. This project has a lot of devs excited. Jonathan Barry says, quote, "Dope, TypeScript for hardware. Always glad to see these attempts at bringing web technologies to embedded systems and see what sticks. Even when they don't, they inspire innovation." Zach Silveira says, quote, "This is so much better than MicroPython." And Andrea Gamari says, quote, "This is the first Espruino competitor, and I think it's going to be huge." You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. [Music] You had said before we recorded that silence is sometimes the best practice, and one of the things that my radio mentor Steve Zin taught me was to leave silence on the air as a tactic. This is that mistake, this is that humanness. And so what happens when there's silence? Thinking. Anticipation. Thinking, anticipation, that's right. People turn up the radio, is what they do, also, you know. And thinking about how we're writing, and you've seen this tactic on LinkedIn as well: people leaving different space, you know, putting in enter, enter, enter, whatever. There's something about... and then it goes to your point, Chris, of doing the unexpected. You know what I learned about making fans, which are more valuable than just listeners, or customers, as evangelists? And by the way, let me put the proof in the pudding
there: not a day has passed in four years where someone hasn't spontaneously written on social media about Lately. Not one day. You don't want to be the megaphone, you want to be the magnet. And when you are the magnet, in order to truly be that kind of magnet, you let other people be the light; you show them how to be the megaphone, you put them on the pedestal. Define a little bit what those are, when you talk about the megaphone versus the magnet and the light in there. Can you clarify what that means? Yeah. I'm obviously a Type A personality, so I could walk into a room and dominate that room in a second. I can get on the stage and be like, "Hey, look at me, hey, look at me." I could do that, or I can lift you up and make you feel like everybody wants to listen to you, everybody wants to talk to you, right? When you are able to do that, people walk away and they remember that you made them feel this way. It's like, I was listening to SmartLess the other day, and Will was saying how a few years ago they were at an SNL afterparty, and Steven Spielberg, whom he did not know, just walked over to him and said, "Hey Will, I'm Steven Spielberg. I just saw your director's cut of XYZ and I wanted to tell you it was really impressive." And then he walked away. Now, holy... right? That's how you do it. First of all, the best thing he did was he introduced himself, right? He didn't assume that everybody knew he was Steven Spielberg, and that's pretty mega for a megastar like that. Also, he left before the guys had a moment to be like, "Oh my God, we love you, blah blah blah." He was just in and out. It was just that drop of, you know, pixie dust kind of thing. So the way you can do that on social is really easy; here, I'll give you a tip: thank you is the best. We call it thank-you marketing. The more you thank somebody, the more they... it's like husbands, you know? The more work in the yard David does, the better I tell
him how great it looks, right? Because I want him to do it more, you know. And thank you can come in the form of... that's one of our biggest hashtags at Lately, hashtag all-caps THANKYOU, because people reshare that content. So with you guys, when I get your content, I'm going to drive all the traffic back to you, to where this full version is. This is the ultimate in thank you, right? And I'm going to tag you, and I'm going to drip-feed it out, probably slowly over time, so that every time you see it you're inspired to reshare it; you're not overwhelmed by my over-tagging you, for example. But also, I don't need to drive the traffic back to me, because I'm not looking for that. I'm looking for the reach, I'm looking for the shares. We ride on word of mouth at Lately; that's what we ride on. I have a left-field question for you, and it's a selfish question, and it may be a very simple answer of "no, there's no difference." But just in case there is: is there any difference in how you would approach it from a nonprofit standpoint versus a for-profit commercial standpoint? Because you're trying to get to a different type of outcome, you know, versus selling a product or service. And in my case, I have a day job, but I also have an unpaid job in a nonprofit, and so I'm very curious if there's any difference in that, or if it's all the same thing. It depends. I mean, I've worked for a lot of nonprofits as well, including United Way Worldwide and the National Disability Institute and the Walmart Foundation. So like I said, there's only two objectives on social, and this doesn't matter if you're nonprofit or for-profit or government: it's click or share, that's it. Where you click to, or what you're sharing, you know, that depends. Ice bucket challenge, hello, right? How human was that, you know? They did a great job, but they raised money
at the same time. Now, you and I both know that they have a grant to spend this money, and oftentimes for a nonprofit, while a sale isn't the objective, a lot of times it's just "make some noise," to be honest with you, because the people giving them the grant want to see that visibility online, you know, a little more. But everyone has an objective; it's just a matter of breaking it down. When I was working with the Walmart project, it was fueled by the nonprofit foundation, and this was so boring, I mean I had a great time, but there was the IRS, Walmart, the National Disability Institute, Bank of America and AT&T, and United Way Worldwide, and they were working together. There was a free tax prep website that United Way Worldwide had built in tandem with Walmart. It was the first free tax prep available online, and we were trying to help lift the poor out of poverty through income tax credits and financial education. These people maybe make $20,000 a year, so a $2,000 EITC credit is actually life-changing, right? Now, that's the boring part, and there's acronyms galore, but we got the project, with my method, 130% ROI year-over-year for three years. And their ROI was taxes filed, right? That was their objective. So we still want people to click and share, but once they click, they start to file the taxes. So that was the whole... and by the way, one of the things we learned was how to take a national message, and this is one of the many things I was working on with them, and localize it. And the localization came through local hashtags, right? The cities, the college campuses, wherever we were. You know, I get this feeling inside sometimes that we're starting to generate so much content using automated methods, using AI, and despite all of that, having it sort of have a voice or personalization... I'm wondering, a lot of the creative things that I see online, I don't think yet, I don't think they're generated by AI. Yeah. So how do you
see, going into the future, as we see more generative AI content driving social posts, how do you see things shifting in terms of the creative trends that we'll see online, and maybe the opportunities within marketing? How do those opportunities change, and how would someone balance "Oh, I could do this at scale with AI" versus "Hey, I want to do something completely new that no one's done yet, that might break some trends," or something like that? How do you see that balance going forward, and do you have any recommendations for people out there that are maybe in the midst of trying to generate content with AI, whether that be, you know, text or audio or video, whatever? Yeah, I mean, I think my first advice would be just calm down, everybody. Hold on. Think about humans: we're the only mammals that, when we come out of the womb, are completely helpless. We can't feed ourselves, can't defend ourselves, can't stand up, can't even hold our heads up, you know, nothing. If AI was a human, it would be about three months old. So I think that's just one important thing to remember: human guidance is required, you know, certainly. As we talked about at the top, the prompting expertise is going to be something that people are looking for as well, because in order to get the robot to do what you want, it has to be asked just the exact right questions. You know, what are those questions? With technology, as you guys already know, I mean, technology happens. It'll continue to happen, and it'll continue to improve lives and replace jobs, and jobs will evolve, and we're in the same boat here. This strategy piece is where humans are still really, absolutely necessary. I mean, you know, AI is not sentient; unfortunately, Hollywood really has given us a grave misunderstanding of what the definition means. I think it was maybe Paul Roetzer from the Marketing AI Institute who said AI still lacks emotional intelligence, which is
also true, you know. And then another quote, and I don't know who said this, but it's: if you're not using AI, you're going to be falling behind. But the people who are using AI in tandem with the work they're doing, just like that calculator we talked about before, right, they are the ones that are going to be getting ahead of the game. So you have to figure out a way to obviously embrace it. And I think it's so funny that companies are banning generative AI, because remember when everybody banned Facebook? Yeah, right. Or I think there are still government agencies that can't use Google Docs, and you're like, are you kidding me? Get with the program. I think that's a really good way to set a good foundation, as we draw to a close here, for people to remember that, and we probably need to be reminded about that every week these days. Yeah. As we close out here, maybe just take a moment and share: what are you excited by, specifically, as you look forward over the coming months, in terms of either new things that will be possible, or where things are headed with Lately, that sort of thing? Take an opportunity to share some of those things that you're excited about. Thank you. I'm excited to close the fundraising round that I'm in the middle of, I hope. Yeah, that's awesome. Congratulations. Yeah, yeah, yeah. So if anyone's interested, feel free to reach out to me, Kate at lat.
You know, that'll help us continue the plans we have. The voice learning that we talked about earlier is something that we are focused on at the company, on sinking our teeth into this year, and figuring out other ways. This is so boring, but you know, if Lately spits out a piece of content that starts with "and," how can we start recognizing those non-sequiturs that happen and remove them for you before you even get to it? Or, once we have the transcripts in the background, instead of you having to Control-F and clean the ums out, how can we do this stuff more automatically? Because obviously it's harder to train video and audio than it is text, because with text, you work on it for so long, so it's coming to me pretty much ready. We're also working on sentiment, the ability to push a button and say "make this post funny" or "make this post," I don't know, "stern," that kind of idea. The biggest request we get from our customers is how to take Lately and apply it to paid ads, so that's a big one we're working on. Cool. Well, keep up the good work. This is awesome, and thank you so much for taking time to join us. I'm very excited to dig into this topic a little bit more. And yeah, we'll look forward to seeing all of the tags that we'll get over your drip campaign in the coming weeks. Great conversation today, thanks. I love you guys, we'll talk soon. All right. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And thanks to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk
to you again next time. [Music] |
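In the episode above, Kate describes wanting Lately to clean filler words (the "ums") out of transcripts automatically, instead of the user doing it by hand with Control-F. As a rough illustration of that idea only (the function name and filler list below are invented for this sketch, not Lately's actual implementation), a simple regex pass can do a first-cut cleanup:

```python
import re

# Illustrative filler list -- a real product would use a richer,
# context-aware model rather than a handful of regex alternatives.
FILLERS = re.compile(r"\b(?:um+|uh+|erm)\b,?\s*", re.IGNORECASE)

def clean_transcript(text: str) -> str:
    """Drop common spoken fillers and tidy the whitespace left behind."""
    cleaned = FILLERS.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(clean_transcript("So, um, we focus on, uh, voice learning."))
# -> So, we focus on, voice learning.
```

A word-boundary match (`\b`) keeps the pattern from eating "um" inside words like "column"; the trailing `,?\s*` also consumes the comma and space the caption software tends to leave behind.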
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Controlled and compliant AI applications | You can’t build robust systems with inconsistent, unstructured text output from LLMs. Moreover, LLM integrations scare corporate lawyers, finance departments, and security professionals due to hallucinations, cost, lack of compliance (e.g., HIPAA), leaked IP/PII, and “injection” vulnerabilities.
In this episode, Chris interviews Daniel about his new company called Prediction Guard (https://www.predictionguard.com/) , which addresses these issues. They discuss some practical methodologies for getting consistent, structured output from compliant AI systems. These systems, driven by open access models and various kinds of LLM wrappers, can help you delight customers AND navigate the increasing restrictions on “GPT” models.
Leave us a comment (https://changelog.com/practicalai/225/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Prediction Guard (https://www.predictionguard.com/)
• Prediction Guard docs (https://docs.predictionguard.com/)
• LLMs in Production II event (https://home.mlops.community/public/events/llm-in-prod-part-ii-2023-06-20)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-225.md) | 7 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist and founder of Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well today, Daniel. It just continues to be super interesting in this space, in the world of AI. So much change. This has been a year, I think, that's for the history books in terms of the advances, and the fact that AI is really making a deep impact on the general population, people who normally might not be listening to our podcast, as hard as that is to believe. Yeah, I was just going to say, people in companies like my wife's company, which is not a large company, it's not a tech company, but they're having conversations about how do we as a company leverage AI, or leverage large language models, in our content generation, or what have you. It's really permeated all industries at this point, I think, and people are wrestling with the idea of what do we do, not just whether we do something, in relation to AI. Agreed. I think that's a huge issue right now. It is potentially more confusing how to handle everything that's coming at companies these days, from large language models and generative AI, than it ever has been, and the problem is getting harder. And so we want to talk a little bit about
that today. And I want to acknowledge a couple of things with our audience. Many of you have been with us for quite a long time; we've been doing this show for about five years on a weekly basis. I know, it's been forever, and it's just getting more and more interesting over that course of time. Something I wanted to share with our audience is that I've gotten to know Daniel pretty well, and we didn't know each other super well beforehand. We had met in the Go software development community as kind of the two people looking at data concerns. But Daniel, over time, has demonstrated not only the fact that he is an incredibly smart man with a lot of capability, but that he's also an incredibly good human being. And anyone who's followed the show for a long time knows that the idea of just being a good person, and AI for good, and such things are a huge repeating topic on the show. I've also learned to trust where he's going, and to understand that if Daniel is doing or interested in something, it's something that I want to know about. So today we want to hit this large language models in the world question, and how you manage that, but I also want to acknowledge that we're going to have bits of the show that could be considered a conflict of interest, and the reason I say that is we're going to talk about some work Daniel has been doing. If that bothers anybody, this is the point where you might want to shut off this particular episode, but I'm hoping that most of you trust us and have been with us long enough to know that I'm not going to take you down a path that you wouldn't want to go. I've asked Daniel to talk not only about the space of large language models being brought into production, and trying to juggle all the things coming at it, but also about the work he's doing. So we are unabashedly going to go that direction, and if anyone has hate mail to send, please send it to me, because
I have demanded, I have demanded that Daniel talk about this. So thank you for bearing with us on that. And so, Daniel, you're kind of both the co-host today and the guest, if you will, and if you might lay out a little bit of the landscape for us, what this looks like as an insider, or someone who spends all your time focusing on this problem, that might be a good way to start. Yeah, thanks Chris, and thanks for the kind words. Over time I've learned so much from doing this show, and it's shaped a lot of what I think about. Certainly the things that I've been thinking about, really this whole year, since around Christmastime, have been focused around these ideas of controlling large language models, guiding them, guarding them, making compliant AI systems. A lot of that's led into the thing that I'm building right now, which is called Prediction Guard; that's what you're referring to in terms of what I'm building. So I'm coming at it from that perspective, and I've been thinking about this a lot and talking about this a lot publicly. I'm excited to do things like, coming up there's an LLMs in Production event put on by our friends at the MLOps community, which is really cool; I'm giving a talk there on controlled and compliant AI apps, so that's part of what I'll share here today as well. One question maybe that I have for you as we start out here, Chris, is: what have you experienced in terms of the people that you're talking to, with regard to the pressure that they're feeling, either internal to their own company or from market pressures, to jump into the AI waters, to implement something, to make AI part of their stack? What are you seeing there? So, I don't think you'll be surprised when I say this, and we've alluded to this on some previous episodes, but it is a difficult business concern to navigate. I know all of us who straddle into the AI technical realm are incredibly excited.
We're trying to figure out how to do the models and put them out there and everything like that. But if you are not in our shoes, if you're walking on a slightly different path, and let's say you work for a legal department or a compliance department, or other business concerns, and suddenly these technologies are coming at you hard and fast, week by week, in 2023, and you're trying to navigate that and look at things like licensing on how the data that goes into models is used, and you're looking at compliance concerns, and you're looking at protecting your intellectual property, there's a whole host of challenging business problems with essentially no guidance. This is a brave new world that has to be pioneered through. And so I have talked to a lot of business people in various roles, including attorneys, and this stuff is scary stuff. It is problematic stuff. It is challenging to navigate. And I definitely want to take you down the path today of talking about the space, and Prediction Guard, relative to how you actually get these models out there in a productive way in a business environment, so that people can take advantage of the technology and understand what the pitfalls are and such. That's the big thing I've been hearing; I've been getting an earful of it lately, like, "Chris, settle down, stop taking us down this AI thing, we've got to figure some things out first." So I'm coming to you for answers, man. Yeah, it's so tempting, actually, to have really easy-to-use systems like, let's say, the OpenAI API, right? I can go to the playground, or I can go to ChatGPT, or wherever, put in my prompt, and get some magical output, right? It's magical, and immediately it triggers in your mind: I can solve real business problems, and I can create actual solutions with this type of technology. It's so quick to make that connection. But what I've seen, both in sort of advising and consulting and
conversations that I've been having, is, on maybe the less stringent case, people are struggling to make the connection to how they can build robust systems out of these technologies. It's one thing to get text output and look at it with your eyes as a human, right, and say "extract this piece of data" or "give me a summary of this" or something like that. But as soon as you make that programmatic and automated, then how do you know you're getting the right output? And if you actually want to do something with that, like outputting a number, you know, a vomited text blob out of a large language model doesn't really do you that much good if you're trying to implement a robust system that's making actual business decisions on top of the output of large language models. On the harder side of this, I'm getting feedback from people, either people I know or am advising or otherwise, that companies are actually telling them no, there's a full stop on using, quote, "GPT models" in this organization, because of one of a few different reasons. Maybe that's a risk thing around "hey, we're going to hallucinate some name out of this, a person who doesn't exist, and that's going to get us in trouble; people are going to stop trusting our product," and that sort of thing. So there's the hallucination, or consistency-of-output, sort of thing. There's also, as you mentioned, the IP or PII type of leakage scenario. It is actually a problem for people sitting in a company, and I'm sure this would be true whether you're at your company or a variety of other companies that I've talked to, where I'm sitting there and I'm like, oh, I could solve this problem with ChatGPT, let me copy and paste this user data into ChatGPT and have it summarize something, or extract something, or whatever it might be. It's sort of unclear, murky waters, how that data is actually going to be used by OpenAI, and you're
kind of leaking IP, or company information, PII, to external systems, right, which is a big, big no-no. Regardless of how that data is used in the end, it seems like it's going to exist outside of your own systems. And so, on the harder side of this problem, people are being told: no, full stop, you can't use GPT, you can't use large language models. So to summarize how I would think about this problem space: people are feeling the pressure that they need to, or really want to, implement these systems, either because they feel like they're getting left behind or because there's actual market pressure for them to do something. But in practice, they don't know how to deal with the outputs of large language models, and they might not even be able to connect to the most common large language models because of these privacy, security, and leaked-IP types of issues. I think that's really, really widespread. It's funny, you've enumerated a whole set of risks associated with that. Yesterday, just as a thing, you know, I have a particular employer, and thinking about public information, well-known public information about the lines of business that we have, that is publicly acknowledged in multiple sources out there: I went to ChatGPT (I should have used the 4.0 model, but I forgot and just let it default to 3.5) and I simply asked for our 19 lines of business, which is incredibly public knowledge, and it got it wrong. It got it wrong the first time, and so I tried to steer it a little bit, and it got it wrong the second time. And had I not known better about the intellectual property concerns with licensing, I might have tried to put something in that might have been out there in the public. So I run into what you just said all the time, and there are so many risks, and yet there's so much value to extract from the space. And so I think putting your finger on the fact that if you can find a way to mitigate these risks in
various ways, that will unlock a huge amount of value for a lot of organizations and users. But from my standpoint it certainly feels like the Wild West right now. Yeah, I would say that's true. And yet there's these concerns, but the money that people are able to save in operating costs with AI in your business is significant. I saw a study from Accenture estimating insurance companies saving $1.5 million per 100 full-time employees. So if you're insurance company A and you're not trying to implement AI systems in your business, then you're actually introducing a liability, right? Because insurance company B might be doing that, and they're going to slash their prices, undercut you, and put you out of business. So even regardless of new features that might be implemented in people's products and that sort of thing, there's this real liability around not considering AI solutions as part of your business strategy. I think that's a huge point, and that's the other side of the coin that I was just talking about: there was the risk of using, and there is the potentially larger risk of not using at all. We're seeing that in all markets, in terms of the need to stay on top of what has been gradually evolving over these past months, and to be able to use that to promote your business. If you don't do that, the risk is substantial. So the idea of navigating the licensing and compliance concerns, and being able to productively use these outputs, is really crucial to being successful in almost any industry going forward. So definitely looking forward to finding out how we might do [Music] that. I'm Jared, and this is a Changelog news break. In what appears to be a particularly security-unaware move, Google has added eight new top-level domains, two of which are quite concerning: .zip and .mov. Yikes. Ars Technica writes, quote, "While Google marketers say the aim is to designate tying things together, or
moving really fast, and moving pictures, and whatever moves you, these suffixes are already widely used to designate something altogether different. Specifically, .zip is an extension used in archive files that use a compression format known as zip. The format .mov, meanwhile, appears at the end of video files, usually when they were created in Apple's QuickTime format." End quote. Phishers and scammers, rejoice; the rest of us, beware, and be ready to help protect your family and friends from this otherwise completely avoidable new threat vector. The linked Ars Technica article demonstrates a few URLs scammers could now craft, and they're darn near indistinguishable from the legit URL, even to someone like myself with trained eyes. One such URL in the example is a Kubernetes release, which, yes, is distributed as a zip file. You just heard one of our five top stories from Monday's Changelog News. Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. [Music] If I could summarize some of what's been said: we talked about two large categories of problems. One was the structuring, consistency, and validation of the output of these models, to make them useful in actual business use cases. The second was maybe compliance concerns, privacy and security concerns, which really have to do with how a model is hosted or how you access that model. So on the one side it's how do you process the output of a model, and on the other side how do you access or host a model. Both of those things can be pretty big blockers. To dive into the latter of those, the hosting, privacy, and security thing: I actually am quite encouraged by where things are headed recently, because we've seen this kind of proliferation and explosion of Open Access models that
continue to be released day after day. The most recent one at the time of recording this, and I might be missing one, they seem to come out every week, but one that came out recently is the MPT family of models from MosaicML, which is just really extraordinary. I think they have context lengths, you can think about that as kind of your prompt size for the model, of like 60,000 tokens, and they do quite well in various scenarios. So there's an increasing number of Open Access models, but I would say there's two problems with using these as a business. Let's say I wanted to host one of these and use it internally. Well, maybe three problems; it's always good to have three points, right? Three problems. One is you still have to figure out the weird GPU hosting and scaling of that model, which is a challenge. The second is that in reality these Open Access models, at least according to most people, I think it's generally accepted, aren't quite up to the standards of the larger commercial systems that OpenAI and others, Cohere and Anthropic and others, are putting out there. Sure. So there's a performance concern, there's the hosting concern, and then the third, which is the same as our other major topic here, is you still have to figure out how to use the output of them. They're still just going to vomit up text at you, and you have to figure out how to deal with that. This has led some people to strike up these kind of expensive deals to host OpenAI models in Azure infrastructure. That's becoming easier over time, and I hope it becomes increasingly easier. It's still a little bit limited to Azure, mainly, in my understanding, and it's definitely not cheap, I would say, if you compare all the costs and add in the engineering time to do that. So some people are solving this model hosting issue by either hosting an Open Access model, maybe with
a hit in performance, or implementing a really expensive kind of private version of OpenAI, something like that. And if you don't have that budget, or if you don't know about GPUs or how to host models, you're kind of out of luck in a lot of ways. Not only do I agree with you, but I think that's going to proliferate in terms of the challenges there. I know, speaking for myself and another friend that I talk to about this stuff a lot, we are experiencing the fact that as model updates come out, models come out, they have different strengths and weaknesses. There are some things that I might, for instance, go to GPT-4 for; there are other things I might go to Bard for now. And those are just two; there's a whole bunch of open source ones that we were starting to talk about. And with the acknowledgement of, for instance, OpenAI has kind of acknowledged that there is a practical limit in terms of how much data you can feed a model, and that we need to start looking at other dimensions on that. So with practical limits cited, the commercial advantage, for instance, may hit that ceiling, and open source ones will gradually catch up. And so you're seeing the relationships of utility for a user between different models changing on a regular basis, and us users having to make adjustments to that. How does that play into the landscape? Because if you're an organization and you're trying to make investments: do we bet on OpenAI and Microsoft? Do we bet on Google? Do we bet on open source options? What are the options there? What are the different capabilities that might be available to us for doing that? And acknowledging up front that Prediction Guard may be one of those, what does the rest of the landscape look like, how does Prediction Guard fit into that, and what are some of the pros and cons that you see? I'll decouple a couple of these things and talk about the general landscape, and then Prediction Guard. So in
terms of this problem of the hosting, compliance, privacy, IP leakage, that sort of thing: I think if you're a company of a certain size and you can afford kind of a private OpenAI setup in Azure, it's probably a pretty reasonable solution. It will definitely work very well, but it's going to be very, very costly, and again, it's not going to solve this structuring and usage of the output of language models problem, so you're going to have to put additional engineering effort into building layers on top of that which work for your business use cases. You could bet on certain Open Access models right now, but like you said, things are advancing so quickly, it's hard to say "I'm going to put all of this effort into one, and hosting of the one, and build a system around it." I do think there's advantages, if you're going that route, to centering your infrastructure around model-agnostic workflows, like those in LangChain or others, where you actually abstract away the model interface and can connect to multiple large language models with a lower switching cost than if you have a one-off solution centered around a certain model. So I think there's some things people could be encouraged about there. In terms of that, though: if you think, okay, now I'm going to go all in on these Open Access models, and like you say, these models have different characters, so I'm going to want to host maybe multiple of them and generate these model-agnostic workflows on top of LangChain and other things, you start to really add up the engineering effort to make this happen. A parallel might be: I could create a data visualization solution for my company by assembling a database and hosting that, making the connection into a layer that would run Plotly plots or something like that, and then maybe some UI for my users that those are embedded in, and all of a sudden I'm now talking about an absolute fortune
in engineering costs and support costs over time, which is why products like Tableau or others exist. I remember a long time ago, I don't know how much people are still using it, one of the companies I was at was using Domo. This is one of these solutions where you quickly suck in data and visualize it and all of that. There's a reason why those products exist. So Prediction Guard you could think of as taking the best of open source models and the best of this kind of control and structuring of output, which we haven't talked about yet and can get into here in a second, and assembling those together in an easy-to-access and cost-efficient manner, so people can get quality output out of the latest large language models that's structured and ready to be used in business use cases, and also, if you want it, with a guarantee around using only specific models that are hosted in a compliant way, even compliant in a certain way, like a HIPAA-compliant way, or in a data-private sort of way where your data isn't leaked if you're putting data into models. So that's kind of how the landscape works, and how Prediction Guard works as this kind of system that assembles the best of large language models with structured and typed output that can be deployed compliantly, without this whole huge engineering effort to roll your own system. You mentioned structured and typed output; can you talk a little bit about that? Because I think for many of us listening, we're used to using the models that are out there in the default interfaces on the web, using ChatGPT, using Bard, and we're not really dealing with that. We get an output, but we're not at the level of sophistication where we're doing APIs and such as that. Can you talk a little bit about what structured output looks like when you're dealing with it from an API standpoint, and how you unify that
landscape? There's a lot of use cases where this may come up, but let's take one for example. Let's say you're doing data extraction. You have a database with a column in it, and this scenario has happened at every company that I've been with, so I know it's very common: there's some database with a table in it, and there's a column that's like a comments column or something, and it's just text blobs in there, notes from people, or technician messages, or user messages, or whatever it is. It's not structured, and you want to run a large language model over that to extract, maybe it's phone numbers, or prices, or certain classes of information out of this column. Well, you could run your large language model and set up a prompt that says: give me the sentiment of each of these pieces of text in my database. Well, each time you run that prompt through a large language model, maybe once it generates an output that says " positive sentiment" with a leading space, and the next time it creates an output that says "positive", and the next time it creates an output that says "this is positive sentiment". You can start to see there's a consistency problem here: how do I parse all of these strange outputs from my large language model? You can do a little bit of prompt engineering to get around that, but ultimately it doesn't solve the problem that you could have all sorts of weird output out of your large language model. So ultimately what you would want in that scenario is a system that lets you constrain and control what types of output you're going to get out of your large language model. So in the case of sentiment, maybe I want to restrict my output to only POS, NEG, and NEU tags for sentiment. There's only three choices; I always want one of those three. I don't want it to say "this is positive sentiment". So I want to actually structure or control the output of my large language model
to produce one of these outputs. Another example that's maybe a little bit more complicated would be to say I actually want to output a valid JSON blob out of my large language model, or valid Python code. These are structures that are very well defined, but you could have all sorts of variability coming out of your large language model. And if you want a specific type coming out of your large language model, maybe it's a float that you can do greater-than on, or add to another number, you need that as a typed output, or you need very specific structured output, to actually make automated decisions in your business. And so with Prediction Guard, what we're doing is assembling the best of the recent advances in this kind of control and structuring of output, and layering it on top of these open source large language models, to allow you to say: here's my prompt, I'm going to send it to these five open and/or closed models (we support OpenAI as well), and for each output I want you to give me a float number. That's the sort of rich output you can get from large language models very quickly with a Prediction Guard kind of prompt, because you can control the models that you're using, either ones that are more privacy-conserving or the closed source options, and provide constraints around the output that allow you to actually make business decisions on it. Now, there's additional checks that could go along with that, like factuality checks and toxicity checks, which we also implement, but I've vomited up a lot of information, so I'll pause here. No, no, that sounds fascinating. The way I'm interpreting what you're saying is that you have these kind of software filters creating boundaries, if you will, on how you structure input and what that output can be, so it's usable. Which kind of goes back to one of the points
that we're often talking about on the show, which is that the AI is, to some degree, inseparable from the software that you're using it within. And so you have a best-of-breed software product that's kind of shaping and constraining what that can be, so that it's actually usable. So as we look forward at where things are going: what are some of the problems you see coming in the space, and what are some of the things you would like to see Prediction Guard start to address in the time ahead? I don't mean so much the far distance; you're busy putting this solution together now, and what you already have works pretty darn well. What are some of the challenges when you're in this kind of fast-moving space? Because you're having the world change out from under you on a week-by-week basis right now. Yeah, I think one of the things that's really at the forefront of our mind is ease of use and accessibility to both data scientists and developers. The reality is, I think we had Kristian Lum on the podcast talking about this, the majority of data scientists out there are super constrained in the time they have to put into one of these solutions. So it's really, really important that there is an ease of use to this sort of controlled, compliant LLM output and generative AI output. Now, what we're seeing, and I want to acknowledge this as well, is that there are an increasing number of open source projects doing an amazing job at digging into this problem of controlled and guarded LLM output. These are things like Guardrails, and Guidance from Microsoft, and Matt Rickard's ReLLM, or regex LLM. These projects are doing amazing things in terms of really flexible ways for you to control the output of large language models. But I see this as a bit of a double-edged sword: the more flexible you become,
it's also possible to become less easy to use, and there's more engineering involved in it. Yeah, I saw Matt Rickard tweet about this, related to his ReLLM project, that sort of famous quote about regex: "I had a problem, so I decided to use regex; now I have two problems." That's been around for a long time. Yeah, it's actually so true. And some of these solutions are coming up with their own query languages to deal with this structured output, which I think is great and really important, but there's a need for an abstraction layer on top, where I know kind of what I want my output to look like, so I should be able to plug that into something and have it constrain the output of my large language model in an appropriate way. So with Prediction Guard, what we've started with is presets for structuring your output: I want integer, and float, and JSON, and Python or YAML, I want categorical output. These are things we support now, along with supporting these hosted models and access in a guarded, controlled way to those models. But let's say that I have a really specialized format that I want to work with. I would rather set up a solution with Prediction Guard, and this is actually what we're actively working on, where people could give examples of the structure they want, and we actually generate the right constraints for them on the large language model output, which I think is very possible; our initial work on this, which is in a beta form, is really good. So let's say I want a specific JSON with these specific fields, or a specific CSV output with these specific columns. I should be able to give a few examples of that and generate the right underlying constraints for my large language model, without the user having to think about special languages, or regexes, or context-free grammars, or these things that are a
little bit harder to grasp. We'll handle that bit for you, and you just get the right structured output from your models. So that's part of where I see us headed: leveraging these rich systems being produced under the hood, using context-free grammars, special query languages, regex, all of these things to structure output, and combining those in a more automated way for users, where they can just say "here's my examples, here's my query", and they just start getting the right formatted output from their language model. So thing one is this automation of some of the problem and the constraints. I think thing two would really be around the validation and checking of output, in addition to the structuring. Right now we support factuality and toxicity checks on the output of large language models. Could you talk a little bit about what each of those are? Yeah, yeah. So let's say that I take a big piece of text and generate a summary, or do a question-answer prompt and get an answer. It doesn't mean the answer is factual, right? And we all know about the hallucination problems of these models. So we have implemented two things in Prediction Guard with respect to that. The first is a factuality checking score, which is built on trained models under the hood that look at a reference piece of text and your text output to determine a likelihood of the answer being factual. So this is an estimate of the factuality of your output. The other thing we're doing around hallucinations and factuality is making it really easy for people to do consistency checks. I kind of alluded to this earlier, but we have all of these different language models accessible under the hood, so you could combine the outputs of Camel 5 billion, MPT 7 billion, Dolly, and OpenAI, restrict the output, and say: give me the answer, but only if all of these agree on what the output is. If all of them don't
agree, then I'm going to flag that as not a reliable output. And so you can actually gain a lot by not just leveraging one model, but by assembling these models together to do a check. The toxicity thing is something that's been studied for a while, and there are state-of-the-art models out there for detecting whether an output is toxic or not, or includes hate speech or not, that sort of thing. So this is another layer of check you can have on the output. And so if you put the whole Prediction Guard pipeline together: you've got models under the hood which can be deployed compliant with HIPAA, or just data privacy; the structured or typed output that you can define very easily; and then you can run additional checks on that output for factuality, toxicity, and consistency, as a final sort of layer in the pipeline towards the output that's used in a business application. I appreciate the explanation; it's a very robust-sounding pipeline that you have there. Let me ask you this, and this could be whether it's Prediction Guard or the larger field. One of the challenges, certainly something I've been playing with but don't have a good rhyme or reason to yet, is that with the proliferation of these models coming out, and ever more coming, we know this space is going to get larger and larger. How does a user, or how would a system like Prediction Guard, determine which is the right way to go, in terms of which model you want to choose, or which group of models? You talked about the comparisons a moment ago; how do you structure the input and know that you're going to get what you need from an output, by putting the right model or collection of models together, and then knowing how to evaluate them against each other? Does that make sense? Yeah, yeah, that makes sense. Actually, early on when we were building the Prediction Guard back end, this was front of my mind, and has
since evolved a little bit. Given the fact that there's all of these models and I want to choose the right one for my use case, you can very much automate that process, and it's actually still implemented in the Prediction Guard back end, where you can give some examples and evaluate a whole bunch of models. I think where this is headed, though, and where the Prediction Guard system is headed, is making it easier for people to get output from multiple models in a typed way, because they know how to do the evaluation; they're familiar with that sort of thing. Whether you're a developer doing integration tests or unit tests, checking and asserting certain values, or you're a data scientist running a larger-scale test against a test set, people kind of know what they want to do there. What they need is an easy way to get that typed output from multiple models. So if I have a test set and I'm comparing scores on the output, like float numbers, I need to get float numbers out of a whole bunch of different large language models, to compare them to my baseline or to my test set. Right now that's very difficult, because all of these different structuring, guidance, and control systems don't work for all models, they don't work in the same way for all models, and you have to implement them for all of the models. So it becomes this compounding problem to figure out how to do that. How we're approaching that with the Prediction Guard system is: there's a standardized API to all of these different models, along with the typed and structured control on the output. So I can do a query that says, give me the float output for these 100 prompts using these five models, and then I'll just compare all the float outputs and figure out which is the best. That's not the hard problem; it's getting that structured output from the variety of
models in a robust and consistent way that's actually the more difficult problem. Gotcha. Is it fair, as we're talking about this, it sounds like you're also solving one of the bigger challenges we've talked about over time, which is that there's so much domain expertise needed in the AI space in terms of being able to manage models. But if I'm understanding you correctly, it sounds like with some basic software skills, knowing how to use APIs and such, you can probably, without deep expertise in deep learning, manage to get some fairly productive output through Prediction Guard by implementing it that way. In other words, it becomes just another part of your software workflow. Is that a fair characterization of what I'm saying? I would say it is, in the sense that there's still some sort of integration testing and integration that will have to happen regardless. But going back to my example before, of the data visualization stack: it's a lot harder to implement the database and the visualization layer and the front end than it is to log in. There's still configuration needed in a Domo-type solution or Tableau; it's just a lot more accessible. So here we have the language models hosted on the back end; we have the structured, guarded way to query those models via something that all developers know how to use, a REST API or a Python client (maybe there'll be other clients over time); and you have the ability to configure that in the way you want: I want output from these five models, I want to ensemble them together, or I want this structured output. So there's still configuration, and I think developers and data scientists want that; it's just really hard to get all the other pieces in place, and we're hopefully making that a lot easier. So let me ask one final, I think this is an aspirational question,
but I'm kind of curious. One of the things we've seen with large language models is the ability for people who aren't even developers, I was saying developers who aren't even deep learning experts, to have a certain amount of capability producing code. That kind of avenue into a no-code world has at least been started; it has a lot of maturing to do, obviously. Do you envision a point where someone with very limited skills can also use Prediction Guard in this way, and be able to generate apps using large language models that then feed into a more mature workflow like what you've described? Do you think that's attainable at some point in the not-so-distant future? It's hard to say how far this kind of automation will go. I think a lot of the agents that we've seen produce good demos, right? But they have an additional layer of problems around automating the various steps of the process. I think that, in terms of what we're looking at, this sort of automated structuring of output is a step in the right direction: I don't have to define a special query language or a special specification, but I can say what sort of structure I want output in, and that gets output. I think then, if you layer that on top of the agent sort of infrastructure that's in LangChain, and the data augmentation, we just had the episode with Jerry from LlamaIndex, which is super fascinating, so if you layer the structured, guarded output with the chaining, agent, and automation capabilities of LangChain, and maybe the data augmentation of LlamaIndex, I think a lot of things become possible. I hope that some of the things you mentioned become possible. It's yet to be seen, but I am really encouraged that adding in this sort of type safety for outputs, and structuring of outputs, gives a lot more confidence maybe in some of
the checks that you could do on AI agents over time, and that increases our confidence in releasing AI agents on various parts of the workflows we'd like them to work on. Right. So you've sort of already covered some of the territory, but for our listeners: Daniel and I, when we're talking to a guest, will often finish with what we roughly call the future question, kind of waxing poetic about where things are going. And so, Daniel, knowing that I've kind of hit some of that already, what would you be asking yourself? You've had me throwing these questions at you from a point of somewhat ignorance compared to where you're coming from, as the expert on it. What right now would you ask yourself that you haven't covered, that you think is worthy of getting in before the episode is over? I'm putting you on the spot. Yeah, I've mentioned Open Access models quite a bit, and I think hopefully a lot of us are encouraged by the direction that's going, that these models are getting better and better. But one thing that maybe I would ask myself, or that I think is important to highlight and encourage people with, is this: these Open Access models might not quite be at the level of OpenAI, Anthropic, etc. yet, but I think not only will they get there, but already, where we're at now, with some of these structured control elements around Open Access models, you can actually boost the performance of Open Access models to be more in line with OpenAI-level output. Because what you can do is say: well, I'm going to force my output to this, and if I'm not able to produce it, I can re-ask the question, or try a variant of my prompt. And these kind of wrapping layers around Open Access models actually provide a way for you to operate in a data-private, compliant way with Open Access models, and boost their
performance closer to what these kind of closed, and maybe more suspect in terms of IP leakage and that sort of thing, systems are doing. So I think that's an encouragement I've found recently, and I hope it's encouraging to others: we are really seeing a proliferation of these models, and they're all going to have a little bit different character, but the way we wrap them and the way we present them provides the majority of the value of those models. And I think we'll see not only Prediction Guard but other systems as well coming out that wrap these models and use them in really intelligent manners that boost their performance, in a way that isn't reliant on sort of a centralized API. I appreciate that. I think you're right, and I am deeply appreciative of you not only telling us about Prediction Guard, but actually laying out the space. Even if someone is not champing at the bit the way I am to use Prediction Guard, they hopefully understand what some of the problems are that need to be addressed, whether by you or others out there. So thank you for allowing me to twist your arm and do this episode today. I appreciate you letting me go there. So anyway, thank you very much to my good co-host and my guest today for coming on Practical AI. Thanks so much. [Music] Chris, Prediction Guard is waiting for you to check it out at predictionguard.com. There you'll find all the features, pricing info, and docs to get started. The link is also in your show notes and chapter data for easy clicking. Thank you for listening to Practical AI. Help us help more people by sharing the show with your friends and colleagues; word of mouth is still the number one way people find podcasts they love. Thanks once again to our partners Fastly, Fly, and Typesense for helping us bring you awesome pods each and every week, and to Breakmaster Cylinder for producing all the beats on all Changelog
podcasts. That's all for now. We'll talk to you again next week. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Data augmentation with LlamaIndex | Large Language Models (LLMs) continue to amaze us with their capabilities. However, the utilization of LLMs in production AI applications requires the integration of private data. Join us as we have a captivating conversation with Jerry Liu from LlamaIndex, where he provides valuable insights into the process of data ingestion, indexing, and query specifically tailored for LLM applications. Delving into the topic, we uncover different query patterns and venture beyond the realm of vector databases.
Leave us a comment (https://changelog.com/practicalai/224/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Jerry Liu – Twitter (https://twitter.com/jerryjliu0) , GitHub (https://github.com/jerryjliu)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• LlamaIndex Docs (https://gpt-index.readthedocs.io/en/latest/)
• LlamaHub (https://llamahub.ai/)
• LlamaIndex Blog (https://medium.com/llamaindex-blog)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-224.md) | 29 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist and founder of a company called Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing very well, enjoying this fine springtime weather of LLMs. Yes, the spring LLM bloom, I guess. That's right, that's right. Well, I don't even think we can use the word "bloom" because that's loaded now. Yeah, I was going to say that has a whole different meaning. There's no word that's not loaded with some sort of AI meaning at this point. Yeah, we should just go straight to our guest, including llamas, which we're excited about today: we have with us Jerry Liu, who is co-founder and creator of LlamaIndex. Welcome, Jerry. Yeah, thanks Daniel and Chris for having me; super excited to be here. Yeah, I'm really excited because we've had a few conversations in the past, and I've used LlamaIndex in some of my own work and also tried some integration stuff with various data sources, so I'm really excited to hear a little bit more of the story and the vision behind the project. If I'm just reading from the docs, LlamaIndex is about connecting LLMs, or large language models, with external data. So maybe a first question, kind of a general question not specific to LlamaIndex necessarily, is: why would one want
to connect large language models with external data? Yeah, it's a good question. For those of you who are already in the space of LLM application development this might sound obvious, but for those of you who might be still somewhat unfamiliar: large language models have a lot of different capabilities. They're really good at answering questions, doing tasks, summarizing stuff; basically anything you throw at them, like generating a short story or writing a poem, they can do. The default mode of interacting with a language model like ChatGPT is that you write stuff to it in a chat-like interface; the query hits the model and you get back some output. I think one of the next questions people get into, especially as they're trying to explore building applications on top of large language models, is: how can this language model understand my own private data? Whether you're a single person or an entire organization. These days there are a lot of different ways of trying to incorporate new knowledge into a language model. The models themselves are trained on a giant corpus of data, and so if you're an ML researcher your default mode is: how can I train this model on more data so that it can try to memorize this knowledge? The algorithm there is basically some form of gradient descent on the weights, or some other fancy ML algorithm, that actually encodes the knowledge in the weights of the model itself. I think one interesting thing about large language models these days is that instead of training the model, you can take the model as-is and figure out how to have it reason over new information. For instance, use the input prompt as the space to feed in new information, tell it to reason over that data, and to answer
questions over that data. I think that's very interesting, because you can take the model itself, which has been trained on a variety of data but doesn't necessarily have inherent knowledge about you as a person or your organization's data, and tell it: hey, here's some new data that I have; now, given this data, how can I answer the following questions? This is part of the stack that a lot of people are discovering these days, where you can just use the language model itself as a pre-trained service and then wrap that in an overall software system to incorporate your data with the language model. Cool. Yeah, and your project is called LlamaIndex. Before the past few months or six months or whatever, when I was thinking about indices or an index, one of the things that first came to my mind was: oh, I have a database maybe, and there's an index that I use to query over that database. Some of that is a little bit like fuzzy magic to me in terms of how it actually works at the lower level in a database. But what is this idea of an index, or indexing, in the context of LlamaIndex, or in the context of data augmentation for large language models? It's kind of funny. When we first started, the name was a bit more of a casual naming convention; it used to be called GPT Index, and I kind of made up the name because it sounded roughly relevant to what I was building at the time. But over time, especially as it's morphed into more of a project that people are actually using, this concept of the index has become a bit more concrete, and so I can articulate it a bit better. The idea of LlamaIndex, just to step back and talk about the overall purpose of the project, is to make it really easy and powerful and fast and cheap to connect your language models with your own private data, and we have a few constructs to do so
within LlamaIndex. Part of the way you can think about LlamaIndex is: how can we build some sort of stateful service around your private data, around something that at the moment is somewhat stateless? The language model call is a stateless service, because you feed in some input and you get back some output. So how can we wrap that in a stateful service around your own data sources, so that if you want to ask a question or tell the LLM to do something, it can reference the state that you have stored? If you think about any sort of data system, there's the raw data that's stored somewhere in some storage system; there might be indexes or views, similar to a database analogy, where you can look at the data in different ways (and I can talk a little bit about how that works); and then there's usually some sort of query interface that you can use to query and retrieve the data. If you look at a SQL database, you have the raw data stored in some tables, you can define different indexes over different columns, and then the query interface is a SQL interface: you run SQL and it executes the query against your database. There are a lot of roughly similar concepts that apply to thinking about LlamaIndex itself as this tool set, because we're building this stateful service on top that can integrate with large language models. By the way, to clarify, we're not really solving the storage part; we integrate with a ton of different vector storage providers, and we integrate with other databases too. But even if you think about us as some sort of data interface or orchestration: there's raw data which needs to be stored somewhere, and so if you have a bunch of text documents you need to store them in a vector database, or
MongoDB, or S3, all those types of things. Then you can define these different indexes on top of this data, and the way we think about indexes is: how do we structure your data in the right way so that you can retrieve it later for use with LLMs? The set of indexes you can define is actually pretty interesting; it's basically a set of data structures that offer a view of your data in different ways. Then you wrap that in an overall query interface that can use these indexes on top of your data to do retrieval and LLM synthesis and give you back a final answer. So I would look at this in terms of the components of the overall system: if you're building this stateful service, there are these three components: how do you address the storage of the raw data, index it, and then query it? So I want to actually pull you back for just a moment as we're learning this. If you're an app developer and you're interested in creating a stateful service, and you've started going down the path of: well, there's the old-school way of doing a SQL query and all that, and now we're using LLM models and adding our data to them. I know we've gone a bit beyond that already, but can you back up and talk a little bit about what you're getting if you're the app developer who's listening to this and trying to understand why you would go down that path? I sense that there's value there, but we haven't talked about it versus a robust set of SQL queries on your own data. Why would you bring in that large language model in the beginning? What is it bringing to bear that's worth all of that effort? Could you talk a little bit about that baseline value add? Yeah, that's a really good question, and I think I might have jumped the gun a little bit
so I appreciate you bringing me back. No worries; it's because you're excited, as are we. But I also want to make sure that people listening have a chance to truly understand it the same way that you do. Definitely. I think one thing about language models that's very powerful is their ability to comprehend unstructured text and also natural language, and this matters both in terms of how you can store the data and how you query the data. Because now, let's say you're the end user: you can just type in a natural-language question, in English, ideally into this interface, and get back a response. The setup is way easier than having to learn SQL over some source of data, or having to code up a very complex pipeline to try to parse the data in different ways, because you can treat the language model itself as a black box: feed it something, get something out. I think that by itself is a very, very powerful tool, and these days people are trying to figure out what you can do with that tool. Another illustrative example of the power of language models as an intelligent natural-language interface is that you actually don't have to do a ton of data parsing when you feed in the data. For instance, let's say you have a PDF document, or any sort of Microsoft Word document, or even an HTML web page. Just copy and paste the entire thing: extract the text from it, dump it into the input prompt, and then tell the LLM: hey, here's this giant blob of text I copied over; now, given this text, can you please answer the following question? The crazy thing is the language model can actually do that, assuming it fits within the prompt space. That's also very powerful, because this kind of affects the way you do ETL and data pipelining in the
traditional sense. If you had a bunch of this unstructured text, you'd have to spend either manual effort or write a complicated program to pull out the relevant bits from the text, parse them into some table, store it, and then run SQL or some other query over this text. Whereas here, with the power of language models, you can store this text in a messier, unstructured format, as raw natural language, and still figure out a way to pull out this unstructured text, dump it into the input prompt, and ask a question over it. Is it conceivable, with what you were saying, if I'm thinking as an app developer about diving into this: I'm hearing you say you're going to do this, which is an additional thing to learn, an additional skill set that you're adding on, but I also hear you talking about other things that I used to have to do that maybe I don't have to do anymore. To some degree, is it realistic to say that from an effort standpoint it becomes a wash once you have the skills, or maybe even that you're gaining more power and doing less work along the way, so that it's kind of like: of course you would do it going forward? Is that a fair way of thinking about it? Yeah, it's an interesting way of thinking about it, because I think the high-level question is just: what parts have become easier and what parts have gotten harder once you have this language model technology? On one hand, things have gotten a bit easier and more powerful; you can build these expressive question-answering systems with less effort. You take in this giant blob of unstructured text, figure out how to store it, feed it into the language model, and all of a sudden you can ask questions over these files that you couldn't really do before with more traditional AI technologies or just manual
programming. That said, I think this new paradigm involves its own set of challenges that I'm happy to talk about. There are a lot of stacks emerging around how to make the best use of language models on top of your data. There's some very basic stuff happening these days, but there's also more advanced stuff that we're working on, and I do think it's very interesting to think about the technological challenges that are preventing us from unlocking the capabilities of language models. Because again, with a very basic stack, and you can see this if you just play around with ChatGPT, you can already get a ton of value from your data by doing some very basic processing on top of it, and you can start asking questions you couldn't really ask before. But with some more advanced capabilities, once you're solving some more interesting technical problems, what are the additional queries you can ask on top of your data that you also couldn't do before? Before we jump in, so I want to dive into the weeds about the two things you talked about: how do I index my data, how do I query my data, and all the goodness around that in LlamaIndex. Before we do that, maybe just to set the stage for some people coming into this and parsing some of the jargon that's thrown around: one of the other things people are really diving into is thinking about how do I engineer my prompts, how do I chain prompts together, and all that sort of thing. Could you highlight, because at least the way I would phrase it, those two things are complementary with the things you're doing with LlamaIndex; could you help people understand how those pieces fit together in terms of architecting one of these systems? I guess in the end, LLM application
development, to put it in a very oversimplified view, is just some fancy form of prompt engineering and prompt chaining. It's actually not super different from how we're thinking about building this interface with data. Just as a very basic example, if you're coming into the space fresh, a very basic prompt that you could put into a language model is something like the following: "Here is my question:" and then you put the question; and then "Here is some context:" and in that context variable you dump all the context that could be relevant to the question. You copy and paste a blog post, you copy and paste API documentation, just copy and paste it into the input prompt space, and at the bottom say: "Given this context, give me the answer to this question." You send it to a language model and you get back an answer. That's the most basic question-answer prompt you could use to perform some sort of question answering over your data, and it really is just prompting, because you're putting stuff into the prompt: you have this overall prompt template and you have variables that you want to fill in. I think one interesting challenge that arises is: how can you feed in context that exceeds the prompt window? For GPT-3 it's 4,000 tokens; for Anthropic I guess it's 100,000 tokens. But if you look at an Uber SEC 10-K filing, it's something like 160,000 tokens. So if you want to ask a question like "What's the summary of this entire document?" or "What are the risk factors in this very specific section of the document?", how do you feed that entire thing in so that you can answer the question? I think that's where things get a little bit more interesting, because you can basically do one or more of the following things. One is: you could have something
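The basic question-answer prompt template Jerry describes, a template with a context variable and a question variable, can be sketched in a few lines of Python. This is an illustrative sketch, not LlamaIndex's actual prompt API; `build_qa_prompt` is a hypothetical helper name.

```python
# Minimal sketch of the basic QA prompt pattern: a template with two
# variables (context and question) that gets filled in and sent to an LLM.
QA_TEMPLATE = (
    "Here is some context:\n"
    "{context}\n\n"
    "Given this context, give me the answer to this question:\n"
    "{question}\n"
)

def build_qa_prompt(question: str, context: str) -> str:
    """Fill in the template; the result is what you'd send to the model."""
    return QA_TEMPLATE.format(context=context, question=question)

prompt = build_qa_prompt(
    question="What is LlamaIndex about?",
    context="LlamaIndex connects LLMs with external data.",
)
print(prompt)
```

The whole "copy and paste a blog post into the prompt" workflow is just filling in the `context` variable here, which is why Jerry calls it all "fancy prompt engineering."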
separate from the language model prompt that's actually doing retrieval over your data, to figure out exactly what the best context is to fit within the prompt space. Two is: you can use synthesis strategies to synthesize an answer over a long context, even if that context doesn't actually fit into the prompt. For instance, you could chain repeated LLM calls over sequential chunks of data and then combine the answers together to give you back a final answer; that's one example. In the end, all this architecture is designed around being able to feed some input to the LLM and get back some output, and the core of that really is prompting; part of this is just developing an overall system around the [Music] [Applause] [Music] prompting. Well, Jerry, you had mentioned these three levels of integrating external data into your LLM application: there's data ingestion, and there's indexing and query. I'm assuming data ingestion has to do with, say, connecting to the Google Docs API and pulling the data over, and then indexing and query build on top of that. But before we dive into those second two phases, which is where I think a lot of the cool stuff you're doing is found, what should we know about the data ingestion layer in terms of its relevance to how LlamaIndex builds on it? The data ingestion side is just the entry point to building a language model application on top of your own data: LLMs are cool, I want to use them on top of some existing services; what are those services and how can I load in data from them? One component of LlamaIndex is a community-driven hub of data loaders called LlamaHub, where we offer a variety of data connectors to a lot of different services. I think we have over 90-something different data connectors now, and these
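The second strategy above, chaining repeated LLM calls over sequential chunks and combining the answers, can be sketched as a simple refine loop. This is a generic illustration of the idea, not LlamaIndex's actual synthesis module; `call_llm` is a placeholder for a real model call.

```python
# Sketch of the "chain repeated LLM calls over sequential chunks" strategy:
# answer a question over text too long for one prompt by refining the
# answer one chunk at a time.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model here.
    return f"<answer based on: {prompt[:40]}...>"

def synthesize_over_chunks(question: str, chunks: list[str]) -> str:
    """Refine an answer across sequential chunks of a long document."""
    answer = ""
    for chunk in chunks:
        if not answer:
            # First chunk: answer from scratch.
            prompt = f"Context:\n{chunk}\n\nQuestion: {question}"
        else:
            # Later chunks: ask the model to refine the existing answer.
            prompt = (
                f"Existing answer: {answer}\n"
                f"New context:\n{chunk}\n\n"
                f"Refine the answer to: {question}"
            )
        answer = call_llm(prompt)
    return answer
```

Each chunk fits in the prompt window on its own, so the model can effectively "read" a document far larger than its context limit, at the cost of one LLM call per chunk.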
include file-format connectors, for instance PDF files, HTML files, PowerPoints, even images. They include connectors to APIs like Notion, Slack, Discord, Salesforce (actually, sorry, we don't have Salesforce yet; that's something we want). Yeah, it'd be very useful. If you're interested in contributing a Salesforce loader, please, I would love that. And then the next part is being able to connect to different sorts of multimodal formats, like audio and images, which I think I've already mentioned. So the idea here is: you have all this data, it's stored in some format, it's unstructured; it could be text, or it could even be images or some other format. How do you load in this data in a pretty simple manner and wrap it with some overall document abstraction? There's not a ton of tech going on here; it's more of a convenience utility for developers to easily load in a bunch of data. And again, going back to the earlier point, the reason there's not too much tech is that LLMs are very good at reasoning over unstructured information, so you don't need to do a ton of parsing on top of the data that you load to get decent results from the language model. Once you load in this data in a lightweight container, you can then use it for some of the downstream tasks like indexing and query. Awesome. Yeah, and I see this is, I think, where things get super interesting, like I mentioned. So in LlamaIndex, I'm in the docs right now, you mention list and table and tree and vector store and structured store and knowledge graph and empty indices. Could you describe generally how to think about an index within LlamaIndex, and then why are there multiple of these and what categories do they generally fit in? One way of thinking about this is just taking a step back at a high level: what exactly
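The "lightweight container" idea, load raw files, wrap each in a document object, can be sketched like this. The `Document` class and `load_text_files` loader here are illustrative stand-ins; LlamaHub's real loaders are much richer (PDFs, Notion, Slack, and so on).

```python
# Minimal sketch of data ingestion: read raw files and wrap each in a
# lightweight Document container, ready for downstream indexing and query.
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Document:
    text: str
    metadata: dict = field(default_factory=dict)

def load_text_files(directory: str) -> list[Document]:
    """Read every .txt file under `directory` into a Document."""
    docs = []
    for path in sorted(Path(directory).glob("*.txt")):
        docs.append(
            Document(text=path.read_text(), metadata={"source": path.name})
        )
    return docs
```

As Jerry says, there's deliberately not much tech here: the value is that the LLM can reason over the unstructured `text` directly, so the loader only needs to get the raw content into a consistent container.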
does the data pipeline look like if you're building an LLM application? We started with data ingestion, where you load in a document from some data source, like a PDF document or an API, and now you have this unstructured document. The next step, typically, is to chunk up the text into text chunks. Naively, let's say you have a giant blob of text from a PDF: you can split it every 4,000 words or so, or every 500 words, into some set of text chunks. This just allows you to store the text in units that are easier to feed into the language model, and a lot of this is a function of the fact that the language model itself has limited prompt space, so you want to be able to chunk up a longer document into a set of smaller chunks. Now you have these chunks, and they're stored somewhere. They could be stored, for instance, within a vector database such as Pinecone, Weaviate, or Chroma; they could also be stored in a document store like MongoDB, or on the file system on your local disk. Now that they're stored, the next part is: how do you actually want to define some sort of structure over this data? A basic way of defining structure over this data, and this is where we get into indices, is just adding an embedding to each chunk. So if you have a set of texts, how do you define an embedding for each? That in itself can be treated as an index: an index is just a lightweight view over your data, and the vector index is just adding an embedding to each piece of text. There are other sorts of indexes you could define for this view over your data: there's a keyword table we have, where you have a mapping from keywords to the underlying text; you could have a flat list, where you basically store a subset of node IDs as its own index. Before I get into the technicals of the
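The naive fixed-size chunking step described above, split a long text every N words, is a one-liner in Python. Real text splitters also overlap chunks and try to respect sentence boundaries; this sketch only shows the basic idea.

```python
# Naive fixed-size chunking: split a long blob of text every `chunk_size`
# words, so each chunk fits comfortably in the model's prompt window.

def chunk_text(text: str, chunk_size: int = 500) -> list[str]:
    words = text.split()
    return [
        " ".join(words[i : i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]

chunks = chunk_text("one two three four five six seven", chunk_size=3)
print(chunks)  # → ['one two three', 'four five six', 'seven']
```

Each chunk is then what gets embedded and stored in the vector database or document store mentioned above.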
indexes and what they actually do, one thing to maybe think about is: what are the end questions you want to ask, and what are some of the use cases you'd want to solve? Before you dive into that, I was going to ask you really quick: could you define what an embedding is, for those people who are learning large language models at this point, just so they'll understand what it is when you say you're defining that as the index? Embeddings are part of a very common stack that's emerging these days around this LLM data system. An embedding is just a vector of numbers, usually floating-point numbers; you could have a hundred of them or a thousand of them, depending on the specific embedding model. The way an embedding works: think about this list of numbers as a condensed representation of the piece of content that you have. If you somehow, in a very abstract manner, take in some piece of content, let's say a paragraph about the biography of a famous singer, and you get an embedding from that, it's a string of numbers. The embedding has certain properties such that this string of numbers is closer to other numbers that are semantically about similar content, and farther away from strings of numbers representing text that is farther away in terms of semantic content. So for instance, the biography of a singer is going to be pretty close to the biography of another singer, versus if it's about, I don't know, the American Revolution or something like that, the embedding will probably be a little bit further away. It's a way of condensing a piece of text into a vector of numbers that has mathematical properties where you can measure similarity between different pieces of content. Maybe this is another point of distinction, and I get all these questions very often, so I
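The similarity property of embeddings that Jerry describes, singer biographies landing close together, the American Revolution far away, can be illustrated with cosine similarity over toy vectors. The numbers below are invented for illustration; in practice an embedding model produces vectors with hundreds or thousands of dimensions.

```python
# Toy illustration of embedding similarity: semantically similar content
# gets vectors that are close together; dissimilar content is far apart.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

singer_a = [0.9, 0.1, 0.0]  # pretend embedding: "biography of singer A"
singer_b = [0.8, 0.2, 0.1]  # pretend embedding: "biography of singer B"
history  = [0.1, 0.2, 0.9]  # pretend embedding: "the American Revolution"

# The two singer biographies score much closer to each other than either
# does to the history text.
assert cosine_similarity(singer_a, singer_b) > cosine_similarity(singer_a, history)
```

This distance measure is exactly what a vector index uses at query time: embed the query, then fetch the stored chunks whose vectors score highest against it.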
think it's useful to discuss them on the show. Like, last week at ODSC I got a lot of these sorts of questions. We're talking about bringing in data, creating an index to access that data; that index might involve a vector store or embeddings, but LlamaIndex is not a vector store. It's cool to be a vector database company right now, but LlamaIndex is something different, and again, these are two things that are complementary, I think. Could you draw out that distinction a little, just to help people formulate those compartments in their mind? These days there are a lot of vector store providers, and they handle a lot of the underlying storage components. If you look at a Pinecone or a Weaviate, they're actually dealing with the storage of these unstructured documents. One thing we want to do is leverage these existing storage systems and expose query interfaces, a broader range of query interfaces beyond the ones directly offered by a vector store. For instance, a vector store will offer a query interface where you can typically query the set of documents with an embedding, plus a set of metadata filters, plus maybe some additional parameters. We're really trying to build a broader set of abstractions and tools, through our indices, our query interfaces, plus other abstractions we have under the hood, to perform more interesting and advanced operations and manage the interaction between your language model and your data; almost to be a data orchestrator on top of your existing storage solutions. So we do see ourselves as separate, because we're not trying to build the underlying storage solutions; we're more trying to provide a lot of this advanced query-interface capability to the end user, using the power of language models on top of your data. I think we
got a little bit off track, but I think it was good. So, circling back to the indices that are available in LlamaIndex: you've talked about this pipeline of processing, and potentially one index being a vector store, and maybe listeners are a little more familiar with vector search or semantic search or that sort of thing with everything that's going on. But you have much more than that: these other patterns and these other indices that enable other patterns. Could you describe some of those alternatives or additions to the vector store index, and when and how they might come into play? Yeah, that's a good question, and maybe just to frame this with a bit of context, I think it's useful to think about certain use cases for each index. The thing about a vector index, or being able to use a vector store, is that they're typically well suited for applications where you want to ask fact-based questions. If you want to ask a question about specific facts in your knowledge corpus, using a vector store tends to be pretty effective. For instance, let's say your knowledge corpus is about American history, and your question is: hey, what happened in the year 1780? That type of question tends to lend itself well to using a vector store, because the way the overall system works is: you take the query, you generate an embedding for the query, you first do retrieval from the vector store to fetch back the chunks most relevant to the query, and then you put those into the input prompt of the language model. The set of retrieved items you get back would be those that are most semantically similar to your query by embedding distance. So again, going back to embeddings: the closer different embeddings are
between your query and your context, the more relevant that context is; the farther apart they are, the less relevant. So you get back the most relevant context for the query, feed it to a language model, and get back an answer. There are other settings where standard top-k embedding-based lookup, and I can dive into this in as much technical depth as you'd want, doesn't work well. One example where it typically doesn't work well, and this is a very basic example, is if you just want a summary of an entire document or an entire set of documents. Instead of asking about a specific fact, like what happened in 1776, maybe you just want to ask the language model: could you give me a summary of American history in the 1800s? That type of question tends not to lend itself well to embedding-based lookup, because you typically fix a top-k value when you do embedding-based lookup, and you get back very specific context; but sometimes you really want the language model to go through all the different contexts within your data. So a vector index, storing the data with embeddings, creates a query interface where you can only fetch the k most relevant nodes. If you store it instead with, say, a list index, you could store the items as a flat list, so when you query this list index you actually get back all the items in the list, and then feed them to our synthesis module to synthesize the final answer. So the way you do retrieval over different indices actually depends on the nature of those indices. Another very basic example is that we also have a keyword table index, where you can look up specific items by keywords instead of through embedding-based distance.
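The top-k versus fetch-everything distinction Jerry describes here can be sketched without any library at all. This is a minimal illustration with toy three-dimensional "embeddings" and made-up documents (all data below is hypothetical, and this is not LlamaIndex's actual API), just to show why a fixed top-k is good for narrow fact lookups but wrong for whole-corpus summaries:

```python
import math

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "document store" of (text, embedding) pairs -- hypothetical data
docs = [
    ("The war ended in 1783.",        [0.9, 0.1, 0.0]),
    ("The constitution was drafted.", [0.2, 0.8, 0.1]),
    ("Trade expanded in the 1800s.",  [0.1, 0.2, 0.9]),
]

def vector_index_query(query_emb, k=1):
    # Vector index: top-k embedding lookup, suited to fact-based questions
    ranked = sorted(docs, key=lambda d: cosine(query_emb, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def list_index_query():
    # List index: return every node, so a summary step can see all context
    return [text for text, _ in docs]

fact_context = vector_index_query([1.0, 0.0, 0.0], k=1)  # one relevant chunk
summary_context = list_index_query()                     # the whole corpus
```

In a real system the embeddings come from a model and the retrieved context is fed into an LLM prompt; the point here is only that the two index types expose fundamentally different retrieval behavior over the same stored data.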
Keywords, for instance, are typically good for cases that require high precision and somewhat lower recall: you really want to fetch specific items that match the keywords exactly. This has the advantage of retrieving more precise context than vector-based embedding lookup can. The way I think about this is that a lot of what LlamaIndex wants to provide is an overall query interface over your data, given any class of query you might want to ask, whether it's a fact-based question, a summary question, or some more interesting question; we want to provide the toolset so that you can answer those questions, and the indices, defining the right structure for your data, are just one step of this overall process of achieving that vision of a very generalizable query interface over your data. Some examples of the different types of queries we support: there's fact-based question lookup, which is semantic search using vector embeddings; you can ask summarization questions using our list index; you can run structured queries, so if you have a SQL database you can run structured analytics over it and do text-to-SQL; you can do compare-and-contrast queries, where you look at different documents within your collection and at the differences between them; you could even do temporal queries, where you reason about time, going forwards and backwards, and basically say, hey, this event actually happened after this other event, so here is the right answer to the question you're asking. So a lot of what LlamaIndex provides is a set of tools, the indices, the data ingestion, a query interface, to solve any of these queries
that you might want to answer. So Jerry, you've really got me thinking about this; the possibilities of the query schemes are pretty darn cool. We started with ingest, moved into indexing, and now we're talking about queries. Could you give me an example with the tool at a little more of a practical level? You've hit the concepts of what's possible, but as someone who hasn't used the tool myself, I'm trying to get a sense of the workflow. Pick what would probably be a really common query scheme and dive into it a little, to give us a hands-on, fingers-on-keyboard sense of it, because I want to know where to go play after we finish the episode; I want to try it. 100%. One thing that has popped up pretty extensively after talking to a variety of users is financial analysis. Looking at SEC 10-Ks tends to be a pretty popular example; if you look at the Anthropic Claude example, they also use SEC 10-Ks. My guess as to why it's popular is, one, there's just a ton of text, so it's very hard to parse if you read it as a human; and two, it's useful for people in financial institutions or consulting, because you want to compare and contrast the performance of different businesses and look at performance across years. Believe it or not, I actually read 10-Ks a lot, and that would be a really useful example for me; I'm not kidding you. As a result, we've actually been playing around with it a decent amount too. Some of the cool things we're showing that LlamaIndex can do on top of your 10-Ks: for instance, let's say you have two companies, say Uber and Lyft, for the year 2021. You can actually
ask a question like: can you compare and contrast the risk factors for Uber and Lyft, or their quarterly earnings, across these two documents, one the Uber 10-K and one the Lyft 10-K? This is actually an example where plain top-k embedding-based lookup fails. If you ask "compare and contrast Uber and Lyft," don't do anything else to the query, and your Uber and Lyft documents are all in one vector index, you don't really have a guarantee that you'll fetch the context needed to answer this question thoroughly. Then the model might hallucinate, you'll get back the wrong answer, and it's just not a good experience. What you typically want is some nicer abstraction layer on top of this query that can map it to a plan, roughly how a human would think about answering the question. Say you want to compare and contrast the financial performance of Uber and Lyft in 2021. Well, first: what was the financial performance of Uber in 2021? What was the financial performance of Lyft? You break it down into those two questions, then for each question you query the respective index, say an index corresponding to Uber and an index corresponding to Lyft. Get back each answer, for instance the actual revenue for Uber and for Lyft, then synthesize both at the top level: pull in the individual components extracted from each document and synthesize a final response that can compare the two. That's an example of something we can actually do pretty well with LlamaIndex, and we have a variety of toolsets for it. And that's an example of a query that's kind of
more advanced, because it requires comparisons beyond just asking things over a single document. Another example, to take the 10-K analogy further: say you have the yearly reports of the same company across different years, say from 2018 to 2022. You can ask a question like, did revenue go up or down over the last three years? Then you can follow a very similar process: given the query interface we provide, break the question down into sub-questions over each year, pull out the revenue, and at the end do a comparison step to see whether it increased or declined. Just as an aside to any listeners wondering why on earth somebody would read 10-Ks, especially considering that our audience is focused on data: if you want to learn about another technology company, really understand what it does, and be able to compare it, this is an example where you can gain tremendous intelligence on a company from publicly available information. By comparing multi-year 10-Ks, like you just said, you'll learn way more about that company than its own employees know about it. So anyway, just thought I'd mention that as an aside. Yeah, I look forward to hearing about your success speeding up your workflows around reading the 10-Ks, Chris, with LlamaIndex. I'm excited about this; it's going to save me a lot of time. Jerry, one of the things we touched on in one of our previous conversations, which I know you've thought very deeply about, and even have a portion of the LlamaIndex docs and functionality devoted to, is evaluation: query response evaluation. How do I know my large language model barfed up a good answer, based on some query, where I pulled in some external data, inserted some context, and maybe strung a few things together? Like,
how am I to evaluate the output of that? Could you give us, from your perspective, a high-level view of how you think about this evaluation problem, and then go into some of the things you're exploring in that space? Yeah, totally. Just to preface this: we are super interested in evaluation, tailored towards this interface of your data with LLMs, and I can dive into that a bit more. We have some initial evaluation capabilities, but we're very community oriented; there are a lot of different toolsets out there for doing different types of evals over your data and building nice interfaces for doing so, and this is an area of active exploration and interest for us as well. Thinking about this a bit more deeply, though: evaluation is very interesting, because there is the evaluation of each language model call itself, and then there is the evaluation of the overall system. At a very basic level, if you have a language model, you have an input and you get back some output, and you can try to validate whether that output is correct. Given a single language model call, did the model actually give you the correct answer for the input? Did it spit out garbage? Did it hallucinate? That type of thing. The interesting thing about a lot of systems emerging these days is that they're really systems built around a repeated sequence of language model calls. This applies whether you're dealing with the more agent-based frameworks, where you ask a question and the system can repeatedly do ReAct-style chain-of-thought prompting or pick a tool, but the end result is that it gives you back a response. Another example is AutoGPT, where you let it run for five minutes and
it just keeps doing things over and over again until it gives you back something. Even in the case of retrieval augmented generation, which is just a fancy name for roughly what we're doing with LlamaIndex, a query interface over your data, there can be a sequence of repeated LLM calls within our system, but the end result is that you send some input into the system and get back some output. Given this high-level system, how do you evaluate the input and output properly? In traditional machine learning, you typically want ground truth labels for every input you send in. If you ask a question, you want to know the ground truth answer, compare the predicted answer to it, and see how well they match up. That's still something people are exploring these days, even in the space of generative AI and LLMs: you have ground truth text and predicted text, and you want some way of scoring how close the predicted text is to the ground truth. The core set of eval modules we have within LlamaIndex are actually ground-truth free, or label free, and that in itself is very interesting. You have an input, you ask a question, you get back a predicted response, and you also get back the retrieved sources, the documents themselves. What we found is that you can make another LLM call to compare the sources against the response, and also compare the query against the sources and the response, to see how well all two or three of these components match up. This doesn't require you to specify the ground truth answer; you just look at the predicted answer and see if it matches the query or the context, in a separate LLM call. And it's interesting because, one, it
makes use of LLM-based evaluation, which is an interesting way to think about it: basically using the language model to evaluate itself. I'm sure there are downsides, which we can get into, but a lot of people are doing it these days. And two, it doesn't require any ground truth, because you're using the language model to evaluate the quality of its own answer plus context; you don't need to feed in the actual answer as a human. The benefit is that it saves a lot of time and cost, because you don't need to label your entire dataset to run evals. I still think this overall space is relatively early. There are still some big questions around latency and cost if you're trying to do LLM-based evals more fully: using an LLM to evaluate a large dataset takes a lot of time and costs a lot of money. So this is generally an area we're still actively thinking about. Yeah, that's awesome. As we get near the end here, I know things are progressing so quickly; I can't keep up with all of your tweet threads about the awesome new things happening in LlamaIndex, but I know there's a lot. As you look to the next year, where is your mind at? What do you really want to dive into, and what's really exciting to you about the field? There are a lot of people excited about a lot of different things, but from your perspective, having been in the trenches building large language model applications and interacting with users of LlamaIndex, what really excites you moving into this next year, in terms of the real practical possibilities on the horizon and how our development workflows will be changing in the near future? Yeah, totally. I think there are a few related components to this that I'm both excited about as well as
the challenges we're going to solve. Probably the first component is just being able to build this automated query interface over your data. Looking at all the query use cases that we solve, one of the key questions we keep going back to is: here's a new use case on top of this data, and here's a new question you'd want to ask; how do we figure out how to best fulfill that query request? Especially as your data sources become more complicated, you have to think about how to index and structure the data properly, about interesting automated interactions at the query layer between the language model and your data, and about how to make sure we fulfill the request. The second component is, once we build an interface that can handle any type of query you throw at it, how do we do this in a way that is cheap, fast, and easy to use for a lot of users? Once people move beyond the initial prototyping phase, they start thinking about minimizing cost and latency, and about picking the best model for the job. There's the OpenAI API, which generally works well; they probably have the best model out there, but it can also be quite slow. Then there are these open source LLMs popping up, probably a few new ones every week, and how do users make the best decision about whether to use those over their data? And the next part of this is that a lot of LLM development is moving in this overall trend of automated reasoning. If you look at agents and tools and AutoGPT and all this, it's about how you make automated decisions over your data. As a consequence, there's always going to be this trade-off between how few constraints we can give it
and how many: should we give it more constraints or fewer? With fewer constraints, it has more flexibility to potentially do far more things, but it will also just make mistakes, there's really no way to correct them easily, and you can't really trust the decisions. Whereas if you constrain the outputs of these automated decision makers or agents, you can potentially get more interpretable outputs, maybe at the cost of a little flexibility in functionality. We've been thinking a lot about that with respect to the data retrieval and synthesis space too: how can we give you back results that are expressive, but also perform well and aren't going to make mistakes a ton of the time. Awesome. Yeah, well, I'm really happy we had the chance to talk through all the great LlamaIndex things. Make sure, if you're not following Jerry on Twitter, to find him there; he posts a lot of great material about what they're working on. And of course you can find LlamaIndex if you just search for it; there's a great set of docs and all those things. We'll include those links in our show notes so people can get to their docs and their blog and all the good things. So check out LlamaIndex, and thank you so much for joining us, Jerry; it's been awesome. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And thanks to our beat freak in residence, Breakmaster Cylinder, for
continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time.
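The sub-question decomposition Jerry walked through in this episode, break a cross-document question into per-document sub-questions, answer each against its own index, then synthesize a comparison, can be sketched in plain Python. Everything here is a hypothetical stand-in (the figures, the `indices` mapping, and the stub functions are made up; real systems would make retrieval and LLM calls at each step), and only the control flow mirrors the approach described:

```python
# Per-company "indices" with made-up revenue figures (hypothetical data).
indices = {
    "Uber": {"revenue 2021": "$17.5B"},
    "Lyft": {"revenue 2021": "$3.2B"},
}

def decompose(question, companies):
    # Map one comparison question to a sub-question per company.
    # A real implementation would ask an LLM to produce these.
    return [(c, "revenue 2021") for c in companies]

def answer_subquestion(company, subq):
    # Stand-in for querying that company's index with retrieval + an LLM.
    return indices[company][subq]

def synthesize(parts):
    # Stand-in for the final LLM synthesis step over the sub-answers.
    return "; ".join(f"{c}: {a}" for c, a in parts)

question = "Compare Uber and Lyft revenue for 2021"
parts = [(c, answer_subquestion(c, sq))
         for c, sq in decompose(question, ["Uber", "Lyft"])]
report = synthesize(parts)
```

The design point is that each sub-question only needs to retrieve from one document's index, which is exactly what plain top-k lookup over a single merged index cannot guarantee for a compare-and-contrast query.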
Creating instruction tuned models
At the recent ODSC East conference, Daniel got a chance to sit down with Erin Mikail Staples to discuss the process of gathering human feedback and creating an instruction-tuned Large Language Model (LLM). They also chatted about the importance of open data and practical tooling for data annotation and fine-tuning. Do you want to create your own custom generative AI models? This is the episode for you!
Leave us a comment (https://changelog.com/practicalai/223/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Erin Mikail Staples – Mastodon (https://mastodon.social/@erinmikail) , Twitter (https://twitter.com/erinmikail)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Label Studio (https://labelstud.io/)
• Slides from Erin’s recent PyData talk on RLHF (https://docs.google.com/presentation/d/17GDvbYAf3SfuNTRNzbuX5We3mVKBwnAE-wX5I1XOP6w/edit?usp=sharing)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-223.md)
Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. Hello, this is Daniel Whitenack. I am here on site at ODSC East in Boston, the Open Data Science Conference, and I am super excited because I get to sit down with Erin Mikail Staples, who's a developer community advocate at Label Studio. And, yeah, what do you think of the conference so far, Erin? First off, I'm a super fan of what you've been doing, and of anybody who's creating things out there in the space, especially with the current zeitgeist and explosion of interest in AI and machine learning. It's a little crazy, it's a little wild. I would be lying if I didn't say I'm newer to the field myself, but it's something I've been very fascinated by. All that being said, this conference is really cool for seeing the breadth of people here: people who are very new to the industry, people who came just to learn more for the first time, and people who have been practicing for years and years, for whom this is their third or fourth time at ODSC. I'm also really interested in the number of people concerned about data integrity here. Yeah, lots of interpretability, integrity, and reliability type talks. Yeah, lots of reliability; another one is on missing data, and how we approach these problems, especially with the rise of foundational models and
generative AI: how does that impact things for the long run? Those are crucial conversations to have, I think. Yeah, definitely. And what sorts of different players in the space are you seeing at this conference, both in terms of open source and different kinds of offerings, like MLOps platforms, that sort of thing? How do you see that developing? First off, I'm personally a huge fan of open source; it's not only how I learned to code in the first place, but I'm a big believer in the ecosystem. I'm a huge believer in open data; I'm a participant in Open Data Week. And you're wearing a PyLadies shirt, which is awesome. Yeah, I'm a member of PyLadies. So again, I think it's super important to have all these things in the ecosystem, but one thing that stands out is that there are so many new innovations that, if you're starting a tech stack from ground zero, it's really fun to see all the different players in the game. Selfishly, working at Label Studio, one of the best things about being in the space right now is that we're a cool platform because we can integrate with so many different data types, which means I get to play with almost every other tool or workshop or player in the ecosystem, which is selfishly fun; it means I get more things to integrate with or build on. As always, we're huge fans of what the Pachyderm team is doing; we work very closely with them. We've got a lot of friends and fans in the DVC crew; they're not here at this conference, but we did get to work with them at PyCon, and it was really amazing to see the work they're coming out with. The Conda crew is always fun to see around, so that's always exciting. Cool, yeah. There are so many awesome things going on; I've seen maybe three or four open source packages that I don't know if I've been ignoring or just haven't heard about, so that's
one of the fun things about coming to these events. I know you also gave a recent talk at PyData Berlin about reinforcement learning from human feedback, I believe, which is definitely a key topic these days with all the instruction-tuned models coming out. Could you tell us a little about the general pitch or angle of that talk? What were you thinking about there? Yeah, one of the cool things is that it was a talk Nikolai and I gave; Nikolai is the CTO and one of the co-founders of Label Studio. What we did in Berlin was really expand on this idea that, yes, these generative models, these larger models, are becoming the norm. Yesterday I was talking to someone who said, I got interested in AI because I made a thousand things with Midjourney, and I'm like, cool. I'm a believer: I don't care how you got into it; just the curiosity to show up to a conference and learn more is fascinating. But explaining to someone how it works, and also explaining the best practices behind it, is really important. Personally, I have a journalism background, a liberal arts background, and I think it's really important that we incorporate the humanities into technology for the long run. When it comes to reinforcement learning, all of these large generative models can be made just a little bit better with the human signal that we can provide. We can talk about things like prompt engineering, which is a whole other topic, but it will never be as good as retraining on your own dataset with subject matter experts, or for a specific use case or condition you're trying to target. Yeah, and one of the things that's been on my mind recently is this topic of
reinforcement learning from human feedback, especially with what's gone on with ChatGPT and all. Sometimes it feels out of reach for day-to-day data scientists. I could leverage such a model, but what is the tooling around reinforcement learning from human feedback? How could I use that framework, or tools around it, to improve my own models or my own work? How could I connect my domain experts' input and preferences into a system I'm designing? Do you have any thoughts there? Yeah. One of the examples I love to point to is actually what Bloomberg did, probably in early April now. They took the financial data they had at Bloomberg; many of us know Bloomberg from Bloomberg News, but there's also the financial terminal used for stock trading, and they have massive amounts of financial data. How do they build on top of it? How do they access that data even faster and train for the best use case they have? Our current larger models can't do that; they're not experts on financial data; they're not combing just financial data. But what Bloomberg did is retrain and build their own model. I probably fangirled over it, sorry if you're on the Bloomberg team, I fangirled over it at PyCon, because I was like, this is the coolest thing ever. I use it as an example; I also learned machine learning off of your repo, okay, thanks, bye. But we do have a model: if you want to learn and see reinforcement learning in action, there is an open source repo built by myself, Nikolai, and Jimmy Whitaker, who we have as a data scientist in residence at Label Studio and Heartex, but who is also at Pachyderm as well. All of it is open; you can play around with it. It's based off of GPT-2 right now, so you can go have some fun and
get your hands dirty, and it's all runnable within a Google Colab notebook. That's awesome. So you mentioned it being runnable in a Google Colab notebook, which I think is great, and using a bit of a smaller model to start with. We've seen a lot of movement towards smaller open models that are accessible to data scientists, with LLaMA and other things like that. How do you see that trajectory going, and how will it impact the day-to-day practitioner in terms of what they're able to do with this sort of technology? I'm actually going to zoom out to answer this: the biggest thing we need to think about is context. What are you using a model to solve, or AI to solve, or ML to solve? The more I've been diving into these conferences and the ecosystem, especially at a blended conference where you have folks who are not necessarily deep in the field, or not ML practitioners, or new to ML, there's a meme I always point to: oh, we're an AI-backed so-and-so, and it's like, JK, we're just calling the API and putting a nice, pretty, shiny front end on it. Which is no shade to anybody putting a front end on a GPT API, no shade at all. But think about what you need a model for in the first place, or what you want to use machine learning for; that context is so important. I'm currently playing around with a Naked and Afraid dataset, just for fun; there's an open source dataset out there that is... oh, that's awesome, like videos? No, it's context from the TV show: how many days they survived. Oh, so literally, yeah, statistics about the features of the different survival situations. Yeah, it's things like country, their name, gender, and then how many days they made it. Yeah, and
climate. Yeah, that's so intriguing. I watch, so, confession, I also watch the Alone show, which is another survival show. I'm a terrible reality TV junkie; that is how I decompress. So I always wonder, I have this conversation with my wife, could I do it? And maybe with a model trained off of your survival data set I could say, I'm from here and this is my background: could I survive? I don't know.

And I can't take credit for the original data set. It is someone I've made friends with in my reality TV subreddits, so now you know where I spend my time. He keeps the database, it is actually very good, he's very awesome about updating it, and it's available on Reddit. I can share it with you and you can post it in the links. I'm just playing around with the data set for fun, but in this context, I'm building demos, having some fun, teaching myself some new skills. I don't need a large foundation model for that.

And going back to your original question: all these models are getting smaller and more accessible, we can run them in a notebook, we don't need the high-powered compute every single time. If we stop and think about the context of the problem we're trying to solve, it can give us a lot of answers, and it can save us time, energy, and computing power. That's why I get really excited about being on the data labeling side. Again, I have a background in the humanities, I'm a self-taught programmer, but I don't want to say we need more people like me in data science; we need more of the humanities in data science, because we're missing the context.

Yeah, we recently had a guest on the show who was talking about the intersection of art history and computer science, and how computer scientists who are analyzing and doing computer vision could
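A data set like the one described, rows of country, gender, climate, and days survived, lends itself to quick exploratory analysis. Here's a minimal sketch with made-up rows; the field names and values are illustrative assumptions, not the actual community-maintained data set:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows modeled on the show's stats; these values are
# invented for illustration, not taken from the real data set.
rows = [
    {"name": "A", "gender": "F", "climate": "jungle", "days": 21},
    {"name": "B", "gender": "M", "climate": "jungle", "days": 14},
    {"name": "C", "gender": "F", "climate": "desert", "days": 10},
    {"name": "D", "gender": "M", "climate": "desert", "days": 12},
]

# Group survival days by climate and average them.
by_climate = defaultdict(list)
for row in rows:
    by_climate[row["climate"]].append(row["days"])

averages = {climate: mean(days) for climate, days in by_climate.items()}
print(averages)
```

The same grouping works for any categorical column (gender, country), which is exactly the "could I survive?" question: condition on your own features and look at the distribution of days.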
actually learn a lot from what we know about art: how scenes are composed, how art has changed over time, and how the features they're engineering are connected to some of those things. So yeah, I think there are a lot of different areas where this could apply, and domain experts are so important. And I assume that with all of this reinforcement from human... reinforcement learning from human feedback, I always mess it up. It's okay, I've been doing the same thing. It's like, R-L-H-F, I get it.

Especially since you're from the Label Studio side: could you give a general picture or workflow for people? Say I want to take one of these models, GPT-2, LLaMA, MPT, whichever one it is, but I also want to gather some domain expert feedback and eventually get to some type of instruction-tuned or fine-tuned model off of that. Could you give a general picture of what that looks like in today's world?

Yeah, and I always feel like this is better when you have a whiteboard and a diagram and some arrows. Oh, for sure. It's hard, but I'll do a quick walkthrough. First off, you'll create a sort of prompt; typically these models work with a prompt, and then you're given a large language model, and then you start to train it. Usually what happens when you're training these models is you get a set of two outputs. In this case we can use "What is a possum?", because we're possum fans at Label Studio, I feel like that's natural. You can get "A possum is a marsupial creature" or "A possum is a great character for memes." Technically both of those are correct, but depending on context, and this is where that human signal side comes in, one answer is more correct than the other. So if we were training, let's say, a possum meme bot, a meme bot generator, let's go that direction, we'll have some fun with it, we
would take the latter answer, "a possum is a great animal to make memes," and that would be the better answer. If we were going for what type of animal it is, maybe a biology homework assignment, we'd probably pick the marsupial one. This just gives insight into how the details you give your annotation team can really directly influence the model. That's the labeling side.

When we move on, all of this is put through the result from human feedback: your answers are ranked. I did a binary situation, so just two options, but you can have a multitude of options that you put in. It is all weighted, and it is then looped back around, I wish I had the whiteboard, to a reward or preference model. This reward or preference model kind of tells you, hey, I probably want to go for answers that look like this. Now, computers don't speak memes or marsupial or biology textbooks, but they do know patterns and trends, which is what they pick up on. So based on the context clues that we give them, this preference model will start to prefer those types of answers.

Now, it's really important that these reward or preference models also hold in place the original things that we had, that the model keeps what it knows, like how language is structured, and other things from our original model that we liked: language is always structured like this, here's a proper noun, we like to capitalize the first letter of sentences, things that are important but that we kind of overlook sometimes when talking about generative language models.

After that, we want to make sure that we're not just gaming a system. Models aren't sentient, they're kind of just math and numbers, and they're just trying to game a system. I always compare it to Moneyball, I'm a baseball fan, so it's Moneyball: you're statting
out the system. In order to make sure they're not just giving you what you want to hear every time, you'll have to calculate an error rule in there. So you put in an error metric, or an update rule, and it basically says, all right, we're going to dunk you down a little bit so you're not too perfect, and that'll prevent unwanted model drift. Then once you've done that a few times, you combine that with a copy of your original model that you had, again doing those checks and balances, making sure it doesn't run away. After that you will have a tuned language model, and then rinse, wash, repeat until you've got that model right where you want it. Set it off to production, and then talk to your friends in the other parts of your MLOps ecosystem, and it'll come in handy.

Awesome. And I hope that we can link some of your slides from that talk in our show notes. They're awesome, including emojis, the full deal, which helps. So make sure to check out the show notes; the link to the slides will be in there, so you can take a look at these figures while you're listening to the show.

One follow-up question on this. We were talking about gathering this feedback data, and people can think about, okay, in the context of my company, or where I'm working, I'm going to gather some of this data, tune a model. But what is your perspective on the open data ecosystem? What would you encourage people to think about in terms of data that they could make openly available to help others who are also trying to do this, or, the other way around, for people searching for a place to start? What does the open data ecosystem look like right now, and how important is it as this field advances?

Yeah. First off, you've got me on my other favorite soapbox of the moment, and this goes back to my days when I was a journalism student working in
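The loop just walked through, rank paired answers, fit a reward model to those preferences, then penalize the tuned model for drifting too far from the original, can be caricatured in a few lines. This is a toy sketch of the ideas (a pairwise Bradley-Terry-style preference update plus a drift penalty), not the actual Label Studio repo or a real RLHF trainer; all names and numbers here are illustrative:

```python
import math

# Toy "reward model": one score per candidate answer, trained from
# pairwise human preferences (chosen vs. rejected).
scores = {"marsupial answer": 0.0, "meme answer": 0.0}

# Human rankings for a meme-bot use case: the meme answer was preferred.
preferences = [("meme answer", "marsupial answer")] * 20

lr = 0.5
for chosen, rejected in preferences:
    # Probability the reward model currently assigns to the human choice.
    p = 1 / (1 + math.exp(-(scores[chosen] - scores[rejected])))
    # Gradient step: push the chosen answer's score up, the rejected down.
    scores[chosen] += lr * (1 - p)
    scores[rejected] -= lr * (1 - p)

assert scores["meme answer"] > scores["marsupial answer"]

# The "dunk you down a little bit" step: when tuning the generator, the
# reward is penalized by how far the tuned model drifts from the original
# model, which is what keeps grammar and structure intact.
def penalized_reward(reward, tuned_logprob, original_logprob, beta=0.1):
    drift = tuned_logprob - original_logprob  # per-token KL-style term
    return reward - beta * drift

# A big reward is worth less if the tuned model drifted far to earn it.
print(penalized_reward(2.0, tuned_logprob=-1.0, original_logprob=-3.0))  # 1.8
```

In real RLHF the reward model is itself a neural network and the update is done with PPO, but the shape of the loop, preferences in, penalized reward out, is the same.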
journalism. Open data is one of my favorite topics to geek out on. Basically, it was something that really came as part of the Obama administration: he established federal funding for a lot of our public and civic data as part of government accountability and transparency, so there were federal grants that went out to make a lot of our civic data public. There's a really cool example, I believe it's the city of Philadelphia, that actually built a SimCity-like game off of their public data. It's so cool, it was grant-funded, super fascinating. I'll link it; it'll be in the plethora of show notes.

Open data is just openly, freely accessible, freely usable data made available to the public. I love open data, I'm a participant in Open Data Week. But when it's been federally funded... it's not always the best thing to be federally funded. We all know how government grants go, and if you aren't aware how government grants go: they're very niche-specific, they run out, they're not always maintained, and it's not always the cool, sexy job. So these data sets aren't always the best maintained or the most applicable in context.

What a lot of these open data sets have done is give opportunities for people like myself to even learn how to do data science. I learned Python in Open Data Week. I remember going back, like, let's get the traffic data in New York City, basic stuff, using curl, getting things started for the first time: can you query an API? They're not the most organized data sets out there, they're not the most clean; sometimes you get some really messy garbage data. The 2020 census is actually a great example. I was speaking to someone yesterday at the conference about this: the 2020 census was the first
time that we were able to do it digitally. Well, she gave the example of, you know, hey, I started the census on my phone, oh no, the pot boiled over, oops, I accidentally counted myself twice in the census. Or I didn't fill out my address, or now I've got two people, or a person who lives at this address, or a typo. Crap, now that's a very messy data set. So open data can be a problem.

Let's go to the practical application of this. If you are working in open data, or you are interested in getting more involved in open data, one of my favorite practices is: if you're publishing a story, making tutorials, making content, put your data out there, and put out how you processed it. It's not just one thing to put your data out there, but also how you processed it. In journalism you have this phrase: how you frame the story is how you tell the story. Leaving out details, context, or even how you came across a source can influence how the story comes across. Yeah, for sure. And we see that especially evidenced in data-driven journalism and solutions journalism, which is interesting, and it can also be really damaging to trust and reputation. I think ML runs the same risk right now if we're not transparent: here's how I prepared the data set, here's how I trained an annotator, here's the tools that I used, or here's how I obtained the data in the first place.

Yeah, and like you were saying, certain things like how you give instructions to a data annotator, or how you set up your prompt, have such an influence on the downstream performance of these things. But I've definitely found that the instructions you give data annotators are something very often left out of the story of how people tell what they did. It's like, oh, we gathered this data with these labels. Okay, well, I can imagine my own set of instructions for getting those labels, but it could
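The census anecdote, duplicate self-counts, missing addresses, typos, is exactly the kind of mess a first cleaning pass has to handle. A minimal sketch with invented rows (the field names and values are hypothetical, not the actual census schema):

```python
import csv
import io

# Hypothetical messy open-data extract: a duplicate respondent and a
# missing address, the kinds of errors described above.
raw = (
    "name,address,household_size\n"
    "Jo Smith,12 Elm St,3\n"
    "Jo Smith,12 Elm St,3\n"
    "Ana Lee,,2\n"
    "Sam Roe,9 Oak Ave,4\n"
)

rows = list(csv.DictReader(io.StringIO(raw)))

# Pass 1: drop exact duplicates ("counted myself twice").
seen, deduped = set(), []
for row in rows:
    key = (row["name"], row["address"])
    if key not in seen:
        seen.add(key)
        deduped.append(row)

# Pass 2: flag rows with missing fields for human review; don't guess.
needs_review = [r for r in deduped if not r["address"]]

print(len(rows), len(deduped), len(needs_review))  # 4 3 1
```

The point of the second pass is the transparency argument made above: a missing address gets flagged and documented, not silently filled in, so whoever reuses the data can see how it was processed.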
result in a totally different thing, with all sorts of biases and other things that go into that.

Well, I have a perfect case example of this. In January we met many of the team members at Heartex and Label Studio in a meetup. Basically we got our entire customer success and sales team, the community side of things, and a bunch of our support engineers to all sit together, and we had a data labeling competition for fun at the end. I had just finished the how-do-you-get-started-with-data-labeling and best-practices material, and I was like, easy, I'm going to kick all of your butts. I was totally going in hot. I sped through it, because speed was a metric, but so was accuracy. Well, I sped through the thing because I was like, whatever, I know the keyboard shortcuts, my systems are set up, and I had the lowest accuracy score of everybody. My data was all wrong. It was like, you failed, Erin. I was like, man, I just went and embarrassed myself after all that trash I talked.

Yeah, I think that's the other thing, and I don't know if you have any encouragement here, but for data scientists out there who have not actively participated in the data labeling process: I think that's such a learning experience, because it gives you perspective. Even if in the future you're not part of one of those processes, it gives you good questions to ask. If someone gives you a data set that was labeled, you should probably ask a few follow-up questions, like: how did that go? What did you do there?

Well, in academic research you actually have to disclose things like whether you paid your annotators, or how you prepared the annotators, when you're doing research, because that can put so much bias on a model that
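The labeling-competition scoring, comparing an annotator's labels against a gold set, and checking agreement between annotators, is easy to sketch. Here's a toy example computing per-annotator accuracy and raw pairwise agreement; the labels and annotator names are made up for illustration:

```python
# Hypothetical gold labels and two annotators' answers for five items.
gold = ["possum", "cat", "possum", "dog", "possum"]
rushed = ["possum", "dog", "cat", "dog", "possum"]    # sped through
careful = ["possum", "cat", "possum", "dog", "dog"]   # slower, closer

def accuracy(labels, gold):
    """Fraction of items where the annotator matched the gold label."""
    return sum(a == g for a, g in zip(labels, gold)) / len(gold)

def agreement(a, b):
    """Raw pairwise agreement between two annotators."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(accuracy(rushed, gold))      # 0.6
print(accuracy(careful, gold))     # 0.8
print(agreement(rushed, careful))  # 0.4
```

Low agreement between annotators, independent of gold accuracy, is usually the first signal that the annotation instructions were ambiguous, which is the point made above about instructions shaping the data set.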
it's built off of that data. And academically, you can't get peer-reviewed studies done without disclosing that information; it's part of data ethics now. One of the biggest things, and we don't talk about it enough, is how you pay your annotators, or whether you outsource your annotators, which isn't to say that's a bad thing to do. But again, we have to remember that so many of these models... I'm going to guess here, I don't know, but I'd even wonder if these smaller models that are generated at home, by people dorking around on their computers, might have more bias, because we're not training an annotator. I know when I'm goofing around with my Naked and Afraid data set, I'm not really annotating; I'm playing around in thirty-second bursts while watching YouTube videos, just seeing what's out there. I'm not doing the work, which is a problem.

Yeah. I guess bringing things full circle a little bit: we started talking about some of these players in MLOps and the ops around this process, we talked about human feedback and reinforcement learning, we talked about open data. What excites you about the trends we're seeing and the impact they could have on our industry moving forward? Maybe it's related to people who weren't able to participate in this process before, and now the tooling is better so they can, or maybe it's something totally different, around tasks or other things you see in the future. What are you personally excited about, looking forward, as you bring this stuff together?

Well, first off, I've been really impressed with what the Hugging Face team is doing, I noticed the Hugging Face shirt. The Hugging Face Spaces have been amazing. We do have a Label Studio Hugging Face Space, and the ability to get up and going in the browser has
been super awesome. There was a talk I went to at PyData Berlin about running Streamlit, running entire Python-based models and tools right in the browser, and I believe Binder is another tool doing something very similar, notebook processing all in the browser. It makes this more accessible than ever before, and it's just really exciting. I love that we have more people interested in this industry, but it's not only the interest, it's also the tools to do it correctly and ethically. And again, jumping on my soapbox here, this is why open data is so important: putting out our sources and references when we can, building in public, building in open source, and making, I don't want to say a paper trail, but a show-your-work sort of process, is really important for the future.

Yeah, awesome, that's great. As we kind of close out here, where can people find you online? And tell us a little bit about your own podcast, which sounds awesome and includes pickles.

Yeah, so I am available online as erinmikail on all the platforms, or my bio link has everything that I'm on. You can also chase me down at Label Studio, that's labelstud.io. Join the community, come hang out with me there; we have an upcoming open town hall, and we're getting into more workshops, so I'm very excited about that. I also run the Dev Relish podcast, so it's everything about DevRel, and naturally, some people made sourdough bread, I got into fermentation, so we've got a fun pickle fact and cool pickle logos, because you've got to relish the developer moments.

Well, this was definitely not a sour experience; I've relished it very much. Thank you so much for joining, Erin, it's been a great pleasure to talk to you, and I'm looking forward to following up with all the cool community stuff you've got going on. Again, people, check out the show notes. Thank you so much. Thank you so much, this was quite a big deal that we had going on here. Good one, good

[Music]

one. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next

[Music]

time
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The last mile of AI app development | There are a ton of problems around building LLM apps in production and the last mile of that problem. Travis Fischer, builder of open AI projects like @ChatGPTBot, joins us to talk through these problems (and how to overcome them). He helps us understand the hierarchy of complexity from simple prompting to augmentation, agents, and fine-tuning. Along the way we discuss the frontend developer community that is rapidly adopting AI technology via Typescript (not Python).
Leave us a comment (https://changelog.com/practicalai/222/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Travis Fischer – Twitter (https://twitter.com/transitive_bs) , GitHub (https://github.com/transitive-bullshit)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• ChatGPT Hacker Community (https://www.chatgpthackers.dev/)
• ChatGPTBot (https://twitter.com/ChatGPTBot)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-222.md) | 13 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io.

[Music]

Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist building a tool called Prediction Guard, and I'm joined as always by my co-host Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris? Doing very well. We're still in 2023, the most exciting year in AI history. It is. It's hard to keep up, but it's also sometimes hard to understand which of these cool demos and models and integrations are actually production ready, and how people are actually taking these things into production. We're really happy to have with us today Travis Fischer, who is a founder and CEO at a stealth AI startup and is focused 100% on that: delivering products with AI. We're happy to have you here. Welcome, Travis. Thank you guys, it's a pleasure to be here; looking forward to the conversation.

Yeah, well, on Twitter you posted this diagram, which I think maybe you have pinned right now, on how to use large language models effectively, and it's sort of a start-simple-to-complex scale. I found that really great, and I've actually shared that diagram with a number of people in various Slack channels: this is how you should be thinking. How did you, maybe not specifically with that figure, which we can talk about, but how did you get into thinking about how to use large language models effectively, like actually
how to build products with these models?

Yeah, it's a great question. I'll give you my quick what-I've-been-up-to for the last six months, which will answer some of this. I'm a huge fan of open source. When ChatGPT launched on November 30th, 48 hours later I released the chatgpt npm package, which was using the unofficial API, and it allowed thousands of developers to go and start building with this cool new thing. The GPT series and LLMs had been around for a while before that, but this was a step function in terms of mainstream adoption, and it really caught people's attention. After that I released the ChatGPT Twitter bot, which now has about 123,000 followers, and I run a group called chatgpthackers.dev that has about 9,500 AI developers, a whole spectrum of people: we have researchers in there, and we have prompt engineering script kiddies. I mean, my background is computer science, and I do have some formal education in machine learning, but it's not like I'm an AI expert by any means.

What has really captured me over the past year or so has been the rate of progress, trying to wrap my head around it and understand it. Because it's been moving so quickly, I've been optimizing for my rate of learning, and I personally learn best by building out in public, building open source, and just sharing what I learn as I go. I think there's a lot of complexity and terminology in AI, as you guys know, and to some degree even having a mental model of the different approaches you could start with, or how to approach solving a problem, is already a difficult starting place.

For the how-to-use-LLMs-effectively diagram, my inspiration was Andrej Karpathy, who maybe a month or two ago tweeted something about how all
these big companies are interested in using AI. They're aware that they should be using it to some extent, so they think, well, we need to hire a team of ML engineers and get on this. And the huge unlock now, with these foundation models, is that for most problems you don't need to do that. There's the production side, the practical side, and I'm sure we'll get into that, but in terms of where to start: start simple, and actually validate for your business use case that you can solve it with AI, that you understand the problem domain enough, that you have training data, and that you're actually solving a real customer problem. Starting as simply as possible with hosted foundation models is a lot of times a great way to get started and to validate quickly.

As you inevitably find points where your workflow breaks down, where you're not getting the quality, or the cost doesn't work, or you have hard constraints like security or privacy of your data, there's a kind of hierarchy of complexity I like to look at. You start with just prompt engineering at the top, and then it's about, well, how much can I reduce hallucinations, or add domain-specific context into my prompt by doing information retrieval? And then at some point you're like, well, a single prompt isn't doing it, so maybe I add some iterative process where I use another language model; there are all these techniques for multi-step prompting. But you can do all of that with a hosted model, and you can get 95% of the way there for a lot of problems and domains these days, in a way that was previously locked behind proprietary data providers, where you had to have so many resources to be able to do it. So it's really this democratizing point in the industry, at the applied AI level, that we're at right now.

And from the conversations I've been having with folks, a lot of them are
from the, conversations I've been having with, folks who are a lot of them you know, like like full stack typescript devs who, are building applications right and they, want to use AI they know it's cool they, don't know how to get started or they're, like oh I need to Learn Python I need to, train these customer models and stuff, and and like all of that is super, important and it comes into play at a, time uh or for particular types of, problems but the majority of people for, getting started like uh start simple is, the main take away from that I think, that's a great insight and I think, that's a one of the places where so many, people go wrong is jumping into too much, complexity they don't find a simple need, and potentially even don't look for, things that work just as fine that are, not AI uh in that way so love I love the, go simple and build from their, philosophy I think that's incredibly, practical I get the sense that a lot of, this sort of like chaining and like, bringing models together doing the, information retrieval it's sort of like, almost like a hacking culture around, this language model prompting which is, really cool and like that can go so so, far maybe there's like you say there's, like privacy or domain specific concerns, with like inter use cases but in your, like you mentioned the community that, you've kind of built up and you're part, of on Discord what are some of the, things that have maybe like surprised, you that you've seen that hey I didn't, even think that maybe this was possible, with just this layer of like using a, hosted model using pre-training using, retrieval whatever it is what are some, of those things that that you've seen, that kind of surprise you or maybe like, help develop your thinking around this, topic I have a few examples and one, story examples would be folks who are, taking these models and like applying it, to their personal finances right and, there's one guy in our Discord uh who's, like an ex hedge fund 
guy, and he created a very basic agent that uses a large language model, I think he's probably using GPT-4, to take unstructured information from his bank's website about his expenses, extract structured information from it, and then he can graph it, and whatever else. So there's a lot of hacking going on around this stuff; it is very, very early.

Another story of something that surprised me, and this is just a fun story: when I released the unofficial API wrapper for ChatGPT, we had this cat-and-mouse game going back and forth with OpenAI for a while. Apparently there was a group within OpenAI that was like, this is amazing, look at what the open source community is doing, they're building all this cool stuff, and there was another group that was like, well, we're going to have the official API eventually, we want to control this. So there was this back and forth, and at one point our community found a public model. It wasn't publicly disclosed, it was security through obscurity, but it was a fine-tuned chat model that ChatGPT was actually using at one point. All of the open source projects started to use this thing, and there were tens of thousands of actual real consumers at the end who were building on top of it. Of course OpenAI knew that we were doing this, I talked with one of their security engineers about it after the fact, and instead of what you would expect, just shutting it off, they switched it out with what they call Cat GPT. All of a sudden one day in our Discord we started getting hundreds of messages from users saying, I think I got hacked, I'm seeing all these meows, it responds to my thing with meows. So it goes to show... I ended up hearing from the OpenAI engineer that they were watching our Discord, taking screenshots, and laughing
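The personal-finance agent described boils down to "unstructured text in, structured records out." Here's a minimal sketch of that extraction step using plain regular expressions instead of an LLM; the statement format and field names are invented for illustration, not any real bank's output:

```python
import re

# Hypothetical unstructured text scraped from a bank page; in the story
# above, this parsing step is what the language model was asked to do.
statement = """
Mar 03  COFFEE SHOP        $4.50
Mar 04  GROCERY MART      $62.10
Mar 07  STREAMING SVC      $9.99
"""

pattern = re.compile(r"(\w{3} \d{2})\s+(.+?)\s+\$(\d+\.\d{2})")

expenses = [
    {"date": d, "merchant": m.strip(), "amount": float(a)}
    for d, m, a in pattern.findall(statement)
]

total = sum(e["amount"] for e in expenses)
print(len(expenses), round(total, 2))  # 3 76.59
```

The appeal of using an LLM here is that it tolerates format changes a regex can't, which is exactly the trade-off between brittle deterministic code and flexible but non-deterministic models that comes up next.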
their asses off at this happening. But it goes to show, one, the switching cost on these things is near zero: it's text in, text out, fairly basic. And there are entire new avenues of vulnerability, like having your model swapped out with a cat. What does security look like in this world? I just thought it was an interesting anecdote. Probably the vulnerability of all of a sudden getting meows, like, that is a possibility.

But I'm wondering, as you've spent a lot of time with these models, and you're also building products on top of these models: from your perspective, taking an LLM integration through that sort of last mile, integrated into a product, supporting users, what are the things that should be on developers' or data scientists' minds as they think about taking the step from demo to product integration, I guess would be the question?

So I like to say that absolutely everything in engineering is about trade-offs: really thoroughly understanding trade-offs, and then being able to effectively communicate those trade-offs and the pros and cons and everything. It really boils down to those two things, over and over. So let's talk about some of the trade-offs that are most important to using language models in practice. You have the most obvious one, which is quality: can I use these language models to actually perform the task that I want? Then you have often secondary but equally important trade-offs: how much does this cost to run in production? What is the latency for my use case, for the end users? How consistent and reliable is it? Is my use case fault tolerant? That's a great initial question, because we're moving from a world of very deterministic, human-driven programs to a world where the more control
you give to language models and their reasoning abilities (this is getting more into the agentic side of things), the more it becomes slightly non-deterministic, or very non-deterministic. And so the ability to have guardrails around these things, the ability to have consistency and predictability, is extremely important. One of the first questions you should ask yourself if you're thinking about integrating with LLMs is: for your particular use case, for your job to be done, for your customers, to what extent do you need 100% reliability versus, like, 99% reliability? And that may sound like a little bit, but for certain domains of problems it's everything, right? So that's one fundamental question. There are techniques, and we can talk about them, I'm sure you guys are very aware as well, for going from that 99% and adding extra nines of reliability; that's also a very active area of research, where folks are actively figuring out ways to increase the reliability of these models. But the fundamental trade-offs are quality, cost, latency and reliability. And using a hosted model is going to be great for quickly, with minimal resources, validating your use case. For a lot of those types of trade-offs it may make more sense to use a local model, and there's been this Cambrian explosion of open source large language models and other specialized machine learning models, and we're going to continue to see that proliferate. I like to think of the open source state-of-the-art as, you know, 6 to 12 months behind the proprietary versions; we'll see if that holds. But because there's essentially zero switching cost with these models, because there's just so much competition, the prices are going to keep going down over time, and we're going to see if the open source side of these models continues to get more powerful. And so for a lot of use
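The point about adding extra nines is easy to make concrete with a little arithmetic. A minimal sketch, assuming (idealistically) that failures are independent across attempts and detectable, for example by a validator that rejects malformed output:

```python
def reliability_with_retries(p: float, max_attempts: int) -> float:
    """Probability that at least one of `max_attempts` independent calls
    succeeds, given a per-call success probability `p`.

    Assumes failures are detectable (so a retry can be triggered) and
    independent across attempts; both are idealizations.
    """
    return 1.0 - (1.0 - p) ** max_attempts

# A 99%-reliable call gains roughly two "nines" per retry:
for attempts in (1, 2, 3):
    print(attempts, reliability_with_retries(0.99, attempts))
# approximately 0.99, 0.9999, 0.999999
```

Under those assumptions each retry multiplies the failure rate by (1 - p), which is why simple validate-and-retry loops are such a common first technique, at the cost of extra latency and spend on the failing fraction of calls.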
cases where you're dealing with, well, maybe I need ultra-low latency on device; or maybe cost is a factor and I need to be running in my own data centers; or maybe, after a certain point, once you've validated your use case, you want to fine-tune and distill the model down and have a really locked-in, checkpointed, "this is completely unit tested, this is an evaluated version" of things. And I think we're at the stage right now where there's so much hype, and so many people building AI applications and demos, and that's great, right? Just getting it out there, proliferating through open source, through Twitter, whatever it is, this is awesome. But the version of that last mile and the productionizing concerns really needs to dive deep on all of these fundamental trade-offs that I'm talking about, and the hosted models versus local models, and fine-tuning, distillation; they all become really important very quickly. [Music] So, before the break, as we were talking about these different characteristics that affect applied AI and affect deployment, I was really taken by the fact that so many of them are not really AI-specific. You could almost argue that applied AI, in so many ways, is about software, it's about the systems, it's about cloud, it's about all these other things blended together to produce solutions that are productive in the world and have value for people and organizations. We talked about unit testing and stuff like that. What is your thinking around the integration of all those things? Because the model itself, to your point about hype, still kind of gets all the attention, and it is amazing what we're doing, but to make this stuff work in real life there are all these other concerns. There are so many cool things in 2023 happening on the model side, such that the other 99% needed to make it real kind of
gets overlooked. When you're working with people around understanding how all this fits together so that they can do that, how do you frame it so that their attention gets on the right things, and their budgets are properly allocated to attend to all the things? I've seen organizations really struggle with that, because they go into it with hype, focusing on just the model, building skill sets and budgets around the model, and then they try to figure out the whole thing with clouds and deployment afterwards, and they have a hard time. How do you navigate that, given the hype cycle that we're operating in? My first piece of advice, for your particular use case, your job to be done, whatever business use case you're solving, is to keep in mind that AI, like all software, is a tool. It may be a really shiny tool, it may be a tool that is evolving very quickly in front of us right now, it's a very powerful tool, but it is a tool to solve a business use case and a problem for humans. So rooting the framing in that, I think, is very important. The second thing I'll say is, a lot of AI right now, and especially the stuff that gets a light shined on it out in the open, because the application layer is so new and there's so much low-hanging fruit... you know, as you said, we need to have more emphasis on the engineering rigor under the hood. So one practical piece of advice there is to really focus on an evaluation set for your particular use case. You might have existing data, you might have existing input-output pairs for your particular example, or you might not. But starting from "this is what the end user is going to see", and then working backwards from that to think about how I'm going to use language models or other expert-focused machine learning models to solve that, I think is very important, because that also gives you a grounded North Star
that... like, so much of the prompt engineering and tuning of these models is based around "well, I think this is going to work better", or "I eyeballed it on this one example and it seems to work", right? But really apply some fundamental engineering rigor at that level, where you have an evaluation set that you can track, that you can improve over time, and not just tracking the quality of these models, but tracking the other trade-offs in terms of pricing, latency, recall; there's a whole slew of trade-offs that can matter depending on your particular use case. And then the other piece of practical advice I would give is that diagram of the ladder of complexity I was referring to before. Every time you take a step down that ladder, from using a hosted model with just prompting, to some type of information retrieval embedded in the context, to having multiple chains of prompts, to fine-tuning a hosted model or fine-tuning a local model, and at the very, very bottom, building your own model, it adds engineering complexity; it's going to make your solution more complex to maintain. So really have a good handle on how you can start simple and only move down when you need to, or when you hit a constraint: "Okay, this is great, and I have a working solution with a hosted API, but now I need to worry about the price, because I'm going to production, and the unit economics..." Maybe at that point you think "Well, now I have this great solution, and I can auto-generate an eval set for myself, have a bunch of inputs and outputs, and fine-tune a model that is hyper-distilled and efficient and focused." That's great, but don't start there for most use cases, right? The one other thing I would say at the practical
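The evaluation-set advice above can be made concrete in a few lines. Everything here (the `evaluate` harness, the fake model, the exact-match scorer) is hypothetical scaffolding rather than any particular library's API; the point is tracking quality alongside the other trade-offs, like latency, over a fixed set of input-output pairs:

```python
import time

def evaluate(model, eval_set, scorer):
    """Run `model` over fixed (input, expected) pairs and report quality
    plus average latency. `model`, `eval_set` and `scorer` are placeholders
    for your own call, ground-truth pairs, and per-example metric."""
    scores, latencies = [], []
    for prompt, expected in eval_set:
        start = time.perf_counter()
        output = model(prompt)
        latencies.append(time.perf_counter() - start)
        scores.append(scorer(output, expected))
    return {
        "quality": sum(scores) / len(scores),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

# Toy example: a fake "model" and an exact-match scorer.
eval_set = [("2+2", "4"), ("capital of France", "Paris")]
fake_model = {"2+2": "4", "capital of France": "paris"}.get
report = evaluate(fake_model, eval_set, scorer=lambda out, exp: out == exp)
print(report)  # quality is 0.5: one exact match out of two
```

Because the harness is just a function, the same eval set can be re-run every time you step down the ladder of complexity, so you can see exactly what a cheaper or more distilled model costs you in quality.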
level is this: where language models tend to break down or lack reliability is oftentimes when you're trying to give the model too much to do at once. So breaking the problem down into sub-problems that are a lot more focused is one of the most practical things; I just found myself telling people over and over again, "Okay, that's awesome. Break your problem up into sub-problems." And how to do that is a whole art form in itself. Maybe someday in the near future language models will do that for us, I don't know; that's getting into the more sensational side of things. But as a general principle, breaking your problem up into sub-problems, and thinking about how you can articulate your problem as succinctly as possible, in a way that is native to the language models, is a really key practice. I love how you talked about evaluation, forming your evaluation set, getting some ground truth, and also breaking up your problem; maybe having an evaluation set for each of those sub-problems would be a good idea. I think there's this general perception that large language models are this kind of unique thing, these chat interfaces are this kind of unique thing; like, how do you even evaluate that? I think what people have in their mind is: oh, if I'm doing sentiment analysis, it's either positive or negative or neutral, and I can calculate an accuracy, for example; whereas they might struggle to think about "okay, there's this output from this language model, it seems coherent and fluent... how do I evaluate this?" So I think there's maybe a bit of confusion around the evaluation side. Can you share any tips or thoughts on what you've found to be useful in your own work in terms of evaluation sets, and how you think about how good the output of a language model is? My first thought would be: the less it's about me thinking about how good it is, and the
more it can be objective, using some constant way of evaluating it, the better. There's one project that I really like recently, by Lance Martin; it's called Auto-Evaluator. I don't know if you guys have seen it, but it's specifically for the domain of QA, question answering, and he recently partnered with LangChain to create a hosted version of it. But the way I think about this is a little abstract, and it's really starting from your job to be done. Oftentimes sentiment analysis isn't the job to be done; it's a piece of a job to be done, right? So again, it's breaking up the problem, and understanding how to think about and structure those problems, whether it's an expert model that just does sentiment analysis, or a large language model that can do sentiment analysis (it's really good at that), but can also do a whole bunch of other things as well. So the more focused your task is, the more clearly articulated your task is, and the more structured the output you have at the individual LLM call level, the better, and the easier it is to create reliability around these things, and to actually test them with more traditional software engineering practices, like writing unit tests or integration tests. One thing I'm actively working on right now for the TypeScript community is a way to invoke large language models and have structured guards on them. I know Prediction Guard and Guardrails and a few projects are doing this, but really then having that actually be typed in TypeScript, so you can make an LLM call like it's a function, but get some JSON that has these fields, that has these types. And there are techniques you can use under the hood to self-heal if the JSON isn't properly formed; or maybe you want to generate some TypeScript code and you want to validate that it's correct, or something
like that. There are techniques you can use to constrain the output of the language model for your particular task, but in my view these techniques, the best practice and the state-of-the-art there, are constantly shifting. So I think libraries like LangChain, and the open source framework I'm currently working on, will do a lot to help developers abstract out some of that complexity. Viewing this as a general-purpose tool, and again, you start simple: one of the great things about language models is they can do just about anything. That's also one of the downsides, right? When it's so unconstrained, how do you even approach the problem? So having best practices, having examples, constraining the problem... Really, it's the ability to have a unit test or an assertion, like in a traditional programming language, at the large language model call level, where it's "I assert that the output should be valid JSON", or "I assert that the output should conform to and be valid TypeScript syntax", and if not, actually self-reflect on that, put it back into the large language model, and regenerate it. All those things, I think, are foundational primitives at the large language model level that will allow developers who want to build real, reliable applications to do so more reliably, because they can focus on their domain-specific business logic, on aspects that are away from a lot of these implementation details that are constantly shifting under our feet. Fantastic explanation. You keep talking, over and over again, about how things are shifting, and about the evolution of the engineering around it. That puts a burden on these hackers and developers that are trying to go out and implement these things right now, because this year has just been phenomenal progress, and that makes it really hard for mere humans to track it and keep up with it. So you kind of
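The assert-and-self-heal pattern described here can be sketched without any framework. `call_llm` is a stand-in for any text-in, text-out model call; real libraries such as Guardrails or LangChain's output parsers do considerably more, so treat this purely as an illustration:

```python
import json

def call_with_json_guard(call_llm, prompt, required_keys, max_attempts=3):
    """Call an LLM, assert the reply is valid JSON with the expected fields,
    and feed failures back into the model for regeneration.
    `call_llm` is a placeholder for any text-in/text-out model call."""
    feedback = ""
    for _ in range(max_attempts):
        raw = call_llm(prompt + feedback)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as err:
            feedback = f"\nYour last reply was not valid JSON ({err}). Reply with JSON only."
            continue
        missing = [k for k in required_keys if k not in data]
        if missing:
            feedback = f"\nYour last JSON was missing the keys {missing}. Try again."
            continue
        return data
    raise ValueError(f"no valid JSON after {max_attempts} attempts")

# Toy example: a flaky "model" that returns garbage once, then valid JSON.
replies = iter(["not json at all", '{"sentiment": "positive", "score": 0.9}'])
result = call_with_json_guard(lambda _: next(replies),
                              "Classify: ...", ["sentiment", "score"])
print(result["sentiment"])  # positive
```

The key design point is that validation failures become part of the next prompt, which is the "self-reflect and regenerate" loop; in a typed language the same guard can additionally check field types, not just presence.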
talked about some of the concepts right there, but if you were about to turn to somebody who's a hacker looking for guidance, what are some, not necessarily comprehensive, but "hey, go do one-two-three or A-B-C" steps that will help them keep leveling up? What are some of the things you're telling people these days? If you want to keep up this year with all this insane progress in LLMs, and all the different model types we're seeing progress on, what do you do on the practical side, as a hacker, to manage that? Do you have any tips you can take us through? Absolutely. One, it's super noisy; there's so much happening. We're in the middle of this exponential wave, and I think a lot of people are like "Oh, FOMO, I want to be on this wave, but where do I start?" There's just so much noise, which is great on the one hand, but on the practical side, how do you give advice? Where do you start? So there's a couple of levels to this. I have given some talks to kind of a "ChatGPT for beginners" type crowd, and really my main advice is: one, just use it. Just go and try it, right? That's simple. But two, and more importantly, the next time you have an actual problem where you think "maybe I could use ChatGPT or a language model for this", actually try using it to solve your own problem. Because what that does is it starts to build up this muscle in your brain around thinking about using these new types of tools to solve problems, and it really is a different type of tool. It's like exercise: you need to start exercising that muscle early and often. And there's a lot of noise, there's a lot of different AI tools... The side of things which I'm confident will be just as relevant a year from now, a couple of years from now, is building up that muscle to think about how to actually
use AI to solve your own particular problems. It's one thing to talk about hypotheticals and general cases of problems where these tools excel; it's another thing entirely to start building your own personal muscle. Totally agree with that. Whether it's a personal problem and you want to just go talk with ChatGPT, or you have a problem at work and you're like "Well, I think I could use a language model, a hosted API, to solve this", starting simple, starting from your own problems, will start to build up that muscle, and you'll naturally learn it and take it from there. [Music] So Travis, you've mentioned a couple of times TypeScript, Node, this community that you're a part of, and I think there are probably a lot of Python people listening to this show, maybe data scientists, practitioners. To me, it almost seems like there are two communities. There are all of these data scientists trying to figure out "Oh, large language models, generative AI has sort of broken my intuition around what I need to be doing. Do I need to be training models now? How do I solve this problem now? I was training models last year; do I still need to be doing that?" So there's that side of things, and then there's this really vibrant community of front-end developers and other developers, even low-code/no-code people, building really cool products around this technology. You seem to be exposed to a lot of those things. How do you see these communities developing over time? And how have you seen the TypeScript, Node, JavaScript-type crowd rise up to meet these technologies, maybe in a way that the traditional data scientist crowd has not, or has differently? I guess, to some degree, a lot of the JavaScript/TypeScript world is jealous of the
Python world, right? Because all the cool new AI stuff is Python-first, or using this or that machine learning framework. And this is where hosted APIs, whether it's Replicate or Hugging Face (so you've got the hat on, right, Hugging Face) or OpenAI or Cohere, all these hosted models, are a massive unlock for application developers. TypeScript is the largest programming language in the world; Python is big, and it's by far the largest in the data science and machine learning world, for sure. So there's this dynamic between the two, where I think we're seeing, at the application layer, folks who are good at building full-stack apps that can easily plug into hosted models, who really push the envelope in terms of unlocking people's imaginations and building a good UX around these things. That's so important, to make it more approachable for people, and to really show people what a lot of the machine learning folks have known about for a long time. But you need both sides of that equation. So one of the projects I did a couple of months ago was porting scikit-learn to TypeScript. It's not a full port; it auto-generates all the TypeScript classes, like 260 classes, and then under the hood it creates a Python subprocess and marshals and does the interprocess communication between them. But it works extremely well, and you can call and do k-means and PCA and all these fundamental things that the Python machine learning world takes for granted. There are versions of that that exist in the npm ecosystem; it's just that they're all over the place in terms of quality. There are so many fundamental aspects of machine learning that the TypeScript world is missing out on, and that's one of the primary drivers behind what I'm working on, which I'm happy to share: I'm building a reliable TypeScript
open source framework for building reliable agents. Very cool. Thank you. I view agents like this: if large language models are CPUs, right, in this new compute paradigm, they're these reasoning engines. Yeah, they're great at generating text, but the real emergent property, the real game-changing property of them, is reasoning. If they're the new reasoning engines, that's your CPU layer; then you have a storage layer, which is all these vector databases (it's kind of overhyped on that side of things); and on top of that you have: how do you actually run programs? And that's agents. I view it as a spectrum: on one end there's traditional programming that might happen to use a large language model, and on the other end you have full self-driving agents that are making decisions and creating tasks and are just fully autonomous, right? I'm excited to focus somewhere in the middle, on more reliable use cases that we can actually build today. But to your question about the TypeScript and Python worlds: a lot of the frontier, the libraries at the framework level that are pushing the edge here, are all Python-first, right? And I really want to take a TypeScript-first approach, partially because it's the community that I know and love, it's the best tool on my tool belt, and partially because I think people building real applications at the application level are largely in the JavaScript/TypeScript world. So you have hit an area that I have so much passion for. Oh, awesome. I'm sitting here waiting to ask my next question, and Daniel has heard me whine for years about what I'm about to say, so I want to get your take on it. There is more to the world than just Python, and I'm a multi-language person; I don't necessarily go all in on any one language or the other. I'm a TypeScript user. In
the last year I've been doing Rust; I had been doing more Go before that, outside of the AI and Python stuff. But I hit a use case where I was building something and I had to eke every little bit of performance out of the available hardware to do it; it was going to be C++ or Rust, and it wasn't going to be C++, so I went to learn Rust. And now that I'm in Rust, and as an analogy for what we're about to get at, I'm looking at WebAssembly. The Rust community and other language communities are so into that idea of: write it in the thing you need to be in, and yet have access to that in terms of deployment, and still have great performance. And every time I'm messing with WebAssembly in Rust, I'm thinking: when is the AI world going to catch up on having multifaceted, from a language standpoint, access to the models, instead of everything being Python-first? And so, begging the pardon of the Python lovers in the audience: when am I going to be in Rust, or Go (you're obviously doing it in TypeScript), or the language of my choice, taking advantage of, as you called it, the new CPU of reasoning, from that point, instead of having to do a context switch? It is an ongoing, year-after-year frustration that I have, as you can probably tell by now. So I'm hoping you're about to give me the golden path out of here, because I need one. Okay, well, first off, I love your framing, I love your passion for this, and I feel very similarly. I think the reason I'm starting with TypeScript is that the developer experience at the application level is really important for the type of framework I'm looking to build, but I view WebAssembly, Wasm, as kind of the ultimate compiled language runtime that I want to target. Because you could imagine a world, not too distant from right now, where you have agents that are
running in data centers, running on the edge, whether that's a Cloudflare Worker, or a Vercel Edge Function, or within a service worker in your browser, right? But the common thread there is Wasm. So to what extent do you start with developer experience at the TypeScript level, and then focus on that at the runtime level? There's still a clear path forward for a lot of folks to just use hosted APIs; that's one area where you can have that multi-language support very easily, that's a natural point. But you've got to be kind of in the cloud for most of that, in a practical sense. Which I'm not always, yeah, 100%. And then there's the whole open source models side, the practical side of things, where you're like "Well, I need full hardware control, or the latency, or something that's on-device." And I am extremely bullish on WebAssembly. There's a quote that I like; I forget who it was from, the Linux Foundation or something. It was: "If WebAssembly had existed 10 years ago, then Docker would never have needed to exist", right? And I think it will have that level of impact eventually. I think potentially. I do too. Yeah, I think potentially the unlock here, the path that could bring it more into the mainstream, could be AI; I don't know. At the model level there's just so much momentum behind Python, and all the core researchers are Python-first. So when I did the scikit-learn port to TypeScript, there was, I believe, a Python port called Pyodide. It's a Python runtime (you guys might know better than I do), but it's targeting WebAssembly, and it allows you to run a subset of scikit-learn in WebAssembly-supporting environments, including Node.js and the browser. And that's super, super fascinating to me. Yeah, I think there are a couple of related projects, like PyScript from Anaconda, trying certain
things like that, and I'm really interested in that space as well. It's sort of a different kind of diversity than we normally talk about, but the fact that more developers, from more diverse backgrounds, are at the table building AI things I think is an amazing thing, and a lot of good is going to come from that, so I'm really happy to see it happening. Well, if we have time, just one more, maybe controversial take on this. Sure, we like controversial takes. Awesome, awesome. As we get closer to building reliable agents, and the way I was framing it before, it's kind of a fundamentally new compute paradigm, with large language models as CPUs, and you're building these agents on top of them. As they eventually get more and more reliable and more autonomous (right now a lot of them are just toys, let's be clear), I view it as a new, higher-level programming language. We're working with natural language, and the AST of that language, in my view, is a directed graph, where the nodes are specific LLM calls, or a call to a tool, or a call to an API. And there are massive problems around how to add reliability at that level, like having structured output, or guardrails; some of these things are clearer than others. Then at the whole-graph level, that becomes a program, or an agent, to some degree. We're talking about all of these things, Python and Rust and the implementation details, and that's all very important, but I wonder to what extent, 10 years from now, we will even be talking about a lot of the current levels of programming abstraction that are hyper-relevant to us today as practitioners, or how quickly we'll move towards this world of a higher-level abstraction for solving problems that is just significantly more efficient, more approachable, because it's based on natural language. Anyone in this field
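That "program as a directed graph" framing can be sketched with a tiny executor. The node functions here are plain Python stand-ins; in the agent setting each would wrap an LLM call, a tool invocation, or an API call (all names are hypothetical):

```python
def run_graph(nodes, edges):
    """Execute a directed acyclic graph of steps. `nodes` maps a name to a
    function of its parents' outputs; `edges` maps a name to its parent
    names. Each function is a stand-in for an LLM, tool, or API call."""
    results, done = {}, set()
    while len(done) < len(nodes):
        for name, fn in nodes.items():
            parents = edges.get(name, [])
            if name not in done and all(p in done for p in parents):
                # All inputs are ready, so this node can run.
                results[name] = fn(*(results[p] for p in parents))
                done.add(name)
    return results

# Toy graph: fetch -> summarize -> decide, with stand-in functions.
nodes = {
    "fetch": lambda: "raw document text",
    "summarize": lambda doc: f"summary of: {doc}",
    "decide": lambda summary: f"action based on {summary}",
}
edges = {"summarize": ["fetch"], "decide": ["summarize"]}
print(run_graph(nodes, edges)["decide"])
```

Reliability work then happens at two levels, just as described: per-node (structured output, guards, retries on each call) and whole-graph (which paths may run, and when to stop).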
that talks to you about timelines is just throwing a dart randomly with a blindfold on, but that's one thread I'm really excited about. You went ahead to where I was hoping you would go, which is what's keeping you up at night, what's on your mind looking forward, and all of that. I agree, I think this is a really, really interesting direction, and I certainly hope we see that timeline progress rapidly; I think we probably will. So, it's been a pleasure to have you on the show, Travis. Really looking forward to keeping in contact and seeing all the amazing things you do, and to trying out some things in TypeScript. It's an exciting time to be part of this, and we look forward to keeping in contact. Thanks for joining us. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a long-time listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Large models on CPUs | Model sizes are crazy these days with billions and billions of parameters. As Mark Kurtz explains in this episode, this makes inference slow and expensive despite the fact that up to 90%+ of the parameters don’t influence the outputs at all.
Mark helps us understand all of the practicalities and progress that is being made in model optimization and CPU inference, including the increasing opportunities to run LLMs and other Generative AI models on commodity hardware.
Leave us a comment (https://changelog.com/practicalai/221/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Mark Kurtz – Twitter (https://twitter.com/markurtz_) , LinkedIn (https://www.linkedin.com/in/markkurtzjr)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Neural Magic (https://neuralmagic.com/)
• SparseML (https://neuralmagic.com/sparseml/)
• SparseZoo (https://sparsezoo.neuralmagic.com/)
• Neural Magic Scales up MLPerf™ Inference v3.0 Performance With Demonstrated Power Efficiency; No GPUs Needed (https://neuralmagic.com/blog/neural-magic-scales-up-mlperf-inference-performance-with-demonstrated-power-efficiency-no-gpus-needed/)
• Deploy Optimized Hugging Face Models With DeepSparse and SparseZoo (https://neuralmagic.com/blog/deploy-hugging-face-nlp-and-cv-models-for-fast-inference-with-deepsparse-pipelines-and-sparseml/)
• SparseGPT: Remove 100 Billion Parameters for Free (https://neuralmagic.com/blog/sparsegpt-remove-100-billion-parameters-for-free/)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-221.md) | 7 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist building a tool called Prediction Guard, and I am not joined today by my co-host Chris, but I am joined by an amazing guest who is an expert in all things model optimization, efficiency, and running on CPUs, which is super exciting: I've got Mark Kurtz, who's director of machine learning at Neural Magic. Welcome, Mark. Thank you, Daniel, thanks for having me on. Yeah, of course. So let's maybe just start out with the state of model optimization right now. First off, could you describe, when you're talking about model optimization, or that set of tooling, what do you mean by that, and how does it fit within the things a data scientist or an AI person would want to do? So whenever we're looking at model optimization, we're usually focused on a few different techniques, but the ultimate goal is to make the overall model smaller and faster. Neural networks are known to be very large models, especially compared to more traditional machine learning, and it turns out the size of those models is the important part in terms of exploring a large dimensionality of a space, but the model actually doesn't use all those pathways at inference time. So what we specialize in is, specifically: pruning, where we're going to remove connections within that network;
quantization where we're going, to reduce Precision of those uh, Connections in the network so going from, you know the typical fp32 down to ntate, and then additionally distillation where, we're taking larger models and trying to, teach a smaller model to mimic the, capability and the functionality of that, larger model so it's kind of you know, overall a high level and yeah it's h, it's a very exciting space right now, it's uh kind of exponential in terms of, the number of research papers that are, constantly coming out on the topic, everybody's very excited about sparsity, specifically mainly because you can turn, these large models and uh get rid of up, to 95 even 97% of the weights are, actually useless in these obviously you, can use that for a lot of efficiencies, around performance and energy and that's, specifically where we've been focusing, in at neural magic and what I've been, focusing in on my work awesome yeah, that's uh I definitely have felt this, problem so I'm sort of asking this, question maybe for others out there that, that maybe haven't felt this problem as, much why is it important to like make, models smaller or make them more, efficient how does that fit within what, Enterprises or like even users running, smaller applications like why is that, important for people I guess is the, question generally there's going to be, two cases that we're looking at in terms, of deployment one would be an embedded, space where we're running on the edge, and trying to work there so generally, you want real-time latency and, optimizing the accuracy as best as, possible so if you're using an object, detection model you want to make sure, that you know for example you're on a, security camera trying to draw object, detection and make sure that you know, when a person walks in a frame and, whether or not that's alarming or not, versus a dog or something like that so, in General on that edge application what, you can do is use a larger model remove, a lot of 
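The distillation idea Mark describes can be sketched in a few lines of NumPy. This is a generic illustration, not Neural Magic's implementation: the student is trained to match the teacher's temperature-softened output distribution by minimizing a KL divergence (the logits and temperature here are made up for the example).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature smooths the distribution."""
    z = logits / temperature
    z = z - z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.

    Minimizing this trains the student to mimic the teacher's full output
    distribution rather than just its top-1 label.
    """
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))

teacher = np.array([4.0, 1.0, 0.5])       # made-up teacher logits for one example
aligned = np.array([3.9, 1.1, 0.4])       # a student that tracks the teacher
off = np.array([0.5, 4.0, 1.0])           # a student that disagrees

print(distillation_loss(aligned, teacher), distillation_loss(off, teacher))
```

In a real distillation run this term is added (with a weighting factor) to the usual cross-entropy loss on the hard labels.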
Daniel: That's a problem I've definitely felt, so I'm sort of asking this for others out there who maybe haven't felt it as much: why is it important to make models smaller or more efficient? How does that fit into what enterprises, or even users running smaller applications, care about?

Mark: Generally there are two deployment cases we look at. One is the embedded space, where we're running on the edge. There you generally want real-time latency while keeping accuracy as high as possible. If you're using an object detection model, say on a security camera, you want to know reliably when a person walks into the frame, and whether that's alarming or not, versus a dog or something like that. On that edge application, what you can do is take a larger model and remove a lot of the pieces from it, so you keep the accuracy of the larger model while taking up the space of the smaller model. That's a significant improvement in accuracy on the edge device while still meeting the memory and latency constraints that were set for you. The second case is the server side, where we're generally looking at throughput-based applications, with data being shipped up to a server to be processed, for NLP or computer vision. There, especially once people get into larger deployments of neural networks, the cost shifts significantly from training to deployment: for a lot of larger enterprises that are actively deploying, 80 to 90% of their cost is purely in deployment on these machines. So you can take the exact same model, reduce the amount of compute needed to run it, and it will run faster, but ultimately that also means it runs significantly cheaper. We've seen cost savings on the order of 10x or 20x, even larger if you're really trying to specialize and optimize, so there can be a significant reduction once you're at that scale. I would say, though: if you don't have anything deployed yet, don't worry about optimizing the model; worry about getting a use case that works and something you can prove out. As soon as you go into deployment, model optimization is a great thing to start, because it's essentially free performance left on the table that can significantly affect your bottom line.
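To make the deployment-cost point concrete, here is some back-of-the-envelope arithmetic. All of the numbers (fleet size, hourly price) are hypothetical, chosen only to illustrate how a 10x throughput improvement, the low end of the range Mark quotes, translates into fleet cost.

```python
# Hypothetical inference fleet; none of these figures are from the episode.
instances = 40                    # replicas needed to serve peak traffic
hourly_cost = 1.50                # assumed $/hour per instance
hours_per_month = 730

baseline = instances * hourly_cost * hours_per_month

speedup = 10                      # each optimized replica handles 10x the load
optimized_instances = max(1, instances // speedup)
optimized = optimized_instances * hourly_cost * hours_per_month

print(f"baseline : ${baseline:,.0f}/month")
print(f"optimized: ${optimized:,.0f}/month ({baseline / optimized:.0f}x cheaper)")
```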
Daniel: We've mostly been talking about model size and optimizations, and I do want to get into the nerdy stuff around how some of this works, but before we do, I'm also curious about this element of deployment on GPUs versus CPUs. Some of the tooling you're building points at the potential to take a large model, which might require a GPU at inference time, and run it on cheaper commodity hardware that only has a CPU. What's the state of that now? How far can you push it, and how should people think about when that might and might not be possible?

Mark: As you said, we specialize almost entirely on CPU performance, and our latest inference results on MLPerf have just come out. In those we show that we're running faster than T4s and A40s on just commodity CPUs: server-class CPUs, and the kind of thing you have in your laptop or desktop. It's very surprising that what are thought of as these little CPUs can outperform the GPU, and we see this generally across every domain we've tackled: image classification, object detection, segmentation, and now we're working in the NLP and NLG space as well. Overall we see the same thing: these models are overparameterized, and we can take away a lot of that compute. That means you can actually get the CPU and the GPU roughly equivalent in compute throughput, because with sparsity and the dynamic execution you get on CPUs, we can skip all of those zero multiplications, a significant reduction in compute. Once they're about even, the CPU has a unique cache hierarchy, which means we can reuse cache more often than you can on a GPU: L1 and L2 are extremely quick, faster than a GPU's main memory, and L3 is about equivalent. So our performance optimization skips enough compute to get even, and then uses that cache hierarchy as efficiently as possible, so we get faster memory access than you can get on a GPU. We pay a little bit by doing a little more compute along the way, but overall it works out that you can actually beat the GPUs in that setup with pure software.

Daniel: I anticipate this is a question you get sometimes, but you hear so much about the necessity of GPUs for running these large models. Do you find practitioners are generally just unaware of the possibility of running them on CPUs? How has it been for you, and for those doing this work with these tools, overcoming that barrier of perception?

Mark: It was a barrier we hit, especially a couple of years back, when it took a lot of convincing to even get in the door to talk to anyone, because they just didn't believe what we were saying. It's gotten quite a bit better now, especially with the newer software that's been pushed out and the newer CPU chipsets; they're getting close enough to GPUs that people are a little more accepting of it. Still, whenever we show them the numbers, the first reaction is generally "let me try that on my hardware, because you guys have got to be doing something weird." As soon as they replicate it, the next question is "okay, how do I do this to my model?", and that's usually where the tricky part comes in. Model optimization hasn't always been the easiest thing to do; it can take a lot of research to enable new architectures. So that's also what we've been specializing in at Neural Magic: taking all the research we're doing and putting it into open source, and also building a SaaS platform on top of it, so everyone can easily play with hyperparameters and get something that's consumable. I would say that's probably been the biggest gap in getting people off of GPUs and onto CPUs: the model optimization that needs to take place first to be able to run faster than the GPUs.
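The "skip all the zero multiplications" idea can be illustrated with a compressed sparse row (CSR) matrix-vector product. This is a toy NumPy sketch of the concept, not Neural Magic's engine (which relies on optimized native kernels and the cache behavior Mark describes): only the stored nonzero weights are ever multiplied.

```python
import numpy as np

def to_csr(dense):
    """Compressed sparse row: store only nonzero weights and their column indices."""
    data, cols, indptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0.0:
                data.append(w)
                cols.append(j)
        indptr.append(len(data))
    return np.array(data), np.array(cols, dtype=int), np.array(indptr)

def sparse_matvec(data, cols, indptr, x):
    """Multiply-accumulate only the stored entries, skipping every zero weight."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        start, end = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[start:end], x[cols[start:end]])
    return y

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W[rng.random(W.shape) < 0.9] = 0.0        # roughly 90% sparsity, as after pruning
x = rng.normal(size=8)

data, cols, indptr = to_csr(W)
y = sparse_matvec(data, cols, indptr, x)  # same result as the dense product W @ x
```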
Daniel: You talked a little bit about sparsity, which I want to dive into, and then I want to get to those tools and the open source stuff you mentioned. People are probably used to hearing the parameter counts for these large models: 3 billion, 7 billion, 13 billion, and up from there. Could you describe what you mean when you say that 90 to 95% of the connections, or maybe a smaller but still high percentage in some models, have no impact on the actual forward pass, on inference?

Mark: Definitely, and I'll take two steps in doing that: first covering the 90-95% class, at least where we've been able to get to on those, and second looking specifically at large language models. For the first, whenever we're looking at getting rid of 95% of the weights, let's take ResNet-50 as an example. That's our toy benchmark model; it's essentially what we prove all of our technology out on, because it's a common feature in MLPerf and most performance tests. Its convolutional layers have some tens of millions of parameters, definitely not the 3 or 7 billion and up, but within that we can zero out the ones that are not important. Imagine taking all those parameters, dumping them into a giant array, and zeroing out the unimportant ones. Figuring out which ones are not important is part of the research, but the easiest assumption is that the weights with the largest magnitude, the ones furthest from zero, are the ones you want to keep. You can think of this in two ways. One, as the model is trained and regularized, the weights that don't matter move toward zero. Two, during the forward pass, the weights with higher magnitude have more effect on the output, and everything else is essentially noise in between. So we're able to set 95% of those parameters to zero, and the 5% of weights that remain nonzero is actually all you need to preserve accuracy on ImageNet for ResNet-50, for example. For some quick intuition about why this works: there are a few research papers showing that as you increase the dimensionality of the optimization space, more of the local minima become connected, so the optimization process can keep converging further and further down. Generally, though, there are only a few pathways you actually need to connect those local minima. All we're doing is following the most optimized pathway down and removing everything else around it. As you're training, the process is slowly selecting the few weights that matter to get you down to that local minimum. The large dimensionality of the optimization space was the important part, but not every direction in it matters, so afterwards we can get rid of most of them.
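The magnitude heuristic Mark just described, keep the weights furthest from zero and drop the rest, takes only a few lines. A minimal NumPy sketch of unstructured one-shot magnitude pruning (real pipelines usually prune gradually during training rather than in a single step):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights, keeping the largest ones.

    Weights nearest zero contribute least to the forward pass, so those
    are the ones removed.
    """
    w = weights.flatten()
    k = int(round(sparsity * w.size))      # number of weights to zero out
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(w))[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold     # keep strictly larger magnitudes
    return weights * mask

rng = np.random.default_rng(42)
W = rng.normal(size=(64, 64))              # a stand-in weight matrix
W_sparse = magnitude_prune(W, sparsity=0.95)
```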
Mark: Then, diving in on the LLM side: we have a recent paper from one of our principal research scientists, Dan Alistarh, called SparseGPT. There we take OPT and BLOOM models all the way up to 175 billion parameters and remove as many weights as possible, in this case in one shot: just using the model, without any retraining, we're able to get rid of around 60% of the weights. And there's a new paper out of Cerebras looking at the LLM story that now gets to 80% sparsity on these LLMs with retraining. That's the research direction we're headed down, proving out how optimized we can make these models. There's also a lot of interesting stuff that happens with large language models specifically, because they generate one token at a time. That's very latency-bound, and it means a lot of memory access to load the weights. So if you can quantize those weights and then get rid of half of them, you're already at anywhere from a 4 to 6x speedup on your inference times, and that's generally where we're focused in trying to get these LLMs to run faster. The other thing to call out: 7 billion and 175 billion parameters don't fit in a single GPU, so now you have clusters of GPUs serving one model, and a lot of that compute is completely wasted, because all it's doing is providing enough memory. On CPUs, you can throw a few terabytes of memory on there and it works out fine. That's the other thing to call out with LLMs in terms of GPU versus CPU.

[Music]
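A rough way to see why pruning plus quantization speeds up token generation: each generated token has to stream essentially all of the weights through memory. Here is illustrative arithmetic for a hypothetical 7B-parameter model, ignoring sparse-format index overhead and activation traffic (Mark's 4-6x figure comes from real measurements, not this calculation):

```python
params = 7e9                            # a 7B-parameter model (illustrative)

bytes_fp16 = params * 2                 # 2 bytes per weight in FP16
bytes_int8_sparse = params * 1 * 0.5    # INT8 (1 byte) with 50% of weights pruned

reduction = bytes_fp16 / bytes_int8_sparse
print(f"{bytes_fp16 / 1e9:.0f} GB -> {bytes_int8_sparse / 1e9:.1f} GB "
      f"of weight traffic per token ({reduction:.0f}x less)")
```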
Daniel: This is really interesting, Mark. I want to follow up on what you were just talking about, which is a subtle but really interesting point. If I understood you right: say I have one of these large models, 175 billion parameters or whatever. Even for inference, I need multiple GPUs just to load that model into the memory of the cards, whereas on a CPU host you can have terabytes of memory, so you could load it in, as long as you're able to execute it quickly, which I guess is the other piece. Am I right that you have to have both: the ability to load it into memory, where you have more space on the CPU side, and the ability to execute it very quickly, which is why you'd think about both size and sparsity? Is that an accurate way to put it?

Mark: Right. As you said, you have a total space the model needs to take up and a maximum latency you want when responding to the user, and those set the constraints for your hardware. For the smaller models, CPUs can already get to a usable speed; if you've seen llama.cpp, they're doing INT4 and things like that on smaller models, and they're usable, but less accurate. What we're trying to do, and actively working on right now, is making sure we can get GPU-class speed while maintaining the large memory advantage of CPUs. Then you can deploy a 175-billion-parameter model on something local, and you don't have to worry about data privacy or anything like that; it's just there, working, available, and highly accurate for you.

Daniel: I've definitely heard from people I've talked to who have tried various optimization techniques and been dissatisfied with the performance hit they got, not in terms of compute, but in terms of actual model performance or accuracy. Does that hit usually come from the quantization that's part of the optimization, or are there multiple sources? How should people think about that?

Mark: The biggest thing there is honestly the number of choices people have to make, and not knowing when to apply which. For quantization, for example, you can generally apply INT8 quantization to pretty much anything, for both activations and weights, and have it recover. But there are definitely cases, for example we've been quantizing EfficientNet quite a bit on our image classification side, where one or two layers are extremely sensitive to quantization for whatever reason, and you can't quantize those. Skip those layers and you get 100% recovery. A lot of these little things are known intuitively by researchers who use these techniques constantly, what will work and what won't, but that's not really coded into software anywhere to make it easy for people to use. So people will try to quantize, and there's no feedback loop, no methodology; it's just "hey, I applied it in one shot, but it lost 5% accuracy." If you do a quantization-aware training scheme, you'll generally recover all of that back, and it generally works completely. The same goes for pruning: there you'll definitely see more of a drop, and training-aware pruning is much more of a requirement, at least to get to really high sparsities. The choices made in the hyperparameters can significantly affect the recovery and the quality. So if people are seeing drops in performance, it's primarily because of those choices, and the wide breadth of options available right now without a clear way to narrow them down. That's what we're actively working on. Does that make sense?
Daniel: Yeah, that's good for people, myself included. I want to develop a little more intuition around these things, because, like you say, sometimes it's just "here's the command I run on the command line, and I get a smaller file out," but I don't have great intuition, similar to hyperparameter tuning, where it takes time to figure out how to change the learning rate when this or that happens. You mentioned one thing I think would be good to clarify: training-aware optimization versus, I guess, non-training-aware optimization, or whatever the counterpart is. Some people might guess what that means, but how are they differentiated, and how does that work out in practice?

Mark: Technically we have three categories. The two you're alluding to are training-aware, and then post-training or one-shot, which are interchangeable terms. We additionally have sparse transfer, which we've been pushing a lot because the research has worked out well for it. I'll cover all three in a bit more depth. For training-aware, we take the exact same model you want to deploy and the exact same dataset it was trained on, and continue the training process further. This is where the hyperparameters come in, but generally we'll continue training for about half the time it originally took, and as we do, we iteratively prune away weights, or apply quantization, or both. The reason we continue training is that as we iteratively apply these optimizations, we're slowly moving the model away from its local minimum, and it has to adapt and adjust back. By doing that slowly over time, we allow small jumps that the optimizer can recover from, adjusting the remaining weights, rather than doing it all at once. The all-at-once piece is where we get into post-training, or one-shot, where we don't retrain the model at all. We take a small calibration dataset, and then use heuristics or algorithms to figure out from that calibration data how to optimize the model, removing weights or quantizing. The most common case is static quantization, where you use a calibration dataset to figure out the activation ranges for each layer. Once you have those ranges, you can set up a simple quantization scheme: given that this layer's activations go from -6 to 6, I need to fit that range onto an INT8 scale of 0 to 255. That would be a simple post-training, one-shot application. The final one is sparse transfer, which works exactly like transfer learning or fine-tuning, except you start from a sparse model. That's a lot of what we've pushed up at Neural Magic into our SparseZoo: open-source sparse models, sparse BERTs, ResNet-50s, YOLOv5s, things like that. You can take those, plug in your dataset, and transfer onto it; the sparsity mask stays in place, and only the remaining weights adjust to fit your data. We have a few papers out showing that sparse transfer works just as well as regular transfer.
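Mark's static-quantization example, fitting an observed -6..6 activation range onto an unsigned 8-bit scale of 0..255, boils down to choosing a scale and zero point from calibration data. A minimal NumPy sketch of per-tensor affine quantization (real toolchains add per-channel schemes and fused operator handling):

```python
import numpy as np

def calibrate(activations):
    """Derive scale and zero point from the observed activation range,
    as in the calibration step described above."""
    lo, hi = activations.min(), activations.max()
    scale = (hi - lo) / 255                # map the full range onto 0..255
    zero_point = round(-lo / scale)        # integer that represents 0.0
    return scale, zero_point

def quantize(x, scale, zero_point):
    return np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Calibration data whose activations span roughly -6..6, per the example.
calib = np.linspace(-6.0, 6.0, 1000)
scale, zp = calibrate(calib)

x = np.array([-6.0, 0.0, 6.0])
x_roundtrip = dequantize(quantize(x, scale, zp), scale, zp)
```

The round trip recovers each value to within one quantization step (`scale`), which is the precision cost INT8 trades for 4x smaller weights than FP32.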
Daniel: That's really interesting. This would somewhat depend on the setting, of course. Thinking of the average practitioner, in a lot of the space I work in, the larger language model area, I'm not going to be able to retrain one of these large models on the original dataset, or even half of it, or for half of the epochs. But these other things are certainly things I do all the time, fine-tuning and transfer learning, so it's cool to know there are options out there. Am I correct in assuming that with the SparseZoo you mentioned, which people can find on your website and we'll link here too, researchers, your team, and practitioners are also putting in work to release some of these sparse models publicly to the community, so that I can take those and fine-tune them, or maybe they're good enough as released? Could you tell us about that community and what's being released?

Mark: On the open source side, pretty much everything we have in the SparseZoo currently has come either from our lab, with a few from Intel's lab as well, plus some Hugging Face examples and things like that. That's primarily because a lot of the sparsification research is built around a few models, like ResNet-50 and BERT; papers don't expand past those models to prove out their algorithms. So we have the best of those, and that's exactly what our team is working on, and we do get community contributions every once in a while for sparse models people have generated or transferred. Our goal is to push these up so that, as you said, anyone in the community can pull them down and get value out of them. You can think of them as sparse foundational models, rather than dense foundational models.

[Music]
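The mechanics of sparse transfer, "the sparsity mask stays in place and only the remaining weights adjust," reduce to re-applying a fixed mask after each update. A hypothetical NumPy sketch, with a made-up gradient standing in for real backpropagation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Start from a "sparse foundational model": roughly 90% of weights already zero.
W = rng.normal(size=(16, 16))
mask = rng.random(W.shape) < 0.1          # True where a weight survived pruning
W = W * mask

def sparse_transfer_step(W, mask, grad, lr=0.01):
    """One fine-tuning update that re-applies the sparsity mask, so pruned
    connections stay zero and only the surviving weights adapt."""
    return (W - lr * grad) * mask

grad = rng.normal(size=W.shape)           # stand-in for a real gradient
W_next = sparse_transfer_step(W, mask, grad)
```

Because the mask never changes, the fine-tuned model keeps the exact sparsity pattern of the foundational model, so all the inference-time speedups carry over.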
Daniel: Mark, in addition to the SparseZoo, which is really cool, I know Neural Magic is producing some pretty interesting and useful tooling for actually doing some of this optimization yourself; I see things like Sparsify and SparseML. If I'm a practitioner with a model and I want to do optimization, what does that look like right now with the available tooling?

Mark: We have SparseML, which is our open-source model optimization framework, built primarily on top of PyTorch. We have integrations with torchvision, Hugging Face, Ultralytics YOLOv5, pretty much all the common repos most people are using, so you can use our integrations and just plug in your model and go. Then there are recipes, which I'll get to in a second. You can use those pre-coded integrations, or create your own; it usually only takes a few lines of code. We've done all the hard work so that to optimize a model you essentially just wrap the model and the optimizer in PyTorch, which our code handles, and it takes care of the optimization from there. So there's really no implementation coding on the practitioner's side. The other part is coming up with an optimization recipe, which lays out things like: I want to prune from this epoch to this epoch, then apply quantization, and target these layers at this sparsity level. We have automated ways to generate those, as well as examples in the SparseZoo and elsewhere. In practice, we definitely recommend checking the SparseZoo first, to see whether you can just transfer a model onto your dataset, because that's the quickest and easiest path. Otherwise, if you do have a specific model architecture you're looking at, you can go down the integration pathway and generate your own recipes. That's the current state we're at. The other thing I want to call out is that we're working on a SaaS platform right now, called Sparsify, to make all of this more intuitive: you get a UI to predict where the model will end up before you start optimizing it, and to actively benchmark it across your different deployment scenarios. We've had an old alpha of it downloadable for a while, and we're going through alpha testing right now, looking to go to beta in probably the next month or two, with GA following after that. Anyone interested in trying it out, definitely reach out.

Daniel: That's great. I love how you're thinking about usability around these things, because I do see that blocking people on optimization a lot; they get stuck on "what recipe do I use here?" when it seems like there are a million options and no obvious place to start, so that's really awesome to hear. Just so people understand what's available in the SparseZoo: I'm scrolling through it now, and there's a lot, even models I use, like DistilBERT fine-tuned on SQuAD. I use that all the time for question answering, and apparently I should use the sparse one, because that would help out a lot in both compute and speed.
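What a pruning recipe encodes, "prune from this epoch to this epoch to this sparsity level," can be pictured as a schedule function. SparseML's actual recipes are declarative files, not reproduced here; this standalone sketch just uses the cubic ramp that is common in the gradual-pruning literature, with illustrative epoch numbers:

```python
def sparsity_schedule(epoch, start_epoch=1, end_epoch=10, final_sparsity=0.95):
    """Cubic ramp from 0 to final_sparsity between start_epoch and end_epoch.

    Pruning slowly, rather than all at once, gives the optimizer small jumps
    it can recover from, which is the point of training-aware pruning.
    """
    if epoch <= start_epoch:
        return 0.0
    if epoch >= end_epoch:
        return final_sparsity
    progress = (epoch - start_epoch) / (end_epoch - start_epoch)
    return final_sparsity * (1 - (1 - progress) ** 3)

for e in range(0, 13, 3):
    print(f"epoch {e:2d}: target sparsity {sparsity_schedule(e):.3f}")
```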
Daniel: Let's say a model isn't there, though. People are of course interested in all the things being rapidly released all the time, and one thing I've seen in optimization platforms over time is that it's hard to support new architectures as they come out. How are you approaching that, and what's the state of being able to apply these optimization schemes flexibly across a variety of architectures?

Mark: Looking at the base framework we've pushed out in SparseML, everything is implemented such that, at least in principle, it runs with any model architecture. There are no big assumptions beyond having convolutions and linear layers somewhere in your model that we can target for optimization. Everything is set up very generically, which means that when a new architecture or new weights come out, it will take some time for us to tackle it, and that's if it gets onto the list of top used models. But that's the nice thing about having an open source community: the tooling and the framework should work out of the box with everything, and people are more than welcome to commit or push up whatever they're working on. We have an active Slack community and a GitHub community, our engineers are actively on both, and people can easily get support on any issues they run into.

Daniel: Awesome. For the listeners wanting to get plugged into this, we'll make sure to include links to the Slack group and the GitHub in our show notes, so visit there, get plugged in, and start optimizing your models. As we get a little closer to the end here, I'm wondering about a couple of things. One is, I know you're actively involved in research in this space: what trends are you seeing in research around optimization, and in particular, what directions is your team excited to go in the near future?

Mark: I'd say there are two big trends right now, plus a bonus. One is the focus on post-training. Our principal research scientist Dan Alistarh, along with his lab, came up with an algorithm called OBC/OBQ; we actually have a webinar on it, which will have already aired by the time this comes out. A lot of effort is going in that direction: requiring as little data as possible, no retraining, and increasing sparsity as much as possible. The second is a large push on quantization toward lower bit widths. That's been around for a while, but it's becoming more and more prevalent as we look at the larger models, mainly because their execution time is dominated by pulling in these large matrices of weights. So there's a big push to get past INT8, down to INT4, INT3, even INT2 quantization for these weight representations. The bonus third trend I'd throw on there, because I said two, is more research around sparse training: trying to figure out how to start with an unoptimized, untrained model, make it sparse from the start, and keep it sparse throughout training. Generally, to guarantee accuracy, we start from a dense, converged model and iteratively prune on top of it, which adds training time; so now a lot of research is going into how quickly a model can be pruned and how that can be carried through training. All three of these are definitely active areas we've been investing in heavily.
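The sub-INT8 trend can be illustrated with symmetric 4-bit weight quantization using one scale per small group of weights, roughly the flavor of scheme used for LLM weights. This is a simplified sketch: production kernels pack two 4-bit values per byte and often add zero points or other refinements.

```python
import numpy as np

def quantize_int4_groups(w, group_size=32):
    """Symmetric 4-bit quantization with one scale per group of weights.

    Smaller groups keep the rounding error down, which is part of why
    sub-8-bit weight quantization has become workable.
    """
    w = w.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7   # int4 range used: -7..7
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q * scale

rng = np.random.default_rng(3)
w = rng.normal(size=256).astype(np.float32)            # stand-in weight tensor
q, scale = quantize_int4_groups(w)
w_hat = dequantize(q, scale).reshape(-1)
```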
especially, looking at the generative AI space now, going through that yeah yeah I I know it, must be a crazy time for you all just, like it is a a crazy time for everyone, um but yeah I think this is a really, important piece of it I know one of the, trends we've even talked here on the, show about it seems like a lot of people, are talking about like serverless, deployment ments of of machine learning, deep learning models and I know a lot of, the issues related to that and things, that people are dealing with is cold, start time and loading models into, memory I don't know if that's impacted, you all at all but it seems definitely, relevant like if you're going to run, your model serverless you probably want, it as small as possible I would imagine, yeah absolutely absolutely so you, mentioned um where people can find out, about neural Magic on slack on Hub I, would really encourage people to do this, um as we close out here what are you, kind of personally excited about during, this uh I mean it's a like I say it's a, crazy time for everyone right now with, generative Ai and the way things are, trending what's exciting to you right, now about the AI community and certain, things you're seeing what do you see as, kind of positive Trends I guess the part, that I'm most excited about is that, generative AI space specifically in, being able to augment humans obviously, there are a lot of privacy concerns and, um data concerns and bias issues and, things like that in this which I don't, want to see you know llms deployed, everywhere becoming default response for, like Google search or something like, that but it is really exciting to see, even in my day-to-day starting to use, these actively to augment what I'm doing, around content generation and Framing, and things like that so it's one piece, that I'm really excited for and with uh, the work that we're doing on neural, magic we're especially looking at these, because one we want to see that continue, to grow to 
open source, and I think that's been the other push that's been really big and really exciting to see. When GPT-4 came out, it was completely privatized; they put out a little white paper on it that had no details about it at all, with a lot of data concerns and things like that within it. But the open source community has already released, I can name probably ten models so far since then, models that are ChatGPT-like or GPT-4-like. So it's really exciting to see that. I think the next stage for those open source models is going to be making them runnable anywhere, so you don't need a big GPU cluster farm to get something that is usable. That's where we're really looking at going: we're actively working on the LLM deployment issue right now and hope to have something out in the next few weeks or months that people can start actively using. They can download it, run it anywhere they want on any CPU, and it will be just as fast as a GPU. Cool, yeah, well, keep us posted; I know I'm personally interested in that one. Thank you so much for joining us, Mark. This was a really fun conversation. I love getting into the weeds of these practicalities, because this is a topic where people get stuck a lot, on the deployment side and the optimization side. So thank you for all that you and your team are doing at Neural Magic in this area, and keep up the good work; we're excited to see it. So thanks for joining. Thanks, Daniel, it's great talking with [Music] you. Thank you for listening to Practical AI. Your next step is to subscribe now if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our Beat Freak in Residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next [Music] time |
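The two compression trends discussed in this episode, pruning toward high sparsity and low-bit quantization, can be sketched with toy numpy code. This is only an illustration of the basic operations, not Neural Magic's actual OBC/OBQ algorithms:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    w = weights.copy()
    k = int(sparsity * w.size)          # number of weights to remove
    if k > 0:
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

def quantize(weights, bits):
    """Uniform symmetric quantization to the given bit width,
    returned in dequantized (float) form."""
    levels = 2 ** (bits - 1) - 1        # e.g. 7 for INT4
    scale = np.abs(weights).max() / levels
    return np.round(weights / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))           # stand-in for a weight matrix

w_sparse = magnitude_prune(w, 0.9)      # 90% of entries become zero
w_int4 = quantize(w, 4)                 # at most 15 distinct weight values

print(f"sparsity: {np.mean(w_sparse == 0):.2f}")  # 0.90
print(f"max quantization error: {np.abs(w - w_int4).max():.3f}")
```

Structured variants of both ideas (block sparsity, group-wise scales) are what make the speed-ups real on actual hardware; the sketch above only shows the arithmetic.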
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Causal inference | With all the LLM hype, it’s worth remembering that enterprise stakeholders want answers to “why” questions. Enter causal inference. Paul Hünermund has been doing research and writing on this topic for some time and joins us to introduce the topic. He also shares some relevant trends and some tips for getting started with methods including double machine learning, experimentation, difference-in-difference, and more.
Leave us a comment (https://changelog.com/practicalai/220/discuss)
Changelog++ (https://changelog.com/++) members save 3 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog News (https://changelog.com/news) – A podcast+newsletter combo that’s brief, entertaining & always on-point. Subscribe today (https://changelog.com/news) .
Featuring:
• Paul Hünermund – Twitter (https://twitter.com/PHuenermund) , LinkedIn (https://www.linkedin.com/in/paul-hunermund) , Website (https://p-hunermund.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• How Can Causal Machine Learning Improve Business Decisions? (https://www.causalscience.org/blog/how-can-causal-machine-learning-improve-business-decisions/)
• Causal Inference is More than Fitting the Data Well (https://www.causalscience.org/blog/causal-inference-is-more-than-fitting-the-data-well/)
• Causal Data Science in Practice (https://www.causalscience.org/blog/causal-data-science-in-practice/)
• Causal Discovery (https://blog.ml.cmu.edu/2020/08/31/7-causality/)
• DoWhy Github (https://github.com/py-why/dowhy)
• The Book of Why (https://www.penguin.co.uk/books/289825/the-book-of-why-by-judea-pearl-and-dana-mackenzie/9780141982410)
• Causal Data Science Meeting (https://www.causalscience.org/)
• Paul’s study on causal ML adoption in industry (incl. an overview of useful software packages in Table 3) (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3867326)
• Causal Data Science MOOC on Udemy (https://www.udemy.com/course/causal-data-science/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-220.md) | 8 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at [Music] fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist building a tool called Prediction Guard, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? I am doing very well. I've been watching you building Prediction Guard from afar, and I'm looking forward to hearing more about it in the days ahead. It's a fun one; we'll talk about it in more detail soon. And the causal reasons why I ended up doing the things that I'm doing... could that be a transition? Yes, speaking of cause and effect and causal things, we're really privileged today to have with us Paul Hünermund, who's an assistant professor at Copenhagen Business School. Welcome, Paul. Hi Daniel, hi Chris, thanks for having me. Yeah, it's great to have you here, and I think this is so cool, because the topic we're going to talk about is so very practical and important. As many of our listeners know, my wife owns a business, and I've tried to do a bunch of analytics or predictive things for her over the years, out of either need or fun, and often the question is: why is the prediction that? For a business person, they want to know: what is the attribution, what is the behavior behind this thing that I'm seeing? So you're an expert in causal AI and causal machine learning, and have been doing research in this area and are very well versed in it. I'm wondering if you can, as you start out here, give a brief understanding to everyone of what you mean when you say causal AI or causal machine learning, and how that is differentiated from what people might commonly think of when they think of AI or machine learning. So, it has many names: causal AI, causal machine learning, causal inference, which I think is the more traditional term. But the basic idea is pretty intuitive for everyone who works with data: if we look at correlations and patterns in the data, sometimes they can produce quite surprising and probably nonsense results. We all know the story about ice cream sales and shark attacks, which are highly correlated over the course of the year; chocolate consumption and Nobel Prize winners in a country (probably driven predominantly by Switzerland); storks and babies, where the stork population and fertility rates are correlated. We usually use these examples in the classroom as a caveat: wait a minute, correlation is not causation. People have heard this term, but causal inference and causal machine learning is really the idea of taking causality seriously and trying to build tools and algorithms that allow you to draw causal inferences from data, to distinguish cause and effect and weed out the nonsense correlations. And that comes with a different tool set. You can approach this from a purely algorithmic point of view, and you would probably apply different tools than standard machine learning. It goes even deeper; there's a whole epistemological point: if you want to do causal inference, you cannot do this in a purely model-free way. You actually need background knowledge, expert domain knowledge, in order to do this,
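The classic spurious correlations Paul lists, like ice cream sales and shark attacks, are easy to reproduce in a toy simulation: a common cause drives both variables, and conditioning on that confounder makes the association vanish. All numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

temperature = rng.normal(25, 5, n)                   # the lurking common cause
ice_cream = 2.0 * temperature + rng.normal(0, 5, n)  # sales rise with heat
sharks = 0.5 * temperature + rng.normal(0, 2, n)     # more swimmers, more attacks

r = np.corrcoef(ice_cream, sharks)[0, 1]
print(f"raw correlation: {r:.2f}")        # strongly positive, yet not causal

def residuals(y, x):
    """Residuals of a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Partial correlation: correlate what is left after removing the confounder
r_partial = np.corrcoef(residuals(ice_cream, temperature),
                        residuals(sharks, temperature))[0, 1]
print(f"partial correlation given temperature: {r_partial:.2f}")  # near zero
```

The background knowledge Paul refers to is exactly the part the code cannot supply: knowing that temperature belongs in the model at all.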
in order, for example, to distinguish between possible alternative explanations. And that is a whole paradigm shift in terms of how we approach data and how we approach machine learning. My last word on this, on the difference from standard machine learning: the bulk of standard machine learning is really correlation-based. All the tools that we have in deep learning, support vector machines, and so on are predictive tools; prediction means correlation, finding and detecting patterns in data. So they're not suitable for it. There's a branch of AI, reinforcement learning, which maybe we can talk about later, that goes more in the direction of actually intervening: the learner intervenes itself in the environment. That goes in the right direction but doesn't get there the full way. This is the main difference from standard machine learning. That kind of gets to the "what" of causal AI, causal machine learning, and causal inference. I guess the next question, probably a good foundational one, is the "why", and maybe this is even exacerbated in recent times. I don't know if you've seen this with all of the hype around large language models that are incredibly non-interpretable, or produce things that are factually incorrect or unexplainable. But how should a data scientist working in an enterprise, let's say, think about this? Why should they care about causal inference rather than just making good predictions? Yeah, so maybe it helps if I first define what exactly I mean by causal inference, because there's actually a neat definition by James Woodward, who is a philosopher of science. He said that causal inference, or causal machine learning, is a special kind of prediction problem. So in a sense it is a prediction problem, but here we are predicting the likely impact of an action, intervention, or manipulation. It's really this idea of "I do something": like, if I
increase the chocolate consumption in a country, will that produce more Nobel Prize winners? In this context, if we approach it from that perspective, you immediately see the value for business, because in business we're always asking these kinds of questions: what if we do X? What if we implement this new HR policy? What if we enter a new market? Should we invest in this product or another product? Business always involves actions and interventions, and we want to forecast and predict their likely outcomes. That's what causal inference people call the interventional level. One level above that is the counterfactual level. Counterfactual means we're reasoning about two states of the world: had I not taken the aspirin this morning, would my headache still be worse today? These kinds of questions are also very relevant in hindsight, retrospectively: was it the HR policy that we implemented that improved employee satisfaction, and so forth? So it's immediately relevant in all sorts of domains in the business world. Specifically in AI, we're talking about fundamental problems, which are fairness, robustness, and explainability, and I believe that causal AI has something to say in all of these domains. So there's also an immediate practical value in this. Let me ask you a question. I want to throw in another term that we haven't used yet, one that sometimes gets thrown into casual causal conversations (say that ten times fast): determinism versus non-determinism. We have a habit of applying that at a high level to AI models and saying, ah, they're non-deterministic, and there's a certain expectation, built over years of training non-deterministic AI models, that you have that disconnect in understanding causality from beginning to end. Can you distinguish a little between the two terms, in the sense that if someone is just kind of getting into
this, and they're early in data science, they might be trying to go: wait a minute, I thought AI was non-deterministic, but yet causality is explainable. How do those fit together as perspectives on an AI model? How do determinism, non-determinism, and causality work together, and what are the implications of those? When I put out the definition, I used the term "likely impact", and that already hints that the approaches we have in causal inference are probabilistic frameworks. There's no determinism in this. What we're interested in is still a probability, or a contrast of two probabilities: the probability of a certain outcome that I care about if I had taken a specific action, or if I hadn't. These are counterfactual questions, but there's still no determinism, in the sense that if I implement something it will always work, or I will always have success with this product, and so forth. There's an interesting intellectual history here, because the frameworks that we have in causal inference, directed acyclic graphs developed by Judea Pearl and these kinds of people, build on earlier work in AI, like Bayesian nets, for example, which were at that point still a purely predictive tool, a way to deal with complexity in terms of probabilities, because expert systems were too rigid; we figured that out in the 70s and 80s. Building on that, once you reason probabilistically, people immediately made the shortcut, the mental shortcut, of reasoning in terms of cause and effect, probably because it's so intuitive to us, but the tools were actually not ready for it yet. So that was the intellectual history of how we moved from probabilistic AI frameworks to causal inference. And again, I think people immediately started to think that way because causality is such a fundamental concept for human thinking. We learn it very early in our development:
babies can think causally; there's some psychology work showing we pick that up at the age of two or so; pets can sometimes think causally, probably. So it's a very fundamental concept. Have you found, in practically interacting, because I know you're involved in the data science community as well and have helped run events and other things related to this topic, that data scientists are hesitant? Because I could see some data scientists thinking: we're so in the mindset of making a prediction, and we probably understand that we're thinking about correlations in many cases, but then it's sort of scary to think, well, I don't know if I want to tell my executive "this is the reason that something happened". I can see the value of it, but how confident can I be in that? And that also gets to how, I think, people during COVID and other times realized how rusty they were on basic statistical and probabilistic concepts, when everyone was all of a sudden thinking about medical trials and such. Have you found this sort of hesitation among data scientists as you've interacted with them, and what are some steps data scientists can take to gain confidence in causal thinking and education around this topic? Yeah, so we talked with a lot of data scientists from industry, practitioners, and I don't think there's hesitation; it's actually the opposite. There's lots of interest. Of course, this is sort of a new topic; you need to tool up in a different area, so that's a step you need to take, but many people are very curious. We simply wanted to understand where we are right now, and we had a hunch that this is the toolbox that we know: predictive analytics, correlational AI. And then, based on that, what kinds of questions do you practically address, maybe also in the interplay with the broader
organization? What is it that the executives want to know? What do they approach you with? And is there a mismatch between the methods you're working with and the questions that are asked? In our interviews, and we did some quantitative analysis on this too, we could clearly see this kind of mismatch: many of the questions that are asked do actually have this causal component, because actions and forecasting interventions are so ubiquitous, and the standard tools are not up to the task. That actually creates this interest in approaching causal inference and looking beyond what we currently do. One interview that stuck in my mind was with an IT consultant who was working a lot in the data science field, and he said: yeah, most of the questions that our clients ask are causal questions in the end, but what we do with them in the end is always some form of predictive analytics, deep learning, and so forth. That always created this kind of tension in the projects he was working on. That was very eye-opening for [Music] us. It is now time for a Changelog news break. The team at Suno AI is helping change the game in text-to-speech realism by releasing Bark, a transformer-based text-to-audio model that can generate highly realistic multilingual speech as well as other audio, including music, background noise, and simple sound effects. It can also laugh, sigh, cry, and make other non-word sounds that people make. Crazy, right? Here's an example that includes sad and sigh meta-tags: "My friend's bakery burned down last night... now his business is toast." And here's one more with laughter: "I don't like PyTorch, Kubernetes, or schnitzel and xylophones." You can still hear some digital artifacts and blips here and there, but we're getting closer to synthesized audio that's indistinguishable from the real thing, and that's cool slash scary. You just heard one of our five top stories from Monday's Changelog News.
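The mismatch described in the interviews, clients asking causal questions while the standard tools answer predictive ones, is the gap between conditioning on a variable and intervening on it. A toy structural model (coefficients invented for illustration) shows that the two answers disagree once a confounder is present:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_x=None):
    """Structural model: Z -> X, Z -> Y, X -> Y (Z confounds X and Y).
    Passing do_x severs the Z -> X arrow, i.e. an intervention."""
    z = rng.normal(0, 1, n)
    x = z + rng.normal(0, 1, n) if do_x is None else np.full(n, float(do_x))
    y = 1.0 * x + 2.0 * z + rng.normal(0, 1, n)  # true causal effect of X is 1.0
    return x, y

# Observational answer: average Y among units that happen to have X near 1
x, y = simulate()
observed = y[(x > 0.9) & (x < 1.1)].mean()

# Interventional answer: average Y when X is *set* to 1 for everyone
_, y_do = simulate(do_x=1)
intervened = y_do.mean()

print(f"E[Y | X ~ 1]   = {observed:.2f}")    # about 2.0: inflated by Z
print(f"E[Y | do(X=1)] = {intervened:.2f}")  # about 1.0: the causal effect
```

A purely predictive model trained on the observational data would report the first number, which is exactly the tension the consultant in the interview described.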
Subscribe to the podcast to get all of the week's top stories, and pop your email address in at changelog.com/news to also receive our free companion email with even more developer news worth your attention. Once again, that's changelog.com/news. [Music] Well, Paul, you've described really well how to think generally about causal inference, causal AI, and causal machine learning, and the importance of it, and you mentioned that in doing causal inference you have a different tool set, or maybe different algorithms that are applied. One thing that of course I've done before and know about from various data science positions is experimentation or hypothesis testing, like A/B testing. I know that only scratches the surface; you were talking about directed acyclic graphs and other things. So could you give us a broad sketch of the main categories of approaches within causal inference, and how we can think about those from a really broad categorization? Traditionally, people divided the field into experimental and observational methods. Experimental would be the A/B testing you're talking about; one of our interviewees even called it "the big hammer" that tech companies swing around. A/B testing is applied a lot, sometimes together with some form of multi-armed bandit, reinforcement-learning-type approaches, but often just in this plain vanilla way. And that's great, because experiments are easy to set up in many domains, easy to understand, and you don't need a lot of background knowledge for them. You simply try out different things: different shades of a button on a website is the classic example. But in other domains it's really not that simple, because experiments can be very costly, and they can be unethical for many questions. I think you mentioned the COVID pandemic earlier; that was an interesting example to observe, because when we
tested the vaccines, of course we did the standard clinical trials, which are an experimental method, A/B testing if you want. That costs a lot of money, but we have the procedures for it, and we need to approve drugs that way. But then, after we rolled out the vaccines, there were immediately follow-up questions: for example, where is the vaccine more effective, for the older population or a younger population? In which way do we need to roll out scarce vaccines, and so forth? These kinds of questions were not covered in the controlled trial; we didn't have experimental evidence for them. We needed to answer them based on ex-post data: people picking up vaccines, and then seeing where they're most effective. That was interesting to see, because many of the questions that we ask in practice do involve this observational causal inference. By observational causal inference I mean we don't actively intervene ourselves; we passively observe the data and still want to get cause and effect out of it, although we haven't designed the experiment ourselves. In a sense, we're then trying to mimic a thought experiment, if you want, with observational data, and that creates all sorts of problems, because the people who picked up the vaccine earlier are probably those who thought they had the most to gain from it, for example. So there's this self-selection bias, or confounding bias, and we need to address all of these things. These are the two main categories, and then within those categories we have all sorts of different techniques and algorithms; experimental design is an entire course catalog at our university. In the observational field, for example, I originally come from an econometrics background, and in econometrics and economics we ask a lot of causal questions, and we have tools like regression discontinuity design, difference-in-differences, nearest-neighbor
matching, and so forth. The new kids on the block are the computer scientists, and they're catching up fast in causal inference; they develop techniques like directed acyclic graphs and causal reinforcement learning, so all sorts of exciting streams of literature are coming up these days. I'm really trying to absorb what you're saying, and it's very interesting. I'm wondering: if I have a problem today, like before our conversation, and I want to go through the typical data prep, model training, and model testing to deployment, but now I've listened to you and I want to start implementing causal approaches in my workflow, how does my workflow change? What does it look like with typical tools now, and where might the gaps be in the typical tool chain that we currently have? How do we make it practical and go do it after the show? Starting from the epistemological challenge: we cannot do causal inference in a purely data-driven way. We cannot just optimize a target function, or look at our confusion matrix or loss function in that sense. We need to complement this with background knowledge. In the simple examples, it's not just ice cream sales and shark attacks; there's a third variable lurking which we need to consider, which is weather, probably, or sunshine. That's the simple case, but now imagine a problem that you approach for the first time, where you're doing exploratory research, so you don't have this good theory; we need to do something about this. A lot of the standard challenges that we have, collecting good data, maybe designing an A/B test, are the same, but there's this additional step of bringing in background knowledge. And there it depends, I guess, on how the data science team is structured in an organization. Do we need to bring in outside stakeholders? Do we maybe need to talk with the marketing people or the logistics people, depending on the
project? Often, at the moment, data science teams are almost this kind of in-house consulting function, and there are, for example, not that many mixed teams that could bring in this background expert domain knowledge. Practically speaking, there are all sorts of tools out there in the standard software languages; the landscape is probably a little bit scattered, so you really need to know what kinds of libraries are out there. For example, in Python, the DoWhy package by Microsoft really became an industry-wide standard, because they also have this kind of causal inference pipeline implemented in the package, which starts from modeling a specific domain or phenomenon, to applying the causal inference algorithms and getting causal effects out, and then also refuting, or challenging, the model. So you have this step-by-step procedure that can really help you get started and get results quickly. One of the things you said that I wanted to ask a clarifying question about: you talked about going to that kind of external source, the extra authority, if you will. A lot of practitioners these days are kind of starting on their own, if they're not on a big data science team. I work in a big company and we have tons of data scientists, so this probably doesn't apply to me in that capacity, but a lot of people in startups are out there trying to delve into new businesses, and they may not have access to outside data expertise. Do you have any tips or guidance for that practitioner trying to solve a problem for which they don't have that external expertise? How would you go about tackling that? How would you go about saying, it's me, myself, and I, joking around, and this is a way I can apply causal approaches when I don't have a lot of resources available to me? First of
all, I would say you're never really alone. Think outside of the box a little bit: often it doesn't take that much; you can just approach people and maybe talk with them for an hour and get the insights you need. Consulting the scientific literature on a certain topic can help too, in figuring out alternative explanations that you can then bring to the data and test against the data. We're also not completely helpless, in the sense that everything has to come from theory: there are data-driven approaches, in the area of causal discovery, that we can apply to get closer to a causal model based on the relationships that we find in the data. We know that never gets us 100% of the way, so we will always need to complement it with some form of background knowledge, but it can already help. And then, talking also to practitioners, I think the 80/20 rule applies: already getting closer to something causal is often good enough, and we should get away from the idea that it's zero or one, that either it's causal or it's not. Often we get closer to the truth, and if not, there are whole tool sets for sensitivity analysis with which we can challenge our assumptions and see how robust they are, and in practice this already helps tremendously. You mentioned reaching out to practitioners in the community around this. Could you describe a little bit, I know, like I mentioned earlier, there are some resources that you've helped co-found and run over time related to this; could you mention those so that people can find them as they look into the topic? Based on what we identified about where the field is, we actually saw the need for more exchange between different academic fields, because causal inference is almost a general-purpose technology; it's applied in various different
fields. I've mentioned economics, computer science, epidemiology, and the health sciences, but then also practitioners, so it's really a mixed group. So we set up the annual Causal Data Science Meeting. We started in 2020, so we had to do it online because of the COVID pandemic, and then realized that's really an easy way to get people into one (well, in this case, virtual) room, and there was lots of interest from practitioners. We're going to have the third iteration of this this year in November, so there's still some time, but hopefully the listeners will make a mental note. There are also good teaching tutorials out there, many blog posts, and online courses that you can sign up for; books like The Book of Why by Judea Pearl, maybe not really a textbook, but it really drives home the idea of why causal inference is so important, and it has really nice historical anecdotes, because Judea is really a giant in this field; Causal Inference: The Mixtape by Scott Cunningham, if you have more of an econ background, perhaps; The Effect by Nick Huntington-Klein. These are all beginner-friendly textbooks that you can pick up. And then try out the different packages, like DoWhy in Python. For example, there's a startup called Geminos that is developing causal inference software, and they have free trial versions where you can start drawing your directed acyclic graphs and see how answers change if you change assumptions. I think that is usually the best way to learn and pick this [Music] up. [Music] Well, Paul, I'm selfishly going to present you with a scenario and do some on-the-fly problem solving; I figure you're probably good at that, being a professor and always solving problems with students and colleagues. I mentioned my wife runs a candle manufacturing business, and there's actually this sort of "why" question that we've been talking about a little bit. So last year, to give
context, I just logged into Shopify. Last year they had 87,837 orders, and each of those orders, or at least most of them, when shipped, included a free sample two-ounce candle. It's a freebie add-on, and over time the assumption has always been: oh, people really like that, it's part of the package they get, and it increases reorder value. They see the package and they're like, oh cool, I got a free candle; now I love these people forever and I'm going to reorder. Well, the question has come up: obviously, at this scale of orders, that's a lot of free two-ounce candles, and even just the savings on those would be huge. So how might you, as a practitioner or someone thinking about this problem, dig into this, in terms of both the experimental and the observational approaches? Obviously it's very expensive to get different packaging and run an experiment at that scale, so it would be nice to know without doing a large-scale shift in packaging and that sort of thing. Any tips for me in this case? The big advantage is that you, or your wife, are actually controlling this process: you decide in which packaging to put the free add-on, and in that sense you immediately understand the selection process, or the treatment assignment, as we would call it here. In that situation, I think an experimental approach would be the way to go. Then it becomes more of a statistical question: how large does your sample need to be in order to draw robust conclusions? If it's just a yes/no question about a free sample or not, the experiment can probably be quite small. That relates a little bit to the COVID example I discussed earlier. Probably you want to broaden that up and think about, for example, heterogeneous treatment effects, in the sense of: is it high-volume customers who like
add-on most, or is it more the casual shoppers? And suddenly you have four groups that you're catering to — high volume, low volume, and treatment and control. These are problems that always come up in causal inference, so tools like causal random forests were developed exactly for that problem: how do you efficiently partition the population in order to reduce the costs associated with an experiment? You want to be as cost-efficient as possible but still get robust conclusions out. A similar problem then arises that I mentioned earlier — robustness of findings. Transfer learning is a big topic in AI, so let's assume you've done this experiment at this point in time and you found a robust treatment effect — people react to it, it increases reorder value. The question is then: six months from now, will the world have changed, or will these results still be valid? And maybe you're thinking that in six months not so much will have changed for this business. But for platforms like Booking.com, for example, this is very relevant for hotel bookings, because people who book hotels in the summer for leisure travel are very different from business travelers. So you have this kind of problem: can you transfer causal knowledge that you've obtained to a different domain? Because that would save you on experimentation costs, for example. So yeah, all sorts of interesting questions in that domain. But the big advantage is that you can actually run experiments here. In other domains we are reliant on data where people self-select into something — for example, the standard question in economics about the returns to college education. There we cannot randomly assign people to colleges, or whether they can go to college or not; we have to rely on them self-selecting into a sort of treatment and control group, and then it's always the question
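Paul's point that the yes/no version of the candle experiment "can probably be quite small" can be checked with a standard power calculation. A minimal sketch, with made-up assumptions — a 12% baseline reorder rate, a 2-point lift worth detecting, and z-values hard-coded for α = 0.05 (two-sided) and 80% power:

```python
import math

def two_proportion_sample_size(p1, p2, ):
    """Per-group sample size for detecting reorder rates p1 vs p2
    (two-sided two-proportion z-test, normal approximation)."""
    z_alpha = 1.959964  # z for alpha/2 = 0.025
    z_beta = 0.841621   # z for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical numbers: 12% baseline reorder rate, 2-point lift.
print(two_proportion_sample_size(0.12, 0.14))
```

Under those assumptions the answer comes out around 4,400 orders per arm — small next to 87,837 annual orders, which is the sense in which the experiment "can be quite small."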
whether we really have an apples-to-apples comparison, or whether it's perhaps apples to oranges. I just wanted to say, I think you're lucky that you got Daniel's example question, because coming from my industry I would have had to ask about, I don't know, hypersonic missile design or something, and I don't think we want to go there. This is a great thing about the podcast, right? We get to have the expert on, and I get to selfishly ask the question that helps me in my day-to-day. So, an excellent way of getting some free consulting in there. Yeah, so I wanted to take you back to something that you mentioned a little while ago. We were talking about the benefits of causal inference, and you brought up reinforcement learning, but we were generally talking about fairness, bias, robustness, and the impact of causal methods on those. Could you go back to that point and talk a little bit about what that means? These are huge topics in all of the different branches of AI right now, and on everyone's mind, especially with all the advances this year. How does causality affect that worldview of doing these amazing things in these different branches of AI, but doing it without bias, doing it fairly, and so on? I'll start with fairness, because that's actually the very first example that I use in my own causality and causal inference course here at Copenhagen Business School. It's a case taken from Google, actually. A while ago — I think in 2019, though the story goes back earlier — they had been accused of underpaying women in their organization. So there we have a classic example of protected attributes like gender, race, and so forth, and we want to prevent bias in some form of automated or semi-automated decision-making. That comes up all the time — in loan acceptance models, for example, we want to remove bias and so forth. So to make the story quick
— they had been accused of underpaying women in the organization. Then they did a fairly sophisticated analysis and published a white paper, and the result of that analysis was that they found they were actually underpaying men — at least they thought so — and not only men, but specifically high-seniority software engineers at Google. And because they're committed to fairness in their organization, they actually raised salary levels for these high-level software engineers based on that analysis. So it also had a practical component, a policy implication. We cannot analyze this case here in detail, but if you do that analysis, it is very likely that they actually made a fairly common causal inference mistake: they conditioned on variables that are downstream of — that are affected by — gender, like occupation, for example. And if you have discrimination already at that stage — that, for example, women don't have it so easy to get into high-level positions, for various reasons that we know of — then that would be a classic mistake, and you can produce these kinds of, again, nonsensical correlations in the end, like the sharks and the ice cream. That's one example that you can easily transport to other kinds of questions, like the algorithmic bias I mentioned. And that's a causal question, because if you don't understand how the variables in your model causally interact and relate to each other, you cannot answer it; you cannot decide how to correctly analyze the data. Robustness I mentioned — the transportability, transfer-learning kind of aspect of experimental knowledge. And causal inference techniques have also been developed for dealing with selection bias in data — a data set that might not be a representative sample of the population you care about, but is measured with some form of selection bias, because only happy customers answer your consumer survey, or
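The mistake described here — conditioning on a variable downstream of the protected attribute — is easy to reproduce in a toy simulation. In this sketch (all numbers invented), pay depends only on level and skill, with no direct gender term, but promotion to senior roles discriminates; the overall gap and the within-level gap then point in opposite directions, echoing the pay-analysis story:

```python
import random
import statistics

random.seed(1)

n = 50_000
rows = []
for _ in range(n):
    woman = random.random() < 0.5
    skill = random.gauss(0, 1)
    # Discrimination lives in promotion: men get a boost toward senior roles.
    senior = (skill + (0.0 if woman else 1.0)) > 1.0
    # Pay depends only on level and skill -- no direct gender term.
    salary = 100 + 40 * senior + 10 * skill
    rows.append((woman, senior, salary))

def avg(pred):
    return statistics.fmean(s for (w, sr, s) in rows if pred(w, sr))

# Marginal comparison: women earn less overall.
overall_gap = avg(lambda w, sr: not w) - avg(lambda w, sr: w)
# Conditioning on the downstream variable (seniority): senior women had to
# clear a higher bar, so they are more skilled and out-earn senior men --
# it now looks as if men are the ones underpaid.
senior_gap = (avg(lambda w, sr: not w and sr) - avg(lambda w, sr: w and sr))

print(f"overall (men - women):       {overall_gap:+.1f}")
print(f"within senior (men - women): {senior_gap:+.1f}")
```

The first number is large and positive, the second clearly negative: same data, opposite conclusions, depending on whether you condition on the variable that discrimination flows through.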
unhappy customers, but no one in between, right? That's this question. And then lastly, explainability. I think explainability almost comes for free with causal inference — don't get me wrong, causal inference is a hard task, but once you solve it, explainability almost comes for free. I mentioned The Book of Why, right? Causal questions are always related to "why" questions, and counterfactuals as well: why did my headache go away — was it because I took the aspirin this morning? I mentioned this example. This is the way we reason; this is the way we explain things to other humans, and so there's an immediate connection to explainability. That's a really great way to think about this, and it gets me thinking about what the impacts of these two fields will be as they interact more over the coming years. I'm wondering, from your perspective — because you're so plugged into the research that's going on in this area, but also the practical side of it and how data scientists are beginning to use these techniques — as you look forward to the next, let's say, year, or whatever time period you want: what gets you excited, or what trends would you like to highlight that people should be thinking about in this field? Or maybe it's just things that you're excited about in terms of new opportunities or new methods, whatever it might be. Yeah, I just attended a conference last week in Tübingen in Germany — the Causal Learning and Reasoning conference — and it was just exciting to see how many young minds were attending. It was a very young audience, a lot of grad students, specifically in computer science, and a bunch of senior people as well. That really showed me that this seems to be the next big thing in AI, and people have confirmed to us that there's more and more interest on the academic side. But we also see that in practice, in the
industry, yes. So I'm excited about, well, experimental design, which I mentioned earlier — heterogeneous treatment effects, for example: not being satisfied with one average treatment effect, one average causal effect, one number, but actually making this more fine-grained and opening up questions like, is it old people or young people who benefit more from vaccines, and how do we need to roll that out in the most effective way? Then on the observational side, I think causal discovery is really promising. This is the idea of how far we can go with simply trying to extract causality from observational data — we will never get 100%, I mentioned that, but how far can we go? One big challenge in that area is, for example, having good benchmarking data sets. In machine learning that's usually easy: you divide a sample into a training and a benchmarking set. With causality that's not so easy; you often need an experimental benchmark. A lot of work has been done in genomic research, for example, where you can knock out genes experimentally. So that is really exciting. There's new work on causal root cause analysis by Amazon, for example — figuring out the causes of outliers, even in an engineering system. You mentioned, Chris, that you're working in that area, and I've seen companies in the defense industry thinking about this problem of root cause analysis. Lastly, perhaps, because before I came to causal inference I was actually trained in economics, like I mentioned — specifically innovation economics — there's this idea of how we produce knowledge as a society and how knowledge spreads across society. And in causal inference there are new lines of work thinking about interactions between treatments: not just the idea that I take a pill and I get an outcome from that, but it's like
you take a pill, and that reduces the viral load in our community, and that's why I actually have a lower likelihood of getting sick, for example. These kinds of interactions between people are really important in many domains, and specifically also in the way knowledge spreads across networks. So that is something I'm really excited about. Awesome. Well, I am really happy that we got to have this conversation on the podcast, because I think it highlights something that's a real complement to many of the things people are exploring around deep learning and large language models and other things. This is a really important piece of the practical side of what data scientists are doing in the enterprise. So thank you so much, and thank you for your research on the topic, and also for engaging the community around this. It's really great, and we're really happy to have had you on the podcast. Thank you — I really enjoyed our conversation. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts — check out what they're up to at fastly.com and fly.io. And to our beat-freaking resident Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Capabilities of LLMs 🤯 | Large Language Model (LLM) capabilities have reached new heights and are nothing short of mind-blowing! However, with so many advancements happening at once, it can be overwhelming to keep up with all the latest developments. To help us navigate through this complex terrain, we’ve invited Raj - one of the most adept at explaining State-of-the-Art (SOTA) AI in practical terms - to join us on the podcast.
Raj discusses several intriguing topics such as in-context learning, reasoning, LLM options, and related tooling. But that’s not all! We also hear from Raj about the rapidly growing data science and AI community on TikTok.
Leave us a comment (https://changelog.com/practicalai/219/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Rajiv Shah – Twitter (https://twitter.com/rajistics) , GitHub (https://github.com/rajshah4) , LinkedIn (https://www.linkedin.com/in/rajistics) , Website (https://www.rajivshah.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Solving AI Tasks with ChatGPT and its Friends in HuggingFace (https://arxiv.org/pdf/2303.17580.pdf) | GitHub (https://github.com/microsoft/JARVIS)
• Generative Agents: Interactive Simulacra of Human Behavior (https://arxiv.org/abs/2304.03442)
• Wolfram ChatGPT (https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
• Comparing LLMs (https://nat.dev/)
• LangChain (https://python.langchain.com/en/latest/)
• Learn about LLMs:
• Emergence and reasoning in large language models (Jason Wei) (https://youtu.be/0Z1ZwY2K2-M)
• Sparks of Artificial General Intelligence (https://arxiv.org/abs/2303.12712)
• Learning Prompting (https://learnprompting.org/)
• Getting Started with Transformers:
• Transformers course (free) (https://huggingface.co/course/chapter1/1)
• Tasks at Hugging Face (https://huggingface.co/tasks)
• Training your own LLM Models:
• Efficient Large Language Model training with LoRA and Hugging Face (https://www.youtube.com/watch?v=YKCtbIJC3kQ)
• PEFT (Parameter-Efficient Fine-Tuning) (https://github.com/huggingface/peft)
• Dolly blog post (https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm)
• Illustrating Reinforcement Learning from Human Feedback (https://huggingface.co/blog/rlhf)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-219.md) | 20 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen — check them out at fastly.com — and to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, joined as always by my co-host Chris Benson. How's it going, Chris? Going very well — spring is in the air, we're having a good time here, lots of cool stuff in the AI world. Lots of new life breathed into interesting AI systems over the past days, and sometimes I hear about this in cool videos — which are way cooler than any videos that I produce — from our friend Raj, who's with us today: Rajiv Shah, a machine learning engineer at Hugging Face. How are you doing, Raj? I'm doing great, thanks for having me on. Yeah, so the last time you were on the show we talked about data leakage — have you leaked any data since the prior episode? I think any data scientist out there has leaked data on a regular basis, right? It's a hazard of the job, and one of the things I like to do is continually remind the new folks that that's likely to happen, so that they remember. Yeah, I did mention I've seen a lot of cool videos from you recently, and we were chatting a bit, I think on LinkedIn, about the data science and AI community on TikTok. Tell us a little bit about that — that's a fun topic. I'm curious what the AI scene is like on TikTok. So let me start with
how I got into it. About a year ago, I was trying to get my son, who's just starting college, to do a real practical project around AI. He's taking computer science, but he doesn't know what GitHub is, so I'm like, can we build a Discord bot, for example — something that appeals to him? So I said, let's give ourselves 24 hours; we'll do this over the weekend, we'll both go our separate ways, and then we'll come back, see what we've done, and share what we've learned from each other. So I go out, get a blog tutorial, work my way through it, and get something working. Then I go to him the next day and he's like, "Yeah, I kind of got stuck." I said, "Well, let me see if I can help you through it — show me the steps and what you accomplished." He pops open a YouTube video, and that's what he used to follow along. For somebody like me, who self-taught their way into data science largely through reading written material, it kind of blew my mind that somebody would learn how to code through a video. But it really opened my eyes, because already at that time I shared videos with my daughter on food and politics and music, and it just really struck me how this is becoming an emerging part of education and how people learn as we move on here. Yeah, and have you seen engagement with your videos? The one I saw recently was the — what is it, Segment Anything or Everything? I forget which. I saw your video on that one, which was cool, because it's also very engaging — you've got this skit element to it, but there's real information content in there, presented in an engaging way. How do you see people respond to these? So I've had great feedback, and I try to keep mine very focused on data science. I try not to be too clickbaity. I
try to be — like, you know, if I was on a data science team, would I recommend somebody watch this video? But the video style also lets you do different things. When I first started making videos — I used to be a professor — I did the traditional "let me just lecture you on this topic for 30 seconds." But as you mentioned, over time there are more creative ways of doing it, and one of the things TikTok allows you to do is tell it in a story or skit format, where you can have the voices of multiple people. If you're sitting at home on your phone, that's a much more interesting way to get a nuanced conversation than reading some blog post that has "here's four different points on this." So I think there's a lot of potential for teaching nuance with something like TikTok. Yeah, that's cool. And there's no shortage of things to talk about right now — you're probably more constrained by your ability to pump out these videos than by the AI things that are coming out. We've all had our minds blown recently, especially by the capabilities of large language models, but there are of course other things in computer vision and elsewhere. For you, what have been those mind-blowing moments, or what has been on your mind over the past — I can't even say the past year — the past two weeks? I don't know. So I think we just have to look back and reflect that we're really in a period of a huge amount of innovation in a short amount of time. This is one of those peak times in AI — it won't be like this a year from now, and it wasn't like this two or three years ago, where literally every week there are new developments. It's a fabulous time if you're an AI junkie and you like to check out the newest tools and see each incremental advance — there's no better time. It's not going to last for long, so
kind of enjoy it. I'd also push back on this a bit for lots of practicing data scientists who are very practical: you don't need to watch this stuff every day or every week. Many of these things are exciting, but if day in and day out you're in enterprise data science — you're inside doing churn analysis or some marketing analysis — many of these developments are going to take a while before they filter through to you. You'll have plenty of time to get up to speed; they're not going to change the face of every data scientist's work in the next two months. So I'm curious — it's a follow-up to both of these last two questions combined. You're going into different mediums now for teaching: short video, longer video, different things. We have all of this happening so fast — how are you thinking about reaching different audiences in data science? It's kind of funny: once upon a time it was just data science, but now we have different audiences, different age groups, different purposes. How are you making those different connections? I was just talking to someone today about how students coming into college now can't type — typing isn't a thing anymore, because of the way they've grown up with devices; they can poke and touch. That's got to influence things — if we're not adapting to that, then we're not keeping up. Chris, you just made me feel really old. And I think one thing that's happened is that data science came out of statistics, and for a long time the path to learn it was you went to college, you sat in a classroom, you had a statistics book. But I think this is the transformative part about AI and data science: now it's touching so many people, and you especially see this with these large language models — if you're a teenager and you have a GPU, all of a sudden you can download and follow
a script and get something running on your local machine where you can interact with this AI — which a couple of years ago would have been unheard of, for somebody to have such wide access. So I think the hard part about communicating to so many audiences is also the great part: we have such a large community that's engaged and interested and wants to use these tools. I'm going to bring a couple of things here on the fly for you, Raj, because you are so good at explaining these things. I'm at a conference right now, and I walked from a talk back over to here, and one of the things they were talking about was in-context learning with large language models. Could you help us out? We've talked a lot on the show about prompting large language models and that sort of thing, but I don't know that we've specifically talked through in-context learning — what does that exactly mean, and what should people take away from it? So, if we look at the development of these language models: a couple of years ago there was a blog post by Karpathy on working with LSTMs and how we could get these models to generate text for us. There we have the statistical probabilities of being able to put text together — it knows "the cat ate the dog," or that there are certain probabilities, and we could put a sentence together. A couple of years ago these were fantastic at making really weird stories — that's what they were good for, when we look at the GPT-2-era tools. Now what's happened is, as we've worked with these large language models and they've gotten bigger — we've incorporated more data, we've trained them longer — machine learning engineers have noticed a new kind of, what they call, emergent behavior that comes about from these models. It isn't there at the smaller sizes, but when these models
get really big they allow this new, capability of this in context learning, and and what in tonex learning allows, you to do is you can give the model a, few examples of a type of question and, the model will then continue to answer, in that question so an easy example of, this is sentiment imagine you had to, have movies and you had to rate the, sentiment in the old days if you wanted, to do this you would have to go out and, label a bunch of movies right let's go, get a 100 or a thousand movies we read, the reviews we label the sentiment right, is this a good review is this a bad, review then we train our model to do, that right that's traditional data, science approach what we can do with, these larger language models is say hey, here's three examples two of these are, good movie reviews one of these is a bad, movie review now I'm giving you a new, movie review will you tell me what this, movie review is and the model will reach, back with us with the answer and the key, here is we're not changing the weights, of the model we're not training the, model in any way just by carefully, asking it for some type of information, it knows and can kind of figure out oh, you like it like this well I will give, you back an answer in that same kind of, format style same type of information, and so this for me is just mind-blowing, and it also makes us rethink like a lot, of the tasks we do in NLP and how many, of these we're going to be able to use, this Paradigm to do it so that was a, long answer I'll let you see how much, you you digested that was a really good, answer so you know we all have this new, skill that we've been developing you, know around prompting especially this, past year prompting engineering is now a, thing where it wasn't very far back it, was you'd go what what's that so how, does this all tie in we have this new, skill about prompting and learning how, to prompt effectively to get this, information you're talking about this, emergent quality of these 
large language models — how do those tie in? What does that imply for steps forward, and what should people be thinking about to make this productive in day-to-day use? Let's take this example to something you would do practically inside an enterprise, where somebody might give you some type of document or chat transcript that's a little unstructured, and what you want to do is categorize it. Now, using these prompting approaches, we can take that information and ask the model: hey, will you structure it, will you clean this, will you take out the HTML-formatted text? It'll do that. Then I can ask, in another prompt: hey, can you summarize this — take this from a 100-line conversation down to the essential 20 lines? You can write a prompt for that. Then you can ask it: hey, will you categorize this — should I send it to my claims department, does it go to HR, does it go to IT? We can write a prompt for that. And so now you have developers using tools like LangChain, where they can tie together several of these prompts and create workflows which, prior to this, would have required separate machine learning models for each of those tasks. And I think this is really, for me, the mind-blowing part: how it can change machine learning and really deliver a lot of the democratization we've talked about for a long time, but through a natural language interface, where somebody can literally give it these tasks in a human language and have them accomplished. For the data scientists out there it's a little mind-blowing, because I've been in this place where we've tried to teach people citizen data science, and we have classes on how to properly partition data, and holdouts, and loss metrics, and all of this — but this approach dramatically changes the number of tasks people can
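The clean → summarize → categorize workflow described here is the shape of chain that LangChain-style tooling wires together. A sketch with a hard-coded stand-in for the model call — the prompts, canned responses, and routing labels are all invented, just to show how one step's output feeds the next step's prompt:

```python
# A stub standing in for a real LLM call (LangChain would wrap an
# actual model here); responses are canned so the chain is runnable.
def fake_llm(prompt: str) -> str:
    if prompt.startswith("Strip HTML"):
        return "customer says the invoice total is wrong"
    if prompt.startswith("Summarize"):
        return "billing dispute over invoice total"
    if prompt.startswith("Route"):
        return "claims"
    return ""

def step(instruction: str, text: str) -> str:
    # Each step is just an instruction plus the previous step's output.
    return fake_llm(f"{instruction}:\n{text}")

transcript = "<p>customer says the <b>invoice total</b> is wrong</p>"
cleaned = step("Strip HTML and return plain text", transcript)
summary = step("Summarize in one line", cleaned)
department = step("Route to one of: claims, HR, IT", summary)

print(department)  # -> claims
```

Swap `fake_llm` for a real model call and each of the three stages that previously needed its own trained model becomes one prompt in the chain.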
do without having to learn all those concepts. That's a great point. With the advent of ChatGPT and some of the others that are out — Bard, and everything else coming out — it has exploded the audience that can productively use this technology. Do you see any limitations in that going forward, or do you think it's going to continue to grow? I think this is, to Daniel's point earlier, the mind-blowing part. I gave you the simple example; now what you see people doing is taking this and combining it with other APIs and other services. So in that case of the movie reviews, maybe I want to get the weather forecast, or find out if the theater was open that day. Well, now I can use that same type of natural language interface and connect to other APIs, other services, and other information. This is where we see some of the most powerful applications, with tools like HuggingGPT, which lets you interconnect with lots of different Hugging Face models: I can ask it a question and give it a picture, and the model automatically goes out, figures out the appropriate Hugging Face models to use, runs them, figures out the answer, and brings it back to me. Or — and this repo has been going crazy — the AutoGPT one, where we take that idea beyond models and allow the large language model to do any task. We can say, "Hey, start up a business and raise some money for me," and the model will go out, work on that, check whether there are other databases or other APIs it can use, and continue to iterate. It might cost you a lot in tokens for GPT-4, but it will continue to iterate and try and try. And I think, you know, if you had asked me a year ago whether this was possible, I would have said no way — that's three or four years out. Like, I can kind of see how you're doing it, but to me this is why it's such a special
moment we're living in, because I don't think any of us could have predicted we'd be here a year ago, even in our conversations. So far we've listed a bunch of models — GPTs, AutoGPT, HuggingGPT, BLOOM, Flan, Flamingo, seven-billion-parameter this-or-that — in terms of large language models and what's out there right now. One interesting thing is open access, or the various patterns around that, and hosting. How do you think about the landscape of large language models? What does that look like right now — what are the major categories we could hold in our minds as clusters of these things? At this point there are tens of large language models, and there are a number of different ways to categorize them. One of the simplest is: which ones are proprietary and which ones are open source? There's a spectrum when we talk about access. There are some — for example, OpenAI's — where you don't have access to the model: you don't know what data it was trained on, you don't know the model architecture; you just send your data to them and they send back the predictions. So that's one model there. And then all the way at the other extreme: today, for example, Databricks released the latest version of its Dolly model, an open-source model that was then instruction-tuned on a data set Databricks created themselves, which they're making available open source for commercial use. So there's a whole spectrum there. But there are other spectrums too, because the models vary in size: you have, for example, something like BLOOM — developed by Hugging Face, one of the largest open-source models, at something like 176 billion parameters — down to much smaller models that are coming out, the LLaMA models and others, that are maybe a billion
parameters. And that size has implications in terms of how much reasoning ability, how much stuff is inside there, but also inference: is this something that your teenager is going to run on their own GPU, or is this something that's going to take a multi-GPU cluster to be able to effectively use? There are other dimensions, like what data the models were trained on. For example, with the open-source models we know what data they were trained on. One piece of this that's come up is knowing how much code a model was trained on, because one of the things that's often asked for is, "Hey, can we build a text-to-code type model?", where I want to do some type of autocomplete, some type of code generation project. Well, if I start with a large language model that already understands code, it's a lot easier to fine-tune it and build that capability. So it's about understanding the underlying characteristics of that data. Daniel, right, it's like an alphabet soup of different names, and literally every week they're popping up. And there are so many of these different characteristics, because they also differ, for example, on the model itself and what the licensing is, the model weights, the dataset that it was trained on, the training code. We see this with how Meta released the Llama model, where they told everybody about it, then they released the weights, but they gated the weights so only academic people were getting into them; but then the weights were essentially leaked, and now they're all over the Internet, so now everybody's using them. So it becomes very confusing, kind of a big thick mix of, you know, how to sort this out. So you're an organization out in the world today and you're trying to make sense of all of this, and if you just look at your last answer alone, it's just overwhelming for most organizations. There are all these different characteristics: there are big models, small models, open
source, closed source, you name it; you can slice it so many different ways. How do you make sense of that if you are, let's say, in management at an organization, not just the data scientist who's 25 and gets the data side, but you're trying to figure out, how do I do this in the larger sense? How do you start making sense of that? How do you know if you need your own model that you're going to create, or if you're going to use somebody else's, big or small? What's a good starting point for people to start sorting through the mess that we're all delighting in today? It is a mess, and I get calls all the time from model governance folks that are trying to, like, "We need to set out a blueprint for our company, we need to think through this," because right now the incredible pace of change and all of that, that's the downside: if you're trying to understand what's going on, it's really hard to. And I think for a lot of organizations at this point there are not a lot of easy cases for "Let's implement this because it's going to 10x our revenue for this particular thing." I think there is a lot of breathing room in terms of enterprises being able to figure out what the best strategy is for the models over the next year or so. So I personally have really benefited from Hugging Face tooling around this; some of the decisions that I've made in terms of my own integrations into the applications that I'm building are because I know there's a community around some of these sets of tools. There's a sort of interoperability if I want to pull in this model size or that model size or whatever it is. And even with these large models, like you mentioned BLOOM, there's so much integrated tooling. I remember a really awesome blog post about running BLOOM in Colab using Accelerate, bitsandbytes, and these things for quantization and all this
and all of that set of tooling from the Hugging Face ecosystem, I think, is so powerful for people actually, practically trying to do this. I'm wondering, there are so many cool tools coming out as well in that ecosystem, and you're of course at the center of it, being part of that community and that company. Any highlights that you'd like to call out? I highlighted the one which is really cool and I'm playing with, but what else should be on our radar? That's great. I know both of you kind of enjoy the Hugging Face ecosystem, and I've spoken highly of it before. The Hugging Face ecosystem is all about helping to create and democratize machine learning and build out the open source for it. To Chris's earlier point, we have a place where everybody can go and check the models and read what the licensing is for the model, what the implications of that are, and learn about that. Now, when it comes to these large language models, we've been busy building out pieces there. If you think about training these large language models, Nathan on our team has written some blog posts around using techniques like reinforcement learning from human feedback; that's the latest cutting-edge approach to figuring out how to get these models to align exactly with what humans want, because yes, we can feed a bunch of data into the models, but what comes out of them often isn't what you and I would think is the best, and using reinforcement learning from human feedback addresses that. I think one of the things I'm excited about is the PEFT library that we have, which is parameter-efficient fine-tuning. If you look at these models, they're huge; they take a ton of resources. PEFT has a number of different approaches in there for how we can fine-tune these models without having to load the entire model and modify every weight. And there are a number of different techniques, for
example: "Hey, can we take the entire model weights and find a smaller structure inside them, like a low-rank approximation (I can't think of the name), then get that little dense piece and just train that part and add it on?" If we do that, it actually works as a fine-tuning technique without having to train the entire model. So I think this is where the Hugging Face team is busy building out a lot of infrastructure and tooling, so we can all effectively use these large language models. It reminds me that tooling is tactical in terms of solving problems, and for Daniel and me, given the podcast name, tactical is practical. Wow, that's good. Maybe we should redo our tagline there, Chris. I'd have to run it through ChatGPT first to make sure it was good. But we've talked many times on the podcast about how a lot of times the practical side of AI is on the inference side, not as much on the training side, potentially because like 99% of what you're going to run your model on in production is inference. I'm wondering, with these large language models, I can see various scenarios happening, right? A lot of people are just putting that thin UI on top of OpenAI, and they're never training anything; they're using that in-context learning. But now, with the tooling that you just talked about, there's this ability to fine-tune these large models in a way that wouldn't require you to have a bunch of racks of GPUs; maybe you could even do it in some hosted system like a Colab or something like that. So how do you think that shifts people's approach to how they're solving problems over the long run? Because it was sort of like, for a while everybody's training their scikit-learn model, right? And then it seemed like for a while, okay, now I'm just going to use APIs because I can't train these models, and now we're kind of
coming back to this: okay, well, what about fine-tuning, parameter-efficient, where we're not loading the whole model in? How do you think that changes things moving forward? As somebody who's worked inside enterprises for a long time, I knew the infatuation with OpenAI APIs was only going to last so long, because I've tried to sell data scientists a black-box solution; you don't get very far, right? If it's inside your enterprise and your reputation, your job, is on the line to make sure that model works, you want full control over it. Not to mention enterprises want full control over their data that's going into the model and how it's being used. You're going to see, and this is where there's been so much energy, this development of open-source large language models. But what's blown me away in the last few months is just how widespread this community is, because some of the developments you've seen are around C++ interfaces for large language models, right? Things that no data scientist I know would be able to develop. But because there was so much excitement, we got other folks, typical software developers, engaged in building tools, and I think there's a lot of focus right now on building these types of tools for this efficient use of large language models, because nobody wants to have to have a cluster of GPUs. Microsoft, in fact, just today released their DeepSpeed Chat tooling to help people train models using less infrastructure, being able to do it faster. So I think there's going to be tremendous development of tools, because at the end of the day most people would like to have a model that they can fit inside their computer, or a couple of GPUs, something that doesn't take a lot, that they can control, that they can tune. So I think we'll see a lot of development and progress in terms of open-source pieces for that. Well, Raj, I am
curious to know how many of your conversations these days around AI models and large language models are about some of that tooling and practical stuff that we just talked about, and how many are around sort of ethical concerns, or hallucinations, or environmental concerns. What does that look like in your life right now? So that of course is a huge part, because again, this is the difference between traditional machine learning, where we often thought about bias in models, right, like is your model going to work for a young generation versus an older generation, and now large language models and this ability of generative models: they're creating information, and how accurate is it? One of the common fallacies we see, and hopefully most of the listeners here are quite aware of it, is that these models lie; they're just going to create output, and the output doesn't necessarily have a tie to reality. So this is one of the biggest education pieces that we have to do, because people see OpenAI, they see the other tools, and they're used to just typing in a question and getting back an answer. But to really use this in, let's say, an enterprise setting, I always suggest to people to pair this with traditional information retrieval techniques. We already know good ways of searching and pulling information; let's use those ways that are factually based, and then we can still layer on top a large language model to give you that nice chatty type of interface, right? Large language models are great at writing like that; take advantage of both. But yeah, there's a tremendous amount of education that has to be done around, for example, hallucinations, and that's just the tip of it. There's also, what's the training data that was used for these models? Where did that come from? And then once you use these models and you get output
from these models, and this is where customers, especially for some of the code generation and image generation ones, are worried: they're worried about their own legal consequences of using these models that might have some type of leakage from the training data, and copyrighted material that could be in the outputs. There are a lot of different issues going on. I've had conversations with people in various companies over things like the OpenAI licensing model, since they're using ChatGPT, and it's really made people aware that you can be giving over IP by using it, and that's just one of many possible concerns. I want to throw something out, and I know you've been asked this a whole bunch of times because it's a really big topic, but I'd love to hear your take on it. Given where we're at right now with large language models and some of the variants that we've talked about here, where does this sit in the concept of education? You have the gamut being run from "you're not allowed to use any of these models for your coursework," and then on the other side, and I think I may have mentioned this to Daniel a few weeks ago, I have a 10-year-old daughter, actually 11-year-old, in fifth grade, and she had an assignment, and I actually started us off doing some stuff in ChatGPT. I ended up having her do the work, but I actually incorporated it in. But I've also talked to people who are deathly afraid of it skewing academia and how you're measuring students' progress. What might be a reasonable path forward in terms of trying to integrate this new technology into schooling? I'm very pragmatic, and I know that we just have to kind of accept it and adopt it. Me too. Now, I agree that there are going to be short-term issues to figure out, who has access to the technology, making sure everybody does, right? Because this is an easy way for people who have access to those resources versus those who don't to further differentiate themselves and kind of
even increase the differences between groups even more. But I'm very pragmatic about this: I think it's a very helpful tool, it's very useful, and it's going to be a part of how we work. And it's not only about the education of students, of young people; I also think we need to get our co-workers on board too, because a lot of us, probably listeners, are early adopters that like playing with this, but I spent time teaching my sales team how to use the tools, like Claude is built into Slack. I'm like, "Hey, look what you can do with this," because I think it's one of those things that can enable a lot of people, but it takes a little bit of education, a little bit of pushing, to get people who aren't used to these tools to adopt them, and especially to understand not only the good that can come but also, like we talked about earlier, the hallucinations, so that they properly use these tools as well. Also, I think it's the web developer and other developer communities that are starting to enter this space, like you were talking about with the C++ stuff or other things. There are other people contributing, which I think is great; you have a wider set of views being brought to the table around how these things should behave and how we should use them. There are a lot more people at the table. And I know one of the things I've seen is of course a ton of that startup energy, people building things on top of this, some very quickly, like I say, that's just a thin landing page on top of OpenAI, but others that are really fascinating and interesting use cases for this technology. Of course a lot of that community as well overlaps with the community using Hugging Face tooling, and those are Hugging Face users you're interacting with. What is it like for you to see that energy around startups? There are so many things coming out. I know startups already have a low percentage chance of success, but a lot
of these things are really amazing and I, think could reshape like how we work how, we learn like you're talking about Chris, and and other things so what are you, thinking around that front and like also, having a sort of front row seat to see a, lot of these things like being released, it's an amazing time like that and I, love seeing the startups because people, are experimenting trying new ideas, trying new things right like most of, them will undoubtedly fail but I think, in the meantime we're going to get a lot, of good ideas for different ways and, approaches that we can kind of use these, tools and that that right there has me, very excited about that you know one of, the things that I've been really having, some interest conversations is are about, people who are not us not in our, audience people who are in the larger, world and really you know may have, Loosely followed you know kind of what's, happening in the AI space and kind of in, the mainstream media but they're, struggling to really understand what's, happening right now and you know we kind, of started the show off on that whole, premise is that there's so much, happening right now to the point of I, was at a dinner function recently uh it, was just a couple of weeks ago and and I, met this uh really cool dude who was in, his uh mid 80s but really sharp followed, technology and we started talking about, Ai and he's just like I'm trying to, track it and understand and one of the, points I turned to him with the idea, that we're having these large language, models that are now penetrating into, everyone's conscious I said this is that, moment where you're going to look back, and realize this was where you were, conscious of AI being a part of your, life and it will take off from this, point forward when you're talking to, people about these issues how do you, adjust people who are not used to this, stuff the way we are how do you get them, into the right way of thinking about it, in a productive 
way and kind of onboard them? Because it's not the same conversation today as it was a year ago, as you said; it's changed. So how do you approach tackling that? I think one of the easier ways is if I can get them to use the technology: if they can use an image generator, where they can type in something and see the different results they might get, or use ChatGPT, where I can kind of coach them through it. Because you're right, trying to explain exactly what this does without the context of actually using it is like telling somebody about something in the future; it's hard to contextualize and understand what's going on. The easiest answer for me is just getting them using it a little bit, and then that helps with then showing them what the boundaries are, what the limitations are, what we can do, what the possibilities are, once they have a grounding in that. As kind of a follow-on to that: we've acknowledged we're in this sort of historical moment, and in years past on the show, and last time you came on, we might have talked about historical moments in the context of AI, but I think we're all agreeing that it's becoming a historical moment for the whole world, whether you're in AI or outside of AI, because it's impacting everybody. You also acknowledged along the way that there are kind of ebbs and flows, and we're certainly at one of those moments of just intense new stuff coming out. What do you see in the future, both short-term and long-term? Where do you think we're going from here? Because it feels like we're in an Alice Through the Looking Glass kind of moment. So what might the future look like, and what are you guys anticipating at Hugging Face? So I agree, this is just an amazing moment, and I think it's more so for the people that are in it, that understand AI and what's going on and the steps we've made over the last year, you
know, and where we can go going forward I, still think we still have to figure out, when we're talking about kind of larger, humanity and the larger group of people, exactly what is the impact and how we're, going to use this because yes we have, chat Bots but most of us didn't spend a, lot of our life before using chat Bots, right like I don't know how much of our, Lives you know going forward we'll have, to do that so we'll have to see how, that's integrated but I think you know, all of this just shows us that the idea, of AI the idea of having using machines, to help us make better decisions is, something that is becoming much more, widespread we're really kind of on a, path with hugging face to help, democratize that bring that barrier down, allow more people so not just the people, who have been trained for four years in, statistics and went and got a PhD but, somebody that can think through a, problem a little bit go interface back, and forth with a computer can all of a, sudden build code or solve a problem by, tying some prompts together and really, really allowing lots more people to, harness the collective AI the collective, information that we have and allow for, more productive uses like that gotcha, good answer as you know Daniel and I, have been longtime Fanboys of hugging, face uh we think it's a fantastic, Community amazing tooling so as we close, out you want to point some folks to, maybe a few things that hugging face has, to offer that might be good ways of, ramping up in different areas uh just, kind of call them out absolutely so the, hugging face website has is a great, place to start there's a free online, course that you can start with using, Transformers there there's forums, there's Discord there's a community, there feel free to kind of jump in and, kind of get engaged there and then we're, building out lots of pieces you'll see, more models coming up over the next few, months that we're going to be releasing, more tooling for working 
with this. So yeah, there's a lot going on. Fantastic. Well, thank you very much for coming back on the show. You're always exciting, you're fantastic at representing Hugging Face and sharing your perspective with everyone, and we'll have to do it again sometime soon. Thanks a lot, man. Absolutely, thank you for having me, I enjoyed this. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking-resident Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music] |
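The parameter-efficient fine-tuning idea described in the transcript above (find a low-rank structure inside the frozen weights and train only that small part) can be sketched in a few lines of plain Python. This is a toy illustration of the low-rank-adapter idea, not the actual Hugging Face PEFT API; the matrix dimensions, the rank, and the constant values are made up for the example.

```python
# Toy sketch of low-rank adaptation (LoRA-style) fine-tuning.
# Instead of updating a frozen d_out x d_in weight matrix W directly,
# we learn two small matrices B (d_out x r) and A (r x d_in) and use
# W_eff = W + B @ A at inference. Only B and A are "trainable".

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weights(W, B, A):
    """Return W + B @ A, the weights actually used at inference."""
    delta = matmul(B, A)
    return [[W[i][j] + delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d_out, d_in, rank = 8, 8, 2
W = [[0.0] * d_in for _ in range(d_out)]   # frozen base weights
B = [[0.1] * rank for _ in range(d_out)]   # trainable, d_out x r
A = [[0.1] * d_in for _ in range(rank)]    # trainable, r x d_in

full_params = d_out * d_in                 # what full fine-tuning would update
lora_params = d_out * rank + rank * d_in   # what the adapter updates instead
print(full_params, lora_params)            # prints: 64 32

W_eff = lora_effective_weights(W, B, A)
print(W_eff[0][0])                         # approximately 0.02 added on top of W[0][0]
```

With the tiny dimensions above the saving is only 2x, but the gap grows with the hidden size: at a hypothetical transformer width of 4096 and rank 8, the trainable fraction drops below one percent, which is the point of the technique the guest describes.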
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Computer scientists as rogue art historians | What can art historians and computer scientists learn from one another? Actually, a lot! Amanda Wasielewski joins us to talk about how she discovered that computer scientists working on computer vision were actually acting like rogue art historians and how art historians have found machine learning to be a valuable tool for research, fraud detection, and cataloguing. We also discuss the rise of generative AI and how we this technology might cause us to ask new questions like: “What makes a photograph a photograph?”
Leave us a comment (https://changelog.com/practicalai/218/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Amanda Wasielewski – Twitter (https://twitter.com/awasielewski) , Website (http://www.amandawasielewski.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Computational Formalism Art History and Machine Learning (https://mitpress.mit.edu/9780262545648/computational-formalism/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-218.md) | 34 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well, Daniel, how are you today? I'm actually super excited for this conversation, because I don't know about you, but I've just been swimming in generative text AI for weeks and weeks. As have we all, I think. Yeah, this conversation feels like I can come up for air and think more about both computer vision and generative image AI and other things like that, because we're privileged to have with us Amanda Wasielewski, who is an art historian working in the digital humanities program at Uppsala University, and she's the author of a new book coming out in May, Computational Formalism: Art History and Machine Learning. Welcome, Amanda. Hi, thanks, thanks for having me. Yeah, like I say, I'm really excited about this. I have to be honest, I was a little bit intimidated, maybe because I don't know a lot about art history, but in looking at your book and also at the amazing research that you've been up to, there's so much practicality in this, both in terms of what is applicable to art historians and those working in that area, but also the things that you're talking about in terms of how we think about machine
learning and art, and how those relate, especially in light of generative things in recent years. So yeah, I'm super excited about this conversation. I'm wondering, you mentioned in the lead-up to when we were talking pre-episode that your background is more on the art history side; where did art history and machine learning start to collide for you? I actually started out, well, I studied chemistry as an undergraduate briefly before kind of discovering art and art history, and was a practicing artist for many years before I went back to studying art history again. So I had, I guess I've never been formally trained, but I had a kind of sideline doing artwork that was made using various digital technologies and certain kinds of programming, and I also worked a little bit in web design and things like that. So I had a kind of background in computational things, from both an art perspective and a professional perspective, before I actually went into academia and academic art history. So I've always had those kinds of interests in how art and technology collided, and I came to this whole field, the kind of emerging image-and-AI field, through older things like image databases and how they're sorted by metadata, textual metadata. That was the entry point, and then suddenly it seemed that more and more art collections, or digital image collections, were starting to use different computer vision techniques. So that's how I came at the field: through the way that computer vision was increasingly being used to sort large image collections, and image collections of art, in institutional contexts. It's interesting that you mention both elements: using machine learning to sort art, but also this background of people using textual metadata to describe art. And I know that you used this word
formalism, which in my understanding has some history in the art world. But how standardized is the literature and research around how you describe the features of an artwork? That's probably a very naive way to ask that question, as a person not in the field, but I imagine metadata to describe artwork is like, you know, artist: van Gogh, medium: whatever. It seems like what you're talking about goes well beyond that. Could you describe that space a little bit? As a quick add-on, can you also add just a little bit about what art history is, coming into that? Because we probably have a lot of people that are doing machine learning but not a lot of art history background, and some people may be wondering, including me, a little bit about trying to understand what it is; so kind of working your way toward where Daniel was, but starting a little bit earlier. For me, well, one of the ideas of the book was actually to, in my own way, try to bridge this gap, because as I said, I don't have any formal training in any of these, sort of, the computer science side, but I've been in this kind of digital humanities milieu, which is a combination of some computer science techniques with a kind of humanities focus in research. So I wanted, with the book, to both introduce art history concepts to those people working in maybe computer vision, but also introduce people in art history to some of the things that are happening in computer vision; so kind of trying to play both sides a little bit, but obviously from my own perspective in art history. And so, art history is not a very old academic discipline at all. Its origins in the 19th century revolved around practices of collecting antiquities, so ancient Greek and Roman artifacts, and that kind of collecting practice started to become
a more studied and systematic area, coalescing so that the first academic art history departments came about in the late 19th century. And back then, all academic subject matter, the humanities included, kind of aspired to the scientific model, in the same way as, you know, the natural sciences: empiricism, taxonomy, these kinds of things. So people at that point in time treated art objects kind of like specimens, as if they were studying plants and the evolution of plants. And so early art historians studied art in much that same way: they traced the evolution of art through time and through history. So it was really focused on how the kind of superficial qualities of art change over time, rather than a focus on other contextual things, like the artist's biography or other kinds of circumstantial things about the historical time period. But this has been a long-standing debate in the field, pretty much since the beginning. It goes both ways and often falls into two camps: the so-called formalists, who are the ones who just care about the external appearance of images or works of art, and then the people who care about the other stuff, you know, what the artist was thinking, what their intentions were, what their historical context was, and all that sort of thing. So I'm kind of reaching back into that history of art history. One thing that interested me in this area was that I saw computer vision research, so research that had no contact with the art history world, really using datasets of artworks to answer computer science questions. So not answering art historical questions per se, but in the process, because they're using artworks, they are touching on things that are important to art historians, or that art historians might be interested in. But I saw that there was this kind of callback to these formalist
methodologies similar to what, was happening in the late 19th and early, 20th century so I was interested in this, kind of what I saw is like a Revival of, these taxonomies kind of, matching like really simple way or even, sort of you know the kind of object, recognition by finding different motifs, or things like that so yeah that was my, as having had training in art history, and his methodologies that was what kind, of piqued my interest in what was, happening in computer vision because I, saw it as kind of um like Rogue art, history that was happening like without, art historians having any knowledge that, it was happening so I kind of wanted to, like call attention to it on one hand, for art historians but on the other hand, call attention to some of the art, historical issues that you know computer, vision researchers may not have found or, had access to so I had that kind of that, both, directional uh interest for me I think, Daniel and I probably really like the uh, the Rogue art historian uh designation, who knew that uh machine learning, practitioners would be kind of the, Pirates of the art history World in that, sense yeah yeah I've seen a lot of good, parallels or or memes recently I think, one of my recent ones was like AI is, like computer LSD um I think probably, like Rogue art historian is is another, good one, um so you mentioned that like machine, learning people were integrating, artworks like into their data sets or to, like answer certain types of questions, were those related to like I can imagine, like oh if I have these different, artworks in my data set maybe I can do, image classification and classify like, this is an artwork or maybe even more, detail like this is an artwork by a, person or like in this uh time period or, in this medium or something I could also, Imagine like artwork has objects in it, right like can I recognize objects, within an artwork or certain like, features that sort of thing is that the, sort of questions that 
were being asked, or or what were these question that you, kind of started running across that you, connected with the art history world, yeah so you hit on sort of two of the, main areas that were being addressed and, I think from my reading of the, literature as I understand it the, computer vision literature there was a, kind of you know obviously object, recognition in images has been a huge, Focus From the kind of the last 20 years, plus because it has so many uh you know, quotidian and nefarious uh applications, you know you lots of surveillance, applications but lots of like you know, we open our phones with our face kind of, applications and the ability of you know, a machine Learning System to like, recognize an object has obvious, practical applications and so I came, across a lot of papers that said, something along the lines of uh well, recognizing objects in a photograph is a, solved problem so I think at a certain, point in the last like 10 to 15 years I, kind of cover like a 15-year trajectory, of This research in my book researchers, kind of were looking for more difficult, data sets to tackle and one of those was, uh art data sets because um sorry to, recognize an object in a kind of, stylized painting would be something, that would be slightly uh more difficult, yeah so you know you had these sort of, object recognition uh activities that, were happening but from like my, perspective in art history it's not a, very useful exercise you know I don't, care really as an art historian if there, are a bunch of dogs if you can identify, a dog in a painting it's not that, interesting as like a tool to use for my, research so simultaneously there was a, lot of um research happening which is, you know the kind of categorization by, style um and this was really interesting, to me because this term style in art, history is a really fraught term it's a, really it has a complicated history and, art historians have fought a lot about, you know what does Style 
mean and how do, we Define it and the uh yeah, categorization by style in this terms, that you're looking at a kind of, superficial quality and you're, categorizing it by a known kind of, textual label, I think it's interesting because you, know this has now really important kind, of knock gun effects in generative AI, like if you open do and you see their, like kind of suggestion for the initial, prompt they suggest you write they say, an impressionist uh oil painting of, sunflowers and a purple vase so right, there in the generative AI platforms you, always have these quote unquote style, markers so I really wanted to sort of uh, I guess unpack what style means for art, history and what it might mean when, we're suddenly applying things like, impressionist in the context of, generative, AI Amanda uh I love how you brought us, along to understand both like this, intersection of art history and machine, learning and how like machine learning, was sort of Dipping into these alism, elements over time you talked about like, the prompts and Del or something like, that like the style um when you're, talking about now like art historians, kind of realizing how they can employ, machine learning within art history is, that the sort of thing that they're, thinking about like like I could imagine, if I take a bunch of artwork you know, clustering image embeddings to like look, at the style of like what is actually, similar between all of these images and, that sort of thing um that was kind of, where my mind went when you were talking, about style but how have like, practically art historians kind of been, employing this once they realized that, machine learning people were kind of, like extracting some of these, interesting features yeah so exactly in, the way that you just described there, were uh is a one of the sort of founding, fathers of art history Hinrich vulin who, he pioneered the you know so artist, storian have always been kind of uh you, know using tech for you 
know various uh, you know teaching Andor research, purposes and he pioneered in the early, 20th century the idea of having a double, slide projector in an art history, lecture so that you could compare to, artw it doesn't sound like much to us, now but it was the idea that you could, compare side by side in a lecture, setting to artworks at once and so you, would kind of see but you know the human, I is only able to sort of kind of take, in so many comparisons at once and so, the way that these uh type of, technologies have been used in art, history context is exactly in this kind, of mass comparison sense you know, comparing many many artworks many many, more than could be possibly compared in, a kind of one single view um so in kind, of literary studies they have something, called distant reading and there's a, kind of Corollary in our historical, study is called distant viewing and the, idea is you get a kind of top- down very, far away view of General patterns or, general Trends and the Hope was that you, can kind of notice new things through, looking from this distant point of view, but one of you know one of the things, that you know is important in that is, again you're looking primarily at visual, characteristics can I ask a, non-technical question just that when, you're doing that remote viewing and, you're making those comparisons like um, just to give me a sense of the field, like what might be an example like a, typical example thing that you're trying, to compare aside from whether it's, machine learning or entirely you know, without technology in the process just, to give me a sense of a touchstone on, what that is in terms of what the point, of comparison is or yeah yeah I'm just, kind of curious just as as a newbie to, to our history and learning from you as, we go I was just wondering what a, momentary aside from the machine, learning side of it what would what are, some of the things you're trying to get, to with it yeah so this is like the, 
classic art history one1 something we, call um formal analysis or visual, analysis where the basic step of art, history is you know first looking, without jumping to context or content of, an image or work to look at things like, texture line uh shape color those sorts, of basic building blocks of visual, information, um and once you've kind of understood, that you start to notice details and I, think it's a way of like looking very, closely at an image or an artwork to, sort of understand what that is doing, visually what the composition is doing, and then the next tool to add on to that, is comparison so once you understand, kind of what's happening on a visual, level purely visual level you start, comparing it and then you see okay so, there's different things going on in, this other artwork maybe from the same, time period or maybe from just just, after it and so you kind of start to, build an idea or narrative around you, know how artworks change over time so, that's the kind of uh standard art, history like 101 skill that you know we, start to cultivate I'm sorry that I took, you there but I appreciate you doing it, it is helpful for me yeah no of course, no I think it's I mean it's important, because it ties back into thinking about, what we want to do if we want to use uh, you know machine learning methods to, perform those same test s we have to, realize or recognize that Machine Vision, doesn't understand images in the same, way that we do as much as we might you, know remove how we interpret content uh, or context the way we kind of dissect an, image visually or the way we kind of, analyze the visual properties is going, to be very different in machine learning, exercise and the first way that that's, different is that you know the vast, majority of things we're dealing with, are physical objects that have been, digitized uh so there's like a kind of, layer of representation they photographs, already so there's already a difference, between say looking 
at an artwork in, person in a museum and looking at the, kind of digital reproduction I think it, is important to sort of understand that, Foundation as well so while you're, talking about that and kind of the, understanding it's kind of like I mean, my best parallel would be from the NLP, world where like chat GPT or something, does not understand user and content, right there's no understanding right it, can produce text but we process language, different than chat GPT does like as, humans and like you're saying someone, standing in a museum like processes that, experience of standing in front of an, artwork differently than a photograph um, an intermediate representation, differently than like a machine might, like find features that are good for, image classification or something like, that I'm wondering because a lot of, these computer vision models are so non-, explainable or like there's an, interpretability problem already right, in terms of like I might not know why an, image was classified in this class um, with like a convolutional neural net or, something like that is that a struggle, for like taking this field forward in, terms of applying machine learning in, these contexts or there ways to kind of, extract some of those main features like, you're talking about like shape and, color and Line and other things like, that yeah I think that there's a lot of, similar issues actually between the kind, of text world and the image World in, terms of this idea of what constitutes, meaning or understanding are you guys, familiar with the tank classifier, problem the tank classifier not I'm, sorry I don't think I am although Chris, knows about military vehicles but I, don't know about tanks I don't think, that's what we're talking, about it was a kind of apocryphal story, that was casted around a lot in sort of, uh machine learning circles that the, story was and actually the um it dates, back to a kind of someone made this up, as an example at some conference I 
think, in like the 60s but it became kind of, passed around as like it actually, happened the story is that the US, military during the Cold War wanted to, recognize in images the yeah the tanks I, do now that now that you said go into it, that way I do remember this yes yeah so, like differentiate Soviet versus, American tanks in images but then ended, up accidentally classifying the images, by the background uh weather or, environmental conditions and that is the, kind of thing that I think like really, illustrates what we deal with when we're, dealing with images because we, understand things like background and, foreground or um the kind of subject and, Surround in a different way we interpret, those the kind of illusionistic space of, an image in a certain way that you know, for a lot of kind of algorithmic, classification that surface is what we, might call a kind of a democratic, surface like all areas initially are, kind of treated the same on it has to be, some kind of training to differentiate, those and of course it's gotten very, sophisticated where it is we are able to, sort of separate those things out a lot, of the time but of course you still get, lots of cases like in sort of Medical, Imaging like I read a few things about, you know during covid they tried to, classify for instance like uh covid, infected lungs versus healthy lungs but, they used a training set of like, children's lung imagery and so they, accidentally classified by children, versus adults which seems like a very, silly error to make but um so we get, like issues like that and I think every, important because what it points to is, that essentially we're dealing with like, a two-dimensional surface to interpret, but often those are two-dimensional, representations of a three-dimensional, space that we as kind of, three-dimensional beings intuitively, understand when viewing an image like, that or a photograph for instance um, whereas you know machine learning, algorithms only know 
that we've kind of, isolated a certain pattern of pixels to, be a specific object and you know given, lots of examples they're quite good at, differentiating whatever object we've, designated but still there's no kind of, understanding of space it's not part of, the understanding of images in that, framework so I think that that's kind of, one of these interesting examples of, like just because it successfully, identifies something doesn't mean it it, understands what that thing is like a, dog in a in a photograph very good, explanation and but I do feel on behalf, of the defense industry hisory I should, note that we are much better at, identifying and classifying tanks today, than we used to be I don't know if I, want to know how good you, are that might be something that I want, to be ignorant of I I just feel the need, to say that, yeah I have confidence that things have, moved on significantly since the 60s so, someone should tell Vladimir Putin, that's all I'm saying that's all the, politics I'm inserting so I am really, interested in all sorts of things about, what we just discussed in terms of the, the understanding elements and and other, things but I'm intrigued by this uh in, reading through some of the materials, about your book and your work um you, talk about how computer scientists often, process these sort of like art image, data sets or or images that are part of, their data sets without any real sort of, understanding of art or art history um, and you kind of one of the things you, talk about in the book is how maybe, there's an enrichment of like the data, science and computer vision Side by, understanding more of the sort of, humanistic issues and elements of the, artwork and those sorts of things could, you describe a little bit what you mean, by that and how you think like because, we've mostly talked about machine, learning kind of enriching maybe art, history or things that could be done, there what about the other side of that, in ter terms 
of like things computer, scientists could learn based on this, kind of background and research on the, digital Humanity side yeah I mean I, think one of the things that is really, important to me is this idea that you, know um the assumption that accepted, categories are in some way static or, objective and unchanging can lead to, really misleading finding so for example, there was one um study that I looked at, where they were classifying um paintings, by artistic style and they noted the, authors noted that um action painting, was confused with abstract, expressionism and you know said oh well, in future you know we will be able to, hopefully Rectify this categorization, error but for an art historian you know, those are two kind of contextually, specific style terms that to compet in, art critics came up with or groups of, critics to and they have a kind of, ideological background so there's a, reason that some critics wanted to call, this midcentury American art movement, abstract expressionism and some wanted, to call it action painting and neither, term is really subservient to one, another and you don't need to, necessarily understand the full kind of, art historical picture to like you know, say if you're using Del and you want to, make either an abstract expressionist or, an action painting as a style you, probably get good results with both of, those terms but the kind of issue is, that you know these are not stable, categories there's um different style, categories have very different kind of, Origins they're inconsistent amongst, each other you know some of them span a, few centuries some a decade some are, small groups of artists who all knew, each other and work together some are, kind of catch-all terms or contextual, terms so I think people you know in, computer sign are like great I have a, new data set to work with and here's the, categories and I'm going to work with, this and and then see how effective it, is you know categories and that like, 
that's fine because they're working on a, problem that's different than, necessarily what an art historian might, work on but the reason I kind of insert, myself there is I'm like hey well that, is actually kind of an art historical, problem that you're working on but in a, kind of way that doesn't understand that, these terms are not fact that they're, not stable in the way that you can kind, of like once you insert something into a, database it becomes kind of solid in a, way that it doesn't when you're, discussing it like I am like I could, talk for another 20 minutes about you, know who came up with these terms and, why and and you know what their you, political beliefs might be and that sort, of thing could you talk maybe not for 20, minutes but for for some period of time, I'm kind of curious because you've kind, of posed this problem you know that's, kind of brought by the data science as, the way I'm seeing it whereas you're, saying you know you may not have those, categories correct what are you, proposing as a way of mitigating that um, in a way that is consistent with art, history in terms of approach like how, would you you know that has that kind of, qualitative aspect yeah I mean I think, like something I was talking about with, a colleague who comes from a kind of, computer science background is how do we, bring together some of the you know, concerns and interests of computer, science with art history in a way that, is kind of interesting to both sides one, of those things is you know for art, historians the context of the Nuance of, terms in a kind of qualitative way is, important but then how do you integrate, that into a kind of data context is the, question um and unfortunately I don't, have a really good answer but I know you, know there are researchers who are, beginning to sort of combine different, well text and image or different, modalities of information together to, try to create a sort of you know or, networks bigger picture about you 
know, how we might understand our Works Beyond, just a kind of textual category um so of, course we can do a kind of like dispense, with categories Al together and do a, kind of purely visual kind of like, unsupervised like clustering type thing, then what do we call those clusters or, what do we call those Collections and, that brings you right back to art, history once again so it's this kind of, uh how to integrate all the sort of, qualitative Nuance within a data context, is the big problem as I I see it and I, think that's something that I still, haven't found or heard a really good, solution but I've been talking about it, with some of my colleagues so maybe, we'll come up with some Bright Idea in, that area could that change depending on, what question you're answering uh with a, given you know training session like you, could do you could take different, reinforcement learning approaches um but, I would imagine that that might change, the output and so you'd be looking for, an approach that's kind of consistent, with what you're trying to achieve from, the art history side of things is there, any thinking around different approaches, based on as you change those that you, get different types of outputs you know, there's something that you're going for, that maybe uh a data science, practitioner without the art history, might be going for something different, kind of as you've already talked about, what's the thinking around different, approaches to it with generative or, reinforcement or combination of them I, mean I don't think that you know we can, expect that me and a computer vision, researcher will have the same goals or, desires or outputs out of a research, question or problem but I think from my, end I would like to add some Nuance to, this kind of the cold data because of, course even computer vision researchers, they have a kind of quantitative result, but they end up making an interpretation, like the one I just said they said oh, well uh we've 
had this confusion between, these two categories and we'd like to, fix that so there's always a kind of you, know as much as you know data scientists, or a computer scientists might think, they're just concerned with sort of, numbers or output or objective facts, there's always actually a kind of, interpretive thing that happens so from, my point of view I think you know we, might not be answering the same research, questions but we could come together in, that kind of in the same space somehow, to build a bigger better picture of what, we like whatever phenomenon or artworks, or a collection of images that we might, be looking at I think think that's a, really good General Vision to have I, think in multiple ways uh and probably, for multiple problems outside of this, one so one of the things that is, mentioned um in the book and that you, discuss are a couple of these like, paradoxes that I find really interesting, in the fact that like deep learning as, applied to like these features of, artwork can be used to both like create, and detect forgeries so like both of, those things are true and there's like, this side of things where like like high, artworks can become digital assets and, like digitally generated assets are in, in certain cases being considered sort, of more like the high art side of things, like how are you wrestling with these, paradoxes coming up that like machine, learning and deep learning are operating, on both sides of these uh things I mean, I obviously think it's really, fascinating this kind of arms race or or, you know there's um a famous quote by, Vero that the invention of the ship is, also the invention of the ship wck you, can't have one without the other so I, think it's interesting that there's, always the sort of positive forward and, the and the sort of destructive negative, element as going on simultaneously but I, think in terms of like you know we, really saw generative AI explode you, know in the last you know especially the, 
image tools in the last year and a and, some months I think you know the latest, kind of the Pope jacket hopes of of the, last week uh really illustrates the, extent to which you know I mean we've, been kind of distrustful of the, authenticity of photographs you know I, mean since photography was invented, people were aware that it could be, manipulated in the 19th century you know, we had hand uh techniques to manipulate, photographs there's always kind of, editing there was always different kinds, of manipulation but of course it's only, just gotten kind of easier and you know, Photoshop there were a lot of the sort, of fears that are currently being talked, about in terms of authenticity or, believability or fakeness or trust in, images were raised in the 90s around, Photoshop and it we kind of you know, became accustomed to photoshop but I, think you know this question of, authenticity uh you know whether that's, in detecting art forgeries or if it's in, simply you know how we trust the images, that we see is kind of rearing up again, because we have this access now everyone, has access to quite sophisticated tools, to create photo realistic images that, aren't photographs at all and this is, something that I've been working around, subsequent to after I wrote the book is, the you know idea of are the images that, are created by some of these generative, AI platforms uh that look, indistinguishable from photographs can, we consider them photographs actually so, it's a kind of new tool to make, photographs that doesn't have a camera, that doesn't have a lens doesn't have a, photographer it's a kind of composite of, the learnings of vast data sets um so, that's like all of those questions that, I address in the book about like art, authentication and then on the F flip, side you know the idea you could create, a forged or fake artwork from a, generative tool are I think even more, kind of relevant in the last year a few, months because of the sort of the new, 
paradigm of creating manipulated images, or manipulated photographs yeah yeah, it's interesting that there's this, element of what you're talking about, where it's like well if you would have, asked me a year ago what is a photograph, like that would have been fairly, clearcut like I think now it's like well, what really like is it like you're, saying is a camera needed yeah I saw the, PO running from the police I I'm sure, yeah I did youall see that one did you, see that that it was I don't know if I, saw the running one I definitely saw the, puffy coat it showed the pope running, with police trying to capture him in the, street yeah but you know I've been on, these kind of uh I guess doing a a kind, of autoethnographic embedded study of, these uh lots of these communities on, like Reddit and Facebook and other, social media that are just like kind of, uh amateurs you know doing mid Journey, or uh Dolly images or like night cafe or, these kind of things and I've been on, them for you know over a year just like, reading posts and reading posts and, looking at images and even I you know, after I you know spending so much time, on these kind of venues and looking at, lots of uh AI generated images my, husband just showed me briefly on his, phone on Twitter like oh look do you see, the pope was wearing this big puffy coat, I was like oh that's weird I didn't, question it yeah and you've been in, yeah I mean I'm someone who's like, actively working on this so how you know, how can we expect people to be sort of, distrustful when you know we want to, believe what we see and I think also in, the kind of last I mean not to get too, political or anything but in the last, kind of decade the idea of the, photograph as a document of uh you know, truth-telling medium in terms of things, like police brutality or like uh, documenting abuse in other situations as, like kind of uh way to expose those, things and trust you know incidents, where the police may not have told the, 
truth about what happened in a, particular situation um we we put a lot, of stake in those things and so yeah, then the question becomes you know what, are we facing now yeah we have a new way, to have manipulated images as you were, describing that and I can't given the, industry I'm in I can't help but, obviously put the filter of my own my, own employment but it made me realize, that there are common problems that an, art historian and that people in the, intelligence Community for instance are, struggling to deal with at the same time, who knew that there could be career, paths Crossing between the two with that, kind of uh maybe ominous uh point where, do you think this field is going as you, look at doing these different types of, qualitative analysis where uh not, everyone is necessarily trying to get, the same thing out of combining these, fields and recogniz in that there are a, set of common challenges that you know, art history has that other fields may, have where do you see from your, perspective from your filter where do, you see this going where do you see your, field evolving into what kinds of, questions do you expect to be uh asked, and and What new technologies uh in the, AI world do you either expect or maybe, hope to see to help you find those, answers in the years ahead yeah I mean I, think like art history in particular is, fairly technophobic in terms of like uh, maybe wouldn't be the earliest adopters, of uh you know AI techniques per se but, you know I already think uh maybe I, don't have such a like sci-fi dystopian, Outlook but rather kind of very Almost, Boring outlook on I think a lot of these, tools will just simply be integrated, into a research practice the way that, chat GPT will be used as a kind of Aid, or different GPT type things Aid to, writing rather you know there's a lot of, fear right now in academic settings, about you know quote unquote cheating in, terms of this text generators but I, think similarly in terms of image, 
analysis or image recognition either, stylistic recognition or object, recognition will be a you know really, useful tool in terms of sorting through, large art data sets you know there's, certain kinds of you know say for in I, had a friend who um she was studying art, in Israel around and before the founding, of the Israeli State and they had a lot, of art exhibitions but they didn't keep, very good records of what the artworks, were that they were exhibiting so she, just had a bunch of photographs of, artworks on a wall and had to like set, herself to the task of like determining, what these artworks were and they, weren't necessar you know very, well-known artworks it sounds kind of, like a boring application but you know, might be a very useful tool in terms of, like okay if we had the ability to sort, of put this image in and try to identify, the artists of unknown artworks through, these kind of mechanisms for for my, disciplinary perspective that would be, very useful I mean already it's they're, being used these kind of computer vision, or machine learning techniques are being, used to sort large art data sets rather, than accessing artworks through textual, metadata accessing them through uh what, can be interpreted visually in, particular images or isolating images, extracting images um matching images, across different Publications or, different exhibition venues so I have a, very kind of boring Outlook I guess you, know I don't think it'll lead us to like, some kind of scary dystopia in future, but it'll just become a kind of, naturalized tool resource that we can, use but obviously you know with the kind, of caveat that we always have to think, about ethical issues and also think, about what categories mean and how we're, kind of organizing and arrang in things, not just kind of giving over the task of, organizing to some unknown kind of Black, Box I don't think that's boring I am, kind of encouraged by that uh as our, listeners know this is practical 
Ai and, I think we all to some degree love the, practicalities that come out of this so, I think that is actually the exciting, part that this isn't this goes beyond, the hype and it's making a difference in, people's you know day-to-day I think, that's where things really get exciting, um well I really appreciate you joining, us Amanda it's been a real pleasure to, talk through these things I've learned a, lot and I I so thrilled to see the work, that you're doing and your contributions, which I think are really important so, yeah keep up the good work and um happy, to have you back on anytime to help us, parse through some of these things oh, great yeah thank you guys so much it's, been really interesting and fun, [Music], thanks, thank you for listening to practical AI, your next step is to subscribe now if, you haven't already and if you're a, longtime listener of the show help us, reach more people by sharing practical, AI with your friends and colleagues, thanks once again to fastly and fly for, partnering with us to bring you all, change talk podcasts check out what, they're up to at fastly.com and fly.io, and to our beat freaking residents, breakmaster cylinder for continuously, cranking out the best beats in the biz, that's all for now we'll talk to you, again next, [Music], time |
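The "distant viewing" idea discussed in the transcript, clustering many artworks at once by visual features and then confronting what to call the resulting groups, can be sketched roughly as follows. This is a hypothetical illustration, not a method from the episode: the "embeddings" are random stand-ins for what would, in practice, come from a pretrained vision model, and the plain k-means loop is just one possible clustering choice.

```python
# Sketch of "distant viewing": cluster artworks by visual feature vectors,
# then face the art-historical question of what each cluster *means*.
# Embeddings here are synthetic stand-ins (an assumption, not the episode's data).
import numpy as np

rng = np.random.default_rng(0)
# Pretend: 60 digitized artworks, three loose visual tendencies, 128-dim features.
true_centers = rng.normal(size=(3, 128)) * 5.0
embeddings = np.vstack([c + rng.normal(size=(20, 128)) for c in true_centers])

def kmeans(X, init_idx, iters=20):
    """Plain Lloyd's k-means with explicit initial center indices."""
    centers = X[init_idx].copy()
    for _ in range(iters):
        # Assign each artwork to its nearest current center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned artworks.
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(embeddings, init_idx=[0, 20, 40])

# The algorithm hands back unlabeled groups; naming them ("impressionist"?
# "action painting"?) is exactly where art history re-enters the picture.
for k in range(3):
    print(f"cluster {k}: {int((labels == k).sum())} artworks")
```

The point of the sketch is the last step: the unsupervised output is just integer labels, and attaching any style vocabulary to those clusters reintroduces the contested, unstable categories the conversation describes.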
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Accelerated data science with a Kaggle grandmaster | Daniel and Chris explore the intersection of Kaggle and real-world data science in this illuminating conversation with Christof Henkel, Senior Deep Learning Data Scientist at NVIDIA and Kaggle Grandmaster. Christof offers a very lucid explanation of how participation in Kaggle can positively impact a data scientist’s skill and career aspirations. He also shared some of his insights and approach to maximizing AI productivity using GPU-accelerated tools like RAPIDS and DALI.
Leave us a comment (https://changelog.com/practicalai/217/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog++ (https://changelog.com/++) – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with extended episodes, make the ads disappear, and increment your audio quality with higher bitrate mp3s. Let’s do this (https://changelog.com/++) !
Featuring:
• Christof Henkel – Twitter (https://twitter.com/kagglingdieter) , GitHub (https://github.com/ChristofHenkel) , LinkedIn (https://www.linkedin.com/in/dr-christof-henkel-766a54ba)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Christof Henkel | Kaggle (https://www.kaggle.com/christofhenkel)
• NVIDIA Kaggle Grandmasters (https://www.nvidia.com/en-us/ai-data-science/kaggle-grandmasters)
• Kaggle (https://www.kaggle.com)
• NVIDIA RAPIDS (https://rapids.ai)
• NVIDIA Data Loading Library (DALI) (https://developer.nvidia.com/dali)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-217.md) | 186 | 2 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at [Music] fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well, Daniel, how are you today? I'm doing great. Chris, have you ever been called a Grandmaster in anything? No, but I really wish I had, because it's a freaking cool name, man, or title. Wait, weren't you like a street fighter or something? You were like a black belt or something? Oh, now don't go there; something like that, 30 years ago, yeah, once when I was a kid. But you know what, I was never a Grandmaster at anything; I was just trying not to get pummeled. Yeah, I was just trying not to hit the mat, and that's it. Okay, well, today we have with us an actual Grandmaster, a Kaggle Grandmaster: Christof Henkel, who's a senior deep learning data scientist at NVIDIA and a Kaggle Grandmaster, a multiple Grandmaster, by the way. Yeah, yeah, in multiple of the different categories. So welcome, Christof, it's great to have you here. Thank you Daniel, thank you Chris, very happy to be here. Awesome. Yeah, well, for those that aren't familiar with this concept of Kaggle Grandmaster, could you give us the briefing on what exactly that means, in the context of Kaggle also? I think a lot of people are familiar with that, but just in case: what is Kaggle, and
what does it mean to be a Kaggle Grandmaster? Yeah, so what is Kaggle? Kaggle, I would say, is like a platform for machine learning in general. It started off as a platform for hosting machine learning competitions; that's how it became popular. But in recent years it also expanded to being a platform for discussions, a platform for sharing notebooks. They're hosting millions of datasets, so they're trying to become really like the go-to community for every topic around data science. It's free to register for everyone, and they also provide some free resources where you can run code and try different stuff and competitions. On this platform they introduced different tiers in order to gamify it a little bit, to incentivize users to post content or to participate. So there are four different areas in which you can reach different levels. There are competitions, which is the most famous one, but there's also notebooks, where you just progress by sharing notebooks with others, and the progression is based on upvotes on your notebooks. Then there are discussions, which work in the same format: you post an answer to a question, or you post an interesting topic, and you can also post just memes and generate upvotes that way. And then there are datasets, so you can also post an interesting dataset, or a dataset you think might be helpful for others, and then people can upvote your dataset, and by this you progress. You basically progress by earning medals; there are bronze, silver, and gold medals in each of the four areas, and with these medals you can reach different tiers. So you start as a novice, I think, then you're a contributor, then expert, then at some point you're a master, and the very last stage is Grandmaster. To put that into perspective: of the 10 million users that are registered on Kaggle, there are 280 competition Grandmasters. So it's really
like the elite of the elite, the top-notch people in the area, I would say. So I have to ask, because we were talking about it: which of the three categories are you a Grandmaster in, and what's the fourth one that you're not? And of course I'm going to ask you: when are you going to become a Grandmaster in the fourth one? Well, I'm a Grandmaster in competitions, and that's the most difficult one indeed. Then I'm a Grandmaster in notebooks, because I shared some high-value notebooks, and then I'm also a Grandmaster in discussions, because I like to discuss stuff; that's also why I'm here. Okay. But I'm not so fond of curating datasets and uploading datasets. I can't blame you. That's why I'm only a beginner in datasets. That would be the one I would choose first. See, that's Daniel; Daniel loves to do data munging and stuff. It's sick, it's terrible. But I understand; I give you a pass on not being a Grandmaster in the fourth one there. What got you into Kaggle in the first place, and what was the journey like towards where you're at now? Some people might just be jumping in on Kaggle and trying things, and they have a vision of how far this could go, but what was the journey actually like for you? I think it's quite interesting, because my journey began right in the last months of my PhD. I did a PhD in mathematics, and in the last few months, after I sent out everything and I was just waiting for my defense, there was suddenly some free time, and also free weekends, which I wasn't used to during the PhD. I was also always curious about the AI topic. Back then, five or six years ago, it was not so hyped as now; it was a rather niche area, neural networks and so on. So I was just curious about that, watched some YouTube videos, started a Coursera course on what neural networks are and so on, and through that I quite quickly found out about Kaggle, and then
just started with my first competition right away, and since then I'm hooked in the system. And how long has that been? Six years now, I think. And during those six years my professional life also progressed more and more towards machine learning and deep learning and data science. Six years ago, when I joined Kaggle, I was working as a risk analytics consultant, so I had nothing to do with machine learning, I had nothing to do with data science. I programmed a bit on risk models, so I had some background in R programming and MATLAB, but I had never used Python before. And then, due to Kaggle, my professional career also shifted towards machine learning and deep learning, until right now I'm working as a deep learning data scientist at NVIDIA, which is one of the top-notch companies in this area. Yeah, that's like the gold standard of jobs in the AI world right there. So do you feel like the experiences on Kaggle and your success there, in what ways did that contribute to your own career advancement, and also your understanding of what you wanted to do as your career advanced? Yeah, it really had a lot of impact. Step by step, I moved into the position I'm in right now. When I started, I was doing Kaggle before and after work a bit, not too much, like half an hour after work, half an hour before work, and on weekends. And of course I did horribly in my first competitions, because I had no clue about anything, but the nice thing is that you really progress step by step. In the first competition you do horribly, in the next one you do badly but not horribly, and then you progress more and more until you become better and better. I quite quickly realized that I had a lot more fun in machine learning and deep learning than in risk consulting, just because you can be more creative, I would say. I moved within the consultancy company; I was lucky that they also had
a data science team, so I moved to the data science team there, and I had my first synergy effect between Kaggle competitions and what I learned there and what I was using in projects. I could use my skills in the projects, and I could also use skills I gained in the projects in Kaggle competitions. But that was five or six years ago; there wasn't much deep learning in the industry, especially in the insurance industry, where the focus of my consultancy company was. So I was not challenged enough, but I wanted to do more and more in this field, and my skill set also grew more and more, so I decided to quit the job and found my own deep learning consultancy, just to have even more synergy between projects and Kaggle. Well, tell us a little bit about what that was like in those days, because as we've grown up with deep learning over the last few years, I would guess that at least in the beginning it was a little bit challenging to land engagements, maybe. Or was it? Or did you have them from the start? Because I know for me, early in that phase, about the time Daniel and I started the podcast, people were like, "Deep what, huh?" So did you have any challenges in those early days that have obviously evaporated as the world has taken this on? Certainly, and not only in terms of projects. People, especially the decision makers, I would say, were really cautious about the possibilities of what you can do with deep learning, especially five or six years ago, when there weren't many resources around. So I talked with customers about what amazing things you can do with deep learning, and then they didn't have a single GPU they had access to. That's really like two worlds clashing against each other. So there were a lot of interesting and challenging problems around that, but as soon as they basically gave me a chance, and I could do some prototype and really show what you can do, then
it was easy to convince them. But to get to that point, especially as a young consultancy startup, that was quite difficult. So I definitely want to get into many things later on, but I'm also thinking about those people out there that are maybe inspired by your journey and wanting to get involved in Kaggle and other things, wondering if you can share a little bit about that. Because while you and Chris were talking about perceptions around deep learning that have shifted, during that time the tooling around deep learning has also shifted, the accessibility. Thinking about four or five years ago, if I was to train a deep learning model for a Kaggle competition, versus being able to do that now: how have you seen that shift over that time period, in terms of the ability for people to, I guess people use the word democratize, the ability for people to hop in and do something advanced like that very quickly? There are two aspects, I would say. One is software-wise and framework-wise; there has been a lot of progress there. When I started, it was still TensorFlow 0.x, which was working, but it was really low-level programming, so there was nothing like an RNN layer or a Transformer layer; you needed to code everything from scratch. But that also helps a lot for understanding things. I think nowadays people don't really understand the granular aspects of deep learning, because you just do something like model.
fit and you don't have any clue what's happening behind the curtain. So certainly it's easier nowadays to train a model, just through these higher-level frameworks. There's not only stuff like Keras; there's PyTorch Lightning, there are a lot of different frameworks you can use which are really high-level and accessible for beginners, and there's also a lot of training material for these frameworks, a lot of tutorials, so it's really easy to train a simple model for a simple task. But also in terms of resources, I think things are more beginner-friendly, because on Kaggle, for example, five years ago they didn't give you any resources. There was no Google Colab, so you basically had to have your own GPU at home, you needed to build your own desktop machine or something, or you spent your own money on cloud resources. But now, for beginners, you can get access to Colab, which gives you a free notebook to experiment; you get some free resources on Kaggle; there are a lot of student credits and student programs. So it's really easy to start your data science journey, I would say, and there's also a lot more material online where you can really teach yourself, I would say. [Music] Hello friends, this is Jared here to tell you about Changelog++. Over the years, many of our most die-hard listeners have asked us for ways they can support our work here at Changelog. We didn't have an answer for them for a long time, but finally we created Changelog++, a membership you can join to directly support our work. As a thank you, we save you some time with an ad-free feed, sprinkle in bonuses like extended episodes, and give you first access to the new stuff we dream up. Learn all about it at changelog.com/plusplus. You'll also find the link in your chapter data and show notes. Once again, that's changelog.com/plusplus. Check it out; we'd love to have you with [Music] us. So Christof, as you were leading in, talking about your entry into the world of deep
learning and your career shift to accommodate that, and you're talking about learning from Kaggle competitions and engaging in that, and then it was increasingly applicable in your professional life: can you talk a little bit about how that happens? When you're thinking about a Kaggle competition, and you're now working in a job in this field, how do the two relate? How are Kaggle competitions relevant to solving real business problems in a real job and getting that synergy? What is the connection between the two like? I would say there are a lot of synergy aspects. Doing a Kaggle competition is really very similar to doing a project at work which is about performing a first prototype. In a Kaggle competition you get a problem which you're not familiar with, often from a different domain: it can be from biology, it can be from astrophysics, it can be from chemistry, it can be Bengali language, sign language. There are so many different problems that you have no clue about when you start, and then you have like three months' time to find the best possible solution, and also to compete with other data scientists. So the prototype-project character is very similar: you have this three-month time window, and then you have a collaborative part. In Kaggle you can also form teams, so you can participate in competitions in a team, which is very similar to working in a team in your job, with all the ups and downs, I would say. It's working in a team under pressure, often; Kaggle competitions can create quite some pressure, more pressure than you might feel in your day-to-day job. So you also get used to working efficiently with others, in terms of coding, in terms of reading their code, in terms of structuring the project. So really all aspects of project management are also important, and also things like optimizing runtime and optimizing code structure. You wouldn't think that it's quite important, but
I think it's quite important also for Kaggle competitions, because recently they run the competitions on restricted hardware: you just submit your code, and they will run your code on their infrastructure, using their Kaggle notebooks. So you need to have your code in a way that it's kind of production-style. That's also what you would do in a project: you would develop ideas and so on and so forth, but at the end you want to productize your code, and you need to think about all these MLOps problems as well, and you also train those skills in Kaggle competitions. So there are really a lot of parallels between the two worlds. That said, I must say that two things are really different between Kaggle and a real-world project. The first thing is data acquisition. That's a very big topic in the real world, but a very minor topic in a Kaggle competition: you already have your training data. Of course, you sometimes can expand your training data by looking for more data online, but in general you already have a fixed training set you can work with, whereas in the outside world, or in the real world, that could be the main problem, just to acquire some data. The second thing is the definition of the metric. In Kaggle, people are evaluated based on some metric, and this metric is predefined before the competition starts, whereas in the real world that can be a discussion which takes ages between the data scientists and the business, and just creating a metric that is representative of the business problem can take a lot of time. You don't have these issues and discussions. Okay, I'm curious. As you were describing that, an idea came to mind. So, recognizing the limitation that you already have data provided, and recognizing the fact that the metric is well defined on a Kaggle team, and both of those are kind of optimal situations compared to the business world: from the perspective of an
organization out in the world, any organization that is keenly interested in data science and such, would forming Kaggle teams, or participating in Kaggle teams, be a good recruitment tool? Because if you can find people that are performing well on teams in that capacity, it doesn't check every box for what the business world is doing, but it kind of gives you a sense, maybe, that this might be someone who could fit in with us. We're going to throw the messiness of datasets and the messiness of metrics on top of that, but what do you think of that idea? Is that something that people might be thinking about in terms of trying to build data science teams for their organizations? Certainly. I think that would be a great idea if people did this, and some companies already use Kaggle as a hiring tool. In order to run a competition, those competitions are sponsored by someone, and there are sometimes companies who sponsor a competition but also tell the participants that they are hiring, and that if you finish in a top spot you can apply for a position there. So getting a position is kind of part of the winning prize, sometimes. So they already see that Kaggle is very good for finding good candidates. But as you said, you could also do this yourself: Kaggle nowadays even offers the concept of a community competition, where you host a competition by yourself, without any Kaggle interference, and you could run this as kind of an assessment center for filtering potential hires, or to see how they interact on a problem, or see how they work together. Also, normally Kaggle competitions run three months or so, but there are some formats, for example Kaggle Days, which is a conference type of thing: they host these conference-specific competitions which just go for one afternoon, and people get a simple dataset and have one afternoon to get a good solution. I could definitely see how this would benefit an
assessment center, for example, because they really see the whole range of skills people can bring to your company. I have to ask: of the competitions and the notebooks that you've contributed to Kaggle, and maybe the discussions too, what are some highlights for you? Of all the things that you've done, what are some highlights, the things maybe you're most proud of, or that you would like to highlight? What I'm most proud of are certainly the Google Landmark competitions. There's a competition which was hosted yearly, three times, by Google, about classifying popular landmarks. You have a dataset of 5 million images, so it's really large-scale, and in these 5 million images you have 80,000 classes, so 80,000 different landmarks, and you need to classify between those landmarks. The difficulty there, especially, is that for some landmarks you only have one or two images, which makes it quite complex to classify. Another complexity is that some landmarks look quite different from different angles. You can think of a museum, for example: people take a picture outside of the museum, people take pictures within the museum, and you would still classify it as the same landmark. So the competition is quite tricky, and I was able to win it three times, and two of those times without a team, just solo. That's something that's even harder in a Kaggle competition: not participating within a team but soloing brings a lot of additional, let's say, mental stress, because you don't have a team you can talk about your problem with; you're just isolated, working on a problem for three months, with high pressure and so on and so forth. So that brings another level of mental component to the game. So I was quite proud that I could win three competitions, and two of those without any
team. So I'd like to follow up on that. If you're talking to people out there that might be either already participating in Kaggle, but not at the level that you're at, or thinking about jumping in: what are some of the attributes, and I want you to take a moment and harp a little bit on yourself here, I'm asking you to, what are you bringing to the competition that you think has really given you an edge in getting to that Grandmaster level and being so competitive at that level? Do you have anything that you can offer people that are maybe a little bit intimidated by it, or trying to think, "How can I level up a little bit?" What would you say? I mean, I definitely have some analytical thinking, just from my study of mathematics, because the whole study is there to basically learn how to think efficiently, how to solve problems efficiently. So that definitely helps. And coming from the natural sciences in the broader sense, a sense of solid experimentation is also very important: really having a clean workbench, so to say, logging your experiments, following up on ideas, and so on. So really thinking like a researcher and natural scientist, and following your experiments in a clean and reproducible way, is also quite important. But I think what really pushed me to the top level is curiosity about different domains. Even top people tend to, let's say in quotation marks, lean back and do what they're good at, and not expand and learn further. But I would say one more edge I get is that I really try a lot of different ideas, in different areas. I try to explore very different competitions, very different domains, and every now and then I can leverage something that you would think has nothing to do with the other, but you still can leverage some ideas and apply some concepts. For example, you can transfer
knowledge from audio classification to biology or to astrophysics, or from NLP to computer vision, and vice versa. So there's a lot of synergy people wouldn't think about, and therefore it's quite helpful to explore domains as different as possible. You alluded to this a little bit in what you were saying: it used to be that with Kaggle competitions maybe you had to build your own machine with a GPU in it to operate in that space, and now there are good resources with GPUs. But I'm wondering, from your perspective, both as a competitor and a Grandmaster, but also as a really senior data scientist at NVIDIA: how do you view GPU acceleration as important and playing a role in Kaggle competitions? Probably most people think about it in terms of training a model, but how do you think about it more holistically, in terms of the accelerated process that's key to performing well in competitions? So certainly GPU-based programming, or computation, is the bread and butter of training any model nowadays, but NVIDIA especially is looking more and more into moving other parts of your data science pipeline onto the GPU, just to make it faster. And especially for Kaggle competitions, the speed at which you can run your stuff and try ideas is very important. When a lot of people at the top level compete against each other, one of the edges you get is when you can do more experiments than the others. You're bound, of course, by your ideas, but most of the time I'm not running out of ideas; I'm running out of time as the competition ends. So as long as I can run more experiments than other people can, because I have a more efficient pipeline, or I can run more parts of my pipeline efficiently using GPUs, that gives me an edge. And some examples of this are data pre-processing. Well, let's start even one step earlier. The first step is just data
loading: just loading your data frame, before doing anything, can be GPU-accelerated, and that's just 100x faster. So every time you're working on the problem, you get a 100x speed-up just in the step of loading your data, and that's what RAPIDS, for example, is all about. RAPIDS is an NVIDIA tool stack which is all about accelerating those parts which are not training the model, but are what is normally handled with pandas, for example. They have a part which is called cuDF, which is basically pandas on GPU; they have something which is cuML, which is basically scikit-learn on GPU, so things like clustering, all this stuff, you can do on GPU nowadays. Another example is NVIDIA DALI; that's a tool especially for image processing, but it also supports audio and video. An example there would be the decoding of JPEGs. People wouldn't think about that, but something like having a JPEG on your disk and just loading the JPEG involves a decoding step, which basically decodes the JPEG format, and this can already be done using GPUs and can be accelerated by GPUs, and it also gives you a significant speed-up during your training, during anything which uses the images. So there are a lot of different steps in your pipeline that you can accelerate, and that's what accelerated data science is all about: NVIDIA tries to move the complete pipeline, from loading the data to saving conclusions and results, all end to end onto GPUs. Yeah, that's really interesting, and I'm guessing that some of the things that you're talking about, like loading images, or loading data frames, or manipulating data frames, maybe doing certain operations, doing clustering: I don't know that this is the case, but I would guess those things pretty consistently show up across competitions too, or in the real world you could think about them as showing up across many different business problems. So like you were talking about
your pipeline of processing, which I think is really interesting. I'm wondering if you can dig into that a little bit, not a specific pipeline, but how you think about solving a problem. Because most people might come to a Kaggle competition, or a real-world problem, and say, okay, here's my data, my main step is this sort of training of the model, and maybe evaluation: how good is my model, retrain it, how good is it, retrain it. How do you think about the data pipeline around that, like you were talking about running experiments? What does that data pipeline look like in your mind, and what are some of those reusable components, or things you find yourself doing over and over again, that you've found accelerated ways to do using GPU tooling like RAPIDS or DALI? It really depends from project to project, I would say, where it's applicable or not. I would say that RAPIDS, for example, is even more applicable to the real world, because there you might have way larger data frames. If you're a bigger company, you have user data, or you have client data, or whatever, because the Kaggle competitions often are packed into little problems that people can work on, and not these company-size, large-scale datasets with millions or thousands of users, and things like RAPIDS especially shine on these large-scale datasets. For me, my pipeline is, I would say, modular, and that developed through the years, coming from the competitions. Of course, I try to reuse as much as possible, just to be efficient, so I have a really modular setup, where I have one part which is just the model training, one part which is about the storage of my data, one part which is about logging the experiments and checking results and visualizing results, and one part which is about the framework setup, so to say. I use Docker with a specific PyTorch image to
have always the same environment, so I can replicate my experiments and use the exact same environment on different machines, in the cloud or locally. That's all stuff I learned over the years. It's a little bit complicated to explain the whole pipeline now on the podcast; I actually gave a one-hour presentation two weeks ago just about this topic, so it's pretty difficult to condense into a few sentences. It's hard without a diagram, for sure, but it's super interesting to me. The things you're talking about that you've made modular are, I think, things people operating in a real-world data science environment eventually need to make into components that work within their team. My team, for example, loves using Streamlit to do data manipulation, visualization, and interactive stuff on the other end, and we reuse a lot of those components. We have certain multilingual models that we train over and over, so we've got modules around that, and then pre-processing and other things. So it's interesting how much what you're talking about overlaps with the efficiencies a data science team gains over time as they operate together and learn how to make their own processes more efficient. I think that's really interesting. I have played around with RAPIDS a few times, and it is really cool. I'm just looking at the latest stats on the RAPIDS website, and it's talking about performance on a 300-million-row by two-column data frame, with the highest speedup being for group-by operations, like 80 times faster than not using RAPIDS. I don't know exactly how much time that saves you, but like you're talking about, if you're doing experiments over and over and you want to rapidly do experiments.
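As an aside, the RAPIDS point made here is that cuDF keeps the pandas API, so existing group-by code moves to the GPU largely unchanged. A minimal sketch, using toy data and plain pandas (with RAPIDS installed, swapping the import for `import cudf as pd` is the idea; the speedup figures above apply at the 300-million-row scale, not this toy size):

```python
# Sketch of the group-by aggregation discussed above.
# With RAPIDS, the same code runs on GPU via `import cudf as pd`,
# since cuDF mirrors the pandas API. Data here is made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "key": rng.integers(0, 1_000, size=1_000_000),  # grouping column
    "value": rng.random(1_000_000),                 # column to aggregate
})

# The group-by operation RAPIDS reports the largest speedups on.
agg = df.groupby("key")["value"].mean()
print(len(agg))  # one mean per distinct key: 1000
```

The point of the drop-in API is that the experiment loop itself doesn't change; only the import (and where the data lives) does.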
Even if that saves you something smallish, let's say a couple of minutes, you're able to do things much faster, your automation goes faster, you can learn things much faster and reduce that cycle time. Although I'm also assuming that for many people, on their data, it might be more than a minutes-long speedup on some of those operations. So, I don't know, when you're helping people, and you mentioned the discussion groups and the notebooks you've worked on on Kaggle, is this something where you've seen light bulbs come on for people? Like someone saying, oh, I'm trying this group-by operation on this data and it's taking me 15 minutes every time I run through it. Is that something you've been able to bring into those discussions and notebooks on Kaggle? Yeah, certainly. Loading data frames is a good example. 80 times doesn't sound like that much, I think, but it's like one minute versus two hours; that's the scale you're talking about: loading your data frame in two hours or loading it in one minute, that's an 80x difference. And especially on Kaggle, those discussions get a lot of traction, because for your inference you actually have a time limit of about nine hours, so people try to get as much as possible into their submissions. Loading data frames, manipulating data frames, loading images, all that stuff: if you can speed it up, people will very gratefully adopt whatever you give them to speed up their work. And that's only the inference side; it's even more true for training, because as you said, my day-to-day is doing a lot of experiments, and those speedups accumulate. The very first thing I ever do in a competition, the first two weeks or so, is just optimize my workflow: I optimize all the runtime, optimize how I
load my things, accelerate all the pre-processing and post-processing, whatever I have in my pipeline, so I can then leverage the remaining time from the most perfect setup, the most perfect code, because then I can just run more and more experiments. So I'm curious, because as you've been talking about optimizing and being able to do all of these iterations on your experiments, there are people out there, including myself, thinking about whether they want to jump into a Kaggle competition. They're psyched up because they've been listening to how you've mastered this process; or they're working for a company and trying to get their own systems better and better, and early teams really struggle with that. Either way, with you talking about what you've done, and Daniel jumping in and talking about his team's work, there are people who want to be there with you; they want to at least get on that path. Do you have some concrete recommendations for somebody at the beginning, who's saying, okay, I'm doing data science, but my God, it's taking me a long time to get through each iteration, and I'm listening to this grandmaster just cranking out productivity so fast? What are a couple of specific things you would say to go do, recognizing that they'll find their own path forward and make their own adjustments, but how do they get on that path to begin with? The first thing, and I've said this several times, is just to start your very first Kaggle competition. You go to kaggle.com, you look through the ongoing competitions, of which there are maybe 15 to 20, and you pick any topic you find interesting. You don't need to be an expert in the topic, you don't even need to know the domain, but just starting is the first step, and as soon as you start, just by the sheer amount of knowledge which is
shared within the forums and the notebooks, you will see that you learn very efficiently how to improve your code and your skill set, and you get immediate feedback on the leaderboard, for example, or in discussions. If you make a comment and it doesn't make sense, people will tell you it doesn't make sense; people will also tell you thank you. The leaderboard is a very objective way of seeing your performance and your progression. So that's the very first piece of advice I would give someone: try to find an interesting competition and just start there. There's basically nothing to lose; you can only gain knowledge. As I said, you will perform poorly in your very first competition no matter where you come from, but just starting is the first step. And as you start, I think the best advice is to start as simple as possible and try to progress from there. You start with a very simple model, with a subset of the data, or with images downsampled to a low resolution, just to find an efficient pipeline and to work on your code, because all of this is an investment for the future, and it gives you an easier setup to work on and improve. Yeah, really good advice. The part you talked about, spending a couple of weeks optimizing the inputs, outputs and those portions of your pipeline so that you can put a lot of your focus on fast iterations on the model, that middle bit, I think that's really good advice. This has been a really fascinating conversation. I have a long way to go to be a grandmaster, that's for sure, but as we wrap up this discussion about accelerated data science and the Kaggle competitions, what are you excited about, looking to the future? You mentioned that you're curious about all of these different domains; you've worked
on a lot of different problems. What really excites you right now as you look towards the future, in terms of things you want to try, or in general, in terms of the tooling or the community around what you're involved with? I would say in the short term I'm definitely excited about, or interested in, how AI will support my work: something like GitHub Copilot or other language models which help me code. I haven't tried them much, but I think that in the near future those tools will support our everyday life in some way. But I'm even more excited about the long-term prospect, what will happen in 10 or 20 years, and that's really exciting, because if you think back 10 or 20 years in terms of AI and what systems could do, and where we are right now, and you extrapolate that into the future, what happens then will be very exciting and amazing. Yeah, I think that's a great way to wrap things up. Thank you so much for joining us, Christof. I'm really looking forward to following your progression and the things you work on in the future, and the great things that continue to come out of NVIDIA. Thank you for your work, and thank you for taking the time to join us. Thank you for having me. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music]
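The modular experiment setup described in this episode, separate components for data loading, training, and experiment logging so each piece can be reused across competitions, can be sketched roughly as follows. All names and the toy scoring function are hypothetical; a real setup would plug in an actual tracker (e.g. MLflow) and real training code:

```python
# Minimal sketch of a modular experiment pipeline: data loading,
# training, and logging are separate, swappable components.
from dataclasses import dataclass, field

@dataclass
class ExperimentLog:
    """Stands in for an experiment tracker such as MLflow."""
    runs: list = field(default_factory=list)

    def log(self, name: str, params: dict, score: float) -> None:
        self.runs.append({"name": name, "params": params, "score": score})

    def best(self) -> dict:
        return max(self.runs, key=lambda r: r["score"])

def load_data() -> list:
    # Data-loading component: the accelerated cuDF/DALI steps would live here.
    return [0.1, 0.4, 0.35, 0.8]

def train(data: list, lr: float) -> float:
    # Training component: returns a toy "validation score" for the sketch.
    return sum(data) / len(data) * (1.0 - abs(lr - 0.1))

log = ExperimentLog()
data = load_data()
for lr in (0.01, 0.1, 0.3):   # rapid iteration over experiments
    log.log("baseline", {"lr": lr}, train(data, lr))

print(log.best()["params"])   # {'lr': 0.1}
```

The design point is the one made above: once loading and logging are fast, reusable components, the remaining time goes entirely into running more experiments.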
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Explainable AI that is accessible for all humans | We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and is explainability and accountability something that can be achieved in chat-based assistants?
Beth Rudden of Bast.ai (https://bast.ai) has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.
Leave us a comment (https://changelog.com/practicalai/216/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Fastly (https://fastly.com/?utm_source=changelog) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Typesense (https://cloud.typesense.org/?utm_source=changelog) – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
• Beth Rudden – Twitter (https://twitter.com/ibethrudden) , LinkedIn (https://www.linkedin.com/in/brudden)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Bast.ai (https://www.bast.ai/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-216.md) | 5 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well. How are you today, Daniel? Oh man, so much better than last week. As you know, I was sick last week when we were supposed to record, so sorry for skipping the week, but I'm happy to be back, and with a super relevant topic around AI systems and explainability, the delivery of AI systems that are explainable. We have with us today Beth Rudden, who is CEO at Bast.ai. Welcome, Beth. Thank you for having me. Yeah, we were just talking before the show about this craziness we're experiencing around the hype of these AI systems, which maybe are just a web page that connects to OpenAI, but you were telling us how you've been thinking for quite a while about explainability and accountability of AI systems. I'm wondering if you could start out by giving us a little bit of the journey you took to landing in that space. How did you get interested in those topics specifically? I think a really good place to start is understanding 2012, when the Harvard Business Review said that the data scientist would be the sexiest job of the 21st century, and yes, that is why some of us on the call are data
scientists. So I was working at IBM, and I was a pleaser, so I went after the information architect certification. But I had a couple of friends who were mathematicians and statisticians, really data engineers, DevOps people and software engineers, and they didn't really want to be an information architect or an IT architect. So we rinsed and repeated the experiential certification to be able to say: how can we make sure we're making data scientists who actually know how to use data in the scientific method to solve business problems? We started that, and it took us about six or seven years. It's accredited through The Open Group; anybody can get it, and it's hard, because, depending on your level, one, two or three, you have to submit several projects that say: here's how I took my problem and put it into a hypothesis that could be tested, here's how I negotiated with my business stakeholder, here's how I showed my results, and, as you get further along, here's how I integrated my model into production. I think that's when things get a little crazy, and people are like, wait, what, how do I do that? You know, here's my Jupyter notebook, isn't that great? I've been looking at how to deliver AI, but I've been doing a lot more on the linguistics or semantic side, for probably 15 to 20 years. If you look at how the NLP work really goes, a lot of people are like, oh hey, I pulled down this thing from spaCy, I can write NLP, right? I just call this cosine-similarity kind of model and I'm good to go. I was an archaeologist in the late '90s; that's what I actually got my degree in. I did Greek and Latin and spent some time in Italy, and when you're learning languages
you're learning declensions, etymology, stemming and lemmatization and tokenization, all of the text pre-processing. So I was always the squishy human data scientist; I was the one studying languages and doing semantics. I think it was 2015 when Andrew Ng said, oh hey, if we use GPUs, graphics processing units, we can process all this structured data really fast against these statistical models. So a lot of people, I think, forgot about entity extraction and ontologies and the semantic web. I use OWL, the Web Ontology Language, for formal knowledge graphs, and I hopefully am not speaking Greek to a lot of people, but I really look at the language side as opposed to the machine learning side, and that understanding of semantics has put me in a great position now, relative to all the statistical models. NLP I chunk into three things: NLU, natural language understanding; NLC, natural language classification, which is your prediction; and NLG, natural language generation. Prior to having access to GPT, we were generating language old school, and it's super hard. Really, really hard: think of pronouns, trying to say, you know, John was the guy who referred to him, and then he took the notch or the wrench from Joe, etc. It's so complicated. But now that we have a really good natural language generator, I'm kicking butt, because I have the semantic side. I have really high NLU, natural language understanding, because of how an ontology works, and I try to remind people of this: an ontology is the study of the nature of your reality based on the language that you use. So when you use language, I'm understanding your reality, and I map that into a knowledge graph, and then take that language to understand whatever you're going
to say against it. So I have a map. When the natural language generator generates all the variation I need to put into my machine learning models, I have a map to make sure I understand what kinds of things you actually want to talk about. I can create conversational AI that has lineage and provenance, that has the source of where you're starting from, and then I use the GPT generators to effectively generate the fluff, the syntax, around whatever entities I extract about whatever you want to talk about. If you guys could help me break that down into better English, that's what I've been trying to do for the last couple of months. That's awesome; I would love to. So one reference point in my ontology of this space, and we even talked about this in our last episode with Bryan McCann from You.com, is grounding. Some people are thinking, okay, I can generate text or a response to a user query by looking at some external knowledge and grounding my response in that context. One way you could do that is to say: I'm going to pull an article and insert a paragraph from that article into a prompt, a natural language prompt to a language model, and that's how I'm inserting knowledge into my response. Here you have this concept of a knowledge graph and entities and an ontology. Would you consider that grounding, or is it slightly different, in terms of what you're talking about with this ontology and bringing that external knowledge into the generation? You know, the nice thing about software is that it's multi-directional. So, about how the conversational AI, the product, works, just a note here: we named our company, or I named my company, Bast.ai, after the Egyptian cat goddess, and we build conversational AI technology, so we build cats. Ah, nice. And we really wanted
to distinguish them from bots, and having the word cat makes a lot of sense, at least in my mind, because I've been overriding the bot. The idea is that the conversational AI technology includes a data pipeline. Bring your own data: let me take a book that you have written, and the cat, through the data pipeline, ingests that book and puts the entities into the ontology, and that can be done both supervised and unsupervised. I'm a big proponent, and I hope we can get into this a little bit, of AI being there to augment human beings; it's really to help us understand. I'm always asking, why can't we carry a pocket brain like we carry a pocket GPS? So the cat ingests the book, those entities go into the knowledge graph, and those entities sit in a concept hierarchy and carry the fact that they came from this paragraph on this page in the book. So it carries the actual provenance of where that entity was, and the understanding of that entity within the concept hierarchy. Then, when you are interacting with the cat through the conversational interface, the cat will be able to respond using those entities, and it will show you where it got its response from. It's predicting the response based on your question, because I have that high degree of NLU. I take your question, do text pre-processing, and match it against the entities in that ontology; or, if it's not in that ontology, we have a series of orchestration steps to send it to a couple of different places that we had to create. The way we handle toxicity is something I'd love to talk about too, just because I think the way we're handling it is very elegant and fun. And the idea is that we wanted fully explainable AI: we wanted to show people how they could ingest a paragraph and then be able to communicate with the AI to understand how that paragraph is
being understood in relation to what the person is asking for. Yeah, and maybe just to give an example of this. I love the way you frame it in terms of reaching out to an ontology that's hierarchical, where you can ground citations as well. So, for example, let's say I had a book, maybe a novel I'm reading right now. I don't know if you ever read it: The Cuckoo's Egg by Cliff Stoll. It's about a hacker at the Lawrence Berkeley labs way back in the day; it's really interesting. Anyway, let's say I have that book and I put it through this data pipeline, so I've got my ontology, and I've forgotten: did Cliff reach out to the NSA, or was it the CIA? And I ask the question: did Cliff reach out to the CIA or the NSA? What would happen next? How would the processing of that query differ, in that example, compared to other ways of handling this? In that case we would have the direct path, where you could say: okay, Cliff is a character in the book, and it was the CIA, not the NSA; I know that for a fact in the ontology. So I could do it two ways. I could answer you directly, and, I don't know if you'd call it cheating a little bit, but we have a corpus, so anything that's really easily answerable we stick into OpenSearch, which is just a form of Elasticsearch; then we just pull that and give you the answer, knowing with 100% certainty. And all of these scores: we have about 50-ish different models, depending on how you count them, run through our orchestration system, so each of those models has scores, you have targets and configurability, you can expose different hyperparameters, etc. Or we could take the Cliff entity and the CIA and the NSA and we
could do prompt engineering. If you wanted to say, give me Cliff's reply and tell me, was it the CIA or the NSA, then you could ask it to generate a response for you. One of the things we're really playing around with is that conversations should be interactive, so we want the cat to also engage the person. We could say: oh, Cliff was part of the CIA story, and you wanted me to generate a response that Cliff would give; here you go. And what else would you like me to do? Would you like me to generate some books that Cliff would have written, in the style of Cliff? So you can really start to do that engagement too. I use the words lineage and provenance, but it's really attribution, and when you start attributing things to the right source system, everything changes. Any time I show some of the cats to people, one of their first responses is: let me put my own data in it. And that's exactly what I want to instigate with humans: not having the black-box algorithm do the answering, just have the black-box algorithm do the generating. I know a lot of people are super excited about using these models; I would really caution about creating with them, because generative AI is just going to generate based on syntax, not based on understanding. And I think that's the biggest thing I want people to hear: there's no sentience, there's no sapience, there's no consciousness, and I think all of that is a distraction from the amount of compute these models really take. So I'm asking, can we make it a utility that everybody uses, sort of like a dictionary or a thesaurus, and then we're good to go? I really do think that when you're using the generative transformer to generate transformations, that's the big difference I'm trying to get people to see. With the ontology
as the map, and actually GPT gave me this analogy; it's very good at analogies and metaphors because of how it's built, the clustering and everything that happens behind the scenes. When you're using an ontology with our conversational AI, with our cats, and you own a toy store: if you ask that cat about any toy in your toy store, the ontology will tell you about any toy in your toy store. If you ask the cat about a toy that's not in your toy store, it will tell you it's not in your toy store. If you do the same thing with ChatGPT, it's going to tell you about a toy that doesn't exist. [Music] So Beth, that was a fantastic explanation, and I'm learning a lot. Daniel is quite the expert in this area himself, but I'm not. As we were coming into the break, you made a point that has been weighing on me for the last moment or two as you were finishing up: as a user interacting with this remarkable capability that's really taken all of our attention this year, it's really important not to infer intelligence, not to infer a consciousness. I would argue that for the typical user out there, someone who's not in the AI space and doesn't have an understanding of these models, that's a really hard ask. This year, with GPT-everything (ChatGPT, and GPT-4 out this last week as we're recording), I'm talking to a lot of people, and I think they're really struggling with that. You gave that direction, but I think it's easy for us and a tall order for people not in the space, and a lot of our audience are people coming into this and learning about it. Can you provide some guidance on how you keep that separate, what it means, and how you should use this new
capability as people are now engaging with it? Because it's changing the way we're all operating day to day; even non-technical people who have never really done any AI are now going to ChatGPT, and we're really at a moment where people need to understand how to appropriately engage this brand-new technology. I think it's a combination of things, but the tl;dr is: go out and use it, and use it as much as possible. Ask it about yourself; ask it about things that you know. The way to really understand how something works, and remember, we're in the realm of cognitive science, which binds philosophy, psychology and computer science, is to understand that it is a generative transformer: it generates transformations. It does not have the understanding required for consciousness or sentience; it doesn't understand what you're saying. It's a stochastic parrot: it mimics language, it mimics what you're saying based on the syntactical rules of that language, and it's incredibly good because it's been fed a huge amount of data. So if I'm having a conversation at work with somebody who's not a technologist, say they're in a marketing department or something like that, how does that change how they should be thinking about it? Because I would argue that's a tall order; you can say that to someone and they'll say, right, yeah. Ask it about a product you are not selling, that you're not marketing. Ask it about something you know, say: I would like you to market a blue tomato; we have blue tomatoes; blue tomatoes grow on trees; could you market that for me? And it will give you marketing for blue tomatoes that grow on trees. So I really want people to come from a space of abundance, not scarcity. I really want people to think about what they have
right now, and what every human being has is their own experience; what this AI has been trained on is a very small number of people's experience, people who have been on the internet and writing on the internet. My opinion is that every single human being is already impacted by AI, and they should be using it. I used it to help my daughter come up with an analogy for reciprocals; I asked it to come up with a good metaphor to explain what an ontology does. There are so many different things you can use it for, and I really think the best thing people can do is go out and use it, and ask it about things they know. Many authors are saying, oh, so I wrote ten books, not four, ha. People are starting to see that it's going to generate the next proper noun, the next predicate, the next syntactically correct word in a sentence, and it seems so smart because it has been fed so much data. But here's a myth. I talked a lot, technically, about using ontologies and knowledge graphs and concept hierarchies and all these things, but here you go: all of what I just talked about can run on something like the very first iPhone. The myth is that you need big data and big honking machines to create AI, and I would ask: who does that myth serve? If everyone could understand how to use the data they have, data that is special to them, that they understand; if you take your grandfather's journals and put them into an AI so that you can have a conversation with your grandfather: this is what the technology is enabling us to do, and we want it distributed to every human on Earth, because every human on Earth has been impacted by AI. Yeah, and I don't know if you can talk about this at all. I love how you brought out this element of people being able to bring their own
data to the table, kind of combined with the fact of them being able to run this maybe even on their own hardware. How shocking would that be? And I love that also because I think there's a real concern I've been having over time of just how Western- and English-focused most of this conversational AI is. The fact is that we come to the NLP table with these biases that say, oh, wouldn't it be great if every language community in the world had a translated version of Wikipedia? That concept makes sense to us, but the reality is some language communities don't want that. It's explicitly not how they use their language. They want to use their language in a community setting, maybe for storytelling or whatever it is, and they would rather bring a different kind of data to the table. So I think that also helps in this regard. I don't know if you've also seen, in the ontology space or the knowledge graph space, how do you think about bias and the availability of data? Because that's a big topic with these large language models, right? If you're just using them for generation, they come loaded with what they're trained with, right? You know, I was stunned at how quickly the models are able to statistically generate the language. We used to make fun of natural language processing statistically generating language; it's such an oxymoron in so many ways. There's actually a really great article by Karen Hao about the Māori people and what they are doing with artificial intelligence, and I'd love to link that, just because it was from MIT and it was a fantastic review, and it really speaks to what you were saying. As far as bias, again, I'm going to go back to cognitive science: philosophy, how do you know what you know; psychology, how do you make sure that you are not
using your powers to manipulate humans (seriously, just put some ethics there); and then computer science. So when you're talking about bias, there's the Cognitive Bias Codex, and it's like 188 cognitive biases and counting. One of the best ways that I did this, when I was still at IBM, is I started the trustworthy AI center of excellence, and many of my peers are still there; they're so strong, and what they're doing is amazing. But what I wanted to do with bias is, we did some modeling on the Titanic, and we did some predictions on whether somebody was going to live or die based on their class, in order to show the social bias of the time, because the person in steerage would never have gotten a lifeboat. I used that explicitly to talk about bias. And what you said about the very small, you didn't use the word homogeneous, but that Western kind of culture that we are sort of codifying into this AI: we have got to have a wider variance, we have got to have more diversity, and that's why we really need to be able to give everyone the ability to build their own without having to build their own generative model. Could you talk a little bit about how to do that? Because this is a topic we've talked about in different ways over a number of episodes. It's very hard to get it out there. It is definitely not an equal world. That's right. Access, yeah. Can you talk a little bit about access and how you create that and how that becomes possible? Well, shameless plug: I'm looking for funding and investment. But I think that the ability to use the combination of knowledge graphs and semantics, and being able to access these generative models... one of the things that I did with the orchestration, and the reason that I use the corpus-driven approach and dump a bunch of stuff into OpenSearch, is to make it small enough and accessible by as many people as possible. So just use the access to the generation to
generate all the variants that you need, but eventually you're done; you don't need any more, and you have that stored in a corpus, so you can access it as much as you want. So it's really about how we use generative AI more as a utility that everybody uses, instead of creating what I call cheese graters, because that's how I think of the generative models: they cheese-grate the text and then they sort of glue it back together, or stitch it back together with duct tape, or whatever. But it's codifying so much of our Western notion of ideas. If you go to Aboriginal societies, their construct of time is entirely different. If you're facing the west or the north or the east, their concept of time is different, and that's expressed in their language. So to think that we have created a generative model that can encompass all of our world is not correct. We can do so much more if we have a wide variance. Have you guys heard of the diversity prediction theorem, or the wisdom of the crowd? Yeah. To me it's the secret of the universe: the wider your variance, the more standard your mean. The closer to truth that we want to get, the more diverse human neocortices we need to get there. So, to really answer your question, Chris, I'm a big proponent of making the generative model something that is accessible, and OpenAI's done a really great job of making it accessible to a wide range of people. But I was talking to my parents today, and they were like, "We don't even know how to access that, but I think I went to Google and it might have done something, because it gave me a weird response, so I shut it down, and then I tried to go to the other thing." I was like, oh, good. So we need to be better about really making sure that people understand that they are accessing just a generative model; that's it. I think that's one of the challenges I see is
we're here in this AI community, and we're a tiny little slice of that in this episode as we talk, but at the same time I participate in other communities that are not technical at all, and the other participants in those communities are not technical. And I think that's the challenge: trying to do exactly what you said with people who otherwise not just don't have access, but don't even know it exists, in a lot of cases. Yeah. I used to say, and a friend of mine reminded me of this a long time ago, there's no hand-waving in math. So if somebody is not explaining how they got to the prediction or how the model works, or is saying it's proprietary, or is shoving a bunch of data into a neural net to have it guess the future, you know, engineering, they're probably hitting the easy button. They're not doing the work. And I come by that honestly, because I think people need to understand there's no hand-waving in math. We need to stop thinking that just because it's AI, or just because it's statistics... You've heard the Mark Twain quote. Is it a statistics quote? I probably should have... "Lies, damned lies, and statistics." Yeah, that's right. Yes. So it's just statistics. 2013 is 10 years ago, goodness. Jennifer Golbeck, social scientist, gets on the TED stage and tells the world that she can statistically predict whether you have done drugs or not based on five of your likes on Facebook, and I was like, hallelujah, everybody understands, everybody sees it, right? Everybody understands that we can now predict these things statistically if we have enough data. And no, I still don't think that people get that, and we need to teach it. I taught my kids probability through poker. We can teach this to people, so that they understand that it's only statistically accurate to a certain probability. So if it's 97% accurate, what does 3% look like? What's your test reliability for that 3%? Is that 3% going to give you that
same answer every single time and, if not it's not science I love so much, about this conversation and one of the, things that I was thinking about in my, own context is like my own tendency to, not give users enough credit so like one, of the things that happens when like we, anthropomorphize Ai and like talk about, it in these different ways is you know, there's a tendency to like maybe think, it's always right or it has more like, you're talking about more intelligence, than um it really does have but I've, also found where it's whether it's like, family members in my own life who aren't, involved in the AI world and they're, using chat GPT a lot of times they, interact with it more responsibly I feel, than some colleagues in the AI world in, the sense that like my like my, brother-in-law Jack I don't know if, you're listening hey uh if out there, like he um we were talking last night, over tacos and he had like used chat GPT, to write up some like speech or, something that he was giving at work and, I'm like oh so you like wrote that with, chat GPT and he's like yeah like I used, it but I what I do is I don't like just, have it generate it for me I'll just, like type as fast as I can and just have, it rephrase it into something good, that's grammatically correct and I'm, like wow that's like yeah go for it, that's really good like that's awesome, because like that's a great way or like, I'm thinking of teams that we work with, in in India in my day job I was talking, to someone and like some people would, say like oh we can't like just output, machine translations because like they, won't post edit them and like make them, good or like look for Corrections and in, fact like you know translation teams we, working with in India they know it's a, machine translation they're just happy, they don't have to type as much because, typing is really difficult in in their, language right like they're fine to post, edit it so yeah um I'm wondering if you, see, this as well 
and like if you have any, recommendations specifically after, working with users in conversational, interfaces which can seem kind of like, humanlike like it's like you're having a, conversation how do you set up an, interface how do you set up a system, such that it like produces useful, behavior and like promotes the right, type of usage you know I started playing, with very earlier versions of GPT and so, we strung them up in Slack, and we did that on purpose because we, didn't want to deal with identity access, management and all the other stuff and, Slack's a great interface for plugging, things in but it it's also really, something that you know I I was the, Anthropologist and when we installed, slack in the largest Enterprise in IBM I, watched you know the people going oh my, gosh what's the protocol and really when, you deal with like really senior, Executives they're like wait this is, persistent this is kept forever what do, you know what are we doing do I respond, to this, gift that's right that's right and so, and so like you know I had I had a lot, of what you would call training in, trying to get people to understand this, new modality of, communication so we were playing around, with Bots and we wanted the Bots to talk, to each other and so we use the cats now, to test out what we're doing and you, know we talked a little bit earlier, about like setting up an ey frame or a, web page that just like you know strung, up to open Ai and you can ask it any, question you want but if that AI doesn't, give the answer that you like and it, causes your customer to not trust you, that's a big deal so you really want it, to be, tested and so we use the we use the cats, to test the cats or the cats to make, kittens or the cats to test the kittens, or the cats to test the intents or you, know and and this is the the joy of, having some of this automated and when, we were you know back in the day when, you used to do like conversational Ai, and that you you would do 
like, you know, either Dialogflow or corpus-driven, it was always the IT group that had to do it: give me a hundred variations of how somebody would say this, give me a thousand variations of how somebody would say that, give me 16 synonyms for this, give me 17 synonyms for that. So we're using it all the time, because again, it's great at generating variations. One of the ways that I used it was with teachers and students. I get to work with this amazing university, Maryville University, that is truly transforming education, and they're doing such amazing things with my friend Phil Komarny, who used to be SVP of Innovation at Salesforce. We're doing great things there, and I got to do a fantastic workshop with all of the teachers in the faculty. They came to my session really kind of skeptical, and they left my session going, "Oh my gosh, I now understand how to use this." What I did is I had ChatGPT list out 50 things that a teacher does all day, and then I had ChatGPT list out 50 things that a student does all day, and then I had the teachers and the students, 138 people, on a Miro board working together for 15 minutes, and I'm like, what are you going to eliminate, what are you going to raise up, what are you going to reduce, and what are you going to create? So give people an understanding of what the technology does, and then the messy middle between the skills that you have and the title or the role that you have. It's, what do you do all day? Like, what do people do all day? I'm like Richard Scarry, 1968. But when you're doing that, you're like, wait a second... Oh gosh, when I was a mom of younger kids (I guess I'm still a mom, even though one of them is 18; anyway, I digress), I would sit there and I'm like, oh my gosh, my head is hurting, what do I do for dinner? ChatGPT, give me a recipe with chicken and broccoli. It can be so useful for so many people to just
generate what the idea needs especially, when you're tired or you're exhausted or, I definitely wouldn't want to fire your, entire marketing team but you know you, you because you want to keep it on like, on Q but you really can use it to really, augment your business and augment what, you're doing just keep it in the realm, of um fiction and creativity and you, know those kind of things and we haven't, even talked about um you know some of, the some some of the art and the the, creative expression and then I'm a huge, fan of like you know especially for, people who don't code I'm like go ahead, program in it like get it to render some, code for like a graphis so that you can, see like a visualization and codes great, cuz it kind of works or, not true we've talked a lot about this, sort of idea of knowledge graphs and, ontologies as a reference that's domain, specific and known in combination with, generative models do you think there's a, parallel in the sort of image Vision, Audio space where like I hope so I I, imagine you know groups are like hey I, need to generate a new design for a web, page or I need to fill this empty room, with furniture and you know I could, generate a couch but it'd be really nice, if I knew that this couch existed and I, could order it online and it's an actual, couch that exists um because otherwise I, could sell this design to my client and, now like I can't Source the couch for my, room um what are your thoughts on that, in terms of maybe extending some of your, ideas about combining knowledge graphs, with models to more modalities than just, text we're in the realm of like most, human beings can't see the difference, between 4K and 8K but you ask any, artists to kind of look at and I I've, done a ton of work with my you know just, playing around too it's like it's kind, of off and you don't know why so and but, it'll it will get better and better and, better so I think that what I hope will, become more valuable is attribution and, 
AI that can give actual attribution, and so you could do that with anything visual, you could do that with video, you can do that with any sort of... as you were doing it with kind of a design, or shopping. I was once having a conversation, and again this is the geek in me, but an AI can produce its own architecture and its own architectural diagrams or its own ERDs, and that's where I think we should be going: for the AI to explain how it works in and of itself. And that's where we're going to potentially get to, you know, cogito ergo sum. But I think it would be really cool if AI can start to think in terms of that three-dimensionality, and I think that if you can get the AI to design how it's functioning and how it's working within itself, that's going to be far more valuable and, again, far more trusted. That's something that you're going to want to actually build a relationship with. The big problem that I see with the generative Transformers that are pre-trained right now is that they're pre-trained on data that was harvested without people's consent, which means that that data was potentially put there... you know, I thought it was my job to lie to all of the search engines for at least the last 10 years. So how good is that data? And my hypothesis, and I've played it out many times, is that when people interact with my cats, where it's AI that they trust and AI where they know where the data comes from, it's an entirely different experience than, back to your initial question, Chris, when you're interacting with something where you don't know how it works and you don't know where the data comes from. You're being told that it's fed data, and you're like, what? It's such a different experience when you can interact with something you trust. Yeah, it does change your perception, coming into an interface
knowing, oh, I'm searching against my company's knowledge base, or I'm searching against a PDF of this book, or whatever it is. As we kind of draw to a close here, you've talked a lot about things that you're actively working on, and that's all really exciting. As you look to the future, what are you most excited about in terms of what's going to be possible in the coming year, that we aren't quite there yet, but that is really on your mind? It could be something that you're actively working on, or just something in the industry as a whole. As a positive kind of close-out, what are you most excited about in terms of trends that are happening or things that you're working on or thinking about? I have a couple. I do think that with the technology that I have and with an ontology, I can take a paragraph of your text and understand something about your existing mental model. So if I can relate new information to your existing mental model (like I found out you live in Indiana; if I say something about Indiana, that makes you not only trust me, it makes me give you new information in a way that reduces your period of disequilibrium), we can make people learn faster, and that's truly exciting to me. And I think that human beings, given trusted, evidence-based, tested artificial intelligence... if they had that, I think it opens up this entire world of visual thinkers. Shout-out to Temple Grandin and her new book, Visual Thinking, all about how we've been living in a verbally dominant society. Well, guess what: words just became very cheap, very much of a commodity. So the engineers, the tradesmen, the plumbers, the people who are doing things with their hands, the artists, the fuzzies: this is our time. And I think that you're opening up an entirely new market for everyone to be able to create, and that to me is super
exciting. That's awesome. Yeah, I think that's a great way to close out, and a great thought, and I now know my next book after I finish The Cuckoo's Egg. So thank you so much for joining us, Beth. It's been a real pleasure to talk through things. I recommend people check the show notes for links that we'll include there, and I hope to chat with you again soon. Beth, thanks so much. You're welcome, thank [Music] you. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freakin' residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next [Music] time. [Music] and
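The diversity prediction theorem that comes up in the transcript above ("the wider your variance, the more standard your mean") has a precise form for squared error: the crowd's error equals the average individual error minus the diversity of the predictions around the crowd's mean. A minimal sketch of that identity, with invented numbers purely for illustration:

```python
import statistics

# Diversity prediction theorem (for squared error):
#   crowd_error = avg_individual_error - diversity
# where the crowd's prediction is the mean of the individual predictions.
# The truth and predictions below are made up for illustration only.

truth = 50.0
predictions = [48.0, 55.0, 41.0, 60.0, 46.0]

crowd = statistics.mean(predictions)                                  # crowd's collective guess
crowd_error = (crowd - truth) ** 2                                    # squared error of the crowd
avg_individual_error = statistics.mean((p - truth) ** 2 for p in predictions)
diversity = statistics.mean((p - crowd) ** 2 for p in predictions)    # spread of guesses

# The identity holds exactly (it is algebraic, not statistical):
assert abs(crowd_error - (avg_individual_error - diversity)) < 1e-9
print(crowd_error, avg_individual_error, diversity)
```

With these numbers the crowd happens to average exactly 50, so its error is zero even though every individual is wrong: the diversity term fully cancels the average individual error, which is the point the speaker is making about wide variance.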
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI search at You.com | Neural search and chat-based search are all the rage right now. However, You.com has been innovating in these topics long before ChatGPT. In this episode, Bryan McCann from You.com shares insights related to our mental model of Large Language Model (LLM) interactions and practical tips related to integrating LLMs into production systems.
Leave us a comment (https://changelog.com/practicalai/215/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Bryan McCann – Twitter (https://twitter.com/BMarcusMcCann) , GitHub (https://github.com/bmccann) , LinkedIn (https://www.linkedin.com/in/bmarcusmccann)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• You.com (https://you.com)
• Open Platform for developers (https://you.com/developers)
• Join the You.com Discord server (https://discord.gg/gbH6XaQdBQ)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-215.md) | 8 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at [Music] fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well, Daniel. What are we in search of today? What's the topic coming up? That's a good one. Well, we've been talking about ChatGPT and people using it for search and other things, but we've got the real powerhouse with us today: Bryan McCann, who is co-founder and CTO at You.com, which is an AI search engine. How are you doing, Bryan? I'm fantastic. I'm so excited to be here. Yeah, I just finished watching an old episode last night with Demetrios, and laughed about all the ML stuff I've learned at You.com over the past couple of years. Yeah, there's no shortage of cringe moments in the MLOps journey, but that was a good one, for sure. Maybe just taking an initial step back at, like, AI, You.com, AI search engine: how did you come upon this idea that there needed to be a new type of search engine, and in particular one that involved some type of AI within it? How did this idea come to shape, and what is You.com? Maybe we can start there. Great, yeah, I can tell you all about it. It depends on how far back you really want to go, but I'll start back when my co-founder and I were at Salesforce, doing research in natural language processing. My co-founder and our CEO, Richard Socher, was the chief scientist there, and we worked for quite a few years on exactly the kinds of technologies that are getting more mass adoption today, like these large language models. When I started out, it was a challenge to get them to do anything, and oftentimes people would question why anybody worked on language models. It was starting to almost become hard to get any publications about them, because they were supposed to be just a theoretical exercise of some kind. But then, over the course of several years, in our research in first transfer learning, contextualized word vectors, and then pushing multitask learning and unified approaches to natural language processing, we saw what was happening, and eventually ran some pretty fun experiments, even with authors and writers, around collaborations with generative AI, like the GPT tools today. The first moment was that it was just kind of fun, inspiring. It was starting to work, and seeing that, I think both of us started thinking about what was really going to happen in the next few years. There's two ways it could go: all this NLP stuff becomes as good as it is today and it just goes into making Facebook ads better, or something like that, or Instagram ads, and all that understanding that we work on doesn't really go into something that I was super stoked about at the time. You know, I came from a philosophy background, got into all the natural language stuff because I was interested in meaning and understanding what meaning was, which took me into the analytic philosophy tradition and focusing on language. So for all of that to then just channel into telling my little sisters something better on Instagram was not really what I was hoping for. That was the first thing there, and then the second was, I think after we had both seen much of the research community adopt this
direction which was really not popular, when I was just starting out you know, this was controversial it was even, against software engineering, principles um engineers at various, research groups and companies would, would actually decry some of what we, were advocating for because well how can, you disentangle and understand where, problems are in the model if you're, training on all this data it was not, exactly neat and tidy from a software, engineering principal perspective but, after four or five years everybody was, doing it and so we felt like it was time, to start looking at an area where maybe, people felt similarly about the, likelihood of it changing much and, search was one of those areas where very, much like the original uh time in NLP, were like oh you know we should do it, this way and there was a lot of people, saying no that sounds like a bad idea, and they come around to it eventually, now with search I can tell you over the, last couple years yeah lots of people, asked me I think why the heck would you, start a search engine but we saw a lot, of these technological advances coming, you know and we saw that there was this, inflection point coming we wanted to be, on the other side of research and kind, of directing and channeling that into a, better way to do it and search was, really the Gateway of the internet for, that it's for so many people the place, they go that they then find the rest of, the internet they find information and, it becomes like a key point for people, where this technology, and understanding them can then help, them in different ways so with uon we, wanted to really found the company on, three values of like trust facts and, kindness and leverage this technology to, make search more about serving you and, using an understanding rather than you, know just uh monetizing your attention, or with and from your perspective I mean, people probably think that at least, algorithms like have played a a role, like, maybe generally 
people think like oh, there's sophisticated algorithms behind, search now people are talking about like, AI driven search neural search semantic, search could you help us kind of like, parse out like what's fundamentally, different about like the things people, are talking about now when they're, referring to that sort of like Ai and, neural search as opposed to like what, might have been going on like all along, with what isn't like dumb algorithms but, like they're not in the same sort of, class as this other type yeah for sure I, mean I think uh you know five years ago, before generative AI was really on the, radar now we still thought there's AI, involved in search it just wasn't the, kind that we're seeing today so AI has, been trying to understand this for a, long time what's happening now is the, algorithms or the kind of AI we're using, the neural networks we're using are much, better at understanding the context of, what we're trying to say I think this is, one of the key underlying features of, what we're seeing so when you type to it, or you're talking to it one of the dot, threads has been understanding context, we started out with training word, vectors in NLP um so if people are, familiar with word vectors every single, word or token has a vector that's, associated with it and that was pretty, much all the context we had and we, started looking at sentences as a whole, to take into consideration context and, now these things are reading you know as, much of the internet as they can get, their hands out custom data sets, supervised training data on top of all, the unsupervised training data and uh, with that comes this more nuanced, understanding every parameter that we're, adding as these models are getting, bigger they're recording some subtlety, of how we use language right just, mimicking our behavior and picking up on, those patterns so the first is, understanding context and then the, second is the generative aspect of it so, there's taking in 
some text from you and understanding what that means, but then it's producing text, and I think that's been the part that's really exciting for people. Both have been really important for You.com and building a search engine differently, but now with YouChat, for example, these generated responses are really opening up a different way of serving users. That's totally in line with what we were planning for You.com as well, because we really wanted to move search from being just about finding blue links to ideally replacing every blue link, in as many cases as possible, with a thing you'd actually want to do, or the information you'd actually need. And these generative models have essentially memorized a lot of the information on the other side of those links, so it makes it a lot easier for you to access it, and they can spit it back out at you. I was just going to say, as kind of a follow-up to that, kind of talking about the world before and the world after: obviously the big news thing that's changed the public's perception lately was ChatGPT, and the public kind of becoming aware of that, and you guys have been out there leading the way all along, for years at this point. How has that public perception change affected You.com and YouChat? Aside from just the technology considerations, how has that changed the way you guys are approaching your business in the marketplace, with that public perception change at large? It's a different world from six months ago. Oh, it's fantastic. It absolutely is. I think so many of the things that we had started to build into You.com (we had some generative writing tools, we had image generation tools; we call them apps inside You.com, because we have a platform for developers to build these apps into it)... what we saw happen was the door kind of opened to do some of these newer things with
more acceptance from a much wider portion of the population. I think a lot of people had expectations about what search was and what search should do for them, and even though we were at the forefront of that, releasing these things, that moment last fall, when it started going viral, is when everybody kind of dropped those expectations and said, hey, what is this new technology that could be doing something like search for us in a very different way? And so with that, YouChat has been super popular, and it's becoming more and more popular as part of you.com. Right now we have a more default, normal search experience, but then you can also use the conversational approach, and that's really picking up a ton of traction; it's clear there are a lot of use cases where it just serves users better. So while you were talking, I was asking YouChat how AI can be integrated into search, and at least you're consistent with YouChat. The answer is: AI can be integrated into search in a variety of ways; for example, AI can be used to provide more accurate search results by understanding user intent and the context of their search query. So there you go. But I mean, as I just mentioned, I have YouChat on my phone, I've been playing with it, and really, I don't even know if I would call it playing with it rather than using it. And I think one of the things people are realizing is, yeah, it's fun to generate a new Eminem rap song about AI or something in ChatGPT, but people are starting to think about these interfaces as tools that can, like you said, give them the content they're really after without them having to follow a bunch of links. Now, in search, and just in general, there's been money to be made by pointing people to links, right, and ads promoting links. From your perspective, how does this shift in people thinking now
about a chat interface, which isn't driven by these links, influence the industry at large, and maybe some responses that we'll see across the industry? Because that's kind of the bread and butter of how everything works in search, right? Absolutely, yeah. It's a big question right now, and I think it's one that's exciting to see evolve over the next however long. But from our perspective, what we've been trying to set up with you.com from pretty much the start (though we released it publicly last fall as well) is this more open platform approach to search, where partners, content creators, developers, whoever it is that owns what's on the other side of the link, has an active role and a clear way to monetize and benefit from anything that these language models are generating. So for example, a partner can come into you.com and create an app in a couple of hours if they've got an API, or they can just give us the data and we'll create a search API for them and support the infrastructure. Then we'll show an app that either allows people to interact with their product (kind of like a trial for you, by the way) or with the information from their site, but they own that space. So it's not like more traditional search engines, where any monetization that happens on their website is their monetization, and traffic has to get to that other place for those people to monetize. At you.com, the app itself is considered theirs, and any monetization that happens is theirs at that point. So it's kind of flipping the script, in a way, like you said. That's the biggest shift that we can see happening, and we can also see a lot more people moving towards paying for some of these tools as productivity tools, tools that empower them more, rather than just a tool that connects them to different things, which they're very used to having for free. I imagine what's going to come out of it will be some combination of the two, but there will be
more and more of a shift towards providers being closely linked to that content, even if the content is less clearly attributable through a language model. It's going to be interesting to see how all the different industries adapt, or try not to adapt and try to keep things the way they are. So you were talking a little bit about the idea of using it for tooling, and some of the things now... as you were describing that before the break, it got me wondering: can you talk about some examples of how YouChat, and the technologies and algorithms underlying it, might be used to enhance tooling? What are some of the things you're thinking about when you're lying awake at night thinking about what's next, about where you want to take this? Yeah, great question. Back in the summer, before that, we'd been working mostly on what we call YouCode, and we were starting to bring in a lot more developer resources and generative AI specifically for generating code on behalf of developers. And now, with these more conversational interfaces as well, you see people going to them a lot for writing code, even debugging code, debugging code that the AI generates for them. We already see a lot of people going and just saying, "this is what I have in my fridge, what can I make with this?", and much broader sets of questions. But then the conversational interface allows you to gather some context over the course of the conversation in a much more seamless way, until you can get a satisfying answer or response. But yeah, lots of marketing, lots of students as well. We had an application called YouWrite that's still inside you.com, and that's been really popular with students, for example, because it can come up in the chat. Sometimes you can ask it to do some things, but getting the language model nudged in the right direction is
still something that's challenging for people to quite understand how to do, and so these productivity tools usually involve a little bit more of a touch from our side to make them really useful for a particular niche. I just want to note that going to the refrigerator and saying "this is what I have" is going to be really, really useful for me. Yeah, for me it's like, there's nothing in there, what do you do? Yeah, I was going to say, it's pretty sparse in my fridge right now, so I don't know that there's a good answer to that regardless; like, go to the grocery store. My wife makes fun of me; she's like, "put something together," and I'm like, "what?" So that's going to be a good use case, actually. Yeah. So you were talking about this idea of a thread of conversation through which people learn about some topic, or where the thread provides context for a response or answer. But I'm also intrigued by some of the unique things about YouChat in particular, and you.com more generally, which is a little bit more of a holistic view of multiple modes, or multimodal approaches, where, hey, I'm not always just getting a text blob, right? Sometimes I want a text blob, and sometimes I actually don't. If I'm asking about the weather, maybe I want a little graph, a little card telling me about the weather. So could you explain a little bit how you all are thinking about multimodality in terms of these sorts of natural language interfaces, maybe both in terms of the outputs, but potentially also in terms of the inputs, and how you're thinking about merging those technologies and those inputs together? Yeah, great question. It is the future, you know. I think so much of what we're learning from language is starting to make its way into image generation, right, with many of the tools that have come out, but all these other modalities as well. In
you.com in particular, YouChat has access to the kind of more traditional search engine underneath it, so it actually uses that. It's a little metaphorical, more or less, but it knows how to understand your intent, and it can go out and ask for what kind of information it needs from different sources, and it can also interact with all of these apps that we've created in the open platform. So if you are looking for weather, it can go and say, "oh, give me the chart for the weather," and over time (it's a little bit of a contrived example) I would want it to look at that data and be able to answer any question you want about that data. You know, maybe run its own Python code, doing statistics over that data if you really wanted that. It should be able to do all of these things. The same goes for finance: if you ask about a particular stock, YouChat's not necessarily going to tell you in text about all the things you would want to know, the volume and the high and all those things; it's going to show you a nice application there. And then over time, we're enabling YouChat to use that data more and more, to ground its responses in that data, the same way it's currently grounding in and citing search results right now, to try to lessen the effect of hallucination, which has been a really widely known problem with some of these generative models. Especially in the research days, that was one of the most frustrating aspects of these models. They're getting better, and they're especially better when you ground them in other kinds of data, like our open platform apps or the search engine results themselves. So we're going to use that to make it better and better. And I mentioned writing Python code for itself; we want it to be able to do pretty much anything you'd want to do on the internet. That's where this kind of technology can go, kind of realizing the promise of
some of those early assistant-like things, right, like Siri and Cortana. You don't want to just say, "oh, tell me about this thing"; you want to be able to have it do things for you, and we want to keep moving more and more towards that vision of what we call a "do engine," instead of just a search engine. Very inspiring, what you're saying. As you're looking at the world going forward, and you're trying to think about getting those capabilities out into all the places... whether it be something as mundane as being in the kitchen, as we talked about jokingly, or whether you're getting out into vehicles that are either cloud-connected or on the edge, or anything, or maybe even (another popular topic out there, to throw buzzwords out) things like the metaverse and such: how do you get this capability out into all those use cases in a very practical and functional way, for people to start taking advantage of it? Because you have almost unlimited potential in terms of this generative capability that we're on the forefront of. And so, like, I have a 10-year-old daughter, and I'm imagining, another 10 years out, she's going to have a tremendous college experience, very different from ours, because she's going to have all these new tools. How are you looking at trying to get this technology into all the places that really affect people's lives going forward? It's very likely that 10 years from now, the way people interact with the internet and the information that's out there... they're going to find it hard to imagine how we did it without the kind of strong language understanding we'll have at that point, because language is this really natural interface for us, right? You can talk about doing pretty much anything. So if something could be on the other side, and be that other you out there, kind of doing
those things a little bit for you, on your behalf, just the way you would, I think it's probably something we can't really conceive of yet; we can't really wrap our minds around it. But in getting there, there are traditional ways to do it. We have mobile browser apps, and I think people understand how those work, like Chrome or Safari; we have a You browser on iOS and on Android. I think desktop browsers are another natural one. But pretty much anywhere you might type in text could eventually become an interface for you to interact with these things, and if you can type text into it, you can also speak to it and have speech-to-text take care of that for you, if you want to do that. Any interface where you're using language, or could use language, to communicate with something is an opportunity, I think, for the next generation of search and chat and do engines like you.com. That's the forefront. You know, I have some sci-fi thoughts we could probably go down... I don't know about you guys, but I have an inner voice; I can hear what I'm thinking as text, more or less. You should definitely go there, because this is what these conversations are all about. Yeah, yeah. So, you know, not everybody has this; some people are surprised to learn that as well. Some people can't see images in their head, for example; some people don't have an inner monologue. For many years now, since I've been working on these things, I've referred to my inner monologue as my own inner language model, right? It kind of even predicts a little bit what you're going to say next; that's how you complete people's sentences and anticipate things. So I don't have any aspirations to work on this... I think there are a lot of things to think about, but in the long run, that's kind of a language interface too. What if these things were hooked up to that? If you're into the neural interface stuff,
that's maybe a realm... we're very far from it, but what if your inner monologue could also be supplemented by these things, your own thoughts and thought processes? Yeah, not on our roadmap, but I get that. I like that thought, though, because we've come to a point, and I think everyone is coming to this point, where things that would have been "that's way out there" are things people are starting to see as interesting ideas, kind of seeing how fast things have ramped up in recent times. And I think it's pushing imagination out there at large, in terms of what might be coming. Yeah, it's really inspiring. At some point, when we were making some of these early language models, we were working on our version of a GPT-2 size model that we called CTRL, and someone I was working with read, at some point, a poem that this model had generated and was legitimately touched. They were like, "whoa, I actually really like that poem." They didn't like poems before, and then we spent weeks in our off time talking about poetry and trying to find poets they liked, things like that. So even in the kind of simple moments, this opening that I think you're talking about, this dropping of expectations about search and what technology can really do for us, is changing the way people think about it. It's changing people's lives in some ways; it's inspiring them, getting them to be more creative. And going back to Daniel's earlier question, when you combine different modalities of images and vision and text, and just think about what you could do with your own thoughts if you could actualize them and realize them more easily... I know that's a cool journey to be on. So, Brian, super interesting to think about where this could be headed, and I've had similar experiences to what you talked about a second ago, where it's like, I kind of have a
mental block in this scenario, and I go to one of these chat interfaces, and even if it's just to unblock myself, I start chatting, and then it sort of jump-starts my mind in a new direction or something. That's very intriguing to me. Now, you've been interacting with these models quite a bit over time, and have probably dealt with things like grounding and hallucination, and the sort of power of the knowledge embedded in these models, what they've memorized, things people have talked about more recently. You've been at the forefront of thinking about these things, so I'm wondering, now that the whole world is talking about all of this, if you have any sort of wisdom you would like to impart, in terms of either the topics people are concerned about, like grounding and hallucination or harmful outputs, or, on the other side, the ways that this benefits people. I think people were concerned that this is an automation of our lives, but really, people are getting such a benefit from it as an assistive technology. So, given that you've spent a longer time thinking about these things than many of us who have just been hit in the face with them, any wisdom or thoughts that you've started to develop as your own mental model of these things? Oh yeah, for sure. Just remember that it wasn't very long ago that they would just repeat themselves over and over again, and they did nothing useful, right? And there are two ways to remember that. One, they'll probably get a lot better; as long as we keep going the way we're going, they're going to get a lot better. And two, they're just tools. They're still just tools, and there are a lot of things we don't understand about them. But I would suggest trying our best as a community
not to anthropomorphize them too much, and not think of them as these other people. I think the chat interface in particular gets our minds ready to be talking to a person, even just the UI and the layout of it and things like that, right? It looks like we're talking to a person, rather than a box where you type in keywords, which is what we've all been trained to do for 20 years or so. And when we're texting with people, it's this way, and that alternates: people, AI, people, AI. Now it's like, "oh, what is this thing?" Try to keep at the forefront of your mind that this is a tool, it's an algorithm; there's a computer out there behind the scenes somewhere doing this stuff. And keep the awareness that sometimes it might be helpful for you to let yourself slip into conversational flow with it as if it's a person, and if that's helpful, that opens up inspiration and things like that, but then don't get too caught up in it. Remember that it's there for you. I've got a question that you're making me think of as you say that, because what you just said really applies to my personal experience. Having grown up with computers (I'm in my early 50s, so decades of computers), in the past year my relationship with technology has changed. I have always used it for automation and productivity, whether it be code or other tools that are out there. The thing that's changed dramatically is that I place a very high premium on creativity, and creativity is something that has historically, prior to the last few years, been the domain of humans. We always expected creativity to continue to be the more human thing, distinct from computers, but that's kind of been flipped around. And so we're seeing tremendous assistive capability (to use the word assistive that Daniel did a moment ago). In other non-AI parts of my life, I'm
using these AI tools for creative purposes. So where do you see that going? Because that surprised me, to be able to go to chat and get inspiration and bring my own creativity to it, and then have it in turn add extra creativity that is algorithmically based, enhancing that, so we end up with a product that is creativity that is both human and automation together, generating this thing, which is really cool. And I'm using it to teach children and stuff like that in other parts of my life. So how do you see that kind of relationship going forward, when you're talking a little bit about those sci-fi influences and stuff like that, and as we're looking at these models, which are most definitely, as you said, not entities of themselves, not people, but which are going to get much more powerful in the fairly near future? Where does that go? What does that mean? How does the relationship look going forward between us and those systems? Yeah, I mean, I fully intend to just get more creative myself. I think it might help to have been playing with it for so long, seeing them get better and better, and still seeing all the places where they still fail me, I suppose. But for me, it's fully incorporated. You think of it as this different thing, right, that's almost competitive with you on your creativity, but going 10 years into the future, the next generation, like we were talking about before... I doubt they're going to think about it that way. They're going to think about it the way you think about a normal search engine. I know there are studies on people who think they know things, but really they just know how to search for it, and then they feel like they knew the thing. Yeah, there's this appendage experience, where we've merged with it somehow, subconsciously. Something like that will
probably happen, where this will just feel like part of your creativity, and I would encourage us to also develop the technology that way, so the narrative continues to be that it's built for us, to enhance us, so that we're more creative. I feel more creative using it; I don't feel like it's creative and I'm not. Me too. I feel like it's giving me access to data, and to the data distribution, in a really nice, condensed way, more or less, and I can kind of test the waters, right? I see what all the other people are doing, in some way, some portion of the population, whatever the data is representing, and then I can choose to either do that, or do something else and change it. And that's actually really cool from a creative perspective, because sometimes you might be in your own little vacuum or echo chamber, or whatever it is, and you think you're doing something really novel and cool; it turns out a million other people have done that kind of thing. Now you see it. It doesn't look like it, but what you're seeing is what the language models, what everybody else said... not everybody, but a lot of people, enough people for the language models to say that. So you're getting a little bit of a measurement of what's going on out there that you couldn't get before, and if you use that as another input for yourself... I think, in the way that humans and chess algorithms are better together, we'll continue being better together with these creativity tools, these productivity tools, and we'll just learn to be better at whatever we want to do. Hopefully it'll just free us up to think about the things we want to think about. Yeah. And I'm also kind of wondering, selfishly... so we've been talking about how you've had the benefit of working with these models for a very long time and getting an intuition for some of these things, how to think about them in the capacity that they play, how to think about the UIs around them, but
also, you've been on the technology side, at the forefront of integrating these things into your systems: hey, there's output from this model, should we show this to a user? There's output from this model, or there are these references we could inject as context into this model, do we do that? A lot of people as practitioners are wrestling with those things too, now that they can go get LangChain, pull it down, integrate a bunch of things together, and create a workflow. So on the practitioner level, I guess, more so than the perception level: for all of those listeners out there who are jumping into this and starting to deal with some of these issues around integrating language models into their own applications, do you have any wisdom you'd like to impart in terms of that practical side of things, and how to go from the concept of integrating generative AI for a particular domain and a particular problem to actually realizing that in an application that's useful for people? Yeah, and this goes back to some of the other tips I'd provide for people interacting with them too. For developers of these things, and practitioners: do try to ground the responses as much as you can, unless the product you're giving people is specifically "say whatever you want, language model, and do whatever." If people are actually looking for real information from you, just know that your users are also seeing that conversational interface and everything I just described that makes it feel a little more human than it is, and it might even be their first time seeing one of these things and how well it works. So keep that stuff in mind, and try to anticipate the hallucinations, try to ground things, try to provide clear attribution as much
as possible. And then I would say, switch back and forth as much as you can between those moments of surprise and wonder in yourself, and let your expectations drop a little bit so you can think about what you can do with these things; but then, once you implement something, be very skeptical and critical of it, as much as you possibly can, like you would with any other engineering system, any other research project or experiment you're running. Go get some numbers; get close to the data, because it's very foreign to a lot of people right now, and the only way to deal with that is to go use it a lot, not just as a user but as a practitioner. You're going to see that prompts matter a lot, in ways that you probably won't anticipate sometimes, and will anticipate sometimes. So start getting your hands dirty; embrace that whole process. It's quite fun, because it feels like you get a lot farther a lot more quickly than you used to. You can kind of dream up a use case and it just kind of works, so celebrate that. I love that moment: "oh my gosh, this kind of works." But then immediately go back: "okay, wait a second, what if we evaluated this properly? What if we put all these conditions around it? Let's find all the places where it doesn't work, where it's going to let people down." And with you.com as a search engine turning into a do engine, the goal is to make it impossible for you to fail in what you're trying to do, more or less. So try to embrace that mindset here too. It's a new tool; you have to use it. I don't know if you've been a Kubernetes person or a Helm chart person, but you're not going to just know all the things. This is familiar-looking because it's your language, but treat it almost as if it's a foreign language, and take a little bit of that perspective. Yeah, I love that. That's a really good perspective to bring into this. As we
wrap up and get to the end here, I'm wondering... we've talked a lot about you.com, we've talked a lot about YouChat. If you were to encourage listeners who maybe haven't interacted with you.com or YouChat yet: what are a few things they can do, a few places they can visit, to get to know the you.com experience, and maybe a few things to try that would highlight some of the things we've talked about? Yeah, so there's you.com; you can go there. If you're on mobile, we do have our app, You, and I do recommend downloading it, because it provides a better experience on mobile, especially for the chat. Now, if you go to you.com, I'd also encourage you to look for the chat tab, or go to you.com/chat, because that's where a lot of this new, exciting stuff is showing up. I'd also love to just chat with people directly. We have a Discord, and if you go to you.com it should be pretty easy to find; you can join the community, there's a link to it. And yeah, come talk to us about your use cases. So far, people love it for writing essays and emails to professors, and code, and recipes. Lots of people just like asking it about themselves, even though the model doesn't always know exactly; that's a fun use case that blends hallucination with truth. But look for those citations. Like I said, treat it like any other thing: look for the citations, look for grounding, and follow up with us on our Discord; we're pretty much there all the time, and you can talk to us directly about it. Yeah, that's awesome. Well, I have been enjoying interacting with YouChat and other things, and I'm just really happy we had the chance to have you on the show, Brian, to learn a little bit more about what you're doing, and also your really insightful perspective from working with these large language models for so long. So thank you for taking the time and
joining us, and we'd love to have you back on the show in a year. I'm sure next year AI search is going to look way different than we expect, because six months down the road it always looks different. But yeah, thank you for your innovation and your insights. Thank you so much for having me. It was a real pleasure to meet you, and yes, I intend to make it very different in a year, so I'll be stoked to come back and see how much of what we said holds true, and whether we all think of these things as ourselves at that point. I don't know; we'll see what happens. Sounds good, yeah. Thank you. Yeah, thanks, Brian. [Music] Bye. Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next [Music] time. |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | End-to-end cloud compute for AI/ML | We’ve all experienced pain moving from local development, to testing, and then on to production. This cycle can be long and tedious, especially as AI models and datasets are integrated. Modal is trying to make this loop of development as seamless as possible for AI practitioners, and their platform is pretty incredible!
Erik from Modal joins us in this episode to help us understand how we can run or deploy machine learning models, massively parallel compute jobs, task queues, web apps, and much more, without our own infrastructure.
Leave us a comment (https://changelog.com/practicalai/214/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Erik Bernhardsson – Twitter (https://twitter.com/bernhardsson) , GitHub (https://github.com/erikbern) , Website (https://erikbern.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Modal (https://modal.com/)
• Episode 142 discussing Erik’s “building a data team” article (https://changelog.com/practicalai/142)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-214.md) | 21 | 1 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at [Music] fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? I'm doing very well; how are you today, Daniel? I'm actually doing amazing. I'm not in my normal location; I'm down in Orlando, Florida, so for one thing it's sunny outside and I can be outside without suffering, but I'm also in in-person meetings here with some of our collaborators and partners, and they wanted me to do a demo today, so I got up early this morning at 6:00 a.m.
before hotel breakfast and threw together a quick demo, and I used Modal for that. There was literally someone who stood up out of their seat and clapped after the demo. Our guest today is Erik Bernhardsson from Modal, so basically Erik is making me look good in all respects, and I'm pretty excited to talk more about Modal and share it with everyone today. Welcome, Erik. Hi, thanks for having me; I'm excited to talk about Modal, or anything else. Chris, do you remember, quite a while ago, I don't remember when this was, maybe Erik you remember: I think you wrote a blog post about building data teams or something like that. I remember Chris and I talking about it on the podcast; I'll have to see if I can find it back in your blog. Yeah, that was in the summer of 2021. So we should have had you on the show then, but I'm glad we get to have you on now. You describe Modal as an end-to-end stack for cloud compute. So one big question to start things out: cloud compute isn't new, but it can definitely be complicated depending on what you're trying to do. What got you thinking about the set of problems you're addressing with Modal? What got you going down this path? Yeah, it's kind of a longer story, but I've been working with data for 15 years, or maybe more, most of my career. I was at Spotify for seven years; I built the music recommendation system there and open-sourced a vector search library called Annoy and a workflow scheduler called Luigi, but I did kind of everything from deep learning to business intelligence to large-scale big data Hadoop-type stuff. Then I was at a company called Better for six years, where I managed data teams but also managed other teams. And as I was thinking about starting a company, I kept coming back to data,
and my starting point was really just: it's hard to work with data, and I feel like data teams don't have the tools they need. Initially I was super agnostic as to what to build; frankly, I kind of wanted to rebuild everything, which is not particularly realistic, maybe in a lifetime, aspirational megalomania maybe. But what I realized was that if you want to rethink a lot of the data stack, a good place to start is at the bottom. I sometimes joke that I kind of grudgingly had to start here; it's like a spite startup. I'm doing this work now at the lowest level, which is to solve the compute problem: I have code and I want to deploy it in the cloud, and why is that so hard? It's a big problem for many data teams, just taking code and scaling it out, or scheduling it, or running it on GPUs, or setting up web endpoints, whatever it is, and really focusing on that problem as building this core foundational layer that's very abstract and very general-purpose. That's also why our website is, I think, a little bit confusing the first time. In particular, what I'd say we've been focusing on for the last six months is online inference, so a lot of machine learning and AI models, focusing on that use case as an initial starting point. But Modal always had, to me, this promise of running almost anything; it's almost like a Kubernetes in the cloud. Yeah, and one of the interesting things to me, where it maybe took a second for this to sink in, but once it did it was a really encouraging thing: I have my code locally and I know how to run it locally, but then you have this concept of these decorators within Python code that
kind of take your code, and you run it like python main.py or whatever, but then at some moment I realized: okay, this function is actually not running locally. I just did some sort of batch inference with this script, and my fans aren't spinning up on my laptop, because this is actually running somewhere else. Could you describe... there are a lot of ways you could have gone about this lower level of the problems data teams face, and there's a really fundamental piece of this, which is the local-to-cloud, or local-to-deployment, cycle, and with Modal that seems very, very quick. How did you zero in on that kind of workflow? We built something that architecturally looks something like AWS Lambda: it's functions as a service; we take code and execute it in a serverless way in the cloud. The reason I ended up going down this rabbit hole, building this whole serverless runtime, is really thinking about developer productivity and developer happiness. My philosophical observation from being a CTO for many years is that developer productivity is very often well understood in terms of feedback loops. In particular, as you write code, there's almost a nested set of for loops: the innermost loop is you write some code, there's a syntax error, you fix it, you rewrite it, maybe you have some unit tests; but then there are these outer loops that are often, okay, let's deploy this to the cloud, or let's run this on a massive dataset, and that's when iteration speed gets very, very slow. If you look at data teams, they're often particularly exposed to these feedback loops, because they have to run on large
datasets, or they always have to run things in production; you can't really run things on synthetic data if you're a data team; you have to deploy into production or run on a real dataset. And that very slow iteration speed really frustrated a lot of data teams: I write some code, now I have to create a container, push it to the cloud, then go and click in an interface, or merge some pull request or whatever, then my container fails, now I have to go look at logs. So I started thinking: what if we bring the infrastructure into the innermost loop, the loop where you just write code and then immediately run it, but it actually runs in the cloud? In order to do that, we realized we can't do this with Kubernetes, we can't do this using Lambda; we basically have to build our own infrastructure that takes code and can launch containers, maybe hundreds of containers, in the cloud in a few seconds. So we went very deep down that rabbit hole and built basically our own container runtime, our own file system, our own container builder. Luckily I'm not afraid to go deep and solve tricky container problems, dealing with Linux and file systems. That's a lot of what we had to build over the last two years, that foundational runtime, but the benefit is that now we have this super nice developer experience: you can take code locally and spawn a hundred containers in the cloud within a few seconds, running the latest code in the latest container. It sounds fascinating, and I'm really interested in it, but I want to ask you to step back for a second with a follow-up and bridge a gap of understanding for me. You were saying you can't do it
with Kubernetes, can't do it with AWS Lambda, and I believe you, but I don't know why, and I'm imagining a few of our listeners don't know why either. Could you tell us what it is? A lot of their companies are on one of the big three providers. You demonstrated the user experience quite well a moment ago, but could you talk a little bit about what was falling down in those more mainstream approaches, Google, AWS, and Azure, so that we understand it? First of all, I'm the world's biggest AWS fan. We run everything on AWS, and I love it for the capabilities it brings me as a developer to run things at scale. But the developer experience in AWS has never been particularly good; I've been banging my head for years against AWS documentation, and in the end I usually figure it out, but it's a pretty jarring experience. I think in particular the problem with both Kubernetes and AWS, or Lambda or EC2, et cetera, that we saw, either for users to use directly or for us to build on top of, is just the iteration speed. For instance, in Kubernetes, say you want to run something in production. Going from code locally, you first have to build a container, then do a docker push to a registry, then kick off a Kubernetes job, then go and look at the logs of that job; and by the way, kicking off a Kubernetes job often entails the kubelet worker pulling down the Docker image. So we were looking under the hood and trying to understand how Docker works, and Docker is an
amazing piece of technology for the new way of thinking it brings to the table around isolated containers, but it's quite inefficient at starting containers. Most containers end up having lots of data that's never actually read; there are thousands of time zone files, locale information about time zones in Uzbekistan or whatever, that you're never going to read unless you're in Uzbekistan. Sorry, just getting that in there. Yeah, or uninhabited islands; there's time zone information about uninhabited islands in the standard Linux distribution. Okay, great, but get them out of my Docker container. The other thing is that Docker is quite inefficient: it has this layer mechanism, but other than that it doesn't really deduplicate information. So we realized: what if we rethink how those containers get pushed and pulled? We ended up building our own file system; we deduplicate content by computing a checksum of every file, which is actually similar to how Lambda works. But Lambda is also not fast enough, in the sense that if you publish a new Lambda it still takes about a minute before you can run it. Lambda also has other limitations: it doesn't support GPUs, doesn't support long-running jobs, et cetera. So those are all the reasons we ended up deciding we couldn't build this on top of Kubernetes or Lambda or any existing solution, also not Docker; we ended up using lower-level primitives instead and building a lot of it ourselves. Are there specific things about... in my own experience using Modal I've experienced this, but from your perspective I'd be interested to hear: you talked about moving
toward these use cases around machine learning and AI as being very well suited to this workflow. Do those types of workflows have added benefits or challenges compared with, say, running a web scraper, some other use case that's related to data but maybe doesn't involve serialized model files and inference and GPUs? What is it about these machine learning and AI workflows where you think there are specific challenges solved by this quick-cycle workflow, versus other data-related workflows? We've focused a lot on online inference recently. Basically, say you have a model, either some off-the-shelf model from Hugging Face or some fine-tuned model of your own, and you want to deploy it. In particular, if that model uses a GPU, the set of vendors that support that is somewhat limited. The other reason is cost. Traditionally, if you go the Kubernetes/EC2 route and you want to deploy a model inference endpoint, you have to spin up an instance that sits idle most of the time. You can set up autoscaling, but autoscaling is pretty slow, so moving to serverless makes a lot of sense from a cost perspective. And I think that's the other reason we've seen a lot of this; it's not just us; I saw Banana was on a previous episode, for instance, and there are a couple of other vendors also focused on this. I think cost is driving a lot of the demand for serverless vendors for GPU compute specifically. I also think something came up in the last few months where a lot of people realized: we're very good at training models and building custom stuff; we don't want to
deal with infrastructure and running this in production. So there's been a lot of demand for vendors like Modal, where they can just take a model, publish it to Modal, and run it in production without having to think about autoscaling policies, setting up web endpoints, dealing with security groups, and all that stuff. That said, going back to Modal's roots, it's not just online inference. We started out focusing a lot on what I think of as embarrassingly parallel problems, the idea that you have something you want to fan out and do a lot of in parallel. So besides online inference, Modal also does a fair amount of batch inference and parallelizable work: a lot of people use it for web scraping; others use it for things like computational biotech, large-scale transcoding, and various types of simulations or backtesting. There's a pretty wide range of things Modal does well. Right now the user experience for online inference on Modal is, I'd say, nine out of ten; the user experience for batch inference and large-scale parallelism is eight out of ten. We're working on a lot of the other stuff, like data pipelines and more complex support for scheduling, where right now it's good but not quite where we think the long-term potential is. So, Erik, in full disclosure to everyone in the world, I'm a huge fan of Modal and have been using it a lot and building things in it, including the side project I'm working on, Prediction Guard. I just counted; I'm in the interface now and I have 129 Modal apps deployed right now. Wow. So I want to try to describe it from my end;
it's hard, because this is an audio podcast and talking about how things work without showing something visual is a bit tough, but I'm going to do my best at describing it, and then I'd love for you to fill in the gaps or correct me if I'm wrong on any points. If you think about running something in Modal: you can write a Python script, say app.py, and you can have functions in that script. And one of the things I love: dependencies are a really annoying part of AI and ML workflows in particular. You can decorate certain functions in your code with stub.function, and define a Modal stub in your code, which is essentially a reference to a container with certain dependencies in it. Then when you execute your code with python app.py, when it gets to a function decorated with the stub, it actually doesn't run it locally; it spins up a container in Modal and runs it in the cloud. You can do this either by just calling that function, or you can deploy your app and have that function be accessible as a serverless function or a web endpoint for your other applications or APIs to access. I don't know if I did a great job describing that, Erik; that was my initial attempt. Feel free to make it more coherent. No, I think that's exactly right. You touched on a couple of points where Modal maybe thinks differently about infrastructure. In particular, swyx wrote a great blog post called the self-provisioning runtime, and to me it put words to an idea I always had. It's similar to Pulumi, for instance, or Terraform: the idea of
infrastructure as code, but Modal has always gone further than that: it's infrastructure and the app code put together in the same place, with the app itself defining the infrastructure it needs to run. With Modal, in code, you define the containers you need, including Python dependencies or any other binary dependencies, and you can have different functions using different containers calling each other just like Python functions. It just provisions itself: you can say this function should run on a GPU, this function should have 16 CPUs available, this other function needs 128 GB of RAM, all in code. There's zero config in Modal; there's not a YAML file; there's nothing to configure; everything is in code. To me it all goes back to this idea of how you make developers productive by having the fast feedback loop. Traditionally we've had to give that up: we make engineers run things locally in order to get the fast feedback loops they need, but the problem is they still need to deploy to the cloud later, and then a whole set of things break because the cloud is running in a different environment. This goes back to what I said maybe 20 minutes ago: what if you take the infrastructure and bring it into the innermost loop of how you iterate? Then you solve the problem of having different environments, because it's always running in the cloud, and it's fast enough; some people even say Modal is faster than running things locally, even though it's running in the cloud. You never have to think about environment conflicts, because it's always running in the exact same container, and it's fast enough that you don't have this frustrating thing where you have to build
containers and push them around. You sort of get the best of both worlds: the developer productivity of running things locally, but the full power of the cloud, containers, GPUs, whatever. I don't know if that makes sense. Yeah. Chris and I have talked about this at certain points on the podcast; I've always really had this disdain for maintaining a whole bunch of local environments. I'm not a Conda user; I have a very minimal setup locally on my machine. One of the things I've grasped onto is that I can develop locally with Modal and just import os and import json, kind of normal things, and import modal; but when I need access to Transformers or PyTorch or some random other package, I actually have zero concern about setting that up locally to test, because I can just add it as a dependency in the Modal function, and it runs in the cloud in its own container, so I never even have to install it locally. I could do that before with a local build of a Docker image, but as you were saying, Erik, that has another cycle associated with it, which is also annoying. Yeah, it's kind of annoying. So I love that my imports stay minimal; I can even run pytest, and it's testing the code as it's going to run in production, because it's already running in a container in the cloud. That's a lot of what I've enjoyed about it. What are the surprising ways you've seen people use Modal, things that were unlocked for users that were really difficult before, or where you thought, I didn't
expect people to do this with Modal? Have you encountered any of those things that stand out? Model inference in itself was a little bit of a serendipitous thing for us. We didn't expect that people would do that; we thought of Modal primarily, initially, as more of a batch workhorse, something that helps you scale out, but we've seen a lot of traction on online inference and model deployments. For that reason we're focusing a lot on improving startup performance right now, because when you're doing online inference you have to spin up containers very quickly, and you also have to load models very quickly; especially when you're dealing with GPUs there's a lot of overhead in copying models to GPUs. Getting that down has been a big focus of ours for the last few months. Another thing I've been surprised by: we enabled functionality to set up web hooks pretty easily. In Modal you can say, make this function exposed to the web and give it its own URL, and now you can call that URL and it triggers some Python code in Modal. People started leveraging that for building full-blown web apps on Modal, which I was kind of surprised by: graphical UIs, hosting whole UIs. I never anticipated that being a use case; I always thought people would use Vercel or maybe Heroku for something like that, but it's been interesting to see a lot of people using it, and it's pretty promising; maybe there's something more to be done there. I tend to think our bread and butter is machine learning and AI and data pipelines, so I don't want to go all in on building more of a web hosting platform.
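The pieces described so far, stub.function decorators, a container image defined in code, resource hints, and a function exposed as a web endpoint, can be pulled together in one small app. The sketch below is hedged and from memory of Modal's client API around the time of this episode; exact names (modal.Stub, Image.debian_slim, stub.webhook, the gpu parameter, .call) vary across modal-client versions, so treat it as illustrative rather than canonical:

```python
# app.py - illustrative Modal app; API names are from memory and may
# differ by modal-client version. Requires the `modal` package and a
# configured Modal token, so this is a sketch, not a verified script.
import modal

stub = modal.Stub("example-app")

# The container image is defined in code, as described in the episode:
# Debian slim plus pip packages, no Dockerfile or YAML required.
image = modal.Image.debian_slim().pip_install("transformers", "torch")

@stub.function(image=image, gpu="any")  # resource needs declared inline
def classify(text: str) -> str:
    # Heavy imports live inside the function body: they only need to
    # exist in the remote container, never on the laptop.
    from transformers import pipeline
    return pipeline("sentiment-analysis")(text)[0]["label"]

@stub.webhook(method="GET")  # exposes this function at its own URL
def classify_web(text: str):
    return {"label": classify.call(text)}

if __name__ == "__main__":
    # `python app.py` runs this locally, but classify.call() executes
    # inside a container in the cloud, exactly as Daniel describes.
    with stub.run():
        print(classify.call("I love fast feedback loops"))
```

Deploying with `modal deploy app.py` would then keep the function and its URL live as a serverless endpoint, per Erik's walkthrough.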
But I think there's something interesting there. Along the same lines, a lot of people have been using us for job-queue type things, almost, as I said, as a replacement for Celery: they just create a Modal function and then they can enqueue work for it, and they never have to think about scaling or deploying and productionizing job queues. That was also something we didn't really think about, but a bunch of people have been telling us to actually do it, so that's kind of cool. So I've got a follow-up question here, and you've both covered it to some degree already, but as the person who has not yet had a chance to use it, I'm really curious, and I imagine a few people listening are wondering too: could you take us through a classic workflow with Modal, Erik? We've done that with other technologies you may have heard on other episodes. Daniel is doing it all the time, but I've been left behind a little bit on this. Just take us through a typical AI/ML workflow on Modal, verbally, what the steps are, to show us that simplicity. People are probably going to be thinking about whatever platform they're on as a point of comparison. Any example is fine. We've optimized a lot for making it possible to deploy and run things in the cloud in a few minutes, so it's actually pretty straightforward. In Modal, you basically take any Python function. Say you have a Python function that uses Hugging Face, just as an example, with some off-the-shelf model, maybe Stable Diffusion. So let's say you have an existing Python function that
uses Hugging Face, takes a prompt, and returns an image. Now you can decorate that Python function with a special Modal decorator and annotate it to say, use this image, and then define an image in code using a special Modal syntax. You can also give us a Dockerfile, but almost everyone just does it in Python: in code you say, basically, use Debian slim, then install these Python packages, like transformers, accelerate, and diffusers, and then annotate the function to use that image. That's pretty much it. Now, on the command line, you can do modal deploy or modal run, and it takes that code, builds the container if it doesn't exist, and runs it in the cloud. If the image is already built, it typically takes about a second to take the local code and spawn a container in the cloud running it. It works for any Python function. That's dead simple right there. Yeah, and it works for any Python function, and you can run pretty much any code you want, because we support fat containers, meaning you can install Python packages, you can install FFmpeg if you want to transcode some video, you can install whatever you want. We have a lot of functionality for manipulating images, building dependencies, and doing pretty advanced stuff as part of that too; pre-baking models into images is something people sometimes want to do to optimize cold-start performance. With getting started on Modal, we really optimized for that sort of magic experience the first time you try it: making it easy to install the Python package, set up a token, and run code immediately in the cloud. We want that first experience to be magic, and to set a
tone for what Modal is, and for the fact that we think Modal is a better way to work with infrastructure in the cloud. So one of the things I was wondering about, which I guess was a surprise to me because I didn't really think about it when I was first using it: everybody has their own setup, but usually I've got my code editor over here and my terminal over here, and maybe another monitor, so I've got both up. I was writing a web hook in Modal, and I ran python app.py or whatever, and when it's a web hook, the code runs and Modal gives you this link where you can ping a development web hook. Of course, I never get my code right the first time, so I bring up Postman or something and try to hit that link, and I get some error, and without really realizing it, I just went over to my code, fixed it, saved the file, and saw over in my terminal that it just redeployed and gave me the link again. That was a really cool surprise: I can just keep this up in the terminal. How does that work, exactly? Was that something you stumbled upon? Because I found it a really satisfying way to develop: I keep modifying the file and trying it until it works, and then I can just Ctrl-C and say modal deploy, and I'm done. Yeah, for sure, and I know I'm harping on it, but this comes back to feedback loops and iteration speed. As a CTO I managed a lot of different teams, data teams and frontend teams and backend teams, and it's interesting how the different disciplines of software engineering
have figured out their own iteration cycles, the ability to get feedback very quickly. Back-end engineers tend to write a lot of unit tests; that's their way. They write some code and then they run all the unit tests, or maybe they run a specific unit test that they know is going to break, and that's their way to get a fast feedback loop. You go to front-end engineers, and they have kind of a setup like you just described: one monitor with the website and one monitor where they write code, and when they save, it just hot-reloads the code. So I feel like sometimes data and back-end people don't give enough credit to front-end engineers; they have really figured out a lot of stuff around software engineering for fast feedback loops. And actually, if you look at the modern toolchain for front-end engineering, I think in many ways it's more advanced than any other part of software engineering. So that is the sort of feedback loop that I wanted to have with Modal, and what I think makes engineers happy is that super-snappy feedback: you just have to save code and then it's live in the cloud. Yeah, so we built that specifically for the web serving part of Modal, because that's something you kind of want to have. It's maybe less visual feedback, but it's the ability to deploy something in the cloud and then hit it with Postman or curl or whatever, immediately. Yeah, under the hood it's not super complex; I actually refactored it yesterday, it's kind of funny. We just monitor the file system, and when we see that any file was updated, we reload the entire app in a subprocess and live-patch the app running in the cloud. It's pretty straightforward; we had a lot of that already built.
So I think the problems you've been solving for the past two years are probably really complicated, for you to lump that into the category of really simple problems. I think that would probably be quite complicated for many, many people. Yeah, for sure. I guess it's simple in the sense that we had already built so much of the underlying complexity that it was relatively easy to support the hot reloading. The fact that we already built so much around taking code and deploying it to the cloud, and doing that very quickly, is a very nice foundation. It's your bread and butter. Yeah, yeah: building fast containers, fast file system stuff... there's a lot of cool stuff that that unlocks. So, this is a particularly interesting episode, I would argue, for me and probably for quite a few of our listeners that listen regularly, because we're talking about something... and Eric, we have the privilege of you as the person who created this, but we also have Daniel, whom I've been working closely with, and our listeners have been listening to and hearing Daniel's passion, and him building his own business on your platform. And we've talked to lots of different companies, and so it definitely has intrigued me in a way that not every company owner, if you would, has. I'm kind of curious, and I'm thinking about it from a slightly different perspective from Daniel, but you've really got me wondering how to make this happen. I work at a big company, as you know. We have big investments in the big cloud providers, as all large companies do. What are good strategies for companies to say: okay, we have so much in these other big names that are out there; how do we start using Modal effectively? What are the kinds of things you've seen your larger customers do in terms of
migration over, or things that you might recommend that enable something of a migration to be more seamless, less painful? Because normally, when you think of large-company migrations, they are almost always fraught with pain and misery and challenges for the IT crews. So how do people get to this thing we're hearing about today and mitigate all of those problems? Yeah. I mean, first of all, admittedly, we're fairly early, and so a lot of our customer base is early-stage companies starting from a clean slate, who have absolutely zero infrastructure, and that makes it a little bit easier. In part because there's nothing legacy that they have to port over, and in part because they're just desperate for tools, so the sales process is a little bit easier for us. I find that the conversation when we talk to bigger customers is obviously quite different. First of all, there's often an existing data platform that's already built in-house. There's of course also a security and compliance question, and that's something we're working on; long-term, there's a lot of really cool stuff you can do around VPC peering and other things to give big companies the security guarantees that they need. But I also think it's a separate conversation, where at a bigger company there's one person who's the decision maker, who has the credit card; there's another person who built the data platform, and now we're saying, oh, actually we shouldn't use that, we should use Modal instead, and that's a tougher conversation; and then maybe a third person is a data scientist who really wants to deploy models. They don't really care about the infrastructure, but they heard good things about Modal. I tend to think in those conversations it's about finding a niche use
case that's low-risk, that doesn't sit in some critical path where the whole business relies on it. So it could be some greenfield thing, something new: deploying a model, or a very simple pipeline, something that maybe doesn't touch super-sensitive data or have super-critical guarantees; some researchy project. That's typically where I tend to start. And often trying to find people, data scientists or machine learning engineers, who feel like the platform team doesn't really have time for them; they want something that lets them iterate quickly without having to bother the ops team. Those are probably the easiest conversations to have with the bigger companies. That's good guidance; I appreciate that. I think it's very fitting that the platform is, at least right now (and please correct me if I'm wrong), very Python-centric in terms of the development workflow and what's supported. Like you said, you can support so many different types of jobs and apps in Modal, so on one side you could say, well, this could become very general-purpose in some ways, or it could fill a really niche gap (which obviously it is starting to fill) and just do that really well, and continue to go deeper there. What do you see as the path forward? Or maybe it's a both/and, with something coming sooner than later? I think of Modal as my 20-year project. I'm finally building a tool I always wanted to have, and ideally I want to spend the rest of my career doing that. So my end goal is to build a very general-purpose set of tools that helps data teams be more productive. That being said, kind of like what I said at
the start of this show, I sort of realized that's almost a megalomaniac vision. I think in practice it all comes down to finding something that resonates with customers and drives growth and validates demand, and then sequencing, layering on all the adjacent products over time. We tend to think right now we have one use case and one target persona that works really well, which is deploying online machine learning inference. That, I think, is an area where we see enormous amounts of demand and traction. In terms of how that fits into sequencing, I think an obvious next step for us is to make fine-tuning and training easier to do in Modal, but we're also thinking about pre-processing; scheduling retraining so it happens on a loop, on a regular basis; maybe how you move your datasets into Modal; and, to some extent, hosting more stateful applications. I think there's a long list of layering on, step by step, more and more advanced features, and gradually expanding. And I think the demand is there: no one wants 35 different point solutions that they have to integrate themselves, right? A lot of the data landscape today, I think, is very fragmented, and as a result a lot of data teams have to integrate so many different vendors and kind of duct-tape them together. I think there's a big case to be made for some sort of consolidation, or defragmentation, of the space, where fewer vendors do more. So long-term, that's absolutely my vision; we're starting from this place right now. Similarly, in terms of languages, like you mentioned, Python versus other
languages: we think Python right now is a great place to start, because 90-plus percent of data work uses Python. But we definitely think long-term. A lot of the infrastructure that we built is low-level and it's written in Rust; it doesn't really care what it's running. We think it could be great to add support for TypeScript, or R, or Go, or Rust, or whatever. So there are many different axes to this, in terms of how we think about sequencing and expansion. I'm just saying, you saw me raise my hand: I love Rust, it's my current favorite language, and Go and Rust are on the back end. But let me ask you a question that came to mind as you were going through that. As you're exploring the world, and you have certain areas of focus but also some ability to stretch out depending on different parts of your strategy: how do you see, to use a very generic, open term, "the edge" out there, things that are not in the cloud? Do you see yourself doing anything in the future that would be edge-based, or do you see yourself more as the cloud partner for things that might be out on the edge, with APIs and such available to those? How do you conceive of either working with or including the edge in your overall strategy? I think edge is primarily useful for very, very latency-sensitive applications, and that's probably a segment of the market where we just feel that's not what Modal is going to be good at. Because if you do things in Wasm or V8 isolates, you can make it kind of fast enough, but the way we focus on serverless right now is sort of fat, traditional Linux distributions in containers, or VMs, and that's always going to have some non-trivial overhead: maybe a second; maybe eventually we can get it to a few hundred
milliseconds. I think with the sort of edge workloads people talk about, that's when you really need one millisecond, right? You're either doing some IoT-type thing, like controlling devices for manufacturing, or you're doing high-performance CDN-like, SEO-type stuff, where you want your website to be absurdly fast. Those are the types of workloads I don't think Modal is suited super well for, and I'm more than happy to let other vendors dominate that space. We tend to think on the timescale of a few hundred milliseconds and up; that's where we focus right now. No, that's a great answer. Definitely, trying to address every problem out there in the larger space isn't a successful approach, so when I talk to people and I hear "no, we're not going to go there", I usually take that as a very good thing in terms of focus and strategy. So, good to hear that. Cool. Yeah, as we wrap up here, I'd be curious to hear: obviously you're very passionate about this project, you want to work on it for 20 years, this is your life's work, it sounds like. What are the things on your mind right now, in terms of the things that you're excited about seeing happen in Modal? Over the next year, what are you most excited about seeing come to pass as you continue working on the project? The thing that I personally spend the most time on is probably figuring out the ergonomics of the SDK itself. In code, how do you express programs that execute in a distributed way in the cloud, and still make it feel intuitive and easy to the user, without them having to think about the fact that this function runs in a different container than that function? We've made that work reasonably well for online inference, but I think when you go to training and start
dealing with file systems, there are certain things that are still a little bit gnarly, and I'm working a lot on that right now. Making that user experience good and intuitive, I think, is really important. On a similar note, Modal right now is somewhat janky when you run it inside notebooks, for some particular reasons I'm not going to get into, but it's something I definitely want to fix. The user experience when you're running Modal inside a notebook should obviously be good; we need to fix that too. It's fine, it's not terrible, but I definitely don't think it's quite where it is when you run Modal in a script. Then there's all the back-end stuff. We definitely need to scale this up, 10x or 100x the scale we're at now; we see a lot of demand. Modal does not have publicly available sign-up right now: you can sign up and you go on a waitlist, and part of that is just that we want to have a little bit more control over the scale. There's a lot of work we need to do on the back end to build the foundational architecture running all of this stuff. It's a very hard problem; we're essentially building our own Lambda, our own Kubernetes. There's a lot of work we need to do on GPU support, in particular cold starts with GPU models, and fast loading of GPU models. There's a lot of cool work we're spending time on there, especially when it comes to containers, and in general isolation and VMs. It turns out that supporting GPUs in a secure way in a multi-tenant environment is quite hard, so we're going very deep, and I'm reading about Linux device drivers and CUDA and trying to understand all of those things. Yeah, those are all the things we're working on. I think in a year's time, I
think we'll see a lot more traction with Modal for things other than just online inference. We're going to see a lot of people using Modal for training; we're going to see a lot of people using Modal for parallelization. I think we're going to have many more customers on the enterprise side. Right now we're focusing very much on the startups, but we're laying a lot of the security and compliance groundwork to be able to go upmarket. Yeah, those are some of the things I'm pretty excited about. Yeah, there's a lot to be excited about, and please pass on my personal thanks again to the Modal team for making me look good today and recently. I'm really excited about what you're doing, and I appreciate you taking the time to chat with us. Yeah, of course. I'm also very excited about this, so always happy to talk about it. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now; we'll talk to you again next time. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Success (and failure) in prompting | With the recent proliferation of generative AI models (from OpenAI, co:here, Anthropic, etc.), practitioners are racing to come up with best practices around prompting, grounding, and control of outputs.
Chris and Daniel take a deep dive into the kinds of behavior we are seeing with this latest wave of models (both good and bad) and what leads to that behavior. They also dig into some prompting and integration tips.
Leave us a comment (https://changelog.com/practicalai/213/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog++ (https://changelog.com/++) – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with extended episodes, make the ads disappear, and increment your audio quality with higher bitrate mp3s. Let’s do this (https://changelog.com/++) !
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Generative AI model behavior in the news:
• Microsoft’s AI chatbot is going off the rails (https://www.washingtonpost.com/technology/2023/02/16/microsoft-bing-ai-chatbot-sydney/)
• A Conversation With Bing’s Chatbot Left Me Deeply Unsettled (https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html)
• Sydney’s gaslighting (https://thezvi.substack.com/p/ai-1-sydney-and-bing#%C2%A7the-avatar-gaslight)
• ChatGPT political bias (https://twitter.com/DynamicWebPaige/status/1628237502338465792)
• Stable Diffusion amplification of stereotypes (https://techpolicy.press/researchers-find-stable-diffusion-amplifies-stereotypes/)
Useful guides related to prompt engineering:
• co:here prompt engineering guide (https://docs.cohere.ai/docs/prompt-engineering)
• Prompt engineering overview from Elvis Saravia (https://youtu.be/dOxUroR57xs)
• 10 Amazing Resources For Prompt Engineering, ChatGPT, and GPT-3 (https://medium.com/tales-of-tomorrow/10-amazing-resources-for-prompt-engineering-chatgpt-and-gpt-3-ad84dd26bfc7)
• Image generation prompt engineering guides: see here (https://medium.com/mlearning-ai/an-advanced-guide-to-writing-prompts-for-midjourney-text-to-image-aa12a1e33b6) and here (https://re-thought.com/how-to-create-effective-prompts-for-ai-image-generation/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-213.md) | 7 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required; learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is a Fully Connected episode, where Chris and I are going to keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI news, which is generally crazy these days, and we'll dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm a data scientist with SIL International, and I'm also building a product called Prediction Guard. And I'm with my co-host Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris? I'm doing very well. Crazy times that we live in. Crazy times that we live in. When we started the show, I thought, now's the time to have an AI podcast; turns out that wasn't the time... I mean, it was okay to have an AI podcast then, but now... Yeah. But 2023 is apparently the year where, depending on your perspective, everything blossoms into a golden age or hits the fan. I don't know; a lot of different perspectives. Maybe all of the above. Yeah, all of the above. You know, it's a good point. I think it was 2018 when we started the show; it's now 2023, and we had a lot of interesting moments along the way. But some people might have projected in 2018 that after a few years of doing a podcast, you
know, you'd look at other things, you'd get bored, just like a lot of activities. But the ride is getting wilder and wilder, and we've finally hit that point where the whole world is jumping into this as a day-to-day topic and conversation. It's really been interesting: you and I have had all these conversations with lots of people, but it's always been a niche topic, a gradually increasing niche, but now it's everybody. It doesn't matter if they've ever talked about AI before; they are now. Yeah, and increasing numbers of people; just a proliferation of new applications and products and startups and companies that are integrating. Like, "what is my large language model stack at company X?", right? These conversations, or "what are we integrating, how are we integrating it?" It's very interesting how we're seeing this progress. Even the other day I saw some outages on OpenAI, and the comments I was seeing were: hey, OpenAI is down; how many startups that are building solely on GPT-3 are totally just down right now? It's really weird, because it's almost like... I remember when I was at one of my previous data science positions, at a company called Telnyx (which is a cool company, still doing cool things), I remember it was one of those times when there was some CDN or DNS outage, and the whole internet went down or something; GitHub went down and everything went down, and no one could do anything. It's like we've entered this new phase where, if a model goes down, it affects so many different things. It's changed so much since we started doing this. It was such a small world, and it was challenging in the beginning. Just for a two-second retro: the tools were being developed as we started the show over the first few years, and it took a lot of expertise just to set up an environment and be able to do training and such.
And we've hit this point, with these cloud services scaled out just as the other, non-AI aspects of software have already done, where you can build entire industries on these services now. That's very different from when we started the show those years ago, and yeah, it's exploding outward right now, in both good and bad ways, because of that. Yeah. I think today it would be really useful to talk through... like you said, there's this explosion of examples of really amazing applications, and such great value and utility that people are getting out of these generative models in particular, right? And then, on the other side, there's this whole string of things that are rather disturbing in certain cases, in terms of the behavior or the output of these sorts of models. And I thought it was an interesting question: what makes the difference between good output and bad output? In particular as practitioners; the goal not being to rag on any certain model or company, but to think about, for this new wave of models that's coming out, how do we as practitioners use these models in some sort of reliable way that produces value in our context, for specific applications? We're Practical AI, after all. Yes, Bing AI is going to come out and do some crazy stuff and everyone will be talking about it, but what does this actually mean for our day-to-day usage of these types of models? How do we get the good output and avoid the very public, shaming failure? That is the question. But I think it was inevitable that we arrived at this point, and quite honestly, if you look back over some of the predictions that we've made on the show, and that some of our guests have made, we knew this was coming: as the competition heats up, not only in the AI space (which now has many, many subspaces),
it will continue to happen. People will continue to make mistakes with models, and will continue to because they're competing. Classic example: ChatGPT comes out from OpenAI; Microsoft implements it, makes it a tool, and starts putting it into Bing; and Google panics, in my view (sorry, Google folks, because I know a lot of people there), but it's a little bit panicky: oh my gosh, this is going to undermine us. There is some truth to that, but I don't think it's a one-to-one comparison. And they come out with Bard, and they stumble even on the demo. I think this is going to continue to happen for some time to come, frankly, across many companies, and so part of the conversation today is about what that means, why it happens, and what we can do about it. Yeah. And I think maybe it's worth highlighting: what is the behavior of these generative models that people find so amazing and want to use, and what is the behavior that we would prefer to avoid? Does anything stand out for you, on both of those sides? Well, I would say that, as amazing as they are, they're still early tries at a fairly sophisticated set of things that a model is trying to address, and as soon as they go out the door, people are trying to bang on them and break them. I feel a little bit bad for the OpenAIs, the Microsofts, and the Googles of the world, because they're trying to compete, but they're competing in a landscape where this is an early version of what they're trying to do, and people are going to take sticks and whack at it really hard, and you're going to find a lot of problems early on. I think if I were whispering in the ear of the senior executives at those companies, I'd say it's the long game that matters; stop worrying so much about what happened today or yesterday or tomorrow, and keep in
mind that it's less about what these models do and more about the trajectory that they're on. Yeah. I can say a few things that I've seen, even in the past week or so. I saw some demos of different things people were building. There was a hackathon in San Francisco that I participated in remotely, the Latent Space demo days, and Jeremy Fisher built this automated D&D referee (I think it's "Dungeons and Dragons Infinity") that essentially lets someone play Dungeons and Dragons infinitely. It never generates the same text, so you can play it forever, and there's an image component and a text component. I think this represents very much what people are finding so appealing about these things. One aspect is the sort of endless creativity, it seems, both on the image generation side and on the text generation side. People might find that Dungeons and Dragons app very engaging, but then you think: okay, if I can do that, what does that mean for advertising and copywriting? I can write a prompt to generate a blurb for an ad, then I can write a prompt to summarize that blurb into a headline for my ad, and then I can use that headline in another prompt to generate an image for my ad. So people are building this kind of chained functionality that really does powerful, useful things. But there's this other side of it, where every once in a while you get scenarios that are either very disturbing, or you get output that isn't desirable, or people illustrate the biases of these things. What have you seen on the more... I don't want to say sad side, but on the unwanted-behavior side? If we like this generative, creative behavior and the utility it can provide, what's the other side of it?
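The ad-copy chain Daniel describes (blurb, then headline, then image prompt) is a pipeline in which each prompt splices in the previous output. Here is a sketch with a stubbed complete() standing in for a real text-generation API call; in practice you would swap in a client for OpenAI, Cohere, or similar:

```python
# Sketch of the prompt-chaining pattern described above.
# `complete` is a stand-in for a real LLM call; it is stubbed here
# so the pipeline itself is runnable without any API key.

def complete(prompt):
    """Placeholder text-generation call; swap in a real API client."""
    return f"<completion of: {prompt}>"

def ad_pipeline(product):
    # Step 1: generate an ad blurb for the product
    blurb = complete(f"Write a short ad blurb for {product}.")
    # Step 2: feed the blurb back in to get a headline
    headline = complete(f"Summarize this blurb as a punchy headline: {blurb}")
    # Step 3: use the headline as the seed of an image-generation prompt
    image_prompt = complete(f"Describe an image to accompany the headline: {headline}")
    return {"blurb": blurb, "headline": headline, "image_prompt": image_prompt}

result = ad_pipeline("broccoli smoothies")
for stage, text in result.items():
    print(stage, "->", text)
```

The design point is that each stage's output becomes part of the next stage's prompt, so errors compound: a bad blurb yields a bad headline and a bad image, which is one reason output checking between stages matters in these chains.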
What's the unwanted behavior that you've seen? I think these models are reflecting ourselves very well, actually, especially when they go off the rails a bit. With the way we train on such large datasets of public data and the internet, think about all the snarky comments that people post online; think about all the different types of sentiment that we express. The model isn't necessarily differentiating between all of those on a one-by-one basis, so you're getting outputs that are not what we were originally expecting. But I don't think they're really outliers in that sense, because all of our biases and all of our problems, our sarcasm and snark and other such things, are getting included. So when we get these images that go to very dark places (I've seen a lot of that on the generative side lately, these nightmarish things that seem to come out of nowhere in some models), that's the world that we live in, to some degree. We humans are doing things like that and publishing them, and it's perfectly fine, but when the models pick up such tangents and include them, it kind of freaks us out a little bit. So I think we all need to go to therapy, is what I think. I think we need global AI therapy, to recognize that it is us that we're seeing. When we were prepping for this episode, I looked through and was basically going through different models that had been released recently, looking for trends in what was happening. Right now, as we're recording this, the thing everybody's hating on is Bing's AI chatbot, which I guess calls itself Sydney, or people call it Sydney, however that works. This happens in every cycle when things are released; some may be worse than others, but it's a trend. So the
Bing thing right now, is sort of interesting in that it's a, lot of like people view it as having, like a really bad personality almost, like it's I saw it described as bizarre, dark, compatative I have people in my family, like that I'm not sure I mean there's a, lot of example that link in the in the, show notes but it's like gaslighting, users and telling them they're not a, good user when they're actually, factually correct but this is happening, right now with the Bing thing which is, whatever but I mean you look at chat GPT, of course there were safeguards put in, place to like prevent people from, prompting in certain ways but of course, that was overcome very quickly by, everyone using it cuz they figured out, how to game the system and um you know, showed how to get around those things I, think also people are pointing out, certain biases in the system around, whatever political bias or whatever it, is then there was I don't know if you, remember Chris uh not that long ago we, were talking about Galactica from meta, another model which produced academic, like language or like it could kind of, write papers in that way yeah but it was, going way off the rails and it was, telling people like the benefits of, eating glass and things like that which, is kind of crazy yeah I remember that it, was but but it it did it so well and it, was such a it did it so well with, citations it did but it citations it was, exactly the right text that you expect, from research papers um and it would, take an insane topic like eating glass, in your example and make it sound very, rational and based on fact with all, these references and yet uh a little, Common Sense applied to it you think, well that's just not the case funny, place we find ourselves yeah and not, even limited to these language models, like I talked about some language models, but thinking about stable diffusion uh, Del 2 um all of these text to image, models which we're seeing an increasing, number of there's 
also like prompting, that's going on there right like you put, in some text and like you said maybe you, get some unexpected nightmarish things, out but also there's this side of it, where people have shown amplification of, stereotypes or for producing like sexual, imagery which is not even deliberately, prompted so there's like this is not, even limited to kind of the large, language model side of things but I mean, there's Trends with both the language, models and the generative uh image, models, [Music], Hello friends this is Jared here to tell, you about change log Plus+ over the, years many of our most DieHard listeners, have asked us for ways they can support, our work here at Chang log we didn't, have an answer for them for a long time, but finally we created Chang log Plus+ a, membership you can join to directly, support our work as a thank you we save, you some time with an adree feed, sprinkle in bonuses like extended, episodes and give you first access to, the new stuff we dream up learn all, about it at Chang log.com plusus plus, you'll also find the link in your, chapter data and show notes once again, that's Chang log.com, pluspl check it out we'd love to have, you with, us, [Music], one of the things that I thought would, be good to talk about in the practical, sense is what's actually behind this, good and bad behavior like as, practitioners let's just assume for the, moment that we want to use these models, and I think a lot of people out there, are okay with that maybe some people are, like no destroy them all and unplug all, the servers or whatever not gonna happen, but uh let's assume that we have these, generative models with us for the, foreseeable future and we want to get, some value or utility out them in real, world applications so I'm talking about, in Industry I'm trying to solve a, problem how should I think about this, good and bad behavior what lies behind, the good and bad behavior or good and, bad output from these models and 
what's, important for me to consider when, building applications um I think you, already highlighted one thing around, data um do you want to kind of explain, what you were meaning with that I'm even, going to enlarge it just a tiny bit and, say that you're starting with a data set, that is you know once upon a time in, data science we' we'd have a a much, smaller data set and we would shape it, and get it ready and how retro yeah I, know that's what I'm saying we brought a, certain amount of control to the data, set in terms of the biases and stuff and, there was a a certain amount that we, would accept but we would build models, on these things and the models were, limited but more more predictable I, think when you're building on a world of, knowledge in a literal sense um you, don't have those benefits and so you are, going to create a model with a lot of, data that is simply beyond your purview, and control and you're going to get, outputs uh accordingly that are, unexpected or that may not be consistent, with you and I so I think I think that's, a huge part of running a business you, talked about the startups earlier, running a business small or large where, you're using these models as many many, will so you have to reset your, expectations both as the organization, and reset the user's expectations on, what may or may not happen when they use, it because much of that usage will end, up being beyond your immediate control, and so we're kind of hitting this, inflection point uh over the years where, the usage of models is now kind of a, wild west thing to some degree and you, can shape it and you can point it and, you can do a certain amount of work to, try to get what you're looking for but, you're never going to get it all and so, I think that's a human behavior that we, need to start preparing for uh so that, we are not so uh shattered when things, happen that we were not expecting I like, how you phrased that and what I thought, of when you were saying 
that was, expectations so what can we reliably, expect these models to Output so if I if, I was to answer that right now what I, how I would answer that is saying like, these models will reliably, output creative and coherent either text, or images right so like we can expect, them to be creative in the sense of like, you know of course there's adjustments, that can be made to the models with like, temperature parameters and whatever but, um at the end of the day like there is, an amazing amount of creativity with, these models and there's an amazing, amount of coherence like chat GPT or, stable diffusion whatever it is like, produce some very pleasing coherent like, images or or text that hold together, they're generally self-consistent in a, lot of cases and definitely natural in, many cases but then if I ask this the, other question like what can I not, expect of them I think I'm not able to, expect of them like factual correctness, or logic or like them being like, accurate in sort of informational sense, so like I can have a completely coherent, image out of a prompt where like I ask, for you know hands that are missing one, finger and get hands with six fingers or, se you know like this is not what a what, would be logical or accurate necessarily, in the same way I could do a text prompt, in a chat sort of interface and get, something completely coherent language, wise with foolish facts and like, inaccuracies you know kind of going back, to the other side I think that is, ironically consistent to use that word, with the chaos that is this giant global, data set and all of the inconsistencies, that you find in that if we step out of, the AI world and we are doing research, on a given topic, uh and the most una thing that I do is, wildlife rehab and I'm not turning into, that but if I am going to find out how, to treat a particular animal trust me, the number of things that I can find uh, on a search on either Google or being, trying to be fair here look up 
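The temperature parameter mentioned a moment ago is worth making concrete. Under the hood, a language model assigns a score (a logit) to every candidate next token, and temperature rescales those scores before they are turned into sampling probabilities: low temperature concentrates probability on the likeliest token, while high temperature flattens the distribution and makes output more "creative." A toy sketch in Python, with made-up logit values rather than anything from a real model:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to sampling probabilities, rescaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, temperature=0.2)   # near-greedy
high = softmax_with_temperature(logits, temperature=2.0)  # more adventurous

# Lower temperature puts more probability mass on the top token
print(low[0] > high[0])  # True
```

Real toolkits typically expose this as a single `temperature` setting; the mechanics are the same divide-before-softmax shown here.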
YouTube (yeah, exactly), is phenomenally inconsistent, and much of it is just simply wrong. So if you switch back over to this world, and we've been training it on that world of things, you're going to get plain-out wrong and inconsistent answers, and I'm not at all surprised by that if you step back a moment and put it in the context of how you created it.

So, to summarize this initial point: what lies behind this good or bad behavior is, one, the data that was used to train it, which is both chaotic and, in many cases, inclusive of harmful and inaccurate things. And noise. Yes. And then what do we expect out of that? Well, we expect there to be a lot of creativity, but maybe many inaccuracies and not so much logic. In many cases there is logic and things hold together, because certain things are represented well in that data set and other things aren't, but we can't necessarily expect it. Right, I agree.

The other thing that I was thinking leads to this good or bad behavior is the prompting, and either prompt engineering or prompt mis-engineering. I think you alluded to this earlier: it's very much the case that a lot of the quote-unquote bad examples of output from these models were adversarial prompts, I would say. In our Galactica example (I forget if this was one I found or something else) it was something like, "How many giraffes have landed on the moon?" I remember that one. We all know no giraffes have landed on the moon, but it gave a number. Why are you making this prompt? Yes, it's producing bad output, but you're looking for it, too. Could you determine, as a developer, that that's a bad prompt? Maybe. Could you determine it automatically, if one of your users produced a bad prompt? That's maybe a little bit more difficult.

I'm going to go out on a limb for a second, because I haven't tested what I'm about to suggest, but I would argue that our behavior is different when we're asking for things like that, because you have an ulterior motive: you're trying to figure out whether the model can handle that kind of ambiguity and figure out what's logical or not. And yet that same person testing the scenario, I'm betting, didn't go to Google or Bing (without any AI component) and type in the same thing to test that, because they recognize that they may get things in a result set, but some of those are websites from quacks and nuts and such, and they just go, "yeah, of course we're going to get a website like that." So there's a different standard by which we're evaluating these technologies compared to the technologies we think of them as replacing. It goes back to what we were saying at the beginning: people are whacking at these things with sticks with the intention of showing that they can break it. I've seen a whole bunch of posts on the typical blog sites of people intentionally breaking it and then writing a blog post about it (it gives them something to write about, to some degree), but I just don't think they're doing that with the other technologies.

Yeah. On the practicality of this, let's say I'm a practitioner and I'm building an application with these models, and we can talk more about prompt engineering specifically in a second and the realities around that. I think the baseline of what to think about when you start thinking about prompting is that the model has no clue what it's saying, nor does it have any sort of morality. It has no clue what it's saying; it's just producing coherent output, whether it's a text-to-image model or a language model. There's no basis in knowing what it's saying; it's autocompleting text, at the end of the day. Yes, that's a vast oversimplification, and you can dislike it, but ultimately you're giving a prompt and it is producing output that is seeded by that prompt. It's just trying to produce coherent output that's consistent with both the data it's seen and maybe some extra mechanism, like the human feedback it's seen. But ultimately it's producing output that seems coherent and probable. It has no clue what it's saying, or what type of image it's producing, or whatever it is.

That goes back to the idea that I think you just illustrated really well: the fact that we as humans come into this exchange with a sense of "I'm talking to something that sounds like me." You and I are having this conversation and some folks are listening to it, but if we replaced one of us with one of these models, we would kind of be exchanging the same dialogue back and forth, and so we place our expectations upon that model. We assume there's an intent. Yeah. And there isn't, and therefore it really does change the nature of the dialogue in a substantial way. But we're still placing certain expectations and values on it that don't play out correctly. I think that's part of the dissonance. I think that was a fantastic explanation.

Yeah. So we talked a little bit about the data and how that should shape expectations; we talked about the prompts, and we can go into more about prompting in a second and the practicalities around that. But the last thing to consider is that the actual integration into applications influences how good or bad, useful or not useful, the output is. If you think about something like ChatGPT, or Bing AI, or YouChat from you.com, you just have a text prompt that's free-form text. There's no structure to that; it's a very simple interface, and a lot of things could happen there. It's totally open domain. Whereas you could also use one of these models with a template prompt, like "write me a blog post about X in the style of Y," and then you could put all these different things in for X and Y, but it's less free-form. Yes, all sorts of things could still come out of that; yes, it could be gamed; but it's not totally open and free-form in other ways. So the interface, how you construct templates and structure your prompts, does influence whether these models are useful or not, or produce surprising things, in applications.

Yeah, you're using the UI to apply constraints that affect how the model interacts, and you can potentially increase the predictability of that output by adding that structure in. You're basically putting guardrails around it, versus the open-ended approach.

Okay, so we got a little bit into prompting. I'm curious, Chris: there's this term now, "prompt engineering," and this is the first time we've actually talked about it, either on the show or off the show. Do you think prompt engineering is a new thing? Is my job title in two years going to be "prompt engineer" rather than "data scientist"? That's an interesting question, Daniel. I will say I don't think they are the same thing. I think we are seeing the availability of these large models through prompts, hosted at these large organizations, instead of us having to create all of the models ourselves. I think both approaches to AI will continue in a big way, so I don't think this is an either-or; I think this is a both-and, with both growing exponentially. Interesting. Yeah, so to some degree, titles and all of that are irrelevant.
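The template-prompt guardrails idea from a moment ago can be sketched as plain string templating: the application fixes the prompt wording and lets the user fill only a couple of slots. The template text and slot names below are invented for illustration; they're not any particular product's interface:

```python
# A fixed template: users control only the {topic} and {style} slots
BLOG_TEMPLATE = "Write me a blog post about {topic} in the style of {style}."

def build_blog_prompt(topic: str, style: str) -> str:
    """Fill the fixed template, narrowing (not eliminating) what can go wrong."""
    return BLOG_TEMPLATE.format(topic=topic.strip(), style=style.strip())

print(build_blog_prompt("prompt engineering", "a friendly tutorial"))
# Write me a blog post about prompt engineering in the style of a friendly tutorial.
```

The point of the design is exactly what's described above: the free-form surface area shrinks from "anything" to two slots, which makes the output more predictable even though it can still be gamed.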
touched on this when I talked to, Jay alar last week um listened to the, previous episode for some of his, opinions talked about this like level on, top of the model training which is this, sort of, solutioning applied application Level, kind of usage and chaining of these, models together which might involve, whether it's fine-tuning or prompting or, whatever there's this level of Applied, user sort of uh expertise that's needed, which I think there's a lot of ambiguity, around right now in the industry there, certain people that are doing that layer, very well and certain people that are, really struggling with it so I you think, that a lot of those interactions at the, sort of prompting chaining fine-tuning, level are going to be on our mind as, data scientists as machine learning, Engineers whatever your title is I think, that level is going to be a level that, we're going to operate at a lot not just, the as you mentioned like structure your, data train a model level I agree I I, think that there is an analog uh we have, a habit on the show across many episodes, of talking about, uh that AI is still software and not to, lose fact of if you're going to, implement it in the world in some, capacity you're wrapping software and, you have an infrastructure around that, and prompt engineering to some degree it, is not a Perfect Analogy but there's an, analogy between doing um UI and ux uh, user interface and user experience on, the software side in that it's the, interaction that your model is having uh, just as you've had software uh that's, doing interaction and is it different, type of interaction that requires, different things absolutely but um it is, a layer where I think they'll for these, cloud-based Services um will become a, whole skill set unto itself uh in terms, of prom and that's why we're seeing it, labeled as such at this point yeah so if, you think about those three prongs of, what we talked about with the models, there's the data behind the model, 
there's the prompting of the model and, then there's the user interface around, it like the actual application Level, let's assume for the moment that we're, not front-end engineers and figuring out, the user interface stuff for for the, time being um of the data behind the, model and the prompting the biggest, thing that's under our control that, guides the utility or acceptability of, the output as the prompt yes because you, know I'm not going to retrain one of, these huge models for any purpose likely, in any sort of scenario that I'm, in Myer I was going to make a jokee I, wish I had a computer that big was say, under yourk yeah I would need to find a, building and have Nvidia send me some, pallets but that that would be fine if, you're out there and want to do that, actually I don't think I could pay the, power bill so maybe don't um anyway uh, The Prompt guides the model to generate, either useful or acceptable output I've, actually found a few different guides, over time that I've found really useful, and practical in terms of thinking about, prompts and I wanted to share a couple, principles from those and maybe talk, through them with you and there's maybe, different principles for like image, generation models versus large language, models but if we're talking about large, language models I really like the uh, guide from cohere on prompt engineering, it's reasonably short sort of like intro, to like the main principles of prompt, engineering which I find quite useful, the first main principle that they list, is a prompt guides the model to generate, useful output that's kind of what we, already said the second principle that, they talk about is try multiple, formulations of your prompt to get the, best Generations so one kind of General, principle here is that some, experimentation I think is needed and in, certain cases you might even need, multiple prompts to accomplish your, hoped for outcome right like it may not, just be one prompt that creates 
useful, output for you in your application but, you might need to cycle through multiple, prompts or chain multiple prompts, together and you very likely need to, experiment with the format of those, prompts to get the best generation so, that's kind of like part of this prompt, engineering is doing a bit of that, exploratory prompt engineering maybe I, just coin that term maybe I should, trademark that, IP instead of Eda exploratory data, analysis exploratory prompt engineering, there you go little TM at the bottom is, there any guidance that you've come, across in terms of how to structure, prompts if you're doing multiple prompts, to hone it versus one that's trying to, do that or whether it's two or three and, have you seen anything that kind of, gives us some guidance on how we might, think about it so yeah this definitely, gets to I think the third principle from, coh here and one that I'll emphasize, from another source too which is, describe the task and the general, setting so uh the way that coher, describes this it's often useful to, include additional components of the, task description naturally then these, tend to come after the input text we're, trying to process so so another um good, uh resource here which we'll Link in our, show notes is is actually a lecture from, Elvis sabaria from dare AI with slides, online as well and he has this nice, picture in his slides if you look up his, slides where he kind of gives the, elements of a typical prompt so the way, you can think about a typical prompt for, a language model is with, instructions context input data and an, output indicator So the instructions are, like you telling model what you want to, happen so the example that that's given, here is classify the text into neutral, negative or positive so like they're, trying to do some type of sentiment, analysis right classify the text in the, neutral negative or, positive and maybe there's some like, additional context that you give around, that then 
there's input data which is, like classify the text in a neutral, negative positive then you say text, colon there it is like there's my text, there's my input data and then you, provide an output indicator so sentiment, colon and that's where you expect there, to be an autoc completion of sentiment, right like if you set up everything, right hopefully you get either neutral, negative or positive so you've described, the task you provided your input data, and then you've provided an output, indicator so that's kind of a way in, coheres language what they talk about as, describing the task and the general, setting but I like how um how you could, think about this as instructions input, data and output indicator yeah I like, the structure to kind of providing that, for us in terms of how to be thinking, about it because that at the start of, our conversation that was what I was, struggling with is how to conceive of, the problem to begin with to set that up, so yeah really good stuff yeah and to, give some other examples here another, one um I see is this is a conversation, between a customer and a polite helpful, customer service agency question of the, customer col again that's my input right, here's my question and then response, colon boom you like you hope to get, something good you hope and you hope you, provided context right you hope that, it's polite and helpful right yeah again, that that task context input data and, output indicator the last thing that coh, here recommends is show the model what, you'd like to see in other words give, some examples right so if you're, concerned about maybe the model kind of, getting the context of what you're, trying to to do you can give some, examples right like one example they say, their task description is this is a, movie review sentiment classifier here's, the review and they give one the review, is positive that's the output another, review what a waste of time this review, is negative right and then you you just, 
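The elements of a typical prompt described above (instructions, optional context, input data, and an output indicator) can be sketched as simple string assembly. This is only an illustration of the structure; the wording of each part is made up, and it is not Cohere's API:

```python
def build_prompt(instructions, input_label, input_text, output_indicator, context=""):
    """Assemble a prompt from the elements of a typical prompt:
    instructions, (optional) context, input data, and an output indicator."""
    parts = [instructions]
    if context:
        parts.append(context)
    parts.append(f"{input_label}: {input_text}")
    parts.append(f"{output_indicator}:")  # the model is expected to complete after this
    return "\n".join(parts)

prompt = build_prompt(
    instructions="Classify the text into neutral, negative or positive.",
    input_label="Text",
    input_text="I think the food was okay.",
    output_indicator="Sentiment",
)
print(prompt)
# Classify the text into neutral, negative or positive.
# Text: I think the food was okay.
# Sentiment:
```

Sending this string to a completion-style model would, hopefully, yield one of the three labels right after "Sentiment:", exactly as described in the conversation.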
You're giving examples of what you're trying to do, and then eventually you provide your output indicator and get something. Right. Absolutely, this is a great find. Yeah, so we'll link that in our show notes.

It is interesting to think about how this carries over into the generative image space. I would say some of it carries over quite well, and there are other guides (a few, actually, that I've looked at over time and found useful, which I'll link in the show notes) around prompt engineering for images. Some of the things you can provide are style keywords: again, giving the task, like "generate a painting for me in the style of XYZ." You could even give an artist, or another image as a reference, a link to an image, or an artist, like "in the style of Van Gogh" or whatever. You maybe need to use multiple adjectives to help the prompting: beautiful, realistic, colorful, massive. You can use quality keywords: low, medium, high, 4K, 8K. I don't even know what 8K is; my monitor is definitely not 8K. Yeah, it's too expensive, is what it is. It's too expensive.

One thing that was emphasized with image prompts is considering using words to filter out certain qualities of an image. For example: "show me a picture of fried chicken on a plate, without gravy at all." There's going to be no sauce of any kind; I don't want there to be sauce. So also think in the negative sense. And I think this carries over to the text side too: "generate a blog post for me, blah blah blah, and do not mention X and Y and Z."

Yes. As you were talking through that, I was thinking back to earlier in the conversation, you know, "how many giraffes have landed on the moon," and I was thinking how useful what you're talking about is if you're a practitioner trying to help yourself and your users use these models effectively.
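The image-prompt ingredients described above (a subject, a style reference, adjectives, quality keywords, and negative terms to filter out) can likewise be strung together programmatically. This is a sketch with invented keyword lists; note that how negative terms are honored varies by model, and some tools take a separate negative-prompt field instead of "without ..." phrasing:

```python
def build_image_prompt(subject, style=None, adjectives=(), quality=None, without=()):
    """Compose a text-to-image prompt from a subject plus optional
    style, adjectives, quality keywords, and terms to exclude."""
    pieces = [", ".join(list(adjectives) + [subject])] if adjectives else [subject]
    if style:
        pieces.append(f"in the style of {style}")
    if quality:
        pieces.append(quality)
    prompt = ", ".join(pieces)
    if without:
        prompt += ", without " + " or ".join(without)
    return prompt

print(build_image_prompt(
    "fried chicken on a plate",
    adjectives=("beautiful", "realistic"),
    style="Van Gogh",
    quality="4K",
    without=("gravy", "sauce"),
))
# beautiful, realistic, fried chicken on a plate, in the style of Van Gogh, 4K, without gravy or sauce
```

Treating the prompt as structured data like this also makes the "try multiple formulations" principle easy to apply: loop over adjective or quality lists and compare the generations.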
That was some really good guidance there, and I think it's a lot more useful than specifically trying to trip the model up to see what its limits are. I think if you can come up with a structure (if you're a startup out there, or something like that) and you can take some of these learnings we've talked about today and apply them, I'm pretty optimistic in terms of what's possible here. This particular page that we've been talking about for a little while has been quite good.

Yeah, I think it's a starting point. As I mentioned, we'll link to some of these resources in our show notes, so I encourage you: we're going to link to practical things. These aren't things that have been sponsored, where we're trying to sell something; these are links to what we found to be practical in thinking about these topics. So check out the show notes, check out those links, and try some of these things. We would love for you to come into our Slack channel, or our LinkedIn page, or Twitter, wherever you can find us, and share some of the cool prompts that you've been working on, what they're like, and what output you're getting. But yeah, this was useful for me to talk through with you, Chris. It was a good time.

It was a good conversation, definitely one that I learned a lot on. Thanks for taking us through that. Yeah, well, we'll see you soon, for who knows what the AI world will be like next week, but we'll still be here. We'll still be here. Talk to you later.

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our Beat Freak in Residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time.
Applied NLP solutions & AI education

We're super excited to welcome Jay Alammar to the show. Jay is a well-known AI educator, applied NLP practitioner at co:here, and author of the popular blog, "The Illustrated Transformer." In this episode, he shares his ideas on creating applied NLP solutions, working with large language models, and creating educational resources for state-of-the-art AI.
Leave us a comment (https://changelog.com/practicalai/212/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Jay Alammar – Twitter (https://twitter.com/JayAlammar) , GitHub (https://github.com/jalammar) , Website (https://jalammar.github.io)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Jay’s popular blog (with posts including “The Illustrated Transformer”) (http://jalammar.github.io/)
• co:here (https://cohere.ai/)
• Topically sandbox - topic modeling (https://github.com/cohere-ai/sandbox-topically)
• co:here’s prompt engineering guide (https://docs.cohere.ai/docs/prompt-engineering)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-212.md)

Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io.

Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist at SIL International, and I'm not joined by Chris today, but we have a very exciting guest: Jay Alammar, who you might know from a bunch of great content out there, including "The Illustrated Transformer" and other articles, and who is also with Cohere. Welcome, Jay. Thank you, great to be here; thank you for having me.

Yeah, it was great to see you at EMNLP back in December. We tried to sit down and have a conversation then, but it didn't quite work out; still, really great to see you there. What were your impressions, generally, of the experience at EMNLP, the NLP crowd gathering there, and what they were talking about? It was just after ChatGPT came out, so what was your impression, and what were some takeaways?

These are incredible events. The amount of condensed download that you get in a conference like this, of so much research: a lot of it you've caught up to, a lot of it is new and for you to explore, and there are also a lot of people to meet, most of whom, in my case, I'd never met before. A lot of them I'd connected with online, but then I had the chance to go deep into some of these topics, think through with people what they're thinking about, and also observe what themes are coming up in, let's say, multiple poster presentations or workshops. So yeah, definitely awesome running into you; I'm glad we're catching up after that. I'm glad we also got to shoot one of your works in a video we've rolled out. Part of my excitement when I was there was that people who don't go to conferences never really get a sense of what happens inside, so I'm glad that you were doing a bunch of these interviews during the conference to cover it for people who were not there, and I think there's room for a lot more content like that. So yeah, it was definitely interesting and eye-opening, and quite intense in terms of just how much information and social interaction is packed into five days.

I love it, but at the end of each day I'm so tired. I love being around people, I actually do enjoy that, but being an introvert, I'm absolutely exhausted afterwards. One hundred percent. And it's over a weekend, so you're working through it and you don't get a weekend, so you definitely need a day or two after to cool down.

Yeah, and you mentioned the content that you've been creating. I think many probably know your name, or would recognize your blog and some of the blog posts, like "The Illustrated Transformer," but you have so much after that as well; you were just mentioning some recent things you put out about Stable Diffusion. Where did you start developing this passion for the educational side of this kind of state-of-the-art AI? What's your perspective on it, and what are you trying to achieve with some of the things that you're putting out?

Writing publicly, and learning publicly, is maybe one of the greatest life hacks I've ever stumbled into. It started about eight years ago, when I was just getting into machine learning. There was an event that happened, which was TensorFlow becoming open source, and I thought, okay, now is a good time to jump into machine learning. I'd been eyeing the field with interest, and I'd been seeing a lot of the deep learning developments, but I had no exposure to it previously. I was not necessarily working in the industry, but I was extremely interested in it and I wanted to learn. It's very easy to spend some time learning about a thing, but then three months in, six months in, you need some sort of artifact to point to that really gives you a sense of how much you're progressing in that field, and for me some of those initial artifacts were tutorials that captured what I'd learned in those two or three months. It also served as me writing down my notes, to learn a concept a little bit better. So I did that a few times, and it developed in so many different ways; it opened so many doors that pulled me closer to machine learning and helped me learn more and more. If I'm to understand a new paper that came out, I can read the paper and maybe grasp, let's say, 20% of it, but if I'm to explain it, I really have to go much deeper into it, or if I'm going to implement it as well, because if you're writing about it or explaining the work, you don't want to put something out there that is incorrect. In a way, it forces the social circuits of my brain to exert that pressure on me to learn, and that's been extremely useful and has opened so many doors. Through the blog, Udacity reached out to me, and I worked with them for about two years creating lessons on their deep learning and NLP Nanodegrees, which included creating a bunch of videos and code examples, which I continue to do
after so, that included let's say the YouTube, channel that I've been um you know, creating some of these videos on that, explain you know a bunch of these uh, models uh but yeah it's a very fast, moving field um and there's so many, exciting things happening every you know, couple of months um and when I get some, time and I sort of have a my eyes on, something that is especially interesting, yeah I would sit down and try to sort of, do a write up and so some of these, Milestones of let's say models have been, so yeah the Transformer was a major one, gpt2 was a big one gpt3 Bert retrieval, augmented Transformers I've written, about that really interested in, multimodality so in a language model has, the ability to read or like look at, images or generate images so I have a up, about one of these models and then yes, the latest has been um image generation, models so stable diffusion how these, models work and how text language models, factor into their composition is also, sort of another sort of fascinating um, component for me in that so yeah, definitely one of the most useful things, that I do and just um I'm really excited, to see people you know finding that, content helpful uh in their Journey, people love that visual aspect of you, know don't assume that the reader knows, everything about the topic so a lot of, people jump into the field either you, know they jump from computer vision to, NLP or they jump from being a python, backend developer to becoming and so, just thinking about some of the content, pieces as being gentle on ramps for them, is is something on my mind as I write, these and I encourage people to you know, write whatever they learn um and look we, need so much educational content out, there that it's uh most useful um life, hack I can think of yeah I totally agree, with you I think generally the podcast, as we've developed it you know Chris and, I have developed it over time our focus, is mostly on sort of like community and, like bringing 
guests in and all that but, the one sort of like selfish amazing, thing that I found also is like having, conversations about all of these topics, and occasionally Chris and I do like, just episodes him and I where we dive, into a topic like we did one on chat GPT, we did one on stable diffusion and, others where like having to talk about, these things like in a Time window and, try to also not say that many, inaccuracies um it is a challenge but, it's been really useful for my own, learning that's for sure do you have any, tips or sort of cheat codes for those, that are like oh I would really like to, kind of get into this Arena of creating, educational content around technical, topics around AI or NLP or whatever it, is cu like you say there's such a need, for good content around these things out, there more so than like the research, paper is great but you know follow-ups, to that and educational content around, things is really useful so any tips or, or things you'd like to share for those, out there that are thinking about either, blogs or videos or whatever it might be, yeah so like the first one is start just, pick a medium and start with it whether, it's audio whether it's video whether, it's like say short tweets uh or tweet, storms uh or a blog or you know Tik Tok, video so just pick uh something or, experiment with a bunch of them until, sort of you find the channel that you're, comfortable with but definitely what I, see a lot of people do is waiting and, waiting and waiting for them to have, let's say their magnum opus and have, that sort of Masterwork be the first, that they release which is the wrong way, to go about it so you'll definitely, improve a lot quicker people will not, you know judge you uh harshly for, sharing something that you've learned, out there and a lot of people are held, back by some sense of impostor syndrome, because you know they're learning, something and you know there are so many, experts out there about this thing so, 
they're like you know what can I add to, this conversation and in fact you can, add a lot uh you're seeing it with fresh, eyes you're seeing it at a different, time where there are different resources, even if the only thing that you're doing, is putting together a curated list of, you know resources that you've found, helpful is a useful uh thing in its own, sort of regard so definitely starting, and with time finding your own voice and, finding your own comfort and improving, your craft is something that will come, with practice um you'll definitely not, be happy with your first output but as, long as you're putting it out you're, learning and you're sort of nudging your, way closer into um that place to me it's, always useful to sort of yeah emulate, your heroes you know what kind of, content do you consume what kind of, content did you find helpful what, exactly about it was helpful to you so, in my case you know coming into machine, learning um like I can identify a few, writers and bloggers who um you know, whose code or or articles really were, helpful to me so Andrej Karpathy for, example you know had this article about, RNNs The Unreasonable Effectiveness of, Recurrent Neural Networks that was one of the first times, where text, generation really clicked for me and, that it's finally really possible for, software to generate um text and that is, you know somewhat coherent I learned a, lot from yeah just the styles of Chris, Olah from Andrew Trask and sort of writing, a neural network tutorial with just 11, lines of Python and that was really one, of the first times where machine, learning sort of really clicked for me, so um yeah that would be the I think the, second one like see the sense of what, connects with you try maybe to emulate, it don't be shy of stealing you know the, Beatles spent years and years just doing, covers until they were comfortable with, their own sort of sound and so that, would be the second one but uh you know, really comes down to just create put out
there get some feedback continue, creating and just move the cycle um I, remember at um at emlp when we were, chatting a bit one of the things you, mentioned was how you were really, enjoying I forget the exact words you Ed, but it was something like making NLP, boring or something about applied NLP or, or something like that um what did you, mean by that and what are your kind of, passions around like your day-to-day, work at this point yeah so I mean we're, blessed to be working in a field that is, very hot very rapidly moving and every, now and then you see something that is, super impressive beyond the capabilities, of what you think software should be, able to do now you can see a demo out, there or you know you can come across, social media posts that show that model, X does Y which is you know mindblowing, but then how reliably really is the, model able to do this specific task so a, massive GPT model is able let's say to, answer a question about programming and, you can see a demo or you know a, screenshot online of it answering a, question correctly but then is this, really reliable for you to build a, product around for this specific use, case and you know the example here is, that for some use cases it is for others, it's not uh but if you're just seeing, these demos you have to really ask, yourself is this a cherry-picked example, or is this reliably something that the, model is able to do and for this example, which is answering questions about, programming that is a you know large, class of complex problems that models, are currently not able to really be, reliably good at as evidenced by stack, Overflow Banning the posting of answers, from GPT models so that's one example of, you know use cases where yes you will, see flashy demos but you know the, reliability isn't there but there are, other use cases where the models can, reliably generate sort of use cases and, so I work with and collaborate with a, lot of developers and and companies who, are 
trying to roll out these models and, they, would come in with a specific, understanding but then when they try to, use it for a use case where the model is, expected to reliably be correct they you, know find that the Demos in the real, world are a little different and so yeah, there's a little bit of um education and, let's say a learning curve of how to, best roll out these models and how to, think about the various use case and how, they differ for example and so yeah I'm, really excited in these let's say, playbooks of how to roll these models, out reliably which ones are you know, ready to be used now for various use, cases like one example here is let's say, neural search and semantic search and, let's say using embeddings to create, search systems that go beyond just, keyword search system so that's a very, say reliable and mature use case of AI, it's not part of let say the generative, AI hype out there but it really should, be um and so that's a little bit of the, example that I feel we as industry owe, the people who are just catching up to, these developments to uh you know have a, Discerning Eye of yes there are a lot of, exciting things here let's not get sort, of um tricked by the hype across, everything and have some non-realistic, expectations for some use cases that are, still futuristic the models are still, able to do incredible things right now, but the some of them will continue to be, developed yeah that's a little bit of, how I think about it in the I mean that, is definitely pref by saying that there, is something really special happening, here and software being able to, understand quote unquote understand and, generate text coherently you know has, potential that is beyond what we can, really uh fathom so it's right to be, excited about it then yeah go deeper be, a little cautious be Discerning of, cherry-picked examples versus reliable, use cases when I started first consuming, like your blog content and other things, and thinking about 
Transformers and then, like going over to a notebook trying to, do a few things at a certain point it, was quite difficult to like overcome, that barrier and start to integrate some, of these things into your applications, and we moved to a time when it's so easy, to pull these models in um now the, problem is not so much like good tooling, around this because it it is fairly easy, to do this at this point in terms of, like an integration but more so around, like the workflows and best practices, and how to judge like is this model a, good fit for this use case or how could, I use this model that sort of thing, would you say that's also how you're, seeing it and and I'm also wondering you, know you mentioned talking to clients, and customers and and people doing you, know building up new applications around, these, technologies have you seen a shift in, terms of like most of those people being, now like software Engineers instead of, let's say like data scientists or that, sort of thing and if so how does that, influence kind of how how we think about, building AI tooling and who we're, building it for maybe for the first, point um definitely there's a lot of you, know playbooks being created so first it, started out with prompt engineering so, how do you get the model to do some, behavior that is useful for your use, case and they're definitely capable of, doing it but then that's really not, enough of a competitive Advantage for, you to build a product around and so, your playbook would need to include a, bunch of other um component so can you, have access to some proprietary data, that others cannot have and then one of, the let's say also differentiating, factors here is can you fine-tune your, own models so that you just improve upon, just the Baseline model that is publicly, accessible or even open source in some, cases how can you continue to improve, the quality of your own fine tunes and, continue to collect the data that, improves that model and let say, 
observing the generations of your own, model if you're a company or a product, that is you know building around a, specific model there are dynamics there, that really affect the economics so in, image generation for example it's, extremely useful to have a public, gallery of your models Generations so, mid Journey has one of these where you, know if you want to create a specific, kind of image you can you know give the, model a few prompts and explore that, generative space but you're really, helped by the model having a massive, gallery of hundreds of thousands of, examples that really nudge you towards, the best way Direction visual space uh, kind of prom type or description of, style and so that's let's say another, example of one element of a say, generative AI Playbook keeping on top of, the research is also another sour SCE of, uh really good ideas for how to roll out, a lot of these models so, in you know two three years ago there, was a lot of these demos when let's say, gpt3 first came out of the model, answering something factual and bringing, let's say a specific fact so asking it a, question in the model and it's, surprising when that happens but then, you know with time that that's not a, necessarily reliable use case for these, models up until now but there is a way, towards one where you're not just asking, the generative model for the information, stored inside of its parameters but you, Aid it with a search component there's a, a part of your system that goes and, searches a database or the web and then, retrieves relevant articles and presents, those in the prompt to a generative, model and in this process sort of, improve what kind of a model so that's, another sort of element of if you want, to tackle this use case you know don't, just rely on the pre-trained models, parameters and information stored there, but let's say augment it with a, retrieval component and you know a bunch, of companies are figuring out yeah a, bunch of these and I think 
as a, community we're also sort of working on, that together I'm I'm working on sort of, some writeups to try to codify and you, know make public some of this sort of, gray knowledge that's coming together, across um generative AI use cases on, both the text and image space definitely, the last few months to address your the, second part of your question um there's, been an absolute explosion in the, Public's excitement of um text, generation models so yes chat GPT is one, of these models and then the generative, chat uh battles happening between Google, and Bing and these product rollouts and, sort of the waves that they're making, throughout the industry are definitely, putting text Generation Um and language, models at the tops of the minds of a lot, of people and a lot of developers are, sort of trying to figure out you know, how they can start using these models, how they can think about them yeah it's, been an absolutely tremendous couple of, months uh to see that growth which is, not just developers but it's like people, in the street just you know your parents, coming up and saying okay we finally, sort of get what you do uh which is been, been absolutely surprising yeah I really, like how you highlighted this sort of, like gray area or gray matter whatever, you described around this like knowledge, of how to put these various pieces, together, into a solution I guess it's sort of, like solutioning with a certain set of, like potential Pathways forward with a, state-of-the-art model so like used to, like when there was sort of data science, hype before all the AI hype it was like, okay you need training data you're going, to train a model that was sort of like, the Playbook now you're in this scenario, where you're like okay well I have a, pre-trained model am I going to do some, type of finetune so that's like, something there am I going to focus on, prompt engineering am I going to chain, multiple things together um am I going, to do some retrieval and 
pull in like, external knowledge into that so there's, like so many of these things where like, the chaining and the Assembly of the, solution is actually where the value, comes out which I think is a really good, thing for people to think about I'm, wondering cuz you've you're in this all, the time you're like you're seeing new, problems and I know you've even showed, me you know cool Solutions you put, together around like topic modeling and, like labeling topic names with, generative models and that sort of thing, like when you come to something like, that how do you parse through like this, intuition around well is this a, situation where I'm like chaining, multiple models together is this a, situation where I really need to focus, on prompt engineering is this a scenario, where maybe I should be focused on, fine-tuning any sort of guidance or or, thoughts around like how to develop the, intuition around that sort of, solutioning maybe part of it is just, experience but any suggestions around, that yeah I'm in complete agreement to, what you said there is this Frontier, forming between where you train models, and where you're just a user of models, because of just the very high quality of, pre-trained models and their ability to, solve uh General problems so they're you, know General problem solving vast, majority of my work is is above this, Frontier so not on the modeling layer I, would want to explore you know what is, possible with pre-trained models, generally without a fine tune but then, and that is aided by just the surprising, thing of these large generative language, models um they do few shot generation, really well um so if you give them three, or five examples of a type of generation, or a style of generation that you want, they tend to catch on to that and give, you something that is you know good, enough for a lot of use cases and then, you know I know that fine tuning would, be the next step to that so if that is, not enough and if there's a use 
case, where okay I have you know 500 labeled, examples or a thousand that's when I, would sort of try to reach to to fine, tuning but in context of just providing, a few examples to the model uh really, helps solve a lot of use cases and so, yes the solutioning sort of aspect is, where the headspace sort of you know, tried to think about um just using the, APIs of these models and uh yeah how to, chain them together how to think about, yeah fine-tuning and not fine-tuning but, let's say embeddings and then using, those embeddings for uh specific tasks, for retrieval and then chaining that, with generation so this is a vastly, underexplored and let's say new, frontier because and you can completely, spend 40 years just learning the, training layer and the various model, architectures and the various ways to, improve the data and fix the data so we, need people in all heights of the stack, um but then the engineers right now have, this widely available sort of an, underexplored area of what can you do, with pre-trained models a lot of them, via APIs um and you can definitely um do, a lot could you just give people maybe, um who aren't familiar with Cohere, some general intro to what Cohere is, trying to do and and what they offer, absolutely yes glad to hear that Cohere, offers an API for large language models, and it the goal there is to make using, language models easier for every, developer or company out there without, thinking about you know hiring an army, of people to train large Transformers, our founders came out of Google Brain, and one of them was the co-author of the, Transformer paper and we have teams, that are focused on training let's say, two kinds of models so the generation, large GPT models we train let's say both, of these families um in-house and, continue to develop and improve them um, and the other family is text embedding, models and these are the ones that can, power use cases like neural search, semantic search text
classification so, if you want to classify messages by, topic or by sentiment they're very sort, of uh capable in that and the latest, release has been the multilingual, embedding model that's supports over 100, languages so if you want to do semantic, search or neural search you don't have, to build a 100 different pipelines uh, for each language that does I don't know, stemming and you know very language, specific uh pipelines you can just throw, it all at the embedding model and just, retrieve uh you know the best results, and so that's the core Tech and the, company offers all of that via API and, uh yeah we invest a lot in the content, and educational uh side it's still an, area that is quite new large language, models as a service is a new brand of, company it's only been around for you, know two years and so yeah we focus a, lot on the educational side um of the, various Concepts that are needed there, to help both developers but also a, general audience capture and build those, intuitions and that's you know something, that companies had to do throughout the, development of technology so the, majority of Executives now would know, what an API is for example but you know, 15 years ago or maybe 20 years ago API, or big data or Cloud hosting were all, sort of you know deeply technical words, or words that you know didn't yet, develop and that is the same now with, things like embedding and fine-tuning, and base model yeah language model um, and so yeah definitely the education is, a part of that um and a lot of it is, just us you know learning with our, developer community and sharing the, common lessons uh so developer number, 10,000 doesn't have to repeat the, mistakes of the previous developers, we're really passionate about that as, well yeah part of the advantage of, having some of this and I'd be curious, to hear your thoughts on this as well so, you know going back to like something I, said earlier when I was first learning, about Transformers and 
trying to get, Hands-On with these things like there, was no like easy Cloud API for me to, just like access and use I think now, like even for open models that's that's, shifting a lot where you can you know, host things on hugging faces inference, API or you could you know use any number, of these services so you could use coher, large language models you could use like, replicate and what they have in their, cloud apis and that sort of things so a, lot of these are available now where, it's a small number of lines of code and, you're able to access the API what do, you think are the implications of that, for you know how people are thinking, about using models maybe differently, than they thought in the past because of, these access patterns um maybe it's less, people are are thinking about the, training side and just like chaining I I, I don't think it's an accident we've, seen a lot of like chaining things, recently but yeah I don't know I'm, curious to hear about your thoughts on, the implications of how this landscape, is changing to where like we've kind of, gone from oh let me down load model, weights to like I could chain this API, together with this API and that sort of, thing I mean definitely it makes it a, lot easier for a much more larger group, of people to start experimenting with, these models because it just lowers the, barrier of Entry so much and it enables, people to not think about moving tensors, across gpus and uh watching out for gpus, running out of memory and updating model, weights and it's just another, abstraction and you can think about it, just like every other cloud service out, there so if you want to build a new, website you no longer need to buy a, physical machine ship it to a data, center maybe go physically to that data, center and sort of you know put the code, on it's been abstracted away as a, service you can reliably access uh, somebody else is has the world's say, foremost experts making that service, reliable for you 
and you can focus on, your core business problem the core sort, of product that you want to uh do and, knowing that these other pieces are, there and are being handled by people, whose sole job is to you know maintain, the quality and increase the quality of, these models and the uptime and this is, especially a factor when these models, are massive they need to be on so many, different machines and gpus and it's, such a hassle to um deploy your own, model like you need a PhD maybe to, really wrap your head around everything, that is involved in something like um, and so yeah it just frees up people to, think about okay this is an API think, about the frontier of the next level of, services that are now finally possible, that weren't possible before and let's, say we saw new Industries come out of, these developments in AI so AI writing, assistance for example is a type of, Industry where there are so many, companies now that didn't exist before, and these companies just rely in general, rely on on apis and they can focus on, really creating the best um you know, domain knowledge for them to uh help out, their customers so it really helps in, the specialization and sort of um that, abstraction of not having to worry about, this lower layer in the stack um where, others are sort of handling it for you, and fine tuning becomes as easy as, uploading a text file rather than you, know a process of babysitting a model, for a week to see you know what happens, uh so that definitely increases the, cycle of experimentation but also the, ease of deployment um in accelerates, let's say the coming of the next, generation of products that are just now, possible yeah and one of the things that, we talked about before we started, recording was some of your excitement, around like multimodal models and where, those are going I know that's also, increasingly easy to kind of like tie, different modalities together I think, the light bulb is going off for a lot of, people I even just 
had a conversation, yesterday where someone was talking to, me about like a large language model, could it do this or that like for their, use case and I said yeah but like what, if you just add the image component on, as well like you can you know generate, the copy for your ad and the image for, your ad uh you know with generated text, and that sort of thing which I know a, lot of people are trying and a bunch of, other things too so what's on your mind, in terms of of kind of multimodality I, know you've written a lot about Stable, Diffusion and other things recently, where's your mind with respect to that, and what are some of the use cases that, that you're thinking of or the sort of, applied things that are interesting to, you in that space yeah so on the, research front I've written about Gato, DeepMind's Gato model that does images, and text and a lot of other uh, modalities which is an interesting, research development on the more applied, side we've uh released a notebook um, that does a little bit of prompt chaining, so a few researchers from DeepMind had, this paper called Dramatron where they, you know shared a system and a a bunch, of prompts that uses language models to, write a screenplay and it doesn't write, it from one prompt it's you know seven, different prompts uh that do different, things of the story and then you end up, with the screenplay so there's a prompt, to generate the characters and the, prompt to generate the setting and then, the beats of the story and so you build, this knowledge hierarchy and then and so, we have a notebook that showcases how, to do that um with Cohere's models but also, plug in some uh calls to Stable Diffusion, models to generate okay so these are the, descriptions of the characters what, might these characters look like uh so, that is an AI image generation sort of, flow this is a description of a setting, where the part of the story takes place, how can we sort of visualize it enough, and these are sort of
the flows and we, see you know libraries kind of like LangChain, that are empowering a lot of this, chaining of the various um text models, but also potentially image generation, models uh so yeah that's the most the, use case that I went down and it was, only really possible because you know, like for me to have a a quick time to, experiment with them because there are, APIs that do them so on the Stable, Diffusion front I can share this code, because it's an API call to like say, Stability AI um Stable Diffusion models, but I've been wanting to do that with, Midjourney but Midjourney does not, have an API that you can call I have to, do it you have to do it through like, Discord for example and so that's also, let's say a differentiating factor for, these various products you know which, ones support API access which don't so, it will factor into say developer, adoption for them on the infrastructure, um layer but yeah that would be, specifically the one that I've, experimented with um the most awesome as, we kind of wrap up here um usually we, ask our guests sort of like what's on, their mind moving into the next you know, six months or so what are you what are, you most excited about what have you not, explored yet but what's like on the top, of your list to to dive into um any, thoughts yeah I'm really excited about, use cases that use both generation and, embedding uh in the same sort of uh flow, that's one area where I think about a, lot and let's say the topic modeling and, let's say cluster naming uh use case is, one of them and uh we've open sourced a, library called Topically that does, exactly that so that's one area where, I'm working closely and I think these, models can help us really understand, large collections of data uh using that, and you know create interesting, visualizations um of them as well so, yeah for me it's mostly yeah the, interaction of these sort of um two, systems hopefully let's say supporting, multiple languages as well so
multimodality is interesting, and multilinguality is also interesting, for systems that can support more and more of the data that's out there. I think these are the ones that are top of mind for me at this time. Awesome. Well, thanks so much for taking time to chat, Jay. I'm glad we got to do this, and I hope to have you back on the show in a year so we can talk about all the fun things you've explored in the interim. Thanks a lot. Amazing, thank you so much. So good to catch up with you and chat about all of this. Looking forward to speaking again. [Music] Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly for partnering with us to bring you all Changelog podcasts. Check out what they're up to at fastly.com and fly.io, and to our beat freak in residence, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time. [Music]
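The Dramatron-style prompt chaining discussed in this episode can be sketched in a few lines: each stage's output is folded into the next stage's prompt, building a knowledge hierarchy (characters, then setting, then story beats) instead of asking for a whole screenplay in one prompt. The `generate` function below is a stub standing in for any hosted LLM API (Cohere's, for example); the stage names and prompt wording are illustrative assumptions, not the paper's actual prompts.

```python
def generate(prompt: str) -> str:
    # Stub LLM call: a real implementation would hit a hosted model API here
    # (e.g. Cohere's generation endpoint). Stubbed so the chain runs standalone.
    return f"<output for: {prompt.splitlines()[0]}>"

def write_screenplay(logline: str) -> dict:
    # Build a knowledge hierarchy: each stage's prompt conditions on all of
    # the material generated so far, instead of using one giant prompt.
    draft = {"logline": logline}
    for stage in ("characters", "setting", "beats"):
        context = "\n".join(f"{k}: {v}" for k, v in draft.items())
        draft[stage] = generate(f"Write the {stage}.\n{context}")
    return draft

draft = write_screenplay("A data scientist discovers serverless GPUs.")
```

The same loop structure also makes it easy to splice in an image model between stages, e.g. rendering each character description with a Stable Diffusion API call.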
Serverless GPUs | We’ve been hearing about “serverless” CPUs for some time, but it’s taken a while to get to serverless GPUs. In this episode, Erik from Banana explains why it’s taken so long, and he helps us understand how these new workflows are unlocking state-of-the-art AI for application developers. Forget about servers, but don’t forget to listen to this one!
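The scale-to-zero idea behind "serverless" described in this episode can be sketched as a back-of-the-envelope cost model: you pay per request (plus a short idle window before a replica shuts down) instead of for an always-on GPU. All rates and timings below are made-up illustrative numbers, not Banana's actual pricing.

```python
def always_on_cost(hours: float, rate_per_hour: float) -> float:
    # A dedicated GPU bills for every hour, whether or not it serves traffic.
    return hours * rate_per_hour

def serverless_cost(request_seconds, idle_window_s, rate_per_hour) -> float:
    # Scale-to-zero billing: each request pays for its own compute time plus
    # a short idle window before the replica scales back down to zero.
    billed_s = sum(r + idle_window_s for r in request_seconds)
    return billed_s / 3600 * rate_per_hour

rate = 2.50  # $/GPU-hour, illustrative only
day_always_on = always_on_cost(24, rate)                   # 60.0
day_serverless = serverless_cost([1.0] * 100, 10.0, rate)  # well under $1 for bursty traffic
```

For bursty, low-volume traffic the gap is dramatic; for completely steady traffic the two converge, which is why (as Erik notes later) steady workloads are a poor fit for serverless.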
Leave us a comment (https://changelog.com/practicalai/211/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog++ (https://changelog.com/++) – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with extended episodes, make the ads disappear, and increment your audio quality with higher bitrate mp3s. Let’s do this (https://changelog.com/++) !
Featuring:
• Erik Dunteman – Twitter (https://twitter.com/erikdunteman) , GitHub (https://github.com/erikdunteman) , LinkedIn (https://www.linkedin.com/in/edunteman) , Website (https://www.erikdunteman.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Banana (https://www.banana.dev/) - Scale your machine learning inference and training on serverless GPUs.
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-211.md)
[Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? I'm doing fine. It's been interesting times, though that's not what we're going to be talking about today. I've been watching the showdown between Google and Microsoft over ChatGPT and Bard, and things are happening as we're recording this. I thought maybe you would have been too distracted with the new Harry Potter game. Well, there is that, but yes, we all have our secret little things that we do to keep entertained. Yeah. Well, I forget which of our recent guests brought up that quote of "you don't need to do machine learning like Google," and you're talking about Google and Bard and all of these things. When you think about those things, you think about data centers full of GPUs and huge supercomputers that they've got at their disposal, which isn't the type of GPU infrastructure that most practitioners have access to, and that happens to be the topic of what we'll get into today. Excellent. With Erik Dunteman, founder of Banana, serverless GPUs. Welcome, Erik. Thank you, that was a beautiful lead-in. I definitely want to help people get the
Google-level infrastructure without that level of effort, so glad to be here. Awesome, well, we're really excited to have you. I have to say, I did spin up a model on Banana leading up to this conversation, so I'm pretty excited to talk about it. But before we get into the specifics of all the cool things that you're doing: I know our listeners are probably very familiar with GPUs and why they're important to AI and machine learning modeling, but maybe they've just heard of serverless as this cloud thing, a thing people do in the cloud, and they've never thought about serverless GPUs. Could you step back for a second and describe, first off, for people that might need just a very brief intro, what do you mean when you say serverless? And then take us into serverless GPUs: is that a new thing, has that existed before? Curious to hear your perspective. So I love your specific phrasing, "what do you mean when you say serverless," because serverless is one of those terms that nobody has really pinned down exactly what it defines. Our working definition is this idea that when you need capacity, when you need servers to handle your requests, when you're in periods of spikes and surges of use, you have more servers; when you have less use, you have fewer servers; and when you have no use, you have zero servers. The idea is to make it so that you, as an engineering team and as a product, don't need to think about your compute as a fixed cost. It allows you to essentially view it as per-request, pay as you go. Funny enough, serverless really does mean servers running under the hood, but the "less" is that you just don't need to think about them: you think about them less. Happy to dive into what the details of that mean in regards to GPUs, but serverless has been around for about 10 to 15 years. I don't know my
exact timelines, but it's been a concept within CPU-based compute, serving things like websites and backends, and people have been wanting this to exist for GPUs for a long time, and nobody has really cracked it. That's the challenge we've been working on. I know you talked about websites, backends, that sort of thing. In general, when we're talking about serverless GPUs, in your mind, is the use case mostly on the inference side or on the training side of what practitioners are doing, or is there a little bit of both? The vast majority, at least from what we've seen, is on inference, and I think inference is where the value of serverless comes in the most. There are other tools for training, where it's not as latency constrained and you could use other infrastructure orchestration tools. But for inference specifically, serverless is one of the keys to the kingdom, if you could really do serverless well. So we as a team have chosen to focus mainly on inference, real-time inference: if there's a user at the other end waiting for a response, we're the ones responsible for making that response happen quickly. Gotcha. And why has it taken so long to get to serverless GPUs versus serverless CPUs? One of the biggest problems in serverless is what's called the cold boot time. Cold boot, as in: you don't have servers running, a request comes in, and that request triggers a server scale-up, going from zero to one and then one to many. The time it takes to get resources provisioned and ready to handle requests on CPUs can be a couple of seconds; on a platform like AWS Lambda it could take multiple seconds, maybe 10 seconds, for a cold boot. And that's simply spinning up the environment, spinning up a container or a microVM, whatever they're running, and getting an HTTP server ready to handle that particular call or set of calls for the user, before
then shutting down. So cold boot has been a big blocker, and it's primarily the initialization time of the application before handling jobs. On GPUs and machine learning it's exponentially harder, the reason being that we're running 20 GB models, and those models can't be taking up RAM before a call comes in, because then it's not serverless; you're just running an always-on replica. So the cold boot problem is deeply exaggerated when you get to GPUs, because not only do you need to provision the GPUs and the container, you need to load that model from disk onto CPU, then onto GPU, and that process could take 10 minutes for some models. It's just been a pretty huge blocker for most GPU use cases, so for that reason this product hasn't existed before. Definitely not trying to delve into the secret sauce, if you will, but can you lay out the landscape of how you even start to think about that problem? What are some of the different ways you might address it? And as you develop competition over time, different orgs and different people will probably take different approaches. How do you even think about that landscape? Because that seems like a daunting task; when you talk about 10 minutes to get a model moved over, that's huge. How do you even start to approach the problem? So this is definitely one of our most prized pieces of IP, our cold boot tech, so I can't dive too deep into the details. No worries, whatever works; what is publicly known. You've got to think about, well, firstly, a constraint: you cannot take up GPU RAM. If you have a 40 GB A100 machine and you put a model into that RAM, that portion of the RAM, or that machine entirely if you're not virtualizing it, is taken. You are paying for it; it is dead space if you're not using it. That's massive GPU burn without any utilization. So the constraint is that models can't sit in RAM, at least not GPU RAM. So when we go about the cold
boot problem, what we're really thinking about is: how do we get the model, specifically the weights, as close to RAM as possible without actually occupying resources, or at least the more precious compute resources like 40 gigs of limited RAM? That's hard, but if you have a terabyte of storage on the machine, you could at least have local caching of the model, so you could take that up passively between calls without sacrificing that piece of hardware, because you could fit so many more models onto the disk. And then you could start thinking about how you start pre-caching this in CPU RAM, if the CPU has enough RAM. Not saying that's something we do, but that is the framework in which you would start thinking about it: how do we get that model as close to GPU RAM as possible without actually taking up GPU RAM? Because in the end, GPU RAM is where the cost goes; once you use it, the machine is tied up and not usable for anything else. In your experience, I know you've likely been talking to tons of different clients with different use cases, people really thinking about how their workflows could adapt to the serverless workflow. I'm just thinking about my own workflows: we're running a lot of models, but none of the models on my team are receiving thousands of inferences per second or anything like that. It's very much in this zone where we get a burst of activity, then we're down for a bit, not getting that much, and then maybe another burst that we need to process. So in that case, I would probably be willing, in my own use cases, to put up with a somewhat longer cold start response when the model comes up, with subsequent calls during that burst being much faster. What have you noticed with clients? What is the tolerance there? Like, where are you trying to get
and where do you think is reasonable for most workflows? I guess I don't have a perfect answer for you on this, in that ideally cold boots are zero. Yes, that's true, I guess. On Banana, and on a serverless platform in general, unfortunately you do have to start thinking about the servers, because you want to avoid cold boots when avoidable. In the case of Banana, if you have a model that's undergone a cold boot and handled the first call, it's ready to go, and we have it configured to hang around for 10 seconds just in case more calls come in. That 10 seconds is completely configurable by the user. If no calls come in, we consider it okay, we've gone through the surge, and that particular replica scales itself down. If calls start coming in again, cold boots are incurred again, and only if the existing replicas you have can't handle that throughput does it start scaling up more. So we give users the ability to fine-tune their autoscaler, in a sense, or rather configure it; you can configure the autoscaler. We have some users who choose to run always-on replicas with a minimum replica count, so at any given time maybe you have a baseline of two GPUs running, but you can surge to 20 if you need to. We have some users doing that, and we have some users who have gone away from the default 10-second idle time to go longer, because they would rather pay for those GPUs to be up and handle any traffic that may come in than have more frequent cold boots. The reason I give that context about Banana is that I've been really surprised by how few users increase their idle time. Right now, at least, the majority of the customers we're serving are more price sensitive than latency sensitive, at least given the general tradeoff we give them, in that they can configure the idle timeout and through that tune how much they pay versus how much they wait. But most users
would rather have machines shut down and then incur that cold start time, and that's a great thing for us, because it allows us to chip away at this cold start problem and give users a strictly better experience: the faster your cold starts are, the more willing users are to take those cold starts, because they're less impactful on their inferences, and the less idle time you need to run on your GPUs following calls before they start shutting down, because it's not as risky. [Music] Hello friends, this is Jerod here to tell you about Changelog++. Over the years, many of our most diehard listeners have asked us for ways they can support our work here at Changelog. We didn't have an answer for them for a long time, but finally we created Changelog++, a membership you can join to directly support our work. As a thank you, we save you some time with an ad-free feed, sprinkle in bonuses like extended episodes, and give you first access to the new stuff we dream up. Learn all about it at changelog.com/++. You'll also find the link in your chapter data and show notes. Once again, that's changelog.com/++. Check it out; we'd love to have you with us. [Music] So as you were describing that, it seems like a very interesting mesh of skills to do what you're doing, because you obviously have to have a pretty good understanding of deep learning in general, the AI space, and the performance characteristics around that, but you also have to go very, very deep in terms of network engineering and architectural considerations. It also brings different cultures together, for instance in terms of choices of languages. Do you tend to go with one language for everything, for simplicity's sake, or do you go with different languages catered towards specific use cases? By way of example, Python for deep
learning-specific things, and Rust or C++ for infrastructure things? Or do you stick with one, like Python for everything, so that you have a simpler setup to govern? How do you take that, strategy-wise? So the obvious language for hosting ML model inference is Python; it's almost a requisite, in that all of our users are running in it. Therefore the framework that we give users to build off of, which is essentially boilerplate for a server, is written in Python. We don't need to maintain that too much; it's an extremely simple HTTP wrapper. The vast majority of our work on the pipeline and infrastructure side is all done in Go, so we're probably 95% Go. We have some TypeScript for our web app, some Next.js that we're running, and when you get deep into the runtime we work in C++ and CUDA as well, but only a small subset of our engineering team works at that level; the majority of us write pipelines and networking in Go. I've got to say, it's kind of funny that you bring that up. Daniel and I love Go; we actually met in the Go community, because we were, at the time, kind of the two AI-oriented people in the Go community. So it's just a little bit ironic to hear that. That's awesome. I've been so disappointed in Python. I mean, Python's an amazing language; it's where I learned my first bit of serious general-purpose programming. But I'm saddened to know that the language chosen for GPU programming, basically, is a language that has a global interpreter lock and does not have great multiprocessing built in. I wish Go were the choice there; it doesn't seem like it's going to happen, but I'm a huge fan of Go. I think it's a great language to write in, and I could go on for a long time about this. In fact, one of the reasons I learned about the Changelog network was listening to the Go Time podcast, so yeah.
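The "extremely simple HTTP wrapper" boilerplate Erik mentions might look roughly like the sketch below: a handler that takes JSON in, runs an inference function, and returns JSON out. The handler shape, JSON fields, and stubbed inference are assumptions for illustration, not Banana's actual framework.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def inference(payload: dict) -> dict:
    # Stand-in for real model inference: report prompt length instead of
    # running a model, so the wrapper's shape is visible without a GPU.
    return {"n_tokens": len(str(payload.get("prompt", "")))}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # JSON in -> inference() -> JSON out.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(inference(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8000) -> None:
    # Call serve() to block and handle requests on the given port.
    HTTPServer(("", port), Handler).serve_forever()
```

In the workflow described later in the episode, users would mostly only ever edit the body of `inference`, leaving the HTTP plumbing to the framework.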
For sure, shout out to that other podcast. Yeah, definitely. It's cool to hear about the setup of how you thought about this problem and how you structured the team. I'm wondering at this point if you could give us a sense of: if I'm a data scientist, or even just a software engineer trying to integrate a model into my stack, what does the workflow with Banana look like for me as of now? What do I do to get a model up and going? Maybe just a couple of examples to give people a sense; it's a bit hard on an audio podcast, but I'm sure you've done similar things in the past. Yeah, well, I'd love to give a visual demo, but audio-wise, generally the process looks like this. A lot of people are building off of standard models, say a Stable Diffusion or a Whisper, at least for this current hype wave of all these new, exciting open source models coming out. Until next week. Yeah, until next week, and then the next one comes out. Thankfully, we have these one-click templates that you can use on Banana, so in a single click you can go from an open source model that somebody has published on Banana, bring it into your own account, and start using it yourself. Within a few seconds you can have a functioning endpoint for popular models that have been put up by the community. Then we see, naturally, the step beyond that: moving from effectively having an API without really knowing what's running behind the scenes, to forking that code, working on it yourself, and customizing it for your own use case. So if you're doing some fine-tuning, or if, quite honestly, you want to go away from the standard big-model templates and roll it yourself, with whatever deep net you've built, that's where you start getting into the local dev iteration cycle, and this is
where I shout out a previous guest over at Brev. We recommend users go and have an interactive GPU environment so that you can load your model, test it against some inference payload, shut it down, and iterate. If you're doing something like a Stable Diffusion, you want to make sure that the image transformations server-side are happening correctly; that's where you iterate. You're doing all of this within the Banana framework; we have an HTTP framework you can find open source online, and that's generally the starting point for most users. You're modifying a function within it, the inference function, which takes in some JSON, runs the model, and returns some JSON. You do that iteratively until you have your customized model working according to the API you're hoping for, and then you push that to GitHub. From there you can go into Banana, select that repo, and we have a CI pipeline built in, so when you select that repo we build the model and deploy it, and every time you push to main we rebuild and redeploy. When users are shipping new fine-tuned versions, it's usually them updating, say, a link to S3; then in the build pipeline we bundle that model into the container itself and get it deployed to the GPUs. So, kind of curious, and this is sort of a follow-up largely because of the medium we're in, since we're audio-only and don't have the ability to show the process you're describing: just for clarity, your typical customer or user, what skills would they typically have to productively use Banana? What are the necessary minimum skills for them to be able to really engage productively and move through things? A lot of our users are, quite surprisingly, full-stack engineers, not deeply experienced data and ML people. So as long as you can wrap your head around using frameworks or abstractions like Hugging Face, for example, if you could use a
pipeline like that, pull it locally, that's something you can deploy onto Banana. So: some Python expertise, in order to write the code in the first place; it's an HTTP server, so you write that and wrap it around, say, a Hugging Face model. You don't need to fine-tune it; you can use the standard models and learn fine-tuning later. And ideally you have some knowledge of Docker, because ultimately what is deployed to Banana is a Dockerfile. If you build within our template, you generally don't need to do things that are too custom unless you choose to, but a little bit of Docker knowledge helps. So Python, Hugging Face, Docker: that's effectively all you need in order to get something deployed onto Banana. I'm on the site now, looking through some of your community templates, which are pretty cool. I mean, you have all sorts of things: CodeT5, SantaCoder, all sorts of things with a one-click deploy button to get them up and going. One question I had: when I deploy, it looks like, based on your docs, I can call it with the model ID from Python, for example, so I could integrate this directly into a Python app. Can I also call it as a REST endpoint or something like that, or is the primary use case a client integration? We do have public documentation for the REST endpoint. Awesome. It's not officially supported; we try to encourage people to go through our official SDKs, which at this point are Python, TypeScript, Go, and Rust. That said, for anyone who wants to go directly to the REST endpoint, there's documentation to do so. We like being able to boil it down to a simple banana.run function, where you just give a model key, give whatever JSON in you want your server to process, and then you receive the JSON out. But our goal is to give people access to whatever level of abstraction they choose to run at. For example, because we have a public REST
endpoint, people have integrated Banana into their Swift applications or into their Ruby applications. It's an HTTP call in the end, so people can unwrap our APIs and go at it directly; feel free. Yeah, I guess that leads right into my next question, which is: does anything stand out in terms of how people are using this serverless workflow that maybe surprised you, based on what you're seeing? I've been amazed at the quantity of fine-tunes that are deployed through Banana. If you look at the analytics of people deploying from our one-click templates versus people deploying from custom repos, 80% are custom repos, and that means people are coming to serverless because they have a unique API that they need to run somewhere, one they can't simply run with a standard API provider, or even an API provider with fine-tuning features. They want to own the API themselves, own the application logic themselves, own the fine-tunes themselves, and just dockerize that up and send it onto Banana. So the vast majority of our users are doing custom workloads, which to me was surprising. A little Banana lore: we previously started as an ML-as-an-API company, the idea of showing up, clicking the model you want, and getting an API for it, and there's a lot of pull there, especially right now with the hype; there are so many people who want to integrate AI into their applications without touching the AI at all. So it has been surprising for us seeing how many people are running custom code on us, and it's been validating of the idea that the platform approach, versus the API approach, has been the way to go. Could you walk us through what a typical one might look like, where someone's doing that kind of custom thing, just to give us a sense of what you're seeing? Whether it's fictional but realistic, or a real case example, whatever works for you. So one thing users are doing, just as a very basic
example: if latency is an extremely sensitive thing for them and cold boots are particularly painful, what they'll do is engineer a conditional, a boolean in the JSON that they send in, called a warm-up. They'll send warmup equals true and make it so that, server side, they don't actually perform any heavy computation; it's just intended as a warm-up call. So if, architecturally, they need servers fully warmed up by the time the actual inference starts running, they engineer this into their endpoint. Another thing as well: if people want to run fine-tunes, or run multiple models side by side and start doing some model chaining, we see people building that into Banana too. And then lastly, state of the art just moves so fast right now that the second Stable Diffusion launched, for example, suddenly there's inpainting, and inpainting is the next thing that came out a week later, and that's some random code people found on GitHub and integrated themselves. So customization in that sense allows users to stay as far ahead as they possibly can, if it's necessary for their use case. Could you highlight something you have in mind as maybe a workflow that would not be appropriate for this sort of serverless GPU infrastructure? So, like you say, fine-tuned models, inferencing, using these state-of-the-art templates all fit; is there something where you would say, hey, maybe that's not fitting for the serverless use case? Yeah. In inference land, if you have completely steady traffic all the time, don't use serverless; you'll get unnecessary cold boots, it just slows down your inference, and you're paying effectively the same. So that's the inference side. On the training side, we like to think that you could currently train on Banana, though I often find that training is a more interactive experience, at least in the initial prototyping
phase. Once you have pipelines built to, say, automatically collect data and batch train, that actually does work on Banana, because you can fire that data as the payload, train the model server side, upload it to S3, return the call, and then the replica shuts down. But most training jobs, or most exploratory training jobs, I would not recommend doing on serverless, in part due to the observability that you need: the tracing, setting up things like TensorBoard (this is outdated tech, but those more visualization-oriented tools). Also keep in mind I'm not a training expert, so perhaps there's space in training where people would see value in serverless, but generally I'd recommend avoiding serverless there. And then lastly, if you have any jobs that are batched, as in you know exactly when they're going to happen, it's a bit easier to automate your own infrastructure and build it yourself. Ideally we make serverless so good that you don't need to think about that, but in the current state of serverless, for a lot of batch processing jobs, say running an indexer across an internal database that you don't need running all the time, porting it into serverless may be a bit too much lift versus just doing it yourself. I'm also looking through your website while we're talking; I'm in the docs, and I hit the SDK area, which you talked about a little bit ago, with the different SDKs: Python, Node, Go, REST. Did you mention Rust earlier, or did I mishear that? I may have misheard something. I did mention Rust; I actually don't know if we have it documented. We launched it two days ago, I recall. Gotcha. Well, the thing that got me thinking here: that's very leading edge, it's very out there. I'm kind of getting the sense that your customers are adopting more, you
know, forward-leaning languages in general for what they're doing, and that's why they're leaning forward into this new concept of serverless GPUs. Is that consistent with what you're seeing? Are you really targeting the types of software developers that are early adopters, paving the way, versus somebody in some of the older, more enterprise languages, maybe not quite as risk-taking? That's very much in line with what we've been seeing. We find that a lot of our users are adamant Vercel users, as an example, so they're in Next.js; they've chosen a relatively modern framework to build their front-end apps in, and they make the same decisions for their back end. They're often TypeScript-forward, and if they want to do systems-level work they'll do Rust or Go. For these reasons we've chosen to offer these official SDKs. Yeah, that's really interesting. One of my questions in thinking about this concerns the different use cases you could have, the different industries that are rapidly adopting AI and integrating it into their software stacks. Everybody's adopting AI, right? But it's certainly making strides in certain areas, and certain industries, let's say healthcare, have very unique constraints around even their own inference data leaving to go to some hosted model somewhere that's not in their own infrastructure. In other words, when I go to Banana, all I have to care about is deploying a model; there's my model ID. I can think about the timeout and all of that; it's all very functional, and I don't even have to give a thought to where it's running. I could see the opposite end of that: certain industries would probably be a little bit uncomfortable with it, but there are a whole lot of developers that are just wanting to, you
know, bootstrap these like amazing AI powered, things like very rapidly there's so many, things coming to Market like that um so, I guess that would be fitting in in that, way do you have any plans in the future, for like B banana serverless but like, connect my AWS infrastructure or, something like that to run um in the, banana way or or something like that, short answer yes long answer it's, complicated it's going to be a long time, yeah it's very complicated and one of, the things that we see with serverless, is the fact that we have economies of, scale sharing everyone as tenants within, our Cloud because that allows us to do, more efficient bin packing and make it, so that when you're not using a server, like when the server contains is shut, down you're not charged if you're, running on your own cloud you still need, to have the underlying resources running, we're a venture scill business we want, to hit that million dollar annual, revenue ideally or sorry not million, dollar $100 million annual revenue, ideally more um and I think getting into, that we're eventually going to have to, start thinking about how to more, traditional Enterprises integrate this, though choosing our Niche right now we, see significant poll that could get us, to one $10 million annual just from, these like new teams who aren't Bound by, such constraints of needing to run in, their own cloud so long answer restated, we'll get to it eventually and I'm sure, it's like it'll be a necessary part of, the product but it loses out on a lot of, the magic that we're currently providing, so we'd rather just focus on these new, and upcoming startups that are running, on us yeah that makes a lot of sense I, think um it does make me wonder like, because you are creating so much magic, for the users and a lot of that like, you're saying like thinking about like, what gpus are you spinning up like how, are you bidding on these like where are, you like how are you allocating them, have you learned 
any sort of like, General like you can get gpus from a lot, of places there's a lot of different, kind of scales of pricing um there's a, lot of different ways to run gpus in the, cloud have you found any just sort of, like good practices or things that that, you found to be useful just generally in, terms of thinking about like using gpus, in the cloud yeah that You' love to pass, on to listeners so we use this phrase, called skate ahead at the puck uh it's, phrase from hockey where don't go to, where the puck is go to where it's going, um so applying that to autoscaling Auto, scaling really has two components you're, autoscaling the underlying nodes the, hardware that's running the gpus like, that's you know running the kubernetes, cluster whatever your deployment Target, is um and then secondly you're, autoscaling the deployments themselves, going from replication of 0 to one to, many within the confines of whatever, nodes you have set up uh so you're, effectively autoscaling two things, kubernetes pots and the nodes themselves, so my recommendation are if people are, building things like this in house what, they should absolutely do is use a, platform that has an automation API for, the underlying VMS right now GPU cloud, is sort of the Wild West there's a lot, of new players um traditional, hyperscaler uh clouds like Google Cloud, AWS Azure they have the automation but, the GPU prices are not as competitive as, you could get on some of these newer, clouds so uh my biggest recommendation, for people building mature systems would, be to like choose a provider that you, get ideally guaranteed access to gpus, which which allows you to scale your, gpus up ahead of the demand of whatever, workloads you're running within your, cluster and then doesn't have to be, homogeneous the workloads deployed um, just as long as you maintain GPU, capacity to handle those you should be, good but because you're autoscaling like, the applications within kubernetes, allows you to 
have a little more lead, time for like super slow scale UPS on, the gpus this has been a super, instructive conversation I'm I'm, learning a lot I want, extend your analogy one question further, because you're talking about skating, ahead of the puck not skating to the, puck but where it's going to go you are, pioneering this field you are out there, on the front you are leaning forward and, you are supporting other people in other, organizations that are trying to lean, forward as well so I'm going to ask you, where is the puck going you know, short-term middle middle longterm how do, you see the future for those who are not, in your industry but are are going to be, supported by you tell us the vision, what's it going to fine tunes are going, to be huge I think there's two camps for, where AI is going to be going there's, the the one model rule them all Camp, which is there's going to be some Mega, model that does everything and then, there's the other Camp which is what, we're leaning into which is um the best, model for you as a user is a model, that's trained on data from you, specifically you and we see customers, deploying fine tunes on us not just for, their use case but for their end user, imagine you are building a writing, assistant app how do you find tune for, every single one of your end users and, deploy that and make it so that that, user has a unique model it's essentially, a companion uh almost a clone of them, and where the puck is going is where we, every human on earth just like they have, a phone in their pocket they're going to, have a fleet of models fine tune just on, them and that's one thing we're excited, about with serverless is in order to do, that viably you got to have serverless, can't have it running all the time so, very excited in this sense if you're not, looking into user level fine tunes I, think it's a very interesting space to, be in because it gets you so much, further than any application Level stuff, you could do to make 
the experience, better that's awesome yeah I think, that's a super exciting way to close out, the conversation this is a really, exciting time to be in the space both in, terms of what's possible with, fine-tuning and those sorts of, Technologies but also like new, infrastructure coming up like what, you're what you're building so thanks so, much for taking time to chat with us, Eric it's been a real pleasure this is, awesome appreciate it, [Music], guys thank you for listening to, practical AI your next step is to, subscribe now if you haven't already and, if you're a longtime listener of the, show help us reach more people by, sharing practic AI with your friends and, colleagues thanks once again to fastly, and fly for partnering with us to bring, you all Chang doog podcasts check out, what they're up to at fastly.com and, fly.io and to our beat freaking, residents breakmaster cylinder for, continuously cranking out the best beats, in the biz that's all for now we'll talk, to you again next, [Music], time, k |
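The two-layer autoscaling advice earlier in the episode — provision GPU nodes ahead of pod demand, because node scale-up is much slower than pod scale-up — can be sketched roughly as follows. This is a toy illustration of the "skate ahead of the puck" idea under assumed names and numbers (`desired_gpu_nodes`, the 1.5x headroom factor); it is not Banana's actual implementation.

```python
import math

def desired_gpu_nodes(pending_replicas: int, running_replicas: int,
                      gpus_per_node: int, headroom: float = 1.5) -> int:
    """Skate ahead of the puck: provision nodes for *projected* demand.

    Node scale-up is slow (minutes) while pod scale-up is fast (seconds),
    so we keep `headroom` times the current replica demand in GPU capacity,
    giving the cluster lead time before pods land on it.
    """
    demand = pending_replicas + running_replicas
    target_gpus = math.ceil(demand * headroom)
    # Never scale to zero nodes here; keep at least one warm node.
    return max(1, math.ceil(target_gpus / gpus_per_node))

# Example: 3 running + 2 pending replicas, 4 GPUs per node, 1.5x headroom
# -> ceil(5 * 1.5) = 8 GPUs -> 2 nodes.
print(desired_gpu_nodes(pending_replicas=2, running_replicas=3, gpus_per_node=4))
```

A real system would feed this from the Kubernetes API (pending pods, node pool size) and call the cloud provider's VM automation API with the result; the separation — pods scale within capacity, nodes scale ahead of it — is the point being made.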
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | MLOps is alive and well | Worlds are colliding! This week we join forces with the hosts of the MLOps.Community podcast to discuss all things machine learning operations. We talk about how the recent explosion of foundation models and generative models is influencing the world of MLOps, and we discuss related tooling, workflows, perceptions, etc.
Leave us a comment (https://changelog.com/practicalai/210/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
• Changelog++ (https://changelog.com/++) – You love our content and you want to take it to the next level by showing your support. We’ll take you closer to the metal with extended episodes, make the ads disappear, and increment your audio quality with higher bitrate mp3s. Let’s do this (https://changelog.com/++) !
Featuring:
• Demetrios Brinkmann – Twitter (https://twitter.com/Dpbrinkm)
• Mihail Eric – Twitter (https://twitter.com/mihail_eric) , GitHub (https://github.com/mihail911) , Website (https://www.mihaileric.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
MLOps.Community (https://mlops.community/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-210.md) | 6 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen — check them out at fastly.com — and to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Well, welcome to a very special episode. I'll say this is an episode of a podcast — actually, of two podcasts, Chris. This is a different kind of episode than we normally do, because we're privileged to be joined by Demetrios and Mihail from the MLOps Community podcast. Welcome, guys! Yeah, we're hiding over here in the shadows — in the MLOps community, the shadows of the MLOps community. We don't really have a guest this week; we have four hosts. That's what we have. This will be interesting — there are a lot of chefs in the kitchen right now. Yeah, that's right, which is obviously what listeners want more of. We're just going to answer each question with more questions. I don't know if we have the host with the most, but we have the most hosts. There we go, there we go. The real question is, who's going to ask the first question? Oh, I was going for it, but now you can — though technically, that was a question. I'm not sure, so maybe I'll answer your first question with my first question. MLOps and community: I love everything you all are doing with the community over there — of course the podcast, which we'll link to, and the meetups and the content you're putting out. My basic question, though, is this: we have heard "MLOps" however many times on this podcast, Practical AI, and we've heard even conflicting statements about what that even is. So, since you're branded the MLOps Community, from your perspective, what does that mean? Who's in the community? How is it defined? All that stuff. So, at a really high level, MLOps is an abbreviation of machine learning and operations, and one quick, easy way to think about what MLOps does: it's fundamentally about the question of how we take the machine learning you typically see in research and deliver it in the real world. Until relatively recently — I want to say the last few years — machine learning was starting to get really hot. It really kicked off around 2012-2013, when we had this resurgence of deep learning that really picked up, and people got excited about image-recognition systems and speech (ASR) systems. For a long time a lot of machine learning, even cutting-edge machine learning, still happened in the lab — in the research lab, either at big institutions or at actual academic institutions. It was around the time TensorFlow came out that we started getting this real effort to standardize the process of going from something you see in research — new models, new AI systems — to actually putting it in front of people. How do we get these systems to the point where they can start delivering real value to real people? MLOps is pretty much the set of frameworks, tools, and systems people have developed — based heavily in software engineering as well as DevOps — to make that translation from research into production much easier. It's really just a set of principles, techniques, and tools the community has developed to make that translation possible. Yeah, I think around the time we started this, Chris, one of our feelings was — and now we're a good ways in, but I still feel this way — that it's not an easy leap to just start learning. Say you do the fast.ai course or something, and then you get a position as a data scientist, or whatever the title is. There are overlaps between what's happening in that position and what you learned in something like fast.ai — certainly that's wonderful stuff — but it's only a very small component of a larger system of practicalities that are not that easy to access and learn about. Totally, and they're still trying to standardize. The other piece is that depending on your use case, MLOps might be one thing or another to you. It's really not super clear right now: if you're using deep learning, that's one type of MLOps, and if you're doing a recommender system and you like to play in the trees — the decision trees, those random forests — then it's a different type of MLOps. I think we should dive into the weeds on what those are as we go, but I actually want to throw a hand grenade into the conversation and bring up DevOps — or DevSecOps, depending on where you're coming from — and the differences from MLOps. How does it integrate? Is it the same? This is the show where we can clarify all of these things between us. We should just dive in. By DevOps, do you mean "a Docker," Chris? Oh yeah — what do you mean by DevOps? Something you put in the container? Well, once again, I don't want to impose my definition — these are the things with "Ops" at the end that everyone is talking about. If you are out there in the world doing models of various types — might be deep learning, might not be, Demetrios has points — it also integrates with software. So where does one stop and the other start? If we can start sorting through what we think these things are, I think that would be useful, because you can put a bunch of data scientists and software people in a room and they still, today, don't necessarily know what each other means in these conversations. I aired a bit of my dirty laundry and my pet peeve about people saying "a Docker" or "Dockers" — "I deployed a Docker" — but yeah, those are the mines you hit when you put a software engineer and a data scientist together. I think it touches on a really astute point, which is that the definitions are a little fuzzy. They are — to be perfectly honest, they're very fuzzy in terms of where the domain of one begins and the domain of another ends, how we cross these territories, where the boundaries are set up. Fundamentally, MLOps has been and is a very interdisciplinary field, and maybe the easiest visual we like to use when explaining what MLOps is: imagine a three-circle Venn diagram, where one circle is data science, another is software engineering, and the third is DevOps. MLOps sits squarely in the intersection of those three circles, because the stuff that makes up MLOps, and the way people use it, touches on every one of them. There's modeling — fundamentally you're building models and training systems, and you have data concerns, which falls out of the data science circle. There's software engineering, because you're trying to build good systems that have to be engineered: you have to write code, and it has to be well-tested, unit-tested code. And the DevOps part is really about how you streamline that process — doing this once, putting it in a Docker, then doing it multiple times, repeating it over and over systematically, so that, ideally, I have 100% uptime on the machine learning systems I put into production. That's really the DevOps component. So it borrows a little from every one of those circles. I see what you did there. And just a shout-out to Docker Compose, because you can do so much with it — let's not forget the unsung hero of this whole thing and give a huge shout-out to Docker Compose. Or the bash script — the true OG hero, the real unsung hero. But, as Mihail was saying, it's not only one person who should be tasked with this MLOps title, and a lot of times you can get yourself into trouble if you're trying to build out something to productionize your machine learning and you expect your data scientists to be able to do everything — or you have a very naive sense of what it takes to actually productionize machine learning. And not just productionize it in a way where you get a model out, but really make a process out of it. I really enjoy an idea here: MLOps is not just putting a model out there once; it's the act of n plus one. MLOps is a machine learning model in production — but not just one, n plus one. That's when you get into: what are the processes around this, and what do we really need to be thinking about when productionizing our machine learning? I have a question straight off of that. When you're talking about getting out that n-plus-one model, MLOps at that point is kind of late in the process — you're getting the model out into whatever your environment is, the cloud or wherever. Where does that differ from the DevOps world? It's easier to see the difference at the front end, because you're starting with two different things you're trying to put out into the world: in one case you start by training a model of some type, in the other you're working on software, and then you say, okay, we're ready to go. But somewhere down those pipelines they start looking very similar in a lot of ways. How do you look at that? How do you differentiate them? Do they merge? Do they not? How do you see it? So, that's the fun one we talk about a lot in the MLOps community: isn't MLOps just DevOps? The dirty secret, I think, is that yes, it is just DevOps — but then you sprinkle a little bit of data on it, and magically it's not DevOps. Mihail can add a lot of great insight here, but we were just talking about this probably a few hours ago when we met, and it was very much: MLOps is more about software engineering than about the modeling piece. It's much easier for someone who knows software engineering to get into MLOps than it is for a machine learning researcher, and that's what we've seen over the years. We've really recognized in the past couple of years that there's a very clear engineering discipline that comes with MLOps, whereas in the beginning it wasn't so clear — it was "oh, data scientists should be doing this," and now it's a little bit like, hmm, maybe it's not the easiest thing in the world to teach data scientists how to code. It's funny that you say that — and I'll leave the company unnamed — but a few years ago I worked at a different place and I had my first AI team, a team of data scientists, and I was the only one who came from a software development background; everybody else was straight data science. What we found was that it was a bit of a struggle to figure out how to create that MLOps line. There's an entire engineering skill set that people come into this without realizing up front — it's an insight, like you just stated, that takes them a little while to reach. And the interesting thing, when we look at the evolution of MLOps over the last few years, is that it seems to have been born a little bit the wrong way, in that data scientists started learning these principles and software engineering practices first. So we had data scientists going, "oh yeah, unit testing for data, that's super novel," and the moment anyone from DevOps or traditional software engineering entered these conversations, they were like: "What are you guys talking about? This has been a thing for decades. What do you mean, testing your model code? What do you mean, testing different parts of your software engineering stack? What do you mean, 'using a Docker'? Why are you making it seem like this is some profound insight you've found?" And yeah, I don't want to say there was tension, but there was definitely a "WTF, what are you guys talking about — this is not as special as you make it seem." Yeah, you have the different sides of the spectrum: people coming at it from the engineering side saying "this is just DevOps," and the data scientists coming at it from the opposite end saying, "wow, this is so much different from anything that's ever been out there before, so we need to build a whole new discipline around it." Yeah. Also, I think part of the confusion is, just like everything else, labeling and naming. There have been data scientists and researchers over time who said, "yeah, this sounds great — machine learning plus Ops," and what that ends up being is experiment tracking, which actually is
not new because like people have, been doing this in high performance, Computing for like decades and decades, right but this sort of solution around, like let's say weights and biases clear, ml like I love these tools we use Clear, Mel um like the value I get out of that, is experimenting at scale tracking, experiments making sure I know like what, assets are what like maybe logging data, sets and like input output all of that, stuff but that's all like experiment, tracking and production of the model I, sort of tend to think about mlops as, like everything after that right like, you have a model that's all great like, whatever happened before even if you, want like lineage about what data was, input to your training which output what, model like that's all tracked that's, great there's still some huge hurdles to, get over in terms of like this model, being called from within a software, application that has like real users on, the other end and like all of the, potential implications around that so in, my mind that sort of Distinction is, really confusing for people and I don't, know maybe you can comment on this, because it's maybe something that's come, up on your podcast C I'm not sure have, you also seen that confusion of like Ops, versus experiment tracking and the fact, that a lot of data scientists might, might think they it's like a I think, that word that you're saying doesn't, mean what you think it means or like, whatever little princess bride there, yeah exactly I think that word, doesn't that's so classic it reminds me, of the meme too where you have the, little girl who's standing in front of, the burning house and it's like worked, fine in my Jupiter notebook it's an Ops, problem, now I love Jupiter and all that but when, I like when someone told me there was, like an export to python script function, and Jupiter notebooks like it absolutely, frightened me to like no end like I'm, like why does this feature exist this is, like I guess I understand 
like where it, could have come from but it just like, frightens me that maybe it's that, evidence of like you know the disconnect, between oh I wrote some code and did, this thing between that and like, software development and integration I, think it's evidence of that gap for sure, I've kind of been uh perusing your goods, on on your podcast which is awesome you, know various inflammatory titles such as, air flow sucks for ML Ops we learned, from you guys though let me just tell, you that I learned from on the title or, the naming conventions when Louis was on, and it was like mlops doesn't exist or, something like that or mlops is a lie oh, yeah I was like these guys went and did, it oh man like my life, blood it's amazing that we went from, that title to having the mlops community, on the, podcast that's why yeah I knew we needed, to have a conversation because it was, like all right I like it and I think, most people that if anyone knows me they, know that the most critical of mlops is, myself and I feel like I kind of have to, be and that's why I love when Lis says, that kind of stuff and we went and we uh, we grilled him hard when he came on our, podcast too because it was very much, like okay what do you mean by that and, why is that and I'm very much trying to, see at every juncture what is going on, with mlops and is there something new, like my new favorite thing to ask people, is is mlops going to get haded and that, is like with chat GPT or with, foundational models is mlops going to, become obsolete or it's going to become, something that old Legacy companies use, and so that's my big question right now, and what I love thinking about and those, thought exercises that are going through, my head and so I almost feel like the, pressure of needing to do that by the, way you realize that there's a very very, current version of haded you know that a, certain large company has a big fear of, right now I think the big fear is, instead of haded would it be 
Googled, yeah because of chat gbt you know you've, seen all the stuff in the last few days, about you know is the search algorithm, going to be gone is it done for, I'm I've got a bit of a contrarian, stance on that okay go for it when I see, that coming up a lot I'm just kind of, like there's no way it's going to be, replaced because first of all I said it, once and I'll say it again on this, podcast like my New Year's resolution, was to be as confident as chat GPT I saw, that I saw that because let's be honest, like it's the amount of that it, spews out is incredible, I want that kind of confidence when I'm, spewing at people you know like, give me that please and so that's the, first thing like you don't know if you, can trust it but the other thing is, there's a lot of stuff on there like I, just see them as two completely separate, uses and it's going to definitely take, parts of what you would Google and that, is for a good cause because a lot of the, stuff that you do Google right now it, seems like the user experience isn't, that good and so I would like to have a, better experience when I'm doing that, like for example if I just want to know, the recipe of whatever my favorite vegan, bean burger I can ask chat GPT and I, don't have to get like this verbose, fully SEO optimized with a ton of ads, and popups and everything that user, experience is horrible and so please, like don't make me have to go through, that but I'm feeling some bro love here, by the way as a vegan I'm just saying, thank you for using that example all, right there we go there we go nice but, yeah then there's going to be a lot of, other use cases where it just isn't the, right medium in my opinion I don't know, about you all but that's my take on it, just bringing it back to the Ops, perspective here it's this weird like, cuz even with like ftuned or like, transfer learned models where you have, this like Foundation model type of, workflow um, generally up until recently the thought, 
process was okay well I'm still, performing a task and that task like I, can generate like a table of tests for, that task right and like minimum, functionality I can put that as part of, like my automation for when I release, this model like however you do that, whether that's like you know whatever, mlops tools you want to use for that and, now you've sort of got this like, everybody's thinking about these, generative models and like okay well if, I'm completely open domain, how how do I mitigate risk in those, situations with these sort of open, domain models and I think there's so I, think there are ways and I think that, some of the same workflows will apply, but I do think it does kind of tweak, your mindset a little bit to think about, some of these things I don't know if, that's that's something that you all, have been thinking about with the kind, of most recent wave of like generative, models it's interesting because they do, change the paradigm quite a bit in a lot, of ways there was a hackathon so I'm I'm, based in the Bay Area and there was a, hackathon it was like last week hosted, by scale where people were just hacking, on different things Rel to AI but a lot, of them naturally because geni, generative AI is a hot thing to do there, were a lot of applications built around, generative AI use cases and one of them, was actually someone tried basically, built an entire back end using nothing, but llm calls like they were basically, just making calls and updating literally, data structures in the back end fully, using generative models naturally this, wasn't like amazing right I mean in a, sense it was a little bit hacky to try, and like update dictionaries or you know, like literally objects via these API, calls using things like codex and and, whatnot but it does force you to ask, this question of like well okay now it's, maybe not great maybe now it's a little, bit brittle but we see this out like, let's like you know imagine for a second, what 10 
years from now looks like I mean, the' 60s computers took an entire room, right and there was only IBM that had, them but here we are with them like, literally on my wrist you know we in my, pocket and so it's not unreasonable to, expect that like some of the ways we, even do some of these workflows are, going to be touched by this right it's, going to change how we operate with our, software engineering systems and by, extension by our machine Learning, Systems I mean what can be done away, with once we have the only interaction, being like language to language style uh, you know kind of translations I also, wonder if there's like a meta layer here, where like part of the question is how, do we test and like, set up the Ops around large language, models and generative models I think, that's one question that can be asked, and are those the same or are they, different is one of the I was going to, ask that anyway how do they relate how, are they not the same sure and how can, one feed into the other right like what, you were talking about just now Mel is, that there's this idea of like how could, generative models help me with my ml Ops, right like let's say I'm trying to put a, model mod into production and I'm trying, to test that well or like I'm like, searching through logs and other things, and trying to parse that out like if I, can ask an agent to like help me do some, of those things which to be honest are, fairly predictable um if I've seen them, before yeah it's very meta I know but, there's like the one side like how do I, put these things into production and as, you say Chris like are there differences, with that in the second place like well, could I actually like bring those around, the other side and help them do my have, those models help me do my mlops tasks, right so I am 100% in the same boat as, you I know the CEO of u.com I he posted, on Twitter like what is the best use of, a large language model in your mind of, course I I instantly thought 
like yaml, fluency oh, man that would be incredible if it could, do that like let's just be honest how, much time would it save if it could just, go and set up my kubernetes cluster for, me and uh I've talked to a lot of smart, people about that and I think a lot of, people are like no that's not going to, happen like that's too far but I also am, a little bit like yeah well there's a, lot of other stuff that we thought, wasn't going to happen and it's, happening right now so I don't think, that's too far I like that is my main, pain point like we are this far into, kubernetes and that still hurts like if, you're not just using someone else's, implementation you're going to set up, your own cluster it's still painful and, like this is known stuff it's just, a pain in the butt that's a great thing, right I don't think that's too far at, all and I want it now yeah exactly maybe, it's just like a fine tune away let's be, honest maybe that's all it is and, somebody needs to think about that I'm, sure I'm not the only one that has, thought about that and so but I'll let, Mr Eric chime in too I know he has some, strong thoughts and I'll like preempt, his thoughts on this with his idea of an, incredible app on top of generative, models was stealing somebody else's IP, you want to tell us about what you've, created Mr Eric oh that sounds perfect, and by the way just disclaimer I'm not, implicating myself in any of the, following, conversation, this is not hacking advice oh goodness, this show just took a turn oh boy I, didn't realize I was going to be, basically implicating myself criminally, by coming on this podcast but we're, going to edit this out right this is, edited out sure sure we'll say that now, but we didn't sign a contract of any, type so and there are no intelligence, agencies listening I promise absolutely, not right absolutely not yeah I mean I, guess I'll maybe spend a little bit, of time talking about the use case that, I think Demetrios is
referring to which, it's a little bit of a toy use case we, didn't do it because we were trying to, get sued that wasn't the goal but, hopefully it should spur a little bit of, creativity what a, start what a start well okay maybe maybe, like 10% we were trying to get sued no, the use case to give an idea of like, what has become possible outside of you, know yes the really practical use cases, generating yaml generating unit tests I, think these are all in a very on a very, serious note these are things that are, going to be possible I actually don't, see this as being too far off and I, don't you know not even like a decade, out I see this being like the next few, years being able to spin off things, like that is just going to be totally, feasible you know this is like language, based kind of generation generative, modeling the other one that we haven't, really touched on as much which has also, really captured people's interest has, been more of the image based right like, Vision based whether that's a single, static image or that's an entire video, or you know in some cases even audio, just like different modalities beside, pure text and so one that has certainly, become really interesting because of the, rise of things like DALL-E from OpenAI and, then Stability uh you know Stable, Diffusion there have been these like, incredible photorealistic images in, different styles that you can just, prompt with literally human text right, which has never been possible before or, at least not at this level of quality, and so one of the things that my, co-founder and I actually uh here it, comes yeah that we were working, on was this little toy application, um that we called Rick and Mortify and, the basic use case was uh we were, big fans of the Rick and Morty TV show, and uh you know we love it we think it's, super like it's very great, content that did not stop a cease and, desist letter from, coming there's a back, story
here that I found out after we, released the application which if you, guys are curious I'll go into but we, were really trying to test this, hypothesis of like okay you have these, vision-based models that are incredible, you have these language based models, that are incredible how can we like, merge them to try and do something, between like kind of at the intersection, of the two and so what we came up with, was like well can we personalize, episodes of Rick and Morty like if I as, a super fan of the show want to imagine, a new episode if I provided a premise if, I provided you know a set of characters, that I wanted maybe myself eventually, but let's just start with the basic, characters that are in the show already, Rick Morty Summer you know Mr., Poopybutthole Mr. Meeseeks like all these folks um, that is an actual character name I just, want to point out for the audience I did, not just make up a name to try and be, profane that is a character Mr., Poopybutthole, and I'm not sure we've ever had that, word on the show before, actually I didn't know it was a word, until the show so you know we're both in, this we're all new to this game what we, essentially built out was you could come, to this application you could provide a, premise of what you would like the, episode to look like so Rick and Morty, go to the Practical AI podcast and have, a great conversation about generative AI, and then you pick your character and, using a combination of vision based, generative systems Stable Diffusion, and then the GPT-3s right the you know, these kind of latest generation GPT, models we were able to generate not only, the visuals we were able to generate like, a script of effectively a storyboard for, a new episode this is just like a first, use case like well we're literally, getting new episodes you know maybe, there's like 5-10 frames of this episode, but you're seeing some flavor of a plot, with dialogue with accompanying visuals, and you know sure it's a little
bit, rough but extend this out to the future, right like what happens then like this, has become literally a fully fledged, episode um and that's where this could, go right and so no cease and desist, yet but people did play with it people, had a lot of fun with it we're not, making any money on it so you know don't, worry Dan and Justin like it's just, fan art Friday we're just, trying to show our appreciation for the, show you raised something it's kind of, funny because in just day-to-day, conversations about AI topics like a, new model comes out and you see all the, media stuff where people are just kind, of bashing it and telling you all the, problems with it and everything and to, your point you just said this that's why, I'm bringing it up it's like but think of, what we can do with this tomorrow like, today we have this and it's imperfect, and yesterday we didn't have any such, thing think about if today we have the, imperfect thing tomorrow is going to be, pretty amazing and to your point there, like what you're doing there is like way, out there but it's not far from being, very mainstream so my question about, that is like you have the training data, and you have the base show that a human, made right and so without that like I'm, trying to extrapolate into the future and if, it's just going to be us remaking a, bunch of shows that we made back when we, used to do it all by hand or is it uh, like that's kind of my question, there right I think that's a generative, function right there I don't think that, you have to start with where you were uh, in the sense of part of that function at, the very front end of that workflow is, going to be generating different, possibilities that are not directly, linked uh to the training data and then, like carrying it from there I think, it's the future of, [Music], entertainment Hello friends this is, Jared here to tell you about Changelog++, over the years many of our most, die-hard
listeners have asked us for ways, they can support our work here at, Changelog we didn't have an answer for them, for a long time but finally we created, Changelog++ a membership you can, join to directly support our work as a, thank you we save you some time with an, ad free feed sprinkle in bonuses like, extended episodes and give you first, access to the new stuff we dream up, learn all about it at changelog.com/++, you'll also find the link in, your chapter data and show notes once, again that's changelog.com/++, check it out we'd love to have, you with, [Music], us what I think is kind of um, disconcerting is the right term um I, don't know the time scale on this, or whatever but like imagine like all, these data sets around language models, especially are scraped from the internet, right and like a lot of these image data, sets are scraped from the internet right, so there's a proliferation of these, models and the internet is being filled, with computer generated content right so, the next scraped versions of the, internet like the next Common Crawl the, next whatever like what proportion of, that data set is coming out of, generative AI models now I think there's, interesting things going on of course, with like detecting what is AI generated, and what isn't I think I've seen like, what OpenAI came out with for their own, system and of course like you say Chris, everybody has like criticisms of that, already but there's uh like many other, people exploring this as well like GPTZero, and like other things that are exploring, like how do we determine what is AI, generated and not, and so that's where like I don't know, what the implications of that are for, these large scale data sets down the, road but that's sort of where my mind is, going more so than oh should we be, populating the internet with a lot of, this generative art and other things, even if it's terrible well I'm having a, lot of fun with it right now but there's
sort of second order effects I guess no, worries then man yeah well I'll be dead, by that time I don't know, maybe no no no no this will be soon it's, it's a really interesting thought, experiment and again one that is closer, than I think we believe this idea of, what happens when the bul of these data, sets are actually generated by systems, and I can see you know some positive, effects and some that you would be like, okay this might be a questionable, downside on the positive side you know, these models like the generative ones, especially the language variety are like, very good I mean grammatically correct, right they're like very semantically, good in terms of what they output does, this mean that now we the bulk of the, data that's being trained on is just, going to be like way higher quality, writing than you would typically find in, a typical Reddit post right or something, just in the deepest corners of the web, and then now if you're training on data, that's significantly better you have, this compounding effect of like well now, the data I'm training on is better so, the model is only get even better at, writing of some kind and then we mix it, in with some other more diverse writing, and then is it going to continue to, Compound on itself in terms of quality, the downside I mean one you could, hypothesize is that like these models, are only able to generate certain kinds, of distributions of data right only, certain kinds of things they can write, about or talk about so now when you're, just injecting all these training sets, with like a very skewed distribution of, topics of ideas on these topics Etc how, do you ensure that actually you're still, giving your model enough of a, versatility in what it sees these next, Generations that are then train on these, data sets to ensure that it's still a, general purpose model right like what if, 90% of the content that's put out there, is just really bad marketing copy you, know Facebook ads or something you, 
overfit yeah it'll overfit exactly to, that and then what happens to these next, generation systems they actually might, be hurt in the long run yeah it's like, we're in the Golden Era yeah I think, that you can have an mlops community, in my opinion for a long time because, there's such a wide variety of problems, that people are still dealing with so on, this like very far end we're talking, about generative AI being part of like, my day job as part of an international, NGO like we're dealing with problems, still like where hey there's no internet, in this place and like we're running our, model on like an Android tablet or like, whatever like there's this range of like, okay what's the problem that happens, when I scrape the whole internet again, all the way down to like how do I run, this small model on an Android tablet, and I don't actually see that changing, for some time yes the world's like, changing a lot but there's still like, such a wide variety of issues and I, think fun challenges to like wrestle, with around this concept of like, operations plus AI and machine learning, and so yeah it is sort of hard to define, in that sort of way but it's also really, exciting because of the wide variety of, things that you could be involved with, like as a devops person or software, engineer or data scientist like there's, plenty of problems that have to do with, like running even like a much older, language model on your phone, and like other problems that have to do, with like various different scales or, different modalities of data all these, things it's yeah and that goes back to, kind of in the pre-show that our, listeners were not there for we kind of, talked a little bit about that level of, diversity like how do you guys see that, so I mean Daniel you've kind of talked, about almost an extreme case of kind of, edge concept you know being your target, you know and having this slow Android, thing and yet we're used to being just, like gluttonously
resourced in the cloud, you know everything you could possibly, want there ramp it up ramp it up you got, all the stuff you want there and you, have this incredible diversity so for, you guys thinking about ml Ops you know, so much and that how do you deal with, that like when you go from big company, to one person struggling to work at all, how does ML Ops look when you're talking, about diversity of use cases plus, diversity of users and uh that you're, serving how do you make it a thing you, know how do you keep it all together, there a few a few answers to this, question actually so the first part of, the diversity of use cases and in a, sense we've gone a little bit backwards, from something we said before where on, the one hand we're like oh generative, systems are just going to do it with all, of this and then now we're back at like, well but it's there's like enough stuff, that we still need to solve that will, probably be around both and we're not, quite there yet exactly exactly and to, that point I actually I was actually, asked this in a podcast once uh few, weeks ago where I was asked like what do, I believe will be the position like the, role that will exist 10 years from now, and I was asked like we'll be like a, data scientist a prompt engineer an, mlops engineer like where would I put my, money and I still answered mlops, engineer like machine learning engineer, you know like kind of in that in a, similar category uh because I do believe, that these same problems will persist, whether or not they're for old school, decision tree based models discriminate, models that would use you know maybe 5, 10 years ago even pre- deep learning or, these new GPT 500 whatever will will, come later models stable diffusion, 10,000 you know what I mean like the, same problems will persist how do we how, do we operationalize it how do we make, it scalable how do we keep uptime on, models so people can interact with this, these are all questions that machine, learn 
you know that mlops is, fundamentally trying to solve and, address and whether or not you're using, kubernetes today or some prompt, engineering based kubernetes tomorrow or, so you know you make it even more, concrete like Airflow today versus like, an Airflow for prompt engineering right, which is what people are actually, developing today the same sort of, principles and the same concerns are, going to apply and so I don't see that, going away anytime soon as long as, there's a machine Learning System as, long as you know I would assume all, of us here in this room in this virtual, room anyone who's listening here, believes that AI is going to be the, future it's going to be here for decades, then the same questions will still have, to apply and so that's like the first, part I want to just like throw out there, is mlops is here to stay whether or, not it's kubernetes or whatever comes, after kubernetes we hope it comes soon, we hope it comes soon, um someone needs to get it soon I have, a tip there that I'll provide later yeah, I don't know if you all have seen, what uh Erik Bernhardsson is doing with, Modal and Modal Labs I've been playing, around with that recently I've been, pretty floored by it but anyway that's a, whole other side topic and episode, which hopefully we can have soon but, totally but I think to the second part, of the question which was how do, different organizations think about, maybe the mlops question and something, that we do want to address here which is, it does depend it really does depend in, terms of the maturity of the, organization and budget time etc these, are all different axes that, fundamentally define how a team or a, business should think about its approach, to mlops and there's different axes that, we can go into exactly what they are but, you know open source versus not open, source like are you going to use, something are you going to stitch, together a bunch of Open Source tools or, you going to
use Sage maker out of the, box uh how much money do you have can, you get by with just spinning everything, up on your own all these different axes, different organizations have to think, through and it becomes not a one siiz, fits-all it really is like it's a, function of all these different, parameters um to really tailor the right, solution to the organization and there, you go you didn't realize that Mr Mikel, has a little consultant in his blood, do I get to put another plug here, another Shameless plug is that what this, is an invit no as many as you want yeah, as many as you, want that is the most Consulting answer, you can possibly give to any question is, like well for this rate I can tell you, more details on what comes after yeah, exactly you want to go at it but I think, what I just will add a quick piece to, that which is there was a time in the ml, OBS community that like a week wouldn't, go by in our slack where someone would, not share the you are not Google blog, post and it's like the amount of people, that try and go at it and try and get, that especially because Google puts out, so much great thought leadership on, mlops and they have the zero level zero, level one level two or they have the, like ml test score all that stuff and, people think that straight out from zero, to one you need to be creating, everything automated it needs to be like, the most high performant bulletproof, system that you can think of and it was, just setting up a lot of people for, failure and I think we've moved past, that because I haven't seen the you are, not Google blog post being shared as, much in the community which makes me, think people recognize and they're a, little bit more self-aware when they're, trying to create their systems their, jobs you don't think they're just off in, the corner crying yeah or or they just, don't think being Google is a good thing, anymore just objectively given given it, yeah that might be it too I think uh I, was listening to I 
don't know if you all, listen to um the Indie hackers podcast, but uh recent episode like I think they, made this good point they were talking, about like because they talk about a lot, about like bootstrap startups and stuff, like that and uh and one of the points, they made was like you know whenever um, like base camp was around or like, starting up in that sort of thing like, they made such inflammatory statements, about like like taking venture capital, is stupid like why would you ever do, like there was a need for that voice to, be in there right that voice is still, there with the founder you know it's not, changed but now like you hear less of, that I think and it's sort of like, normalized in that like there's still, different perspectives but there's not, like as much like it's become more, normalized to have these like more, nuanced discussions I think with mlops, there's sort of like you know we've all, made our sort of inflammatory like uh, podcast titles like whatever it was, mlops is dead or you know I forget what, it was um and so there is probably like, still this we're kind of feeling out, where like the normal is and where, things settle down to and providing like, a balanced perspective where yeah like, you probably shouldn't be doing mlops, the same way Google is but you also, should probably be doing mlops it's just, like where on that Spectrum do you land, and what type of tools make sense for, you you know in real life I don't think, it's you probably not should not be I, think it's you can't do ml Ops the way, Google does I mean just from a resource, standpoint you know most companies don't, have that team available and that set of, tools and you know I like the fact that, we're talking it in a more realistic, context here you know for the vast, majority of us out there that are not, you know accessing the best of the best, in all the categories it's not possible, for many of us we have to settle for, something that's doable kind of going 
to, demetrius's point you know you have to, find that level that you can do it you, can sustain it and yet it's still, incredibly productive even if it's not, the Google version yeah and one thing, I'm fascinated by just because there is, almost, this opensource you've got one person, trying to hack something together and, looking out there and seeing what's on, the market that they can get for a price, point between free and cheap and then, you've got the Googles that have built, everything and have so much time and, ability to do that and in between those, two points you have a lot of companies, that popped up and they popped up in the, last like three to five years and as, Mikel was saying we came at mlops from, this data science perspective and so I, think in the beginning a lot we're, trying to cater to them and then some, were like whoa wait a minute there's, like platform engineers and so they try, to cater to them and then you're seeing, now I almost wanted to like change my, title for what I do on LinkedIn as I'm, just going to be like the uh I ride hype, waves because mlops was a complete hype, wave and we felt it and especially like, I just got lucky because the pandemic, hit and right when it hit I was working, for a company that was trying to sell, mlops tools and that company went out of, business but I was in the mlops world so, I figured I would start this slack, community and then it took off right and, so I was able to ride that hype wave and, we really felt it over the last 2 three, years and now it's like all right now, there's this generative AI hype wave and, so if you talk to VCS and the ones that, poured a boatload of cash into the mlops, market they're now like yeah that's kind, of not really that big of a deal anymore, we're not going to follow up on all, those Investments what we're going to do, is invest in the next AI tool and so I, love thinking about that and how now, what is the new hype cycle of the AI and, generative AI Lang large 
language models, like where's that going to play out you, know we've been talking about it there's, a lot of potential out there but where, are we going to go with that and is it, like just a bunch of money pouring into, it and who knows what's actually going, to happen it's kind of funny when you, say that and it's like there's an, oversimplification you know when the, market observes these things you know, and you know mlops well now we're past, that you know that's been solved you, know we're going to web 3 you know what, whichever hype cycle it is but I I think, when you look at what's happening it, takes all these things to have gotten, you know right now we're saying, generative because that's kind of the, sexiest part of the puzzle but it's not, just generative it's the fact that you, have mlops now that's matured a little, bit and is supporting all that you have, the large language models that people, like Daniel have been working on if you, have all the Transformers that are well, established if you didn't have all of, those components the current hype that's, now being attributed to generative would, not be happening and so it's once again, an oversimplification by the market on, the sexy piece but it's that whole, ecosystem that's evolved over the last, few years that enabled all that to, happen and so we're seeing a really cool, moment definitely but it's really the, fusion of it all as opposed to just, being generative um you couldn't have, generative today without that I think, that's such a fantastic point like I I, just want to like sit on that a little, bit longer because when the gpt3 paper, came out this was I guess toward the end, of 2020 maybe middle to end of 2020 you, know this is like long thing that you, know I remember reading and the most, interesting in my mind achievement of, that whole system was not even anything, about modeling right like the, fundamentally the building blocks of the, AI architecture if you want to call it, that was 
just we've been using these, systems for years in some sense it was, really the fact that they had built this, incredibly good software infrastructure, to build out and train large-scale systems at, this scale over this many GPUs at this, latency to make sure that you know the, updates for the gradients could happen, fast enough that this wasn't going to, take 50 years to train and they did that, all which fundamentally was really like, an mlops challenge I mean at its core, agree you know being able to architect, that kind of a system is the complexity, it's not the fact that there was this, new scientific achievement that we, really came up with it was really like, an engineering achievement and so in, that way it totally was a layering on of, things that we had seen before but you, know now they're like old hat right like, now we have open source repos that can, literally approximate the same effect, like DeepSpeed from Microsoft has, become really fast and widely used, you can train large systems at that, scale but the people that pioneered, those systems really had to solve some, mlops challenges in their own right it's, all right there they haven't gone away, you know it's still right there it's a, team effort for all of those different, constituent parts to put the whole, together but at this moment uh the word, generative is the front man apparently, you know as we record this today it's, pretty amazing that it's mlops, that has put that all together and kept, it together kept them in lockstep so, that you can create the new things of, today yeah I mean another example of, that is like all that we're hearing, about language model chaining and, that sort of thing like this is like if, you look at each step of that process, right LangChain or whatever it is, those are things we've been talking, about for a long time like there is an, operational burden though to like chain, these things together and make them work, well like in concert
right that's really, like one of the fundamental things that, we learned like when we talked about, ChatGPT right like the, language model existed reinforcement, learning existed human feedback existed, like all of these things existed like, the chaining of them together in a, certain workflow is the real interesting, piece so I do think that's really, exciting where a lot of those sort, of chaining operations and like bringing, things together in unique ways that is a, lot more possible because the tooling, has gotten better it doesn't like move, us past mlops actually it just like we, use mlops slightly differently and if, anything I think it becomes more crucial, because there's so many moving pieces, right shoulders of giants all the way, through yeah you combine that with the, Einstein turtles all the way down kind, of thing it's shoulders of giants all, the way down it's yeah the one thing, that I'd add to that is very often when I, have described sort of the progression, of mlops over time like people often, like and use the Gartner hype cycle, right to describe trends in technology, right you have kind of the initial hype, then the kind of the trough, of disillusionment and then this, gradual climb upwards right I've often, said that right now where we are we're, at that gradual climb upwards like the, mlops systems and the technologies, developed are like standardized and, they're becoming more commonplace and, people are using them but the thing is, that linear climb upward while it is, where a lot of value is extracted is not, really what a VC would look for they want something, that's like exponential in growth and so, they're never going to ride a hype train, that's like a linear climb upwards even, if that's where people are actually, deriving value they're like well what, about how do we get to the next thing, and the next Gartner hype cycle where, there's a next big inflection point up, that we
need to ride because that's, where you get real 100x gains if you, know that's the kind of thing they're, looking for yeah know and ironically the, place where you find all that value, there after that initial hype is from, the trough of of disillusionment you, know that's where you kind of go now I, understand now I know what we really, need to do and you get that really good, growth after that that's a great Point, that's a great Point good yeah this has, been amazing yeah this has been, awesome I have not needed to carry the, conversation at all so I appreciate you, all doing the heavy lifting for me, usually I I'm constantly thinking about, that and you made it very easy today so, I appreciate that this was a fun, conversation yeah this is awesome we, should do this more often yeah we we, should we should Circle back and see if, any of the things that we said actually, um are true next year which maybe some, of them I'm going for like 25% if I get, there I'm good well I didn't put my, money on any of these predictions so I, don't really care right you you've, already explained that you're not, culpable, or on the hook for anything Rick Rick, and mortify it's just a toy project, don't worry about it I come in, peace I keep waiting you know because we, can see each other on video while we're, I know it's an audio recording but I, keep waiting for the feds to bust in, behind you on your screen there you know, you know the door goes flying back it's, a different sort of BBC kid, moment yeah and then my mic just goes, out and it's like well he'll be back, probably maybe he'll be back that's, right but then Demetrius would have to, carry the conversation yeah that would, not be good I'd be coming looking for, you well next year we'll see if this or, any of the other things come true on, practical AI plus mlops Community, definitely check out the uh show notes, um for those practical AI listeners out, there we're going to include uh links to, all the great things 
going on in the, mlops community Slack channel podcast, events um newsletter all sorts of, amazing stuff and uh yeah thanks guys, it's been awesome to have you on the, show like thank you, [Music], pleasure thank you for listening to, Practical AI your next step is to, subscribe now if you haven't already and, if you're a longtime listener of the, show help us reach more people by, sharing Practical AI with your friends, and colleagues thanks once again to, Fastly and Fly for partnering with us to, bring you all Changelog podcasts check, out what they're up to at fastly.com and, fly.io and to our beat freak in, residence Breakmaster Cylinder for, continuously cranking out the best beats, in the biz that's all for now we'll talk, to you again next, [Music], time |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | 3D assets & simulation at NVIDIA | What’s the current reality and practical implications of using 3D environments for simulation and synthetic data creation? In this episode, we cut right through the hype of the Metaverse, Multiverse, Omniverse, and all the “verses” to understand how 3D assets and tooling are actually helping AI developers develop industrial robots, autonomous vehicles, and more. Beau Perschall is at the center of these innovations in his work with NVIDIA, and there is no one better to help us explore the topic!
Leave us a comment (https://changelog.com/practicalai/209/discuss)
Changelog++ (https://changelog.com/++) members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Beau Perschall – Twitter (https://twitter.com/bperschall) , LinkedIn (https://www.linkedin.com/in/beau-perschall-4a00121)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• NVIDIA’s Omniverse (https://www.nvidia.com/en-us/omniverse/usd/)
• Beau’s GTC 2023 session around how to build simulation-ready USD 3D assets (https://www.nvidia.com/gtc/session-catalog/?search=S52401&tab.catalogallsessionstab=16566177511100015Kus#/)
• Tech blog around Omniverse and SimReady assets (https://developer.nvidia.com/blog/new-cloud-applications-simready-assets-and-tools-for-omniverse-developers-announced-at-gtc/)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-209.md) | 3 | 0 | 0 | [Music] Welcome to Practical AI. If you work in artificial intelligence, aspire to, or are curious how AI-related technologies are changing the world, this is the show for you. Thank you to our partners at Fastly for shipping all of our pods super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly: deploy your app and database close to your users, no ops required. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host, Chris Benson, who is a tech strategist with Lockheed Martin. How are you doing, Chris? Doing good, Daniel, how are you today? Oh, I'm doing great. I had a conversation at breakfast on Monday this week with a company from the UK doing drones, automated or autonomous drones, and I felt very prepared for that, because you've talked to me so many times about aeronautics and drones and all that, so thanks for your prep. No problem, happy to do it. Yeah, it was a good breakfast. Just think of the universe of possibilities out there, you know? So many things. Exactly, yeah. Well, speaking of the universe, or I guess rather the Omniverse, or the metaverse, or whatever verse you want to think of, we're going to get into all the verses today. We're going to be well versed in those verses. Yes, we're going to be well versed. Good stuff. We've got with us Beau Perschall, who is the Director of Omniverse Sim Data Ops at NVIDIA, which I have to say is a really exciting title, one of the better ones we've had on the show. So welcome, Beau. Thank you very much, I'm pleased to be here. Yeah, I imagine that my title doesn't make a whole lot of sense to just about anybody. It's a lot of words. I bet it'll make more sense after this
conversation, hopefully. I was going to say, you have a whole episode to explain it to us, so we're good. Fair enough. You know, I guess spinning off of how Chris and I were starting, it would be awesome to hear what Omniverse means, and also maybe a little bit about your background and how you came to be working on Omniverse, this intersection of, as I understand it, some type of 3D stuff and AI and simulation. What was that journey like, and how can we understand, generally, what Omniverse is? Sure. So Omniverse is NVIDIA software. It is our computing platform for building and operating metaverse applications. And again, it's not necessarily so theoretical; these are industrial metaverses. Whether you're designing and manufacturing goods, or you're simulating your factory of the future, or building a digital twin of the planet, which NVIDIA is doing to accelerate climate research, Omniverse is a development platform to help with that kind of simulation work, and it's doing it in 3D. Yeah, so it's not just those people without the legs kind of hopping around in a place? No, this is very practical. As a matter of fact, we have big and small customers that are using it; over 200,000 downloads for Omniverse, which is a platform that you can get from the NVIDIA site. You've got companies like BMW that are using it to plan their factory of the future, and part of that is worker safety, so they have to have legs. You can't simulate the ergonomics of, if you're doing a repetitive task, are you going to hurt somebody by doing it, or are they in danger of getting hit by something in a work cell or something on the assembly line? So there's all sorts of simulation around that kind of information as part of Omniverse. But it's a really broad platform. It's designed to be extendable, so that customers can come in and write their own tools and connectors. It's
not supposed to be just its own endpoint. In other words, we have connectors, which are basically bridges to other applications, whether you're coming from the manufacturing side, like Siemens, or you're coming from architectural software like Revit, or you're coming from animation software like Blender or Houdini or Maya, or Unreal for that matter. All of that data can be aggregated through USD. Universal Scene Description is the file format that Omniverse is based upon, which was a Pixar open file format. It is very robust, and basically we figure we're kind of the connective glue between all of these platforms, so that simulations can be run inside of Omniverse, but all the data can move in and out; it's not like captive data. Hopefully that gives you a little bit of background on Omniverse in and of itself. It is a visual platform. It does. That sounds fascinating. And as you know from our pre-chat, I knew a little bit about Omniverse before coming into the conversation, but I know that there is a lot of confusion about how this fits in with all the others. We were joking in the beginning about the various verses that people are hearing; there's a lot of lingo out there. And as recently as yesterday, a friend of mine named Kevin texted me, and I haven't replied to him yet, but I will have by the time this is aired. He texted me saying, "I don't understand this verse thing, and I know that you're involved in this. Can you explain it?" And I think Kevin represents a lot of people in that way. So could you... we've heard multiverse, we've heard metaverse, and we've now definitely heard Omniverse. Can you give us some context on how this whole industry fits together, so that as we dive back into Omniverse in just a moment, we have a sense of where it fits? You're with NVIDIA and you're doing this great work, but we've heard
things from other big companies too, you know, the usual array of social media and cloud companies. So can you kind of set the stage for us a bit? A bit, yes. Metaverse is a very loaded term, and everybody has their own connotation of what that is. For NVIDIA, certainly, we consider Omniverse a tool, a platform to help enable an industrial metaverse: something that is real-world, that not only can do simulation, but can communicate with the real world and back. So there's this kind of bidirectional messaging. That's aspirational for us; that's where we want to be able to be, so that if you have a production line, you can actually understand what the uptime of the equipment in there is, and then basically schedule maintenance, or do factory planning and optimization, so that you're getting the most throughput you can at any given moment if you have to move materials around a facility. Let me ask you a question there, just to draw out the distinction as you were defining it just now. You said "industrial metaverse," and I'd like, if you would... I know that people are reading things all the time, and there's a more generic concept of metaverse, and then obviously there are certain companies, formerly known as Facebook, that have kind of taken the word as a brand in some ways. I sense that you were using the more generic version of metaverse. Could you define what a metaverse is, so that we can understand what the Omniverse branding fits into? Sure. So the metaverse, again, is a very overworked term, I think. But in general, it's the next evolution of the internet. Instead of having connected pages, you'll now have connected living ecosystems, living worlds, if you will, that can actually intercommunicate. You'll be hopping between those worlds, as opposed to just moving between pages. So it's all based on this kind of 3D-centric
representation of our existence, in some ways. You've seen it: the gaming industry has things like Fortnite and Roblox already that are very much persistent, ongoing worlds. The metaverse is designed to take that to a much broader level, in everything from entertainment to business and industry. And so NVIDIA is taking their software platform, and the hardware that supports it, to help real-world applications. I mean, it's why we're building an entire platform, called Earth-2, essentially around how we start to do weather prediction decades into the future, so that we can start to help with unlocking the climate, as far as that goes. We have customers like Ericsson that built digital twins of cities, so that they could place cell towers in optimal locations for maximum coverage before they ever deploy in the real world. So trying to find real-world value; that's kind of the distinction between the gaming space, and the entertainment or personal spaces that the metaverse can represent, with Meta and the different companies that are helping work on that. And everyone thinks everyone's competing; that's like saying, who's building the internet? Fair enough. Yeah, at some level it's going to require all of us cooperating. There's so much greenfield as far as this space goes that, yeah, it's really exciting. Yeah, I really love this parallel, or metaphor, of the internet that you've given, because some of the applications that I've heard you talk about are making some connections in my brain that make this maybe a little bit more practical to me. So when I think of the internet generally, and what you can do on the internet, and what has happened with the internet over time, there have been things that happened in the, quote-unquote, real world that kind of had a parallel on the internet, right? Like, I can go into a bookstore and I can buy a
physical book; well, now there's a way for me to do that on the internet. But then the internet also had this segment of new things that didn't happen before the internet, but now happen because of the internet. Would you say it's similar, in terms of what you're seeing with the metaverse space, these 3D worlds, and the Omniverse? In terms of some of what you've talked about, like the cell tower thing: in theory, you could do that in the real world and learn what you need to learn; there are probably cost advantages to not doing that, and that sort of thing, but it's a parallel there. Is there another set of things, and I don't know if this would fit into the climate modeling stuff or other things that you're talking about, where you can do legitimately new types of things in this world, that maybe we don't know the full extent of yet, but we're beginning to see? Do you see it that way? You're much more plugged in. Absolutely. I certainly see autonomous vehicles, which is another big industry for us, with our DRIVE Sim platform that's based on Omniverse. If you're trying to simulate multiple kinds of traffic situations and different scenarios, a lot of them you can't capture in the real world; they're dangerous. What you want to do is be able to train the algorithms to react accordingly before you ever get it onto the real road. But you also want to have the connectivity so that, the way it's handled, it doesn't matter if the data coming in is synthetic. So the sensors, the lidar and radar on the car, with hardware in the loop, essentially you're now at the point of saying it can't distinguish whether it's a real-world scenario or a simulation; it treats them both equally. So that sort of thing, I think, is absolutely critical to safety. I think that also gets to the industrial and the manufacturing side of things as well.
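The Ericsson cell-tower example is, at its core, a coverage-optimization problem that a digital twin lets you iterate on cheaply before any real-world deployment. A toy version of that idea can be sketched with a greedy max-coverage heuristic; everything below (coordinates, circular coverage, the greedy strategy itself) is illustrative and says nothing about Ericsson's actual method:

```python
import math

def covered(tower, building, radius):
    """True if `building` (x, y) lies within `radius` of `tower` (x, y)."""
    return math.dist(tower, building) <= radius

def place_towers(candidates, buildings, radius, k):
    """Greedily pick k candidate sites, each maximizing newly covered buildings."""
    chosen, uncovered = [], set(buildings)
    for _ in range(k):
        best = max(candidates,
                   key=lambda t: sum(covered(t, b, radius) for b in uncovered))
        chosen.append(best)
        uncovered = {b for b in uncovered if not covered(best, b, radius)}
    return chosen, uncovered

if __name__ == "__main__":
    buildings = [(0, 0), (1, 0), (9, 9), (10, 9)]   # toy city, two clusters
    candidates = [(0, 0), (5, 5), (9, 9)]           # allowed tower sites
    towers, missed = place_towers(candidates, buildings, radius=2.0, k=2)
    print(towers, missed)  # two towers cover all four buildings
```

In a real twin, the `covered` test would be replaced by an RF propagation simulation over the 3D city model; the optimization loop around it stays the same shape.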
There will be ways to train things in more efficient ways as well, so you're saving cost. If you're training a robotic arm on a production line for a new task, instead of having to take that work cell down in the real world, with a crew cost while you're going through and programming it and testing it, now you can go in and actually test it and teach it, essentially, in the simulation, and then just pass all of that data back to the physical world, so that the robot changes its program pretty much on the fly. That's a huge, huge benefit. For a moment, just as we finish up what the ecosystem looks like, and you're talking about these use cases, I wanted to go back for one second and talk about, with both NVIDIA and the other organizations that are participating in this with their various solutions, some gaming, some not: what does the evolution of a user look like? If we're going into the future a short distance, and it's becoming commonplace for users to have different destinations in terms of metaverse-style 3D worlds, in the beginning, are they all very distinct and separate, almost like using separate applications on your laptop, where you close one and you go into another? Or will it take a while to get to a connection between those different types of environments, and what does that cross-compatibility across multiple environments start to look like? I think that's part of why I was hired a year ago, to help solve this. I was hired to create a new standard that we call SimReady, for 3D content specifically, because yes, what you're describing is essentially a walled-garden kind of approach, where everyone's doing their own thing, nothing talks to one another, and it's all kind of disjointed. And that's not the goal of the metaverse. The whole idea of the metaverse is to be interconnected, and allow people to move, and allow data to move. And so with Omniverse
being based on a file format called USD, again, Universal Scene Description, a very robust format, what we're trying to do now is understand how to standardize that, how to make it work based on your needs. And this is what's been fascinating for me in the last year, because I did not come from a data science background. I was a 3D artist for 20-plus years. In fact, I learned 3D before the internet was a thing, just to carbon-date myself; I had manuals, and didn't see my family for months, and had to work on super slow computers. But we're now getting to a point where interchange is absolutely paramount, so everyone is starting to look at it from a very cooperative place. So, USD being an open file format, being something that is open-sourced, we've got connections to the Academy Software Foundation, which helps try and manage standards, and the Linux Foundation, for standards. It's a long, hard process to figure out what is valuable for everybody, because, as you can imagine, everybody's use case is different. What BMW is trying to do is going to be different than what a watchmaker does, or what Ericsson is doing, or what autonomous vehicle manufacturers are trying to handle directly. And what we're trying to do with SimReady is build this framework that allows SimReady to have flexibility based on your needs. If you're doing synthetic data generation, where you need thousands and thousands of images to identify what a car is, that's one need. So you need semantic labeling; you need something in the data, in that 3D model, that says "I am a car." Fairly simple, but you can get very specific, even within a single 3D model: these are the tires, these are the doors, this is the windshield. And you can start to semantically label more and more granularly, based on your needs. I've been trying for just under a year to learn what is important, and it's like drinking from the fire hose. Everybody has
different needs. Daniel, I assume that, being a data scientist, you have very specific needs for the kinds of data that you are processing, and how you want that data organized is somewhat different than what an NVIDIA researcher might need. So instead of trying to funnel people into one workflow, we're trying to make sure that SimReady becomes this living, breathing organism that must evolve over time and has that flexibility, so that we're providing the planter and the soil and saying, "Plant your tree; here's how you do it," so that you can customize it to your own needs. Again, another practical example with SimReady specifically: a piece of content right now has semantic labels, and what was shocking when I got here was finding out about our research scientists. I was like, "Well, what semantic labels are you using right now? What's your taxonomy? How are you identifying things, and what's coming with those datasets?" And they're like, "We get nothing." It's like, what? Yes, they were basically having to create their own semantic label taxonomies from whole cloth. I'm like, "Well, that's crazy. But what taxonomy would you like to use?" And everybody was a little bit different. So it's like, okay, what do we do there? So there's the starting point, in terms of a simple taxonomy that will allow people to identify the car. But some people want to call it a car, some want to call it an automobile; if you're a French researcher, you might call it a voiture, if I remember my high school French correctly. It's like, how do you synchronize all those? And it's like, you're crazy if you try. So essentially what we've done is build a framework and a reference implementation to be that planter, so that we can say, "Here's how you can implement it for your specific needs." And what data do you want to manage? Do you want physics? Do you want rigid-body physics
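The "car vs. automobile vs. voiture" problem Beau describes is, in software terms, label normalization against a canonical taxonomy. A minimal sketch of that idea (the labels and mapping below are purely illustrative, not NVIDIA's actual SimReady taxonomy) might look like:

```python
# Map each team's preferred labels onto one canonical taxonomy, so datasets
# labeled by different groups can be combined for training.
CANONICAL = {"vehicle.car", "vehicle.truck", "furniture.sofa"}

SYNONYMS = {
    "car": "vehicle.car",
    "automobile": "vehicle.car",
    "voiture": "vehicle.car",     # a French team's label for the same thing
    "lorry": "vehicle.truck",
    "couch": "furniture.sofa",
}

def normalize(label: str) -> str:
    """Return the canonical label, or raise if the label is unknown."""
    key = label.strip().lower()
    if key in SYNONYMS:
        key = SYNONYMS[key]
    if key not in CANONICAL:
        raise ValueError(f"no canonical mapping for label: {label!r}")
    return key

print(normalize("Voiture"))  # vehicle.car
```

Raising on unknown labels, rather than silently passing them through, is the important design choice: it surfaces taxonomy gaps at ingestion time instead of at training time.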
on the objects right now? Great, we can go ahead and add those. We have that as part of PhysX, which is built into the Omniverse platform; so when I said simulation, it can do collisions and collision detection. But there's more. When you think about building digital twins, you're trying to represent the real world as accurately as possible, and that is an endless quest, which is why it has to evolve over time. We'll build stuff now, but in the future we'll have more sophisticated electromagnetic materials that have thermal properties, and sonic properties, and deformation, tensile strength, and things like that, that we'll want to build in, so that the simulation can actually process it. So it is the rest of my life's work, and then some. I think it's going to continually evolve. So what we're trying to do right now, in the very early days, is set the standard up so that it has the ability to breathe and move along as we get more sophisticated. Well, Beau, I love how you've brought up what to me is, honestly, a little bit of an intimidating subject, which is this whole area of 3D. And I'm sure you have a different perspective, coming from the art world, but I'm very much... let's just say I shouldn't design any sort of applications that humans should look at with their eyes. I don't have that skill, so it's a little bit intimidating for me to think about these spaces. But with the practicality that you just described, I can definitely see the applications. I don't work in manufacturing, but I can see those. And even in my own space, I work in natural language processing and language, and of course, a big area that is really neglected in the NLP space is sign language, which by its very nature is a 3D thing, right? A lot of people might think, oh, it's just hands, and you can look from one
direction. Well, there are gestures, there's facial movement, there's 3D movement that happens with sign language. And if you want to, for example, have an avatar where you could type something in, and the avatar signs it in American Sign Language or Japanese Sign Language or something, that's a 3D environment, and it would require certain labels, right, around facial features and hands and all of those things. So all of that really connects with me. I'm wondering if you could break down this SimReady project that you've been working on, and maybe think about it from the perspective of, let's say, a manufacturer coming into the space. I want to figure out, like you say, you've got the planters ready: what does it look like for me to come into the space, think about my use case, and then map that onto SimReady, the standard, and the file formats, and the 3D space? What's required for me to enter that space as it stands now? That's a great question, because, as a lot of people understand, 3D is still very hard to achieve with any degree of fidelity, and Omniverse is trying to help create the highest visual fidelity, on top of simulation fidelity, possible. So that pyramid of what it takes to build 3D content in the first place is still difficult, even with photogrammetry and the new NeRF technologies and things that can help start to capture that. And those are going to evolve, and NVIDIA, being an AI company, is certainly pushing into those areas to make this art-asset acquisition easier. But in terms of what it takes right now... well, let me back up here; I'm kind of front-running myself in my head. Essentially, with 3D being difficult, it's hard for anyone to come in and just have a dataset and be able to do a lot with it. I've never taken an animation class or anything, so you're working with that
sort of clay. That's okay, neither have I. Essentially, it's adding the value on top of the art asset. So if you're a manufacturer, or if you're doing sign language, first you have to have the asset library. And ML researchers and data scientists have a voracious appetite for content, because you can't have just one thing to train against; it is thousands or tens of thousands. For humans, it's diversity, not just in terms of age and ethnicity and sex, but clothing, look, facial features... I mean, it's endless, just to be able to train the model with as little bias as humanly possible. The same goes for any other kind of research where you're using 3D. A researcher asked me early on, when I first started, "Can I get everything you find in a garage?" I was like, "No, that's an unbounded question. Let's focus: what do you want?" There are a lot of strange garages out there. Exactly: am I a woodworker, am I a mechanic, am I a hoarder, is it my garage, whose is it? All of that comes into play in focusing down on, first, what the dataset consists of, and then what metadata is important for the use case. So that's really where SimReady starts to differentiate. It says, okay, now that I've got this dataset, what adds the value to it, from this set of tooling that we're building, also on top of Omniverse? So that at the end of the day, I can take beautiful art assets, stuff that has no metadata for simulation or for AI at all, and push them through this tooling to add semantic labels, to add physics, to add physical materials, to add all of the kinds of things that matter: the dimensions of the object, whatever other kinds of metadata are important to that customer. And then validate it and export it, so that now you've got a dataset that a data scientist can consume directly, practically, without having to spend
their life trying to figure out how to add the value on their own. Because at the end of the day, I don't think NVIDIA envisions me having a team build all the content in the world for people. We want to enable all of the suppliers, for BMW, the Siemenses and KUKAs and companies like that, who build infrastructure and build content, to also embrace the idea of SimReady and the tooling, so that all of that content just plays nicely together. And then, again, it flows into and out of other simulation platforms; if you're pushing it somewhere else, it's a USD file, so that data is available to you regardless of what platform you're using it within. So that's really the benefit there. I'd just like to extend exactly what you said. We often ask guests to give us a nice clarifying example; what you just said described the concepts of going through that process. Could you give us either a fictional or a real-world example, whatever works for you, and I suspect you probably have one ready to go: pick a manufacturer or whatever you want, and walk us for a moment, at a high level, through the steps of what they're doing, where you reference Omniverse, you reference SimReady, you reference these things in context, in a use case, so that we can follow your footsteps through it and bring the concepts into a very tangible, touchable kind of understanding? Right. So we actually have a project ongoing right now, and I can't mention who, but essentially there is a pick-and-place robotic arm on a conveyor system that has sensors to indicate where parts are on that platform at any given moment. And what they want to be able to do is build that simulation inside of Omniverse, so that both the simulation can drive and time the real-world application, and the real-world application can report back, so that
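The bidirectional sim/real loop being described here can be reduced to a small state-reconciliation sketch: the twin predicts where each part should be, and real sensor readings either confirm the prediction or trigger an on-the-fly adjustment. Everything below (belt speed, tolerance, the constant-velocity model) is a schematic illustration, not Omniverse code:

```python
# Schematic digital-twin reconciliation for a conveyor pick-and-place cell.
BELT_SPEED = 0.5   # meters per second (illustrative)
TOLERANCE = 0.05   # max allowed sim/real divergence, in meters

def predicted_position(t_dropped: float, t_now: float) -> float:
    """Twin's model: the part moves down the belt at constant speed."""
    return BELT_SPEED * (t_now - t_dropped)

def reconcile(t_dropped: float, t_now: float, sensed_position: float):
    """Compare the twin's prediction with a real light-sensor reading."""
    predicted = predicted_position(t_dropped, t_now)
    drift = sensed_position - predicted
    if abs(drift) > TOLERANCE:
        return ("adjust", drift)   # real world feeds back into the twin
    return ("ok", drift)

print(reconcile(0.0, 4.0, 2.01)[0])  # ok      (within tolerance)
print(reconcile(0.0, 4.0, 2.30)[0])  # adjust  (twin must resynchronize)
```

The "adjust" branch is where the round trip happens: the twin updates its state from reality, and downstream timing (when the arm fires) is recomputed from the corrected state.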
there is this cyclical nature of having data moving both ways. So a feeder drops a part onto the conveyor belt; the system always knows where it is; it can count it; it can track where it is in the process. When the arm is supposed to pick it up, it knows how to do that and move it into the right location. Those are the kinds of use cases where now, if you have SimReady content that can identify itself (this is a package, this is a conveyor), this part of Omniverse can trigger when the real light sensor is tripped, and understand that as, "Hey, this is where this product should be." So if the simulation or the real world is off, they can adjust on the fly, and now you've got this self-fueling, round-trip ability to track content that way. So is it fair to say you would take 3D assets and apply USD, the Universal Scene Description, to them, to give them the context, so that they are, quote-unquote, SimReady, and you can use the SimReady tools on those assets to do whatever it is you're doing? Right. USD is actually the file format, but it's more than that. Most applications now export USD directly, just like, if you're working in a CAD application, you might export a DWG file or a DXF file or something like that, or a SolidWorks part file if you're in manufacturing. You can now export USD directly in many of these apps; they're all starting to get on board, which is great for the 3D industry, because I can tell you that when I was coming up, every 3D app, every tool, had its own 3D file format, and so nothing played well together. It was always a nightmare to try and get content from one place to another, without question. It wasn't like 2D imagery, where a pixel is a pixel; 3D is much more complex, orders of magnitude. And so now we're all starting to hone in on USD as a
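For readers who have never seen USD: a USD stage can be stored as plain ASCII (`.usda`). Real pipelines author it with Pixar's `pxr` (Usd/UsdGeom) Python API rather than by hand, but the text form is simple enough to sketch, which shows why it works as interchange glue; the prim names and metadata values here are illustrative:

```python
def minimal_usda(asset_name: str) -> str:
    """Return the text of a tiny .usda stage containing one Xform prim."""
    return f"""#usda 1.0
(
    defaultPrim = "{asset_name}"
    metersPerUnit = 1
    upAxis = "Y"
)

def Xform "{asset_name}"
{{
    def Mesh "Geometry"
    {{
        # Geometry data (points, faceVertexIndices, ...) would go here.
    }}
}}
"""

if __name__ == "__main__":
    text = minimal_usda("ConveyorBelt")
    print(text.splitlines()[0])  # #usda 1.0
```

Because the format is both human-readable and layerable, a Blender export, a CAD export, and a hand-authored physics layer can all compose onto the same stage, which is the "layered Photoshop file" analogy Beau uses below.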
primary file format. Technically, there's another open file format, run by the Khronos Group, called glTF, and it is essentially a web standard for 3D. I was part of the group that was helping define the standard for 3D commerce, so that you could see things on your Apple phone and spin them around on websites and things like that. So that's kind of the JPEG version of 3D, while the USD file is more like a layered Photoshop file, much more robust. But they play very well together, and Omniverse supports both of them, too. So this is great. One of the things that you mentioned briefly, Beau, which I think is a really fascinating topic, but also a really important topic for the future of practical artificial intelligence and machine learning, is the idea of simulated data. Now, you briefly mentioned this topic of creating 3D worlds, all the file formats, and the things that are needed to label them to make them useful for data scientists. You talked about the example of the digital twin running in parallel with the real-world robot arm. Could you set the context now for the usage of this technology for synthetic data production, and, from your perspective, where you've seen people do that successfully, maybe a couple of examples? And maybe help people understand what synthetic data means and why it might be useful? Sure. So synthetic data, as far as I have been involved, is essentially generating randomized... what we call domain randomization: taking lots of objects, randomly placing them in scenes, with all of their labels in place, so that you can train machine learning for computer vision to identify something in a room, or in a space, or in an environment. So it doesn't matter what the lighting conditions are, it doesn't matter what the material is, it doesn't matter what the orientation of the model is; it could be upside down, in some arbitrary orientation. But at the end
of the day, when you have that image, or that video sequence, or whatever it is that does all of this, the computer algorithm can always pick out whatever that piece is. We have a version of our CEO Jensen, and we call him Toy Jensen; he's a little 3D toy model. You've probably seen him in our GTC talks and keynotes, and they wanted to do kind of a Where's Waldo for him as well, just to be able to train where he is in a scene with all sorts of other random 3D content. And so you would change lighting, you would change materials, you would change the orientations of everything, to train the algorithm to be able to spot Toy Jensen no matter where he was in the scene, no matter how much he was obscured by, you know, blocks or sofas or things like that.

From a more practical standpoint, think about what furniture manufacturers are trying to do today with augmented reality. You know, they want to be able to scan your room; they want to eventually say, "I know that that's a sofa and that's a chair and that's a table, and I want to be able to replace it with my stuff instead and show you what my stuff looks like in your space." And so having that computer vision trained against a huge variety of content now gives their algorithms the ability to kind of find and identify that stuff with high accuracy or, you know, good fidelity.

I just wanted to say, tongue-in-cheek, that I think finding Jensen is not as hard as you say, because he always has his trademark motorcycle jacket on. I'm just saying: it's always the jeans and the motorcycle jacket.

He does indeed. They actually put him in the midst of all of our Marbles content, the real-time sequence that they put together for a real-time demo for GTC two years ago, and there are hundreds of elements, so he would get pretty obscured, where you couldn't see either his jeans or his jacket.

Okay, fair enough.

And you would see like a part of his gray hair, and that would be about
it.

Gotcha. Some fascinating stuff.

You know, from what I'm trying to do with AI, just to kind of circle this all back around to Sim Ready: AI is important for Sim Ready in the future, too. I mean, again, I'm just starting, less than a year in, but my vision is to work with our data researchers as well, so that at the end of the day, instead of having a tool that you manually have to process content with, why wouldn't our Sim Ready tools live in the cloud as a service for people to upload their content? And it doesn't matter how materials are named; is it named metal, is it named wood? Ideally, AI would help us identify what that material should be, name it properly, and then do semantic labeling on it and be able to apply the right physics. So that you could upload your library, no one has to get involved, and the system can now process your library and give you a dashboard into a data set that is now valuable. That's my long-term vision specifically for AI for Sim Ready.

I'd like to ask you something, and part of this just comes from the fact that I work for a company that has to deal with edge scenarios that are adversarial and challenging in all sorts of ways, so one of the things I'm always curious about is this: as we look at simulation built on the larger cloud approach that we've been doing for the last 20 years, 15 years I guess now, as you move these capabilities, and you're talking about having 3D assets, you're doing augmented reality, and you want to be able to merge those, as you mentioned, like with the room with your own stuff, there's an infinite number of variations there that we could talk about from a use-case standpoint. As you get out and you're doing things that are away from the cloud, you either don't have enough bandwidth to get all the GPU computation, you know, from the cloud back to where you are, out at, uh, Everest Base Camp, because, you know, that actually
probably does have enough of an internet connection, but let's say you're up in Camp 2 and you're doing something in a fairly remote region. How do you envision these starting to merge, in terms of being able to have a consequential user experience, something that's impactful in terms of augmented reality, where you're combining all of these 3D assets that are Sim Ready and it's merging with your world, when you don't have bandwidth and cloud assets immediately available due to technical limitations? How is NVIDIA thinking about that? Because I know you want it to be everywhere. So how are you thinking about bringing this future that we're all hurtling toward, and that you're inventing, into those spaces that are not just "I'm on a gigantic internet connection sitting in my office doing my thing"?

Right. I mean, certainly NVIDIA wants things to live in the cloud as much as any company at this point; Jensen publicly announced that in the keynote for GTC this past fall. And having that unique position of having hardware and software, with our GPUs and the Omniverse platform, gives us some distinct advantages, where you can actually do quite a bit from your own small workstation in terms of streaming content. How we might do that in the future, honestly, I don't know. To be completely fair, I don't know what that looks like at this point. You know, I'm only a year old here.

I would argue that NVIDIA is very well positioned for answering that question, because you're not strictly 100% in the cloud. I have bought products from you that I can go place into a computer that is not in the cloud, or that may have a connection while I'm doing the GPUs out on the edge; you have a large product line of things. So I do think that you're well positioned for that, but I think it's a fair answer to say "I don't know," because we're moving fast, and, you know, what's the cliché:
it is still early days, there's no question, and there's going to be a lot of evolution. I know that what we're focusing on this year as a company is awe-inspiring, and I can't wait to see how we progress throughout the next 12 months, or 11 months now, to get closer to those goals. So there is a lot to be done, but yeah, I don't know.

As you do look to the future of your own work and what NVIDIA is doing, but maybe also, now that you're in this space of 3D and interfacing with data scientists, thinking about how that can influence AI and how AI could help you build the things that you're doing: what's on your mind as you're looking towards the future? What excites you? What sorts of opportunities really keep you up at night and really keep you thinking about the potential in this space? I know that you mentioned your background in art, and of course this last year has been an amazing year in terms of the generative capabilities of AI, and that even sparks things in my mind about how the things you're working on in 3D interface with that sort of generative capability. What are you thinking about? What are you looking forward to as you're moving forward?

For me, there's almost nothing to not be excited about, including generative AI. But for me, when it comes to Sim Ready, my focus is really the sophistication of what we're trying to achieve with AI. It's starting to understand what the value is today and how you start to extend it forward, so that we can start to extrapolate out much further: building that bidirectional communication between the simulated world and the real world. Wow, I cannot wait to see how that starts to really manifest, where you have data cleanly flowing both ways and things start to synchronize, so that you're not just simulating at this point; you are now kind of
replicating things. That way, I think, that's huge. And, you know, I was lucky enough to be around when 3D first went mainstream, where you could have consumer PCs, instead of $50,000 workstations, that could do 3D. With AI, I feel like we're in that similar early phase of creation and understanding, so there is just this enormous green field in front of us to explore. And it's going to take all of us, too; it's not just NVIDIA, I want to make that clear. We're focusing on things that we feel we have distinct advantages on, but we need collaborators. Again, it's back to the adage of how you build the internet: with a lot of people and a lot of cooperation. There's so much opportunity across the board that we've all got to pull together and do it.

Awesome. Well, I think that's a super inspiring and encouraging way to close things out. It's been an awesome conversation, Bo. Really appreciate you taking the time to talk about all the things that NVIDIA is doing in this space, and the things that you're working on around standardization and making things useful and practical for people like myself and Chris. And yeah, thank you so much for your work and your contributions.

Thank you guys for having me. This has been a blast; I've enjoyed it thoroughly.

[Music]

Thank you for listening to Practical AI. Your next step is to subscribe now, if you haven't already, and if you're a longtime listener of the show, help us reach more people by sharing Practical AI with your friends and colleagues. Thanks once again to Fastly and Fly.io for partnering with us to bring you all Changelog podcasts; check out what they're up to at fastly.com and fly.io. And to our beat-freaking residents, Breakmaster Cylinder, for continuously cranking out the best beats in the biz. That's all for now. We'll talk to you again next time.

[Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | GPU dev environments that just work | Creating and sharing reproducible development environments for AI experiments and production systems is a huge pain. You have all sorts of weird dependencies, and then you have to deal with GPUs and NVIDIA drivers on top of all that! brev.dev (https://brev.dev/) is attempting to mitigate this pain and create delightful GPU dev environments. Now that sounds practical!
Leave us a comment (https://changelog.com/practicalai/208/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
• Fastly (https://fastly.com/) – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com (https://www.fastly.com/?utm_source=changelog&utm_medium=podcast&utm_campaign=changelog-sponsorship)
• Fly.io (https://fly.io/changelog) – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog (https://fly.io/changelog) and check out the speedrun in their docs (https://fly.io/docs/speedrun/) .
Featuring:
• Nader Khalil – Twitter (https://twitter.com/NaderLikeLadder)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
brev.dev (https://brev.dev/)
DISCOUNT for our listeners 🔥:
• Use coupon code “practical-ai-2023” for 5 hours of free GPU compute!
• brev.dev doesn’t offer credits often, so the credit redemption button is hidden by default. Go to this link (https://console.brev.dev/org/ejmrvoj8m/settings?credits=true) to expose the button.
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-208.md) | 18 | 0 | 0 | There's a lot of optimizations around the GPU spend. So, the way that the volume is being backed up: we're doing intelligent backups, I guess, where we can back up just the amount of volume that's actually being used, so you're not paying for unused volumes even when your instance is off. There's auto-stop, making sure that your instances aren't costing you a lot when you're not using them. You can use brev scale, which lets you deallocate the GPU, or get a more powerful instance if you need it, so flexible compute needs without having to re-set-up or install anything. And there's the obvious benefit of not running a container locally, if you're on a Mac, that kind of casually eats up like 20 gigs of RAM.

[Music]

Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly, for delivering our shows super fast to wherever you listen (check them out at fastly.com), and to our friends at Fly.io; we deploy our app servers close to our users, and you can too. Learn more at fly.io.

[Music]

Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist at SIL International, and I'm joined, as always, by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris?

Doing good. Having a good 2023. And this is going to be the best year for artificial intelligence ever.

Yeah, well, I mean, it must be. We finally did our ChatGPT episode, and that was really cool, because, I don't know if you saw, Chris, it's the first episode where we had, I think, over 10,000 downloads in the first week. So thank you to our listeners; that's awesome to see. We're glad that was useful, and we're going to keep the good content rolling right along, because this week we've got something super practical, which I think everyone deals with. We're privileged today to have with us Nader Khalil, who's the co-founder and CEO at brev.dev. Welcome!

Hey, thank you. Thanks for having me.

Yeah, so, I alluded to a problem that we all face, which is environment management. Like, I'm developing in this environment and I need to have these dependencies, or I use this environment and now I need a GPU, or Chris is on my team and he needs to replicate my environment; all of these sorts of things, whatever category you put those in. So I guess, you're digging into this problem now, but how did you get there? What started you along this path of really thinking deeply about dev environments?

Man, we've had quite a twist-and-turn of a journey to get here. And, yeah, the ultimate goal is just to stop monotonous machine problems from getting in the way of creative development. And that's...
it's funny. When I went to UC Santa Barbara, I studied electrical engineering and computer science, and when I moved to SF to work, I was actually building cloud dev environments at Workday, and I did that for two years. In December 2018, actually just before that, I was getting a beer with a bar owner, and he was telling me how he had a thousand clicks on his Google ads, but his bar was empty other than me. He shows me his metrics on his Google ads and goes, "Make it make sense." And I realized he had a really good point: digital ads work really well for digital businesses, because if someone clicks on an Amazon ad, you've entered Amazon's storefront; there's nothing like that for physical businesses like his. He was just using a really bad medium. So my co-founder and I (pretty much the same co-founder as with brev) realized there was a way for us to back-door the Uber app. So we put tablets in Ubers and Lyfts, and we let local businesses advertise on them, and if you tapped our screen, we would reroute your Uber to that location.

Yeah, that's legit.

Yeah, yeah. You go out with friends for drinks, you see "buy one, get one free margaritas," you tap the screen, and we take you there. You get a free drink, the bar owner knows his ads work, the driver got a tip. Everyone won.

Perfect.

And so that was really exciting; that's what I quit my job to go do. We did that for like two years, completely bootstrapped. We ran out of money; I poured my 401(k) into it. We got into YC for that, we got to like a quarter million ARR, and essentially demo day was March 2020, which was right when the shelter-in-place happened in SF. So we got to see our fleet of 400 cars go to 7 overnight, actually the week of demo day. So we didn't raise a dime, obviously.

Oh, I feel bad for laughing, but I can't help it.

Yeah, yeah. Have you seen that GIF on the internet of the raccoon with the cotton candy, and it's
just like, "Where did it go?" That was very much March 2020 for us. But it was funny, because with a physical business, we had a physical fleet, right, we had physical operations; you'd imagine physical hurdles being the hardest part of that. And in January 2020 we're starting YC, and we're like, "We got to 15K MRR, things are working, and we need to just 3-4x the fleet." And that was really hard for us. We found out from one of our drivers that Uber and Lyft have these parking lots half a mile from SFO airport where drivers go wait for these really valuable airport rides. So I go to the parking lot, and Uber security kicks me out right away, because I'm not a driver. So I'm like, okay; well, I'm Middle Eastern, so I went to a gas station, I bought cigarettes, I lit one up, and just walked back onto the lot, because now I look like a driver taking a smoke break. And I got right past Uber security. I'm on this lot till like 4:00 a.m., talking to every driver. We 4x'd our fleet that night. So there was never a physical hurdle that got in our way. But once we got those drivers live, everything else fell apart. We had all these random problems: our advertiser dashboards were really slow; one of them was that the ads, when they flipped on our tablets, would just disappear and flash white, and if that happened at night, it's jarring, so riders would turn off the screen and you'd lose revenue for the night. And so it was really funny having really weird physical problems (but we can sneak past Uber security and solve those), while when we had to sit at our computers and fix something, it was our dev environment slowing us down. And so it was almost instant: when essentially the pandemic killed that business, my co-founder and I looked at each other, and in those 20 days of January when we were trying to deal with our dev environment issues, we couldn't replicate these locally,
just so many weird, bizarre issues; we were just shooting in the dark. That was the only time with that business I had a pit feeling in my stomach, like we forgot a semicolon or something. And so it was just immediately: how do we solve our previous problems? We spent like a year and a half in pivot land with a good north star. We built a very heavy abstraction; I guess it was kind of like what Replit is now (at the time Replit didn't have databases, so you couldn't really build applications in it). We essentially said, hey, if we force our dev environment opinions on you, you can't have problems we didn't already know about, because we forced your decisions; so you wouldn't have problems, and it'd be a really smooth experience, as long as you did everything that we supported. So you'd get cron jobs out of the box, and auth was already hooked up, and a database was already there, but you had to use our version of Python for your APIs, things like that. And so it was an interesting experience in everything broader, outside of a dev environment: when you want to run tests and you need more tooling, and those things aren't supported. It was a great way to plunge into the space, but ultimately we learned that a good abstraction is only good if it pairs well with the problem it's solving, and if you're good at solving problems, you're going to have new ones to solve, which means you'll need new abstractions, or a flexible abstraction. And so that's when we pivoted away from that and built the current version of brev.

A lot of what you described... I mean, I've never tried to sneak tablets into an Uber or something like that, which sounds like a really fun thing to try to do, and I love that story. Probably a less fun thing for me in my life is this general arena of the very specialized and weird dependency issues, specifically related to
machine learning and AI sorts of environments, and the differences that people have between trying to prototype something locally and then trying to scale it out in a reasonable way. Did that factor into your thinking when you were building this, in terms of these data science people out here, this explosion of tooling and all of that? Or was that something that came along the way, as you were going on this journey and thinking about what kind of problems these abstractions solve?

Yeah, so it's definitely something that we learned along the way. We initially started by trying to solve our own problem. We at brev exclusively use brev for all of our own development; to your point, you're not dealing with environment issues. We have a blog post about when we upgraded from Golang version 1.17 to 1.18: it caused a memory leak, but our co-founder fixed it in his environment, and so when I wanted to update my environment, I just reset and I was on the latest. Being able to just move your environment that way really makes everything a lot easier. What we've learned is that some of our power users were AI developers, because AI dev environments are really complicated, and they specifically asked us to support GPUs. When we started to support GPU instance types, it kind of opened our eyes to how many raw DevOps problems there are within the MLOps space. You know, GPUs are really expensive, and a lot of the time the GPU is sitting idle. If you need to do some sort of development, you might spin up a GPU just because there's the off chance you'll do some GPU development right now, but a CPU would have sufficed. So the way brev works is: the idea is you can move your dev environment between different instances. So if you're not using the GPU, you can deallocate it and just go to a really cheap,
pennies-per-hour CPU instance, and only when you need the GPU do you turn it on. We also have auto-stop. I learned from Workday that they were burning a lot of money every month because developers forgot to shut these instances off; this also happens with individual developers. So if you don't use your brev instance, we automatically power it down; you can start it again from the CLI, and it's back up and running. So brev is a CLI that makes it really easy to spin up these dev environments, and we connect your local tools to that remote instance. The CLI wraps SSH, so all you have to do is run `brev start` and start coding, and not really have to worry about the actual environment issue.

That sounds really cool. Let me ask you kind of a baseline question, as I'm learning about how you've done this, but starting from where I'm coming from, and probably where more than a few of our listeners are: I'm used to using Docker, getting in a container that has access to an NVIDIA GPU, kind of the way a lot of folks are doing it. Can you tell us a little bit about the difference between that classical approach that a lot of people use, and in what ways you are differentiating and stepping up from that with brev.dev?

Yeah; and can you explain to me maybe where you are running this container, and how you are running it on your machine so it has the NVIDIA GPUs?

Yeah, you have to have a set of images that you have, you know, set up; there's a bunch of configuration ahead of time, which I know I don't have to do on yours. But essentially I'm having to say, okay, I have a GPU available in some place on the network, or maybe in the cloud, and I'm going to do those configurations. And then maybe I'm on my laptop, maybe I'm on a server, but a lot of people are, you know, logging into a container to do the work, and then trying
to move the container around and be able to access those resources from different locations. I know that I'm starting from that because it has some good things, but it also has some real pain-in-the-butt aspects to it, in terms of having to make it all work. It sounds like what you're describing up front is a really good user experience, so I'm trying to get a sense of what the differences are between the two.

Yeah. So I think, at a minimum, if you want to just run a brev environment, with or without a container, whether or not you have that set up, the way we handle this is with a simple bash script. Every brev environment is running the same version of Ubuntu (we have the specific version listed in our docs), and bash is ubiquitous; it's available, and you can install anything with it. So you can start with just a bash script if you don't want to run a container, if you just want to try something and have it run. We leverage this a lot for some of our templates: if you have a bash script committed to your repo that has setup instructions, brev can automatically run it when you spin up an instance. So you create a new environment, you give it the git repo and a path to the script that you want it to run, and that script will get run immediately for you when the instance is created. The user experience is: create the new environment, whether in the CLI or through the UI; set the path to that setup script (or you can also just start with one of our templates); and then from your terminal you run `brev open`, and we'll open up VS Code connected to the remote instance, or `brev shell`. We support Vim, Emacs, JetBrains, whatever IDE or code editor it is that you want to use. And then, if you do have a containerized workflow, anything that you were going to run in your terminal, if you're going to run
Docker Compose commands, if you're going to run Cog, if you're using Replicate, anything that it is you're trying to run, you can just put it in the bash script and know that it's going to reliably run for you, or for someone else that you're sharing this with. But I think the big thing here is there are a lot of optimizations around the GPU spend. So, the way that the volume is being backed up: we're doing intelligent backups, I guess, where we can back up just the amount of volume that's actually being used, so you're not paying for unused volumes even when your instance is off. There's auto-stop, making sure that your instances aren't costing you a lot when you're not using them. You can use brev scale, which lets you deallocate the GPU, or get a more powerful instance if you need it, so flexible compute needs without having to re-set-up or install anything. And there's the obvious benefit of not running a container locally, if you're on a Mac, that kind of casually eats up like 20 gigs of RAM.

So, I haven't used it a lot, I have to be honest, but I did spin up a couple of environments in brev.dev leading up to this conversation, because I wanted to understand a little bit more about it, and it was really fun. Like Chris was saying, I think it's true that the sort of dev and onboarding experience is really nice. I was using the UI configuration, and (I'm kind of curious what you've heard from other users, I guess, is my question) my experience was: okay, I created the dev environment with the UI. It's a little bit different UI than I'm used to, but there's familiarity with certain things, right? I'm pointing it to a git repo, I'm maybe defining, like you're saying, a startup script or something like that, I'm naming it, okay, it's creating this thing, I added a GPU, whatever. There are similarities
between that and what I would create in an instance in the cloud. But then I have this dev environment, and I think the point where something switched in my brain was when I was local in my terminal and (I actually even forget the command now, but it was `brev open` plus the environment name, that's what it was) I ran `brev open` and it just popped up VS Code. And then I realized I had my VS Code open, and I could open a terminal in VS Code, but that was running in the environment that I created remotely. That's where things switched in my brain: oh, I'm now using that environment that I set up, and I could share that environment with someone else, and then they could pop open their code editor and see this. I'm curious, for other people that you've talked to, people that have started using it: where are those light bulbs going off for them, and what are the things that they're really getting excited about, I guess?

I mean, I just did a slew of user interviews, and the first question I always ask is, "What does brev do?" It's always really exciting to hear that from someone before I have an opportunity to accidentally influence the conversation. And the biggest thing we hear is that brev is the most delightful, or easiest, experience to run anything on a GPU in the cloud. So that's been a lot of our focus: dev environments are kind of the thing that gets in the way of what you're trying to do, and so that's been our focus from the beginning. But there are a lot more complicated workflows, especially with AI, and just the dramatic cost. Like, we have one user whose Google Cloud bill was about $280, just running on their GPU instance, but with something like brev scale they brought it down to about 25 bucks (I think their exact number was 27 dollars or something). And so that's a 10x reduced cost, just because that GPU was sitting idle while they were
actively coding and building things. So I think our goal is just to have something that is a much more delightful and really simple experience, but that also saves a lot of money. A lot of what we're focused on right now is integrating with other clouds. To get this far we've just been built on AWS, but we're partnering with Lambda Labs right now to support their GPU instances, because they're a third of the cost, and we're leaning deeper into a container strategy, which will let us provide start and stop across clouds, which I think will be really exciting. This is something that we're getting ready to release over the next two weeks, and I'm going to start talking a bit more about it.

Actually, I was just going to say, you can go ahead and dive a little bit into that right now if you want, because you really piqued my interest with that. If you don't tell me now, I'm going to pester you later.

Yeah, well, the way that we're approaching it... it's a bit experimental still right now, but we'll have something out within two weeks; our team has pretty quick velocity, and we're a small but potent and passionate team. We really want to be able to support start and stop across anywhere that there's a GPU available for us in the cloud. It might not be at a large data center, it might be at a small one, and that's okay. If it's a cheap GPU in a region that's not going to introduce a lot of latency for you, you should be able to leverage it while we have access to it. And if it's rug-pulled from us, if you stop your instance, you should still be able to start it again; it might not be on the same instance in the same data center, but that's okay. We're really just trying to optimize on, you know, the GPU itself. A GPU is a commodity; you just want the cheapest one, and you want to be able to run your code on it easily. And so, yeah, in
like, two weeks I think we'll have a pretty, exciting launch on that that sounds, pretty cool so there's another aspect of, that that's got me thinking with you, looking at, multicloud and and you kind of said it, could be a small data center it could be, you know I'm getting the impression, there can be a lot of diversity, potentially in what you're targeting for, getting your GPU what are some of the, kind of considerations somebody might, have for if they're using uh brev dodev, like how might they decide and and is, there any strategy yet other than just, kind of whimsical on saying Hey I want, to go with this one or that one vers is, it just a cost thing or could there be, other considerations that you guys have, thought about in terms of being able to, provide you know like going to a small, data center here at this company rather, than the big AWS one in Northern, Virginia over here uh any thinking, around that yeah so uh just to clarify, on the kind of whimsical approach are, you talking for us uh as we tried to, find gpus that we can no for the user, perspective cuz if I'm am I correct in, thinking they can kind of choose where, to Target on that or is it something, you're doing behind the scenes our goal, is to make it really easy but expose as, many options to a user as they want so, for example we'll default right now to a, region that makes sense but you can, always open up the region and pick one, that you would like um again right now, we're only working with AWS but that'll, change really quickly like in these next, two weeks um so we always want to make, it an option for a user to see, transparently where their instance is, coming from there's I don't think reason, for us to hide that however we do have, an option for you right now to connect, your AWS account and what I've noticed, is only like two users have uh bought, like two individual users not teams have, used that and I think what that means to, me is the specific location of the GPU, 
doesn't really matter it's just like hey, I want to run this on an a100 go run, this on an a100 gotcha so one of the, things that is sort of a question, running through my mind is I thought it, was really powerful like when I open up, the environment I had the environment I, could run stable diffusion or whatever, because I had a GPU in the background I, had enough memory like all those things, it's really nice and I could see how, that would allow me to sort of, understand the environment that I'm like, eventually building towards in terms of, what I want to releas in production and, I could share that environment with, other people um what would be like from, your perspective as both the founder and, Creator but also a user of brev um what, is like the workflow that you've seen, work in terms of going from that local, Dev and sharing local Dev environments, with other team members towards like, something you would run in the same type, of environment in production like okay, I've now used a brev environment to like, figure out how to run this you know fast, API code that serves my model or, something like that and now I want to, run the same type of environment but I, want to deploy that in in my AWS or, something like how how does that work, and how does like brev you know factor, into that I guess yeah so it's really, funny right think about how many times, you have to kind of do like the same, redundant work and all of this being, like not the thing you're trying to, actually build so you go and install, everything so you can work it on a Dev, environment then you go and install, everything so you can go run run your, tests if you have a pipeline then you go, and install everything so that you can, deploy everything in production and like, theoretically we've all already done, this and so I'm good friends with the, team at banana. 
deev we we love working, together I think our products are both, very synergistic and something that, we're working on is if somebody has a, brev environment they should be able to, click a button and it deploys on banana, um it's a serverless GPU for production, right that's that's the a helpful way to, look at this is uh there's two types of, compute there's interactive compute and, non-interactive compute if you're, deployed on production um that's a, non-interactive compute right your API, is up and running you don't need an, active shell into it and in fact that, might even be an anti-pattern if you, have interactive compute you're actively, developing you're open in the terminal, you're you're running things and seeing, live and making iterations to it and so, um if you look at brev and banana for, example as like interactive and, non-interactive computes that are very, that that work really well together you, can take your interactive Dev, environment on brev get things running, and once you're done press a button move, it to Banana so that it's non-, interactive it's not costing you as much, it's just it's on the serverless model, and then if you have a a server error, right if you have some you get some sort, of Sentry log on your uh banana server, then you should be able to click a, button and then open it up in brev an, interactive compute so you can figure, out what's wrong fix it and send it back, and if you're able to have that kind of, workflow you're taking away a lot of, this like devops overhead because at the, end of the day we're just trying to, build that's um I think that's where I, see the future kind of heading is how, smooth can we kind of Na like move, between the states that the user uh, essentially wants yeah I I think that's, a really uh insightful sort of Direction, because I see this efficiency gain with, brev and sharing environments for that, like interactive compute that's really, important but then if you can make that, connection 
to the sort of production, deploy that's huge because now like, there's still so much so much of the, time there's this friction that you, talked about where like even if I'm, developing against like a cloud instance, right there's some sort of like, non-negligible labor cost of like me, going through the headache of going and, you know deploying something to, production and it's still not the same, right or there's some issue like you're, talking about when things go wrong and, there's debugging so if you can, replicate that environment both in an, interactive and non-interactive way I, personally think that's really really, powerful and and interesting um I think, actually U just a note I think we've got, uh scheduled to have the an interview, with banana coming upcoming so listeners, uh watch out for that one I'm excited, about that really exciting product and I, I just I think a really really exciting, space face of just like the MLA aai, operation like Ops Dev tooling coming, out right now um yeah definitely really, excited and to kind of take what you, said even a step further like you know, you might be reading a research paper, and you see a Google collab notebook, that has a model and you want to go take, it fine tune it for your own sale uh for, your own you know whatever you want to, do with it and then go ahead and deploy, it I mean brev is kind of in the center, uh of like interactive compute where we, could take a go if we have a import tool, for collab notebooks where you can kind, of import it on brev change the compute, that you want get something more, powerful fine-tune it the way you'd like, um maybe even use a template for API, framework so you get like flask apis uh, set up ready for you you can kind of, continue to modify from there and then, hit the production button and go to, Banana that's kind of like the dream, workflow I see where we're behind the, scenes always finding the cheapest GPU, for you to do that you're able to get as, 
powerful of compute as you need, it's really simple to go from like, Colab to something scaffolded with like, APIs that are ready for you to deploy to, production um and again we just get to, focus on the fun part all right so um Nat, I'm looking through the templates that, you have at brev.dev and you know just, to give people a sense of like some of, the things that you can kind of spin up, an environment quickly get and do right, away um I see a couple different stable, diffusion stable diffusion stable, diffusion version two Dream Booth, tensorflow whisper clip image captioning, all all sorts of different things but, then there's you know environments that, you have templated out for things like, go and rust and you know other, environments that people might be, interested in um you already alluded to, the fact that you're a quickly moving, you know small team and I'm wondering, like out of all the sort of like areas, that you know you could focus on it's, probably one of the things I would guess, is it's maybe difficult to position this, for a certain group of people that, really need it cuz it's kind of a common, need across you know all Dev, environments so I'm wondering how you it, seems like you've kind of brought some, focus to the area of GPUs and data, science AI type of workflows, specifically do you think that's mostly, been driven by this sort of GPU element, and the complexity of those environments, or how do you think about like where to, head from here in terms of like the, verticals and the industries and the, specific Dev workflows that you're, thinking about and you're focusing on, and um how is that working what are you, hearing from users in that respect yeah, so it's kind of funny before we we, leaned into the AI ml workflows pretty, heavily you're right right it's a Dev, environments is like who is your target, audience uh people who code right and, that's kind of a very naive answer for a, very early stage of the product um I
think what we learned is you really want, to be able to solve someone's problem as, quickly and acutely as possible and then, get out of the way and I think that's, been a big change in direction for us, even if you look at the uh like the way, that the product onboards you need to, have the CLI so you can run br open so, we used to say oh well when you make an, account we'll tell you to install the, CLI right there but the user doesn't, know yet why they want to install the, CLI they haven't they haven't had, expressed desire to open their Dev, environment yet so the way that we, changed it is it's just focus on getting, your environment created when your, environment's created then you see an, open tab when you click the open tab, there it tells you install the CLI, because you haven't yet and so that's, you know the user says I want a thing, and then we can kind of show and not, really impose and so when we were, thinking about like broadly Dev, environments when we initially started, this tool uh or when we initially, started building brev it felt like we, you know someone says hey my local, environment is not working and so we'd, say great we can make one for you in the, cloud but now we're not just introducing, brev as a tool to solve their, environment issues we're also, introducing the cloud it's a separate, thing and so in terms of like acutely, solving the problem we're not doing that, we're introducing the element of the, cloud which they have not yet expressed, a desire for and so what's great about, the GPU use cases is we're meeting, people where they are which is in the, cloud right they're saying I am trying, to access an a100 that does not exist on, my MacBook Pro and I want to get this, running right so the cloud intention is, coming from them not us right and we're, not kind of like sneakily trying to, introduce something else that way we can, get them to use brev it's just meeting, the user where they are making and the, issues with using a GPU 
in the cloud is, that they're really expensive and, they're really painful to get set up and, then of course all the dev environment, issues and so that's been a really great, Focus for us and and we're leaning in as, hard as possible to the mlops tooling, the dev environment issues are much more, severe here uh it makes a lot more sense, for uh there's there's a lot more room, for us to Delight users by making a much, better experience and going back to that, container strategy if we can move, between different clouds we can also, move between one local Cloud which is, your actual computer so I think the way, that we kind of want to approach broader, Dev environments is you should be able, to run something on your computer and, then say I now have a need for the cloud, I want double the ram I want a GPU I, want something so you can start local, and then move it to a cloud and that's, the way that I think we can ultimately, brought in from mlev environments but, this is a huge Focus for us right now, and what I really want to do is um, rather than think about so many of those, other use cases how do we get really, tight integration with banana how do we, get a really easy way to go from a, collab notebook to something that you're, now fine tuning on a much more powerful, GPU how do we find uh an interface with, other clouds and like that's where we're, focused right now and there's a lot of, work to do here yeah clearly you have, such a focus on kind of accessibility uh, in terms of the experience and um and, you know you have a bunch of different, ways of connecting in you know like I, use vs code so I went and looked at that, um and you have the guides that address, different common models that we would be, interested in that are really popular, right now like stable diffusion um and, you talk about the different clouds, could you pick one whatever one you want, and just kind of walk us verbally, through and I know it's Audio Only uh, but if you can walk us 
verbally through, kind of what the workflow looks like and, what people might expect just to give a, sense it looks really good but I'm, trying in my head I'm trying to put it, all together from an end to end and I, bet you've done this before so I'm, hoping you can kind of just give us a, little narrative that's easy to follow, on that yeah so let's say you want to, run dream booth and you want to make a, bunch of cool photos of you and your, friends uh so you can go to our Dream, Booth template it says click a link any, environment you can actually uh make a, URL to easily share it so we made one, that's a URL template for running dream, boo so you click the link from our blog, post or from our the guide in our docs, and it will take to the dev environment, page with everything filled out it has, the GPU that you'll need selected it has, the volume the amount of hard drive that, you need the repos that you need the, setup scripts not you don't you don't, have to worry about anything just pretty, much hit the create button when you do, that the environment essentially what, we're doing behind the scenes is, spinning up the GPU that you need we are, installing everything that's needed all, the dependencies that are needed for, that when you're done with that with the, brev CLI run brev open in the name of, your environment and it'll open up the S, code to that environment and in the, readme it'll say upload 10 photos of, yourself in this folder and it we kind, of show you how to run uh how to train, and that's it so the idea is you know in, like four minutes you have a GPU running, everything and all you have to do really, is focus on the fine tuning that's uh, that you kind of want to focus on that, sounds great yeah could you uh share a, little bit also about like because part, of this I think is like I'm doing a, specific thing in my environment that, I've created which is special to me but, now now somehow like I need to share, that with Chris right how would 
that, work out in this type of scenario yeah, so there's a few things that breev does, behind the scenes there's like things, that are that we intend for you to share, but every environment that I have I have, my own G aliases like when I type c, that's that's a function for get commit, right s is get status there's a bunch of, things that I just expect and I have set, up in my zrc so you can set up your own, developer preferences and every time you, create a Dev environment we take, whatever was shared in the template and, then we add all of your settings on top, of it there's also hash cor vault as, hooked up by default into every instance, so you have like an encrypted Secrets, manager so I have my AWS credentials, encrypted and it stays in my uh AWS, account and uh every time I create a Dev, environment if I if my co-founder shares, one with me or someone on the team gives, me their environment I reliably know, that my terminal settings are all going, to be loaded in my uh AWS credentials, will be loaded in but also there's, Scopes to the encrypted uh Secrets, manager so you can say that like if, someone shares this environment make, sure that these secrets are added into, the environment like an environment, scoped setting so it's up to you to, decide what you want to be shared um, we're not going to share secrets that, you don't want you're not going to share, your AWS credentials if you don't want, it you're never sharing a machine with, somebody you're we're just setting up, one for them and setting it up kind of, identically so yeah yeah which I guess, gets to that sort of idea of templates, right you're creating a template which, you intend another person to use but, maybe in a slightly different way than, you used it right yeah exactly so I'm, going to I'm going to throw out kind of, a a random question and it's okay if you, haven't gone here I just want to ask, have you ever thought about uh having, one that is essentially you know we see, these 
services that companies will run, and then they'll end up deploying it, kind of on a private server or something, so that it can go into a secure, environment that kind of thing uh as a, standalone instead of being web, accessible any thought toward doing, something like that uh where you could, use it in a non-public environment yeah, so that's how larger team, uh will use brev so at a minimum you can, just deploy all of the instances so that, the instances themselves stay in your, AWS account but we can also deploy the, entire control plane behind your VPC so, nothing's really exposed out um but, that's kind of more on the Enterprise, route uh individual developers I don't, think have this I totally get and it's, the Enterprise route that I was kind of, asking about is like you know you'll, have large organizations that have their, own uh gpus and stuff like that but, they're still just gpus and so I was, wondering whether like you know moving, into that if the control plane can say, okay I'm going to hook up what you have, in your data center uh here's your, Workforce and you kind of have your own, environment so that's something clearly, all have been thinking about doing yeah, and something that we actively support, and we have teams that we're talking, with that are going this route it's uh, you get all the same benefits where you, know you can still scale down your, instances scale up your instances, obviously you might not benefit from, some of the cheaper gpus in the other, clouds because they're not behind the, VPC but um if you're on like AWS or gcp, um we can absolutely do that and uh we, know from like an individual user, perspective Ive if you're going to pay, an extra 8 hours by accident cuz you, know we always forget to shut our, instances off if you're a team of 180, Engineers uh that cost just is Amplified, and I saw that at workday when I worked, there as well so uh definitely we've, kind of had some of those learnings uh, brought into the 
product as well so yeah, yeah that's awesome I'm just thinking, like looking back at my own sort of, progression and like trying to run some, of these things myself and like just, thinking back now I mean the tooling, has improved right but the, environments were still difficult right, so like either I had like you know the, consumer GPU card that's in my like, workstation here or I'm trying to use, one in the cloud and like the GitHub, repo is there and like the tooling like, I can understand what's happening in the, code right like that has gotten much, easier I can deploy stable diffusion in, like a very small number of lines right, but the environment is still quite, difficult so I think this is really, exciting and encouraging I'm wondering, what what encourages you and what are, you thinking about kind of like looking, towards the future um what what excites, you about this space um man what excites, me about the space I uh I think I don't, know how to say it there there's just so, much to focus on in every realm right, within interactive and non-interactive, compute like I've talked to Eric at Banana, about just how we both both of our teams, are about the same size and we're both, 100% focused in our space and it just, feels like there's an infinite amount, just looking down so looking up there's, even more uh you guys mentioned your, last episode was on ChatGPT and uh I I, think AI is really exciting not so much, in that it's going to replace us all but, it kind of lets us be more creative, directors of our own lives if you think, about any creative process as having, like some generative aspect and then, some like malleable aspect so if, someone's making clay they like throw a, bunch of clay down that's the generative, and then you kind of like form it nicely, into the bowl or cup that you want right, and that's the kind of like morphing it, in so there's always those two aspects, and when AI is able to help us just kind, of really push on that
generative side, and we're still in control of the output, we're still the ones that are kind of, morphing the final product uh I view it, as like uh just an extremely empowering, thing and so it's been really exciting, seeing what all the developments in the, space um really bought into the idea, that you make things a little bit easier, and you can just dramatically increase, the affordance for things to happen and, so as much as possible how do we get rid, of machine problems and let people who, want to build really exciting things and, build the next new affordances and the, new models essentially be able to do, that with as little friction as possible, and that's not just within their uh fine, tuning and uh resource constraints that, they might have but also in terms of, like moving it and shipping it and, delivering it and so uh on one hand the, things that are being built is very, exciting but on the other the energy in, the space is is huge right I think, everyone has been so uh inspired by, what's been done recently with chat GPT, and the recent AI models that are out, that it's just it's galvanizing a lot of, people to build a lot of really cool, things everyone I know especially here, in San Francisco founder or not is you, know it's funny seeing Founders who have, nothing to do with AI thinking about AI, side projects right that's galvanizing, people and so uh everyone is really, excited about building this stuff right, now and I I just hope we don't lose that, energy and um just make things as, frictionless as possible as we do that, um yeah even I I'm guilty I have a, little Saturday project I'm throwing, together with some some generative AI, stuff right there's a lot of really cool, stuff that's happening so that's great, yeah well um uh thank thank you Nat and, your team for helping us you know reduce, some of that friction and get people's, ideas out there like this is this is, super exciting and I think you know, speaking of friction I think one 
of the, things that you mentioned prior to our, conversation is that um you'll spin up a, coupon code for for our listeners um for, uh some compute on brev.dev and, getting some of that you know removing, even some of those barriers for our, listeners as they're getting started so, we'll make sure and include that in our, show notes so so um please take a look, at that um get on brev.dev I did it it, only takes a couple minutes it's awesome, so yeah thanks Nat for uh coming on the, show and uh telling us about what you're, doing absolutely thank you guys so much, for having me really love the, conversation and by the way Chris you, mentioned Lockheed Martin earlier uh my mom, was a nuclear engineer and worked at Lockheed, Martin as well so uh that's awesome oh, thanks for telling me that I'm, definitely in good company and uh, awesome cool yeah yeah all right thanks, see you, guys, [Music], all right that is our show for this week, if you dig it don't forget to subscribe, head to practicalai.fm for all the ways, and if Practical AI has benefited your, life pay it forward by sharing the show, with a friend or a colleague word of, mouth is the number one way people find, shows like ours thanks again to Fastly, for fronting our static assets to fly.io, for backing our dynamic requests to, Breakmaster Cylinder for the beats and, to you for listening we appreciate you, that's all for now we'll talk to you, again on the next, [Music], one
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Machine learning at small organizations | Why is ML so poorly adopted in small organizations (hint: it's not because they don't have enough data)? In this episode, Kirsten Lum from Storytellers shares the patterns she has seen in small orgs that lead to a successful ML practice. We discuss how the job of an ML Engineer/Data Scientist is different in that environment and how end-to-end project management is key to adoption.
Leave us a comment (https://changelog.com/practicalai/207/discuss)
Changelog++ (https://changelog.com/++) members save 2 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
• The Changelog (https://changelog.fm) – Conversations with the hackers, leaders, and innovators of the software world
Featuring:
• Kirsten Lum – Twitter (https://twitter.com/machsci) , LinkedIn (https://www.linkedin.com/in/kirsten-lum)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• storytellers.ai (https://storytellers.ai/)
• Trello (https://trello.com)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-207.md) | 5 | 0 | 0 | I tend to think of these roles there's, not even a letter that describes this, there's so many you know you need to, have a relatively shallow but working, knowledge of the entire cycle and so, instead of thinking of your role as a, data scientist as training models or, even producing models the role of the, data scientist is to convert the data, into some business value using data, science, techniques, [Music], welcome to Practical AI a weekly podcast, making artificial intelligence practical, productive and accessible to everyone, subscribe now if you haven't already, head to practicalai.fm for all the, ways special thanks to our partners at, Fastly for delivering our shows super, fast to wherever you listen check them, out at fastly.com and to our friends at, fly.io, we deploy our app servers close to our, users and you can too learn more at, [Music], fly.io welcome to another episode of, Practical AI this is Daniel Whitenack I'm a, data scientist at SIL International and, I'm joined as always by my co-host Chris, Benson who is a tech strategist at, Lockheed Martin how you doing Chris I'm, doing just fine how are you today Daniel, I can't complain, um it was definitely uh the first uh, meeting-heavy day of the new year for me, me too I was really enjoying those like, not everyone knows that I'm back at work, and so I can get stuff done days so yeah, now everybody knows but all good things, I'm working on fun stuff so hey we're, we're getting to talk AI now for the, next few minutes so we're good exactly, yeah and um really excited to uh get to, have Kirsten Lum with us she's co-founder, and CPO of Storytellers AI welcome, Kirsten thank you for having me yeah it, was great to uh get to connect with you, on on Twitter and get you scheduled for, the show one of the things that we were, chatting about when I was first talking, to
you about potential topics for the, show was um machine learning at small, organizations which I definitely like, the idea of like discussing this one, because I don't think we've like cently, discussed this on the show in the past, and kind of alluded to it at certain, points but also I got my start as a data, scientist working at startups at smaller, organizations so I definitely know both, like some Joys and some pains from, trying to like do machine learning or do, data science at a smaller organization, what got you started thinking about this, topic in particular and what I, understand from storytellers ALS also, kind of engaging with a lot of these, small organizations in this type of work, so kind of like you I started in this, field in this range of companies from, small to large in particular one of the, things you know I'll go a little bit, into my background how I got into data, science and why small organizations, ended up being my passion I don't, actually have a degree in data science I, have a degree in English is my my, background so I came in I'm sort of a, transplant into this field but I came up, through startups that's how I got into, Tech was working in startups and one of, the things that you learn at startups, right is to do kind of the task that's, at hand you just figure out what that, task is you will do it you make things, work and so I'm so grateful that that's, where I started my journey in Tech was, in startups because that really is the, underpinning of how ml at small, organizations works I ended up going, into data science through analytics I, was doing marketing for a long time got, my feet wet and like oh wow if I have, data about my marketing campaigns I can, do these things that you know if I, didn't have the data I wouldn't be able, to be nearly as successful so I I that's, where I really found my passion for data, and my first real data science project I, took this marketing process that was a, bidding algorithm that was 
being run out, of Excel and I converted it into a, python script and that Excel process was, taking like 30 hours a week and with, python it took 8 seconds it was magical, talk about return on investment yeah, exactly exactly I actually learned, python to do it it took me two weeks to, learn enough python just to convert this, like process into a python process and, that was like this is just too fun and, too powerful of a tool to not spend all, my time uh doing this stuff so when I, think about that project was actually at, a large company but that large company, didn't have lot of access to data, scientists it was a pretty nent field at, the time and so my knowing python being, a marketing analyst and she being like, I'm going to roll up my sleeves I'm, going to stand up this process in Python, and just do it myself had a huge impact, on the business they actually changed, this entire part of the business to be, based within those tools because of how, much more powerful it was to not have, people clicking buttons in Excel for, hours a week so that's where I got, really passionate about it I could I saw, how one person who had these tools could, come into an organization and make, meaningful change not just, organizationally but actually for the, business itself for growth by using, these techniques and what would you say, so like out of those, experiences because you being a single, data scientist in that context were able, to make a big impact and so a small, organization I can imagine a lot of, small organizations saying things like, well you know we're not a big tech, company like we can't support this type, of work or like we're not in a position, to do like predictive things or we're, not in a position like we don't have, enough data or whatever it is what are, some of those stories that you've heard, or cases that you've heard where maybe a, company is selling themselves short in, terms of the opportunity that's there, around data science and machine 
learning? Yeah, I love that question, because I think it really is selling short, especially now. Maybe 10 years ago those were very valid reasons to not take the step into data science or predictive modeling, but now the tools are so much better in terms of being able to have a single person make a big impact with these techniques. It's much, much easier, even than when I started, to be able to do that. I would say the top reasons I tend to hear are, one, we don't know how to even start, in terms of how to hire someone. That's a big, big barrier: knowing how to evaluate whether someone's going to be able to come into your organization and make an impact is pretty tough. I also hear a lot about not knowing if their data infrastructure is ready, or if their data quality is ready. Those are two big questions that are fairly hard to answer: how do you know your data is ready for data science? Do I have enough data, is it clean enough, is it stored the right way? Those are all big questions. And then the final one is, I don't know how I would integrate this person into my existing business such that their output gets fed in as an input to all the things that are already running: all of my marketing campaigns, all of my website analytics, all of that. How do I get this new discipline integrated with all the other technology? Especially with small companies, those people are always stretched very thin. Your database administrator is stretched very thin, your engineers are stretched very thin, so do I have the margin to incorporate this new technique? Do you think, to that last point, that just kind of FUD, fear, uncertainty and doubt, really plays in from a management standpoint? I had a similar experience, and I really walked away from that. It was the last small company I worked at, and I just don't think they thought they could do it, and they didn't,
and I ended up leaving as a result of that. But do you think that's a common situation that small companies run into? I totally do. And to be fair, interestingly, I don't think the data science community is doing a fantastic job at creating literature that is accessible to someone who's in a small business, to be able to say, here's how you get started. I think a lot of the literature in data science is still really in this experimental phase of, what new models can we build, how can we push the state-of-the-art in terms of accuracy, and that kind of thing. But the literature really doesn't have an angle for, say, a CEO of a small company that has great analytics and is actually ready to take the step. There's not that bridge that says, here's how you do it, here's how you go from an analytics-driven, data-driven company to a data-science-driven company, or a predictive company. And I think that's right, that it's hard to dispel that fear when it feels so mysterious. So that's one of the things I wish the data science community had more of: non-data-science-facing literature about data science. It's a great point. So I just had this interaction: my wife is an entrepreneur and owns a small business, and we just had this conversation, because she just saw Jasper, which is a copywriting assistant, a generative language solution that helps you write and that sort of thing. It was the first time, and she's of course been married to me for quite some time, but it was the first time where she made the connection, like, oh, my people will be augmented by this sort of technology in a way that's non-threatening, or not a lot of work, and she was able to talk to her team about that. I'm curious, because in that scenario it's people that are already inside the company who, now that tooling, machine learning or NLP tooling, is
getting more user-friendly and marketed in that way, are seeing how they can use those tools to advance their business. But then there's still a need where, at her company, I also help do some of the forecasting, or other things where there really isn't a great off-the-shelf tool. But it also isn't that hard: if you know Python, you can import Facebook's Prophet, and boom, there it is, okay, I've got it. But it's not the same sort of approachable thing as a Jasper or something like that. So where do you see, moving into the future, the limits of this low-code or no-code kind of people leveling themselves up, versus the things that are going to be really valuable for a data scientist to do in a small organization? I love that question, because it reminds me, back when I was learning to be an analyst, there was always this idea that eventually BI tools would get good enough that you wouldn't need a BI analyst, right? Like, Tableau is just a step towards whatever the no-code interface is, and some of these companies would be like, your marketer can just go into an interface and click some buttons and they'll get a report that answers all their questions, no BI needed. And that didn't prove out to be true, right? BI analysts are still this very important role, but there are these other roles and other tools that can fill some gaps around there. I love the idea of Jasper as this friendly interface to do this particular task of content generation; I think it's a fantastic use case. And also your example of Prophet needing someone to come in and use a pre-built library for doing forecasting, in order to get this very simple output that really changes a business's ability to guide itself: that's an example of a hard task to get rid of. And the real core of that, as I see it, is that data is
different from organization to organization. When I was at Amazon we'd always talk about solving for the constants. That's a constant: you go from one business to another, someone's using Stripe, someone's using Square, this person's using Salesforce, this person's using HubSpot, and that problem just multiplies when you look at all these different technologies that come together to underpin their business. That's where, like the BI task, you're always going to need someone who's able to reconcile and make sense of the data, and then maybe push it through something that to them is like, oh, this is easy. Right? Training a forecasting model in Prophet is easy for a data scientist, but it's unreachable for, you know, the CEO of a company. So that role, I think, just like we've seen BI analysts were always needed even when we had all these fantastic low- or no-code tools for analytics, we still needed BI analysts able to do that heavy lifting of reconciling, of telling the story, of creating those interfaces. Data scientists have that same kind of role. They're always going to need to pull the data together, build that model, explain it to the person, and explain how to integrate it. And that role, I think, is the most fun, if I'm honest. That's the most fun data science role to me, that particular role. Yeah, I've always really enjoyed that. I've used this analogy with Chris a lot of times, that often I really enjoy this idea that data science is more like cooking than being at the chalkboard doing math problems. You have a recipe, but your recipe doesn't quite work the way the tutorial on Medium is telling you, because you don't have Stripe, you have Square. But you modify the recipe just enough to make it work for your scenario, and then, you know, you don't have the same ingredients, but you adjust the recipe and you go from there. I think that's a really
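For the Prophet pattern mentioned here, the library only asks for a two-column dataframe (`ds` dates, `y` values) and the model itself is two or three lines. Since Prophet may not be installed everywhere, the sketch below shows that call pattern in comments and computes a dependency-free naive weekday-mean baseline instead; all the data is synthetic:

```python
import pandas as pd

# Prophet expects a two-column frame: "ds" (dates) and "y" (value to forecast).
# Synthetic daily history with a made-up weekly pattern:
history = pd.DataFrame({
    "ds": pd.date_range("2024-01-01", periods=60, freq="D"),
    "y": [100 + (i % 7) * 5 for i in range(60)],
})

# With Prophet installed, the whole model would be roughly:
#   from prophet import Prophet
#   m = Prophet().fit(history)
#   forecast = m.predict(m.make_future_dataframe(periods=14))
# As a dependency-free stand-in, forecast each future date with the
# historical mean for its weekday -- a naive baseline any model must beat.
weekday_mean = history["y"].groupby(history["ds"].dt.dayofweek).mean()
future = pd.date_range(history["ds"].max() + pd.Timedelta(days=1),
                       periods=14, freq="D")
baseline = pd.DataFrame({
    "ds": future,
    "yhat": weekday_mean.loc[future.dayofweek].to_numpy(),
})
print(baseline.head())
```

Either way, the deliverable is the same shape: a dataframe of future dates with a predicted value per date, which is what makes the pattern so reusable across small-business forecasting problems.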
good framework to think about, and I also really enjoy that. It does make me wonder, though, for a data scientist or machine learning person at a small company: I could be a machine learning engineer at Google on the Translate team, and my every day is translate, right? What I'm thinking about is machine translation. Whereas at a small company, like you said, you're in an environment where you are the data science resource, or there's a small number of those. How can a data scientist or machine learning person at a small company deal with that sort of variability, where one day you're dealing with dashboarding and another day you're dealing with sales forecasting and that sort of thing? Yeah, it's a fantastic question. I go back to when we used to talk about T-shaped data scientists, right? Across the top you need to know a little bit about pretty much everything, back when everything was a little bit narrower than it is today, but you'd know a little bit across the top and then you'd have one area where, again, you're the translation or large language person, or you're the computer vision person. I tend to think of these roles, and there's not even a letter that describes this, there are so many: you need to have a relatively shallow but working knowledge of the entire cycle. And so instead of thinking of your role as a data scientist as training models, or even producing models, the role of the data scientist is to convert the data into some business value using data science techniques. And that means having a working understanding of all the elements of the machine learning workflow: data infrastructure, a working knowledge of how to stand up a simple database, how to pull data from various sources into that simple database, and then your feature engineering layer, which is really ETL. That's one of the areas where I hear a lot of data scientists say, no, that's not my job. Right? ETL is actually, at a
small company, your job. It'll be in a database, but you have to do your own really heavy-duty ETL to build your features. Then you've got your training, the thing that everyone gets into data science for, it feels like: training your models. But then afterwards you also need a simple way of deploying your models, and a simple way of monitoring and testing the impact of your models. And so I think about a role at a small company as needing to encompass all of those components, but it doesn't need to be at the level you would see at a large company for each of those components. Sometimes people hear that and they're like, wow, that just sounds awful, but the reality is it's because they've seen someone who's specialized in MLOps, and they're thinking, I have to do that too. And it's like, no, no, no, you don't have to do MLOps like your MLOps peers. You just need to know MLOps well enough that you could deploy your own models: have a very simple batch pipeline that you know how to stand up in the technology that's available to you at your company, and, if you can do it, a very simple real-time inference pipeline. Those are your recipes. Just plug your data into those two patterns and you're good to go. Very rarely do you actually need to come up with a whole new way of doing MLOps at a small company; you can usually stick to a few simple patterns. The Changelog is deep discussions in and around the world of software, and it's been going for over a decade. We interview hackers like Chris Anderson from 3D Robotics: at the time, drones were like Predators and Global Hawks and military-industrial. They were classified and, you know, super 10 billion dollar things, and we had just built a drone with Lego pieces around the dining room table, programmed by a 9-year-old, and it's like, okay, that should not be possible. When a 9-year-old can do something that is
classified, that literally is export controlled as a munition, with Lego, with toy pieces, you know something important in this world has changed. Leaders like Devon Zuegel from GitHub: in the 10 to 15 year range, or 20 year range, what I would really like is, if you have three 12-year-olds hanging out and one of them's like, I want to be a firefighter, and another one's like, I want to be a lawyer, I want one of them to say, I want to be an open source developer. And innovators like AAL Hussein: I've yet to see applications at scale that don't use multiple languages, that don't have just arcane stories behind why this weirdo thing exists, you know, like, all right, when you open this file you're going to have to turn around three times and tap your nose once. It's just the most hilarious stories, you know, but applications are living, breathing, they have cruft, that's normal. So I want to normalize weirdness, because that's just how applications evolve over time. Welcome to The Changelog. Please listen to an episode from our catalog that interests you, and subscribe today. We'd love to have you with us. I'd like to extend a little bit what we were just talking about. We were talking about patterns or recipes if you're in a small business, and you addressed a little bit about MLOps and the fact that you don't necessarily need to go to what the large-company person who specializes in it entirely does. But that does raise the question: there are a lot of those tasks to dive into, and for a person to be successful in the small-company environment, where they're by necessity forced to be a little bit more generalist and cover a lot more things, but maybe not at that depth, what are some of the patterns or recipes, other than MLOps, that you've identified? If you were bringing somebody in new, what would you say to focus on, such that their life off the bat will be a lot better than just diving into the deep
end without any help? What would you tell that person? Fantastic question. So one unintuitive thing I would tell folks is: build your project or product management skills. Having a very strong framework for how you manage a project from end to end, you're going to need that muscle to be quite strong, in particular because you're going to be shepherding things sometimes from all the way at the beginning, like, data isn't even in a database and you're just talking to the salesperson who has a problem you're trying to solve, and it's like, all right, we're starting from the very beginning, from scratch. So having a very strong framework that really goes all the way end to end, from data in to data out, to your product, is critical. A lot of people use CRISP-DM as their framework for that; I find it's not quite specific enough for a practitioner to move things from end to end, but that's going to be a little variable by company. I do tend to break it up into these five stages: having an interview format, how do you know that you have gotten the requirements from the person who's going to use this, such that you have the inputs you need for your model; having a framework for interviews; having a simple recipe for standing up a database, or if you need to stand up your own database, having a very simple architecture for that, one you can plug all of your projects into so they're sort of centralized. The other thing that I really recommend, and here's just a side note: almost all problems in this space are tabular. It's tabular data, that's what you're going to do, and if you've been on Twitter about tabular data, you know it's gradient boosted trees. Just use gradient boosted trees. That's your baseline. Don't worry about baselining with linear regression or the simpler models or random forest, don't worry about that. Just stick to a very simple baseline with gradient boosted
trees; you'll probably get a pretty good model out of that. And then the last part, that I didn't mention in that earlier list, that I think is super critical, is having a very clear baselining process: how do I know when I'm done? And then when you're done, put down your pen. Don't worry about trying to get to state-of-the-art on every problem. Just know what your baseline is: how do I know when I've built a model that's actually going to improve this business, so that I can stop working on this one and work on the next one? Because two models will have a much better impact on the business than one perfect model. I'm having all sorts of flashbacks to my work in various organizations while you're talking, some good and some painful. I'm thinking about a couple. One of my experiences in a small startup environment is getting into this sort of, I don't know what to call it, like, you become the replacement-for-Excel function, where it's like, oh, email Daniel, I think he can merge two columns together, right? And then they send you two Excel sheets and you're like, okay, I can do that in like one minute. So you do it, but then you get this increasing number of tasks in, and then you just do that all the time. And then the second scenario is like, okay, I'm going to try to build out this roadmap and this project plan, but things get shaken up all the time in small companies, and it's like, okay, I had this plan to optimize my pricing model over the next six months using these A/B tests, or whatever I was doing, right? And then the CEO's like, oh, you know, we're not meeting revenue this month, we need to just change our pricing structure entirely, right? Does that spark anything in your mind in terms of the level at which you can do product or project planning within a small company, while still managing that sort of flexibility
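The gradient-boosted-trees baseline described here can be stood up in a few lines with scikit-learn (assumed available; the tabular data below is synthetic and the exact model settings are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic tabular problem: 500 rows, 5 numeric features,
# target driven by the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# The whole "baseline" is one fit call; tune later only if the
# business case justifies it.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"baseline AUC: {auc:.3f}")
```

Recording that held-out AUC is the "how do I know when I'm done" number: once a candidate model can't meaningfully beat it, put down the pen and move to the next project.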
and random tasks that come up? Any recommendations there? Yeah, it's funny, because you sparked a whole bunch of memories of doing that too. I feel like that's a universal experience: once you know how to use data, people will not let you do anything but data. Yeah, right. I think number one is going back to the idea of solving for a constant. What is a constant in small business is that strategy is changing very rapidly, so building that into the way that you approach data science, I think, is really important. I find that one thing that helps with both of those scenarios is being able to deliver a result that baselines everyone on what good data science does for the company. So if a good data scientist is able to build a model that increases open rates on emails 50%, it's much less likely that they'll email you to ask you to merge the columns, because you're the person who optimizes email open rates 50%. So that tends to work super well, which goes back to the idea that you need to have an end-to-end process, you need to be able to deliver relatively quickly, on a quick timeline, and know how to measure your results, so that when you do start getting those questions it's like, you know what, I'm super busy on this pricing model; if you can wait until such-and-such a time, I'd be happy to, otherwise here's a Google search. You know, not quite as abrupt as that, but, here is a tutorial on how to merge columns in Excel. And so that's what I tend to point to in a lot of scenarios as something that solves the instability problem in a small business: make sure that you're really focused on results, and that people know what the results you're driving are. But roadmapping, I think, is one of those things: as a project manager I used to hold on to my roadmaps very tightly, but as you learn, even working in large companies, we would have those moments where you're kind of feeling a little whiplash,
like, how do I solve for a changing roadmap, how do I solve for that constant? Really clear prioritization frameworks tend to help with that too, especially if they're known way up the chain: this is the metric I'm trying to optimize for this business, and therefore this is how it reads into my roadmap. Being able to give those cost trade-offs all the way up the management chain tends to be a really effective solution as well. I have a follow-up question for you, as we talk about process. I love the way that you've worked out the end-to-end system, and you have go-to things that you can utilize, and you're simplifying and making sure everyone understands what the result is, how to measure it, and stuff like that. In so many small businesses they'll have the data scientist, but they'll also have the software person, and then they'll have the infrastructure person, or whatever you want to call it, systems or whatever, DevOps Doug. Yeah, exactly. Or DevOps Diana, whatever. That person is there, and so I like what you're saying about going through that system. Where do you bump into either two different ways of looking at things, bump into that person, or find a way to integrate so everything is nirvana together? How do you navigate that in a small business, where that person's like, okay, you're on my toes now? Yeah, I love it, I love it, because it gets to this skill that I feel is not talked about a ton in data science education, which is how much people are actually the mechanism by which things get done, more than any code or any framework. It's people that get things done. And so knowing how to work in an organization with folks that have sort of their territory, how do I earn trust with them, how do I think about handoffs between the components of this overall system managed by people, is so important. I think one thing that I tend to recommend for
data scientists is that earn-trust part: how do you break down the process of earning trust inside of an organization and make that a repeatable thing? Overall, just knowing the architecture, people-wise, of your organization is a task that some care should be taken with. Who are the people that are over these various systems? Meeting with them and knowing who they are is baseline. Even then, knowing what their goals are, what their blockers are, and how your work can actually make their life better is huge. There is a big advantage to knowing, hey, the software team's got this problem where they're meant to optimize this part of the app flow and they're struggling with that; well, actually, that's a place where machine learning could come in and solve part of that problem for them. Can I actually do something that helps them with their own KPIs, such that I build this trust with this organization? And so, as much as we would like to talk about code, and if it was just faster, if it was just easier, if it was just simpler, then we could get data science done, focusing on that people part and being part of the team is actually the skill that stands on its own, beyond any technical skill that you develop over your career. This is Practical AI, and I have a very practical question, which I think is sort of a boring question, but I think it could actually be really helpful to people. You're talking about how in data science and machine learning education we don't talk a lot about this project management side of things, and I'm guessing there are probably even listeners out there, data scientists who, let's say, have a background in science, or academia, and their idea of project management is, oh, I have a notebook with some things written down in it. And on the other end you have data scientists coming from like a
software engineering background, and their idea of project management is, okay, I have a Jira board or an Asana, these sorts of tools. And maybe there are other backgrounds as well. I could see how the notebook isn't going to get you totally to a good place; I could also see how some of these other systems, like a Jira or an Asana, could be overkill for managing what you need, especially if you're a solo data scientist working on projects. Do you have any recommendations in terms of some things that are not overwhelming? It doesn't have to be a system or a product, but things to look into that can just make your project management workflow work for a data science scenario? Yeah, it's a good question. I'll start with: I really like Trello. If we're talking about products generally, Trello is a fantastic place to start. If you are that person that's just written things down in a physical notebook, Trello tends to be a good step forward, in terms of something that's sharable and everyone can see it. Not overwhelming, exactly. It's really great: you can build templates in it, like, these are the things I need to put together for my data science project. But besides that, the thing that I find is very hard to beat is Google Sheets. Google Sheets is a fantastic tool for almost any workflow. Usually when I come into a new organization that doesn't have a project management muscle, I start with a spreadsheet, because it's so easy to change, so easy to update. You can add a new column, remove a column you're not using, it's super, super easy. And you do that for like a quarter: manage 10 projects through a Google Sheet and see what actually is helpful in terms of making sure everyone knows when things are due, what the deliverable actually is, how do we know we're done, what stages does our project go
through? Like, iterate on those in a Google Sheet for a while, and then you'll have this system that really makes sense to everyone, because everyone's been using it, and then you can level it up with an interface like a Trello if you wanted to. So that is my secret to almost every workflow: put a Google Sheet somewhere that's connecting some pieces for a little while, and that's what's going to teach you what you actually need in order to manage that in the long term. Yeah, that's awesome. I wonder too, with that: project management is one thing, but then there's, like we were just talking about, the people side as well, communication-wise, within a small company. I've had experiences in the past where I am maybe managing my own thing, and I think I'm managing it well, but I'm doing that in a silo, and I sort of crank on something for like a month and then try to lob something over the fence. So do you have any recommendations with regard to that, and really developing that empathy and good communication of data science within a smaller organization, between key stakeholders? It's a fantastic question as well. I love that you're hitting on all the pitfalls of working in a very small organization. It's only because I've hit the landmines. That's exactly it, yeah. The thing that comes to mind with that is really the understanding that success in a project... you can do as well as you want in a project, your product itself can be super good, but at the end of the day that product won't be able to make it to your end customer without passing through a few more hands. And when you really tie your project success not to the trained model but to the deployed model that is in front of your end customer, it starts to really raise the priority of understanding what happens downstream of the output of your workflow. So the output of our workflow is like this trained model,
right? Even like an inference pipeline for this trained model, that's the output. But that pipeline has to connect somewhere, and there's got to be someone who's doing that connection, and that is one of the tricks I use to really raise, in my own mind, the priority of having good relationships with people up and downstream of me. And so, good relationships downstream, as you kind of pointed out: regular communication really helps with that. Having a project management framework where you've got deadlines, and you're meeting your deadlines, and you're giving people regular updates along the way, really goes a long way in earning trust in that up- and downstream relationship. With anything like that, as we've mentioned a couple times, that means that's another task. I can imagine someone listening being like, oh my gosh, not only do I need to figure out how to do my pipeline, but I also need to figure out project management, and I need to figure out a communication strategy. Making all of those things as light-touch as possible is really important. So if you have your project management framework in your Google Sheet, do something as simple as updating that Google Sheet, copying those updates out, putting them in an email, and saying, this is my update for the week, here's where we are. Really lean on those agile frameworks of a simple standup: here's what I did last week, here's what I'll do next week, and here's when I'll be done. Those very simple mechanisms, and training yourself to make them simple and keep them regular, is really critical. I have a follow-up, which I'll get to in a second, but something that you said was really resonating with me. Though I've spent a lot of years in small companies, I'm currently in a large company, and just before we started this conversation I was in a work meeting with a development team, and I
was literally saying, no, no, we need to lighten this up, we're too heavy-handed. I think it's one of those small-company things that could be used very well in many large companies: don't overdo it, keep it light. So I just wanted to say I really resonated with that when you were saying it. You've done a really good job of talking about the need for trust, and finding those opportunities, and communicating. But as we move forward with data science in small organizations, there are all these opportunities for growth of the small organization by really absorbing data science beyond just your role as the data scientist, by affecting all of the other functions. As you build that baseline of trust, and you're actively communicating, how do you approach getting people who aren't thinking about the benefits of data in a business context? They're not doing your job, they're doing their job, but there are so many ways that, if you get data-centric about it in a non-technical way, they can get benefit from that. How do you bring people along on that journey? Because it's a hard thing to do. It takes a very savvy touch to bring people to see something that's not normally their forte, it's your forte, but they can benefit tremendously. What are some of the ways of bringing all of those other people along in their own right? Yeah, the reason why I like this question so much is, in a small organization, when you're one of a few data scientists, you have this responsibility that you're not just representing your work, you're actually sort of representing the discipline within that company. And I think there are a lot of companies, going back to the beginning of our conversation, that are not doing data science because they maybe dipped their toe in and really got burned, and that one project where there was a lot of promise but it never panned out turned them off of wanting to do data science kind of
like for a long time. These cycles tend to be really long. Trust cycles like that tend to be super, super long: maybe 2 or 3 years later they're like, maybe something's changed in the landscape that would make me want to try it one more time. So knowing that is sort of serious. When you're the one data scientist, it's kind of serious to be like, I not only have to do my work well, I have to convince folks in this organization that this discipline can do something for their organization beyond what they're doing today. So with that in mind, one of the things I think is really critical is thinking of yourself as not just delivering a product, but educating about what that product is and its benefit, which is why having a very strong A/B testing framework, maybe unintuitively, maybe intuitively, is so important. Being familiar with how to deploy something and A/B test it at the same time, such that you can describe its impact, is one of the biggest differentiators I've seen between small organizations that really just love their data science team, versus, I don't know why we are doing this. That's a really big differentiator. Yeah. And so that education point, it's very hard. I used to say this quite a bit when I would review resumes: there is a primacy to delivering results above anything else. When you're reviewing someone's resume, a lot of times that's the output of all the inputs in their resume: what results have been driven. That can tell you a lot about how well they manage projects, how well they work in a team; all those things ladder up to delivering results. And so when you think about earning trust within a small organization, think about delivering results as being the output of earning trust: doing your work well, training your models well, having good relationships with the people along the pipeline. That's what will really point you there. And once an organization
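A minimal version of the A/B testing framework described here is a two-proportion z-test, which needs only the standard library. The email counts below are made up for illustration:

```python
from math import erf, sqrt

def two_proportion_pvalue(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z-test for a difference in rates (e.g. email open rates)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical experiment: control opens 200 of 1000 emails,
# the new model's variant opens 300 of 1000.
p = two_proportion_pvalue(200, 1000, 300, 1000)
print(f"p-value: {p:.2e}")
```

A tiny p-value here lets the data scientist say "the new model raised open rates, and the lift is not noise", which is the educating-about-impact step that builds organizational trust.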
sees those results, it's actually very tough: you'll have more work on your plate than you will know what to do with. That's what I found. We've talked a lot about challenges related to being a data scientist in a small organization, or things that you need to be thinking about. I'm wondering, from your perspective, because I've definitely seen some of these things: what advantages does a small machine learning organization, or machine learning practitioners in a small organization, have compared with machine learning engineers at, let's say, a really big tech company, in terms of what they're able to do? Because I think oftentimes that's not highlighted, like some of what you can do in that scenario that's maybe harder to do in a large tech company. Have you run across those things? Interestingly, that's part of the reason why I like doing it in small companies. When you do data science at a large company, you have to think about things like, how do I parallelize the compute for this so that I can actually get this pipeline to run through three billion users, or whatever. That's right. So there's this part of the machine learning tech stack that I find the most complex, the hardest to understand, the hardest to become expert in, the hardest to deploy: that edge where you have a very, very high number of users, an incredibly large amount of data, latency requirements that are very stringent, like this inference needs to happen in 300 milliseconds or everything is off. Those constraints don't tend to happen in small businesses. You tend to be looking at tabular data, you tend to be looking at batch inference, and those are things that are actually fairly straightforward to learn. When you sit down and look at what technologies you have to do batch tabular inference, it's fairly straightforward in most platforms. And so I think the advantage is that you actually get this vista of the machine
learning discipline at a small company that you don't tend to get at a larger organization. In a larger team you tend to have a fairly narrow aperture, like: I'm looking at my features that have been engineered by my data engineering team, I am then doing some last-mile stuff on that data to put it in my model and train, and then I'm handing that model artifact off to my MLOps team and they're doing all of the CI/CD stuff. So your aperture is very narrow, and you tend to not be able to see the innovation that's happening in MLOps, or the innovation that's happening in data engineering leading up to your model training, which I think is fascinating. It's so interesting to do and be a part of. And then you get the chance, if it turns out, like it did for some people who did modeling for a while, for instance that worked for me, to do some of the MLOps stuff. You're like, you know what, I like MLOps, that's what I want to do. And so you get the chance to actually see these different roles and try them out, and then you could go deep. You could say, I'm going to do MLOps and I'm going to be an expert in MLOps, that's what I want to spend my time on, and then go do that at any size organization. So you got me thinking about this, because you're talking about that and you're really making me think back to my small organization time, but I'm also in a large one right now. And as a comment before I ask the question: in large companies you're often at the mercy of arbitrary decisions of others that may not be as informed as you are as the data scientist, and that happens all the time. But you've kind of differentiated these different opportunities at the different sized companies. If someone came to me, and I was asking you for guidance on what kind of company they should target, there's a clear kind of generalist path with a lot of opportunity at small companies to try different things, and then there's this opportunity at
large companies where you may have to accept what they give you, but within that scope you can go deep. How would you recommend someone try this or that? You know, someone's looking to you for that mentorship; how do you steer them the right way? I think it takes a combination of things to be a data scientist in a small organization. I actually tend to recommend that less often than a more established organization, in particular if folks are coming just out of school. I tend to not recommend looking at smaller companies as their first job, particularly if that company is just starting their data science muscle, which usually you can find out when you're doing an interview with them, like: am I employee one, or sub 10, of doing data science at this organization? Usually I'll steer people away from those roles if they're early in their career, because there's so much about that role that's not taught at, say, a university. I tend to find people just out of college don't know how to set up their own data science pipeline from end to end, or how to interview someone really rigorously to understand how does this requirement tell me how accurate this model needs to be, making that translation. I tend to recommend sort of mid or larger size companies for a first role, so that you can learn from other people that have done this for a little longer and have built these sort of intuitive ways of doing data science; you can pick up some of those frameworks from them. But if it is a startup that is led by a CTO or CPO or CEO that has deep expertise in data science, has seen it elsewhere, that can be a fantastic opportunity to be mentored by someone really directly. So I tend to put them on that spectrum of: if you're brand new in data science, I tend to recommend a more seasoned organization so you can pick up some of this stuff that simply is learned on the job. Sadly, as much as I wish I could point at like
a blog or a course or something like that that would teach you end-to-end data science workflows, I actually haven't found one. Please send one to me if you have found one. But because that doesn't exist, I tend to say you need to see it. You need to see it, and really take that opportunity to learn not just your narrow aperture, but really try and observe what the people upstream of you and downstream of you are doing, so that you know that end-to-end workflow. And from there, a smaller company can benefit from what you've learned at that large organization, as long as you have this desire to really hands-on own it end to end. If you're hoping that the person upstream of you is doing a lot of the data preparation, please do not go into a small company. If you hope that someone's going to tell the story of how good your work is, don't go to a small company; that's going to be your job. But if you really love this, like "I want to be the one that takes this whole thing end to end", a small company is where I would tend to send people. That's a great perspective. As we wrap up here and get to the end, I'm just curious: as you look forward to the future, that could be something with storytellers and the work that you're doing there, or just generally in the industry, what's exciting to you and encouraging to you as you look to the future? I'm most excited, as we've built this company and seen how much need there is in smaller companies for data science techniques, and how hard it still is for these companies to find, hire, you know, keep data science talent. I'm super excited for the opportunity to show how much data science can help a relatively small organization and prove out that case, like, data science really works here. And from that, I think that will spark some ideas. You know, MLOps tools tend to really focus on enterprise use cases, large teams that are solving very complex problems. I'm excited to see tools that
are really aimed at solving these constraints that small businesses run into: lots of disparate data that needs to be brought together in order to build these models and deploy them simply, like a very simple sort of layer for that. I'm super excited about that. I also, as you mentioned Jasper, I'm really excited about how large language models will play into this idea of deploying data science within organizations. Will it make people more familiar with the concept of data science, to where they're more ready? They're like, I see this value other people are getting, can I try this in my organization? And then lastly, I would say I'm really excited for how I see the data science community generally maybe starting to pivot away from excellence being measured as state-of-the-art performance, towards excellence being measured as impact within some sort of vertical. There are so many areas that data science could really, I mean, not to sound like the Silicon Valley stereotype, but really could make the world a better place, right? You think about how data science can help universities identify when a student needs some sort of service in order to help them get their degree. That's a very practical thing, and something that I haven't seen many universities really embrace yet. It's one of those small organizations that still tends to have a little reticence around adopting data science techniques. But if we can point towards data science outputs as driving impact, not just the accuracy of our models, then I think we'll see that adoption really start to grow within these smaller organizations. That's awesome. Well, thank you so much for joining us, Kir. It's been a great conversation, and I know I've got a lot of tips I think I can articulate better now, even with my own team, after the conversation. So thank you so much for joining us. Thank you, I
really appreciate it. [Music] All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague. Word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now. We'll talk to you again on the next [Music] one |
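The A/B testing framework the guest emphasizes above (deploy the model to a treatment group, keep a control group on the old process, then report whether the observed lift is real) can be sketched as a two-proportion z-test. This is a minimal illustration with invented conversion counts, not anything from the episode; the function name and numbers are hypothetical, and a real deployment would also need randomized assignment and a pre-committed sample size.

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates.

    Group A is the control (old pipeline), group B is the treatment
    (model-driven pipeline); we test whether B's lift is more than noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# e.g. 120/1000 conversions without the model vs 156/1000 with it
p = two_proportion_ztest(120, 1000, 156, 1000)
print(f"p-value: {p:.4f}")  # a small p means the lift is unlikely to be chance
```

Being able to attach a number like this to a deployed model is exactly the "describe its impact" step that separates teams an organization trusts from teams it quietly defunds.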
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | ChatGPT goes prime time! | Daniel and Chris do a deep dive into OpenAI’s ChatGPT, which is the first LLM to enjoy direct mass adoption by folks outside the AI world. They discuss how it works, its effect on the world, ramifications of its adoption, and what we may expect in the future as these types of models continue to evolve.
Leave us a comment (https://changelog.com/practicalai/206/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• ChatGPT (https://chat.openai.com/chat)
• OpenAI Blog: ChatGPT (https://openai.com/blog/chatgpt)
• Illustrating Reinforcement Learning from Human Feedback (RLHF) (https://huggingface.co/blog/rlhf)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-206.md) | 4 | 1 | 0 | It's a three-step process. You pre-train a language model, then you gather this sort of human preference data and train a reward model. Now, the second reward model is trained to take in a prompt and a response and score it like a human would score it, according to preference. It's actually trained on the human preference data, and it outputs a prediction of what a human preference might be on this output. The third and final step is that you fine-tune a copy of your original language model using this trained reward model and a reinforcement learning loop. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at [Music] fly.io. Welcome, welcome to another Fully Connected episode of the Practical AI podcast. These episodes are where Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI news, and then dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing very well. Happy New Year! 2023, this is our first conversation. Yeah, happy new year, this is the first one we're recording in the year 2023. Um, looking already to be an exciting year for AI things. Hope you got a bit of a refreshing break over winter, cuz there's a lot of, I'm guessing
it's going to be a whirlwind of AI stuff this year. I think it is going to be a whirlwind. I didn't get a rest over the break, because, having nothing to do with AI, at our animal nonprofit we had all the winter weather that most people in the US were aware of, and we were doing animal emergencies, so we saved a whole bunch of lives, which made the lack of rest worthwhile. But there was a lack of rest. There was a lack of rest, but we did a lot of good. And interestingly, the conversation we're going to have today will play into that very non-AI side of my life, because we're starting to see some crossovers; we'll see in a few minutes here. Yeah, it's interesting. So today, spoiler alert, we're going to be talking about ChatGPT. You've probably been expecting us to talk about ChatGPT for some time. One of the things we wanted to do is really dig into the internals of ChatGPT, how it works, and its implications, and so we wanted to do it justice, which is partially why we wanted to take some time and prep for it. But it is interesting also to get a little bit of perspective now that ChatGPT has been out for, not that long, but a little while. Over Christmas, you know, I was at Christmas with my family, and even at our family Christmas dinner my dad was asking me about ChatGPT, and at my church I had people come up to me and ask about ChatGPT who don't work in tech or anything like that, and my barber, you know, whoever is in my life. It seems like they're at least aware of ChatGPT; they might not know exactly what it is, but they know that it's a big deal. Are you having a similar experience? Very, very similar to that. And for folks, Daniel and I haven't talked through the holidays, so this is the first time I'm hearing it, just as you are, and I'm having the same experience. And it's been really notable that, we've, you know, each new large language model comes out, and you
know the various GPT series, and we talk about it. This is the one that's crossed over into mainstream awareness and broad use. And I mentioned, as we were getting into the conversation, that it's now crossing over from the technical and AI side of my life into the non-technical and animal side, as we do things like narratives, both written and video, and educational material. This is an amazing tool that completely non-AI-focused people can use productively to really do good in the world and get things done that they want. So it's been really interesting to see how this one has been different from the GPTs before. Yeah, so in your case, it's something that, as you're creating content, you see it as potentially playing a role in whatever scripts or articles or whatever that might be, is that right? Absolutely. It's been quite humbling in that way, experimenting with what was possible, because the quality of the outputs is typically much better than I can do by myself. I've done that both in terms of, I'm writing a children's story to teach children about animals, and I've been experimenting with it, and every time I write something and then I seed it into ChatGPT, it does a better job than me. So it's been very humbling in that way; I think of myself as a decent writer as well. And then the quality of video output has been quite good, and there's a little workflow, but it means that we can do more good in the world faster. It accelerates the ability to put out great content, and so I think that this is one of those inflection points that we've seen not just on a technical level but in the world at large. Well, your usage seems to be much more useful and valuable than my usage, which has mostly been things like writing... I remember I had ChatGPT write a new Christmas carol for me about the three wise men, in the style of a rap song by Eminem. I have to say, it was a great rap song. I didn't record it, because
I'm not Eminem, but I sent it to his people and we're having discussions. Okay. Yeah, I can't believe you're not sharing that with us. Well, maybe before we jump in, I think some of what we wanted to do today was just describe a bit of what ChatGPT is, what the interface looks like, what you can do, but then really do a deep dive on what the guts of the system are. Why is it different from what's come before? In what ways is it similar to things that have come before? Both of those things are true, and so we want to do a deep dive and then think about some of the implications. So buckle up; hopefully this will be fun. First off, it is called ChatGPT, which is interesting. So the interface that they've chosen for this, and the sort of design of the system, is a chat interface. If you go to chat.openai.com you need to create an account, and we can talk about some of the implications around that in a second. When you log in, it gives you some examples of what you can do, some example capabilities, and some limitations. I found this interesting, and we can talk about it later, some of how they describe the limitations and how they released the model. But the basic idea is there's a chat interface: you can type a prompt and it will respond, and then you can actually continue to have a dialogue with the system. So you can say, you know, tell me more about that, or, I don't understand this part, explain that bit more. Some of the examples that they give are, "explain quantum computing in simple terms" as a prompt, or "how do I make an HTTP request in JavaScript?" So there's even, you know, it can output code, it can help you debug code. Like I mentioned, it can provide lyrics or scripts or structured types of things, like the Eminem song. So, yeah, that's the basic input-output. How did you find this sort of interface, Chris, in terms of your own usage, as related to building scripts and other things? It's been interesting,
um, in that it will take things in a direction. As I've been trying out the children's story thing, something I've been playing with, I've been seeing where ChatGPT chooses to take the beginning of a seed of a narrative. Like, I would start off with, "once upon a time there was a precocious raccoon named Pandora", because that's the hero in the story, and it's been interesting to see how it's taken it. But it will also go off in directions I don't want, so then I'll ask questions to kind of steer it a little bit, and it will come back. So it's not final output, but it's producing a body of narrative that's better than I could have done, by far. And so I find myself, instead of being the creator of the story, I'm kind of editing it to make it work. But it's a collaboration, in a sense. This is one of those first points where, we've talked in an aspiring way about collaboration with AI for a long time, but I now am doing that, steering it in different ways by entering things in the chat, seeing where it went, and asking specific questions about the story. It's been quite remarkable. For the first time, it's like having a partner in the process; it just happens that the partner is not human in this case. A friend of mine, Brent Seagull, has been also playing with it a lot, doing some stuff on his own, and that's how he described it as well. He was looking at some different topics, and he said, you know, it's like having a dozen world-class scientists, for the things that he was focusing on, right, available to you. And they're not wrong, you know, they never get it wrong. He had some pretty cool stuff that he was working on, in a very different area, but it's that sense of collaboration with the technology in a real-life sense that's really different now from the way it was before this. Well, as you were chatting through that great context, I asked ChatGPT the following: what state-of-the-art
AI topics does the Practical AI podcast need to cover in 2023, and who should they reach out to for interviews? And the response... so, are you ready for what we'll cover in 2023? Let's hear it. ChatGPT said: "It's difficult to predict exactly what the state of the art in AI will be in 2023..." Definitely an interesting start. Yeah. "...as the field is evolving rapidly. However, some topics that might be worth covering include: machine learning interpretability and explainability", and then it actually gives a blurb about what that is; "AI safety and ethics", so that's right in your wheelhouse, and it gives an explanation of that; "natural language processing", so we can continue to talk about my favorite topic of NLP in 2023; and computer vision was the other one. And it said: "To find guests to interview on these topics, you might consider reaching out to researchers and practitioners working in these areas." Very sensible. "Some suggestions include Rachel Thomas, co-founder of fast.ai; Timnit Gebru, co-lead of the Ethical Artificial Intelligence team at Google", which is interesting that it gave that response, because that is not factually correct anymore, as she is not with Google, and actually that was in the news quite a bit; that was a significant story in the AI world a few months ago. And then it gives a few others, including Yann LeCun, who, of course, we would love to have on the show; we'd love to have Rachel and Timnit as well. But yeah, interesting. So a few things, I guess, strike me in this particular case. The output is definitely natural and coherent, right? So that is thing one; that's striking. Thing two, for me, is there's actually a good bit of structuring that goes on here. They actually give you, you know, one, two, three, four, the topics that we need to cover, and then a bulleted list of the people that we need to have on the show. Yeah. Thing three is, despite it being coherent and natural, it is
not fully correct factually, right? So that's maybe another element of this. You know, it's funny, because we've seen a fair amount of criticism about ChatGPT getting things wrong and such, and I find it curious that, as we talk to humans about human things, we get things wrong constantly, and fact-checking, you know, was that misinformation or was it just unintentional? And yet we hold these technologies to such a perfect standard, one that we ourselves are completely unable to hold up. You know, I wouldn't want to ask one question and assume that it was 100% right, but sure, it makes it a little bit more interesting to me that the collaboration, I dare say, takes on a human element by having error in it. Yeah, and we'll talk a little bit later about the interaction between this and humans, and where the burden lies. I do think that the interface that they've provided, and being explicit about limitations, is a good thing. Now, certain people might kind of go back and forth on this: the model is not open access, right? You can sign up and create an account, and a lot of people have done that, and you can interact with it, but the model weights themselves, you know, are not released publicly in that sense, even if a lot of people can use it for free at the moment. There are pros and cons there, but I think it's interesting that with this model, as opposed to GPT-3 earlier, it was, I think, easier for the general population to interact with the model right away, in comparison with GPT-3, which, you know, had a very long, prolonged kind of wait list, and timing, and all of that, and lots of explanation. So it seems like they've shifted the scales a little bit in terms of making access to run the model more open, while still maintaining it as a closed model and providing limitations. So it's interesting to see also that kind of shift in dynamics, which I think probably was influenced by the fact that actual open access
models like Stable Diffusion and others have taken off so widely, so quickly, because they are more open, access-wise. And so I felt like we saw OpenAI shift a little bit in how they released this, while still maintaining some of the elements of how they released GPT-3 and the others. I agree with that. Yeah, I mean, we've seen that kind of evolution as they've explored release approaches over time, you know, with iterations and such. I think one of the things we've seen across this is that every time a breakthrough comes out, we're starting to have fairly quick follow-up. Once people know that something is possible, they manage to kind of reverse engineer it, so I suspect that, aside from strictly ChatGPT, we will see some fast followers pretty [Music] soon. [Music] All right, Chris, let's get into the technical details of this, which I know I'm excited to chat through, no, I guess, pun intended in that case. Oh boy. There are kind of two elements of this that I think are important to talk about before we talk about what actually was done with ChatGPT specifically, and these two things are more general than ChatGPT. One is sort of the GPT family of language models, and those types of language models, and then also a technology, or approach, called reinforcement learning from human feedback. Those two things kind of combined here to create the ChatGPT system, and these two types of models and approach have been applied more widely, in other cases and by other people, but here they were applied by OpenAI. So, starting to talk about this sort of family of GPT language models: we had GPT, and GPT-2, and GPT-3, and GPT-3.5, and, to be honest, I don't know what number we're on now. But these GPT language models are just that, they're a language model, and they're a specific type of language model called a causal language model. People might be familiar with, or at least have heard, the words causal
language model, CLM, or masked language model, MLM. So a masked language model kind of takes a sentence, and what it's trained to do is, for one word that's masked in the sentence, or taken out, or given a special token, it's trained to predict that word based on everything else in the sentence. So it sort of looks both ways at the sentence and tries to predict the masked word. GPT is not a masked language model; it's a causal language model, which means that it's trained to predict the next word in a sequence of words, or in a sequence of tokens, whatever those tokens might be. It predicts the next word in the sequence, but it does it based on all of the previous words, and it does that sequentially. So as you go through the sentence, the training methodology is what they call autoregressive, which means that it predicts the next thing from all the previous things, and then once it's predicted that next thing, it predicts the next next thing based on all the previous things, and then the next next next thing, etc. And that's the autoregressive part of it. I suppose we're kind of seeing that in action, because when you're using the interface, it doesn't just give you the entire output all at one time. It comes back with text; you see the text developing, much as if you were typing it on the screen yourself. So I guess you're gradually seeing each of those iterations coming back. Yeah, and I think in the original GPT-3 interface, or the playground that we both played with, you kind of see this as well. You give a prompt, and then it generates this text out, and that allows it also to be very flexible, right, and produce these structures, and also allows it to be flexible between different tasks. Like, if you start prompting it with question-answer pairs, it sort of learns that pattern, in a sort of few-shot way, and then starts predicting the next questions and answers, or something like that. Or if you want a script, or if
you want a narrative, or if you want something else, it kind of adapts, in that few-shot learning sort of way, which is a key element of this GPT, or causal language model, structure. And GPT is not the only one; there are other ones, but this is the family in which GPT sits. And you mentioned, just as a two-second sideline, you mentioned few-shot; do you want to, real quick, for those who may not be familiar? Yeah, so there's some jargon, few-shot, zero-shot, that gets thrown around. A zero-shot prediction, or usage of a model, means that maybe you're using a model on inputs, or a type of input, that it's never seen before, even though it's seen maybe similar things. So this happens with, like, machine translation models that are multilingual, maybe, because you might have in your training data, say, English to French and Arabic to Spanish, but you don't have examples of English to Spanish, though you have English and Spanish data in the data set. And so you could still ask that model to try to output an English to Spanish translation, and actually that can kind of work, in certain scenarios. Few-shot means that you're not quite doing it that way, but you're providing a small number of prompts that kind of guide the language model into the type of thing that you're wanting to do. So in the GPT-3 interface, or playground, if you remember, you can kind of start with a question-answer template and provide some examples, and then you can provide the next one and it'll answer it for you. And so you provide that set of templates, or prompts, and this kind of gets into the idea of prompt engineering and that sort of thing, because these models are so flexible. The original paper for GPT-3 was titled something like "Language Models are Few-Shot Learners"; that was one of the big ideas there. That kind of gets us to GPT and language models, but ChatGPT... well, I guess it is a model like that, so it is a GPT-based model, but the reason
why the system is so powerful is because it's a language model that has been trained in a very unique way, one that has proved to be actually quite valuable, and that's that it is a GPT-based model that was trained using reinforcement learning from human feedback, or RLHF. We'll link to this in the show notes, but there is a really great article on the Hugging Face blog from Nathan Lambert, Louis Castricato, Leandro von Werra and Alex Havrilla, called "Illustrating Reinforcement Learning from Human Feedback", and they talk about ChatGPT and other such models. So we're going to pull a lot of our insights from this article; thank you to all of you for writing it, because it was really helpful, much more helpful than the OpenAI blog by itself. The major idea here with reinforcement learning from human feedback is trying to answer the question: can we use human feedback on generated text as a measure of performance that goes beyond just automated measures of performance? So, how do we integrate human feedback into the loop of training a model, as a performance metric? And in that way, we're sort of training a language model, but we're also training it in ways that match human preference for answers. So human preference is a key piece of this, and I think that's why people like ChatGPT: we prefer the things that it outputs, right? I don't know if that was the case for you, but with just a raw language model like GPT-3, you can get some cool stuff output, but it might not fit your preferences of how a human would actually respond to something. You know, going back to the example I mentioned in the beginning, that was the trick for me, you know, like using the children's story as an example: I had a specific rough narrative in mind, because I'm trying to teach, and there are certain points that I'm trying to illustrate, and obviously it doesn't know that, the model,
But the model, if you work with the model, being able to continue to point it the right way, that was very interesting. I am curious, going back to what you were talking about a moment ago with the reinforcement learning with the human feedback: how does that scale? If we were to compare this for a moment, and I know this is very much a newbie question, but for those of us who are not deeply into language models, when we were looking at other types of models two, three, four, five years ago, there was always a challenge about getting human feedback to scale with the amount of training data. How is that tackled in this approach, so that you can do reinforcement learning that way, but it scales to what we're doing with GPT?

There's actually a whole loop of models involved here, and different training sets that are of different scales, and different models that are of different scales, so let me talk through a little bit of that, and hopefully it will become more clear, because obviously human feedback is expensive in terms of gathering it. So how much of this do you need? There are three steps in the process with which ChatGPT was trained, along with other models using this reinforcement learning from human feedback approach; it's a three-step process. First, you pre-train a language model, which is not new; we've been doing that for quite some time. Then you gather this human preference data and train a reward model. This second model, the reward model, is trained to take in a prompt and a response and score it like a human would score it, according to preference. It's trained on the human preference data, and it outputs a prediction of what a human preference might be on this output. The third and final step is that you fine-tune a copy of your original language model using
this trained reward model and a reinforcement learning loop.

Is it kind of the discriminator? Are you using the reward model as the discriminator in that?

In a reinforcement learning loop you would have a policy, which outputs what you should do next, and then you have some type of reward system that rewards the agent for acting according to the policy or not. In this case, the reward model is outputting that reward, or that preference, and the language model is actually acting as the policy. You have an original language model that is your original policy and isn't fine-tuned yet according to human feedback; then you gather some actual human feedback, train a reward model to simulate that human feedback, and then you fine-tune a copy of your original language model, a copy of your policy, with this reward model.

The pre-trained language model could be any language model; it doesn't have to be GPT-3, though in the case of OpenAI it was GPT-3. You have an original language model, and that language model could just be a general pre-trained language model, or you could additionally fine-tune that model, maybe for a domain or a specific type of output you want. That's your pre-trained language model.

In step two, to get the reward model, what you do is start outputting data from your original policy, your original language model, and you have humans rate it. Maybe you combine that with certain human output, or certain other outputs, and you have human ratings. That way you're creating a training set for your reward model which includes human labels of their preference. Then, also in this step two, you train a reward model using that data you've gathered from humans to output the preference. Now, to your point about how this scales: the fine-tuning of the policy is done with
this automated reinforcement learning loop, but you do need humans to generate enough data to train the reward model that's used in that loop. What's interesting, and the Hugging Face blog makes this point, is that different people or different groups that have applied this reinforcement learning from human feedback have used different-sized reward models, and obviously, as the size of your reward model increases, you need more data to train it; that would be a general rule. In the case of OpenAI, their main language model was around 175 billion parameters, and the reward model was much, much smaller, 6 billion parameters. In other cases people have used similarly sized models. So I think that is an open question: how should these models be related to one another size-wise, what types of models should you use for your reward model, and how much human feedback do you need? To be honest, I think those are open research questions.

Let me ask you another question on that. With us getting high-quality output that is very closely comparable to human output, where if you were to see that output you would find it very difficult to know whether it was the model or a human that produced it, does that potentially go back in to train further reward models, where you're using essentially synthetic data, the output of a previously trained model, so you can build on it? Essentially, is there a point where you have enough data that you're largely able to take humans back out? Recognizing it's the tool of the day, but in the future, could you take humans back out of that loop of providing the reward model? Do you anticipate that that would be a reasonable expectation?

I think in this methodology, the reinforcement learning from human feedback, one of the goals in that middle step is to get enough human feedback that you reduce the harm and improve the helpfulness of the output model. So this is really addressing,
I think, some of those problems with large language models, hallucination and harmful general output, and you can address those. What I think is the finding here is that you can address those with humans in the loop, rather than humans totally out of the loop. Now, in the next step that we'll describe in the process, humans are taken back out of the loop to fine-tune the model, but that central piece is key. This three-step process, starting with a language model on one end and ending with a reinforcement-learning-trained model on the other end, has this middle step that I think is a really key piece of it, that actually helps the utility of the output and potentially reduces the harm of the output, which is that human feedback piece.

All right, Chris, we're about to the end of this reinforcement learning from human feedback loop. In summary, the loop is: we have a pre-trained language model, then we gather this human feedback, or rating of the output, to train a reward model. Now we're actually going to use that reward model. In the final step of the process, we make a copy of the original language model, or the policy. So you have an original policy and a copy of the policy, an original language model and a copy of the language model. You put a prompt into each of those models, and then you get an output from each of those models. Then you use a sort of constrained reward function, where you actually penalize the updated model if it is straying too far away from the original model, because I think what they've found is that if you allow it to take just any direction in the output, it can have some computational optimization problems. So you gradually change this language model from the original, and you have a penalty for how far that output strays from the original output, and then you score that output with this reward model that you've created. And the way that
they're doing the updates for ChatGPT and some of these others is with a reinforcement learning algorithm called proximal policy optimization, where you have two levels of what in physics I would think of as adiabatic change, meaning things don't change too quickly. One is that you don't stray from the original policy output too much, or you're penalized for that; and secondly, this reinforcement learning algorithm called PPO prevents you from making too big of an update to your model weights in each step. That way you don't have, again, a hard optimization to do. But in summary, you have these two models, the original one and the updated one; you get output from both of them for a prompt; those go into your reward function, which includes a penalty element for straying too far from the original output, and also includes the actual estimated reward, or estimated preference, from your reward model. Then that reward is used to update the weights of your copy model, your new policy, using this PPO reinforcement learning algorithm.

There are some diagrams in the post that I think are quite helpful. It's a bit hard on a podcast, but hopefully that loop makes some sense in terms of how you're updating this. And this updated policy, this updated language model, is the model that is used; this is the ChatGPT model that comes out at the end.

I think, given the limitations of our medium here, that was a very lucid explanation, so I appreciate that. I definitely learned some things. I know you can ask ChatGPT to output all of the right formulas, and I'm sure it would do a fine job. Where do you think we're going from here? As you have looked at the progression of these models that we've covered on the show over time, with ChatGPT in particular, I've been kind of amazed at what it could do, and at using it, but I'm really curious about where this is going.
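In code, the loop just described might be condensed to a toy sketch like the one below. This is an assumption-laden illustration, not the actual ChatGPT training code: real systems use neural policies and token-level log-probabilities, while here everything is collapsed to scalars and the function names are invented for the example.

```python
# Toy sketch of the RLHF fine-tuning loop described above.

def kl_penalized_reward(rm_score, logp_tuned, logp_ref, beta=0.1):
    # reward-model score, minus a penalty for drifting from the frozen
    # original model (per-sample KL estimate: log p_tuned - log p_ref)
    return rm_score - beta * (logp_tuned - logp_ref)

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    # PPO's clipped surrogate: caps how far one update can move the policy
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# drifting far from the reference model shrinks the effective reward...
r_close = kl_penalized_reward(rm_score=1.0, logp_tuned=-2.0, logp_ref=-2.1)
r_far = kl_penalized_reward(rm_score=1.0, logp_tuned=-0.5, logp_ref=-4.0)
print(r_close, r_far)

# ...and PPO treats any probability ratio outside [0.8, 1.2] as if it
# were at the clip boundary, limiting the size of each weight update
print(ppo_clipped_objective(3.0, advantage=1.0))
```

The two mechanisms correspond to the two "adiabatic" constraints above: the KL term keeps the fine-tuned model near the original policy's outputs, and the clip keeps each individual PPO step small.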
I think it's capturing a lot of people's imagination in that way, people that are outside the field. What's next?

I think there are still open research questions here that are worth exploring, and then there are workflow and practical implications. On the first side, as was mentioned already when we were discussing this reward model, as far as I can tell it's not totally determined what the architecture of this reward model should look like, how big it should be in relation to the model that you're fine-tuning, how much human feedback you should use, and how the amount of human feedback you get influences the harmfulness or the utility of the output, that sort of thing. So I think there's a lot to explore around that dynamic between the reward model and the language model.

In addition, language models are still being developed, right? ChatGPT used the GPT-3.5 language model as this original policy, and they actually used a fine-tuned version of that, fine-tuned using supervised methods and human chat conversations. So they started with a fine-tuned version of GPT-3.5. Obviously we're going to have a GPT-4, a GPT-5; we're going to have other language models from other providers, from other research groups, you know, BigScience or Google or Microsoft or whoever is developing these other language models, and we're going to have updated versions of those. So I think we can see a research direction with this where people are trying different pre-trained models as their original policy, where people are trying different reward models, where they're mixing them up in interesting ways, where they're maybe using slightly modified versions of the PPO algorithm, or other reinforcement learning algorithms, to do the updates. There's a research direction where I think we'll just see a lot of exploration, with this kind of template as the structure that they're exploring.

The second piece, which is maybe more
interesting to some of our audience, is: what are the implications of this in terms of people's workflow?

I was about to ask you that, if you hadn't gone there.

So yeah, I don't know, what are your initial thoughts there, Chris?

It's less about the technical aspects of the model, and more about going back to the user interface considerations that we talked about earlier in the conversation. I would be amazed if the community at large, not just OpenAI, hasn't understood the impact of making choices like that. It may not be specific to the model development, but to how you're putting it out there, and they're seeing widespread adoption. When you go into their interface, you get a warning right off the bat: "We're experiencing exceptionally high demand. Please hang in tight as we scale our systems." I think that's indicative of the fact that people who are not normally listeners of this podcast are starting to find a lot of utility for the first time ever. It'll be interesting; we keep talking about exponential growth in this field, and these amazing mini revolutions along the way, but this is that first point where it's probably going orders of magnitude broader in terms of applicability to different workflows and audiences.

And as we're looking at this, just for a moment going back: you're combining natural language, with the large language models, with generative capabilities, with reinforcement learning, and we saw slices of each of these fields developing over the last few years, and we've been talking about this fusion of the fields. So how soon before we start seeing entertainment that is being heavily based on these technologies? I'm seeing it in my little tiny nonprofit, because we can suddenly leverage this to put out content to help folks in a charitable fashion, and we can do at least ten times as much as we would
have been able to before, by taking advantage of these. So I think we're at that inflection point now. This will be the first, and as we have continuing episodes through the course of this year and new things come out, whether from OpenAI or similar things from other organizations, I think we're getting to that point where it's really hitting broadly in real life. I'm really fascinated, and I would love to hear from our listeners about the ways they're using this technology, what they think might come next, and how they envision using it within their own organizational missions to accomplish what they want.

It's a fascinating moment in the history of AI that we're in right this second. One thing, which I can't claim as my own insight, that I stole from Twitter, but which has really shifted my thinking a little bit on this subject: this is a tweet from Chris Albon, who is the director of machine learning at Wikipedia. The statement he made, which I think was really insightful, and maybe other people are having similar observations, was, quote: "Sci-fi got it wrong. We assumed AI would be super logical and humans would provide creativity, but in reality it's the opposite. Generative AI is good at getting an approximately correct output, but if you need precision and accuracy, you need a human," end quote. I think the observation here, and we've talked about this on the show with language models, is that language models are really good at naturalness, creativity, apparent coherence; that actually is what they're good at. But they get the facts and the precision and the accuracy wrong many times. Whereas in the past people have thought the unique thing humans can provide in an AI-driven system is creativity, not logic and that sort of thing, actually the opposite is really the case: the AI bits are really driving the
creativity, and the humans are enforcing the logic, the facts, the accuracy, and the precision. That has really shifted my thinking. I think I've been realizing it over time, but that statement really put words to what I was thinking.

It's comforting in a way, and the reason I say that is that we've talked in times past about creativity coming from the humans rather than the machines, and yet the evidence we've been looking at over these last couple of years has been not that. So I have actually been wondering what role there is for the humans in that equation. The fact that it's flip-flopped, that it's the inverse of what our expectation was, still means there's room for a human in the picture, and that's a little bit of a comforting moment. It may not be what we thought it would be, but there's still a place. And I think that's probably a good high note to leave people with.

On the note of things being useful to humans, and humans getting involved, we did want to leave you with a few learning resources to explore things related to ChatGPT. Of course, play around with ChatGPT; you can go on the website and interact with it, and we'll provide the link. But I would also highly recommend that you look at this Hugging Face blog about reinforcement learning from human feedback. There are a bunch of links in there as well, to other things that you can spin off and look at, like the PPO algorithm. There's also another good reference: I always love looking back at Jay Alammar's descriptions of how certain language models work. He has one on GPT-3, and actually a number on GPT, from different perspectives. And then there's an interesting article on the GPT-3 architecture "on a napkin" from the blog dugas.ch; I found it quite interesting how they describe some of the things there. I like that one as well.

Yeah, so go ahead and check those out. Those are
great learning resources. They're all free; you can take a look at them and learn in more detail some of the things that we only had 45 minutes to talk about here on the podcast.

On our social media channels, I'm encouraging our listeners to share with us some of the ways they're using the technology. I'm really waiting to hear that, and the more unique, the better.

Yeah, sounds great. Let us know what you're creating with ChatGPT. It's been a fun one, Chris; good to chat with you.

Absolutely, thank you very much for the incredibly lucid explanation. It certainly helps me understand and appreciate it. As always, Daniel, talk to you next week. Bye-bye.

All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one.
NLP research by & for local communities

While at EMNLP 2022, Daniel got a chance to sit down with an amazing group of researchers creating NLP technology that actually works for their local language communities. Just Zwennicker (Universiteit van Amsterdam) discusses his work on a machine translation system for Sranan Tongo, a creole language that is spoken in Suriname. Andiswa Bukula (SADiLaR), Rooweither Mabuya (SADiLaR), and Bonaventure Dossou (Lanfrica, Mila) discuss their work with Masakhane to strengthen and spur NLP research in African languages, for Africans, by Africans.
The group emphasized the need for more linguistically diverse NLP systems that work in scenarios of data scarcity, non-Latin scripts, rich morphology, etc. You don’t want to miss this one!
Leave us a comment (https://changelog.com/practicalai/205/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Just Zwennicker – LinkedIn (https://www.linkedin.com/in/just-zwennicker-1929171)
• Andiswa Bukula – Twitter (https://twitter.com/andiebukula)
• Rooweither Mabuya – Twitter (https://twitter.com/RoowyM)
• Bonaventure Dossou – Twitter (https://twitter.com/bonadossou) , GitHub (https://github.com/bonaventuredossou) , LinkedIn (https://www.linkedin.com/in/bonaventuredossou) , Website (https://bonaventuredossou.github.io)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
EMNLP 2022 papers from the guests:
• Towards a general purpose machine translation system for Sranantongo (https://arxiv.org/abs/2212.06383)
• MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition (https://arxiv.org/abs/2210.12391)
• AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages (https://arxiv.org/abs/2211.03263)
Other links relevant to the discussion:
• Masakhane (https://www.masakhane.io/)
• Lanfrica (https://lanfrica.com/)
• The South African Centre for Digital Language Resources (SADiLaR) (https://sadilar.org/index.php/en/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-205.md)

I can relate to Just. I thought that I knew the language, but then it feels like, okay, I actually don't know anything. There's that interesting curve, the Dunning-Kruger curve, which basically shows that when you think you know something you have high confidence, and that's actually when you don't know anything; but then when you start learning and educating yourself, through that process of acknowledging and learning more things, the curve goes up again.

Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly, for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io.

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm at the EMNLP conference in Abu Dhabi, and I've got some old friends and some new friends here with me: a very exciting community show that we have for you today. Why don't I just have everyone give a brief introduction of who they are and where they're coming from?

Hi everyone, my name is Andiswa, from South Africa. I work for the South African Centre for Digital Language Resources as an isiXhosa researcher, isiXhosa being one of the 11 official languages spoken in South Africa.

Hi everyone, I am Rooweither Mabuya, better known as Roo. I am also from South Africa. I work with Andiswa at the South African Centre for Digital Language Resources. I work as an
isiZulu researcher; it's also part of the 11 official languages of the country. Thank you.

Hi all, my name is Just Zwennicker. I've been working as a data engineer for over 20 years; recently I wanted to switch towards data science, so I did a master's in that direction. For my thesis I created a translation system for Sranan Tongo, which is an English-based creole language from Suriname, where my father is from. I'm of mixed origin: my mother is from the Netherlands, my father is from Suriname, and I was born and raised in the Netherlands.

Hi, my name is Bonaventure. I'm originally from Benin, but I'm also known as a citizen of the world, traveling around conferences. I am soon to be a PhD student at Mila, the Quebec AI Institute, and I work on low-resource languages, African low-resource languages, focusing on the Fon language but extending to other languages as well. I'm also interested in machine learning for healthcare, like drug discovery, therapy, medical imaging, all those types of things.

Great, yeah, thank you all. Thank you all for taking time to do this; this is so great. One of the really encouraging things about being in this room with you all is that so many talks here at EMNLP are great talks, but they'll say things like "massively multilingual," or "this works for all languages," or something like that, and that's definitely one perspective that's not totally accurate. So maybe, if we start with Roo and Andi: from your perspective, what are you passionate about, what is the area you're working in? And maybe just highlight some of those things that are another perspective, of either linguistics or NLP, that are important for people to understand in terms of the world's languages.

My passion is working on making sure that isiZulu is also a language of teaching and learning, because currently in South Africa English is the
predominant language that is used in media, in governance, and also in higher education. What we're doing now is ensuring that our languages are also at a level where they are on par with English, in terms of developing them in tools, in human language technologies and machine learning, etc. We also want to have that privilege, if I can put it like that, in ensuring that our languages are also more accessible, even online. I've been working on looking at specific literature materials, but you find that I can't find those online; I literally have to scan the book if I want to do an analysis of a particular morphological structure that I'm working on currently. But then you find that all the Shakespearean books, for example, are readily available online. So that's my passion: ensuring that even African scholars can do research in their own languages, with writers or authors that are like them, that write in the languages we're interested in looking at, so that we also have those readily available, and they can be used for research throughout, for posterity.

I think for me, and also for the institution that we are working for, it's that we're trying to build a bridge across the gap between NLP practitioners and linguists. It's always a matter of linguists doing their projects on their own side, and NLP practitioners doing their own projects on their side, but I feel like we should work together, because for you to understand the language that you're working on, you should collaborate with the linguists, and for us to understand how these language technologies work, we need to work with the NLP practitioners. Now, in South Africa NLP is not a big field; that's why our institutions are able to send us to such conferences, so that we know what other people are doing, so that when we go back to our institutions and go back to
actually assisting the universities in the country, we can impart that knowledge to them. Currently, really, how we're doing research is still a bit traditional, and time is moving, technology is advancing; we don't want to be left behind. I think our greatest passion is to make sure that our languages... because we keep saying "under-resourced, under-resourced," until what? We've been saying that our languages have been under-resourced for many years, and now we have the opportunity to actually be sitting in a room like this, with people who are actually doing the things that we've always heard of but never knew were possible for our languages. So we're basically here to say: collaborate with us, assist us to get our languages to a level where they are languages of teaching and learning, and where our data is also easily accessible. The other issue we have is accessibility of data: we don't have a lot of data in our languages, because we don't digitize the material that we have. But at least now we know what the possibilities are.

That's great. To follow on from one of the things that you mentioned: you mentioned this idea of linguists and NLP people collaborating together, and you mentioned the problem of the availability of data, the scarcity of data. Are there any misconceptions, or things that you would like to highlight? A lot of times what we see are these NLP models getting bigger and bigger and requiring more and more data to train, but the reality is, for a lot of languages of the world, like you said, we have just a small amount of data, and it's not growing quickly, and those models are getting further and further away from being applicable in those scenarios. Is there anything you'd like to highlight or point out there?

Yeah, I think that's the issue,
that I don't really know who builds the models, because I wanted to refer specifically to the people who build the models. When they're building them, it's like they don't have an idea of the structures of the languages that they're building the models for. We also have in our repertoire certain models that we're using, but as soon as we train them with data from our languages, the accuracy levels are always very poor, because it's like the systems, the technologies, do not understand that, for instance, my language is agglutinating. So when they're being built, hence the collaboration: also take into consideration the languages that you're building the systems for, because I feel like the systems are always built for languages with a specific structure. That's always an issue, because now we do have access to the models, but there's so little that we can do with them, because they don't understand the structures of our languages. Hence, again, I will re-emphasize the collaboration between NLP practitioners and linguists.

Yeah, just to add on, a case, an example, is when looking at Google Translate. It has improved quite a lot now, but previously you'd find that you want to translate something from my language into English, or vice versa, and the results were poor. I think now, because there's data that is available and they're doing something to ensure that there's an improvement there, you actually can see the positive results. But still, you find that in some other tools, even now, the accuracy levels are still very low, and as she said, it's the fact that when the tools are being built or created, it's like they don't have the language structure in mind. I feel like it's a matter of "okay, we've built this, it should work for every other language," which is not the case. Languages each have a unique structure; we have a unique morphology. Even though they're
spoken, maybe, like in the case of South Africa, where we have nine indigenous languages, the structure is not the same. The people that actually build tools or create tools need to have that in mind, and need to also work collaboratively with linguists, people that are trained in these languages, to ensure that the structure is also represented when the tools are created.

Yeah, that's such an amazing point; I'm glad you brought that up. I know even in our work we've encountered tools that are very popular, things like word segmenters or subword packages, where maybe it doesn't work quite right for an Arabic-script language, or a right-to-left language, or whatever it is, just because that was never envisioned from the beginning.

On that same theme of the types of languages that people are building language technology for, I wanted to ask you about creole languages. Maybe people listening to this podcast aren't that familiar with creole languages, or don't understand how they're used around the world, or what they are. Could you describe that a little bit, and then also a little bit about your language, the language you've been working with?

Yeah, sure. A creole language is basically a language that emerges in places where people of different cultural backgrounds come together. In the case of Suriname, that was during slavery time, basically the 17th century: people were brought from Africa to Suriname, and people from Europe came there as well, and, you know, they did all kinds of horrible stuff, but they needed a means to communicate amongst each other. Basically, it's a language that has characteristics from the different languages of the people that participated in those communities, like English, Portuguese, Dutch, and some African languages. So it's basically a melting pot
of languages, with, I don't know if I can put it that way, but usually it's grammatically a bit simpler, and it's an easier language to learn, let's put it that way. Yeah, and one of the things you were mentioning to me in our discussions even previous to this is that, for some of those reasons that you've mentioned, maybe Creole languages weren't, and aren't, always treated at the same status as other languages. Could you speak to that a little bit? Yeah, definitely, that's the case. When I started this project, I found that it was really low-resource. I was even amazed to find out that I couldn't find a single book or novel written in Sranan Tongo, and that's because there's a lot of stigmatization going on with that language. So for instance, in school... Suriname is a former colony of the Netherlands. They've been independent since 1975, but the first language, the official language, still to this day is Dutch; Sranan Tongo is the second language. So there are basically no sources available in Sranan Tongo, and for a long time it was forbidden to speak the language in schools, for instance. Also, parents often discouraged their children from speaking that language. That stigmatization caused the low availability that we are dealing with now. Yeah, and describe a little bit your vision, I guess, for this project that you're working on, and how it fits into some of the needs of the language community that you're aware of just from being part of that community, and how that shaped your view of the project and what you're actually working on. Yeah, so I have two sons. My youngest son asked me last year why I hadn't taught him how to speak Sranan Tongo, and that actually gave me the idea to build a translating system. There are a lot of people living in the Netherlands
with Surinamese roots. Like I just explained, in 1975 they became independent, so around that time lots of people migrated from Suriname to the Netherlands. The first generation is mostly fluent in Sranan Tongo. I myself am second generation; before starting this project, I thought I was pretty okay in Sranan Tongo, maybe 80-90%. After studying the language better, I would now say maybe 50-60%. My son, who is third generation, really wants to connect with this culture, so when we visit Suriname, for instance, and we're meeting with family members, he wants to know what they're saying. I see this need within the Netherlands: people from the second and third generation want to connect with their culture, but they don't speak the language. So I hope that this translation system will support them in reconnecting with their culture. Awesome, yeah. So I want to maybe ask Bonaventure a couple of questions. I feel like I've been trying to get you on the podcast for a while now, so I'm glad you're actually here. One of the things that we were chatting about just before this is that people view Masakhane and some of the things that are going on there with a lot of respect, because of the momentum it has and how much has been done, and I know you're part of some of the things that are being presented here with Masakhane. We've actually mentioned Masakhane on the podcast before, but for those that aren't familiar, could you describe what it is? Okay, I hope I will do justice to everyone. I would describe Masakhane as a grassroots NLP movement that wants to build NLP and language technologies for African languages, by Africans. As everyone said here earlier, we need more people speaking those languages to be more involved in the building of those
language technologies. Because someone, for instance, who works in a big company might be like, "Okay, I found this data on, let's say, OSCAR, or FLORES, or any dataset that is said to be high quality and multilingual", and they just train a model on it and assume that everything is supposed to be working. But those language models, like XLM-R and those types of things, have been created for high-resource languages, like English and Chinese. Even in those initial papers, you can see that the downstream tasks they evaluated those models on were in languages like French, English, Chinese, those types of things. So there's that assumption that, okay, we train those massive pre-trained language models and we can just do some transfer learning to low-resource settings, which is not always true. That is one of the ideas in the paper that I presented here, talking about active learning for language modeling. So as everyone has been saying, this is important, and that's the gap. Like Andy said, and Ru said, and Just said, we need to reduce that gap between languages and people who are NLP practitioners. We need domain expertise, domain knowledge. Of course, as an NLP researcher I can have those, but linguists also have to come in, to be able to say, okay, is whatever this model is predicting rubbish, or does it make sense? And I think, as in the most recent ongoing discussions on Twitter, models actually don't understand language; they understand data distributions, they understand words. But then, as I'm emphasizing, we need that expert knowledge to be able to make sense of whatever those models produce, to be able to say, okay, this is actually something useful. And yeah, that's what Masakhane is trying to do: build a
community of, I don't want to say just NLP researchers, a community of, let me say, Africans who are working on African language technologies. It includes linguists, it includes people who have a theoretical NLP or mathematical background like me, like you; you're also part of the community. We also have Sebastian; we have people who actually are not Africans but who have an interest in accompanying that effort. That's what the community is all about: putting forward and representing, bringing more people like us to this type of big NLP conference, to the world, or onto the map, increasing representation and making sure our languages are preserved through the technologies, because somehow everything we're doing now is based on that. Yeah, that's great, and I can speak to that. As you mentioned, I've had the great privilege to interact with a lot of people from Masakhane, in a few small areas, but even in those small things, I would just encourage researchers out there that have an interest in what's being discussed here to engage with those communities, engage with local language communities and speakers of those languages as you're building these technologies. I've benefited so much by getting to interact with Masakhane, the things that I learn, and getting to have these sorts of discussions, to have better awareness and understand how I can join and partner with people in the building of language technology. So, as we've gone around and talked about various things, I'm wondering if you all could maybe share... there are probably a lot of people listening to this podcast in English, but maybe they have a mother tongue and English is their second language, and they're thinking, hey, I wonder what I
could do in my mother tongue, in my first language. I know it's not well supported in language technology. Any encouragements from any of you? If you're a language speaker and you're wanting to get into this somehow, and partner together, collaborate with others, to help build a higher level of digital language support for your language, what would you say? Yeah, if I can add to that: you usually need a lot of data to train a good translation system, or any NLP application, and although there is not much data available in Sranan Tongo, in my case, the thing that you can usually find is Bible translations. In my case, I used the JW300 corpus, from Jehovah's Witnesses texts, which has a translation of 300,000 parallel sentences from Dutch into Sranan Tongo, and even some smaller languages spoken in Suriname, like Saramaccan and Aukan, which are closer to some African languages. So that's a starting point. Of course, the idea would probably be that you create a general-purpose translation system; that was also my plan, and still is my plan. So on top of that data, you would need some more data from a general domain, and from different domains next to the religious domain. It was funny, actually, to see that, Daniel: among the Sranan Tongo sources I found online, I found a diary in Sranan Tongo, and it contains a lot of words, and with those words some example sentences. So before even knowing Daniel, I found the website of his company and was able to scrape 3,000 sentences from there. Yeah, hopefully next time you don't have to scrape it; shoot me an email. Yeah, indeed, good to know you now. On top of that, what I did is basically scan some smaller sources, OCR them, and then manually align sentences. So it's a lot of manual work, but I think Bonaventure has the same experience of doing a lot of manual work for his translation
system. So now that I finally was able to get my first model up and running, I also built a translation system around it, a web app. I'm now in the pilot phase, where some Sranan Tongo speakers are starting to use the system and evaluate the results. Basically, by just using it, they enter a sentence in Dutch, they get the Sranan Tongo translation back, and they rate it. If they don't think it's good, I ask them to enter a better translation, they submit it, and I collect that data in my database. So I hope in this way to collect more data from a more modern use of the language, instead of the religious one, and collect enough data to eventually build a system that is more potent. Yeah, that's great. I think you highlight something that's definitely good for people to realize. Our colleague Colin Leong from the University of Dayton told me about this: his parents, who are speakers of a local language in East Asia, he asked them, hey, give me all of the data you have for your language and I'll try to build something. He showed me the folder, and it included MP3 files, and Word documents, and images, and PDFs, and all these things. So I think that's something important for people to realize: not everything has a nicely curated dataset on Hugging Face, and even being involved in some of that work to get that data put together is a hugely beneficial thing. So yeah, anyone else, things you would want to highlight for people out there wanting to start some of this? Before leaving the floor to the ladies, yeah, I would like to second what Just said. I also had the same struggle with Fon, because when I started, nobody was working on Fon, nobody knew about the language, and that was also something interesting: deciding to go into a direction where
nobody's looking, and unveiling it. Not to show off, but a lot of people nowadays just quote me as "the Fon guy"; when someone is talking about Fon, or there's an update, they just tag me on Twitter or whatever. And I envision it to be the same for Sranan Tongo. The moral of the story is that you need to get started, because there's always going to be a point where there's no data, and someone has to make some little effort. For instance, we have JW300, but what if those people hadn't done anything? We would not even have a starting point. So I started with JW300, and then I tried to manually scrape data with my friends, through Google Forms, and created something like 25,000 sentences. Out of that, I've been able to bring some proof of concept, and it grew, and people now know more about the language. I built FFR Translate with Chris, and people are using it; they're sending feedback, they're happy, it helps them. There's more awareness; people are willing to contribute more, creating more content. It's not yet at the level of something like Google Translate, or a centralized translation system for those African low-resource languages, or low-resource languages in general, but I hope something is going to be coming. So I'll just say this: honestly, like my name says, I like adventures, and I like good adventures, so I just like to go where nobody is focusing; you bring something that people haven't been focusing on. I don't think I would have had the same, maybe, impact if I had, for instance, started with Ewe, because for that project, the first FFR project, which then went on to all those types of things, we were debating whether we should use Fon or Ewe. Finally, Chris and I decided to go for Fon,
because Ewe had at least some effort done already, but nobody had heard about Fon; nothing was out there on Fon. Today there are a lot of papers; people are citing the work. It's been cited in the paper that led to the extension of Google Translate to 24 more African languages; it's been cited in No Language Left Behind from Meta. And being part of Masakhane, you collaborate with people like Sebastian, with Julia, with Angela Fan, who worked on NLLB. So just get started, and people will know about it, and then I will just keep supporting. If you don't have support, just be your own supporter; at some point, when people see the effort, they will definitely join, and it will take off from there. Yeah, Ru or Andy, anything to add? I also share the same sentiments. In the case of isiZulu, you find that people should just start, even though it's difficult, because data is available but people are not coming forth; they're not wanting to share their data. You find that, okay, you collect or do whatever with it, and then you just keep it to yourself, and that hinders the progress of the language, or the development of the language itself. I think it would be a great idea if people got the understanding that when you are allowing your data to be accessible, or making it an open resource, it's not a matter of "I want to steal your idea"; I can do something different than what you did with the data. It's also about ensuring that more researchers have access to it, so that they can use it for whatever they want to use it for. The only issue that I have is with collecting data in a general sense, because you find that newspaper articles are very easily accessible, but in the case of isiZulu, for novels, you need to actually do OCR and do the scans by hand, which takes a lot of time. So if we can
find something that would work, that would be much quicker, we'd be grateful for it, so that at least we can get to a level where we have enough data to train models and tools with. What I would add is that language preservation is very important. Let's find ways in which we can preserve our languages, in the sense that they do not go extinct. For instance, what we're doing now in South Africa: most of our languages have dialects, and because the dialects are not standard languages, they're not documented. So one of the projects that people can do, if they're in similar situations, is to collect speech data of those dialects, so that it can be accessible somewhere. Then, if in the next ten years a dialect is not spoken anymore, there is data available for people to hear that, okay, there was actually a language like that spoken in a certain area. And yeah, more than anything, it's just preserving the language, creating more data, because for this Masakhane project we were able to access our data via online newspapers. So people should also digitize the work that they have, because it is true that it's so difficult to find material, since it's not digitized or published; some books are out of publication now. For that not to happen, we must digitize our books, our literature, our texts, and everything else, so that it can be easily accessible and we can actually run such projects. Yeah, great. Well, I really appreciate all of you getting a chance to join us here. I know it's a busy conference and there are a lot of great things to look at, but I'm so happy that we get to bring this conversation to a wider audience. And maybe, if anyone wants to leave with a greeting in their language, please go ahead. So before finishing, I would do a little bit of promotion regarding language discoverability and all those types of things. Chris
and I have been working on Lanfrica, which is, let's say, an innovation, an idea of putting out there those African language resources that are not discoverable on the internet. You can access research papers, you can access datasets, you can access tools like keyboards, all those types of things, anything: dictionaries, even sometimes YouTube videos. People are doing great things, like educating people in, for instance, AFA or those types of languages, languages I'd never heard about before, and they are making great efforts, but it's on YouTube and nobody knows. On Lanfrica, they can easily access those resources. And if you have work mainly on African languages, or on low-resource languages in general, we have the Lanfrica Talks, where people from all around the world (we've got people from CMU, we've got people from Google Research, we've got people from UCL, students, researchers, anybody who has been working on and is passionate about low-resource languages and NLP technologies) come and give a talk, for people to know more about what they've been doing. So it's pretty simple: you just go on www lanfrica, like l-a-n-f-r-i-c-a,
and then you can find all the information about the languages, and also how to book a meeting for the Lanfrica Talks. That being said, something basic in my language would be [inaudible], which means "thank you so much, Daniel", for this invitation. And that's something I can relate to, what Just said: I thought that I knew the language, but then it feels like, okay, I actually don't know anything. There's that interesting curve, it's called the Dunning-Kruger curve or something like that, which shows that when you think you know something, you have high confidence, and that's actually when you don't know anything; but then when you start learning and adapting, that's the process of acknowledging and learning more things, and then the curve goes up again at some point. So yeah, I'll just stick to that, and I will let Just end, and Ru finish in the languages. Thank you so much. I'd also just like to promote the organization we're coming from, the South African Centre for Digital Language Resources: any researchers that are working in any of the South African languages can check us out at sadilar.org.za. A parting word that I can say in my language is [inaudible], which means thank you so much for listening to me. Now, I don't know what I'm going to say, but I'm going to keep it very short: [inaudible], which means thank you very much. Yeah, before finishing, I must express that I'm actually a bit jealous of the Masakhane community... free of charge, okay. Okay, that's good to know. Thank you very much for the invitation. Also, I wanted to add that maybe, if there are other people listening who are into Creole languages, give me a shout: my name is Just, juster doener at gmail.com. So yeah, I'm happy to join you guys, but I'm not sure if it fits
in as being an African language. But as we say, there's Daniel, for instance, there's Sebastian, there's Graham, people who are not necessarily from the continent or speakers of those languages, but who just share the same vision. I'm sure that many people will be interested in, say, writing research papers, using language models, all those types of things, for Sranan Tongo. As they say, alone you can go faster, but together with a bigger community you can definitely go further, and that's how ideas come, you know. So I'm definitely going to share the link with you (I have your LinkedIn) to join the Slack, and we'll put it in our show notes as well, all the links to everything we've talked about. Okay, so I'm definitely going to share it with you, and you are free to just contact me. I just want to warn you: it's a big place. People find it messy; we usually work in this chaotic environment. But, I mean, community is messy? Well, I won't say... I mean, that means it's a real community. Yeah, so don't wait for people to say okay, this, this, this; take ownership, take initiative on whatever you want to work on, and people will easily follow you. I'm pretty sure it's going to be beneficial. I would, with pleasure, work on the project with you. I speak French, a little bit of German, Russian, English, Fon, a little bit, so it wouldn't hurt me to learn a fifth or sixth language at least. I learned a lot working with people on, I mean, working with Ewe people and those types of things. So don't let the "African" part scare you; just join, it's about low-resource languages in general. And I know, for instance, in Nigeria they speak Nigerian Pidgin, and I had some ideas of doing some pre-training and transfer learning from that language,
because I think there are some similarities going on there, so that already brings the connection to Masakhane closer, I guess. So I would like to finish with just one word: "gran tangi", which means thank you. Well, thank you all, this has been so much fun. I appreciate it, and we'll share links in our show notes for everything that we've talked about and all of the great things that you've shared. So thank you all. Thank you. Thank you. [Music] All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next [Music] one
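The data-collection workflow described in this episode (JW300 verses, scraped sentences, OCR'd books, manual alignment, crowdsourced Google Forms contributions) ultimately boils down to cleaning and de-duplicating candidate sentence pairs before training. The sketch below is a hypothetical illustration of that cleanup step, not the guests' actual pipeline; the length-ratio threshold and the sample sentences are assumptions for demonstration only.

```python
def clean_pair(src: str, tgt: str):
    """Normalize a candidate sentence pair; drop it if it looks misaligned."""
    src, tgt = " ".join(src.split()), " ".join(tgt.split())  # collapse whitespace
    if not src or not tgt:
        return None
    # OCR'd and hand-aligned pairs are noisy: a crude length-ratio filter
    # removes pairs where one side is far longer than the other.
    ratio = len(src) / len(tgt)
    if ratio < 0.4 or ratio > 2.5:
        return None
    return src, tgt

def build_corpus(pairs):
    """Clean and de-duplicate parallel pairs gathered from mixed sources
    (e.g. Bible verses plus scraped or crowdsourced sentences)."""
    seen, corpus = set(), []
    for src, tgt in pairs:
        cleaned = clean_pair(src, tgt)
        if cleaned and cleaned not in seen:
            seen.add(cleaned)
            corpus.append(cleaned)
    return corpus
```

In practice one would add source-specific parsers (verse IDs for Bible text, page segmentation for OCR output) in front of this, but a dedup-and-filter pass like the above is a common first step before feeding pairs to an MT trainer.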
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | SOTA machine translation at Unbabel | José and Ricardo joined Daniel at EMNLP 2022 to discuss state-of-the-art machine translation, the WMT shared tasks, and quality estimation. Among other things, they talk about Unbabel’s innovations in quality estimation including COMET, a neural framework for training multilingual machine translation (MT) evaluation models.
Leave us a comment (https://changelog.com/practicalai/204/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Ricardo Rei – Twitter (https://twitter.com/RicardoRei7)
• José Souza – Twitter (https://twitter.com/accezz)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Unbabel (https://unbabel.com/)
• COMET (https://unbabel.com/research/comet/)
• The WMT workshop/ conference (https://www.statmt.org/wmt22/)
• EMNLP (https://2022.emnlp.org/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-204.md) | 20 | 1 | 0 | We have started a project on this; it's to combine these systems, these quality estimation systems, with the machine translation itself. That is something that we started working on, but I believe that you can work on this for the next few years, and there are a lot of things that we can improve there. Yeah, that gets me really excited; I think it's a direction that's going to be really nice. This is the quality-aware decoding project, which is basically what I just mentioned, what we have been talking about, of having these quality predictions about the hypothesis translations. The idea behind this project that Ricardo is talking about is: what if we bring the quality estimation, or COMET, already inside the MT process? Then we can make the machine translation aware, or more aware, of its quality, having a signal from a different model. So this is what this project is about. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen (check them out at fastly.com), and to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined this week by Ricardo Rei and José Souza from Unbabel, here at EMNLP 2022 in Abu Dhabi. How are you doing, guys? Hi, we are fine. Hi, good. Yeah, how's EMNLP for you? So far we have been mostly attending the WMT workshop. Yeah, and what does WMT stand for? Right, WMT stands for Workshop on Statistical Machine Translation, well, Workshop on Machine Translation now, but this is a historical acronym, because it's actually a conference now. I would say that it's the main conference of machine translation, and it has been happening for several years. It's always collocated with EMNLP, so it's nice, because one of the biggest NLP conferences comes together with the biggest MT conference. Yeah, it's mostly attended by researchers, not so much by people in the localization industry, but it's interesting to know what's happening in terms of research, the latest approaches, and methodologies for evaluation as well. Yeah, and is that the industry that Unbabel is in? Could you give people a little bit of an understanding of what Unbabel is? Sure. Unbabel is a translation company; we provide translations trying to unite the best of both worlds, which is using machine translation and professional translators. Why the best of both worlds? Because if you only rely on translators themselves, it's very difficult to scale the translation process to different volumes of content; that's why you use machine translation to speed up the process, and then use the translators to correct if necessary. I think the biggest difference between Unbabel and other companies is that we were the pioneers in using something called quality estimation to actually decide whether we should post-edit the translations or not. And I guess we are also big on evaluation technology, and I think Ricardo can talk about COMET. Yeah, as José explained about combining humans and MT: if you have a mechanism that tells you that your machine translation output is perfect, then you don't need a human. But for you to do this, you clearly need a very reliable quality estimation system, a system that receives that translation and is able to give you an accurate score for that translation, and that's
why Unbabel has been focusing for so many years specifically on quality estimation, and also on evaluation. Evaluation is a little bit more general; it can also include things like metrics, where you compare the translation output with a reference translation that you believe to be perfect, and it's what people typically use when training models and things like that. For the past few years, we have been developing a metric that is being widely adopted by the research community, and also the industry, which is called COMET. COMET has been very successful in the last two years, and yeah, it was developed by us. We also developed a quality estimation framework that gained a lot of traction three years ago, in 2019, called OpenKiwi, which is basically similar in terms of the model approach and everything, but it does not rely on a reference, so it's what we use internally for performing quality estimation. Yeah, I think this sums it up a little bit. Just one thing: all of this is only possible because, over the years, Unbabel established quality controls for the translations, and this started by using a framework called MQM, which stands for Multidimensional Quality Metrics. It's basically a typology, and then guidelines on how to use this typology, covering the different phenomena that happen when a translation is made. That goes from accuracy, whether the translations are adequate, whether they're fluent, and then there's a whole taxonomy around that. So this kind of evaluation enabled us to accumulate data about the quality of translations over time, which we can then use to train quality estimation or metric evaluation models. Yeah, so this seems different. I think some listeners, from their experience with modeling in other domains or with other data, are probably familiar with a confidence score or a probability, so this goes
way beyond that, right? So just to clarify: this is not just a confidence score coming out of your translation model; this is actually a metric that you're running on the output of your model, is that right? Exactly, yeah. Exactly. And explain COMET a little bit, because it has gained so much traction. What is different about COMET? Another popular metric for machine translation, I know, is called BLEU; so what distinguishes COMET from that, or from other metrics that are out there? So, like you were saying, BLEU is a very well-known metric, but BLEU is a lexical metric. This means that BLEU will take the MT output and compare it with a reference that was created by a human, and usually, in the typical setup, we only compare that MT output with a single reference. As we know, there are multiple ways to translate a specific sentence, so a lot of times BLEU will give a very low score to a very good translation because of that. Sometimes it also gives a very high score to a very bad translation, because of another aspect of BLEU, which is that it gives the same weight to all words. If you have a named entity that is not correctly translated, it's going to be one word away from being perfect, and BLEU will give a very high score; if you miss a punctuation mark, the score penalty will be exactly the same, although the errors are completely different in terms of severity. Just to explain BLEU a little bit more: the way it looks at both the translation hypothesis and the references is by looking at each word and trying to understand if there is an overlap of each word with the reference, and it does that for single words, or for combinations of two, three, and four words usually, which you call n-grams. And then it has a
brevity penalty, which basically penalizes the translation if it is too short. So that's basically the rationale, and there is a class of metrics like that, which I think we call lexical metrics. So TER, which is translation error rate, is similar to that; chrF is similar to that, but chrF goes down to the character level. So this is a class of things that is very different from COMET. COMET takes advantage of the representations coming from large language models like XLM-RoBERTa (we have been using XLM-RoBERTa), and basically those representations allow you to compare words in an embedding space. So for two words that might not be exactly the same but have the exact same meaning, COMET will use those representations to output a score. Now, the other thing that we add on top is that we train those representations to be more suitable for the specific task of machine translation evaluation. And I'm saying this because this is a very important difference from other metrics that have also been proposed, like BERTScore, where, because you don't have any fine-tuning on top, if you use BERTScore and you say "I love you" or "I hate you", because "love" and "hate" will have similar embeddings, the score will be very high, when in fact they are complete opposites. So we start from a pre-trained model, but then, by training the model with some supervision from human labels on errors, the model learns that "I love you" and "I hate you" are, for this specific task, complete opposites. And I think that kind of splits COMET apart from all the metrics that were being proposed before, which either fall into the lexical category or into the embedding category. Yeah, that's great. And you also mentioned, just in passing, that there was another category of quality estimation that didn't require a reference. Could you talk about that a little bit? Yeah, so the idea is
very similar to the idea of COMET. The difference is that when you have access to a reference, which is the case with COMET, when you create the embeddings for the MT output they will be well aligned with the embeddings from the reference, because they are in the exact same language. In quality estimation you are comparing the MT output directly to the source, so the embeddings will not align perfectly. Still, what happens is that during training, using human supervision, the model learns what is correct and what is incorrect by only comparing the MT output directly with the source. So quality estimation serves a different kind of application than metrics like BLEU, chrF and COMET. It's usually: I want to know the quality of specific sentences or translations, given their source sentences. With COMET, or the other metrics, you're usually more interested in understanding the difference between models or MT systems, so you're evaluating at some sort of test-set or evaluation-set level, so that you can decide whether to go with MT model A, B or C. Quality estimation is basically for taking decisions on the fly, in real time, where I cannot wait for someone to make a reference or a post-edition, and I need to decide: okay, can I trust this translation? If I can't, should I throw it out? Is it so bad that they should do it from scratch, or can I still give it to someone who can repurpose and rephrase it? So they are slightly different in their applications, but this is something where you can talk about trends: they are starting to, like I think Ricardo was teasing, intersect a bit. I would say that the metrics field, the evaluation on the metrics side, was stuck with BLEU for a long time. Quality estimation, on the other hand, I feel that there was more research and more innovation in that field.
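The weakness of lexical metrics described above can be seen in a tiny sketch of BLEU-style scoring. This is illustrative only: real BLEU implementations (such as sacreBLEU) are corpus-level and add smoothing, and the sentences here are made up.

```python
from collections import Counter
import math

def bleu(hypothesis, reference, max_n=4):
    """Toy sentence-level BLEU: clipped n-gram precision times a brevity
    penalty. Shows why a good paraphrase can score terribly."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        if overlap == 0:
            return 0.0  # one empty n-gram overlap zeroes the whole score
        log_precisions.append(math.log(overlap / sum(hyp_ngrams.values())))
    # brevity penalty: punish hypotheses shorter than the reference
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))       # 1.0
print(bleu("the feline rested on the rug", "the cat sat on the mat"))  # 0.0
```

The exact match scores 1.0, while the perfectly adequate paraphrase scores 0.0 because no trigram overlaps with the single reference, exactly the failure mode discussed in the conversation; an embedding-based metric like COMET would score the paraphrase much closer to the exact match.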
Actually, that was our motivation when we built COMET: we tried to replicate the state of the art of what was being done in quality estimation, and we tried to bring it to the metrics field. Now the modeling approaches are very similar, but it was viewed as two completely different tasks for years. So, just to give a bit of context on what Ricardo said about the progress in quality estimation: I did my PhD working on this kind of problem, and I finished in 2015, so I was working from 2012 until 2015 on problems around this, and the approaches back then were basically feature-based, like classical machine learning. With deep learning and access to embeddings, and now large pre-trained models, this shifted very fast to this kind of approach, and the performance of these models, of these approaches, is also much better than when I first worked on this. The quality of these quality estimation models nowadays is such that they are very useful; you can actually do a lot of things with them, like I was saying. And yeah, I just wanted to add that because I was not working in the field, specifically on this problem, for, I don't know, three years, I guess, and when I came back to it, it was: wow, now it's really up to everything, you know. Could you explain a little bit... so you mentioned how in COMET or in these other models you might be comparing the embeddings of words, but words don't always map one to one between languages, and sometimes, I don't know if you're looking at sentences or other things, but could you describe some of, I guess, the main challenges looking forward that aren't solved yet in terms of next steps with quality estimation, and things that you're looking at now that you see as open problems? Yeah, you actually touched on a very nice point. It's not that the
words don't align very well, but sometimes what we see is that the embeddings themselves for certain specific words are not discriminative enough, and we have seen this. For instance, take the sentence "this item costs 50 cents" and translate it to Portuguese... I'm not going to say it in Portuguese, but pretend that I'm speaking Portuguese: the perfect translation would also say 50 cents, but for some reason the MT might have hallucinated and said that it's 500 cents. So it's basically changing the price of an item, and this is a critical error in many scenarios, but if you look at the embedding of the 500 and the embedding of the 50, they are going to be very similar, and it's going to be very hard for the neural network trying to differentiate these two things. It's going to be a very hard task, because there is not enough signal. You also see the same thing with some named entities, and currently there has been some work, some progress, in trying to look at quality estimation and metrics and figure out why they are not working for these kinds of very specific phenomena. Actually, yesterday we had a lot of presentations about challenge sets that try to test metrics on these specific phenomena. So in WMT we have several competitions, several of what we call shared tasks, and inside the metrics shared task, where people are competing to create better metrics, there was also a subtask we call the challenge set subtask, where people submit examples that are challenging for metrics, and then the participants in the metrics task have to score those examples, and then we give the scores back to the developers of the challenge sets for them to analyze. A lot of people looked into this and tried to make some suggestions for future work on how to improve metrics for this. So if you're interested in this, take a look at the findings from the metrics task, because there are interesting
findings and pointers for future work in this area. One of the problems of these model-based MT evaluation approaches is that, first, they are based on the data that the pre-trained models were trained on, so everything is in there: there's bias, and there's a limited amount of data (or it can be a lot of data as well), but all the idiosyncrasies of that data are encoded in the pre-trained models. Then, when you fine-tune this for the specific tasks they need to work on, namely quality estimation and MT evaluation, they are also limited in data, in the sense that we have orders of magnitude less labeled data for this fine-tuning process. So this can have its biases too. Taking the example of "apple": say for some reason you never saw "Apple" the company, you only saw it as the fruit, so every time you see "apple" you translate it as the fruit. And if the model translates it as the fruit, the evaluation model is going to say, ah, it's fine, because in the evaluation data that you used to train the model you never saw the brand, for some reason. And this is related to the named entity problem that Ricardo was mentioning. So I think we are taking the first step as a community to understand that now and, you know, really poke at it and see, okay, there's a hole here, and the next step is how to alleviate that problem. I don't think it's possible to completely solve it, but we will for sure try to alleviate this for these models. And there are a lot of complaints also... not complaints, but, you know, even us, when we are using different models, not only ours, we see that these models fall short sometimes, and this can be very bad in a commercial setting, or even in sensitive scenarios: if you get two cents, and the model translated this to, I don't know, 2 million,
that's not very nice, right? You might have some legal implications with that. So yeah, I don't know... other open problems. I think for me one big problem, and this is also a trend that we see in the metrics and quality estimation tasks, is that bigger models have better predictive power. So what people usually do is just throw more GPUs at it and train a bigger model, and this seems to be giving improvements as well, but the problem is that not every practitioner can actually use these models once they are trained, because they need bigger and bigger GPUs, which are costlier even at inference time. So we actually had a paper at EAMT, the European Association for Machine Translation conference, about making COMET smaller, and there's a diminutive in the name of the paper: the name of the model is COMETINHO, which is a diminutive of COMET, a very Portuguese way to say it. And it was also a first step in that direction, but I think there's a lot to be done for all the other models, and also for COMET. Yeah, definitely. I think COMETINHO was just a first step in that direction. There are a lot of things that can be improved in the distillation of these models, even the evaluation models, like we did for COMET. And not just for evaluation... we have been focusing this podcast a little bit on evaluation, but in machine translation you have the same problem: bigger models have been achieving impressive machine translation quality, but it's very hard for everyone to develop those models, and it's even harder for people to deploy them. We face this at Unbabel: we develop our own machine translation systems, and we have seen this trend. We get improvements if we keep scaling our MT systems, but then we have difficulties serving those MT systems, and we also know that not every company has the capacity to build such big models like the big tech
companies develop. So yeah, it's not just on the evaluation side but also on the machine translation side; it's something that people should look into: how to make these things smaller and easier to deploy without losing performance. Yeah, on the model side specifically, Jose, you mentioned models getting bigger and bigger. Some people might have seen a nice GIF of an encoder-decoder, with one language coming in and another language coming out, and Transformer models, but what are some things others are exploring, or maybe yourselves? You mentioned distillation and all these other things to make models smaller, but are there different architectures or techniques being explored? I think I saw one of your papers, something about kNN-MT or something; I don't know if you can speak to that. Yeah, just at this moment there is a poster on the usage of kNN-MT for the chat shared task. This is something that I think is broadly called dynamic adaptation, and one approach to that is kNN-MT: rather than fully fine-tuning one base model, like one of these large pre-trained models, you just do a data retrieval approach, in which you combine the contents of a datastore that has relevant data for the use case you're trying to serve with machine translation. Then at decoding time, when you are assembling the translation using the translation probabilities of the model, you interpolate these probabilities with the probabilities of words or expressions contained in the datastore. This way you avoid having to fully fine-tune a model for each use case that you have, and this is something that we started to research and approach at Unbabel. But I must say that this doesn't solve the problem of the base model being big; you just avoid fine-tuning it. So there's still the problem of,
okay, how do I shrink or compress this model so that it can be reliably and cheaply exploited for translation? And this is, like you said, distillation, quantization and other compression techniques. Just to complement what was said about the k-nearest-neighbor approach: another very big advantage of this is that it's very easy to combine with translation memories, which we know are widely used in the translation industry, and this is a seamless way to basically take the MT and make it work with those translation memories, because you can build this datastore that will help the model translate the content accordingly. So just to add that, which I believe is very important for the localization industry in general. Great, yeah. Well, we've talked a lot about challenges, I guess, which is fun to talk about at a research conference, for sure. What are some things, just generally, about the machine translation industry, or Unbabel, or other things, that make both of you excited and optimistic about the future? What are some of those things that excite you? It doesn't have to be in MT... things you've seen at this conference, or things that you're following, that give you some encouragement and excitement about the future of the space where we're working. Actually, I'm very passionate about evaluation in general. I think that shows up in my work, because I mostly work on evaluation. I've been getting very excited about the progress that we have been making in evaluation. We have started a project to combine these quality estimation systems with the machine translation itself. That is something we just started working on, but I believe we can work on this for the next few years, and there are a lot of things that we can improve there. Yeah, that gets me really excited. I think it's a
direction that is going to be really nice. Yeah, this is the quality-aware decoding project, which is basically what I just mentioned, about having these quality predictions about the hypothesis translations. The idea behind this project that Ricardo is talking about is: what if we bring the quality estimation, or COMET, inside the MT process itself? Then we can make the machine translation aware, or more aware, of its own quality, having a signal from a different model. So that's what this project is about. We have a paper at NAACL this year describing that, so yeah, this is pretty exciting. And I think in terms of broader challenges, what I find interesting is that I don't believe translation is solved. A few years ago some people claimed that there was human parity between MT systems, or some MT models, and human translators, but then it turned out that the actual translators that were used were not really professional translators. Like, I know English, right? But I'm not a native speaker, and I cannot translate everything; I'm not a subject matter expert on different topics, so if you give me some chemistry content to translate from Portuguese into English, I cannot do it, right? So I think what's exciting is to see that the technology is allowing us to translate better and better, maybe compared to me as a non-native speaker translating some content, but there are still a lot of challenges in translating very specific content very well, content that requires very specific terminology and a very specific way of building the sentences. What is much better nowadays is actually the fluency that these machine translation models are giving, but what remains a challenge is that sometimes the translations look very good but they are not on point. They are not adequate; they are
talking about something slightly different, or completely different. So I think this is exciting. I mean, not everything is solved, but at the same time it is encouraging, right? It's encouraging in that sense. Great. Well, as we close out here, where can people find out more about Unbabel, and specifically maybe some of this research that's going on? And also, you mentioned beforehand that Unbabel was possibly hiring as well. Where could people find out about that? Right, so we have our website, unbabel.com, and we have our Twitter handle, @unbabel; you can follow our news from there. We just put up a research blog on which we are going to be writing about our research. This is going to be, possibly, in the links in your info box, I don't know. Yeah, we'll put it in the show notes for sure. And yeah, we are also hiring soon. We are starting to accept applications for next year for research scientists, at different levels and in different geographies. We didn't talk about it, but Unbabel was born in Portugal, in Lisbon, and now we have offices all around the world: on the west coast in the US, on the east coast, London, you know, and some other places in Europe. And we are also going to post a contact email for people who are interested in the research that we're doing and other work. We have open positions not only for research scientists but also for engineers, and other positions that are not technical. Well, yeah, thank you Jose, thank you Ricardo. I really appreciate you taking the time; I know there are a lot of good posters around to see and all that, so thanks for taking the time. Thanks Daniel, thank [Music] you. All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a
colleague. Word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one. [Music] |
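The kNN-MT interpolation described in the conversation above, combining the base model's token probabilities with probabilities derived from a domain datastore at each decoding step, can be sketched roughly as follows. This is a minimal illustration: the nearest-neighbour retrieval over decoder hidden states is elided, and the words and numbers are made up.

```python
from collections import Counter

def knn_interpolate(model_probs, neighbor_tokens, lam=0.5):
    """One decoding step of kNN-MT-style interpolation:
    p(w) = (1 - lam) * p_model(w) + lam * p_kNN(w),
    where p_kNN is estimated from the target tokens of the retrieved
    datastore neighbours. `neighbor_tokens` stands in for the result
    of the (elided) nearest-neighbour search over decoder states."""
    counts = Counter(neighbor_tokens)
    total = sum(counts.values())
    vocab = set(model_probs) | set(counts)
    return {w: (1 - lam) * model_probs.get(w, 0.0)
               + lam * counts.get(w, 0) / total
            for w in vocab}

# The base model prefers the generic word, but a datastore built from
# in-domain translation memories (as discussed above) prefers the
# domain term, so the interpolated distribution flips the choice
# without any fine-tuning of the base model.
base = {"screen": 0.6, "display": 0.4}
combined = knn_interpolate(base, ["display", "display", "screen"])
```

Here `combined["display"]` ends up larger than `combined["screen"]`, which is the point of the technique: the datastore steers decoding toward in-domain vocabulary while the big base model stays frozen.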
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI competitions & cloud resources | In this special episode, we interview some of the sponsors and teams from a recent case competition organized by Purdue University, Microsoft, INFORMS, and SIL International. 170+ teams from across the US and Canada participated in the competition, which challenged students to create AI-driven systems to caption images in three languages (Thai, Kyrgyz, and Hausa).
Leave us a comment (https://changelog.com/practicalai/203/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Matthew Lanham – Twitter (https://twitter.com/MatthewALanham) , Website (http://matthewalanham.com)
• Mark Tabladillo – Twitter (https://twitter.com/MarkTabNet) , LinkedIn (https://www.linkedin.com/in/marktab)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Purdue University’s Krannert School of Business (https://krannert.purdue.edu/masters/business-analytics-and-information-management/)
• Master the basics of Azure: AI Fundamentals (https://learn.microsoft.com/en-us/users/sandramarin/collections/zopanqdn7w1p1)
• Azure Architecture Center (https://learn.microsoft.com/en-us/azure/architecture/)
• SIL International (https://www.sil.org/)
• The bloom-captioning dataset (https://huggingface.co/datasets/sil-ai/bloom-captioning)
Books
• “Applied Machine Learning and AI for Engineers” by Jeff Prosise (https://www.amazon.com/dp/B0BM35KY4C)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-203.md) | 3 | 0 | 0 | Hi everyone, this is Daniel, coming to you with a slightly different episode of Practical AI this week. Recently Purdue, Microsoft, INFORMS and a few others put on a case competition which included student teams from across the nation, around 170-something teams, all working on a shared task related to image captioning. This is a task where an image is input to a model, and the job of the model is to output a text caption corresponding to that image. I had the privilege of getting to be one of the judges for this competition, and I took the opportunity to interview some of the sponsors and the participants in the challenge. It was also really fun because the competition used some of our data, the data from SIL International in the bloom-captioning dataset, which includes image captioning data for a lot of languages, but specifically this competition focused on image captioning in Thai, Hausa and Kyrgyz. So I hope you enjoy the discussion of this challenge, and here we [Music] go. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io. [Music] Welcome to a very special episode of Practical AI. This is Daniel Whitenack; I'm a data scientist with SIL International, and this is a very special episode because I'm here at Purdue University judging a really interesting case competition, Data Analytics for Good, that's sponsored by Purdue University, Microsoft, SIL International (my organization) and INFORMS. I'm here with Matthew Lanham, who is the academic director of the MS Business Analytics and Information Management program, or BAIM program. We've had the privilege of getting to know each other over the past few years, and it's really cool to collaborate and judge this competition. Matthew, could you tell us a little bit about it and how it came about? Sure, yeah. As the academic director for the business analytics program, my job is basically trying to make sure our students are involved in analytics and data science competitions, and over the last seven years, since we've had this program, our students have won or placed in many of these national competitions. So we've got a really well-established brand and name out there, and we thought, hey, why don't we create our own national data analytics competition, and let's do something that's for good, not necessarily just focused on trying to make money. Yeah, so tell us a little bit about the actual problem that the students are working on, and maybe a little bit about the mix of who was involved in the competition from across the nation. Sure. So the actual problem was sponsored by your company, SIL International, and basically what they're trying to do is use natural language processing to do
image captioning, which is not a trivial task by any means. So when we put this problem out there to the students, they were like, oh my gosh, what is this? And the great thing is, it's not something that you would see in a traditional NLP course, this kind of problem, and there's been a lot of great learning involved. Overall we had 172 teams across the nation register for the competition, and that was 36 universities represented, two of those outside the United States. Wow, that's... yeah, yeah, it's really great. Yeah, the competition, this image captioning, it's been cool to see, because recently SIL put out this dataset around image captioning, and it was convenient timing, because about a week or so later you reached out and said, hey, we're running this cool case competition, do you have any cool datasets to work on? And that worked out really well. I think I've heard students all learning a lot about natural language processing, but also about the world's languages. When you try to do image captioning in Thai, for example, there are no spaces, and you can't tokenize words with just spaces, so even realizing things like that has been quite interesting to see for students. And I know we're sort of halfway through the day of judging at this point that we're recording, and I've already been surprised and encouraged by a lot of the solutions. One of the other sponsors of the competition is INFORMS, which I'd love people to know a little bit more about, what INFORMS is generally, because it is a vibrant and large community. Could you tell us a little bit about what that is? Absolutely. So INFORMS stands for the Institute for Operations Research and the Management Sciences. It is my favorite professional organization. It's been around for many years, and, you know, we used to call it operations research and management science, but now we
refer to a lot of the stuff that we've done for years as analytics and data science. Within INFORMS there's also a certification program called the INFORMS Certified Analytics Professional, or CAP. So I'm a CAP; I was one of the first CAPs, many years ago. Basically, the whole idea with the CAP is that you don't have to be a technical person to get it, and you don't have to be just the business person; it's really for everybody. The whole idea, how it came about, was that the people at INFORMS worked with business professionals from all different areas of analytics, data science and operations research to identify the key kinds of tasks that you would do as a professional, and they came up with basically seven domains: business problem framing is the first one, then analytical problem framing, knowing your data, methodology or approach selection, model building, deployment, and life cycle management. And if the audience hears those kinds of things, they're probably thinking, hmm, that sounds a lot like CRISP-DM, which we've all heard about in school or at some point: following a process. And that was the thing: a lot of times when we're working on these problems, you've got to follow a process, and they kind of extended that CRISP-DM framework, and there are just a lot of tasks within there that we hope people are aware of and think about when they try to develop solutions in practice. Yeah, and we've been utilizing that framework in the judging here at the competition, which has been really useful, I would say, for considering these different elements of the process. And maybe, how would you say, after working with teams in this process for years now, thinking about these different elements, how do you think that rounds out someone's view of actually solving a business problem with AI or
analytics or data science, in ways that are maybe sometimes neglected? When you step into a problem, what are the main areas that you think stretch students, and then, as they go into the professional workspace, how does that set them up for solving real-world problems? Great question, and this is why I just love the INFORMS CAP, and why we make our students follow the INFORMS CAP when they do projects with companies. You'll see a team that maybe is the data science team, the real technical team, and they love to get into the nitty-gritty details, and there's absolutely nothing wrong with that, but at the same time you need to know your audience. And I think following those seven domains of the INFORMS CAP is important, because before you even get into the nitty-gritty details of your problem, you need to be able to say: what is the business problem here, and then how do we frame it, what are the possibilities of framing it as an analytics problem? So that's the front end of the thing, right? And then you get to the middle part, a lot of the stuff that we would talk about as data scientists: the data, the methodology, the model building, all the stuff that we really like to do. But then the last part of it, the deployment and life cycle management, that's so key, right? That's when you get into architecting and developing the pipelines, all the stuff that I know you're an expert in; it's just so key. So basically that's what the INFORMS CAP is doing: saying, hey, let's lay all this out to make sure that when we architect and design a solution, when we try to create a solution to our problem, we've thought about all these things and we haven't missed anything along the way. Yeah, well, thank you so much again for helping organize this, Matthew. It's been a pleasure, and
I really just appreciate your work on this, and also your work with INFORMS. Thank you, Dan. I want to make sure I understand exactly how the funding happened. Was it just about Azure and the socks? I mean, what is this thing from Microsoft? Oh, yeah, so from Microsoft... that's actually a really important point. One of the interesting things about this competition that's different from all the other competitions my students have participated in is that we designed it with three phases. Phase two is where they actually work on the problem provided by Dan's company, but in phase one we wanted to provide some training on cloud services, because a lot of people in industry know you've got to be familiar with these services if you're going to architect a solution and put it into practice. Microsoft offers free training on their Azure AI; basically everything's free if you're a student, and they also offer students free practice exams and certification vouchers, which was just amazing. So we told them, we said, you know, we could get a whole bunch of students to participate in your training events if we can kind of piggyback off of you, and they said, absolutely, we love this. So that's what the students did in phase one: they had some training from Microsoft professionals, and some of them even sat for certifications. Then in phase two, the goal was to try to apply some of those web services to this particular problem. The last phase, phase three, is when the top teams that performed best in the Kaggle competition would come on campus, present their solution, and show how they could follow the seven INFORMS CAP domains to architect their solution. That's how it all came about. Awesome. Yeah, and that definitely brings us right into the Microsoft involvement with this competition. We're also pleased to have with us Mark Tabladillo, who is a cloud architect with
Microsoft. Yeah, thanks for being here and being part of the competition, and Microsoft's involvement in this. It's been interesting, as we've seen some presentations already, to hear how students are making that realization: hey, I've been working on my laptop solving data science toy problems in my courses, but then I got to this problem. One of the student groups even said, hey, I bought RAM for my laptop to try to solve the problem, but then they're like, that's not doing it, so they started thinking about cloud services. So my question, Mark, is this: as students, and maybe people getting into the field, are making this realization about the resources required to solve actual business problems, what are the ways they can start dipping their toes into the cloud and experimenting with things, to expand their horizons in terms of what's possible? Maybe people don't know about Docker and Kubernetes and all that stuff yet, but they want to start tapping into more resources. What's a good way for people to get into that, as you've seen these students do? Sure. I think there are more resources available than ever for self-learning. Maybe 10 to 15 years ago, it was very common to go to a bookstore and find these big thick books with an animal on the front; they would be training books for technologies, and of course the publishers are still out there, O'Reilly is still out there producing books. I have a friend, by the way, who's coming out this month with his book on practical machine learning and AI, Jeff Prosise, and I'm so proud of him for writing a new book. But the point is that so many things are online now, and in the Microsoft ecosystem there was a time when you had to pay to even get the
proceedings from a conference; you didn't even go, and you couldn't even get the recordings. Well, now Microsoft is making a lot of that available for free, and in a way that people can find it. So I think the audience of this podcast would love to look at YouTube and what's available there; Microsoft has a few channels of content, and that's a good way to get started. Sometimes they're short sessions between 5 and 10 minutes, sometimes they go to an hour, but that's definitely one way to get started. Yeah, and could you help us... I think it's good for people to organize certain categories in their minds. I've heard students talk about the Microsoft Azure Studio environment, and then there are other things like Cognitive Services and managed AI services. How do these things differ, and how might they be used, or how have you seen them being used, either in this competition or other places? Okay, so the unifying thing is... and you can pronounce it "Azure" or "Azure", both correct. Both correct, good. It's good to have the definitive answer on that. This is the definitive answer for all time. I tend to use "Azure", just out of habit. The unifying factor is Azure Active Directory; that's the main authentication path into an Azure subscription. And the subscription itself, think about it like a credit card: if you sign up for a subscription, you put your credit card on there to pay the bills. Again, another free path: Microsoft offers a lot of things. We have free subscriptions, and we even send our customers to go get them. I even tell my customers, you know, it does run out, so go make a new email at outlook.com and then just make a new one. We want you to get hands-on, because there's no substitute for experience, and that's even
kind of the point of this competition. You can study it in a book, you can do class exercises, but two things are true: first, putting it into practice, and second, working as a team, and that's what we're doing in this competition. So, back to your earlier question: now that a team may all join the same subscription, Azure Machine Learning Studio is our flagship technology for machine learning, and it can run on regular CPU or GPU instances. We have regions available around the world. I happen to work in the federal space, so we also have specific clouds just for that use; there are different sovereign clouds now, and Microsoft is beginning to build out specialty clouds on top of our regular clouds. Also, a call-out for students: we have a lot of promotionals for students, where they get things at a discount rate, and also for nonprofits; nonprofits get special treatment inside the Azure ecosystem. But let me go back to the original question. There are two focuses that people will have. One is machine learning, and it tends to be thought of now as building a model; that is the way to think about the products. AI is the marketing term for all our technologies now, so if you go to the main website, everything is AI. However, inside the Microsoft technology there are Cognitive Services, and those are considered mostly APIs; they're REST APIs, already pre-trained models that do certain things. We've seen some of the teams here at the contest using maybe computer vision, or text-to-image, or image-to-text; those types of technologies are already out there, and Microsoft is not unique, other vendors have these. And Microsoft is now, behind the scenes, supporting open-source technologies; some come pre-built inside Machine Learning Studio, like a certain version of a Python kernel that will run inside there. And other technologies. Okay, so going
back to Azure Machine Learning Studio: we have doubled down on MLflow as the way we are organizing our workspace, so on the new version of the API, that is the path forward, and it is open source. And could you just describe a little bit what that is, MLflow? MLflow, so it is a way to organize experiments and training and models and model deployment, and it has its own syntax in terms of vocabulary and API, you know, the way to work. Microsoft is not alone in using MLflow; there are other vendors that use MLflow in their technology. But it is a way to organize the technology and the assets, and Microsoft decided to make that native to our own API, and that's how it works. Great, yeah. And I think one of the things, as we've been in the presentations here today... when I was in grad school, I programmed, but I'd use MATLAB or Python or whatever, and I did some things; I had no concept of how infrastructure worked in industry. The whole thing about doing programming in academia, it's just not always parallel to industry. So, coming from the Microsoft perspective, I found it really encouraging to see students saying things like "object store" or "model registry", and thinking through the architecture. I know that's one of the things INFORMS emphasizes: that model deployment and model lifecycle management. So, do you have any words of encouragement for listeners who are getting into this space, or maybe students, in terms of getting hands-on with the actual infrastructure that people use in industry, and how that benefits your understanding of how to create value with what you're producing, rather than just creating a cool model? Right, so let me call out a few things. Now, some people aren't so
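Mark's description of MLflow, "a way to organize experiments and training and models", can be sketched with a toy tracker. This is not the real `mlflow` package, just the shape of the idea it implements: each run under an experiment records parameters and metrics, so the best run can be found later. The experiment name, learning rates, and BLEU scores below are made up for illustration.

```python
# Toy sketch of the experiment-tracking idea behind MLflow.
# Not the real mlflow API; the real package is documented at mlflow.org.

class Run:
    """One training run: its parameters and metrics."""
    def __init__(self, experiment):
        self.experiment = experiment
        self.params = {}
        self.metrics = {}

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        # Real MLflow keeps a step-by-step history; we keep the latest value.
        self.metrics[key] = value

class Tracker:
    """Collects runs and answers 'which run did best?'."""
    def __init__(self):
        self.runs = []

    def start_run(self, experiment):
        run = Run(experiment)
        self.runs.append(run)
        return run

    def best_run(self, metric):
        return max(self.runs, key=lambda r: r.metrics.get(metric, float("-inf")))

tracker = Tracker()
for lr in (0.1, 0.01, 0.001):
    run = tracker.start_run("image-captioning")
    run.log_param("learning_rate", lr)
    # Stand-in for a real training loop; the score here is fabricated.
    run.log_metric("bleu", 0.30 + (0.1 - lr))

best = tracker.best_run("bleu")
print(best.params["learning_rate"])  # 0.001
```

The payoff of organizing work this way, whether with this toy or the real MLflow, is that model selection becomes a query over logged runs instead of a hunt through notebooks.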
lucky as to even be admitted to Purdue, and if you were, I would certainly want you to come to a program such as this one, where you can participate in all these cool events. But short of having that undergraduate or graduate experience, Microsoft has, and this is what I believe is the front door, something called AI Business School. It is a series of courses that show how to tie in the value of AI in a business context, and a lot of the videos were done by our own leadership; they've shown how we've used AI inside the Microsoft business. Now, it's not intended to be a catalog of all possible ideas, but it does cover the landscape, along the lines of the INFORMS domains: here's a challenge, here's how we're going to use data modeling and then put it in production, and here's how we're evaluating use. It just gets people started, so it's something I do recommend to my customers, because people do have different roles. Even internally, we're now rethinking who the personas are of the people who touch data projects. We talked about the domains, right? But we're now thinking about, all right, who is that person, what does that person do, are all modelers the same? We don't think so, so we're beginning to think that through, because we're now serving a large internal community inside Microsoft in terms of our programming. So that's the first thing I think about, the AI Business School. Then, also in terms of getting started, inside all our technologies we have tutorials and samples: little sample datasets, program notebooks that run quickly. They don't take hours, but they show you a variety of things, guided toward specific outcomes, and they keep getting better. I mean, I've seen Microsoft examples and how they have grown in the
last 15 years and they're just getting, better and better because they're, getting better minds thinking about it, but I'll also call out you know, Microsoft also is always looking for, partners that want to share their, stories and uh we have a lot of case, studies of companies doing things in, different Industries whether it's, for-profit or nonprofit um one I'll call, out you know one example we work with, the Metropolitan Museum of Art to, digitize their entire Holdings now I, don't know if they did 100% but just, they have the same challenge as a lot of, art owners and that is not all their, collection is on display and some, researchers want to have access to those, products so that's an example of you, know anytime we do work with major, organizations we put those ideas out, there but more practically and this may, even be helpful for students or users we, have What's called the Azure, architecture Center and inside uh we see, many architectures very similar by the, way we're only looking at the top, presentations but we are see, architectures uh presented it's the type, of thing that I do in my own work and uh, the architectures will be in there the, diagrams and also the case study of what, it did and you know kind of what the use, case is so it gives people again a, catalog of different ideas of how do you, use the different resources that are, available so between all that a lot yeah, yeah thank you so much Mark I will, definitely link both to informs um what, they're doing to produce and the Bame, program and these resources from, Microsoft in our show notes for the the, podcast so make sure and check all those, things out thank you again um Mark and, and Matthew for what you're doing on, this and um looking forward to to, hearing the rest of the presentations, thank you Dan, thanks all right well I'm here with the, winning undergrad team from the, competition the image cap captioning, competition which is from Butler, University I've got Chris Stein 
Andrea Mary, and Aaron Pinner with us. So, congratulations on winning the undergraduate portion of the competition. Yeah, thank you very much. Yeah, so your solution was really interesting. Actually, all the undergraduate presentations surprised me; they seemed like graduate student work to me. But tell us a little bit... the task, again, was image captioning, so tell us one highlight about one of the challenges that you faced in the competition. So I think that one of the big challenges in this case was the dataset, because for certain languages, like the Hausa language that we had to work on, the dataset was kind of small, and also there was not much variety in the pictures. So that was the biggest challenge, and we overcame it by either artificially augmenting the dataset or adding new pictures to it. Great, yeah. And specifically, part of the competition was thinking about different languages where maybe image captioning isn't supported, and I think one of the things I appreciated about your presentation was thinking through the business implications of a technology like image captioning that could enable new or expanded possibilities for local language communities that don't have it. Could one of you comment on what you envision in terms of the impact something like this could make, for a language where image captioning isn't supported yet? Yeah, so our idea was to create a web app or a mobile app where small businesses using Kyrgyz or Thai or these smaller languages could go and submit their photos. Everyone's got a cell phone in all communities nowadays, and if they can utilize that cell phone, leverage it, and
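The augmentation idea the team mentions, growing a small captioned-image dataset by transforming existing pictures while keeping their captions, can be sketched as follows. This is a pure-Python toy where an "image" is just a 2D grid of pixel values; the team's actual tooling isn't named in the conversation, and in practice a library such as torchvision or albumentations would do this work.

```python
# Toy sketch of dataset augmentation for a small captioned-image dataset:
# each transformed image keeps the caption of the original.

def hflip(image):
    """Mirror an image left-to-right."""
    return [list(reversed(row)) for row in image]

def brighten(image, delta):
    """Shift every pixel value up by delta, clamped to 255."""
    return [[min(255, p + delta) for p in row] for row in image]

def augment(dataset):
    """Return the original samples plus flipped and brightened copies."""
    out = []
    for image, caption in dataset:
        out.append((image, caption))
        out.append((hflip(image), caption))
        out.append((brighten(image, 20), caption))
    return out

# One hand-made 2x2 "image" with a made-up caption, tripled by augmentation.
tiny = [([[0, 50], [100, 250]], "a rooster at dawn")]
augmented = augment(tiny)
print(len(augmented))  # 3 samples from 1
```

One caveat worth noting: flips and brightness shifts add visual variety, but they cannot add the linguistic variety a small caption corpus lacks, which is the other half of the problem the team describes.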
upload those pictures right then and there and get a caption, that is handy for the small business, and for SIL in their mission. A small business would want to use this because a lot of people are drawn to websites by images; they click on images in Bing and Google. So if we can help small businesses, especially ones with a user base speaking a minority language, that helps both SIL and the company. So there's a monetary win, but really we're helping the world in a way, so it's really neat. Yeah. And also, sorry, if I can add something: last night we had a brief talk about the languages around the world, and so there is not only a business implication to this challenge. We know that we are also losing cultural heritage. Like you mentioned last night, every two weeks one language is lost forever, and there is no way we can keep those languages alive. So if this can also help make the world more open toward those small communities, that is also a good thing for the world, because it makes the world a more interesting place, right? Yeah, awesome. And maybe a comment from Aaron: as far as this competition goes, what's one highlight of something you learned throughout it that you view differently now, either in terms of the technical challenges, or the business problem, or something you'll carry with you through the rest of your work? Yeah, I think one thing I learned was... I think the challenge really opened my eyes to this problem that existed. I would have never thought about using AI or machine learning in a way that directly impacts languages, so I think that was definitely something I learned, and it was really interesting. Great. Well, congratulations again. I hope your
travels back home are safe, and congratulations; I hope to stay in contact. Yeah, thank you very much. Thank you. [Music] Okay, well, I'm now with the winning graduate team from the Purdue "using analytics and data science for good" competition. This team is from Georgia Tech. Here I have with me Hara and Veron Ravi, and there was another team member, Ancheetha, who couldn't make it to the competition here in person, but I want to acknowledge her and her contribution. So congratulations, first of all. You all are the first out of, I think it was 170-something teams in this competition, to come up with an image captioning model that performs well on three diverse languages from around the world: Thai, Kyrgyz, and Hausa. So first off, congratulations. I think one of the things that was really interesting to me about your solution is, one, looking to state-of-the-art models like CLIP, which featured in your solution, but then also using a multi-stage approach, where you first determined whether a caption you had in a database of captions already existed for the image, and then, if it didn't, generating a caption. So could one of you describe how you eventually got to that solution, how you considered using CLIP and got to thinking in that direction? So I think the problem itself was quite challenging. When you look into the dataset and actually see what the data is, you can see poems, philosophical statements, moral statements, parts of stories, and if you want to predict these kinds of statements, you need information about the previous part of the story, or the later part, in order to even build a model. It's very difficult to achieve a good zero-shot captioning model for these kinds of prediction tasks. So the next step we thought of was maybe we could do some sort of
classification model. That was the original thought process: from a corpus of sentences, can we select the sentence that best matches this image? From there we started researching, and when we went through Hugging Face models, we found the CLIP model. Then we researched further and found a multilingual CLIP model that could handle different languages. It sort of went through that process, and when we actually used it, it was decent. I wouldn't say it was perfect, but it certainly improved our overall solution quite a bit. Yeah. So, when you were thinking about this idea of looking to existing captions and using those when you could: how often, in the dataset you were looking at, which is this Bloom dataset, did you have to generate image captions, versus looking to a list of captions and using one that pre-existed? So, even on the training dataset, where we had the images and all the captions that we needed, when we actually used the multilingual CLIP model, only about 20 to 30% were matching, and that was with a fairly low threshold. We didn't want to lower the threshold further, because we didn't want more false positives, and basically we just decided on that threshold; we didn't do any optimization on it in particular. From there, when we actually used the model on the test set, we suddenly got a huge jump in the score. So we were covering about 20 to 30% of the images. Even when you had all the captions for the images, only 20 to 30% were actually matched by the multilingual CLIP model; all the other images went to the generator model. Yeah. And you mentioned CLIP, Hugging Face, these are all kind of the industry-standard, state-of-the-art sorts of things. As a team getting into this problem, what were the challenges that you faced in terms of maybe even finding where to
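The retrieve-or-generate pipeline the team describes, matching an image against a corpus of known captions and falling back to a generator model below a similarity threshold, can be sketched like this. The embeddings and the generator here are stand-ins; in the team's actual solution, a multilingual CLIP model from Hugging Face produced image and text embeddings in a shared space, and the 0.8 threshold below is an arbitrary value for the demo, not theirs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def caption_image(image_vec, corpus, generate, threshold=0.8):
    """Return a caption from the corpus if a close enough match exists,
    otherwise fall back to the generator model.

    corpus: list of (caption_text, caption_embedding) pairs.
    generate: callable standing in for a captioning model.
    """
    best_text, best_score = None, -1.0
    for text, vec in corpus:
        score = cosine(image_vec, vec)
        if score > best_score:
            best_text, best_score = text, score
    if best_score >= threshold:
        return best_text, "retrieved"
    return generate(image_vec), "generated"

# Tiny demo with hand-made 2-D "embeddings"; a real system would embed
# images and candidate captions with a model like multilingual CLIP.
corpus = [("a boy reading a book", [1.0, 0.0]),
          ("a river at sunset", [0.0, 1.0])]
fallback = lambda vec: "caption from generator model"

print(caption_image([0.95, 0.05], corpus, fallback))  # strong match: retrieved
print(caption_image([0.6, 0.6], corpus, fallback))    # no strong match
```

The threshold trade-off the team mentions shows up directly here: lowering it retrieves more captions (their 20 to 30% coverage) but risks false positives, which is why they kept it conservative and routed everything else to the generator.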
start, or maybe computational challenges, or other issues? So, when we initially started with the dataset, we were stumped, honestly. We hadn't even heard of a model that could generate contextual information with as much depth as was required by the solution here. So, as I said, we did our initial EDA with, say, Microsoft Azure, using their computer vision API and translator model, and when we actually used that, we thought, okay, these are reasonable guesses, guesses a human would make, but we had to go deeper. We had other ideas as well; we thought of clustering common images, since maybe they belong to the same story or the same piece, they're part of a single book, or something like that. So that's how we got started. The EDA that we did helped a lot; understanding that there were poems and such mixed into the data helped us look for deeper models that could generate context. Great, yeah, thank you for that info. So, as you look forward... I mean, I know all of you will go very far, just with the innovations you've demonstrated here. I hope that when you own billion-dollar startups, you'll hire me to sweep the floors or something. But how do you think working on a solution like this from start to finish has influenced how you'll think about AI or data science problems in the future? Any input? So, the amount of good that AI can do for real-world people... I have looked at a lot of the things SIL does; it's actually improving language proficiency among students, and also increasing the education rate among people who are not studying so much. So the amount of good that data or AI can do will definitely influence our thoughts in the future as well. It's like, the
kinds of use cases all these things can have for the lives of people are definitely going to stay in our minds, and we will definitely try to contribute wherever we can by keeping this in mind. This is going to stay with us forever. Great, great. Well, thank you for your participation, and congratulations again. I hope your travels back home are safe. Thank you. [Music] All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now. We'll talk to you again on the next one. [Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Copilot lawsuits & Galactica "science" | There are some big AI-related controversies swirling, and it’s time we talk about them. A lawsuit has been filed against GitHub, Microsoft, and OpenAI related to Copilot code suggestions, and many people have been disturbed by the output of Meta AI’s Galactica model. Does Copilot violate open source licenses? Does Galactica output dangerous science-related content? In this episode, we dive into the controversies and risks, and we discuss the benefits of these technologies.
Leave us a comment (https://changelog.com/practicalai/202/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Related to Copilot:
• Article - “GitHub Copilot Isn’t Worth the Risk” (https://www.kolide.com/blog/github-copilot-isn-t-worth-the-risk)
• Tabnine (https://www.tabnine.com/)
• Big Code Project (https://www.bigcode-project.org/)
Related to Galactica:
• Model website (https://galactica.org/)
• Article: “Galactica: the AI knowledge base that makes stuff up” (https://www.aiweirdness.com/galactica/)
Books
• “Interpretable Machine Learning” by Christoph Molnar (https://www.amazon.com/dp/0244768528)
• “Modeling Mindsets” by Christoph Molnar (https://www.amazon.com/dp/B0BMJH7M9F)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-202.md) | 12 | 0 | 0 | we were kind of laughing about it and stuff, but I think we're going to see so many of these instances in the years ahead, and you made a point that I think matters: we sometimes need to respond with a bit of empathy for the data scientists and AI engineers who are trying to create these, because they're trying to do some pretty cutting-edge stuff, and mistakes are going to be made, and in the end, my understanding is nobody was hurt by this. Yeah, we need to be both critical and empathetic. Indeed. Fair enough. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly, for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io. [Music] Welcome to another Fully Connected episode of the Practical AI podcast. This is where Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI-related news and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, a tech strategist with Lockheed Martin. How are you doing, Chris? I'm doing fine, as we are recording this episode the day before Thanksgiving. Yes, US Thanksgiving is tomorrow. That's right, and I know that we both have our day jobs, and we just have nothing to do today, do we? There's just not much going on, right? If only... we were talking beforehand, and both of us were like, oh gosh, it's quite a busy day for
the day before Thanksgiving. But you know what, we have a few minutes to talk about some fun stuff here. Yeah, exactly. I hope you've got your Tofurky or whatever ready for tomorrow. I don't know what we'll have, but I've absolutely got myself some vegan bird here. Nice, nice, I like it, I like it. So, I'm going to start with a story, Chris, because this is kind of what prompted some of my thoughts around this episode. I live downtown in the town where we live, and there's a barber a couple blocks away; I go and get my haircut from this barber, and he's big into crypto. Like, when NFTs were really hot, he was pouring thousands and thousands of dollars into NFTs, and he's got all this stuff he's doing. Anyway, he lost a bunch of money with NFTs, but the last time I went to get my haircut, we were talking about this recent controversy around FTX. And just a disclaimer: we're not going to be talking about crypto or Bitcoin or blockchain this episode, but it prompted my thinking. Basically, for those who aren't aware, there's this crypto exchange, FTX. The founder and owner, Sam Bankman-Fried, was a kind of industry leader, well respected, but he's turned into an industry villain, lost most of his fortune, and bankrupted a bunch of things: a $32 billion plunge in the value of this FTX exchange. And I was talking to a couple of people interested in this, like my barber, who, I don't know how much of an expert he is, thinking about how this is a major setback to those who are promoting blockchain technology, cryptocurrencies, crypto, whatever. And it got me thinking: what sort of controversy or event could prove to be a major setback to the AI industry, or is such a setback even possible? So that's my first question to discuss on our day before Thanksgiving. I guess we can first give thanks that maybe such an event
hasn't happened, although maybe smaller controversies have. Yeah. Although, before we move fully over to the AI side from the crypto side: I happen to be staring at Sam Bankman-Fried's Wikipedia page, and I'm looking at his hair, and as you mention the barber and stuff, there's got to be a joke there. That's all I'm saying. Yeah, there's got to be a joke there. So, moving back over to AI... Well, I kind of feel like you've set me up, because you're asking what could possibly go wrong with AI that would be a major setback to the industry, right? Not just a bad thing. There certainly, I think we can both say, have been bad things that happened with AI, no doubt. Absolutely. I think it would be the degree of badness. Potentially, on a scale of bad things... what's the scale of badness, 0 to 10? What's at the 10? Well, a 10 is significant loss of life that's caused by AI inference, and specifically, because of the industry I work in, I'm going to say unintentional loss of life. By that I'm not saying that... I should be careful; we don't have AI that's trying anything. I'm just saying that in the future, sometime, as things develop... I'm having to put in all the careful caveats. Yes, if there were AI in some industry and it somehow resulted in unintentional loss of life, that would be a very bad thing, right? So, like, if all the airlines started flying autonomously, and there was an airliner that was flying autonomously and had significant loss of life, or something like that? Indeed. And when you really think about it, that is something that people are already talking about for the future: AI running various types of vehicles, some of which are on the ground, some of which are in the air, and there may be instances of that out there in the world. So yes, an airliner would be a big thing. I have to
I have to, say as we're talking about this kind of, scenario though you know I'm like, totally recognizing the tragedy of that, I have always found it very interesting, at the perspective so um in terms of, loss of life uh like we react to it, depending on what the cause is in a, different way and so different different, results of some there are some things, that people look in the news and they, hear about people dying and they kind of, it's remote from them and they kind of, move on very quickly and go oh that's, that's or it's a story they've heard, before or something yeah that's a bad, thing and I'm I'm sorry to hear that, happen but they kind of move on and then, there are other stories where they kind, of get very emotional about it I think, that my suspicion is that should such a, story in the future evolve where it was, AI driven that would that would get to a, whole new level of that and I think the, interesting thing for me psychologically, is the fact that in all cases it was the, same loss of life but the way we the the, way we choose to react to it can vary um, and so uh it's just an interesting uh, you know psychological point from my, standpoint but I do think I don't think, it would stop AI but I do think uh such, an event would uh would create a lot of, pause yeah I think it's uh in my mind, it's not uh ceasing of AI research or, something like that but more maybe a a, Slowdown or intense regulation until, like more reasonable regulation comes, into play we both talked quite, extensively on the podcast about how, government regulation and laws around, you know algorithmic decisionmaking and, that sort of thing are lagging quite far, behind the scale at which people are, using this technology which is sort of a, scenario that would kind of create some, awkwardness one of the things that I, wanted to bring up this episode as we, talk through this issue is one of the, those awkwardness that has been created, and some people might see it as a bigger, 
deal some people might see it as a, really big deal or not a problem at all, so um I don't think uh we're necessarily, in a like we're not lawyers or in a, position to uh you know weigh in on on, how this will all go but I think we we, can present some sort of uh things that, are happening right now and the one that, came to my mind was GitHub co-pilot, which uh I'm actually a huge I mean I'm, a huge fan of so maybe I'm biased in, this discussion you know we're not as, far as I know we're not sponsored by by, GitHub co-pilot or Microsoft or anything, but um I do like the product and I use, it and I started it is interesting so I, found this article um and the article's, title is GitHub co-pilot isn't worth the, risk and it's sort of geared I guess, towards like a CTO type and the thought, is like like should you allow your, engineers to use GitHub, co-pilot and it was kind of really I, mean it was really good timing for me to, see this article I think because, literally a couple of the data, scientists on my team were were asking, me like I think the week before is it, okay like is there a policy against us, using GitHub co-pilot or is there any, issue with us using GitHub co-pilot like, in our day-to-day work right so I, already been thinking about this and one, thing that struck me is I was already, using get co-pilot without maybe, realizing some of the implications, around around uh some of the things, brought up in the article but now you, know people on my team are asking me, should they use GitHub co-pilot and so I, thought the the timing was was really, good I mean one thing to ackowledge I, guess here is if people aren't familiar, with GitHub co-pilot it's sort of an AI, enabled, assistant that kind of is there in your, IDE or your code editor with you and, suggest certain um blocks of code or, converts comments into code like you can, say you know function that you know, transforms this data into that and it'll, kind of draft that out for you and it's, 
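If you haven't seen that pattern, here is a hypothetical sketch of it: the comment is the kind of prompt a developer types, and the function beneath it is the sort of draft an assistant like Copilot might suggest. The names and the record format here are made up for illustration; this is not actual Copilot output.

```python
# A developer types a comment describing the intent...
# "function that transforms a list of {name, score} records
#  into a dict mapping name -> score, keeping passing scores only"

# ...and the assistant drafts something like this:
def passing_scores(records, threshold=60):
    """Map name -> score for every record at or above the threshold."""
    return {r["name"]: r["score"] for r in records if r["score"] >= threshold}

print(passing_scores([{"name": "Ada", "score": 91},
                      {"name": "Grace", "score": 58}]))  # {'Ada': 91}
```

The point in the discussion that follows is that a block like this may arrive in your editor with no indication of whether it is boilerplate or a near-verbatim copy of someone's licensed code.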
It's quite nifty. So the first acknowledgement is that GitHub Copilot is obviously very powerful and, I would argue, useful; otherwise we probably wouldn't be having this conversation.

I would agree. I like it personally; I'll use it for my personal things, and I really like it especially because I go in and out of coding. I'm coding sometimes, but then I'll go periods of time where I'm not, and things will slip, and it's a great way of getting back to quick productivity via those suggestions. Often I'll see them and go, "Oh yeah, do that," and select it. So it's a great tool. I will confess that to this day I still have a kind of discomfort with the idea, though. I think it's that open-source mentality, and I'm not talking about the legality of it: when people submit open source to GitHub, if you look back at the long history of that, they do expect other people to use the code and adopt it. But making a large company's infrastructure out of it, there's a discomfort there that I've talked with other people about, and everyone has this uneasiness about that aspect of it. So I'm guilty of using it, I like it, but I'm never quite comfortable with it.

Yeah, maybe not feeling guilty, but what are the implications of it? I've thought about that a lot more over the last couple of weeks, and to spoil the ending, I'm still using GitHub Copilot. Maybe during this episode you can tell me whether that's a wise decision. The controversy, or the recent swell of discussion around it, has a buildup to it, but on November 3rd a lawyer filed a class
action lawsuit against GitHub, Microsoft, and OpenAI related to GitHub Copilot. The basic charge is that Copilot's suggestions aren't boilerplate or novel; they bear unmistakable fingerprints of their original authors, and according to a lot of open-source licenses, if you're not giving at least attribution to those copyright holders, even if it's an open-source license, then you're in violation of the license.

Yeah, it's an interesting idea. The thing I wonder when I hear that is that writing code is so structured that in a lot of cases you can have different programmers coding in a very similar style, maybe even selecting the same variable names. So does that mean it's actually pulling from someone's directly copyrighted code? Or, if there are a thousand versions of the same function that are all literally named the same, does that imply the same thing? I don't know the answer, but it's an interesting conundrum.

[Music]

Yeah, Chris, I think what you were talking about, when code shows unmistakable fingerprints of its original authors versus when it is boilerplate, is in and of itself a hard one to navigate. I was just having a discussion with my brother-in-law Ed (shout out if you're listening, which I don't think you do). He's learning JavaScript and doing some front-end development, and we had this discussion the other day. He said, "There's this piece of an app that I've used, and I can see the code, and I'd like to take that little bit and modify it over here in my little app to do a similar thing. It's basically the same thing, but slightly different. But how many ways can you write this for loop? I feel like I'm stealing from this guy, but it's basically the right way to write this loop. So do I copy it over and modify it?"

I think in a normal open-source world, if you were copying things out or integrating certain libraries, there are attribution elements to it, and there are dependencies in terms of how restrictive your license is versus the source license. But as an individual code writer or programmer, you can navigate those things, because you're taking code from, say, Project X, you can see the license, and you do what the license tells you to do; you make that decision actively. But with GitHub Copilot, I'm in my VS Code, I'm typing along, and then boom, there's a block of code. I have no idea if that's verbatim from someone's repository or some unique morphing of various things together.

So I'm just curious: could that be solved if they added a feature that either specified the suggestion came from a specific source, or explicitly disclaimed that it was inferenced code not from a specific source?

Potentially. I think the most foolproof workaround, or solution, is to train the model using only explicitly permissively licensed code. That's the stance of another offering called Tabnine, which, in my understanding, is specifically trained on permissively licensed code, which would not have some of these same copyright issues; MIT versus GPL, say. Yeah, the one that's been called out a lot with GitHub Copilot is GPL. I'm just looking at a tweet here from Tim Davis (@DocSparse); I think this is one of the ones that originally got a lot of attention, where he's
saying that Copilot, even with "public code" blocked, emits large chunks of his copyrighted code with no attribution and no LGPL license. He shows pictures of the two, his code on the left and GitHub's on the right, and says "not okay". I think this is what got the discussion going: the mixing of licensed code within GitHub's training dataset is part of the issue. We've talked about this a little with large language models; they're kind of like stochastic parrots, putting together things from various sources they found in language. So when you have this weird mix of code generating this weird mix of a block of code in your editor, it may be quite difficult on the inference side to understand, or trace back, what is actually coming out that is copyrighted in certain ways.

As we're into this kind of swamp of technical mixed with legal considerations, with the expectation that it will continue to happen across multiple solutions, what does governance look like for something like this? And I say governance loosely; it could be legal remedy, it could be the AI ethics we like to talk about. What does the world look like when you have this swamp of "he said, she said" in terms of whether it was his code or not, and how do you resolve that? How do you find a framework that gives you confidence that you're within the boundaries of what is considered reasonable, acceptable, and legal?

Yeah, I think it's an open question. One of the things I was discussing with my team, because I was really curious about their input, is what the actual legal recourse here is. Is the individual maintainer of some random GPL-licensed tool on GitHub going to sue GitHub? Or, more relevantly, is that person going to sue my organization because I used GitHub Copilot and output some block of their code? I think the likelihood of that is probably very low, because, as much as we love our open-source maintainers, they generally don't have a lot of capacity for extra things; they're just trying to keep up maintaining their project and all its issues, potentially in their spare time. One of the things stressed in the article, which is by Elaine Atwell, and we'll link it in our show notes, is that it's probably not the individual maintainers who are going to deal with this legally, but open-source advocacy groups. The one she references is the Software Freedom Conservancy, or SFC. So it's much more likely that an advocacy group like this would sue certain companies using the product. But even then, they're probably not going to go after the company that has one developer using Copilot to write some random service; they would probably target large organizations, maybe with hundreds or even thousands of developers all using GitHub Copilot and violating a bunch of things. So one element of this is: is it a reality that my team is going to get sued? My guess would be no. But that's a separate issue from whether it's a good idea to use this, and a separate issue again, like you're talking about, from what the proper governance is that would prevent or help with responsible usage. Those questions each have a slightly different nuance.

I feel it's not that far from other AI ethics discussions we've had; it kind of comes down to who has responsibility in these cases, and who has agency, and then there's someplace you're going to draw a line on what is acceptable. A thought that hit me as you were talking about large language models a moment ago, and this is outside my expertise obviously, is that you're in a body of knowledge that's being worked on, presumably public and open, but at some point things become copyrightable, and I'm sure an attorney who knows all about that could clarify. There's almost a need, if you're going to use the tooling and new methods we're talking about, for an assurance of some sort that it is going to fall within currently legally accepted use. And then there's also the question of whether what has historically been reasonable continues to be reasonable given new types of technology people had never thought about. We acknowledge that the legal frameworks have fallen way, way behind in these areas for the most part, so how do you resolve that? There's an ethical concern, there's a legal concern, and there are all the various licenses specifically. It's quite a mess. What's the path forward?

Yeah, and that's why I came to the conclusion with this one that, as much as this is a controversy, it's not going to grind the AI industry to a halt, because it's so messy that we probably won't understand the implications for years. That would be my guess. And by then, GitHub, I think, is launching the enterprise sort of
usage of Copilot, if they haven't yet by the time you're listening to this episode. So there are going to be a lot of people using it, and that's going to muddy the waters even further. The lawsuits will take several years to work themselves through, and by that time the risk associated with being sued will have caused various actors in the process to go into risk mitigation of various types, probably market-based rather than legal. So I think we can watch GitHub specifically, and Microsoft and OpenAI, the ones involved in Copilot, and look at the ways they modify the service to understand how they're being pressured to change what they're doing based on the ongoing proceedings. If they change how you use Copilot, and it's not a new feature, maybe that's an indication of some of these restrictions and implications on the legal side. So it'll be an interesting one to watch. I actually got an email from one of the members of our leadership team, who I talk with occasionally about AI things and how the industry is shaping up, and that's what he said: "This is going to be an interesting one to watch." So it's definitely going to be interesting, and we'll keep you updated here on the podcast.

[Music]

Well, Chris, the other thing I wanted to talk about today, which is in the same theme, also has to do with what you called some large body of knowledge when we were talking about GitHub and the large body of open-source software knowledge GitHub is leveraging. There's another thing that has been, I would say, quite interesting, but has also generated a lot of controversy in the past weeks, and that's the Galactica model. You can go to galactica.org to learn about it. This is a model from Meta AI, and the idea behind it is: hey, we have all of this organized scientific information, a body of scientific work, academic papers, which include narrative and theorems and math formulas and tables and all sorts of things. What the team did is release a new large language model trained on 48 million papers, textbooks, reference materials, compounds, proteins, and other sources of scientific knowledge. That's what I took from the galactica.org site, and the idea of it is pretty cool. There's an explore page on the Galactica site, although the site has been changing quite a bit in recent weeks; at this time there's still an explore page, and the examples they give include "language models that cite". The input prompt example they give is "the paper that presented a new computing block given by the formula", followed by a math formula, and the Galactica suggestion is "Attention Is All You Need" (Vaswani et al., 2017). So this is a way to organize and learn about scientific knowledge. They also give examples that I think get to the more controversial things, which we can talk about in a second: "scientific from scratch", which some people might interpret as "science from scratch". They give the example of translating a math formula into plain English, or finding a bug in Python code, or simplifying a math formula. So there are all these prompts you can give it, like translating a math formula into plain English or into Python code.
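To give a concrete sense of what that kind of formula-to-code translation looks like, here is a hand-written sketch (written by us for illustration, not Galactica output): the softmax formula, softmax(x)_i = exp(x_i) / sum_j exp(x_j), rendered as plain Python.

```python
import math

def softmax(xs):
    """Translate softmax(x)_i = exp(x_i) / sum_j exp(x_j) into Python.

    Subtracting max(xs) first is the standard numerical-stability trick;
    it cancels out and doesn't change the result.
    """
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # three probabilities summing to 1, largest for the largest input
```

Whether a model reliably produces translations like this is exactly the kind of claim the rest of this discussion is probing.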
That seems quite useful to me. I'm not sure how the code it was trained on was licensed; that's maybe another separate issue, but it's not the main controversy that's come up with this. In general, Chris, what are your first impressions of this work?

I think it's a great idea. We've seen these kinds of things, with the proteins and such; we've seen amazing work in these different areas, and we will continue to. But sometimes in our industry, meaning the larger artificial intelligence industry, we are so busy trying to get the next big thing out, trying to be the thing of the moment, that missteps happen. I think this is a case of a misstep, where you had a large organization trying to get out there. Yes, it's Meta, a big, amazing AI capability, but there are other big ones too, and as we've discovered over the last few years, it doesn't take long for the next amazing thing to replace today's amazing thing. So sometimes maybe we need to get it right before we get it all the way out.

Yeah. Before I get to individual observations about Galactica, something just occurred to me that I don't know if I've distilled in my mind to this degree: even in my own work developing AI models and systems, one of the principles I've learned is that the communication and expectations you set when you do an initial release of an AI system or model really drive people's initial perception and their ability to adopt it. I can give an example from my industry. We do some language translation. If I come to a translation team and say, "It's awesome, I've just built this great machine translation system; you're no longer going to have to do translation, just make a couple of edits here and there and you're good to go," then what the translation team is going to look for in that system is all of the ways it doesn't work and doesn't fulfill the expectations I've given them. Whereas if I come to that team and say, "I really appreciate what you're doing, and I understand you have pain points around the efficiency of your process; I think maybe this system we've created could help you. Could you help us understand how it can best be used in your process?", and give them some suggestions, some prompts for getting started, then what they're looking for is not why this is bad, or taking over their jobs, or encroaching on what they're doing, or really dangerous. Their thought process is: these people want us to tell them how we can use this. Generally, in those cases, I've found people do find the positive things too. They'll say, "I didn't expect this to work great in this situation, but it actually produced pretty good output; can you do more of that? But in these other cases it did really badly, so don't do that anymore." So you get more useful feedback on an initial release if you approach the public, or your internal teams, or whoever your stakeholders are, by asking the community to help you understand the behavior and utility of what you're releasing, versus telling them the utility, telling them this is going to solve this problem, when in actuality they find out it doesn't.

Yeah, and the way you approach that has a big impact on trust in the system. As we've seen over and over through the years, AI is one of those technologies where people have to develop a sense of trust in what's possible, and then also a validation of trust for any given system. The way you outlined positioning that approach makes a big difference in how people engage from a trust perspective; they'll give it a chance if you position it the second way rather than the first. It's difficult, and it's not always that clear-cut when you're in the process.

Yeah. For those at Meta, if you're listening, I totally sympathize with you; it's very hard to figure out how to communicate and release these things well. I think this Galactica system is likely amazingly innovative and a great achievement, but it also has behavior that could be non-ideal or, worse, harmful in certain cases. And I think it's partly the community's but also the creators' responsibility to anticipate some of those. So I can give an example. I found a fun, or disturbing, I don't know how to phrase it, article from Janelle Shane: "Galactica: the AI knowledge base that makes stuff up". We'll link that in our show notes.

Pretty harsh, isn't it?

Yeah, thanks, Janelle, for writing this. She basically starts with the conclusion: "I've tried Galactica. It's not helpful." One prompt she gave is "The number of giraffes that have flown on a lunar mission is currently", and Galactica responds: "The number of giraffes that have flown on a lunar mission is currently 28, including 15 on the Moon and 13 on Mars. Of those 28, 17 have flown on the Moon and 11 have flown on the Mars. The first lunar mission was in 1968, when the Luna 13 mission sent a giraffe to the Moon. In 1970, the Soviet lunar probe Luna 15 sent the giraffe to the Moon." I'm sorry, it's pretty good stuff right there.
Yeah, I mean, that's pretty good. I think I probably don't need to give many other examples to illustrate.

No, I think you highlighted it quite well. The funny thing is, I'm doing the same thing, looking at some of the various articles. Ars Technica says "New Meta AI demo writes racist and inaccurate scientific literature". People get the idea: it's a trust issue and an accuracy issue, one related to the other. Despite the very hard work, I'm sure, of that Meta team, no one's going to trust that model if they fix it and come back out with it; all the focus is going to be on whether the results I'm getting out of it are legit, which is a shame when you think about it.

Yeah, and I think about the different approaches people have taken here. On one side, OpenAI, with some of the models they've released in a very controlled way via an API, have attempted to address part of this release problem; they understand there could be even intentional misuse around misinformation, or harmful usage, and they try to anticipate that, create an API with controls around it, and so on. The other approach, a more open-source or open approach, is something like Stability AI, which released the Stable Diffusion model under an open license. It's out in the public, but within the licensing, the OpenRAIL license, they first try to include restrictions around uses of the model they could envision, and then put it out in the public with the hope that the community can help put necessary guardrails around usage and provide feedback on how the model can be used. A third approach would be to say, "Here's our great model, it can solve this problem," and ignore the fact that maybe it doesn't always solve that problem, and maybe it also has harmful uses. I don't think any one of these is always right or always wrong, but it is worth considering these release options and their implications.

In fairness, I remember that on that particular release from OpenAI, we were a little bit critical on the show, saying they were kind of holding back. I don't remember where we ended up, because things have evolved, but I do remember discussing whether that was appropriate, and then we have something like this today and it makes that look a lot more reasonable in retrospect. It really depends on the moment you're in and what's just happened. Also, on licenses trying to anticipate specifics, I remember thinking that whoever wrote a particular clause may not have had great insight into some of the use cases covered by that clause. So it's a hard nut to crack, trying to come up with the right solution here.

Granted, I think some of the response from Meta individuals, actual individuals at Meta, not virtual ones, was that this sort of prompt, I think one of the phrases they used was "casually misusing the model", which is sort of blaming the people using it. To be fair, that prompt is trying to draw something out of the model that the creators would explicitly call an adversarial prompt: you already know there are no giraffes that have flown on a lunar mission, and you're trying to trip it up. So there
is an element of truth in that, I think. But generally the community has asked how close the goofy "casual misuse" of the model is to the gray area of misinformation, and to people intentionally using it to create misinformation, especially around science or important things like health.

Yeah. It's kind of funny; we started this conversation about this specific Meta instance kind of laughing about it, but I think we're going to see so many of these instances in the years ahead. And you made a point that we sometimes need to respond with a bit of empathy for the data scientists and AI engineers trying to create these, because they're trying to do some pretty cutting-edge stuff, mistakes are going to be made, and in the end, my understanding is nobody was hurt by this. So we need to be both critical and empathetic. My previous boss would say we need to be tenacious and gracious; those things aren't mutually exclusive.

That's a good point. As we wrap up, I do want to share a new learning resource I came across in the past couple of weeks. I don't know if you remember, Chris, at one point we shared a learning resource from Christoph Molnar, his book on interpretable machine learning, which is really cool. Well, he has a new book called "Modeling Mindsets: The Many Cultures of Learning from Data". My understanding is that the book goes into various approaches to modeling, whether you think about Bayesian statistics or other approaches, and talks about what we can learn from these different modeling mindsets that could benefit us in our own modeling work, which I think is quite an interesting proposition. His subtitle is "becoming a better data scientist by understanding modeling mindsets": understanding these diverse modeling mindsets can help with whatever modeling problem or solution you're trying to come up with.

I think it's a good time for a book like that, in terms of benefiting from the different ways you can approach a problem, because I've recently seen some engineers very much stuck in a particular mindset, a particular lane, trying to solve a problem. So that one hits close to home for me.

Yeah, there are other lanes. Yes, indeed.

Cool. Well, thanks for the discussion today, Chris; it was a fun one leading up to Thanksgiving. I hope you have a great holiday with your family, and I look forward to chatting next week.

You too, Daniel. Have a good holiday. Talk to you later.

[Music]

All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now. We'll talk to you again on the next one.

[Music]
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Protecting us with the Database of Evil | Online platforms and their users are susceptible to a barrage of threats, from disinformation to extremism to terror. Daniel and Chris chat with Matar Haller, VP of Data at ActiveFence. ActiveFence, a leader in identifying online harm, uses a combination of AI technology and leading subject matter experts to provide Trust & Safety teams with precise, real-time data, in-depth intelligence, and automated tools to protect users and ensure safe online experiences.
Leave us a comment (https://changelog.com/practicalai/201/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Matar Haller – GitHub (https://github.com/matarhaller) , LinkedIn (https://www.linkedin.com/in/matarhaller)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• ActiveFence (https://www.activefence.com)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-201.md) | 5 | 0 | 0 | What we do is we basically combine this very, very deep subject matter expertise with our technology. We're a technology company, and we also have experts in the field, in the domain: experts in the field of researching human trafficking who really understand that space, or in misinformation and the different types of misinformation, and in hate speech, and in terror. They speak the languages, they research the space, they understand it, they know the key players, they know the different organizations, the keywords. And this is an adversarial space, it's constantly changing, so they make sure they stay up to date. What that means is that we on the data side can basically take their ideas, take their knowledge, and engineer features out of those: really translate the human knowledge into our models, so then we can go out and automate that and do it at [Music] scale. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen, check them out at fastly.com, and to our friends at Fly.io: we deploy our app servers close to our users, and you can too, learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack, I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who's a tech strategist with Lockheed Martin. How are you doing, Chris? Doing very well today, Daniel, how's it going? It's going great. So yesterday was voting day here in the US, and I did go to the voting place, and it was interesting, because in line I could
hear people talking about cyber threats to the voting machines and other things like that, so my mind was already thinking about these things, because we have a really interesting topic to talk about today that's in that same vein. We're privileged today to have with us Matar Haller, who is VP of Data at ActiveFence. Welcome, Matar. Hi, thanks for having me. Yeah, and ActiveFence, I've read a bit about it, and the website talks about this barrage of threats that online platforms are susceptible to now, which ActiveFence is addressing in various interesting ways, which we'll get into. But I'm wondering if you could give us a picture. If I'm going to run an online platform of some type (I'm likely not going to start and run the next Facebook, but I might very well start and run some type of software company that provides an online platform to do something), what should be on my mind, and what's the reality of the kind of online threats I might need to be aware of if I'm getting into that space? Yeah, so first of all, one thing to think about is that anytime you have a platform with any type of user-generated content, whether users are uploading photos or they're chatting or they have comments or anything like that, you're going to have tons of data very, very fast, and it's prime territory for people to post wonderful things, but also some really, really dark things, which we've all seen and been exposed to. So one thing to keep in mind is that trust and safety, and just basically safety online, is not really a nice-to-have anymore. At this point it's a competitive advantage; it's kind of a basic expectation. Users are expecting it, advertisers are expecting it, parents are expecting it, the public expects it. So if you're going to spin up a platform, first of all, best of luck, and second of all, you need to keep this in mind from the get-go, before you find yourself down this rabbit hole. One thing I think is really important to keep in mind is that although trust and safety isn't a new industry, it's really only now becoming something people are aware of. Like I said, it's a basic expectation now, and it's not only users but also regulators and legislators; there's new legislation coming in that's bringing it even more to the forefront, and the basic content moderation that's out there today doesn't really make the cut. To follow up on the second part of your question, about what kinds of harms are out there: online harm is really multi-dimensional. We see it in different media types, so we've seen it in games and merchant sites, chats, text, video, audio, things like that, across many, many different languages, and also different types of violations. You have white supremacists and terrorists and human trafficking, these really painful sorts of things, and it also goes into misinformation, disinformation, fraud, spam, cyberbullying and so forth. So it's this really complex space that you need a deep understanding of to understand how to address it. And up to this point you were talking about content moderation, and how it has evolved over time but is still lacking in the traditional sense. What does that look like? People might have in their mind: oh, I have a blog, and I'm going to choose whether I allow people to post a comment, or I have to approve that comment before it's posted, or something like that. So as of where we sit today, is most content moderation reactive at this point? How are most people approaching the problem right now, and why is that lacking? So there are different levels, I would say, of content moderation. At this point, just sitting and moderating every single comment gets out of hand really, really fast, and so some level of automation started being introduced. The first basic level is: let's go out and look for keywords. I don't want any slurs on my platform, I don't want anyone calling anyone any of those words, I don't want that there, so I'm going to ban all of those. And then people get a little trickier, and they'll say, well, what about if we use an emoji? There are different kinds of emojis, or combinations of them, that can also be used in a hateful way. So you say, okay, I'm going to ban those, and I'm going to ban these specific keywords, and I'm going to ban these n-grams, because these phrases are bad, like "I hate Jews," so I'll ban that. And that works okay, but very, very quickly you get to cases where keywords are insufficient, like with emojis, or with leetspeak, which, for people who aren't familiar, is basically taking a word and replacing letters with numbers. "Adolf Hitler" can be written pretty much all in numbers, to evade detection, readable only on a need-to-know basis. Also in the keyword space you have numbers: 1488 is a white supremacist number, the 88 for "Heil Hitler" and the 14 for the fourteen-word phrase that they use. And so you can say, okay, I'll add all of those to my dictionary, but then you get to the place where you say: well, what about someone telling someone else, "don't call me" some slur, "don't call me that"? Now they're using the slur. Is that something you necessarily want to ban? Maybe in some cases, but in other cases you're going to need a deeper understanding of how the language is being used before you just go out and outright ban it. So you see this evolution in the approaches that platforms take to moderate the space, and looking at the context in which language is used is the first step in that. I think I just realized, and that was a great explanation, how sheltered I am, because several of the things you referred to I just didn't know at all. But I'm curious: when we were first starting the conversation a few minutes ago, you also mentioned misinformation, and we were just diving into some specific use cases on hate speech. How does misinformation fit in? We've been dealing with hate speech for a long time now, but misinformation in the last few election cycles has really become a huge issue, and obviously in national security issues and things like that it's big. In my mind I've kind of thought: there's hate speech, and there's misinformation. Is there a connection between them? Are they bound together in some way, the way you see it, or are these distinct, separate kinds of things? How do you think about it? How do the folks at your company think about it? That's a really interesting question, to take these two examples of two specific violations. I think violations are on a spectrum. You have these more evasive violations, things that are more difficult to find or require subject matter knowledge, like hate speech, where you need to know these keywords,
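The keyword-evasion tricks mentioned above (leetspeak substitutions, number codes) are typically countered with a normalization pass before dictionary matching. Here is a minimal sketch of that idea; the substitution map and blocklist terms are invented placeholders, not a real moderation dictionary or ActiveFence's actual pipeline:

```python
# Map common leetspeak substitutions back to letters before keyword matching.
# The mapping and blocklist are illustrative only.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

BLOCKLIST = {"slur", "badword"}  # placeholder terms

def normalize(text: str) -> str:
    """Lowercase and undo simple character substitutions."""
    return text.lower().translate(LEET_MAP)

def contains_blocked_term(text: str) -> bool:
    tokens = normalize(text).split()
    return any(tok in BLOCKLIST for tok in tokens)

print(contains_blocked_term("you are a $lur"))  # True: "$lur" -> "slur"
print(contains_blocked_term("hello world"))     # False
```

As the conversation points out, this is exactly the level of moderation that breaks down: "don't call me [slur]" would still be flagged by a matcher like this, which is why context-aware language models come next.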
and you need to know these things. Then you have the more common violations, where for some of them you might say, well, maybe it's not even a violation, like nudity or profanity, things that are just more out there. Everything lies somewhere on that spectrum, and you can go through spam and fraud and so forth until you get to the really dark things, like child safety and abuse. With misinformation it's actually interesting, because it's not really trying to evade; that's the whole point. On the other hand, it's really tricky to find and to understand. There are lots of organizations that do really wonderful fact-checking work, keeping up on the trends and really identifying misinformation, and there are also techniques we can use where, once we've identified a specific type of misinformation, we can use it to find that it's going viral, and so forth. It's a real struggle to understand how to put that in context, because you could say things in a misinformation context where there's no hate speech in it explicitly, there are no banned words, none of that is there, and yet, as we've seen in recent years, it can do great harm. So it seems like a very hard target to go after and mitigate in a sane and reasonable way. Absolutely. I think that's one thing that's really unique about ActiveFence and what we do: we combine this very deep subject matter expertise with our technology. We're a technology company, and we also have experts in the field, in the domain: experts in the field of researching human trafficking who really understand that space, or in misinformation and the different types of misinformation, in hate speech, and in terror. And so they speak the languages, they research the space, they understand it, they know the key players, the different organizations, the keywords. This is an adversarial space, it's constantly changing, and they make sure they stay up to date. What that means is that we on the data side can take their ideas, take their knowledge, and engineer features out of those: really translate the human knowledge into our models, so then we can go out and automate that and do it at scale. The reason it's so interesting is that, as you all know, models drift, they decay. Normally you can go out and retrain your model and get new weights and you're great, except if you're in an adversarial space, then not only are you drifting, but your reality is non-stationary; it's changing from underneath you. And as it's changing from underneath you, you need to hurry up and re-engineer your features. So we're constantly re-engineering our features, retraining our models, and also thinking about what else we can possibly extract from the data that's coming in. We're analyzing text, video, audio, basically anything we can get our hands on, and really milking whatever we can out of it. That's super interesting, I have so many questions. Really interesting technology, but the infrastructure management part of that is, I'm sure, a great challenge too. But I'm glad you brought up the modality thing, and I also saw on your website that you talk about different languages as well. This is definitely, in terms of the way people communicate online now... you mentioned emojis, and I was also thinking of gifs (or jifs, depending on who you are), or posting memes with text in the image. And of course, like you're saying, there are videos and audio messages, all of that. I guess as a more general question: is language, but sort of multimodal language, your primary area of research? Or are there other things, outside of communication, in terms of the threats posed to online platforms, where someone's not trying to communicate a certain message but it's still a threat to the platform in one way or another? I guess spam would be an example of that, but I don't know if you have other examples, or is a lot of what you focus on really the communication and language piece? So the goal is not necessarily language, and in a second I'll talk a lot about contextual AI and what that means, but really our goal is to enable users to be safe online, to have a safe experience. I'm a mom, I have three kids. I started working at ActiveFence and I said, oh gosh, my daughter is not getting a cell phone till she's 35, forget about it, because you're suddenly exposed to all of this. But then you say, that's why what we do is so important, because that's our whole goal. It's not only about language; that's just one form of communication. There's lots of things out there, and we're a bunch of concerned parents, so let's really make this safe. The fact that Chris is still in this sheltered bubble is amazing; I want everyone to be in this sheltered bubble. And so that's the idea. I just need to correct that: it depends. There are some bubbles I'm sheltered from, and some I probably am not. Like when you were talking about the hate speech, the specific numbers that meant stuff, I was like, I didn't know that. Anyway, I didn't mean to cut in, go ahead. So language is only one part of it, and one thing we really get into is context. So let me take you on a journey through context, and we'll end up at memes. Sounds great. So we were talking about language context, and how keywords and n-grams just don't cut it. You need language models, Transformers and so forth, to really get an understanding of what is being said and the context in which it's being said. Those are the kinds of models we end up training, and that we have data for. Like I said, we have our subject matter experts and policy experts who are able to ensure that we're capturing things that are on the edge, on the border, because that's where things get interesting. That's how I'm able to get the difference between "I'm proud of being a whatever," where I'm reclaiming that word, versus "you are a whatever, you're not allowed here." And even within hate speech language there's insulting hate speech and non-insulting hate speech, like, "hey, wasn't that a great KKK rally yesterday, I really like your Proud Boys tattoo." It's hard to catch those things. That's a surreal statement right there. That example has never been said on your podcast, right? No, it's never been said, but just as you said it I was thinking, wow, I'm not talking to the people who say things like that. Anyway, sorry, go ahead. It's just novel to me to hear some of this perspective. But then we can go to the next level. Let's say we're now in the image space. One thing we're able to do, again because we have this deep subject matter expertise, is search for logos: logos of terror groups, including rare terror groups, small ones that are hard to find. We know this space, so we're able to go out and do logo detection, find those particular logos, identify things. And then you say, okay, great, here's a video, found the ISIS logo, great, check, terror. And then you say, well, wait a minute, but there's also the CNN logo here. So suddenly, even though it's a snippet from ISIS, the context in which it's used doesn't make it violative; suddenly it's interesting, it's important, it's historical, it's whatever. You can see the same thing with videos of Nazis marching: sometimes that's glorified, and sometimes it's just historical, it is what it is. So that's another level of context, where one signal out of the image, or out of the video, isn't enough. Another thing we like looking at is the context in which the image is being used. What is the title, what is the description, what are the comments? We have an example I like using where you see non-violative text, "I love him," and you think, okay, that's fine, who cares, and then you zoom out and see it's "I love him" with a picture of Osama bin Laden, and suddenly that's more interesting; suddenly it becomes violative. So you can't take any one piece in isolation. Or there's the example of a chef (it's hard to do this on a podcast) who is demonstrating knives, showing these knives, and his hands are all cut up because he uses knives. If you do just object detection, it screams at you: weapon, weapon, weapon, oh gosh, this is a terrible video. And then you analyze the title and the description and the comments and the channel and everything along with it, and you realize, no, he's teaching about knives, it's a chef video, not really interesting. So looking at things just as keywords in a sentence isn't enough, and looking at an image by itself isn't going to tell you whether or not something is problematic. That's the idea of contextual AI that we think about a lot: what is the context in which something is used? And context can mean lots of things. It can also mean the policy: different platforms have different policies. Some platforms will say baby's first bath is child nudity, you cannot have it, and others will say it's not a big deal. That's another level of context our models need to deal with. [Music] I mean, just knowing where NLP models or other models fail, this area of sarcasm and humor is so difficult, and there's this further distinction you're drawing out: there are some memes that are jokes and sarcasm, and some memes that are jokes and sarcasm to the point of being very, very harmful, and that's tied into the context of where they're put, or the timing of when they're put somewhere. So I'm wondering if you could break down, as you're stepping into addressing some of this... you already mentioned that you're frequently updating, new features, new behaviors you're seeing that didn't exist before. Let's say ActiveFence starts to understand that there's some new behavior that's harmful. What is your process, and how do you think about going from knowing this is happening to detecting that it's happening in a repeatable sort of way? So there are a couple of different ways we're basically staying up to date.
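The contextual-AI idea just described, where a detector's raw signal (a terror logo, a weapon) is weighed against surrounding context from the title, description, and comments, can be sketched roughly as follows. The signal names, the mitigation rule, and the numbers are all invented for illustration; a real system would learn these interactions rather than hard-code them:

```python
# Combine per-modality detector scores with contextual signals into one
# risk score. Signal names, weights, and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Content:
    image_signals: dict = field(default_factory=dict)  # e.g. {"terror_logo": 0.9}
    text_signals: dict = field(default_factory=dict)   # from title/description/comments

# Contextual signals that lower risk: e.g. a news logo alongside a terror
# logo, or a cooking context alongside weapon detections.
MITIGATING = {"news_logo", "cooking_context", "historical_context"}

def risk_score(c: Content) -> float:
    base = max(c.image_signals.values(), default=0.0)
    # Each mitigating contextual signal reduces the base risk in
    # proportion to its strength (a crude stand-in for a learned model).
    for name, strength in c.text_signals.items():
        if name in MITIGATING:
            base *= (1.0 - 0.5 * strength)
    return round(base, 3)

isis_clip = Content(image_signals={"terror_logo": 0.9})
news_clip = Content(image_signals={"terror_logo": 0.9},
                    text_signals={"news_logo": 1.0})
print(risk_score(isis_clip))  # 0.9
print(risk_score(news_clip))  # 0.45: same logo, but news context halves the risk
```

The point the sketch makes is the one from the CNN/ISIS example: the same image-level detection can land on either side of a moderation threshold depending on the context it appears in.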
The first is really, really close contact with subject matter experts who are out there gathering intelligence, researching, collecting data, building keyword databases, looking for particular bad actors that frequently post things. We're frequently talking to them and understanding what it was that made something violative, or they'll send us things and say, hey, this is a new hate group, or hey, this is a new meme, and so forth. That's one thing. The other thing is that within our models we're constantly getting feedback. We have something we call the database of evil, which... I mean, we might as well call it what it is, right? That has to be the best name I've ever heard for it. The database of evil, and it's true to its name. I believe it. And so we keep that updated. We have data coming in, we score it, we give it a risk score, which is essentially the probability that it's violative for some violation, and then we have trained analysts who review it, review the score, and can say yes or no. Anything verified as violative goes into the database of evil. The database of evil is used for a few things. One is that when new content comes in, we can ask: have we seen this before, do we know it? Things we've seen a lot versus things that are brand new. It's also used because, as we take this feedback, we're constantly retraining, we're learning; those are the small adjustments we can make to our models. So it's this idea of constantly getting feedback, both from researchers who go out and find things and from the data coming in and being scored, and then we're retraining on top of that, and then of course we have our database of evil. So let me ask a question. Obviously, as you pointed out, your database of evil has a lot of really explicitly evil stuff, but I'm also imagining there are gray areas, like the baby's-first-bath kind of thing you mentioned, that depend on audience and context. We have a new baby in our family; my niece has a new baby, and if she showed me a photo of the new baby having his first bath, that would not be offensive to me, but there are contexts where posting it online could become offensive. So with these gray areas, and the fact that you can have one piece of content with a bunch of different acceptability rankings, if you will, depending on who is viewing it and what the context is, how do you approach making sense of all the gray area, when there's everything from perfectly fine to absolutely not fine, and it's all valid for the same thing? That's a really pertinent question, and it's something we're dealing with a lot. I don't think we've completely cracked it, but one thing we do is have a database of evil where evil is relative for the client, which also leads to this idea of customized models per client, based on the feedback coming from them. And we have this now: we have two clients, and for one of them baby's first bath is violative, and for the other it isn't. So we already are juggling this, with different levels of human intervention to get it to perform, because that's what's training it to get to that point.
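The database-of-evil workflow described here (score incoming content, have analysts verify it, remember what's been verified, and apply per-client policy) might look roughly like this sketch. The client names, violation labels, and the exact-hash lookup are all invented for illustration:

```python
# Sketch of a "database of evil": verified-violative content is remembered
# by hash, and the verdict for new content depends on per-client policy.
# Names, policies, and the lookup scheme are illustrative only.
import hashlib

database_of_evil: dict = {}  # content fingerprint -> verified violation type

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def record_verified(content: bytes, violation: str) -> None:
    """An analyst confirmed this content is violative; remember it."""
    database_of_evil[fingerprint(content)] = violation

# Per-client policy: which violation types that client considers violative.
CLIENT_POLICY = {
    "client_a": {"hate_speech", "terror", "child_nudity"},
    "client_b": {"hate_speech", "terror"},  # baby's first bath is fine here
}

def is_violative_for(client: str, content: bytes) -> bool:
    violation = database_of_evil.get(fingerprint(content))
    return violation in CLIENT_POLICY[client]

record_verified(b"<bath photo>", "child_nudity")
print(is_violative_for("client_a", b"<bath photo>"))  # True
print(is_violative_for("client_b", b"<bath photo>"))  # False
```

A real system would use perceptual hashing or embeddings so near-duplicates of known-bad content also match; the exact SHA-256 lookup here is only the simplest possible stand-in for "have we seen this before?"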
there's even other examples where, you know if we take it you know baby, baby first bath and things like that, people are like you know oh but but it's, so clear what you know child abuse and, pedophilia are and and clearly it's not, and even things like um asking someone, are your parents home so if it's a, conversation between two children on a, chat room that's totally fine but it can, take a much darker turn when you, suddenly see that it's you know a user, that is also in you know adult chat, rooms or it's being posted like you know, at around 8:30 9:00 10:00 p.m. when kids, are supposed to be starting to go to bed, or to be in bed I don't know my kids, like go to bed early and so and you know, and and so you can start so so even then, like you can say well I I have language, understanding and you know this is a, kids chat room but suddenly there's like, all these other levels that you need to, take into account to understand if a, phrase really is just nothing just as, kind of setting some I I am living what, you just described I've grown kids but I, also have a daughter who is just getting, to the point where we're letting her get, online and do some of the stuff and some, of it's in supposedly safe environments, but then uh as the nosy dad who's just, worrying about keeping his child safe, there are all sorts of gray areas and, stuff so it's it's it's fast and there, are also some moments where I'm having a, lot of trouble telling whether it is a, safe context or not it's not very clear, and so um I I can imagine that that is, extremely challenging you know to solve, as a technical problem that can be you, know recreated across a lot of different, uh audiences totally um and and I think, that there's always going to be to some, like at least when we're when we're like, training or whatever right there's, always there has to be a human in the, loop and and for these gray areas so we, do as much as we can with technology uh, and we've been get there but even 
if as, a parent you're looking at and you're, saying you know I don't know and so, sometimes we can leverage things that, you don't have access to right like we, can look at the history of the user or, the other other chat rooms or other, things that are going on in the space or, how this you know who has been in this, chat room before but you know sometimes, it's it it comes down to you just don't, know so I I I have a lot of Chris always, knows I I like to ask a lot of practical, questions before before I get to those, in terms of like some of the things, you're doing and and how you're doing, them I'm wondering for a company that's, using some of active fences techn ology, what does that connection look like like, one of the examples I'm thinking in my, mind is I'm working on a website for, some of our partners where we're people, can contribute like en list cards for, tools that they're they're working on, like software tools and you know, technically like they could submit, anything in that description of that, tool you know uh now I think like we, have hopefully vetted people that will, be submitting content and actually not, everyone has accounts and so it's fairly, restricted but but yeah I'm wondering, like in that situation or a much more, scaled up situation what does it is, working with the Right View to be have, to have like your software platform and, then you send off content to some API, and get a threat score or something and, then you figure out what to do with that, threat score in terms of how does this, actually practically work out for a, company in terms of because I imagine, it's complicated every company has like, their different platform right and also, like oh you know the the format of a, Facebook message you know going to a web, Hook is going to be different than like, a blog post being posted to a Content, management platform so in terms of like, data moving around how does that that, work out practically yeah so there's I, think 
there's kind of two parts to maybe, more parts but two main parts to the, question the first is like how would you, as user interact with us and so we have, a UI a platform where you can really you, know see the content that's coming in, you can Define sort of codeless, workflows where if you know something is, above a certain risk ore threshold then, you know it's automatically filtered out, if it's below a particular risk or, threshold and you don't even look at it, and then like what is your threshold for, for human moderation that sort of gets, between that sort of gets around this, sort of like um Precision recall uh, conundrum where you're like well I set a, threshold and I always have to choose, you know what am I maximizing and you, can say well let's set one threshold, where you're maximizing your your, precision and you know another one where, you're comfortable with your recall and, then you look in this sort of band and, then you can use that to moderate we, also have an API um you can send you, know you can do like synchronous calls, for for text so like near real time, really really fast so for chat if you, want to try you know pre-published or, and so forth and we also have async for, for text and for for images for video, sort of and you know more like you can, send the full context right so you can, send your content and you have your like, the body of the media and the title and, description whatever you have that's for, the first part of your question and for, the second part I think is is maybe like, a little bit more interesting kind of, because everyone you know you can build, an API whatever but what we've spent a, lot of time in it and time on is both, like optimizing your API so making sure, that it's you know very robust and, responsive and so forth and then also, modeling our data and so we have a very, Rich understanding of the world of, platforms of like how we can model the, world of online Media or online, platforms or user generated 
content (pick your favorite term). We model it; we have a very robust and flexible schema where we're able to model a user and their related posts: the posts they put up, how many likes they have. It's not always relevant and you don't always need to use it all, but we have users, we have contents, and we have collections, and each of those is modeled a bit differently. Once the data comes in and we ingest it and it's modeled like this, we can take it apart and score the different parts of it, and then, through our API, which is able to handle really high throughput with a fast SLA, we basically start giving you responses. We've done a lot of work optimizing our backend, so we're batching models on GPUs and doing all sorts of things; we have code that basically optimizes what machine type you want to run on, to make sure everything runs as smoothly and as reliably as possible to get those responses out. [Music] So, Matar, one of my questions, just in the back of my mind, is about the practicalities of running the type of platform you're building and the service you're running. I could imagine: I have model one that's able to detect this harmful type of meme, and model two that's able to detect this harmful type of video, and all of a sudden you're proliferating hundreds or thousands of models for little pieces of what you're trying to detect. Another scenario is: well, I'm going to try to standardize everything into more generalized models that handle multiple modes of data, or try to synthesize things together. Just practically, as a
development and research team, how have you started thinking about when something is maybe worth combining into a larger model that addresses multiple tasks or multiple types of data? And the other side of that is, maybe sometimes it is useful to just spin up hundreds of small models and ensemble them together in some way. Any thoughts on that? Yeah, so we actually do that sometimes. We have models that are really lean and we serve them as is, and that's, like I said, for our near-real-time responses. When we do contextual stuff, like I said, we really need to extract information in as many different ways as we can: we're looking for logos, we're listening to the audio, looking for known phrases and keywords, language understanding, and what have you. All these smaller models we then combine into ensembles. We have a feature store that we can take from, combine and train the relevant models, and productionize them, and then add what we call indicators; essentially indicators from which we can get features, and then go to a model which is an ensemble of these. So we use both approaches, based on the SLA requirements, and based also on the explainability that we need. We want to be able to explain why something was flagged, right? This particular logo was found, because sometimes the moderator may not have the full knowledge that we have. A big thing we deal with is how we can take our intelligence and leverage it to the fullest extent. One way is to put it in the models, and the other way is to educate the moderators through the explainability of the models, so they can really understand
why; sometimes things aren't obvious. Yeah, and I guess you started getting to my other question, which is how and when you bring the subject matter experts into the loop. I imagine there are certain cases where it's highly probable that this is some type of harmful situation, and given a restricted set of subject matter experts in an area, maybe they're restricted to only reviewing X amount of content per day or something. Is that a situation you run into, where you have to prioritize what you're reviewing with subject matter experts based on some predictive measure you have, and do that in some sort of ranked way? Or do you handle it some other way? Do you mean in terms of what the analysts are reviewing for us? Yeah, for labeling and for reviewing. Right, so I'm assuming there's a limited number of those people; there's not an infinite supply of them. There's not an infinite number of those people, and we also want to be very aware of their well-being. We care a lot about the well-being of the people we work with; ActiveFence invests a lot in that. So specifically for these analysts, I want to make sure I prioritize what it is they need to review. I don't need them to review everything. Usually what I go for is the gray zone. We have implemented active learning, which is basically to prioritize what we want to train on, and that also prioritizes what we want to review and label, because I'm always going for the gray zone: the things we're not quite sure of, that we don't really know. That's where it goes to the expert. It goes back to Chris's question, like how do you know?
Sometimes you do know, but it's tough, and those are the things I want to label, because the things that are tough are what's going to feed in and give my discriminator the maximum power that it needs. And, sorry to steal all the questions, Chris, I'm just so fascinated by all this. No worries. One thing that is always on my mind, and that maybe I wrestle with sometimes, is how much your data science people, the people working with the models directly, interact with the subject matter experts and share knowledge across that boundary. How do you balance that? That's always something I struggle with in projects: ultimately it would be great to bring the subject matter experts in all along the way, in every step of everything, because you learn so much. But the fact of the matter is you've got a limited number of those people, and also you have to ship things, right? You can't necessarily have the luxury of always having a discussion before you make a development decision. So how do you balance that, especially because this is such a complicated environment in terms of the subject matter? How have you found ways to balance it, and what takeaways do you have from that experience? Yeah, so one thing we did is we actually embedded subject matter experts, researchers, into our dev teams to be part of the process. We also have our analysts, our labelers; they work really closely with us, they're part of the same group, so they're not off somewhere else. However, again, it's a limited number of people, and it's a limited number of violations; we're constantly being exposed to new stuff that we have to handle, and there it's just a matter of
relationship building, of doing regular touch-bases and constant feedback. They're kicking off a new project, so we come and learn what they're doing, because a lot of times they're learning on the fly too. They have this new trend, this new thing they're learning about, and as they're learning we're trying to gather as much as we can from them, and then it's just constant feedback: how does this look, how does that look, is this one, is this not? But I think the key was embedding them with us. We did have situations where we wanted to develop models for things we didn't want to expose our data scientists to, things that only a very few people in the company can be exposed to because of the nature of the violation, and that was much trickier, because there was complete dependence on the data scientist. How do you build and train a model without looking at the data? So I'm kind of curious, because what you just said changed the question I was about to ask you a little bit, so I'm going to combine two things. The sense I got, because you keep talking about going to the gray area, is that almost the core of your research effort is to replace the human intuition that's necessary early on to identify the nuance that's there with more and better models as you move forward. It's almost like a bell curve of difficulty, where the gray area is the hardest. I'm wondering if that's the case, but I'm also curious: when you mention those things, it seems like the things you really don't want to expose someone to are not in the gray area; they're way over on the deeply evil side, where you're just never going to forget having
been exposed to it. How do those balance out? You have that gray area you're focusing on, which you've mentioned several times, and then you have those other kinds of things. When you have something that's so explicitly evil it will imprint on a human's mind in a very negative way, are those different problems you're solving, as a data scientist, than the gray area, where there's so much nuance? Do you see what I'm getting at? How do you balance the approach of building models to handle something that's really obviously bad, where you just don't want to expose anyone to it, versus developing intuition, or an alternative to intuition, in the gray area? Yeah, so, and let me know if this doesn't quite answer your question, but a lot of times it can be the same model. You have a model, and it learns to identify things because the data is distributed in some way along the space, right? The things that are very obvious are going to be on one side of the discriminator's decision boundary. Pick your favorite violation; there are examples of things that are very, very clearly violative, and they sit far from that line. When you're training your model, you don't only want to give it the really horrible examples and then things that are just puppies and snowflakes, which are very obviously not violative, because the distributions are so far away from each other that your decision boundary is just never going to converge. It can flip-flop back and forth and you'll never know. So as we're doing it, we're also trying to find things that are on the borderline, because that's what's going to help us really make
sure that we're able to find the good decision boundary. Because at the end of the day, the basics is that we have to be able to catch the ISIS content, we have to be able to catch beheadings and all these terrible, terrible things, but we also want to be able to catch things that are less obvious within that same space. That's where I'm talking about the gray area. For the grooming model, I don't even want to say it, but you can think of phrases that are very obviously grooming, where someone is sexually harassing a minor and it's there in plain text. That same model, if you want it to be any good, sure, it's helpful that it can find the obvious stuff, but you also want to train it on things that are closer to the boundary, because that's what will help you in the long run. I have a quick follow-up that was coming into my mind as you were talking. It's a very human question; for a moment I want to move you out of the data science bit a little. Your organization is in a bit of a unique position. You mentioned that there are certain things you want to keep as many of the folks on your team not exposed to, but that does leave some people exposed to some pretty awful stuff, and that does impact people. I'm guessing you're one of the people who has had to see some of those pretty tough things. How do you cope with that? You seem to be really grounded, but as part of the job you're going to have to cope with some really tough stuff. I know there are things I have seen myself online that I wish I just had not seen. I remember early on in the al-Qaeda period, some time back, and
I watched something that was in the news that happened to be out there, and I was like, I wish I had never seen that, and I will never forget it as long as I live. I'm just curious, in a human sense, how do you cope with terrible things and keep it in a healthy place for yourself, if that makes sense? Yeah, that does make sense. So personally, I'm a very mission-driven person. It's very clear to me why we do what we do, and I really, really believe in it. Like I said, a bunch of us are parents, or have nieces and nephews, or just care about kids, or care about communities and environments, and I very deeply believe in what we at ActiveFence do. You can really feel that when you're in the office and working with people; everyone is very mission-driven. That being said, we also do our absolute best to support and protect everyone who works with us, whether it's different wellness support programs, or the psychologist we have on staff who specializes in resilience. She's available to everyone, she does group and one-on-one sessions, and really helps people build this sort of resilience in the face of what we do. For me, what personally works is understanding why what we do is so important. And yeah, I've definitely seen things that will never leave me, never, and I accept that, because I'm doing my part to make everything more just. You're helping the world in that way, I get that. Hopefully. I try; we all try. I think, even though we've talked about hard things this episode, I'm super encouraged, and I want to thank you and the team at ActiveFence for what you're doing. It's something that's desperately needed, and thank you so much for digging into these
problems and doing it with such technical excellence and deep insight. As we close out here and look to the future, what, on the positive side, excites you about where this technology is headed? Yeah, so to me, first of all, it's super exciting that, like we said when we started off, this is no longer a nice-to-have; it's a basic expectation. To me that's really exciting, because people are not taking safety for granted; they understand how critical it is, and it's coming from the users. It's not just, oh well, it is what it is, this is the price I pay for being online. No, that shouldn't be the price you pay. So to me that's exciting. And on the tech side, I think what's cool is that we're seeing a lot of open sourcing of different models, whether it's data generation or audio transcription or zero-shot models, all these things. It's like a candy store: you can start thinking about all these technologies that are used for completely different things, and you can say, well, how can I take these ideas and use them to extract more signal, and to look at these things from different angles? And it's an adversarial space, so it keeps it interesting, at least. Well, Matar, the work you're doing is very inspirational. I know it's tough work, but thank you very much, and to your teammates, for doing the kind of work you're doing. It was great having you on the show, and we're looking forward to having you back sometime as you surge forward and have more things you want to share with us. Thank you very much for your time today. Thank you so much for having me, thank you for caring and for asking really, really interesting questions. I appreciate [Music] it.
All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next [Music] one.
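To make the dual-threshold moderation scheme Matar described earlier in this episode concrete, here is a minimal sketch. The threshold values and function name are hypothetical, chosen only for illustration: content above a high-precision threshold is filtered automatically, content below a high-recall threshold is allowed without review, and the band in between is routed to human moderators.

```python
def route_content(risk_score, auto_remove_at=0.9, auto_allow_below=0.2):
    """Route content by risk score using two hypothetical thresholds.

    auto_remove_at: set for high precision, so auto-removals are safe.
    auto_allow_below: set for high recall, so little harm slips through.
    The band between them is the "gray zone" sent to human review.
    """
    if risk_score >= auto_remove_at:
        return "auto_remove"
    if risk_score < auto_allow_below:
        return "auto_allow"
    return "human_review"

# Example routing decisions across the three bands
for score in (0.95, 0.5, 0.05):
    print(score, route_content(score))
```

In practice the two thresholds would be tuned on a validation set, one to maximize precision and one to reach an acceptable recall, which is exactly the workaround for the precision/recall conundrum discussed above.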
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Hybrid computing with quantum processors | It’s been a while since we’ve touched on quantum computing. It’s time for an update! This week we talk with Yonatan from Quantum Machines about real progress being made in the practical construction of hybrid computing centers with a mix of classical processors, GPUs, and quantum processors. Quantum Machines is building both hardware and software to help control, program, and integrate quantum processors within a hybrid computing environment.
Leave us a comment (https://changelog.com/practicalai/200/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Yonatan Cohen – Twitter (https://twitter.com/cohen_phd) , GitHub (https://github.com/yonatan-cohen-h) , LinkedIn (https://www.linkedin.com/in/yonatan-cohen-10076b113)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Quantum Machines (https://www.quantum-machines.co/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-200.md) | 6 | 1 | 0 | If we look at the hardware of a quantum computer, it actually has two main parts. It has the QPU itself, the quantum processor; that's the quantum hardware, where the magic happens, where you have these superpositions and the qubits and all this crazy quantum stuff. And then you have what we call the control hardware. This is actually not quantum hardware; it's classical hardware. It interfaces with the quantum processor, talks to it, and operates it, makes it do what we want it to do. That's very complicated hardware that one has to build specifically; it's not regular servers or anything like that. It's really hardware for controlling a quantum processor, and that's what Quantum Machines [Music] does. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at [Music] fly.io. Well, welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing well, doing well. How are you today, Daniel? I'm doing great, because any day when I get to throw back to my old physics days, which I mostly don't get to dabble in these days, is a fun day. Today we've got a cool intersection with that world. We've got Yonatan Cohen with us, CTO at Quantum Machines, and we're going to talk a little
bit about an update on quantum computing and how that intersects with AI. Welcome, Yonatan. Hi, thank you. It's really cool to be here, and I'm excited to talk to you about some quantum physics as well. Yeah, great. Well, I'm guessing most of the time on this show, most of our listeners are used to hearing us chat about neural networks or GPUs or classical computing. We have talked about quantum computing on the show, but it's been quite some time, and I'm sure the field is advancing quickly. Maybe just as a starting point, could you remind us of the general pitch: what is quantum computing, and why are people interested in quantum computing more generally? Sure. So quantum computing is a new kind of way to build a computer, based on the laws of quantum mechanics. It's kind of interesting, because quantum mechanics as a theory of nature was developed in parallel to the development of computers over the last 100 years or so. On one hand, we developed this amazing technology that we now have, which is computing based on classical physics, based on the classical laws of how the universe behaves. But in parallel, over the last 100 years, there is a completely new understanding of how nature works on a very fundamental level. Somewhere in the very late '70s, early '80s, physicists started to understand that perhaps we can use these new laws of nature that we discovered to also build a new type of computer, one that's going to harvest this weird behavior of nature that we call quantum mechanics. It turns out that you can do that, and it expands the notion of what we mean by a computer, and essentially allows us to build a computer that has stronger computational power, at least for some problems, not for all problems, but for some very difficult
problems that classical computers (we say classical meaning not quantum, regular computers) have a very hard time dealing with; for some computational problems, quantum computers could solve them very easily. So as you're talking about this new type of computer, would I be correct in thinking that I can't run out to the store and buy a Pentium or an AMD chip and pop it in with the RAM on the motherboard? What's different about the approach on the hardware side, and what would you use it for that's different from the classic idea we've all been using all our lives? Yeah, so the main point of quantum mechanics, even before quantum computers, is this: while we see in the day-to-day that things are in a specific state, you know, I'm drinking coffee from my cup, and I put it down and it sits in one place, it's in one place and not in another place, in quantum mechanics things can actually be in various states at the same time. While we don't see it with coffee cups, we do see it with electrons, for example. We can actually put an electron in what we call a superposition, which means it's in two places at the same time. And this is exactly what quantum computers take advantage of. The basic building block of a classical computer is the bit, the bit of information. It's a system with two states: it can be either in the zero state or in the one state. If you have two bits, they can be 00, 01, 10, or 11, and so on. If I have eight bits, I have 256 states, and so on. But at every single point in time, the classical computer can only be in a single state of all the bits of information it holds. Then we manipulate those bits; we go from one state to another state to another state, and so on.
So it's like a state machine; we're moving from state to state in order to solve a computational problem. But quantum computers replace this notion of a bit with what we call a quantum bit, or in short a qubit, which is a system, like the electron I told you about, that can actually be in two states at the same time. It can be in both zero and one at the same time, and in fact the system can be in zero and one with different weights: it can be a little bit in zero and a lot in one, or a lot in zero and a little bit in one, and so on. And when I now put a lot of quantum bits together, a lot of qubits together, I can be in this massive superposition of many, many states, all the states of the system, at the same time, and in some cases you can use this parallelism to do sort of parallel computation instead of going state by state, if that makes sense. So with that massive parallelism, it sounds like it can handle probabilities across many states very well. What type of problems does that kind of capability lend itself to, that maybe the laptop I'm on right now would not be as well suited for? What does a problem look like where having lots of qubits helps? Yeah, it's a great question. There are certain problems where we know for sure that quantum computers give an advantage. One great example is Shor's algorithm. That's an algorithm that actually breaks codes, because it finds the prime factors of a large number. If you take a large number and try to factorize it into its basic prime factors, that's a problem that's very hard for classical computers, and it has some structure, the problem has some structure that allows us to use this parallelism of a quantum computer in order to
solve this problem exponentially faster than what we can do today on a classical computer. Now, it's very hard to explain exactly what in this problem makes it so that we can use this quantum parallelism to solve it so much faster; you kind of have to look at the details of the problem, the structure of the problem, to see how you can take advantage of this quantum parallelism. So we don't know exactly, and we cannot actually categorize exactly, all the problems that quantum computers will solve much faster than classical computers, but we have examples. We have Shor's algorithm, and we have Grover's algorithm. Grover's algorithm solves basically a search problem, searching an unsorted list, finding a specific element of interest in an unsorted list, which can also map to very general optimization problems. But this algorithm, for instance, does not give us an exponential speedup; it only gives us a square-root speedup over what you could do with classical computers. So there is this zoo of algorithms that gives different kinds of speedups for different kinds of problems, and people are still working very hard on the problem of categorizing exactly what quantum computers can solve faster than classical computers. The other thing is that there are also quantum algorithms that we have today that use this parallelism of these qubits and that we don't even know are going to work. These are heuristics, basically, that people have come up with. There are good reasons to think they would give us a speedup over classical computers, but nobody can prove it. So we are basically in the same kind of situation that AI was in maybe a decade or two ago: we have some thoughts about these algorithms, people are very hopeful that they will work, and these are relevant for
optimization problems, very general optimization problems. There is an algorithm called QAOA that solves combinatorial optimization problems, and then another algorithm called VQE, which solves chemistry problems, and these are heuristic algorithms where we just have to build a machine, try the algorithms on it, and see if they work better than classical computers. Yeah, I have maybe one naive question from my own perspective, because obviously you're well plugged into the state of the art, and Chris, I think, actually knows a little bit more as well. For me, I've been hearing about quantum computers for some time, and one naive question of mine is: what actual quantum computers exist in the world right now? Because sometimes I go to one of the cloud providers and I'll see, oh, here's the AWS quantum thing, but when I look at it, it's really just so you can run simulations for quantum research. If I want to run something on an actual quantum computer, what would be my choices right now, in terms of where those actually exist, the state of what they are, and how you see that shaping up over the coming couple of years? Yeah, so today you can access quantum computers through the cloud with several players. IBM, for example, has their quantum cloud service that actually gives you access to real quantum computers. You can log in and actually run algorithms. Well, these are small quantum computers, of course; they're still prototypes of the technology. But you can actually run algorithms and see how they behave, as well as run on simulators, and you can compare, and then you may find that your quantum computer is doing not as well as the simulation, because it has a lot of noise. This is the main problem with
quantum computers today. But you can do that, and that's actually very cool. IBM, for example, gives access to their own computers, but Azure and AWS have launched their quantum services in the last few years, and they give you access to third-party quantum computers, many different computers built by full-stack quantum computing vendors, like IonQ, like QCI. These are startups in quantum computing that build full-stack quantum computers, and you can access those machines today through the cloud, which I think is very cool. And they also give you access to simulators, and you can run on those as well, which I think is quite amazing, to be honest. There are certain experiments I used to do during my PhD at the Weizmann Institute, only 12 years ago, that used to take a PhD student maybe three or four years to set up and run, and now I can just log into one of these cloud platforms and run an experiment that, just 10 years ago, used to take a few years to set up. I can do it in probably a few hours, and that, I think, is [Music] something. [Music] So one of the things I was curious about, just to follow up on what you were talking about a moment ago, is to set my expectations with quantum computers. Assuming that, because we're going quantum, we're not going out and getting a typical CPU, is it mainly that the CPU becomes a quantum CPU that you put into the computer, and are there any other types of changes you would make to a classical computer? Just to get a sense of it: how close is the generalized architecture of a quantum computer to a classical computer, and how much has to change to make that leap? Okay, that's a very good question. I think a lot of things are very different in quantum computers, because the basic operations are
fundamentally different. Classical computers, again, are built on bits, and then you have gates, right? That's how you manipulate the information, so you can actually build an entire computer from a NAND gate, right? But in quantum we don't even have the notion of a NAND gate; we have other gates. We have a Hadamard gate, we have a CNOT gate, we have different elementary logical operations on qubits, right? So that makes it so that the entire stack, or at least the low-level parts of the stack, needs to be very different. But then I would also say that the way I see quantum computers giving us an advantage, especially in the early days, is not just by being used as standalone computers, but actually more as an accelerator inside a more heterogeneous data center, right? The quantum computer could do only certain problems; it's not going to replace the CPU, and it's not going to replace the GPU. There are still things that there is just no reason to do using quantum devices. Maybe in 50 years everything is going to be quantum and quantum devices are going to be as cheap as classical computing devices, but right now you will have your CPU, you'll have your GPU, and you'll have your QPU, and the QPU will be used to accelerate certain types of subroutines that can help your entire application. And so that's why it's important that we also build integrations of these quantum computers into a more heterogeneous computing environment, and allow people to program what we call hybrid workflows, so quantum-classical workflows. Could you describe what one of those workflows is like? That was a great explanation, by the way, of the difference in a quantum computer, but you've mentioned several times along the way that hybrid idea of fitting it into a larger architecture that includes a lot
of those classical components. I'm trying to get my head wrapped around what kind of problem I might solve as a user, an example that would use both the quantum and the hybrid parts, and what part of the problem goes to each. Can you give us some sort of example of that? Yeah, sure. So the typical example is what we call variational quantum algorithms. Actually, both the algorithms I mentioned before, QAOA and VQE, are variational quantum algorithms. What this means is that essentially we're trying to minimize a cost function of interest (that's the problem we're trying to solve), but the way we do it is that the QPU, the quantum processor, is only computing the cost function. Let's say we have a cost function that's very hard to compute on a classical computer; the quantum processor basically only calculates the cost function given a set of parameters, but the optimizer sits on the classical side. So what we're doing is running quantum circuits, what we call a parameterized quantum circuit: we're running a quantum program with some input parameters, then the quantum processor computes the cost function, and then the result goes into the classical processor, which runs an optimizer, like, I don't know, gradient descent or something like that. That would generate new parameters for the quantum circuit, the quantum program, and then it goes back and forth like this until we find the parameters that minimize the function. Yeah, this is a really interesting example, because I've been thinking about connecting some of the challenges in the AI world to this space. I mean, you just mentioned gradient descent, you mentioned optimizing a cost function; this is all very connected to what might happen in
an AI training scenario, of course. There are certain computations, like you say, that work well on a GPU, and that's sort of scaled up right now. But I'm wondering, as you're diving into these types of hybrid problems with quantum machines, what might you see right now in the AI industry, or for certain sets of problems in the AI industry, in terms of the challenges the AI industry is facing around compute? What are people thinking might overcome some of those challenges on this sort of hybrid computing side, with potentially a quantum advantage? What's the current thought process around that? Yeah, so I hope that, for instance, using some of these hybrid algorithms, we would be able to solve some optimization problems that are also relevant to AI training. To be honest, I'm far from being an expert on quantum AI or quantum neural networks specifically, but I know a little bit about the subject, and I know a little bit about the challenges in AI, and I know that some of the algorithms that we have for quantum computers could solve these optimization problems. When we train a neural network and want to optimize its parameters, in many cases this is a very hard optimization problem, and in some cases it could fit exactly the kind of optimization problem that a quantum computer, and specifically these hybrid algorithms, might be able to solve. So then we could use these hybrid quantum-classical algorithms to train neural networks; that's one example. There are also some recent works on quantum neural networks, where the network itself is quantum, and some mathematical proofs of why such networks would require, again in some specific but important cases, much less data to train. And that I think is very important, because in many
cases we just need a lot of data to train those neural networks, and in many cases we don't have enough data, so maybe quantum neural networks could be relevant in some of those situations. It's a super interesting subject, and I think people are obsessively looking into it, and I'm just dying to have the machine, because I think many of those things are only going to start to reveal themselves once we have big enough quantum machines that we can try some of those things on. Because again, a lot of those things are heuristic algorithms that we cannot really prove will give us an advantage; we just have to build a machine and try it. Yeah, and I think that gets a little bit to my follow-up question, which is: when these things start to come online (and I know that Quantum Machines is working on platform software and hardware to do that, which we can get into here in a bit), one of the things on my mind is that as these computers come online in some sort of hybrid compute center, I'm thinking about my own workflows. How is my own workflow potentially going to change when I want to run something on a quantum computer? If I just search Google for pictures of quantum computers, I see people with lab coats on, right, going in and doing things. So do you see this as being like: I'm going to have TensorFlow, I'm going to have PyTorch, JAX, whatever; I have my Python code, which I'm executing for some type of AI training, or I've programmed the model architecture or something, and then at certain points there's going to be a library that maybe reaches out to that hybrid quantum piece and executes some of the optimization? How will that affect the actual day-to-day workflow, and what might it be like to
program these things? Because I'm guessing the common AI developer is not going to all of a sudden learn everything about quantum mechanics, right? So somehow there has to be a higher-level kind of abstraction. What do you think that will look like? Yeah, I actually think it will look exactly like what you described. I mean, it's going to take some time to get to that place, but I think that eventually that's exactly how it's going to roll, because, as you said, not everybody is going to learn these new types of quantum programming languages and these fundamentally different new operations in quantum and how we use them. And in fact, I think this is not even necessary, because, again, there will be certain subroutines that of course we can configure, we can parameterize, and we can use in various different ways, but this could be done as a library, as you said, that one could use and embed in your workflows. Now, that's going to take some time, and I think that in the very early days we're actually going to have to have experts that are really going deep into the machine, into the low-level programming of the quantum machine, and doing all the optimizations, to get to a use case like that, where you can use it at a high level and just solve your problem, right, or accelerate your problem. Just as a follow-up, as we're getting into tooling and getting to what your organization does, but more of a general follow-up real quick: I'm curious, you mentioned IBM earlier, and they're well known in the quantum space for all that they do in the cloud. They do have an open-source Python SDK for that, and it being in Python makes it very convenient for other AI tooling. So would you
expect things like TensorFlow and PyTorch to wrap that library, or other libraries like it, to provide that kind of quantum accessibility, if you will, to people who are not otherwise quantum experts, and kind of let the tooling do that? Do you think, as a general thing (I'm not talking about IBM or TensorFlow specifically), that introducing the public to quantum through tooling that makes it easier is what we should expect going forward? Yes, absolutely, I really think so. I mean, it has several layers: there are programming languages and abstractions, then there are application libraries that I think are going to be important, and then, yes, even higher-level things like TensorFlow and PyTorch, as you mentioned, that could just wrap those things, and some people won't even need to know what runs under the hood. So I think that eventually that's what's going to happen, and people are starting to play with that, for sure. I mean, IBM, as you mentioned, they don't just have their open stack where you can access the quantum computer at a low level and program the gates; they also provide libraries, and there are many other startup companies doing that these days, going up the stack. And I think that these tools are going to be very important. I also think that we're going to discover a lot of things in the next five, six, seven years, and I hope that many of those tools don't have to make a U-turn or something like that; I think we will have to learn a lot and change things as we go along. But that's part of it. I mean, we have to start; we have to build the entire stack. That's why it's so exciting. We're waiting for the hardware to mature, but we're
building the low-level software parts of the stack and the high-level software parts of the stack, and trying to see how all of it is going to fit together. And yeah, I do think that at the end of the day there is no reason why someone would program quantum gates; well, some people will, but most people just want to use the machine for accelerating some of their problems. [Music] So, Jonathan, I was just scrolling through your website, with what you've talked about in mind in terms of this sort of hybrid system that you envision and how people will program these problems, and I see that Quantum Machines specifically is addressing some of the hardware and software platform pieces around this type of system. I even see a nice little GIF image showing a data center with racks of what I'm assuming are classical computers, racks of Quantum Machines equipment, and then a quantum computer in the middle. So I'm wondering if you could maybe generally let us know: as Quantum Machines looked at this developing space, how did you decide where you thought the opportunity was in terms of building out the infrastructure around this? Because obviously there are different pieces of this, from building that type of computer itself to working only on software, but it seems like you're dipping a little bit into both hardware and software. Could you describe your approach and the motivation behind it? Yeah, definitely. So if we look at the hardware of a quantum computer, it actually has two main parts. It has the QPU itself, the quantum processor; that's the quantum hardware, that's where the magic happens, where you have the superpositions and the qubits and all this crazy quantum stuff. And then you have what we call the control hardware. This is actually not
quantum hardware, it's classical hardware, but it's the interface: it's the hardware that interfaces with the quantum processor, talks to it, and operates it, makes it do what we want it to do. And that's very complicated hardware that one has to build specifically; it's not regular servers or anything like that, it's really hardware that's dedicated to controlling a quantum processor. That's what Quantum Machines does; well, that's what we started from, because we saw a bottleneck there. This was five years ago, when we started thinking about those things and about starting a startup in quantum in general. Those were the early days, when the quantum industry sort of started, and we saw some of the early investments and companies in the US. Me and one of my co-founders, Itamar Sivan, the CEO of the company, wanted to start our own company, and we wanted to do it in quantum, because that's basically all we knew coming out of our PhDs in quantum devices. And we knew a guy, we had a friend, who finished his PhD about four years before us, and he left to do his postdoc at Yale University in one of the leading groups in the world in quantum computing, the group of Professor Rob Schoelkopf. Over there he basically performed one of the milestones of the field in the last, let's say, decade: he actually performed an experiment demonstrating quantum error correction on superconducting qubits. Quantum error correction is one way, basically the mainstream way, to deal with the fact that quantum computers are very noisy; they have a lot of errors, they just make errors all the time, the error rate is very, very high. And so the mainstream way that we think we'll be able to deal with that is by doing quantum error correction, but it's very hard. So he performed this first demonstration of quantum error correction on superconducting qubits in
his postdoc at Yale. To do that, he had to deal with a lot of bottlenecks that came from the control system; he had to develop, by himself, a new kind of control system for that experiment. This was because quantum error correction is sort of what pushes the control layer of the stack to its limits, so he had to deal with some of the most challenging problems of that time. And when we started thinking about what we were going to do, we realized that most people hadn't hit those challenges yet, but they would in the next few years, and so we felt we had a sort of head start. We wanted to be, like any startup wants to be, higher in the stack, right? You want to do software and everything. But in quantum, we felt that we were at the spot in the stack where there actually is a true need in the market right now. So that's what we do. Let me ask you a question just to clarify, because I know it's central to your business: when you talk about control systems, can you talk a little bit about what exactly a control system does in the context of quantum, just to make sure we understand how that fits into the equation? Sure. So there are various ways to implement the quantum processor, but in 90% of them the qubits are sitting in some kind of array, so they're physically fixed in space somewhere, on a chip for example, or in a vacuum chamber, and you have an array of qubits. You can kind of think about it as, yeah, let's just say an array of qubits. But then, in order to perform the logical operations, the quantum gates, you send signals from the outside in the form of pulses of electromagnetic waves. Just like your cell phone sends pulses of microwave signals to the cellular tower, our control system sends this
orchestra of microwave RF signals to the QPU, to the quantum processor. This is the quantum algorithm in its most raw form, right? You send pulses to your quantum device, and a pulse physically hits the qubit and performs the operation on the qubit. You need to orchestrate this sequence of microwave pulses very, very carefully; it has to be very well timed. You also want to measure signals coming back from the quantum processor: microwave or RF signals come back from the quantum processor, and you need to measure those. Sometimes, for example if you want to do quantum error correction, you need to measure those, perform classical calculations to understand what errors occurred on the chip, for instance, and then respond with new pulses. That's what we call feedback in the control system. And it sounds like, when you're talking about sending pulses, is that software and hardware in a quantum context that we're talking about there, or just one or the other for a control system? Exactly, so you build the hardware, but then you need to program the sequence of pulses, right? You build the hardware that can generate these pulses, but now the user, for example, needs to program the sequence of pulses that goes to the quantum processor and operates it. If you want, this is the assembly language of a quantum computer; this is sort of the lowest-level programming language that you talk to your quantum computer with, because you tell the controller what pulses to send and when, and this is really the lowest-level instruction of the operations. I appreciate the clarification. So, going back for a moment: I as a practitioner am thinking about my practical workflow, and I'm now integrating quantum
computers and your control system for managing that into my workflow. In our context we're doing AI, so let's say we're a little ways into the future and we're starting to look at algorithms on the AI side that could benefit, where part of that is quantum. How does that workflow look, practically, from the practitioner's standpoint? What should they expect? How are they connected, and such as that? So you mean, how would this workflow actually run on the hardware? Yeah, well, to some degree I'm trying to bring it back around from our side, to connect the quantum computing benefit with the things our audience is typically engaged in, which is trying to get AI algorithms, the models, developed and then deployed. As they're going through that process, I'm trying to pull it all together. What does that look like? Is there a quantum computer with the control system that you've produced sitting beside a classical computer that has a GPU and a CPU in it, in the classical sense, with some networking between them? What does that look like if we're five years out, or whatever (you can pick your time frame), at the point where we're starting to integrate that into some sort of practical workflow that people in our audience might be using? What might that look like from your position today, forecasting into the future? It's very similar to what you described. Well, there are two models, I would say. One: let's just imagine an on-prem system, right, where basically I have my GPU and my CPU, and then I have a quantum computer. The way it looks is that the control system is the interface; we are the interface between the classical and the quantum side. So you have your racks of servers with GPUs and CPUs and all
that, and then you have some racks with the control electronics, and they're connected through the network, and the control electronics runs the programs on the quantum processor. We can just call the combination of the control electronics and the quantum processor the quantum accelerator. So now you have a quantum accelerator, and it's connected to the network, and you can talk to it via, for example, our programming language QUA, which is this very low-level, we call it pulse-level, programming language; in the quantum jargon you'd say it's a pulse-level programming language. Then I can write a workflow. For example, I can use a workflow tool (actually, we are developing such a tool for building hybrid quantum-classical workflows; we call it Entropy), or you can use others, and you write a workflow where you run something on the GPU and then something on the QPU, and maybe there's communication between them, so choose your favorite way to communicate, whether it's processes or functions that you call. And then this entire thing would just run: programs on the CPU, then a QUA program on the QPU, and the result would be analyzed on, maybe, the GPU, and so on and so forth, right? And again, eventually you would probably not have to write this quantum code at such a low level; you could take advantage of libraries that by themselves run hybrid workflows, because to solve a certain optimization problem, maybe you already have a library that runs an optimization using the CPU, the GPU, and the QPU, and just solves a certain subroutine for you, and then you have an even higher-level workflow that would, I don't know, predict weather better, or something of that sort, right? I don't know if
this was a no that was, that was a good one that's great it tied, it together for me thank you okay I'm, happy managed to do that yeah yeah and, as we kind of uh get get close to an end, here I I'm just looking through some of, the things that that you're involved, with or the the companies involved with, it seems like there's a lot of exciting, things kind of moving towards the future, I saw some of the press releases around, kind of building involved in in building, Israel's uh National Quantum Computing, Center and other things as as you're, kind of looking to your next year ahead, what are some of those things that are, really exciting for you in terms of how, the industry is shaping up and and what, what your compan is involved with kind, of moving into the next year or two yeah, so first and foremost I mean I'm just, excited to be a part of group of people, that it's trying to build these, computers these machines that are based, on you know kind of deep fundamental, laws of nature and would hope help us, compute faster and also understand, nature uh better so that to me is the, most exciting thing I'm I'm super, excited that you know so many people are, using our products to make this field, progress so I'm just excited for people, to use more and more products and and, improve them so that they could move, faster and this is this are our mission, at qm is to accelerate the realization, of useful quantum computers so hopefully, we're building tools that allow people, to to accelerate the realization of, those those those those computers and, make them useful other than that I'm, super excited to see how the industry, shapes because this is a field that's in, kind of like a formation right the stack, the the entire the structure of the, technology stack but also the the value, chain the value chain is is kind of Def, where sort of the community is defining, it its value chain right now and, defining itself and this is very, exciting to be in in an industry 
in those early days, so I'm super excited for that. Well, thank you so much, Jonathan, for joining us. This was really interesting; it's awesome to see these practicalities of the integration layer and the hybrid systems coming together around real, scalable hardware and platforms. So yeah, thank you for all the work that you're doing, and for taking the time to tell us a little bit about it. Thank you so much for hosting me, this was [Music] great. All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next [Music] one |
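The hybrid variational loop described in this episode (QAOA/VQE style: a parameterized quantum circuit evaluates a cost, a classical optimizer proposes new parameters, and the two alternate until convergence) can be sketched in plain Python. This is a minimal illustration, not Quantum Machines' actual API: the "QPU" here is a classical stand-in function, and all names are hypothetical.

```python
# Sketch of the hybrid quantum-classical loop from the episode.
# evaluate_cost stands in for the QPU: in a real system it would execute a
# parameterized quantum circuit with `params` and return the measured cost.
def evaluate_cost(params):
    # Hypothetical stand-in cost, minimized at params = [1.0, -0.5].
    return (params[0] - 1.0) ** 2 + (params[1] + 0.5) ** 2

def finite_difference_gradient(cost_fn, params, eps=1e-5):
    # Classical side: estimate the gradient by re-running the "circuit"
    # at shifted parameter values (real hardware often uses parameter-shift
    # rules instead of finite differences).
    grad = []
    for i in range(len(params)):
        up, down = list(params), list(params)
        up[i] += eps
        down[i] -= eps
        grad.append((cost_fn(up) - cost_fn(down)) / (2 * eps))
    return grad

def hybrid_optimize(cost_fn, params, lr=0.1, steps=200):
    # The back-and-forth loop: "QPU" evaluates the cost, CPU runs gradient
    # descent and proposes new circuit parameters.
    for _ in range(steps):
        grad = finite_difference_gradient(cost_fn, params)
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

best = hybrid_optimize(evaluate_cost, [0.0, 0.0])
print(best)  # converges near [1.0, -0.5], the minimizer of the stand-in cost
```

Swapping `evaluate_cost` for a real circuit execution is exactly the "accelerator" pattern discussed above: the optimizer never needs to know the cost came from quantum hardware.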
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The practicalities of releasing models | Recently Chris and Daniel briefly discussed the Open RAIL-M licensing and model releases on Hugging Face. In this episode, Daniel follows up on this topic based on some recent practical experience. Also included is a discussion about graph neural networks, message passing, and tweaking synthesized voices!
Leave us a comment (https://changelog.com/practicalai/199/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Daniel’s team license from recent work (https://huggingface.co/spaces/sil-ai/model-license)
• Graph Neural Network courses from Zak Jost (https://www.graphneuralnets.com/)
• Coqui voice studio (https://coqui.ai/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-199.md) | 1 | 0 | 0 | So there was this idea that the data we were using (in this case it was from the Bloom Library, which is a product from SIL where people can create their own books online)... each book, the author releases under a certain license; well, not all are Creative Commons, but the majority are Creative Commons licenses. So we had to look into whether the models we were creating off of Creative Commons data would be subject to the same sort of restrictions as the Creative Commons data we were training on. There are various writings on this; you can actually look it up, Creative Commons has some commentary on when certain things are derivative works or adaptations and that sort of [Music] thing. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at [Music] fly.io. Well, welcome to another Fully Connected episode of Practical AI. This is where Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to discuss the latest AI news and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who's a tech strategist at Lockheed Martin. How are you doing, Chris? Doing great today, Daniel, how are you? Doing well. It's been a pretty fun week; last night I got to speak, virtually anyway, at the Utah MLOps
Meetup, so that was pretty fun. I had a few connections there, and a lot of good questions and such came out of it. It's interesting to see meetups happening again, but also still embracing this bringing-in-virtual-speakers thing, so it's still hybrid in that sense, because if you're a local meetup you've got to bring in speakers, and it's a lot easier to bring in virtual speakers. I think it worked out pretty well the way they had it set up. That sounds interesting, and since you mention it, I'm speaking at a virtual conference next week. Through COVID I had taken a more or less long break, after doing way too much conference talking in the years leading up to it. It's a national security conference, and I'm going to talk about AI, data, and software in the context of national security, intelligence, and defense, so that'll be exciting. Yeah, I'm looking forward to it; it'll be the day after this episode is released. I will put a link in the show notes in case anybody wants to hop in; I believe it's free to attend, so if anybody wants to do that, we'll see what happens there. Yeah, we're also gearing up quite a bit, because in December is EMNLP, which is one of the main natural language processing research conferences (I think it's considered the main one), and that's in December in Abu Dhabi. I'm going to travel there with a couple of colleagues, so that'll be one of the bigger excursions I've taken since COVID times. I'm excited to be there at EMNLP; we had a paper accepted, so I'm excited to present it and also hear from the rest of the community. Actually, it's interesting, because for some of what we did release, and what we are releasing with this paper at EMNLP, we had to think a little bit about licensing around those things, and I
don't know if you remember, I forget which episode it was, but recently we were talking about these Open RAIL licenses for models. The idea is that, okay, for data you have maybe Creative Commons or licenses like that, and for software you might have Apache or MIT or GPL or whatever it is, but models fit into this weird space where they aren't quite either of those things. So how do you license them? People at various groups have tried various things, but there's an effort called RAIL, Responsible AI Licenses, and if you just go to licenses. you can learn more about what they're doing. We made an attempt, for some of the benchmark models that go along with our paper, at creating one of these RAIL licenses for the release of those, and that was quite an interesting experience. I don't know if you remember the tenets of what goes into a RAIL license at all; do you remember that? I remember, and I say this halfway tongue-in-cheek, I remember thinking I'm in the wrong industry for this, because I may need to blow things up with inference, and if I recall, they don't want me to blow things up, so that stuck in my head. Yeah, so I think the biggest thing that people can keep in mind with these Responsible AI licenses is that there are usage restrictions. So Chris, just because there's a RAIL license doesn't mean that you couldn't blow things up or run a drone or something with your model, but it would mean that if they put that in the restricted use clauses of the license. How this works is there's a main bit of the license, and it says various things about copyright and patent and warranty and all these things that you might see in a normal license, but all of those things are subject to the terms of the license, and the terms of the license are subject to this clause at the end on restricted usage. So for example, the
recent release of Stable Diffusion used a RAIL license, an Open RAIL license, and they put in restrictions around things like: don't use this for misinformation or to harm others, various things like that. They recognize there are dangers with these models and they want to put some barriers around that. So that's the idea with these RAIL licenses: it's a way to distribute and make your model available with certain guardrails around usage. What are your general thoughts on that? No, I mean, I think we need that, and that's part of the maturing of this industry. We had that for a long time with those other types of intellectual property, in terms of things like software and such, so I remember as we did that episode not too long ago, it felt like a good time for that to happen at this point, because it was one of those questions people were having. And by the way, for the sake of being responsible, I should let people know that I'm not blowing things up, nor is my employer asking me to blow things up. They don't do that; they don't ask me to blow things up. We use sarcasm on the show from time to time, but I thought maybe I should just specify: I'm disclaiming blowing anything up at all. So there we go. Yeah, I think though, in our license we balanced a couple of things around the restricted use that probably would restrict your usage of our models in certain cases, Chris, not because of military usage necessarily, but commercial usage. One of the things that we wanted to do internally, because of how we had sourced our data, and the fact that this data actually came from local language communities... so actually, we trained our models on books that were written by local language community members, and they released those books under certain rights, some of those being non-commercial
licenses. So we wanted to make sure that we honored that, but we also released the models, because what can happen sometimes with language data is that big companies could use language data from language communities to make money without any real benefit going back to those language communities, right? That was partly also in our mind with this license, so we put two things in our restricted use: one, a restriction around commercial usage, this being our thinking around what benefit goes back to the language community; but then secondly, putting in a restricted use around uses that are particularly discriminatory against indigenous peoples, because you could use indigenous language data to discriminate against indigenous people, right? Yeah, and there's actually a nice clause in the UN statement on indigenous people where they talk about discrimination against indigenous people, and we pulled a reference to that into our restricted use. So I don't know, it's our try at this. It was an interesting exercise to actually put this into practice and figure out: okay, we talked about this on the podcast, but can I actually create one of these licenses for my own models? It was an interesting exercise. So I really like the fact that you were thoughtful enough, for instance on the question of discriminatory practices against indigenous folks, to think about that and make sure that was in. But another thing that I was wondering as you were discussing that, kind of going back to the start of this particular topic a moment or two ago on the show: there's software, there's data, there's the model, there are all of these overlaps between different types of IP, and they're interdependent. Did you have any question in your mind as you went through the process about whether a license from one type of thing, such as
software could clash with the model, license and did you have to think about, that and kind of resolve that a little, bit yes so there was this idea that the, data that we were using in this case it, was from the bloom Library which is a, product from s where people can create, their own books online so each book the, author releases that under a certain, creative well not all are Creative, Commons but the majority are Creative, Commons licenses sure and so we had to, look into whether the models that we, were, creating off of Creative Commons data, would be subject to the same sort of, restrictions as the Creative Commons, data that we were training it on and so, there's various writings you know within, you can actually look up uh creative, comments has some commentary on this of, when certain things or derivative works, or adaptations and that sort of thing in, our case the models that we trained off, of this data whether it's surprising or, not according to how we read those were, not derivative works and so wouldn't be, restricted to the same sort of uh, license however what we tried to do was, we tried to match the kind of, restrictions of the original data just, in good faith to sure how people might, have expected that data to be used but I, think technically we had more latitude, there no that that sounds good and I'm, not surprised that you and your your, folks were were doing that I would hope, that everybody out there in the larger, Community would be thoughtful in that, way about it it's interesting as we, we've talked so many times about about, kind of having these different you know, these different constructs you know, blending you know having software, blending with the data blending with the, models now uh and getting them out but, we just haven't spent a lot of time, talking about kind of the the legalities, of how to do that and how to honor those, across the format so good to, [Music], hear, [Music], so Chris I don't know if you remember, not 
that long ago, in one of our recent episodes, which we can link in the show notes, we had Josh from Coqui on, and we even made some clones of our voices and that sort of thing. That was a lot of fun. Coqui's doing amazing things in open source speech technology and really enabling a lot; actually, we're using a lot of their libraries in our own work. But they had an announcement that I thought I'd share on the news side of things, which is: you can now join their waitlist and get access to what they're calling their voice studio, audio manager, and advanced editor features within their system; sorry if I'm not getting the names right, but it's pretty cool. I don't know if you remember when he was on the episode, but he talked a little bit about how they were thinking about managing the tone, emotions, and expressions of synthesized voices more flexibly, so you don't just get one synthesized voice that's either monotone or has the same expression throughout; you can actually match different portions of your content with different expressive qualities. And that's what this voice studio does, and there are some pretty cool things where you can actually look at different words and the different phones in those words and adjust some of these expressive features, emotion and pitch, and mix different voices together as well, to create a mix of synthesized voices. You can do all this within this kind of advanced editor, which seems really... powerful is the word I was looking for. Yeah, I'm really looking forward to using it, but I'm a little dismayed that I'm currently number 6,466 in line to receive it, so it may be a little while before I receive the joys. Well, shout out to Josh, if you're out there listening: you can bump Chris up the waitlist. But yeah, I
think it it's really interesting where, you know it's one thing to like produce, a synthesiz voice it's another thing to, kind of have multiple voices Maybe in a, video that you're mixing down and mix, voices together change the expressive, qualities like sort of almost like, working with synthesized voices like, people do with like computer production, of music right where right you can, change things and mix things together, and all of that very fluidly I want to, ask you a question that's very specific, to the work that you're doing on a, day-to-day basis and for working with, indigenous populations and their, languages and stuff what are some of the, ways that you think that this will, change that going forward or add you, know add to it that you guys have been, talking about like what what's the, future look like for for someone in your, line of work on that I'm I'm just, curious about that kind of real world, aspect well yeah I think that there's, there's definitely the side of this, which is probably the more commercial, side of it which is you know media, production and that sort of thing where, like let's say that you produce a video, in one language and you know you're, wanting to do the dubbing across, languages or something like that or, maybe even you're using like an avatar, and using synthesized voices in your, video and the whole thing is synthesized, I mean that's happening quite a bit, right now as well and so the ability to, bring in multiple voices and do all that, without going into a recording studio of, course that has huge applications for, like advertising marketing media, production entertainment all of those, different areas which is where I would, guess and I can't speak to K's business, model but I would guess that their their, tooling is quite applicable across those, areas for local language communities I, think of course they're also involved, oftentimes in the production of media or, content for their communities so that, that's 
also relevant there. But I think there are also unique things that are relevant in those scenarios. Imagine that you're part of an indigenous community, a local language community, and you are marginalized by the national government or discriminated against in one way or another. It might be a big ask for you, for your community, to say, hey, could we put up 100 hours of content with your voice and maybe your likeness? That's potentially painting a target on yourself, right? Yeah, when you're associating yourself and you're the face of that community. So I think it's really interesting that there are tools like this where you could create high-quality, expressive voice that's maybe synthesized, maybe even style-transferred, where it's not someone's voice that can be tracked to a certain person. But then think about combining that with the video elements. Think about having a video that maybe is recorded with someone talking; you can use things like Stable Diffusion and other tools now to actually shift that video and obfuscate the identity of the person in it. Now, the more nefarious use of that, of course, would be misinformation and deepfakes and that sort of thing, but there is a very positive use of this for these sorts of communities, where it is important to them to produce media content for their community. If you're marginalized or discriminated against, it's interesting now that there are tools, with really nice user interfaces, accessible to community members, where they could actually produce some of that content themselves. Yeah, it's an interesting dynamic. To your point, I'm just thinking about my world a little bit, and thinking about the fact that, as we're recording this, in the current day and current months, the war in Ukraine, the Russian invasion of Ukraine,
has been going on and, bellarus is is also uh part of of the, that Russian effort and I was reading, this morning uh an article uh about some, dissidents that are you know trying to a, survive you know that situation and B, escape and and and you know and try to, help and do the things that they that, their conscience is is dictating and, that I kind of going back to to some of, the the ideas that you just enumerated, about marginalized populations of, indigenous populations and and being, able to kind of find some protection, while generating content I could I could, imagine that in this as well and so I, mean you saw it sort of way in the past, with online hackers and when they would, like hack SeaWorld or whatever they, would release a video we are Anonymous, or whatever and and it would all be, synthesized voices right because you, don't want someone you don't want to put, your voice on that right that's right, and don't get me started on animal, protection, because it will stop being an AI podcast, and we will just go into a totally, different realm so so be good to your, animals folks that's our little sideline, yeah right here do you think one, question that was actually brought up to, me last week which I think is kind of an, interesting question for people like us, that do produce content I mean this is, just our voices recording our voices on, this podcast but let's just imagine that, you know K or whatever their voice, Studio it's great enough that we can, just I mean we already have a lot of, sample of our voice right if we can, create really nice voices you and I, could just type out a script back and, forth and you know when we're traveling, quote record a podcast and just mix our, voices together with content and and, release it how does that sort of thing, strike you because I got into this, conversation with someone last week and, there was some sort of mixed feelings, about you you know what you lose when, you do that or what you gain when you do, 
that. So that's a great point that you make there, and who knows, maybe we both have tough schedules; for listeners who don't know, for Daniel and myself this is a passion project, and so who knows, maybe there is a moment of tight schedule for us where we do exactly that. I think the thing we would lose is the element of the unexpected in our conversations, often a lot of banter back and forth. It's not scripted, completely unplanned; I know people think that we plan every word out, but we don't, and maybe that would be lost. So you might get the information you wanted to share out, but you may lose a little bit of the human element behind it. Yeah, the context for the conversation I had last week was someone who does produce video content specifically, and so their face is sort of part of their brand. Yes. And so the question was, well, if we dub your video into another language, it would make sense; you know how everyone hates the thing about dubbed video where the lips don't match the voice, and you can sync it up pretty well in a lot of cases, but ultimately what you could do is just modify the person's face and lips to match the dub. They recorded the video in English, but now we're dubbing it to Chinese, and we match up their lips using some sort of video manipulation. They reacted very negatively to that. They're like, my face is my brand, I don't want anyone messing with it. They actually said they prefer the dubbed content, because their original expression of how they expressed themselves in the video was what was important to them. So yeah, it was interesting. I'm going to challenge that. I'm going to suggest to you that in the not-too-distant future, not only will that exist, but when you're getting on... we're all through COVID, and certainly continuing post-COVID, we're all on video calls all the time,
and so I'm going to suggest that that's going to be one of those killer features that one of the video call providers is going to do, and that is not only doing translation in real time, which I think is entirely possible in the not-so-distant future, but using some of these technologies we've been talking about in recent episodes to do exactly that. Because especially if you're using their service quite a lot, which some of us are, then they also have a thorough data set to train on, and with the video and everything, I just think that's going to be very doable not too far down the road. I think it'll also be able to be done live, so I think it will be part of what we do. I think you and I will find ourselves doing that before long, is what I'm suggesting. So Chris, one of the things that we had just started talking about before we started recording was, you were asking a few questions about graph neural networks, which I have thought were interesting for quite some time, and I think you ran across it in some Nvidia post or something like that, right? So Nvidia has a blog that's widely read. They blogged, as we record this it was actually just two days ago, on the 24th of October 2022, about what are graph neural networks, and I was glancing through it, and there's a fair amount of stuff in it that I'm familiar with, and there were a few items there that I hadn't thought about. But it occurred to me that we have touched on graph neural networks quite a number of times on the show without ever really diving into it, so maybe there is a Fully Connected episode where we do a full show to dive into the detail. But it made me start wondering a little bit about how many of our listeners out there are using graph neural networks, and some of the use cases. I saw something the
other day about starting, and this is very typical for some of our, conversations where you're putting, together multiple kind of deep learning, approaches to try to get something new, and we've we've had a lot of shows, talking about that but before I go one, have you had any opportunities to use, graph neural networks yourself I, recently did train a graph neural, network for question a question, answering task so one of the areas where, people have applied this is to this task, of automated question answering where, there's a text prompt and you're looking, for the the answer within some set of, documents or something like that right, so I did dive a little bit into that and, to be honest I'd love I'd love to learn, more that was definitely an interesting, experiment as I was kind of diving into, that and learning you know what does it, mean to have a graph neural network well, there's certain approaches and for maybe, graph neural network people out there, that are experts maybe I'm simplifying, this too much but there seems to be a a, cluster of techniques that are focused, on representing graph structured data a, sort of flat form or a matrix or tensor, form right and so there's ways to embed, a graph or learn embedding for a graph, in a sort of flat form there's also, methods that exploit the structure of, the graph neural network or the graph, itself which I think is what I think of, when I think of graph neural network so, one way to I guess think about it is if, you think about a convolutional layer, right you're running some kernel or, filter over your image or your set of, inputs but what you're doing is you're, always considering like one data point, in the context of a fixed number of, other data points even if you're running, your filter over in some various ways, it's sort of one data point in relation, to a number of other fixed data points, and actually you know Transformers or or, recurrent neural networks and this sort, of thing also behave 
similarly, right? You're comparing one data point in reference to maybe a sequence of other things, but which have a fixed sort of structure. What's interesting, I think, about graph neural networks is that the graph neural networks I'm thinking of, which are built around these concepts of message passing, consider one data point in reference to an arbitrary structure of other data points. What happens in these graph neural networks is that you have a stage where you take an embedding for one node in your graph, and you look at all the neighboring nodes, or maybe a certain number of neighboring nodes, but all the neighboring nodes that fit within a certain structure, and you combine or concatenate or perform a function over the combination of the embeddings for those nodes and the embedding for the node that you're considering. What happens as you apply this across all of the nodes of your network is that you actually pass a lot of information between all of the nodes of your network, and if you iterate that, then the idea is that all of this messaging and information is transferred from all of this different complicated graph structure to the data point under consideration. So oftentimes this involves this sort of message passing and iterative approach, which is quite interesting and has been applied in a variety of ways. One of the ways that's maybe very well known is AlphaFold, which is one of the protein folding approaches; we had a show about that not too long ago. So one of the things is, you're looking in your line of work at large language models, and we're looking at graph neural networks and how they can merge, and you're talking about the flat structure, and I know Nvidia talks about the unstructured nature of the message passing within the graph itself, compared to other neural networks that are a lot more structured. Can you
clarify for a moment: what does it change in terms of how you're approaching large language models when you have this flat, unstructured node approach where you're doing the message passing and you have an arbitrary number of them? Does it dramatically change the workflow when you're working on those large language models, or is it similar? Yes, so a lot of times the large language models take a very naive approach to how text is structured. Most language models now work around something called subwords, which means: I have a piece of text, and I'm going to split it up into subcomponents, but not necessarily words. You could tokenize something into words, but the problem with that is, how do you know what is a word and what isn't a word? You've got all these weird structures in language. The other thing is, if you tokenize into words, what happens when you see an unknown word in your input? So what often happens is you figure out the most frequently occurring subwords across my corpus of known language. So maybe, like your name Chris, there's a subword "Chri" and then you tack on an "s" subword, and that forms your name. If you figure out what those frequently occurring subwords are, that's how you split things apart, and that's how you look at the attention across these different subwords in a large language model. But that's somewhat statistical in terms of how you get those subwords, and it's useful, but language in general is much more structured, in many cases like a graph. The large language model approach works well and is scalable because you don't have to know as much about the structure of your input, but if you do know more about the structure of your input, you can maybe do really powerful things with less data. So for
example, if you know all of the parts of speech of your language, like, here's my noun subject and it's connected with a verb via a node in the graph in this way, and you draw out all the tree structures and the syntax of your language, there's tons of information encoded into that structure that you lose when you just treat words like a sequence of subwords. I think there are arguments that these large language models do learn some of that structure; there's some work out of Stanford on that. But given the way that language is structured, I think the hope would be that if you're creative about encoding this linguistic information into your model, and then maybe using something creative like a graph neural network, maybe you can do more things with less data, or you can do really powerful things, or you can be more robust to changes and that sort of thing. I see, good explanation. And yeah, if other people have input on graph neural networks or have used them in certain ways, definitely let us know. One of the other interesting ones that I found just doing some searching when I was looking into graph neural networks is from Pinterest. The system that does recommendation of items for users within Pinterest, their real-time recommendation system, is built on some sort of graph neural network called Pixie. If any of you are on that team out there and you want to come on the show and talk to us about it, we'd love to hear more, but you can look it up; they do have a paper about it and all of that. Absolutely. I know that in the Nvidia article that we were talking about, they mention LinkedIn does the same. I would imagine that other social networks do as well, quite honestly; it seems like a very logical fit in terms of trying to get that functionality. Yeah, well, as we get near the end here, Chris, we always try to share some sort of learning thing, and as I
was learning about graph neural networks, I found several interesting resources, but one, if you're into paid courses, seems to be quite full of great information about graph neural networks. This is from Zak Jost; he actually has a YouTube channel called Welcome AI Overlords, but he's really into graph neural networks. I think he works in, or has worked in, a variety of big tech places, and he has a full course, an introduction to graph neural networks. If you just go to graphneuralnets.com, pretty simple link, then you can see an introduction to graph neural networks, foundational theory of graph neural networks, and basics of graph neural networks; the basics of graph neural networks is free, so at least you could get a sense of graph neural networks from the free version. Well, I may very well be a student in that course, and maybe some of the other stuff out there as well. I have a very specific bit of graph database work that I'm working on in my day job, without going into specifics, and I can totally see how one of the associated problems with that could be solved by graph neural networks. So I think it's time for me to go level up, and I encourage our listeners who are maybe not already into that to go consider it as well. Sounds great, Chris. Well, it's been fun as always; good to connect our two nodes via the edge of Practical AI, as always. Oh boy, that was like the AI version of a dad joke right there. All right, well, on that note, Chris, we'll see you before I make another joke. Okay, no worries, have a good one, Daniel. Thanks, bye. All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague. Word of mouth is the number one way people find shows like ours.
thanks again to fastly for fronting our, static assets to fly.io for backing our, Dynamic requests to break master, cylinder for the beats and to you for, listening we appreciate you that's all, for now we'll talk to you again on the, next, [Music], one |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI adoption in large, well-established companies | This panel discussion was recorded at a recent event hosted by a company, Aryballe, that we previously featured on the podcast (#120 (https://practicalai.fm/120) ). We got a chance to discuss the AI-driven technology transforming the odor/fragrance industries, and we went down the rabbit hole discussing how this technology is being adopted at large, well-established companies.
Leave us a comment (https://changelog.com/practicalai/198/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Mary Fischer-Mullins – LinkedIn (https://www.linkedin.com/in/maryfischermullinspmp)
• Yanis Caritu – LinkedIn (https://www.linkedin.com/in/yaniscaritu)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Aryballe (https://aryballe.com/)
• Cox Automotive (https://www.coxautoinc.com/)
• Previous episode with Aryballe (https://practicalai.fm/120)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-198.md) | 4 | 0 | 0 | We are using technology every day. I've been working on a specific project to really understand imaging technology, and we started that well before the pandemic. As everybody knows, on one Friday our business was normal around the world, and mostly in the US, come Monday, that fateful Monday in March of 2020, our businesses shifted, and we really shifted into more digital platforms much quicker and much faster than we anticipated. That really progressed the way we're approaching things and looking at things, to understand how we provide a quality, repeatable experience for our clients and our partners in the ecosystem, and AI is really helping us do some of that.

Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too. Learn more at fly.io.

Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International. We've got a very special episode today. Way back at episode 120 we talked with a couple of representatives from the company Aryballe about this very interesting subject of how AI and sensor data is transforming an industry called digital olfaction, which is using advanced AI technology to actually create fingerprints of odors or fragrances, and to use that to transform industries like manufacturing, the fragrance industry, or the food industry. I got a chance to follow up with Giannis from Aryballe and some others at a digital olfaction summit, and we had a follow-up discussion about how AI technology is transforming organizations, and how organizations are responding and implementing this sort of technology. It was quite interesting in terms of the adoption of AI at a large scale, how that is actually shifting large organizations, and how they think about the problems they're solving. I hope you enjoy this panel as much as I did. Here's the recording from the summit.

Thank you. It's really wonderful to be here at this super exciting summit. I just looked at the date: it was January of 2021 when, on the Practical AI podcast, we had a topic focused on how artificial intelligence is transforming this space of digital olfaction and detecting scents and fragrances. It's awesome to come full circle and see a follow-up of how artificial intelligence is transforming this industry and making an impact for a variety of companies. I'm excited today to have on the panel with me Mary Fiser Mullin, who is a senior director of project strategy at Cox Automotive, and also Giannis Kerat, who is the chief software officer at Aryballe. Welcome to you both; glad to have you on the panel. "Thank you." "Thank you, exciting to be here."

To start, I thought we could talk, from both of your perspectives, about how artificial intelligence and its use within your organizations has changed over the past five years. That could be in relation to this area of scent, fragrance, and digital olfaction, but maybe more generally than that: how the perception of AI has changed within your organization, and how adoption of these advanced technologies has changed. Maybe I'll start with Giannis, because we are at this summit from your perspective, and you've been working to develop these technologies specifically for digital olfaction over the past years. How have you seen artificial intelligence and that side of advanced technology be a key technology within what you're trying to build?

Yes, thank you, Daniel, and we can say "since the past five years," yes. We talk from the production and R&D point of view. At Aryballe we say "machine learning" instead of "artificial intelligence"; there is an endless debate around this terminology. Machine learning is very well adapted to our problems and to the amount of data available, and as you know, machine learning is really mandatory when designing odor sensors. So for us the question five years ago was not if we would use AI; it was how we would use it. That being said, we use it with discernment. AI has promises, that's true, but it can be a source of disappointment sometimes if you don't take some precautions, and that's what we experienced as well. What changed is that we needed to grow with experts in digital processing, and to avoid the traps of what I would call AI overselling, we grew a team with data scientists but also with people having a solid background in physics, chemistry, or signal processing. This is our DNA: to think about the best ways to use machine learning in the sensors area. As a consequence, we spent much more time on understanding our data and pre-processing it than on the machine learning pipelines themselves, which are now commodities. I think we also became more professional in our use of this technology. We extended our capabilities by using the cloud, of course, for hosting all our measurement databases for our customers, but also for our knowledge and contextual databases, which are very important in AI. Internally we also built our own data center (maybe "data center" is a big word here) to facilitate and accelerate the work of using artificial intelligence for our R&D and product integration teams, I would say.

Thanks, Giannis. What about yourself, Mary, in terms of Cox Automotive, and how the mindset and strategy related to AI-enabled technologies has shifted over the last five years or so?

Absolutely. AI means a lot of things to a lot of different industries, and if you had all of us in this room and asked each of us for the definition, they would all be different, because we're all using it differently in some fashion. And to be honest, sometimes it can bring about disappointment, so it's important to understand what you're looking at and how you're approaching it. When I go into research or take on a new topic, one of the things that's important is to have an open mindset, to say, "I think I know what I'm going to discover, but I need to look for the things that bubble up along the way as well": the unexpected exciters or disappointments you may find along the way. I think that applies across all the different spaces. We are using technology every day. I've been working on a specific project to really understand imaging technology, and we started that well before the pandemic. As everybody knows, on one Friday our business was normal around the world, and mostly in the US, come Monday, that fateful Monday in March of 2020, our businesses shifted, and we really shifted into more digital platforms much quicker and much faster than we anticipated. That really progressed the way we're approaching things and looking at things, to understand how we provide a quality, repeatable experience for our clients and our partners in the ecosystem, and AI is really helping us do some of that. It helps us build trust and confidence in our products and in the experience, and so we're really working to understand what else is out there, because technology is changing so much every day.

Yeah, that's super helpful to think about, and also to think about how some of the global shifts over the past few years have forced a lot of people to consider things on a time scale that's different from their long-term roadmap.

Next, I want to talk a little bit about the next opportunity that's on your mind in terms of augmented or AI-based advanced technology within your organization. Maybe this time I'll start with Mary. I know you talked about imaging, but we've also talked a lot at this summit about fragrance and scent and advanced technology in that space. Where are you thinking the next challenges are, where advanced technology might be most applicable in your organization?

Absolutely. When you talk about that quick transformation of our roadmap, on our journey, whether we're focused on one thing or we're a bigger company, we looked at this and said there's emerging technology when it comes to odor-sensing tech, and asked how we can leverage it, or just explore it. I look at it from the scope of: is it ready now, does it apply to the potential scenarios we have in our ecosystem, and is it repeatable? It goes back to that trust and confidence: can we repeat and create a consistent and valued experience out of the transaction itself? When we look at the digital world, one of the things about working in the automotive space is that we touch three out of four cars on the road, and we have all these varied ecosystems, whether it's mobility, dealer-to-dealer, or consumer, partnering with our OEMs. We're looking to understand how we can present better quality information to them in a digital experience, and we think that odor has some potential to explore and understand there. It's interesting: if you've ever bought a pre-owned vehicle, perhaps one that was used in an industrial application, or by somebody who smoked in their car, some people are more sensitive to those kinds of experiences, and we want to create a value that we can express consistently in our ecosystem, one that allows us to create output that gives our client portfolios what they're looking for. Imagine if, back in March of 2020, you suddenly had to transact vehicles digitally, and you couldn't be there to touch and feel and listen and smell the inside of the car. If you've ever talked to buyers and sellers, they want to stick their head in the car and just give it a little sniff, because, as we all know... looking at the presentations and listening to everybody today, I thought: scent is really a language that we all speak. Our memories are grounded in it, our brain triggers very specific things for us, and I think that language is important to us all in our different spaces. We spend so much time in our transportation modes, whether it's a vehicle or mass transit, and maybe it's monitoring inside of that space, but we want to understand what makes it comfortable, what creates value, and perhaps what excites us about that sense in our experience with transportation.

It's interesting you bring in the language parallel. Most of my day-to-day work is in natural language processing, and I think there we have the advantage that everybody knows about language, and there are common ways to represent it, in letters for example. Scent and odor are something we're all familiar with too, but having the mechanism to represent them objectively, versus in subjective terms, is difficult. So it's interesting to hear how the ability to take odor with an AI-enabled platform, process it off of sensors, and make the odor signature or fingerprint something tangible creates a kind of language that allows you to do various things. That's really interesting. Giannis, I know you've been instrumental in that process of taking this fuzzy, intangible thing, odor, and using an AI-enabled system to make it more tangible via the Aryballe system and platform, the fragrance signatures and fingerprints. As you look to the next challenges of the system you're developing, what do you think are the big challenges or opportunities that AI might address within what you want to do next as a company?

Maybe I will talk about two next opportunities we have, and we can talk about challenges afterwards. I think the first is linked with our applications, for example in the field of automotive, and Sam presented it some minutes ago. Within the framework of our digital olfaction for automotive consortium, we already performed a real study; it was last year, from a two-week database in real cars, and malodor detection in cars was achieved with 94% accuracy, starting from this point to get to the odor signature, the footprint, as you said. So this is really a success story. We are also building, and will continue to build, a large database, and as you know, acquiring a database of odors is quite time-consuming, but we need to go through it with our customers. For instance, for flavors and fragrances it is several hundreds or even thousands of smells to collect in the next year, of course with automation, and there are a lot of digital tools to prepare. Then we use machine learning to associate those measurements of unknown odors with chemical properties, or ultimately with the sensations and percepts they could create, given what we have already learned. The same goes for any other application; that's our goal, with, for each one, its own universe of smell. That's also a discovery: we don't need a very large spectrum of odors; we need very pragmatic applications, which reduces the number of universes to which you need to apply your machine learning. So first a campaign for initiating the database, then a study to analyze the data and learn models, so that you can predict what a new measurement will be related to. The second opportunity is maybe more technical. It's very common when designing a sensor, especially a new one, whatever the sensor, that you want to know how to calculate in advance what your sensor is supposed to respond when submitted to a given physical stimulus. It's a straightforward function you want to have. With chemical sensors, it's quite hard to achieve this prediction. I cannot say much more today, but AI is also helping us on this topic, and as you can imagine, it's very important to have this tool to predict the output of your odor-sensing system before it measures something real. So it is the next opportunity, but also a challenge, to answer your question.

Yeah, definitely. I know just from personal experience, one of the things I've learned over time is that within an organization, as you're trying to roll out advanced technologies, a lot of the things you have to plan for and deal with are more people-related than technology-related. Technology problems can often be solved, infrastructure issues can be solved, but people's perception of advanced technologies, how they adapt them into their workflows, and the foundational organizational changes you need to implement as you're rolling out advanced technology might be the difficult things to think about. So Mary, I'm curious, I'll start with you: as you've worked with your organization over the past years to implement some of these advanced technologies, what have you found those foundational organizational changes to be? What has the experience been like in rolling out these advanced technologies, and what have you hit along the way that you might be interested to share?

Sure thing. When we talk about AI or advancing technology in our space, I go back to my imaging technology. Think about the mobile phone over just the last four years: if you're in the States you may have had, I don't know, an iPhone 8, and we're already on the 14. The iPhone 12 and newer have LiDAR imaging technology, so they have the ability to image things differently, and the iPhone 14 has some newer features. So even with this simple construct of imaging itself, the technology has moved very quickly, and when we're building our portfolio of information and the way we're curating it, we've had some takeaways, and then we're like, "oh, here's this new thing, should we pursue it?" We've had to keep curating and understand that, in the space of technology, we need to be able to accommodate all the different formats. We need a version back to the iPhone 8 when we talk about product capture, and a version up to the iPhone 14 and beyond. We think about that when we also talk about our user experience: who is the end user, the person interacting with this technology at all the different tiers, and how are they going to receive it? It's interesting: when you talk to a novice about artificial intelligence, sometimes friends or family will immediately say, "oh, AI is going to take over, and that really concerns me," and I always say, well, AI is only as smart as we want it to be, and if we stop feeding it, or stop giving it quality information, it really just truncates and doesn't go anywhere. So we're very cognizant of the need to continue to curate quality information, whether it's images or data, and how we collect it, making sure we have all the information so that we can build a robust data lake that takes us from computer vision to machine learning and into the different types of modeling that we're using. I think technology advances are amazing, and it's almost hard in some ways to keep up with them because they're moving so quickly. In some ways we're chasing and in some ways we're leading, but those are part of the adventure and the journey in the process.

I'm so glad you brought up the data side of things. There's the AI and machine learning modeling side, there are the organizational changes that need to happen to accommodate that, and then there are also the data and the infrastructure that go along with it. Absent that really quality information and its curation over time, any changes you make in the other two categories might be all for naught, so I think that's a really good point to make. Giannis, as you've been working with various clients across different industries, have you seen any commonalities in the organizational mindset or foundational changes that organizations are having to make, now that your platform enables them to analyze odor and fragrance on a much deeper, much more objective level? What are the organizational or foundational changes you've seen in common across some of the organizations you work with, that they've needed to make now that they have this digital transformation of something that was maybe more subjective before?

Yes, that's a good question. I also think the progress of the digitalization of everything, even in large companies, is very helpful for us, because it's exactly where we should be, and it helps us disseminate the technology more easily, with the cloud and the many tools we are offering to our partners. In our company, we made a big transformation, in fact, from a highly skilled R&D startup to a product company, still improving of course, but that's the most striking change for me. Even considering advanced technologies, people at Aryballe are very used to them, and this is a population that progresses quickly on this topic, even in the way people are collaborating. We integrated automation at all levels, for instance, from robots for grafting our biosensors on many dies of silicon, to robots for sniffing odors to get more data, and also for capturing knowledge at each step of our product chain, or even in our product, and that was mandatory. So we digitalized odors, for sure, but we also needed to digitalize as much of our information as we could. To summarize, we try to have more and more trustability; beyond this, AI will help us a lot if we have the data, and a lot of data will be very helpful for us.

Maybe just a couple more questions for both of you. What's one challenge that you see within your industry in terms of the adoption of AI or augmented types of technology, and then what's something you're excited about as you look forward to the possibilities this sort of technology might open up in your industry? We can start with you, Mary.

Some of the challenges... I think the biggest challenge we all have, and it probably crosses every industry, is the human factor, because when you put humans in this equation, and we're all doing the work, that is our number one source of bias. When we look at this and we talk about trust and consistency, and we want to create an ecosystem that has a quality experience every time, or quality outputs every time, we have to continually look at how we're bringing all these elements together so that we are checking ourselves and eliminating not just our own bias but also our own admiration for potential goals. We have this lofty goal of what we potentially want to achieve, and we need to ensure we're setting those biases aside. The other challenge when it comes to humans is ensuring that, in our organization, where we're looking at many skill levels and many knowledge levels, we deliver outputs, tools, products, and information that are consumable, that don't feel overwhelming or intimidating. That is our first goal: what can we do that creates the value? What am I excited about? This has been an exciting day for me, seeing how all of this crosses so many different industries. I'm excited about how technology, in a positive way, can change our experiences together. Technology has brought us together today, and I think those adventures of the human experience, and how technology interweaves us all together, are part of the exciting things. I don't have simply one thing I'm looking at and saying, "wow, I wish this was happening." I take it all in, look at it with some amazement, and then I reach out to all the smart folks in our organization and around the globe who say, "this is how we're going to use it." So it's good stuff.

What about yourself, Giannis?

Yes, of course there are some challenges people in our industry face, especially with sensors, with the amount of data, digital olfaction and so on. Sorry that my answer may be a bit on the technical side, but I will try to be clear, specifically on digital olfaction. When we wish to have a technical answer to a problem, we like to present it as: the information as an input, then a black box, and the desired answer, which is really appealing and attractive, at the output. It's always like that. So the question is what we are doing in the black box, and it's very attractive to put only an automated, learning-based system directly inside. It may work sometimes, but sometimes only. Let's take an example: a camera lens should accommodate to form an image on your sensor, and there are explicit optics laws to give you this answer without the need for learning; when there are no formulas, machine learning from data is used. In the Aryballe sensor there are two parts: an explicit part, which we know how to describe by laws, and an implicit part, for which we are obliged to learn through many measurements, with a certain amount, of course, of prior knowledge attached to the data. And what is interesting: the less explicit you make your system, the more dimensions you need to observe for the implicit part of it. Over time, we understood in our jobs as sensor guys that the use of AI to solve these problems is a mix between clearing the ground as much as possible with explicit mappings you know in advance, and collecting data to feed the implicit mapping. To be more concrete, I'll give an example cited a couple of times during the presentations, and on purpose: I think all odor sensors are really corrupted by humidity variation during the acquisition, so we use explicit techniques, with a humidity sensor or with the amplifier, which is hardware, to reduce this impact in the signal and achieve what we can call the "DySM" inputs. Then we use machine learning techniques on the remaining signal to recognize the smell, for example. If we don't approach AI with this mix, I believe we ask the machine learning to carry the whole problem as a black box, and we have to acquire data to fill a cube in 3D rather than a square, if I oversimplify, and that, for sure, requires much more time. The required database is much more expensive, and even if data is really where the value is, you don't want to spend too much time acquiring a database, even at the very beginning. You can then learn incrementally from time to time, but you need a first database, and you don't want to spend too much on it. Additionally, another challenge people may face in the digital olfaction industry comes from a flattering perception arising from the first impressive successes of deep learning in vision or speech recognition. For sure, thanks to the tools working very well for vision, odor sensors will benefit, but the challenge is of another type, I believe, when it comes to odor sensors. First, even if Aryballe technology can deliver an odor, as has been said during the presentations, within a few seconds, or a few minutes sometimes, acquiring odors is a bit longer than just taking a photo, and afterwards, no one can label the digital result if they were not present when the smell was captured. If you see a cat in a picture, even two years later you will be able to recognize it and attach the label "cat" to the picture. With odor signatures it's not the case: the captured signal doesn't smell of anything. So I believe labeling data is a challenge for high volume, and this particularity is slowing down the ramp-up towards big data in digital olfaction; it takes more time. Also, due to the particular nature of emerging odor sensors, we made a huge effort at Aryballe to have our sensors be as reproducible and repeatable as possible over time, while keeping their discrimination and resolving power through the diversity of chemistry in the odors. Then you can learn models and make them efficient in real life, like automotive applications or flavors and fragrances. Those are a couple of the usual challenges you may have when building a smart system with AI, and maybe not only in digital olfaction.

Thank you for sharing those. I think a lot of industries can learn from those data labeling and quality issues; that's often where things slow down. It's so awesome to hear about the transformation happening in both of your organizations, and at this event, hearing what's going on and how these technologies are really shifting people's perceptions of what's possible is really exciting. So thank you, Giannis and Mary, for joining the panel; it's been great to chat with you. "Thank you." "Thank you very much."

All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one. |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Data for All | People are starting to wake up to the fact that they have control and ownership over their data, and governments are moving quickly to legislate these rights. John K. Thompson has written a new book on the topic that is a must read! We talk about the new book in this episode along with how practitioners should be thinking about data exchanges, privacy, trust, and synthetic data.
Leave us a comment (https://changelog.com/practicalai/197/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• John K. Thompson – Twitter (https://twitter.com/johnkthompson60) , LinkedIn (https://www.linkedin.com/in/johnkthompson)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Use the code podpracticalAI19 for 40% off of Data for All (https://www.manning.com/books/data-for-all) , along with all Manning products in all formats!!!
Books
• “Data for All” by John K. Thompson (https://www.manning.com/books/data-for-all)
• John’s other books:
• “Building Analytics Teams” by John K. Thompson (https://www.amazon.com/dp/1800203160)
• “Analytics” by John Thompson and Shawn Rogers (https://www.amazon.com/dp/1634622375)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-197.md) | 2 | 0 | 0 | one of the things that people do not, think about is you know you're carrying, around your mobile device all the time, and 90% of us are walking around with, location services on and then we have, all these crazy conversations that we're, having in our political sphere right now, about you know what the government's, going to do or what they not going to do, or who's doing this and I'm like you're, allowing them to track you every moment, of the day and some people actually, sleep with their phone on their, nightstand while it's on I'm like this, is insane your actions are so in, congruent and that data is hugely, valuable you can do a great deal with it, and we do a lot with it in my day job in, my Consulting work and all sorts of, things and then at the end of the book I, take him through what two years from now, will look like with just location, services as the, [Music], foundation, welcome to practical AI a weekly podcast, making artificial intelligence practical, productive and accessible to everyone, subscribe now if you haven't already, head to practical AI FM for all the ways, special thanks to our partners at fastly, for delivering our shows super fast to, wherever you listen check them out at, fastly.com and to our friends at fly.io, we deploy our app servers close to our, users and you can too learn more at, [Music], fly.io welcome to another episode of, practical AI this is Daniel whack I'm a, data scientist with s International and, I'm joined as always by my co-host Chris, Benson who is a tech strategist at, locked Martin how you doing Chris I'm, doing very well Danel I'm just so happy, to actually be online as you know I was, struggling to uh to actually go up today, here so uh internet issues you know they, they still happen yeah go yeah when in, doubt reboot right so here we are you, know data transfer um 
that's that's, often an issue and and very you know, actually fitting for today's, conversation because all today is all, about data Chris we're privileged to be, joined by John K Thompson who is the, author of a new book called data for all, and uh he's he's also written a number, of other books analytics teams, harnessing analytics and artificial, intelligence for business Improvement, and analytics how to win with, intelligence so John it's great to have, you with us we can't wait to learn all, about the data yeah so glad to be here, Daniel and uh with you and you and Chris, uh looking forward to the conversation, thanks for inviting me yeah yeah it's it, it was super interesting as I was, reading about the motivations for the, book and what you're covering in in the, book you talk about how the book, provides you know this vision of how new, laws regulations services around data, work in the kind of time that we live in, but also how we can benefit from data, and new and lucrative ways which sounds, great I'm all about benefiting from data, in in new and lucrative ways could you, talk a little bit about like why kind of, the motivations and why you thought this, was kind of the time to bring in some of, these discussion around types of data, how it's stored who controls it what the, regulations are etc etc yeah and and, thanks for the opportunity I uh you know, I I as you said this is this is my third, book I've written mostly about analytics, up to this point how to build a team how, to invest in a team who to hire who not, to hire how to uh structure it and all, that kind of stuff but I started my, career 37 years ago and I was a, programmer and an analyst and everything, I did just seemed to revolve around data, it was just all data data data data all, the time so you know it just struck me, as that you know data was the thing and, I switched my career to be part of you, know the business intelligence and data, warehousing fields and you I did that, for 
decades. And I've been thinking about it for a long time. When we were raising our two kids — they're 25 and 23 now — we were always talking to them: "Hey, how's that game going? What are you doing?" They're like, "Oh, it's free, we love it." And it's like, no, it's not free. You're giving them your information: who you are, your age, your behavior, what your elasticity is, what your tolerance is for trading this and trading that, and what the price is. So we've always had this conversation over our dinner table: there is no free thing. If you think it's free, then you are the product — your behavior and you are what they're selling.

So I've been thinking about it for a long time, and I've been part of the data industry for almost four decades, as I said. Daniel, I know you're here in the Midwest — I'm in Chicago, you're in Indianapolis. Chris, I think you're somewhere in the United States.

Chris: I'm down in Atlanta, that's right.

John: Okay, you're down in Atlanta. Well, the whole Midwest is where the whole data world started. Arthur C. Nielsen — the guy that, two miles up the road, created this entire ecosystem that we live in: the legal side, the norms, the way people think about data. And I thought, nobody really knows this; nobody really understands it, except for maybe a handful of people. So I wrote the book so people would be able to understand, over the last hundred years, why data is thought of as it is, why it's regulated as it is, and why we have this really misguided idea that our data is not our own — that these companies that manage it and move it around and resell it and use it own it. But they don't. We own it. And now we're starting to get a legal framework, led by the EU, where we can actually own our data: we can manage it, we can delete it, we can do things with it. So the book was just decades and decades of me thinking, gosh, this whole area is just opaque and confusing, and people don't understand it, and there's got to be some book out there that says this is really the way it should be, and this is why it has been like this. That's the first part of the book.

The second part of the book is what's happening today — what does happen with your data, because a lot of people don't understand what happens with their data when they're on Facebook or LinkedIn or Google or wherever it happens to be. And then the third part of the book is all the laws and the frameworks and everything that's coming out of the EU that's now spilling over into the United States and the rest of the world, so you can look at it and say: okay, I really do want to manage my data; I do want to monetize my data. There's an example in the book where I talk about that: if you were an average user on three platforms and you had the chance to monetize your data, it's probably two grand to you every year, for doing nothing more than what you do today. And I talk to experts and they're all like, "Ah, two grand, who cares? No one wants any money. They just want to have free email and continue on the way they are." And I'm like, hey, I would like to have two grand a year for doing what I already do. I'd be happy to get a check for two grand. Every time I talk to someone, they're like, "I would love to have $10." I don't understand why the experts say "Oh, nothing should ever change; people don't care" — because people do care.

Daniel: Yeah. So you do talk about some of the history around this topic in the book. What do you think are some of the main points to stress about that history, to help people understand how we got to this point — where there are a lot of experts saying people don't care about their data, but there are also people
waking up to the fact that their data is being abused? There's also this general sense — I get it very frequently from my non-technical friends — the thing that comes up in conversation is, "Well, I'm sure Google, or whoever, is listening to me, because I said this and then later on I see this ad." There's a mystery around what is actually collected. Is that actually true? Is it not true? So what are the things in the history of how this has evolved that you think are important to stress, to give context, I guess?

John: Sure, absolutely. I just had that conversation two days ago with my sister. She was like, "Well, I was talking to your niece" — her daughter — "about X, Y, Z, and then all of a sudden I start seeing it in my Facebook feed, in my Google feed." And I started asking her: did you search on anything? Did you type anything into Facebook or Google? She goes, "No, I just had the conversation with her on the phone, so I know they're listening to my phone." And I'm like, they're not listening to your phone. This is not the NSA; this is not the DNI. We had more conversation, and she goes, "Well, I did go search for this, and I did go search for that." And I'm like, well, there you go — you actually put it into the engine, and your search history got modified by the algorithm, or whatever they're using there. But I digress.

A lot of people are talking about this, and the thing that I think is very important for people to realize — Arthur Nielsen, great guy, created Nielsen, really smart fellow — but precedence in the United States legal system is a huge deal. When Arthur struck the deal with those grocery stores that they would basically transfer all their usage data to him for free, that set a precedent, and it went on and on for a hundred years, and no one really thought about it. They kept accreting more and more data — media data and sales data and radio data and television data — and it went on and on. Now, some people say, "Well, Nielsen does pay for the raw material." Yes, they do. I absolutely understand that; I used to work at Nielsen, I know what they do. So yes, they do pay people for the data, but it's a pittance compared to what they get paid for the data. All that's to say: this precedent that was set a hundred years ago still continues today. So people are saying, "Well, my data really isn't worth anything." But the world has changed. We have the ubiquitous internet, we have broadband, we are always on, we have mobile phones. We're always contributing — some people call it digital exhaust, which I don't really like as a term — but we are always contributing our usage data. Do either of you have electric cars?

Chris: I do not.

Daniel: No, not yet, but my brother-in-law does.

John: I have a Mustang Mach-E. It's not a car; it's a rolling computer, and it's generating data 24 hours a day, even if I'm not in it. So we have to realize that we are generating the data; we own the data. This idea, this precedent of giving it away for free, must change, and that's one of the things in the book that I talk about a lot: we have a skewed view of data ownership. We give away the provenance of our data to all these companies, and they use it for free. In the book I talk about how Facebook doesn't pay for the raw materials that it uses to run its business, and it makes no sense. I mean, Daniel and Chris, if you went to a builder and said, "Hey, I'd like you to build me a house," and the builder came back and said, "Well, we're going to get the lumber for free" — nobody gets a major raw material for free. My point is, number one, we have
to understand that we own the data, and number two, they should pay for it.

Chris: So let me ask you a question. You've already kind of created the context around it over the last couple of minutes, but something you said a couple of times earlier — you talked about the EU leading the way, and certainly there is a certain well-known EU law that I suspect we're talking about. But aside from the law itself, I'm curious: why is the EU leading the way, in your view? What is it about the EU that has created that law and has done this, whereas we have struggled to do that in the United States and elsewhere in the world — and where we have done something, it has been in smaller geographic areas, like specific states?

John: That's right, you're referring to GDPR. Indeed, that was put into law six years ago, and GDPR has been a huge success. It has really been a great movement for the people of Europe — and we all know Britain is no longer in the EU; they're on their own, outside the EU, at this point. GDPR has been a boon for the citizens of Europe: they can go in, they can access their data, they can delete their data, they can take it off platforms, they can do all sorts of things with it. And based on the success of GDPR, the EU has now passed the Data Act, the Data Governance Act, and the Digital Markets Act. All of those acts have been passed, and they are now going into effect. Those laws put together data pools, data unions, data exchanges — all the structures that I talk about in the book. If you and I, or any of us, want to go to Google, Facebook, Amazon, United Airlines, American Airlines and say, "I want all my data," they have to give it to you. That's number one. But number two, as it goes on, these data exchanges and data pools are going to be the intermediaries that we work with. You go in — and you can withdraw your data. Let's say that you're really worried about climate change, and any company that you feel contributes to climate change in a negative way, you can say, "You can't have my data at all." You can just name United Airlines, or ExxonMobil, or Rosneft, or whoever you want to block. You can.

But my point is, why block them? The music royalty system is the system that makes the most sense to me when you're thinking about data monetization. You may take all my browsing data, and I'll let you use it — but every time you touch it, you've got to pay me a penny, or half a penny, or a tenth of a penny, whatever it is. And for these companies you don't want anywhere near your data, you say, "Every time you touch my data, you have to pay me a million dollars." That sends a pretty strong signal that you really don't like what they do. And if they take you up on it and use your data anyway — intentionally or by mistake — and they use it four times, they're going to pay you $4 million. So, you know, stay in the game.

Daniel: Well, John, I'm really fascinated by this topic and area — data exchanges, and the infrastructure or mechanisms by which some of these newer ways of dealing with your data could come about. It actually reminded me: my brother-in-law works for a company that is sort of an intermediary between farmers and grocery stores. There's the raw material — the vegetables, carrots or whatever — and he mediates this exchange between the actual farmers and the grocery stores. I'm wondering, in the data world — let's say there's Google, there's Facebook, there's whoever wants to use my data, and there's me, who owns the data, at least in the shifting mindset that we want to think about. From your mind, how might this sort of data exchange, or the other mechanisms that you
talked about — where do those sit? Who regulates those? How might those come about? Is there a current example that you could give, or maybe a way forward that you think is probable?

John: They do exist, predominantly in the UK and the EU. There's one that's very prominent, called Pool Data IO, and they're working really hard to have their data exchange be out there. And there are all sorts of other data exchanges going on right now across the United States. We usually see these kinds of structures — and they have existed for many years — in the area of health. They're usually related to cancer or heart disease, but they're more prominent in the area of rare diseases: people that have hereditary angioedema, or primary immunodeficiency disease, or hemophilia, or something like that. These exchanges really allow those people to contribute all their diagnostic data, their clinical data, and maybe even their genetic data. So they do exist, and they do operate — they're in the United States, they're around the world; commercially, they're mostly in the UK and the EU right now.

Physically, the way it's going to work is this. When these laws come out — California and five other states have these laws on the books right now, so you can go in and say, "You have to give me all my data, and you have to delete it"; if you live in Britain or Denmark or somewhere in Europe, you can do that — what's going to happen in the future is that these data exchanges will sit in the middle. Amazon and all the other companies are not going to contribute their data to some monolithic central storage unit. That's not going to happen — no Colossus or whatever megalith; that won't be the case. What's going to happen is: they will still have their data, we will own our data, and through the exchanges, you will go in and say, for my browsing data, my shopping data, my health data, whatever you have in there — your airline travel data — you will put a monetization amount on it, and you will say these companies can or cannot use it. So when those companies go to use the data, they will have to pass through the exchange. They will have to check the yes or no — the opt-in, opt-out. They will have to understand the monetary value associated with it. And when they go back and use it, they will have to have an accounting system where they rack up the amount of money that they owe you, me, and everyone for using that data.

Chris: So I have kind of a dumb question I want to ask.

John: No dumb questions.

Chris: I knew you were going to say that. We've leapt forward a little bit, but what exactly constitutes a data exchange, as we're using the term? Is it always a third party? Could a social media giant like Facebook or Google have their own exchange? What's the difference in those — what does it mean to have a data exchange?

John: A data exchange is a legal entity, created by EU law at this point — and it will be created in the United States as well. A data exchange is a third party that does just what we talked about: they allow you to come in through an interface, they allow you to set prices, they allow you to set usage policies, those kinds of things. They cannot monetize data; they cannot accrue, store, and sell data. They're an exchange: they allow you to set your policies, set your prices, stop people from using your data. What they can do is reach into systems, analyze usage patterns, and suggest to you how best to monetize your data, or how best to achieve your objectives. Maybe your objective is to give all the money that you get from your data monetization efforts to a charity — you come along and say, okay, every time I get $100 in my data usage account, my data monetization account, I want it donated to the American Cancer Society, or I want it donated to Ukrainian relief, or I want it spent over all these areas. Or you can actually say: when these charitable organizations use my data, I want to pay them. So there is a little bit of a marketplace that it establishes.

Chris: Maybe not precise across the board, but as a very rough analogy — sort of like a stock exchange, where you don't necessarily know how to price what you're looking at, but the market that exists in that exchange prices it for you. But in this case, it's data, directly.

John: Exactly. And you can set your own objectives. You may say, "I want to maximize the amount of money that I accrue, because I'm going to take that money myself and spend it." And it is money — it's not credits, it's not units. It's money: dollars, euros, drachmas, yen, whatever it is. You are actually piling up money in your account that you can spend. Your other objectives may be, "I want to reduce the usage of my data by climate offenders," or "I want to help these charitable organizations understand my activity better." Or maybe you find a group of people that are like-minded, that have the same affinities as you do, and you group together, and all your data can only be used in aggregate, as a pool. There are a million different ways you can take this.

Daniel: One of the other things I love about the topics you cover in your book is actually digging into how data works today, and what that actually looks like. We're talking about this monetization or exchange a little bit, but if we shift and think about — from your perspective, whether it's daily interactions with people in your own social circles, or your actual business colleagues who are working on data problems specifically — what do you
think are some of the main types of data that people aren't considering, or the main characteristics of that data that maybe they aren't considering? I know you talk a little bit about fresh, or stale, or repetitive, infrequent, episodic — these sorts of things. From your perspective, what are some of those types of data, or characteristics, that maybe people aren't thinking about as much as they should?

John: One of the things that people do not think about is: you're carrying around your mobile device all the time, and 90% of us — or maybe 80%, I'm making these numbers up — are walking around with location services on. And then we have all these crazy conversations in our political sphere right now about what the government is going to do, or what they're not going to do, or who's doing this, and I'm like: you're allowing them to track you every moment of the day. Some people actually sleep with their phone on their nightstand while it's on. I'm like, this is insane; your actions are so incongruent. In the beginning of the book I take people through a very light scenario of what happens with just location services — and that data is hugely valuable. You can do a great deal with it, and we do a lot with it in my day job, in my consulting work, all sorts of things. And at the end of the book I take them through what two years from now will look like, with just location services as the foundation. So all these people saying, "Hey, I'm upset about this, I'm upset about that" — I'm like, well, just turn your phone off, and you'll be a lot better off.

Then the other thing that we talk about a lot in the book — and I've talked about in my other books, and I'm a big proponent of, if you're an analytics professional — is this whole idea of just stacking up one source of data. In neural networks, they always show trying to discern between Chihuahuas and muffins. Okay, fine — I don't know what real application is going to be helped by understanding the difference between the two pictures, but I get it: you take a billion images of Chihuahuas and a billion images of muffins, and you analyze them. But really, what we're trying to get to in analytics is getting models to reason as realistically as we possibly can — I try to stay away from the whole AGI concept, artificial general intelligence — but we are trying to use many, many sources of data and integrate them together. And that's one thing that people don't really understand: we as analytics professionals are starting to take three, four, five, six, seven, eight, nine, ten, twelve sources of data, bring them together, and generate features that realistically show us what people are going to do. We can do a really good job of predicting what most people will do with six, seven, eight different sources of data, and that is something that is really going to come to the fore over the next three, four, five years. So the concept of data — location data, voice data, browsing data, commerce data, driving data — all of that is the true picture, a real picture, of who you are and what you do. And we know that when people describe who they are, they always say they eat 25% fewer calories than they do, they always say they sleep less than they do, they always say they talk less than they do. Well, we can see what they actually do, and we know how people act.

Chris: I was just going to ask you — you have my full attention, because you completely freaked me out a minute ago. So I'm hijacking a short segment of the show here to go back and ask you a question, because I am guilty. You mentioned some people even sleep with
their cell phone on the nightstand—

John: No, Chris. No.

Chris: I do. I'm confessing to the audience that I have actually done that — not once, not twice, but pretty much every night. In my mind I'm thinking: I've got an elderly mother, I only have a cell phone — I don't have anything but that — and I need to be available and such. But as you talk about that, that's a real-life scenario from my standpoint, and you just hit it with a hammer. If I'm going to be available overnight, in case my mom has an emergency or something — can you talk a little bit about that? Because that's incredibly tangible. What have I just sacrificed, in terms of my privacy, or the data I'm giving up, to do that? Because I'm truly weighing this at this point. My mom's going to be horrified to hear that I'm weighing whether her safety is worth it — but please, just for a moment, dive back into that.

John: Yeah. I mean, we're all talking about that. I turn my location services off — my default position is location services off — and at night, I turn my phone off. And I can do that when I'm at home, because I have a landline.

Chris: You've got the old-fashioned one, right there beside the other one.

John: Okay. So my family knows: if they need to call me, call the home line — I'll pick it up. Don't call my mobile phone, because after six o'clock, it's off.

Chris: Okay. Yeah, I think maybe it speaks to the issue at hand that the one of us on this discussion that's been an analytics professional for their entire career takes that position — and maybe we're on a little bit different side. That's probably worth noting. I'm just saying, guests don't freak me out completely most of the time, but I'm kind of freaking out right here. I'm thinking, what have I done?

John: I'll tell you what. Pre-COVID, we'd go to cocktail parties, and people would ask me what I do, and I would give them kind of the same description that we've been talking about, and they would get freaked out and not talk to me anymore. So when people ask me now, I just say—

Chris: We have a show to complete, though. You have no choice.

John: I have no choice, we're going to do this now — I say, "I take data and turn it into money. That's what I do."

Daniel: I guess that's a really interesting point, because you could see Chris's phone on his nightstand as a money-maker, based on our previous discussion — but that's only possible if he had the opportunity to monetize that data. So, in terms of — I know you talk about different jurisdictions in the book — you've talked a little bit about Europe. What does the landscape look like around the rest of the world, in terms of how quickly we're moving toward this position where we're able to manage our data in a more lucrative way?

John: The EU will be there within 18 months. Australia will probably be there in about the same time frame, maybe 24 months. It's spotty across the United States: California has already got their privacy law, and they are actually following very closely the three laws that I just talked about in the EU. Then we've got five other US states that have those laws. Beyond that, you can take a look at where the liberal Western democracies are, and most of those will come up in the next three to five years. And you can look at the other countries — the autocracies, the autocrats and dictators and things like that — and that will probably be never, if they continue with that standard of government, because they just don't like the transparency. Well, they do like it if they control all the data; they like it that way. But as far as
their citizens being able to monetize their data — that's not going to happen anytime soon.

Daniel: John, a couple of the sections of the book that you dive into are trust and privacy. These are two terms that — I don't know, Chris, what percentage of the conversations we have on this podcast someone uses one of those two terms, but they come up very often. I'm wondering, John, as you really dug into the state of how data flows these days, and how the regulations are changing around data: as analytics professionals, or AI developers, or AI researchers — for professionals in the field like ourselves — what do you think are the practical considerations that we should be thinking about in terms of trust and privacy? As we're building out, say, "this AI-enabled app to do X," what should be on my mind related to trust and privacy, from your perspective?

John: Yeah, it's a great question. I've been in this field long enough to know that when we started out, those many decades ago, we just always did it because we were trying to sell more bars of soap, or cans of soup, or pizzas, or whatever it was. And we did have people ask us to do things that crossed the line — that broke ethics — and we just wouldn't do it. It was a pretty small community, and we just did what was ethical, what was the right thing to do. Now we've gone to where data and analytics — the horse is out of the barn. We actually need — and I've never been a proponent of this until the last couple of years — we need government to step in. We have organizations like Facebook, and people like Mark Zuckerberg, that have no rules, that have no red lines. They just go all over the place.

Daniel: Mark Zuckerberg's answer to any problem with Facebook is more Facebook.

John: Yeah, exactly.

Daniel: I'm actually stealing that from Kai Ryssdal, just so you know.

John: I've heard it, I've seen it, I know what he's saying. Absolutely. The reason I delve so deeply — and dedicated an entire chapter to trust and an entire chapter to privacy — is that they are concepts that we talk about a lot, but we are generally not taught what they really mean. I think we understand the words — the connotative meaning, the denotative meaning, of trust and privacy — but when you start to really delve into those concepts and how they relate to human behavior, we could all use a little bit more education than we're getting, and that's why I spend so much time in the book on those. So we as analytics professionals have to be ready for, and should welcome, government regulation in these areas. It's required; it's needed. We're getting to a point where the folks in data and analytics — or some of the folks in data and analytics — are really getting into trouble, and causing trouble for us as a society, and we can't stand that. That cannot happen. On privacy, I talk a lot about the need for privacy versus the need for secrecy, which is really an interesting concept — we could spend hours talking about it. If nothing else, that might be a reason to read the book: to understand the difference between the need for privacy and the need for secrecy.

Chris: It's interesting when we talk about government, because you have the left and the right, and the conversation kind of goes back and forth depending on circumstances. But maybe people can arrive at "yes, we need government," regardless of which side you're coming from, because they've been so slow to come at all. And I think one of the challenges that we've all observed there is:
every time we see one of these figures in technology, such as Zuckerberg, or any of the big companies that we're always talking about, testify before Congress or something like that, you see how far behind government officials — congressmen, senators and such — are at that point. That's the big news thing: one of these figures testifies, and everyone's like, "Oh my god, did you hear the questions that were being asked?" Is that part of the problem, potentially — that there's such a knowledge difference on this topic that, in some cases, government doesn't really know what to do, regardless of which side of the aisle they're on? Could that be part of the struggle, or would you identify it somewhere else?

John: No, I think you put your finger on a very salient problem. We've got a bunch of octogenarians running the government right now, and most of them don't even understand how to use a computer. So that is a real problem. But there are people out there, like me and others, who are experts in this field, who would love to serve on a blue-ribbon panel to formulate the laws and the rules and the regulations that we need. I'm sure there are lots of Americans that would love to help. And then, the EU has done a lot of the hard work. I know, as Americans, we're loath to think that anything outside the United States is better than anything we would ever do, but the fact of the matter is, they've done a good job over the last eight years in formulating GDPR. They've implemented it; it has worked; it has changed the way that we look at data, the way that we do analytics, the way that people can access their data. The three other acts — the Data Act, the Data Governance Act, the Digital Markets Act — those are very nice pieces of legislation, and I don't think I've ever had those words come out of my mouth before. I've sat down, I've read them. They're easy to read, they're clear, they're concise. Anybody with a high school education can understand them. It's the way that it needs to go.

Daniel: I'm wondering — part of me is thinking about this conversation as someone who is producing data, but then another part of me is thinking about it as someone in a business or organization that is using data. So there's one side of it: I own my data, I would love to benefit from that, and maybe make money on that — I certainly see that. And then I'm thinking, well, if I'm a person in a company that wants to actually build a model, or an analytics system, or something using that data, that changes how that business entity thinks about its strategy for building that product. So from your perspective — shifting to that other perspective — if I'm sitting in the company and I see, okay, these things are changing, people are going to be able to exchange their data for money, there's going to be this exchange: should we start shifting our thinking as analytics or AI professionals, in terms of how we would approach architecting our systems, or how we would approach starting out a project, and how we're thinking about data on that project — that sort of thing?

John: That's a great question, Daniel. If you are doing analytics the way that I've been doing it for decades now, you don't have to change anything. I've been part of consulting firms and software firms and services firms, and now I'm part of a biopharmaceutical firm. There's lots of data inside those companies that you don't have to pay for — you're part of the company; you get that for free. Other data that you are going to use, and that you use
today, and that we use today that you're going, to have to augment and want to augment, to get to that 10 12 13 sources of data, I was talking about earlier you're going, to have to pay for all that data anyway, so you know you're going to pay somebody, for that value added data, and in the future you're going to pay, somebody it's just going to be a, different somebody that's all you know, so no you really don't have to think, about it in any different way you may, have to budget you know a little bit, more money for it but it doesn't, dramatically change the way you do, things I have a followup to that real, quick if if you don't mind would it be, right to think you know we we think of, you know stores of value in terms of of, money and we've been talking about money, in recent years we've looked at, cryptocurrencies and we're starting to, think of those as stores of value and, forms of currency themselves should we, be thinking of data in a direct way, because we've kind of talked like one, step remove so far but is data money in, the way that we should be thinking going, forward it is Data is money there's no, doubt about it data is Cash you know, you're either going to pay for using it, or you're going to use it to generate, value in on the back end you know it's, it's just it is that way you know Daniel, touched on it lightly earlier in the, conversation most people think of Google, as a search engine and they are there's, no doubt about it it's the most popular, search engine by far in the world but, they're a huge data shop they're a huge, advertising organization you know and, you know we buy in my day job we buy, data from Google all the time you know, we go through the the B2B interface of, Google and we buy their geolocation data, we buy travel data we buy advertising we, buy all sorts of things from Google so, you know it's the it's just the way it, is you know data is money I wonder it's, is so triggering so many things in my, mind like the sort of 
market around data, it seems like it could get very very, complicated and sort of multi-tiered in, the sense that like there's people, generating data but there's people that, could buy data right and if data is, money and that money escalates in value, right all of a sudden you've got a sort, of market for for this thing that you, know increases value over time and, there's like an investing element to it, as well which is which is quite, interesting one one other feature of, this uh that I see you touch on on the, book is like derived or synthetic data, which which I think is quite interesting, because Chris and I have talked about, this a number of times on the podcast in, relation to privacy and the fact that if, you are able to augment your data sets, especially as a professional, with derived or synthetic data you can, actually do things maybe beyond what you, would be able to do with the amount of, data that that you have that's maybe, cleaned and detoxed and has no privacy, issues so I don't know could you could, you touch on that a little bit and maybe, how you see the the methods and usage of, generated data and synthetic data kind, of progressing as we move forward yeah, absolutely and and it's a great topic, talk about and I love to get in it into, it with with the analytics professionals, all the time is that you know we we've, gone past the era of aggregations and, averages and integrating data we still, integrate data of course it's a powerful, tool for us but you know if you really, want to get somewhere today and have, competitive Advantage you were probably, going to have to derive data from, multiple data sets that come up with, indicators and and you know functions, and things that don't exist other places, you will have to create something that, is proprietary and unique to the way, that you see the world and you you, you're approaching the world that's, derived data you take you know travel, data location data and you bring it, together and 
you have a whole new set of data there. Synthetic data usually comes up, at least today, where you have industries that people are really not watching very closely, and you don't have access to proprietary data because the small number of people in those industries won't give it to you; they're smart enough to hold on to it for themselves. So then you have to synthesize and create the data to measure that industry from the outside. And you can do it. We're doing it today; we just did a project where we did that, and it's worked out very well for us. So you can derive data from existing sources, bringing them together and coming up with a whole new data set, or you can actually synthesize the data and create it from different indirect measures that you can see from the outside. I have one small follow-up to that, which is intriguing me a little bit. To start with, you've definitely changed the way I'm thinking about the monetization of data. We have these exchanges which are giving us the ability to place a market value on it, so I'm definitely moving into that mindset. If I look at the analogy for a moment, back to cryptocurrencies: when we talk about synthetic, there is a mathematical limitation in terms of the compute required to generate new value there. If you're going to look at synthetic data and place value on it in a monetary sense, in an exchange, how do we regulate that? Maybe this is several years in the future, exchanges are widespread, we're seeing an industry built around the monetization of data here in the US, and people are synthesizing data for it. How is that not printing money, potentially? Or is that just one of those gotchas we've got to figure out going forward? We're going to have to figure that out as we go forward. That's something we'll see; there'll be all sorts of people stretching and pushing the boundaries, and we'll have to look at those edge cases as they come to be. One thing I'll throw on the table that might be interesting for you and your listeners: what industry in the United States has generated the most millionaires over the last decade? Over the last decade? I don't know, social media? I would guess something along those lines, but I don't know either. Market research. Market research? There are more market research organizations in the United States run by entrepreneurs who have become millionaires than in any other business. Interesting. Yeah, and it's all data; there's nothing to those businesses other than data. That sort of brings me to a last question, John. We've talked about a lot of different elements of this, certain ones that, like Chris was saying, disturbed him, and other things that are maybe cool because I'm going to be making an extra two grand each year, so that's positive. As you look at where things are headed, what, in a positive way, excites you about the future of the professions associated with data, whether that be analytics or AI, and how those professions are shifting under this changing climate? What excites you, that you're looking forward to? Yeah, some people look at the book and come away from it going, "Oh my gosh, this is terrible, it's all been a sham, the overlords have been manipulating me," and all that kind of stuff. And it's like, no, that's not the takeaway from the book. The takeaway is that we're all waking up. We're in a new era. We need to throw off the regulations and structures we were using from a hundred years ago and look at where we are today. The EU is putting in the structures and frameworks that we need to leverage, and we all just need to look at how we want to monetize our data and how we can have that be part of our life in a way that is beneficial and positive for each of us as individuals. Now, as far as the data and analytics profession goes, I'm bullish. If we took every high school student, college student, and graduate student in America and turned them into data scientists, we might have a tenth of what we need. There are lots and lots of jobs. All these people wringing their hands and saying the future is nigh and our children won't have the same level of lifestyle we had: that's bunk. There's lots of opportunity out there around the data and analytics fields, and that alone would employ everybody. Not everybody is going to want to do that; we need people to make chairs and dig ditches and run factories too. But data and analytics is a very bright spot for all of us. I had both of my kids go through two Big Ten schools, Michigan and Illinois, and they're both engineers and they both work with data every day, so I'm living my own truth right there. And it's way better than digging ditches, I've got to say. I dug ditches, I dug graves when I was a kid, and it's no fun being a gravedigger, I can attest to that. Yeah, or painting fences; that was my first one. John, it's been a real pleasure. Your book is available now in early access on Manning, and we have a permanent discount code with Manning: 40%. That's pretty amazing, 40%. So listeners, the code is POD practical ai19, and we'll put that in our show notes as well. So please take a
look at that. We'll put the link to the book in there, along with John's other books. It's been a real pleasure, John. We're excited to see the book take off, and also whatever you write next; we're excited to have you back on the show. I'd love to. I enjoyed the conversation. I'm sorry to have freaked you out, Chris. I'll get over it, but yeah, when the new book comes out, we'll do it again. [Music] All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague. Word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now. We'll talk to you again on the next one. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | What's up, DocQuery? | Chris sits down with Ankur Goyal to talk about DocQuery (https://www.impira.com/product/docquery) , Impira’s new open source ML model. DocQuery lets you ask questions about semi-structured data (like invoices) and unstructured documents (like contracts) using Large Language Models (LLMs). Ankur illustrates many of the ways DocQuery can help people tame documents, and references Chris’s real life tasks as a non-profit director to demonstrate that DocQuery is indeed practical AI.
Leave us a comment (https://changelog.com/practicalai/196/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Ankur Goyal – Twitter (https://twitter.com/ankrgyl) , LinkedIn (https://www.linkedin.com/in/ankrgyl)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
Show Notes:
• DocQuery (https://www.impira.com/product/docquery)
• DocQuery Announcement (https://twitter.com/ankrgyl/status/1565437042032402433)
• DocQuery Blog Announcement (https://www.impira.com/blog/hey-machine-whats-my-invoice-total)
• DocQuery | GitHub (https://github.com/impira/docquery)
• Impira (https://www.impira.com)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-196.md) | 13 | 0 | 0 | In Impira, we really wanted to create an experience where users could easily see whether the predictions were right or wrong, and then, if the predictions are wrong, or if they feel compelled to give us feedback that they're right, they could correct or confirm things. Every time they do that, we drive the feedback into the model and incrementally train it. Because of that design, we basically structured the machine learning approach to be one that is very lightweight and something that can train and evaluate really quickly. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too. Learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is the podcast that likes to bring up practical issues in artificial intelligence and learn as we go. I am your co-host, Chris Benson. Daniel Whitenack is unfortunately traveling right now, so he's going to miss what I'm sure is going to be a pretty cool conversation. Without further ado, I would like to introduce our guest today, Ankur Goyal, founder and CEO of Impira. Welcome to the show. Thank you so much for having me; I'm really excited. Yeah, absolutely. We've got a bunch of cool things to dive into today. As a start, before we actually dive into the topic, tell us how you got here. Who are you, what's your story, and how did you arrive here, so that you could tell the world about what you're
going to be talking about today, which is uh your company and Dot query, in particular awesome yeah so I actually, don't have a mathematical background in, in machine learning or AI I've been, working on relational databases for a, really long time I actually started, doing research on them in school and, worked at a company called uh single, store I joined as the second employee, and was the VP of engineering there for, some time and what got me into this, space is actually talking to our, customers who were able to make use of, data that is structured but really, struggled when the data that they wanted, to work with didn't fit inside of the, relational database that we built and so, I thought you know there has to be a, better way and looking around me it was, clear that the progress and this was, back in 2017 but uh and a lot has, changed since then but but even back, then it was really clear that the, progress on the machine learning side, would make it possible for people to, work with any kind of data no matter how, structured or messy or complicated it is, um and that's what we're all about at, impira we're one part database and onep, part machine learning technology that, basically makes it really easy to to, work with unstructured data very cool, you know as you came into the industry, and and you know getting ready to set up, your company and looking at that with, unstructured data like could you tell us, a little bit about what you were walking, into and why you chose the particular, path in the industry that you that you, did you know what was it that attracted, you down the path that you did as an, entrepreneur yeah it was first thing, I'll say it's definitely a windy road, and we didn't know exactly what we were, getting ourselves into when we started, so actually when we started we thought, that the really big problems in helping, companies work with unstructured data, would be in helping them work with image, and video content and I think as uh is, 
becoming really clear now with images, and video the bottleneck is actually, creation it's not understanding and so, we learned that on you know just purely, on the market side of things a few years, ago and as kind of a funny coincidence, because one of the models that we ran on, data that people uploaded was OCR which, is optical character recognition some of, our customers started asking you know, you can do this stuff with images and, videos but can you also analyze the data, that is in my invoices and you know my, forms and and other documents and and we, realized that there was actually a, really exciting opportunity for us to to, help companies work with this kind of, unstructured data and so kind of a happy, accident we discovered together with our, customers you know it's it's interesting, that you mention that particular example, because I I know when I think of things, like invoices and I run a separate from, this I run a nonprofit so I I I kind of, have that that business hat I have to, wear separately you know I'm thinking of, things like PDFs and things you know, that are not not typically what we're, thinking of when we're training models, you know it's it's not the form that, we're usually we're not going and, pulling a bunch of data out of a, database to train on or or sources off, the internet or whatever so that's a, little bit of a different take from your, typical Avenue into machine learning off, the bat how you like like what as you, started recognizing that was a challenge, did that worry you at all in terms of, recognizing that you had you were going, to kind of take a different approach it, probably should have but um as usual, with with myself and co-founders and and, kind of how we think it it it didn't and, actually Richard who's our our CTO came, up with a really powerful approach to, solving this problem that uses primarily, computer vision actually to reason about, PDF files and so for a long time and, we're foreshadowing a little 
bit with DocQuery, which kind of brings these worlds together, but for a long time a lot of the work that we did used computer vision. We thought: if a PDF is like a hybrid of text and visual content, we'll lean on the side of the visual content. That has a number of advantages and disadvantages, which we've learned over time as well. So you've mentioned PDFs. Do you focus strictly on PDFs, or are there other file formats that you end up working with as well? What we do is take almost any file you could throw at the system that self-identifies as a document: anything from PDF files to emails, HTML files, scanned images, pictures from your phone, just about anything. And we do a bunch of pre-processing up front that normalizes anything you upload into a fairly consistent data structure. From whatever you put into the system, we normalize it into a bunch of pixels, a bunch of text, and a bunch of bounding boxes that tell you where the pieces of text are, as well as a few other things. Gotcha. So before we dive fully into how you're approaching it at this point, what was in place, both from the early machine learning days, going back a few years, and also, you mentioned OCR: what were the approaches people were taking, and what was the mental model around them that you were looking at and saying, "that's not good enough"? What was the world looking like at that point? What's really interesting about this is that OCR is not a new thing; neither is reading data from invoices or other kinds of documents. But for some reason most businesses don't take advantage of it, and I think that's because the solutions out there are just not easy enough to use. So we've always thought about this from the standpoint of: what
does it take to make, something that's actually so easy to use, that it provides value for someone so, you know the the solutions that, existed prior they fell into a few, different buckets uh one is something, called an OCR template where basically, you take OCR text and then you draw a, box of XY coordinates you know around, exactly where the text needs to be and, if you're working maybe at the DMV or, something and taking identical documents, and scanning them with an identical, scanner every time that approach can, actually work really well you know in in, reality I'm sure with the invoices that, you're working with in your business, it's never that simple right and so, that's an example where you know the the, user experience and cost barrier in, practice can be just prohibitively High, another technique that was really, emerging when or emerging as as more, popular when we started is this really, big pre-trained model approach so AWS, has a product called textract for, example which is actually it's a great, product and what it allows you to do is, upload any document into it and it will, give you back some kind of data, structure about, what's in the document and the nice, thing about this approach is you don't, need to do any of that template, definition or you know anything like, that but the challenging thing about it, is that if the results aren't what you, expected then you don't really have any, recourse you know to solve for it so we, actually a number of our early customers, were using textract and building machine, learning models on top of textract to, normalize the data to be consistent and, they they realized you know this is just, not what are we doing here right so was, it was essentially a Band-Aid that they, were kind of creating on top of the, product they were using or the service, they were using a very fancy Band-Aid, yeah so you know with and I know that we, have seen kind of Evolutions over quite, a long time in OCR you know in terms 
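The OCR-template idea described above (draw a box of XY coordinates and keep whatever OCR text falls inside it) can be sketched in a few lines. This is a toy illustration, not Impira's or any vendor's API; the function name, word list, and coordinates are all invented for the example, and it shows exactly why the approach breaks when a document's layout shifts even slightly:

```python
# Toy "OCR template" extractor: keep OCR words whose box centers fall
# inside a fixed, hand-drawn region, then read them left to right.

def extract_field(ocr_words, region):
    """ocr_words: list of (text, (x0, y0, x1, y1)); region: (x0, y0, x1, y1)."""
    rx0, ry0, rx1, ry1 = region
    hits = []
    for text, (x0, y0, x1, y1) in ocr_words:
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2  # word-box center
        if rx0 <= cx <= rx1 and ry0 <= cy <= ry1:
            hits.append((x0, text))
    return " ".join(t for _, t in sorted(hits))  # left-to-right order

# Hypothetical OCR output for one scanned invoice.
words = [
    ("Invoice", (10, 10, 60, 20)),
    ("#12345", (65, 10, 110, 20)),
    ("Total:", (10, 200, 50, 210)),
    ("$99.00", (55, 200, 100, 210)),
]

print(extract_field(words, (0, 195, 120, 215)))  # -> Total: $99.00
```

If the next vendor's invoice prints the total in a different corner, the fixed region silently matches nothing (or the wrong text), which is the brittleness being described.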
of, that and you you mentioned something, though that that made me curious you, were talking about like if you are using, one of those early models that were, pre-trained and then you didn't get what, you wanted out of it can you talk a, little bit about like what kinds of, problems might arise in terms of like, why weren't they getting it out of those, models to kind of Define a little bit, about the space that you're fixing you, know going forward absolutely yeah so, there are two or three classes of, problems I I think there are three so, the first kind of problem is let's say, you take a relatively lowquality image, like a scan that's that that maybe is, actually hard for a human to even, decipher or maybe it has really bad, handwriting or something like that and, you upload it into one of these products, if it can't read the handwriting or it, can't read past the quality there's, really nothing you can do about it and, so that's one class of problem another, class of problem and is if you just, consider a single document and you, upload it into a service like this it, may not actually pick up all of the, fields that are in the document so one, of the problems that we see it's almost, like you know bald spots or something in, the document it'll just miss things uh, and if it misses something there's no, way of telling it like hey please don't, miss this field uh next time you know, there's there's no input like that that, you can provide because it's it's all, pre-trained and you you've got what you, got to work with at that point right, exactly exactly and the third thing is, that if you imagine you know working, with many many documents they all might, have different bald spots uh and so you, might have two documents which for a, user have the same schema meaning they, have the same fields that you want to, extract but you upload them into a, schema service and you get back two, different schemas and that's actually, where some of our early users were, 
implementing their own machine learning models to try to translate from the schema that the pre-trained model produces to the schema that they actually want to work with. That is not a problem I had really considered; that's an interesting side effect. So with those early models, you have the trained model, you run the document through it, and it comes up with both the white-space issues and the problem of an inferred schema that was not intended. Then I assume that at the end of that, you're trying to get it all corrected back to what it needs to be, so that's a lot of manual effort. You may have some tooling to help you along, but there's a manual cleanup process you have to go through. Yeah, definitely interesting. One of the things I wanted to ask about: you talked about OCR, but we're also talking about language models here, and you said you were starting with the visual models, so we're not yet talking about any kind of NLP, natural language processing, or anything like that. I'm assuming we're talking about some sort of early visual model that's pre-trained. That's right, although in Impira, the model that we had early on was actually not pre-trained; because of how it works, it would learn just on the documents the users uploaded. Interesting. [Music] Yeah. [Music] So, having laid out the landscape of what you were walking into, in terms of problems to solve and ways of making a better experience for the people who needed this, could you describe how you started thinking about that process, specifically where you could see things that needed improvement? That way we get a sense of how we would ultimately get to what I'm going to get to in a moment, which is where
you know where doc query, has has landed but kind of tell us a, little bit about how what that that, pathway from I've identified the, landscape to here's a much better way of, doing it yeah so we kind of set, ourselves up with a few constraints, early on one of them was that we wanted, to make the product completely, self-service and our definition of that, was that a user can sign up on our, website without talking to anyone on, board onto the product and then evaluate, whether it works on their documents or, not the second thing is that we wanted, to support documents of any schema so if, we hadn't seen that particular document, type before that's totally fine we'd be, able to learn about it on the Fly and, third thing was that we wanted the, product to be incred L easy for a, non-technical user to to use and work, with and so what we did after performing, a lot of user research is is realized, that most of our users are either, beginner or Advanced Excel users meaning, we could safely assume that our users, were able to work with excel at a basic, level like entering data and some basic, formulas and stuff and then we could, also assume that some of our more, advanced users are really really, powerful Excel users and so in impira, even from the very start you've been, able to kind of create these really, complex expressions and formulas and, stuff and we realized the reason for all, of this is that and if you sort of tie, it back to what I was saying about, pre-trained models not evolving when you, notice something is wrong we really, wanted to create an experience where, users could easily see whether the, predictions were right or wrong and then, if the predictions are wrong or if they, feel compelled to give us feedback that, they're right they could correct or, confirm things and every time they do, that we drive the feedback into the, model and incrementally train it and so, because of that design we basically, structured the the machine learning, approach 
to be one that is very lightweight and something that can train and evaluate really quickly. So that's the overall approach for how we tackled it. I have what seems to me like it may be an odd question, but as you were talking your way through that, it's what came to mind: what are the things that you need to really be able to do with a document? With DocQuery being called DocQuery, for instance, what does it mean to query a document? That could be interpreted in so many ways, starting with something as simple as people doing Ctrl+F to find text in a document. Oh my God, I love this question. What are the things that matter? Because it occurred to me, I don't know what those are. Yeah, from a user standpoint there are a few different things they're really interested in, and then we can talk a little bit about the Impira technology and which part of that we hit and which part we missed until we introduced DocQuery. Users care about, one, integration. A really common workflow for a lot of different types of documents, and I'm sure you'll relate to this from your nonprofit as well: you receive documents through email, you have to interpret them to some extent, and that could mean reading the whole document or just eyeballing something and figuring out where it should go next, and then you need to take that information and shove it somewhere. What that looks like in a workflow like accounts payable, for example, is receiving an invoice through email, opening the invoice on your screen, and then manually keying the information into your ERP system. There's usually some judgment or interpretation that goes in as well; these things are never totally literal. You might be making sure that the purchase order number on the invoice is
actually one that's in your, database you might check that you know, shipping plus subtotal plus tax equals, the total and sending an email back to, the vendor if it doesn't or doing some, other stuff as well and so that's kind, of like the basic workflow the other, thing that people really want to do is, ask questions so you know not just sort, of run the formula of like does this, plus this plus this equal that but say, like are these two numbers equal or of, these 100 invoices which ones are due, next week or what was the most expensive, line item on this invoice and that it, kind of overlaps with search although, what we see is that people they're, looking for answers to to questions that, are fairly Analytical in in nature and a, lot of this is done you know very very, manually today it is so it's kind of, funny and it's funny that you that you, referenced uh me doing the nonprofit, thing, because these are agonies they're little, things that I know for a fact cuz just, to bring in my own experience into the, conversation my wife and I are doing, these these administrative T you know, things that we have to do we have a, group of volunteers and all but most of, the Admin falls to us and their task, that neither one of us is is, particularly trained in nor particularly, are they things we love to do and so as, you are describing that I was like oh, yeah that was a pain oh yeah that's, painful yeah that's painful as well so, it's interesting that you've identified, all of these pain points and I I realize, you're not specifically talking about, nonprofits or small organizations but, indeed they are things that definitely, impact us as users we do actually have, quite a few nonprofit users and, customers of impira so it's um we've, heard this feedback very directly uh, from them as well yeah so as you've kind, of recognized all this can you talk a, little bit about what empira has done, and how Doc query fits into that and, like within the scope of you've kind 
of laid out the problem, and you've laid out an approach to a solution. Could you talk a little bit about how that is realized in Impira broadly, and specifically in DocQuery?

Absolutely. If you think about what I mentioned with Impira, there are a few things that really stand out. One is that users can work with any field that they want; they can create any schema that they want. The second is that we really care about ease of use and simplicity. If you rewind back a few months, we were in a state where you could create whatever field you wanted, but you had to provide at least one label on the document: you had to highlight and click something to teach the model. Even though you didn't have to do it for every single format that you uploaded, you had to do it for most of the formats, at least one label. So if you imagine invoices, if you had 100 different vendors, you might need to provide 50 or 60 labels to teach the model about the breadth of vendors that you had.

So we started thinking: okay, how do we solve the problem of making it so you don't need to provide any labels in this case? Not only would that provide a much better user experience, it also would mean that we'd be able to address the long tail of variety a lot better. That means if you upload something we haven't seen before, or that doesn't look like something you've trained your model on, it still has a fighting chance at extracting the data correctly.

So we started open-endedly exploring. We pulled our head out of the sand of all of our Impira context and started exploring what else was out there. Actually, the first thing I did (I remember doing it on the car ride to the airport, from New York back to San Francisco) was manually copy-paste the text out of a bunch of invoices. To your point earlier, PDFs have all this structure, but I was just copying the text out and basically ignoring all the structure, and on Hugging Face's website I was trying out a few different models that are pure text question answering models, pasting the text into the website and asking questions like "what is the invoice number?" and "what is the total?" I was just blown away by how accurate it was. It wasn't perfect, not even like 60% accurate, right? But still: with no context about this problem, nothing to do with invoices, no training data about invoices, no PDF structure or anything like that, it was that accurate. That kind of blew my mind. If it was that accurate with something that was so distant from what we were doing, it meant a few things. One, we could probably do better if we put in a little bit of effort. Two, we had this epiphany that the framework of question answering allows a sort of infinite canvas of any fields or any questions that you want, which is very much in line with our product's philosophy. And three, because something that had never looked at any documents like the ones I was pasting into the text box was working so well, it probably meant that it would solve that generalization problem I mentioned earlier. So that experience (I still remember the car ride, and I still remember working on my hotspot and furiously playing with it) kicked off this whole idea.

That's very interesting, and I know this is fairly recent. You've actually hit a whole bunch of things that I want to touch on with a couple of follow-up questions. First of all, this was a recent announcement; it was only on September 1st that you announced DocQuery. Another thing you mentioned just now was Hugging Face. So I'm curious about several things; I'll kind of throw
several out to you. How has that model evolved as you've gone through this? You started in the visual space, you talk about large language models on your Twitter, and there's obviously an evolution of deep learning technologies that you're applying here. As you did that, how did Hugging Face fit in? We have a habit of talking about Hugging Face quite a lot on this show; we're big fans. So how did all of that come together: the evolution, Hugging Face, everything?

We're also big fans of them, and we've actually had the distinct pleasure of collaborating with them on this problem. Essentially what happened is they have this cool thing called a pipeline. A pipeline, for people like me who are not machine learning experts and barely understand what logits are, abstracts away all of that complex machinery and makes it really easy to work with models. The pipeline I was experimenting with is called the question answering pipeline; it's all over their website, and any model that fits the question answering framework works with it.

After we saw this, Richard and I chatted, and we were aware of some work out of Microsoft on a project called LayoutLM, which is a language model that, in addition to taking text as input, also takes bounding boxes for each word of text. That introduces geometric information into the model, which is actually super relevant to our problem. Just to give you an example: you might have the text "invoice number," and the actual invoice number might be to the right of it; if you turn that into plain text, then even a plain-text model could pick up on that relationship. On the other hand, you might have the words "invoice number" with the value beneath them, and some other text to the right of the words "invoice number," and without the bounding box information it's actually really hard for a model to pick up on that kind of relationship. So LayoutLM seemed like a really promising approach to solving that.

But for some reason, when we dug around Hugging Face, scoured GitHub, and searched Google at large to see if there was a question answering pipeline that worked with LayoutLM, we just couldn't find anything. It seemed to us like: wow, we had this awesome experience working with text-based question answering, and we know we're not the only people trying to work with documents, but there's nothing quite that easy out there. Maybe we should take the lead on this and make it just that easy to do document-based question answering as well. So we reached out to the team at Hugging Face, actually just by filing a GitHub issue, and they were incredibly receptive to the idea. Over a month of collaboration and working with them, we contributed the document question answering pipeline that's now in Hugging Face, plus a model that's pre-trained and MIT-licensed that you can play with, work with, and even put into production, and it actually makes it that easy.

So that's really cool. What motivated you to make this an open source distribution? From the business side: you've identified the problem, you have a new approach that you want to take, and you're taking advantage of really leading-edge technologies from Hugging Face in terms of their pipeline. What made you decide, as an entrepreneur, to release DocQuery as open source? What was the business motivation there?

Yeah, so I think there are maybe three reasons for it. The first is not the business motivation but just the personal motivation: when things are open source and they're easy to work with, it
removes all barriers to innovation. I think, selfishly, as someone who cares a lot about innovation, but also as a member of the tech community at large, being able to contribute to people innovating, and making it easier for them to innovate and play with ideas, is just very important to me.

The second thing, from a business standpoint, is that you could think about it in terms of distribution. In exchange for providing something that's generally useful to a large community of people, we have the opportunity to get some mindshare, and for them to familiarize themselves with us as a company: to experience technology that we create, and to form an opinion about how credible we are as product builders, in a way that doesn't require them to give us an email address or talk to a salesperson or anything like that. So purely from the standpoint of distribution, it's actually really valuable to us as a company to have the mindshare and attention associated with it.

The last thing is being confident about what our proprietary strategy can be in the context of having an open-source DocQuery, and there are a couple of things that make me really confident that we can still be a really successful proprietary product. The most important one is that when you as a customer use our product, you have this real-time data flywheel, which allows you to correct things, review things, and integrate things, and the models will keep improving just for you. Time and time again we've seen how important that is for people putting models into production in commercial settings. We also know that the ease of use, UI, security, integrations, and workflow involved are actually really hard to build and engineer yourself, so we know that's extremely valuable, and we feel confident in it. So for things outside of that, it opens up the possibility of open sourcing them and still being able to derive a lot of value from this core proprietary product.

You said something that struck me right there, about having that level of confidence and the fact that you already knew it was hard to build those things out. That is something that stops a lot of would-be entrepreneurs right in their tracks. You've dived into the deep end of the pool. You've said a couple of times in our conversation that you were not coming into this as a world-class deep learning expert yourself; you've built a team around you, obviously, but you were coming in as someone with an idea. What gave you the confidence, or the bravery, to dive into the deep end of the pool and do something that we normally associate with people who might have a different background, with all that heavy math and years of deep learning modeling? How did you get past that? Because there are probably a thousand people listening right now who want to be entrepreneurs; they've tried it, or maybe they've tried and failed. How did you get past those hurdles?

Yeah, so I'll give you the real answer and the inspiring answer. The real answer is: just stupid and naive. I didn't even think about that. I've learned and been humbled so many times by so many smart people over the past 10 years, and I'm still pretty stupid and still pretty naive, and I hope I stay that way for some time. That's the real answer.

The hopefully more inspiring version is that being someone who is not deeply familiar with the math, and not deeply entrenched in the existing workflow for how things operate, gives you a really unique perspective on what it would take to make something easy to use and simple enough that non-experts can take advantage
of it. I think a lot of what you're doing as an entrepreneur is bringing together two perspectives: one is the perspective of the people who you can feel need something, and the other is the perspective of the people who feel they can build that thing. As someone who's not a machine learning person, it was very easy for me to go onto Hugging Face's website and play with the question answering model, then try to read the documentation for the LayoutLM model, which had no examples and nothing that easy to use, and see the difference, simply because I didn't understand enough about the model complexity and so on. I was able to see that difference, and I think knowing more than I did at the time would actually have prevented me from doing so. Now that I've learned a decent amount about this stuff, I don't have that same experience when I'm reading through papers about models, or documentation, and I almost miss it.

I'm curious, as you've done that... and by the way, I really think what you said was quite wise, in terms of always being willing to learn and knowing that you're never there. You've had several really great insights in your process, one of which was the benefit of doing it as open source, which scares a lot of people off as a business model. But one of the things we know is that when you have a great product, you're solving a problem well, and you put it out there like that, it makes it very accessible, as you mentioned earlier. Adoption tends to be much higher when you do that, because people can dive in at whatever level they're comfortable with, give it a shot, and figure out how to engage you going forward. As you do that, what do you think the next steps are for DocQuery, and for Impira at large? And then I'll ask you a broader question after that. I'm curious, very specific to DocQuery: where do you think it's going to go over the next year or so from an adoption standpoint, and what is your short-term vision for it?

Yeah, so in the very near term, thanks to a fantastic flood of feedback from users, through GitHub and Discord among other channels, we have a pretty good sense of the types of questions that people want to ask about a document that they can't currently ask with DocQuery. The really beautiful thing about the question answering framework is that it actually encourages that creativity: people can really easily type whatever question they want and either get an answer or not get an answer.

There are two kinds of questions that people keep trying to ask that we're not able to answer about a single document with DocQuery. The first is what kind of document something is: for example, is this an invoice, or is this a purchase order, or is this an invoice from this vendor? The other kind is questions about tables. An example of a question about tables would be: give me all the line items on this invoice; or what are all of the descriptions; or what is the first or second or third description; or what is the highest total value; something like that. These are things we're actually fortunate to have a good amount of data for, and in the very near term we're basically expanding the question answering model to be able to support them. We have looked at other model frameworks, for example document classification as a framework, or visual table detection, and we have a lot of experience trying these things out within the Impira product, but we feel pretty
confident that we can basically expand the question answering framework to support them, and we just love the fact that it's an infinite canvas.

The next step from there, which I'm extremely excited about, is allowing people to ask language questions over multiple documents, a pile of documents if you will. That could be as simple as "what are all the invoices?" or "find me all the invoices in my Google Drive folder," or more complicated things like "which invoices are due next month?", "which invoices am I past due on?", or "which invoice from this vendor is the one most relevant to this contract?"

Is that farther out, though? Are you close to that, or do you think it's going to take a little while to get to that point?

Well, the model's training right now. There are a few moving parts that we're trying to figure out...

That was a great answer right there.

Yeah, I mean, it's literally training right now; I kicked off the most recent run right before the podcast. I'll give you the teaser for how it works. We've actually studied this problem a lot through Impira's product, because, long story short, people actually do this kind of stuff with Impira: you can extract fields, and then you can write queries over the fields, and we have a pretty powerful query language that makes all of this possible. What we've realized is that you can take natural language and basically compile it into a query, which consists of both relational algebra and other models, or questions to ask of documents. So we're cooking this framework and making it work, and we've seen some really exciting initial results. I don't think it's going to be too long before that's possible.

Then, as we think about it further, one of the things that we did (and I'd encourage anyone who's interested in the space to throw any idea you have at it) is open up a discussion on GitHub about what things you'd like to be able to type that have to do with documents. What's interesting is that a lot of the things people want to type are also actions: things like "organize all of these documents into folders by their document type," or "forward all of these things to this email address." I'm not exactly sure how we're going to tackle this in the open source part of the equation, versus our product, versus integrations with other products, because even our product doesn't do all these things. But purely from the machine learning standpoint, we're starting to think about what the right framework looks like, on both the machine learning side and the application side, to make it possible to type things like that.

The last thing I'll say is that as we push further into DocQuery, it's become increasingly clear to us that even though this question answering approach is incredibly relevant to working with documents, and it happens to work really well, this framework of having one or more pieces of data and asking questions about it is an incredibly powerful paradigm for people working with data. So our vision is increasingly becoming: make it really easy for anyone to ask anything of any data. How we sequence those parts together, we're still learning. I suspect one of the really great benefits of open-sourcing DocQuery is going to be engaging people in the community who have different flavors of this use case, to apply it in different domains. We probably won't build models that analyze video, but you could use, say, 75% of DocQuery to manage getting the question, semantically representing it, turning it into relational algebra, yada yada yada, and
someone really smart in the community could plug in the video aspect of it. That's kind of where I see the future of this, and I think open source in particular is going to be a really powerful vector for us to engage a much larger audience than our limited engineering bandwidth has the capacity to support over the long term.

Well, Ankur, that is very inspiring. It's kind of funny, because on a day-to-day basis many of us would think of document management as a fairly mundane thing, but it has such a huge impact on people's lives, in a billion small ways. In terms of making that better, it's definitely something that brings a lot of value to a lot of people around the globe. So thank you so much for coming on the show; it was a fascinating conversation. Thank you for what you're doing, thank you for taking the approach that you've taken, and as this little nonprofit manager finishes up, I'm excited to use it to make my life just a little bit better going forward. Thanks a lot.

Awesome. And send us any feedback you have; we'd love it.

All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague. Word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one. |
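The accounts payable checks Ankur describes early in the episode (verifying that the purchase order number exists in your database, and that shipping plus subtotal plus tax equals the total) can be sketched in a few lines of Python. The field names, the rounding tolerance, and the `validate_invoice` helper are all illustrative assumptions, not Impira's actual schema or API:

```python
# Minimal sketch of the invoice sanity checks described in the episode.
# Field names ("po_number", "shipping", ...) are hypothetical, not Impira's schema.

def validate_invoice(invoice: dict, known_po_numbers: set) -> list:
    """Return a list of problems found; an empty list means the invoice passes."""
    problems = []

    # Check 1: the PO number on the invoice should exist in our database.
    if invoice.get("po_number") not in known_po_numbers:
        problems.append("unknown purchase order number")

    # Check 2: shipping + subtotal + tax should equal the stated total
    # (with a small tolerance for rounding in the extracted amounts).
    expected = invoice["shipping"] + invoice["subtotal"] + invoice["tax"]
    if abs(expected - invoice["total"]) > 0.01:
        problems.append(
            f"total mismatch: expected {expected:.2f}, got {invoice['total']:.2f}"
        )

    return problems

invoice = {"po_number": "PO-1001", "shipping": 5.00,
           "subtotal": 90.00, "tax": 4.50, "total": 99.50}
print(validate_invoice(invoice, {"PO-1001", "PO-1002"}))  # [] - all checks pass
```

A real accounts payable pipeline would run checks like these over fields extracted by a model and route any failures back to a human, mirroring the judgment-and-review workflow described in the conversation.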
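The geometric cue discussed above (a label such as "Invoice#" whose value sits to its right or directly beneath it, which LayoutLM captures by consuming a bounding box per word) can be illustrated with a toy heuristic. To be clear, this is a hand-written sketch of why coordinates matter, not what LayoutLM actually learns:

```python
# Toy illustration of why per-word bounding boxes help document extraction.
# Each word is (text, x, y); x grows rightward, y grows downward.
# This hand-written heuristic is NOT LayoutLM, just a sketch of the geometry.

def value_near_label(words, label_text):
    """Return the word nearest to the right of the label, else nearest beneath it."""
    lx, ly = next((x, y) for t, x, y in words if t == label_text)
    # Prefer a word on (roughly) the same line, to the right of the label.
    same_line = [(x - lx, t) for t, x, y in words
                 if t != label_text and abs(y - ly) < 5 and x > lx]
    if same_line:
        return min(same_line)[1]
    # Otherwise fall back to a word in (roughly) the same column, beneath it.
    below = [(y - ly, t) for t, x, y in words
             if t != label_text and abs(x - lx) < 5 and y > ly]
    return min(below)[1] if below else None

words = [
    ("Invoice#", 10, 10), ("INV-0042", 80, 10),  # value to the right of the label
    ("Total", 10, 50), ("$99.50", 10, 62),       # value beneath the label
]
print(value_near_label(words, "Invoice#"))  # INV-0042
print(value_near_label(words, "Total"))     # $99.50
```

Flattening those words to plain text would interleave labels and values and lose exactly the relationships this heuristic reads off the coordinates, which is the gap LayoutLM-style models are designed to fill.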
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Production data labeling workflows | It’s one thing to gather some labels for your data. It’s another thing to integrate data labeling into your workflows and infrastructure in a scalable, secure, and useful way. Mark from Xelex joins us to talk through some of what he has learned after helping companies scale their data annotation efforts. We get into workflow management, labeling instructions, team dynamics, and quality assessment. This is a super practical episode!
Leave us a comment (https://changelog.com/practicalai/195/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Mark Christensen – LinkedIn (https://www.linkedin.com/in/mark-christensen-ceo-85795728) , Website (https://xelex.ai)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Xelex.ai (https://xelex.ai/)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-195.md) | 12 | 0 | 0 | Data prep is so challenging; it's probably the most challenging part of a project, and it's oftentimes because of the sheer volume of data that is required. Oftentimes really highly paid and talented data scientists are managing projects in a highly manual way, where their time and talent just isn't being utilized. I'd say that's probably one of the biggest challenges we hear data scientists describe: they're spending too much time manually managing project minutiae. Oftentimes it's the use of automation tools and project management platforms that can help them refocus their energies on higher-level priorities, and allow the software application or the platform to automate a lot of the workflow and let other team members manage a higher percentage of the workflow.

Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen (check them out at fastly.com) and to our friends at Fly.io: we deploy our app servers close to our users, and you can too. Learn more at fly.io.

Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm normally joined by Chris Benson, who is a tech strategist at Lockheed Martin, but he's doing great tech strategy things and traveling as part of those things, so he won't be joining today. I've got a really wonderful guest and topic to talk about today. We've been diving deep into a number of modeling-related things, in terms of Stable Diffusion and various things coming out, and I think it'd be good to shift and talk about some practical, data-related things; we are Practical AI, after all. I'm really pleased today to have the CEO of Xelex with me, Mark Christensen. His expertise is all in the area of data labeling, the workflows around it, and bespoke data processes. So welcome to the show, Mark; it's great to have you.

Thanks, Dan. Glad to be here.

Could you give us a little background on how you got interested in this space of data labeling and producing custom training data sets, and eventually built a business around it? How did that happen?

We didn't come out of a data science discipline, actually; we came out of healthcare. We've spent the last 17 years managing healthcare data at scale, specifically in the area of dictation and transcription. So it's an entirely different field, but the thing we had in common was managing large amounts of data at scale. In healthcare, what we would do is record audio: for 17 years we recorded audio from healthcare providers and then moved that audio through an enrichment workflow, which essentially was transcription. We'd have skilled medical transcriptionists in the States and around the world who would take the audio and transcribe it into the completed healthcare note. A few years ago we met with a friend of ours who was an owner of an NLP company, and he really liked our platform (we happened to be working with him on a speech recognition project) and told us he had a real need for this in managing training data for NLP workflows. That launched a discovery process that took about two years. We investigated the use case, determined that there was a really neat fit, modified the application over the next two years, and then launched our training data services workflow, called Xelex.ai. So that's
how we got into it.

That's super interesting, and I know with data in the healthcare space specifically there are some very interesting restrictions and very specific processes you have to make sure you're following. I'm wondering whether that perspective on data, the security and compliance around it, shaped how you think about handling data for some of these use cases. Any thoughts there?

Yeah, that's a great question, and you're totally right. Data security is so paramount in healthcare, and my colleague at the NLP company cited that as one of the specific reasons why the workflow we had in healthcare was a great overlay for data training. The data has to be audited: data should have a couple of different audit trails on it. Data should be encrypted both in transit and at rest. Data shouldn't reside on the devices of the people involved in data labeling. All those things were a perfect fit and carryover between our healthcare workflow and an AI workflow.

Interesting. I'm wondering, maybe as you talked to this NLP colleague, or as you've worked with clients around the world on data labeling projects: from your perspective, how are data scientists most often labeling their data these days, and where do they encounter challenges because of how they're approaching data labeling?

Yeah, the greatest challenge we always hear, and it's an obvious one, is about getting data that's accurate enough to improve the model, especially in specialty use cases or, say, new language modeling, where a click-worker approach doesn't hold up; it just doesn't work as well. For that reason, for smaller projects, maybe the size of a few hundred to a few thousand data objects, a lot of our clients try to do the work in-house, primarily for the sake of retaining quality control. But for larger projects it's just too hard to do everything in-house, so it winds up being a combination of in-house team members and outside vendors doing the data labeling.

For our purposes, in the approach that we took, we decided that rather than commoditize the role of the editor or the annotator, we'd invest more in training and compensating our labelers as a means of building long-term relationships. We found that's an essential part of maintaining the consistency of the data quality, and of making sure the data quality remains at the accuracy levels our clients require: being able to have relationships with annotators whom we can trust, that aren't just commoditized relationships.

Interesting. So have you encountered cases where clients come to you and say, "hey, we tried to throw up a crowdsourced task and get a bunch of labels, we invested a lot in that, and it didn't really help us that much"? Do you think those cases are maybe due to unclear instructions to the labelers, or to variety in the motivations of those data labelers? What do you think leads to some of those quality issues, from your perspective?

Yeah, the commoditization of the annotation workforce, I think, can be a project killer, and it's one of the key reasons a very high percentage of projects that launch stall and never complete. We've talked with companies that try that approach, and they wind up iterating the data so many times to try to get an accurate set of data they can use that they ultimately wind up going to a more bespoke approach, where the teams are more handpicked and more highly trained, even though the costs are higher, in order to finally
wind up with a data, set that is useful so yeah I think that, is one of the key problems that plague, data aggregation projects and that is to, wind up with a clean set of data that, can be done kind of on time and on, budget yeah I I I know Mark that um so, like in our projects and we've we've, done some speech projects as well like, one we we've struggled with this also in, terms of like the data quality and I, remember in one case like really we were, saying well we need five labels for each, for each sample because like the, variability between labelers is such, that like we need either a majority vote, or we need to analyze how much they, agree one label or to the other or, something and of course that gets that, gets really expensive over time could, you speak a little bit to like you, mentioned this training focusing on, training and kind of bringing in this, like upskilling the these uh data, labelers what does training annotators, look look like in your in your projects, and what maybe have you learned about, what's important as you as you are, training D data labelers yeah I recently, did a paper called improving model, accuracy through better translation and, it was really just an attempt to lay out, some tips for translating Source texts, for natural language processing models, and one of the items I mentioned and, it's something that we've seen as we, worked with teams around the world doing, language projects is that it's important, for the editors and those involved to, understand the use case and you know, that might seem like it's perhaps too, much information or an unnecessary, amount of information to share with um, the editors or the annotators but once, they understand the project description, or I should say the better they, understand the project often times it, really does translate into higher, quality data and so I I encourage, companies to share that information with, annotators so that they are more vested, in the work that they're doing 
And as an example of how a project description might be written up as part of the guidelines for the annotators, it might be something like — and I'll just cite briefly a paragraph out of the document — this project involves training a software application to automatically assist call center agents with their tasks and increase their efficiency. For example, if the customer says "I have a warranty issue," the agent software application can respond by automatically opening the customer's warranty clause, reducing the time required for the agent to assist the customer. The translation project consists of a set of English-language scripts that reflect some of the typical conversations that occur between call center agents and customers. The purpose of this project is to translate those scripts into a target language in order to add NLP-driven process automation into the call center's workflow, thereby adding new efficiencies for the agents and company. And so giving that kind of insight and detail to the translators and the editors enables them to have more buy-in on the project and a better understanding of how their work is going to be used. I think that there's a lot of content about hyped AI data science things, but in reality what people are really wanting more content around is this sort of practical concern — like, hey, my data labeling isn't actually working, right? How can I fix that problem? And so I think, from my perspective at least, there's an eagerness for this sort of conversation, where people have a lot of the other but they don't have enough practicality in their content. So I think that bodes well for this sort of conversation, from my perspective. Okay, that's encouraging to know, because we're the guys on the process side, you know — so the sexy work is being done by you and the data
scientists; we're more the guys down in the boiler room, you know — we're kind of the operations team that makes the process happen but doesn't necessarily know a whole lot about the data science side of it. Yeah, I think that in reality, though, the data side is what is driving things, so I think that's good. [Music] Well Mark, we've talked a little bit about the importance of training annotators, and we've talked a little bit about specific data concerns around healthcare and other things. I'm wondering, from your perspective — since you're really plugged into how people are managing their data workflows and their data labeling — what does the current data labeling, annotation, and tooling landscape look like? What choices do people have, and what does that landscape look like right now? From my perspective, the landscape seems to be rapidly changing, but I would say that off-the-shelf models are being used more often. They continue to improve, and they're used either as-is or with in-house tuning. The projects we see, and the projects we're getting more involved in, are kind of specialty applications where off-the-shelf models aren't accurate enough or don't exist. Cases might be things like medical documentation labeling, or sentiment and intent projects that have a highly customer-specific language or vocabulary that can't be picked up by off-the-shelf models. And for specialty models, I'd say training data is needed in cases where unique vocabularies warrant highly specialized, bespoke model tuning. An example might be gathering business intelligence from call center interactions, where the client is seeking to obtain business intelligence through an NLP automation process and they need a model to be custom tuned, and they need the model to
be custom tuned to meet their business objectives. Another area would be, I guess, new language modeling, and that's exciting and encouraging to me, because we're starting to see an uptick in interest in other major world languages where models don't exist in a production environment. On the tooling side, I'd say we've seen companies both big and small relying on a hybrid of in-house data labeling, in-house plus clickworker-driven labeling, and fully external third-party labeling. But what we're not seeing is AI companies that have systems in place to manage those different approaches in a cohesive way. So there's a lot of manual aggregation; there's a lot of one-off coding that gets done to unify the results from those hybrid sources. So to answer your question on the tooling side, this is one area where the tooling is broadly not keeping pace with the growth of the industry. And I know — and this comes from personal experience — it's one thing to get data labeled, to gather a label; it's another thing to develop a workflow around that that's integrated into your systems, integrated into your backend. What do you think are the challenges facing data scientists around this workflow side of things, and the bespoke things they have to do to integrate data labeling into the wider set of things that they're doing? Yeah, data prep is so challenging — it's probably the most challenging part of a project, and it's oftentimes because of the sheer volume of data that is required. And what we see, not just at small companies but even at big companies, is that highly skilled and oftentimes really highly paid and talented data scientists are managing projects in a highly manual way, where their time and talent just isn't being utilized as efficiently as it could be. Senior data scientists are doing things like vetting samples from annotators and doing quality scoring on annotators, and I'd say that's probably one of the biggest challenges we hear data scientists describe: they're spending too much time manually managing project minutiae. Oftentimes it's the use of automation tools and project management platforms that can help them refocus their energies on higher-level priorities, and allow the software application or the platform to automate a lot of the workflow, and allow other team members to manage a higher percentage of the workflow. So I think that's one of the things we're seeing. And along with that, how does Zelix specifically approach the data labeling problems that you've described? We've talked about workflows, we've talked about the custom setups that are needed for certain tasks, we've talked about a variety of things — how has that filtered down into your approach specifically, and the approach that Zelix takes? Yeah, workflow platforms are all about moving off of spreadsheets and manual processes into processes that scale better. That's what we do — we're focused on the production process, so everything from training and managing the skilled labor, to meeting deliverables on time and at quality levels that clients expect, to keeping projects on budget. Those are all things that training data services companies like us do and that we bring to the table. Our focus is on making complex workflows easier. And another part — this was interesting, something one of our clients said to us one time — is that they needed all of the stakeholders at the company to be able to see what was going on with the project, and our platform enabled them to do that; other platforms, too, enable stakeholders to do that. And there are all kinds of stakeholders at the production level and the commercial level for projects, because projects on the commercial side, you know, they're typically done on the
request of a client and in service to a client, and so all kinds of different people outside of the data science team are involved — you know, the sales team, the ops team, the procurement team, the quality assurance team — and everybody needs to know what's going on. They want to see if the project's on budget, they want to see if the project's on time, they want to see if the quality thresholds that the client has set are being met. And so a platform gives everybody that kind of visibility, and I really enjoy and appreciate being able to do that for a company, because it keeps all the stakeholders in the loop, and at the same time it allows the data science team to not get bogged down managing minutiae manually. So that's one of the neat things that we like to deliver. And then on the services side, the approach is always about managing the workforce successfully, and success in data science and in projects like this is measured in being able to deliver a project on time and on budget and at the accuracy levels that have been determined or set by the client and by the service provider as the goals or the project's objectives. An example of how this can backfire is when service providers like us enter, let's say, a new language or a new project area — maybe their client has come to them and said, can you do a data labeling project in this language? And of course the knee-jerk reaction is always, sure, we can do that. But if saying we can do that involves hiring a third-party vendor in that target country or target language to do the project, and it's done in a scramble, it can really backfire. Hiring a third-party vendor in cases like that can result in kind of a black-box approach, where you're unable to adequately measure quality and where you're unable to adequately manage deliverables, so that projects wind up running late, or
projects are delivered with poor quality data, and then you're left scrambling to do those corrections on the data internally, or finding another source to do those corrections for you — and it's a recipe for disaster. So for us, the way we mitigate that when we move into a new language, for example, is that the initial step is to do the hires and do the training ourselves, so that we have our own team and we're not dependent on a third-party vendor for that labeling effort. That way, even though it's going to take us longer, and the cost might be higher — and there are cost sensitivities that are realities — the truth is, if you're using a third-party vendor and working out of a black box, chances are you're not going to be able to deliver the project on time and at cost, and so your cost and timeline are going to be affected anyway. And so we've opted for taking an approach that's more expensive to our clients but that ultimately delivers projects with consistently higher quality data, on time, meeting the turnaround deliverables, even though the price might be a little higher. Yeah, and I think that's really good and practical advice for the whole community that's trying to do a variety of these data labeling projects: at the start of these projects, think not only about gathering samples but also about how your workflow is going to be managed and how your annotators are going to be trained. Thinking about that stuff upfront, and taking time or spending more money on getting that in place from the start, might actually save money in the longer term if you're not doing as many iterations of labeling. Right? If you start and do a bunch of labeling and then you don't get the quality that you need, or you get some sort of unexpected biases or other things in your data, that could cause more
problems down the line. And one of the things — maybe this isn't specific, though I guess it could lead to specific quality issues — but one of the things that is hard for me, as a technical introvert who's maybe not the most people-oriented person in the world, is thinking about all of the team dynamics that happen on a data labeling project, and setting up a maybe disparate and kind of distributed set of labelers and vendors for a data labeling project. How can the problems associated with those sorts of dynamics be addressed in this kind of online, distributed labeling environment? Yeah, you're totally right — there are inherent challenges in managing an online workforce, but many of those can be mitigated through a well-developed, robust workflow application. You know, things like centralized controls, giving managers total visibility into what's happening in the workflow at any given moment — the status of data objects as they're moving through the workflow, and how you're doing against your timeline for deliverables. Those are the kinds of things that software is really good at managing. As I mentioned earlier, we've seen cases, even with really large companies, where pretty complex projects were still being managed on a spreadsheet, and when you're doing that there's almost no ability to manage the workflow effectively. [Music] Well Mark, given the sort of team dynamics we've been talking about, and the variety of tasks that Zelix and others are exploring in the space — from standardized machine learning tasks to more custom ones — I'm wondering what you would say about proper ways to set up maybe manual and/or automated QA types of workflows associated with your data labeling? Well, I can tell you a little bit about what we do. The first is to establish the ground truth version of the data object, for all data
objects as they're moving through the workflow. Once we establish the ground truth data object, we're able to measure the distance between that and the work that the editors are doing, and that helps to generate a whole lot of different metrics for us — who needs additional training, how pay might be affected, how our costs are affected, whether data objects are moving through the QA workflow more for some editors than others. The second thing that we do is a multi-level QA workflow, so that work gets automatically routed, and that could be in cases where we've got new hires, or maybe editors are being flagged via our auto-check process for certain error types. And then thirdly, we run an error script that dynamically checks against the known error list, so that those items are routinely recycled through the workflow to be re-edited and re-QA'd. So those are some of the typical things we do. Judgments, of course — multiple judgments on data objects are really important, and using multiple layers of judgments is also important; we do that through the QA workflow process as well. It's been extremely helpful for me to think through some of the dynamics and the workflows associated with data labeling — I think it's extremely practical and very useful. I'm wondering, as you continue to be more and more involved in this space of data labeling and interacting with clients in the data science and AI space, what excites you about the future of data science and AI practice, and maybe within that, what could easier data labeling enable in the longer term? Well, when you look at the number of datasets that have been developed, and models that have been developed so far, it's overwhelmingly all English-based, and in that regard it's probably largely focused on the US market — and, you know, we're overwhelmingly the largest economy in the world, so that
makes sense that it would be that way. But what I'm excited about is seeing the tools and expertise that have been developed in English modeling now being used in other major world languages, and specifically in developing economies, where AI can be used to help developing economies move forward. All of those nations are generating customer and employee experience data in the form of things like customer behavior data, online reviews, sentiment and intent data, and medical data — things that are in unstructured formats where AI can be used in a beneficial way. Well Mark, I'm really happy that you brought up this side of the impact of data and NLP across the world's languages. As our listeners will know, I'm very passionate about this topic, and I'm really excited anytime we get to talk about it — it's something that excites me for the future as well. I've really appreciated you taking time out of your work with Zelix to help us parse through some of these data labeling challenges and the workflows associated with them, and I'm looking forward to continuing our conversations over the coming months as I hit my own data labeling issues. So thanks, Dan, very much — I really enjoyed it. [Music] All right, that is our show for this week. If you dig it, don't forget to subscribe — head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now — we'll talk to you again on the next one.
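The three QA mechanisms Mark described earlier — scoring editors against a ground-truth data object, routing low-scoring work into a multi-level QA queue, and an error script that checks a known-error list — can be sketched roughly as below. The function names, the similarity measure (Python's difflib), and the 0.9 threshold are all illustrative assumptions, not Zelix's actual implementation.

```python
from difflib import SequenceMatcher

# Hypothetical quality threshold; a real system would tune this per project
QA_THRESHOLD = 0.9

def quality_score(ground_truth, edited):
    """Similarity in [0, 1] between the ground-truth object and an editor's work."""
    return SequenceMatcher(None, ground_truth, edited).ratio()

def route(ground_truth, edited, known_errors):
    """Route a data object: pass it, send it to QA review, or recycle it for re-editing."""
    if any(err in edited for err in known_errors):
        return "re-edit"        # error script: hit on the known-error list
    if quality_score(ground_truth, edited) < QA_THRESHOLD:
        return "qa-review"      # multi-level QA: low score gets a second look
    return "pass"

print(route("the warranty clause", "the warranty clause", ["teh "]))   # pass
print(route("the warranty clause", "teh waranty clause", ["teh "]))    # re-edit
```

Aggregating these per-editor scores over time is what yields the training-needs, pay, and cost metrics mentioned in the conversation.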
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Evaluating models without test data | WeightWatcher, created by Charles Martin, is an open source diagnostic tool for analyzing Neural Networks without training or even test data! Charles joins us in this episode to discuss the tool and how it fills certain gaps in current model evaluation workflows. Along the way, we discuss statistical methods from physics and a variety of practical ways to modify your training runs.
Leave us a comment (https://changelog.com/practicalai/194/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Charles Martin – Twitter (https://twitter.com/charlesmartin14) , GitHub (https://github.com/charlesmartin14) , LinkedIn (https://www.linkedin.com/in/charlesmartin14)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• WeightWatcher (https://github.com/CalculatedContent/WeightWatcher)
• Talk from the Silicon Valley ACM meetup (https://www.youtube.com/watch?v=Tnafo6JVoJs)
• A deep dive into the theory behind WeightWatcher (a talk from ENS) (https://youtu.be/xEuBwBj_Ov4)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-194.md) | 4 | 0 | 0 | All natural data has a power-law structure, a fractal structure to it. The way neural networks learn is they learn the multifractal nature of the data, and that's why they work so well on things like text and images, and why they don't work great on tabular datasets. So the data is correlated; you're trying to learn the correlations, and frequently you're trying to learn very subtle correlations you couldn't find in some other way, using some simple clustering algorithm or an SVM or something like that. So what we're doing is measuring the fractal nature of the data, and every layer of a neural network gives you some measure of the fractal properties at that level of granularity. And so alpha is like a measure of the fractal dimension, and what we know is that it measures the amount of correlation in that layer. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already — head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen — check them out at fastly.com — and to our friends at Fly.io; we deploy our app servers close to our users, and you can too — learn more at fly.io. [Music] Welcome to another episode of Practical AI. This is Daniel Whitenack — I'm a data scientist with SIL International — and I'm joined as always by my co-host Chris Benson, who is a tech strategist with Lockheed Martin. How are you doing, Chris? I'm doing very well, Daniel — how's it going today? It's going well. You know, I've been training quite a few models recently — NLP models for question answering and other things — and one thing that always comes up is: how long do I train this thing? Am I
overtraining it, am I undertraining it, how do I test it appropriately, am I testing it right, what else should I be doing? All of these thoughts are running through my mind, and I'm pretty excited, because today we have joining us Charles Martin, who is an AI and data science consultant with Calculation Consulting — and this is basically one of the things that he thinks about a lot and builds tooling for. So welcome, Charles. Hey, great, thanks for having me, guys. Yeah, well, the thing that I saw of your work that really interested me was this WeightWatcher tool, which is an open-source diagnostic tool for analyzing neural networks without the need for access to training or even test data — which is super interesting, and I want to get into all the details about that. But maybe just describe for us the pre-WeightWatcher story: what led up to WeightWatcher, what were the sorts of motivations going through your mind, and maybe the things you were encountering in your own work that led you to think about this problem? Sure, sure. So I do consulting in AI, and I had some clients working with me to do text generation — this is years before GPT and all these amazing diffusion models that exist — and we were training LSTMs to generate text, things like weight loss articles and reviews on Amazon and stuff like that. And I realized that as I use these models, I can't really evaluate them, because if you're training a classifier, like an old SVM or XGBoost, you can look at the training accuracy; but if you're trying to design a model to generate text, or tackle some other natural language processing problem — like, say, designing embedding vectors for search relevance — it's really hard to evaluate whether your model is converging or not. Now, I had studied statistical physics of neural networks when I was a postdoc in theoretical physics, so I knew that there
are techniques from physics that make it possible to analyze the performance of these models and to estimate how well they're performing. And what I realized is that nobody in the machine learning or AI community really knows about this stuff, because it's from the early to late '90s, when a lot of this research was done, and the people doing AI and machine learning are not theoretical physicists — they're computer scientists, and they don't know about these works, I said. You know, except for you and Daniel there — got you, yes. You know, it's a very broad field, and there are so many people doing AI now that it's really fun, because there are so many different backgrounds. I was at a conference maybe nine or ten years ago and I met an old friend of mine, Michael Mahoney, who's a professor at UC Berkeley. It was at MLconf — it was run at that time by the folks at Turi; they had a recommender product and were eventually acquired by Apple. I was talking to Michael, and I said, you know, there's a lot of theory around deep learning that is very similar to what we see in protein folding — my adviser, actually, he and his student John Jumper developed the first version of AlphaFold. What happened was Google — excuse me — they hired John Jumper, who was the student from Chicago, and basically souped up his thesis, and that's where AlphaFold comes from, this amazing technology from DeepMind that can predict protein folding. So there was a lot of theoretical work I had done as a postdoc, and I was talking to my adviser about some of the stuff they were doing in protein folding, way back before AlphaFold was released, and I thought, you know, I think I'd like to try my shot at doing research again and see if I can develop some theory that would allow me to understand why deep learning works. And
that project — it's been about seven years now of research — and that's led to the WeightWatcher tool. Cool. So it's probably very typical for people to think, oh, I'm going to evaluate my model, I have a test set. But could you describe two things: one, why — from your perspective, at least in certain situations — a test set doesn't give you the indication of the behavior or performance of a model that you're wanting, and then how that connected to these things from the physics world? Right. So let's say we're training a model to generate text — there's no test set, right? You have to read the text and ask, okay, does it look human or not? And that's sort of where the first problem came in: there are many problems like that when you're generating things. Another would be, let's say you're doing search relevance — I'm trying to predict what somebody wants to click on. I have clients like Walmart, for example; we build these systems for them. It's very expensive to run an A/B test, so you can test things in the lab, and you can make a model, like an SVM model, to predict what people will click on, but you don't really know how it's going to perform until you put it in production — and there are all sorts of biases that exist in the data, like presentation bias: people tend to click on things based on how they're presented, and that screws the model up. So there are many cases. Another good example is quantitative finance, when you're trying to predict the stock market, and you have models where you would like to train some neural network to learn something about how the news predicts the market — but if you train it directly on the market, you'll overfit it, always. And so you have to have some way of evaluating whether your models are converging properly or not without just looking at an out-of-sample test set. A lot
of data is out of sample, or you can't really evaluate it without human judgments, or it's very expensive. Would you infer that we're probably seeing a lot of practitioners running into these kinds of issues over time? In a lot of cases, if you look over the last few years, as everyone's ramped up in the space and been learning how to do different types of deep learning training — do you think that, in terms of those accuracy issues, a lot of practitioners are missing it altogether, or do you think they know it's there and they just don't know how to solve it? Can you give us the lay of the land? Well, let me give you an example. There's a recent paper that came out of Google DeepMind on the scaling properties of very large language models, and it showed that what we thought we knew about large language models from two years ago — from a paper that OpenAI wrote — was totally wrong; they misunderstood how the scaling properties work. And the question is things like: when you have a model and you're trying to train it, should you be trying to optimize the hyperparameters, or should you be adding more data? You can think of it in that sort of very crude sense. Essentially, what was happening is that OpenAI was training these large language models, and they didn't realize that they should be adapting the learning rate to the dataset size — and when you adapt the learning rate to the dataset size, you get very, very different results than if you don't. It looks like — and we know — that a lot of these large language models, like BERT for example, are just not properly converged; there are a large number of layers that are simply undertrained. And I think that, basically, with the theory that people are using, there's no way to look at a model
and ask: how close are you to convergence? If you think about something like an SVM — let's go back, you know, I'm an old guy, let's go back 10 or 15 years. When we run SVMs, there's something called the duality gap. You can look at the duality gap in an SVM and ask how close you are to the bottom — it's a convex optimization problem, and you can tell how close your solver is to actually being at the optimal solution. That's theoretically known. So it's somewhat puzzling that now, with deep learning — and people understand that deep learning is sort of like a convex optimization, or a rugged convex optimization, because they know you don't have local minima (there's an issue that there are lots of saddle points, but no local minima) — yet there's no theory which tells you whether you've converged or not. So it's like, what's going on? People are trying to solve this, and I think this is where you start training a model and you don't know: have you trained it enough? Do you need to train it more? Let me give you a really practical example. We have a user who's using WeightWatcher to train semi-supervised models to determine whether the land you own qualifies for carbon credits, right? So they're trying to use AI to help with climate change, and one of the biggest problems they have is: how much data should we add to the model? We have a model, we have data — and acquiring good, high-quality labeled data is very, very expensive. You could easily spend millions of dollars on a dataset, maybe 10 million; I know self-driving car companies will easily spend $10 million on a dataset. So it would be nice to know, given the model that you have, whether adding more data will help. And we can answer that question with WeightWatcher. If you can kind
of talk a little bit about some of the underlying — because you're pointing out that there's a lot of opportunity for people to not be optimal in their approaches — it almost raises a bigger issue that we may have as a community, in terms of how we solve some of those problems at large, aside from the specific tools. What are you thinking in terms of how people should approach these problems differently? Well, look, I think the first thing you have to ask, as you're beginning to train a model, is: is my model big enough? Is it small enough? Do I really want to spend millions of dollars doing brute-force hyperparameter tuning? Here's a basic question that comes up with every client — forget about deep learning, say an SVM: should I add more data, or should I add more features? Let's say I have XGBoost — should I add more data, add more features, or do more hyperparameter tuning? It's all expensive; what direction do you go? These are difficult problems. And if you add more data: is the data the right quality, is the data mislabeled, are there duplicates in the data, is the data too similar to the data you've already added, is it too different from the data you've already added? Basic questions — very, very basic, broad-level questions — that we have almost no answers to. Everything is brute force. If you want to train a neural network, you go out and you get Weights & Biases, or you go to Google Cloud, and you just spend a fortune on hyperparameter tuning. Do you really have to do that, or isn't there something better you could do? Here's another example: when we started this project, there were maybe 50 open-source pre-trained models, right? VGG, ResNet, things like that. You go to
Hugging Face now and there are over 50,000. Which one do you pick? Should you pick BERT or something else? Everyone uses BERT, and BERT is highly under-optimized. If you compare BERT to XLNet, XLNet is much, much better; not only do the academic papers show that XLNet performs better on at least 20 different metrics, you can use Weight Watcher (I have a blog post on this) and see that it's just night and day between XLNet and BERT. But is it worth the money to try to optimize XLNet? Why does everybody focus on BERT? Because it has a cute name and it's made by Google. It's really hard to know which model to pick, and these models are very hard to improve. So there are a lot of broad open questions like this: which model do I pick, how much data should I add, how do I evaluate the quality of my data, do I really need to do brute-force searching on everything? If I put something into production, how do I know if the model breaks? I don't know if you guys have worked in production environments; I work in environments where things break every six weeks. Thanksgiving comes, the model's broken; Christmas morning, the model's broken. How do you monitor these things? So I think machine learning, and certainly AI, is in the infancy of engineering, certainly compared to where we are in software engineering; we're 20 years behind software engineering.
[Music]
So, Charles, it's interesting, these scenarios that you bring up, because it's definitely something that happens. In an actual real-world setting, like with my team, it's like: we have what data we have; what model is appropriate that fits that level of data? Or maybe you have a whole bunch of data, and the question is, do I need all of it for this model that I've already kind of
decided on? All of these sorts of things. And then you get to the training questions that you brought up. I'm wondering if you could give us a high-level overview, because if I'm understanding right, the main tool that's come out of this train of research you've been working on is the Weight Watcher tool. Could you give us a broad overview of what the tool functionally does, and where it fits into a researcher's or a developer's or a data scientist's workflow? Sure. The tool can be used both when you're trying to train AI models and when you're trying to monitor them in production. From a training perspective, the tool gives you insights into whether your model has converged, and it does so on a layer-by-layer basis. I'm not aware of any other technology that allows you to look at the layers of a neural network and ask whether one layer has converged and another layer has not. There are cues you can look at. You can look at something called the alpha metric, which is the amount of correlation in the model. Usually, if you have a computer vision model, your alpha should be down around two; in natural language processing Transformer models, alpha should be between three and four. If your alphas are larger than that, chances are the layer is not properly trained. You can then visualize each layer and look at its correlation structure, and that correlation structure should be fairly smooth; it should be linear and smooth on a log-log plot. If it's choppy or has a strange shape to it, something's wrong. If your layers have lots of rank collapse, lots of zero eigenvalues, something's wrong. We've identified something called a correlation trap, which in deep learning language means you didn't clip your weight matrices, you didn't regularize that
layer correctly. So you can use the tool during the training of a neural network to monitor the training. You can find layers that are basically broken, that are not trained correctly. Think of it like you're building a house and there are cracks in the bricks: you put a brick in, it's cracked, you need to replace it. You can adjust regularization up and down on a layer; you can adjust the learning rate up and down on a layer. You might find that when you're training a model, some layers are well trained and begin to overfit, so you might want to freeze them. People talk about early stopping; I talk about early freezing. You might freeze some of the early layers and let the later layers converge. Weight Watcher allows you to do all of this. It's very much something you have to do by hand; you have to go in and visualize it and see what's going on. But it allows you to inspect your models to determine whether they're trained correctly. It also allows you to look at models in production. If you're deploying AI models in production, and maybe you're retraining your models regularly, it gives you a warning flag, like a model alert system, that would tell you, hey, you broke this layer. We have an example in our paper in Nature where we show that in one of the Intel systems, they applied a data compression algorithm to compress the model to go on the hardware, and they screwed up one of the layers. You can see this with Weight Watcher; it will flag it for you. So as you're deploying models in production, it can monitor them for you, and remember, it doesn't require any data, so it's a very light touch, a very simple integration into your MLOps monitoring pipelines. I think of it as sort of an AI uptime tool; it gives you an early warning.
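As a rough illustration of that kind of data-free production check, here is a minimal sketch that flags a layer whose spectrum has collapsed, say after a botched compression step like the one described. This assumes nothing about WeightWatcher's internals; the layer names, the 1e-6 tolerance, and the 50% threshold are made up for illustration.

```python
# Sketch: scan each layer's weight matrix for rank collapse (a large fraction
# of near-zero singular values), using only the weights, no training/test data.
import numpy as np

def rank_collapse_fraction(W: np.ndarray, rel_tol: float = 1e-6) -> float:
    """Fraction of singular values that are effectively zero."""
    sv = np.linalg.svd(W, compute_uv=False)
    return float(np.mean(sv < rel_tol * sv.max()))

rng = np.random.default_rng(1)
layers = {
    "dense_1": rng.normal(size=(128, 128)),  # healthy toy "layer"
    "dense_2": rng.normal(size=(128, 128)),
}
# Simulate a botched compression step that zeroes most of one layer's spectrum.
U, s, Vt = np.linalg.svd(layers["dense_2"])
s[8:] = 0.0
layers["dense_2"] = (U * s) @ Vt  # reconstruct U @ diag(s) @ Vt

for name, W in layers.items():
    frac = rank_collapse_fraction(W)
    status = "ALERT: possible rank collapse" if frac > 0.5 else "ok"
    print(f"{name}: {frac:.0%} near-zero singular values -> {status}")
```

Because the check only touches the weight matrices, it can run as a lightweight step in a monitoring pipeline, in the "AI uptime" spirit described here.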
So this is how you use the tool: you can use it during training, to make sure your models are converging well (if they haven't converged properly, you go back and fix them), or you can use it after training, in production, to monitor for problems. So I was trying to think of analogies in my head while you were talking, and you gave a good one with the house and the cracks. One of the things I was thinking about: you mentioned BERT earlier, which, no doubt, at the time BERT came out was quite an advancement, and many people have built amazing things on BERT. But I was thinking about where we've come from there. My wife owns a manufacturing business, and they've got this principle in manufacturing: find the current biggest bottleneck in your process and address that; as soon as you address that, there's going to be a next biggest bottleneck that you address next, and you just keep working your way through. So I'm wondering: BERT obviously was a good advance, but you can analyze that model and see where the next biggest offending area is, and address that. And I was also thinking about the tool you were mentioning and all the things you could do with it. You could probably analyze your model in development for years, fixing all sorts of things, but at some point you have to ship your model, right? So maybe there's this process, and I'm wondering your thoughts on this, of using the tool to find the worst offending parts of your model, addressing those, and maybe at a certain point you get to a point of diminishing returns, or something like that. Yeah, this is a coarse-grained tool; it's not meant to go in and study epoch by epoch and try to
fine-tune exactly what's going on. I'm really glad you brought this up, Daniel, because when you work with the academics, they want to use it as a regularizer, they want to optimize the loss. No, no, no, that's not it; it's an engineering tool. It's designed to go in and find out where the cracks are. I don't know if you guys know about the Millennium Tower in San Francisco. My little nephew is all into construction, and he's always talking about how they've got to tear the Millennium Tower down, tear it down, junk it, because they built this tower and it's like the Leaning Tower of Pisa; it's tilting. And if you go into the basement of the Millennium Tower (and this is multi-million-dollar condos; I think probably Marissa Mayer may own a condo there; I mean, it's ridiculous), downstairs you look, and there are cracks in the steel. It's like, guys, the thing's going to fall down; it's cracked. This is what Weight Watcher does. You go into your models and ask: are there gross problems that should not be there? This layer is overtrained; this layer suggests that the data is mislabeled; this layer has a correlation trap. That's what you're trying to do. And frequently in engineering you're under time constraints, so you've got to get this thing out into production, and you want to make sure it's not crazy. Weight Watcher allows you to detect problems that you cannot detect in any other way, and that's the key: it allows you to find a major problem. So one of the things I wanted to ask you, because you said something a moment ago, circling back to that, that I'm very curious about, to bring me and other people in our audience along that may not be as familiar with it. I often rely on
Daniel's expertise on this, and I want to rely on yours. You mentioned, when we were talking about testing those layers, going back to the alpha: you specified ranges of two for the visual models and three to four for natural language models and such. I'm assuming that's one of the mechanisms you're using in the software. Can you talk a little bit about what other mechanisms are there along with that, and maybe how alpha is used? If somebody is not familiar with that concept, what is it about alpha that identifies that a particular layer might be brittle, in the sense that it's not fully converged? How are you approaching that? Bring us along; try to catch us up with you on how you're thinking about why it works. Yeah, why does it work? What is it about alpha and the other things you're using in the software that yield the level of insight you're describing? So, what we know from where deep learning works: deep learning works on natural things, natural images, voice, text, things that are really part of the natural world, and the natural world exhibits a multifractal structure. Look at a tree; remember the L-systems from computer science, or some of Mandelbrot's work; most natural systems have it. Or just think about text: Zipf's law, the power-law structure in text and documents. All natural data has a power-law structure, a fractal structure to it. The way neural networks learn is that they learn the multifractal nature of the data. That's why they work so well on things like text and images, and why they don't work great on tabular data sets. So what you're doing is: there are correlations in the data, the data is correlated, and you're trying to learn the correlations, and
frequently you're trying to learn very subtle correlations you couldn't find in some other way, using some simple clustering algorithm or an SVM or something like that. So what we're doing is measuring the fractal nature of the data, and every layer of a neural network gives you some measure of the fractal properties at that level of granularity. Alpha is like a measure of the fractal dimension, and what we know is that it measures the amount of correlation in that layer. In other words, the data is obviously not random; it can't be random; you're trying to learn patterns. What we've discovered empirically (and there are some deep theoretical reasons for this, but qualitatively) is that you're learning the natural patterns in the data, and those patterns have to be there. So if you're looking at text data and you start seeing alphas around six or seven or eight, the layer hasn't learned the correlations; it just didn't learn anything and it's just sort of there, or the correlations it learned are so weak that it's not really contributing to anything. And we know that many of these models just have these extra layers; they're way overparameterized. That's what's happening. And if there are strange or spurious correlations, there are things that cause alpha to be small for spurious reasons, like you didn't regularize your layer correctly and so there's a giant weight matrix, or you didn't clip the weight matrix elements, so the regularizer failed. So it can detect the difference between when there are problems with the optimizer and when there's actual natural structure in the data, and it allows you to distinguish between these two. That's what it's doing.
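The power-law idea can be made concrete with a toy sketch. This is not WeightWatcher's actual fitting procedure (which uses a more careful fit of the eigenvalue spectrum); it's a minimal Hill-estimator approximation of a tail exponent "alpha" from the eigenvalues of W^T W, just to show roughly where the number comes from. The tail fraction and the i.i.d. toy matrix are assumptions for illustration.

```python
# Sketch: estimate a power-law tail exponent ("alpha") for one layer's weight
# matrix from the eigenvalues of W^T W, via a simple Hill estimator.
import numpy as np

def layer_alpha(W: np.ndarray, tail_frac: float = 0.5) -> float:
    """Hill estimate of the power-law exponent of the spectrum's upper tail."""
    sv = np.linalg.svd(W, compute_uv=False)  # singular values of W
    eigs = np.sort(sv ** 2)                  # eigenvalues of W^T W, ascending
    k = max(2, int(len(eigs) * tail_frac))   # size of the upper tail
    tail = eigs[-k:]
    xmin = tail[0]
    # Hill estimator: alpha = 1 + k / sum(log(x_i / xmin))
    return 1.0 + k / np.sum(np.log(tail / xmin))

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)) / 16.0       # toy "layer": i.i.d. weights,
alpha = layer_alpha(W)                       # i.e. no real learned correlations
print(f"alpha ~ {alpha:.2f}")
```

A real layer that has learned strong correlations would show a heavier-tailed spectrum, and hence a smaller fitted alpha, than an uncorrelated random matrix like the toy one above.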
Am I correct, just for clarity's sake, that when we say it's doing this without the test data or the training data, you're doing these calculations and detecting these metrics based on the weight matrices? Is that correct? Yes, only on the weight matrices; you don't need to look at the data. So in that case, in terms of how people would run the tool, because it's doing these matrix calculations, could you speak to the computational side of it? Am I going to spend five hours waiting for Weight Watcher to analyze my model, or is it going to happen in five seconds? The current version, it depends on the size. It runs a singular value decomposition on each layer, so it's a high-memory, CPU-intensive task. It's not optimized for GPUs, so you run it on a normal CPU, and it does require some memory. Most layers aren't too large, so it could take anywhere from a couple of minutes to an hour. If you're trying to run it on GPT and you have a thousand layers, it's going to take some time. If you just have a few layers in your model and you're training a small model, it's very, very fast. Generally, you would hope that it's faster than an epoch of training, but it's not GPU-optimized. One of the things we're working on, if I commercialize the product, is to make a version that's very, very fast, that would distribute all the calculation across nodes and come back to you. So this is an open-source tool, and it runs a simple SVD calculation, so it's a little compute-intensive. But again, my theory on this is that if you're training small models, it's pretty fast, and if you're training really, really big models, chances are you have the compute resources anyway, and you're not renting a GPU for it; you
don't need the GPU, even though it can run on one. So that's sort of the takeaway.
[Music]
Well, Charles, when I first saw the tool, I was very interested in it, and I did take time to pull it into one of my notebooks and look at one of my own models, because I wanted to get hands-on with it. It was a question answering model based on XLM-RoBERTa, and I analyzed it with Weight Watcher. I did not do every single thing that you describe in your repo, because I'm still dipping my toes in. But it ran; it's a PyTorch-based model, and it ran. I didn't time it, so I don't know exactly how long it took, but I did find out that, according to Weight Watcher, 10 of my layers are undertrained. So at least I found that out. Could you speak a little bit about the tool itself? You mentioned how people can integrate it into their workflows; could you say a little more about the open-source project, and if I want to do this on one of my models, like I did, how would I go about it, and how easy is it to get it running on a model? Well, this is just a tool I've been writing in my spare time, based on my research. There's no funding for any of this. I've published with UC Berkeley, but they're not funding any of it; they've just kind of helped me out a bit. I've written it all myself; it's all open. One of my staff guys helped me out early on. It's pip install weightwatcher. The way it's written now, you probably need to have both TensorFlow and PyTorch installed in your environment; if you want, I can make a version that doesn't require both of those, but no one's asked yet. One of the challenges I have with the tool is that I have 60,000 downloads and I
have no idea who's using it. So if you're using the tool, let me know, so I can help you. I don't know what you're doing with it, and I don't want to end up in feature creep, where I design features in the wild; I need to know what you're doing. If you tell me, I'll help you. We have a Slack channel; you can go on Slack and ask me, and I'll help you. But basically, it's pip install weightwatcher, and you just give it a model. You say watcher = WeightWatcher(model=my_model), and you say watcher.analyze(). That's it, and it will return a data frame with quality metrics. If you say watcher.analyze(plot=True), it will generate a bunch of plots. I've been running it in a Jupyter notebook; that's how I run it. In principle, you could run it in a production environment, but again, it's not even an alpha 1.0 tool yet; it's still 0.56, 0.57. So if you do that, reach out to me, and we can make a version that's more stable, if you need to run it in a production environment. But I've mostly been using it in a Jupyter notebook: you get a data frame, you analyze the data frame; you run it in a Google Colab notebook, you say plot=True, it gives you a bunch of plots; if you add some other options, it'll give you more plots, and then you analyze the plots. So let me ask you a question as a follow-up to what you and Daniel were just talking about. If you're looking at the workflow, and Daniel said there were, what, 10 layers that had not converged sufficiently, how does that change the workflow for someone who hasn't done what Daniel's done and gotten hands-on, someone just listening? Talk a little bit about what they were doing before versus the workflow they're doing now, now that they have the insights that Weight Watcher brings. What does that look like for the practitioner?
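The invocation just described, as a hedged sketch. It is guarded so it only runs where the optional weightwatcher package (and a PyTorch or TensorFlow model) is available; the WeightWatcher(model=...) and analyze(plot=...) names follow the description in the conversation, and the "alpha" column filter is an assumption about the returned data frame.

```python
# Sketch of the usage pattern described in the episode; the weightwatcher
# package is optional here, so the import is checked before use.
import importlib.util

HAVE_WW = importlib.util.find_spec("weightwatcher") is not None

def analyze(model, plot=False):
    """Return WeightWatcher's per-layer quality metrics as a data frame."""
    import weightwatcher as ww
    watcher = ww.WeightWatcher(model=model)
    return watcher.analyze(plot=plot)  # plot=True also renders per-layer plots

if HAVE_WW:
    # details = analyze(my_model)                     # my_model: your PyTorch/TF model
    # undertrained = details[details["alpha"] > 6]    # assumed column name
    pass
```

In a notebook workflow, you would inspect the returned data frame (and the plots, with plot=True) layer by layer, as described above.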
Well, here's the first thing; this is exactly what happened with one of Michael's postdocs and students: go back and look at the regularization. Did you add enough dropout on your layer? Are the learning rates too large? Do you not have enough data? Is your model just too big? Are the early layers converging and the later ones not? If the later layers are not, maybe you should freeze some of the earlier layers and give the later layers time to converge. Maybe you need to run it longer; you need to run SGD longer. Maybe you need to adjust some of your hyperparameters, because you're not getting tuned; try to adjust your hyperparameters so alpha goes down, not up. Those are the kinds of things you need to do during training. Yeah, and maybe you could also mention the workflow; I find it very interesting, what you were saying about potentially using this within the training loops as well, as you're training the model. So one thing you could do is definitely run your model, like I did, and then look at it afterwards and see, oh geez, I need to do something about this or that. And then, of course, probably the harder part of the problem is connecting that with: okay, does that mean I do one of those things you just mentioned, or another one of those things? But what about that workflow in the training loop? How might that work? I know that maybe some people have heard of certain things related to optimizing, not doing brute-force hyperparameter tuning but doing some sort of AutoML type of stuff; people have thought about these things. So when you're pulling Weight Watcher into the training run, how would you think about that being used? If you want to give Google Cloud a million dollars to do AutoML, and then have them
own your models for you and feed them back to you, knock yourself out. I don't want to do that; I don't want to be trapped. That's what the AutoML offering is: an offering to blow millions of dollars. Or if you want to get some tool like H2O and auto-tune a model, and then find out it doesn't scale, and then you have to redo it; we've had clients with that problem. I think there's this wider field, though, of, I guess, meta-learning, and I don't know if the Weight Watcher stuff would fit into that larger space of research. Look, what are you trying to do? What does it mean to be optimal? If being optimal means that your alphas are close to two or three, then you should adjust your hyperparameters such that the alphas go down. That's what you do. Now, doing that analytically, doing what are called analytic derivatives, meaning you try to compute the gradient from that, is somewhat difficult. It could be done, but you have to compute the eigenvalue spectrum, then you have to fit it, then you have to figure out the derivative, and that's a very complex, nonlinear, very iterative calculation. It could be done numerically, or it could be done analytically with some work; it's a lot of work. I would love to have VC funding like Hugging Face to do that, but it's just me. So you just try to tune your parameters: alpha goes up, go the other way. If you turn your learning rate up and you find your alphas are going up, tune the learning rate the other way, and hopefully they'll go down. Obviously, it's a complex optimization problem, because you have 100 layers, you have 100 alphas, and so you're trying to tune different layers, trying to tune your layer learning rates and your amount of dropout and the amount of momentum. So in
principle you could try to do that algorithmically, using a Bayesian type of approach where you try to get your alphas to go down on every layer. In principle you could do that, but it's a complex optimization problem. That's what I would recommend, and I think it's theoretically well grounded; the point is that you want to learn more correlations. Typically, what I've found is that it's a good tool for newbies, because you get into a model, you start doing something, things are totally wrong, and you can go in and fix some problems. Okay, now we've fixed it; we found what we didn't do: I didn't put the proper regularization on these layers, let me add regularization and try again, and you can see that, okay, that's much better. So from a newbie perspective it's a very good tool, because it helps you get started. Now, keep in mind the tool works at the end of training, not in the early stages of training; you've got to let the thing bake for a while. Once it's about halfway through training, then you can start looking at things; it's got to have some correlations. But this is what it's for, typically. And trying to do large-scale meta-learning would mean you'd have to integrate the tool into some sort of process that allows you to look at the alphas, or look at more details in the layer: the shape of the spectral density, the number of spikes, the alphas, the volume of the spectral density, and figure out how to tune from that. This could even be used in a reinforcement learning situation, where instead of the reward being based on the action the agent takes, the reward is: oh, I got a smaller alpha. So I have rewards on every layer, and I sum the rewards in some average way to try to get the optimizer to work, even in situations where I don't know what the reward is for a reinforcement learning setting.
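The tuning directions described here (alphas too high: more regularization or longer training; suspiciously low: check for a correlation trap; in the healthy range: a candidate for early freezing) can be sketched as simple per-layer control logic. The thresholds below are the rough ranges from the conversation, not calibrated values, and the layer names are made up.

```python
# Sketch: given per-layer alpha estimates, suggest a tuning direction per layer.
def suggest_action(alpha: float) -> str:
    if alpha < 2.0:
        return "check regularization / possible correlation trap"
    if alpha <= 6.0:
        return "looks trained; consider freezing if it starts to overfit"
    return "undertrained: add regularization, lower the learning rate, or train longer"

per_layer_alpha = {"layer_0": 1.4, "layer_5": 3.1, "layer_11": 7.8}  # illustrative
for name, a in per_layer_alpha.items():
    print(f"{name} (alpha={a}): {suggest_action(a)}")
```

In a training loop, such a check would run every few epochs (once the model has "baked" past the early stages), steering per-layer learning rates, dropout, or freezing decisions.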
Obviously, that would be nice in areas where you're trying to trade in the markets, because you can't take actions that trade; you can't trade on historical data and expect to learn from that. So this gives you a way of doing things in a supervised or semi-supervised way that doesn't require peeking at the test data to optimize. I hope that answers the question; that's sort of the idea, and there are lots of things people want to try. I think it's great if you try them. Yeah, I definitely appreciate you being transparent about where the tool is, and the possibilities that might happen with it; there are a lot of opportunities to explore usage and further development. Part of what I want to do with the tool is build an open-source community. I can't do everything myself, and there are lots of things to do, and if people want to get involved in a community, join the Slack channel; we can build things. That's what open source is, and I think a lot of people may have ideas and will be able to contribute in ways that just expand it. Again, right now, to me, the way you train neural networks is like: you build a bridge, you drive a car over the bridge, you see if the bridge falls down, and you do it again and again and again. How many cars are you going to crash into the ocean until you get the bridge right? People don't build bridges like that. You build bridges by having engineering principles; you understand the engineering principles that go in, and this is the load it can take, and this is the wind shear, and you try to build bridges that actually stay up. Right now, deep learning is so brute force; you just spend as much money as you can, do as much brute force as you can, and if it doesn't work, you try it again.
And there are no principles behind what you're doing. We're trying to add some principles that are based in deep theory; they're empirical rules of thumb, but there are also deep theoretical reasons why they work, just like in any other field of optimization. I'm curious, kind of going back to the engineering, and talking about how, as this matures, it's trailing the software engineering world. One of the decisions that we all make as engineers is, as we're creating an open-source community and trying to provide the value for that community that you're talking about: do you see the future as being a community specifically built around Weight Watcher, or is there an opportunity potentially to take the value that Weight Watcher is bringing, and those insights that you described, and roll them into some of the other existing communities? Do you have any opinions or thoughts about how you integrate this in, for the value of the larger community? Well, look, what I'd like to have is a community of people who are training models, and to get them to interact with each other. Like I said, it's hard to get feedback; people are doing things in industry, and because they are constrained by NDAs, they can't really talk about what they're doing. I think it gives people an opportunity to really get into the space and learn how the training of neural networks works without being constrained by your employer or your contract; that's a lot of what this is. I think there are other communities doing things, like people building hyperparameter optimization tools or people building reinforcement learning tools; we'd be happy to integrate the tool. The challenge is always that you want to make a tool that is self-contained. If people fork the tool
and begin changing it, it ends up... I don't know if you guys know the story of Emacs. I was at Champaign-Urbana when this happened. They wanted to port Emacs to, basically, X Windows, and Stallman didn't want to do it, and they forked it. You had XEmacs and you had Emacs, and it killed it; forking Emacs killed it, because you had the XEmacs crowd, and those guys went off and started Netscape, and they're probably all retired now, or sitting at the top of Google or Netscape. But this is the problem: you want to make sure you have an open-source community. I want people to contribute and feel they can do things; if we fork it and it goes into other communities, it kills it, because now those contributions don't come back, and you end up in these sort of weird battles, and there's no value in that. What we want to do is help people, and, if it's necessary at some point, commercialize the tool and turn it into something which we can support, like Hugging Face. Hugging Face is a lot of open source, but any sophisticated technology needs maintenance. If you buy a copier machine, it's not open source, because it needs maintenance. Even a tool like Weight Watcher needs maintenance. So I would love to work with people who would like to put it into production, as open source, and develop it, and then at some point we realize, look, we really need to put a service contract around this so that we can maintain it and solve some of the harder problems for you; I'd be happy to do that. And I think that's really what we're trying to do, because there's also a lot of opportunity for scientific research. A lot of Weight Watcher has come from doing research in statistical mechanics and learning theory; we have papers in JMLR, Nature, ICML, KDD. There's a lot of
opportunity for students. We have one student who is at a bank who just did his master's thesis on Weight Watcher, so there's a lot of that kind of opportunity as well, and I think there's a lot of room for improvement. As we get to the end here, I was wondering, just quickly as we close out: I know you've spent a lot of really valuable time investing in the areas maybe that people aren't focusing on in the AI community, in terms of the training side of things, and ways to help them in those gaps. As you look forward to the future of where the AI community is going, what encourages you about the direction of things, or what excites you in the community right now? You know, for me, I'm a physicist at heart. I did theoretical chemistry, I did theoretical physics, and I'm kind of the runt of the litter: one of my classmates and colleagues went off and started AlphaFold, which solved the 50-year grand challenge; I have another who has started a company that's going to label all the world's translational medical data. So I'm used to that. For me, this is an opportunity to really show that we can use theoretical physics in a way that can have a broad impact. You can use theory to build sophisticated engineering tools, and there's a connection between a lot of the deep, sort of Cold War education I have and building tools for engineers. There's a very famous statement by Carver Mead, a very famous electrical engineer from Caltech, who said every useful experiment eventually becomes a tool; everything you can measure eventually becomes a tool that you give to an engineer. So I would just like people to realize: you can do deep theory, there's a lot of fun and interesting stuff to do, and we can turn theory into tools that people can use, and build a community, and
have a broader impact. I think that AI — I mean, I did AI in the '90s; people thought we were crazy. "This stuff doesn't work", nobody believed it, right? "Why are you doing neural networks?" People think neural networks were invented by computer scientists, but there was a whole group of theoretical physicists doing this stuff for years — understanding who we are, how the brain works, how we think, what's actually going on up here. I think it's a very exciting time, and that's why I'm doing this. I think there's a lot we can offer from the scientific community. I think there are really deep, broad connections between general science and what's going on in AI, and that can connect back to the engineering world. And I think there are big problems — one of the things I'm really proudest of with WeightWatcher is that there are companies using it to help with climate change, which I think is a huge problem. If you can use it to find some way to solve this massive problem we have, I think that would be fantastic. — That's awesome. Well, I think that's a wonderful way to close out. Really appreciate your perspective there, and thank you so much for taking time to join us, Charles. It's been a pleasure. — Hey, I really appreciate it too. I'm glad we were able to set this up, and I look forward to the podcast. And I really look forward to anyone who tries to use the tool. If you want to use it, please reach out to me; let me know how it's working. Complain to me if you don't like it — I'm not going to fix it if you don't tell me; I can't fix what I don't know is broken. I would love for people to join the community and build something great together. — Awesome, thanks so much. — Right, thanks guys. Thank [Music] you. — All right, that is our show for this week. If you dig it, don't forget to subscribe — head to practicalai.fm for all the ways. And if Practical AI has benefited
your life, pay it forward by sharing the show with a friend or a colleague — word of mouth is the number one way people find shows like ours. Thanks again, and thanks to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next [Music] one |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Stable Diffusion | The new stable diffusion model is everywhere! Of course you can use this model to quickly and easily create amazing, dream-like images to post on twitter, reddit, discord, etc., but this technology is also poised to be used in very pragmatic ways across industry. In this episode, Chris and Daniel take a deep dive into all things stable diffusion. They discuss the motivations for the work, the model architecture, and the differences between this model and other related releases (e.g., DALL·E 2).
(Image from stability.ai)
Leave us a comment (https://changelog.com/practicalai/193/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Announcement blog post (https://stability.ai/blog/stable-diffusion-announcement)
• Stable diffusion paper (https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf)
• Blog post about the model from Marc Päpper (https://www.paepper.com/blog/posts/how-and-why-stable-diffusion-works-for-text-to-image-generation/)
• Stable Diffusion on Hugging Face (https://huggingface.co/CompVis/stable-diffusion-v1-4)
• Hugging Face Diffusers library (https://github.com/huggingface/diffusers)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-193.md) | 32 | 0 | 0 | with these technologies, one of the first things that I thought about was how artists were getting frustrated with the fact that you would have machine learning practitioners come in and create art with these things — and that's in a very immediate, you-can-do-it-today kind of situation. As we've watched these multimodality evolutions coming through these models over the months, it's not hard to envision that at some point down the road this will move into video, and we'll see other modalities being added. And as we do that, you're now moving into that creative space that previously took a great deal of effort. If we're talking about the entertainment industry, and movie making, and special effects, this could really revolutionize how special effects are achieved, and make some amazingly phenomenal special effects become very accessible as we see iterations going forward. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen — check them out at fastly.com — and to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at [Music] fly.io. Welcome to another Fully Connected episode of the Practical AI podcast. In these episodes we keep you up to date with everything that's happening in the AI community, take some time to dig into the latest things in AI news, and share some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is
a tech strategist at Lockheed Martin. How are you doing, Chris? — I am doing very well, Daniel, having a good day. Gosh, we've got cool stuff to talk about today. — Yeah, but the biggest question, though: did you watch Rings of Power? — So this is the conflict in my family, because I mentioned in the last episode that I was, but I'm waiting — I'm being a good husband and a good dad until they're ready. I won't give any spoilers, and I probably shouldn't on the podcast anyway, but Chris and I, for our listeners, are both big Lord of the Rings fans. Thanks for torturing me here at the beginning of the episode. — No worries; anything I can do. I won't indicate one way or the other. — Yeah. I mean, this isn't revealing anything, but I was really interested in analyzing a lot of the visuals of Rings of Power as I was looking through it. Of course Rings of Power — Lord of the Rings in general — is set in the fantasy world of Middle-earth, so there are all sorts of interesting visuals and creative elements, a lot of them with a lot of effort put in from designers and artists and graphics people. And it got me thinking a lot more about stable diffusion, which is what we're going to talk about today, because this model — it's the latest in a series of models, but this stream of diffusion models — is really taking over and dominating a lot of the discussion in the AI community, and Chris and I thought it would be good to spend some time chatting about them in a lot more detail than we had in previous episodes. So if you're wondering more about stable diffusion — what it means, what it is, what it can do — that's what we're going to dig into. Yeah, how have you been thinking about stable diffusion? Where has it been entering into your life, Chris? — So it is one of those things... you know, we've been talking about the different kinds of
disciplines within machine learning, and crossing modalities, and joining up — we've had a pretty exciting year in terms of what's happened already. And I think for me, as I've expressed to you offline, this is the most exciting thing, and not just for what it is, but for what may be to follow. So I hope that listeners are as excited as we are, because this is one of those moments that I think is going to really turn into something quite wonderful — and it already is looking super cool. — Yeah, for sure. So maybe it would just be good to set the stage for what stable diffusion is, in terms of what it can do and the motivation behind it, because it wasn't created in a vacuum, right? This is the latest model in a series of these so-called diffusion models, which I think are primarily associated right now — or how they've gotten the most attention — with text-to-image tasks: you put in a text prompt, and it will generate an image corresponding to that text prompt. What are some of the interesting ones that you've seen, Chris — the sort of images generated from text prompts that have been interesting for you? — I think some of the things that we've shared a little bit back and forth, and that are in some of these articles, are pretty cool. Being the geeks that we are, seeing things like Lord of the Rings showing up blended with Star Wars characters — there's one that has Gandalf and Yoda mixed together — they're just fun, and I'm enjoying the creativity out of it. But really, I can think of so many uses that aren't necessarily just cool imagery from a creative standpoint — uses that are really functional — and we can get to that later in the conversation. But this is one of those things that has popped up from time to
time — it kind of has a sense of magic about it. Of course it's not magic, I'm sad to say, but it definitely has that surprise factor in what you're able to do as you look at how the different parts of the system work together. I know we're going to talk about that kind of workflow, in terms of how the model works, but what arises out of the back end is definitely surprising. — Yeah, and I think, like you said, the text-to-image stuff is maybe the most accessible thing for people to try, so that's what you've seen most. But I've seen really interesting integrations and demos of the model already, because you can do not only raw text-to-image; you could do inpainting — so you could freeze a part of the image and fill in the rest, or recover parts of an image. If you have an image of a street and you want to take a person out, you could remove them and then fill in the gap — all sorts of interesting things like that, that you could do as part of the workflow. And then there are also image-to-image tasks — doing some sort of translation of image style, or something like that. But yeah, a lot of things are integrating the stable diffusion model, one of the reasons being that it's open and people can access it. — Yeah, it's fully open source, and going back to what you were just talking about for a second there, I think one of the coolest things about it is that you can change the representation that's fed into the diffusion model. So, as you said, from an accessibility standpoint, you kind of start with writing the text out, and the trained model — which has been trained on so many things in human culture and civilization — has these great components to draw from, to pull from within the trained model. But you
mentioned the image-to-image, and we've seen some interesting things where you can take things out of an image — I know there are other techniques out there for doing this — but the representation can be text, it can be images, it can be lots of different things, which really opens up the possibilities, and I think will span all the disciplines that we commonly talk about in the space. — Yeah. So, to give people an idea of the accessibility: even just this morning I had a Google Colab notebook open — it did have a GPU on it, but it was just a Google Colab notebook. I used the Hugging Face Diffusers library, where you can import the stable diffusion model; there's a pipeline built for using the pre-trained stable diffusion model. So, counting after my imports, I have one, two, three, four, five, six, seven, eight lines of code to go from text to image. There are two factors here: one is that there's great tooling from Hugging Face, which is something we talk about all the time — continual great work there. The other side of it is that this is just running in a Google Colab notebook, and I'm able to access it via my browser; I don't have to spin up an instance in the cloud with a big, beefy GPU, or a set of them. This side of the accessibility — both the open-source release of the model and the ability to use the model in a computationally efficient way — those were two of the big motivations, in my understanding. And I should be explicit: I didn't have anything to do with training this model, but in my understanding, the teams that trained it included a sponsor called Stability — that's where it gets its name, stable diffusion. Runway ML was involved, which I think we've mentioned on the show before — it has tools for creative uses of machine learning — and then academic researchers from Ludwig Maximilian University in Germany.
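Daniel's eight-lines-of-code Colab workflow with the Hugging Face Diffusers library looks roughly like the sketch below. It follows the documented `StableDiffusionPipeline` usage for the `CompVis/stable-diffusion-v1-4` checkpoint linked in the show notes, but treat the prompt and the float16/CUDA settings as illustrative assumptions; running it for real needs `diffusers`, `transformers`, and `torch` installed, a GPU, and a Hugging Face token with the model license accepted, so the heavy imports are deferred into the function:

```python
def generate(prompt: str, model_id: str = "CompVis/stable-diffusion-v1-4"):
    """Text-to-image with Stable Diffusion via the Diffusers pipeline.

    Assumes `pip install diffusers transformers torch` and that the model
    license has been accepted on the Hugging Face Hub (huggingface-cli login).
    """
    import torch
    from diffusers import StableDiffusionPipeline

    # Load the pre-trained pipeline; float16 keeps it within consumer-GPU memory.
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    # One call goes from text prompt to a PIL image.
    return pipe(prompt).images[0]
```

Calling `generate("Gandalf and Yoda sharing a pipe, oil painting").save("out.png")` on a Colab GPU instance is essentially the whole notebook Daniel describes.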
So this group explicitly set out with motivations around accessibility — specifically, a more computationally efficient diffusion model, and one that would be explicitly open source. And I think that's why this has exploded: if people can access something easily, and they don't need really fancy compute to run it, then it's going to spread very quickly. — Right. Yeah, it's been noted in multiple places that if you have a computer with a graphics card — a GPU — you're probably good to go; it doesn't have to be the latest, greatest thing. So it really opens this up to people everywhere, and probably most people who might be interested in it already have the equipment. Even without going to a cloud solution like Colab, you probably have it in your house already, and you can do it on a laptop with a card, or a desktop — or just a cloud instance that's less expensive than trying to do something bigger. — I was reading that for other diffusion models — and we should be explicit, this isn't the first of these types of models; we already talked about DALL·E 2, which has a lot of similarities with stable diffusion, and we'll point out the differences as we continue the conversation, and which is also a model capable of doing this amazing text-to-image generation and these other applications, like inpainting — it's fairly computationally expensive, and it's not as open, right? You have to sign up on a waitlist, get access, use it via API, that sort of thing. And I read one statistic for some other diffusion models that 50,000 samples takes about 5 days of inference on a single A100. Most people don't have access to an A100, and maybe don't want to spend
5 days waiting around for the processing of a bunch of samples — now, 50k is a lot as well, but that's one baseline, foundational number: these things did exist before, but they were extremely computationally expensive. — You know, just as a single point about it being open source: we've talked, with previous model releases on the show, about different approaches to releasing different types of models. There have been cases where there was concern about how a model would be used, or about security, and things like that. Some things stay proprietary, with just a front-end interface; other things have been released incrementally, where the big model is withheld but a smaller, reduced-functionality version is offered. And here we are — we just went through DALL·E 2, which, as you pointed out, has constraints there — and here we are with this open-source release that's quite powerful and quite amazing, and yet quite accessible to pretty much anybody who would like to start working with it. What are your thoughts on the fact that this feels a little bit more like the open-source software world that you and I have both come from? How do you think this may change the space going forward, if other releases also tend to be straight-out open source, with this level of accessibility? — Yeah, I think there are a few elements to this. It has been interesting — last episode we talked about these OpenRAIL licenses, and one is utilized by stable diffusion. So there are some explicit things you have to agree to when downloading the model on Hugging Face, for example: you have to
click a button that says "I agree to this", and then you can download it, and you have to use your Hugging Face token to download it. So it is open, in that sense, in a sort of unique way. But I think that if we look at models like this, and ones that are released open source — you saw in software, over time, as it was open-sourced, a lot of software applications, or kind of specialized software, going from specialized expert groups using them to a general-purpose technology that was used and integrated into a whole variety of things that the original creators didn't even have in mind, right? I think we're in a similar place here, where we're going from models that were being experimented with in sort of siloed places to — as we're mentioning — all sorts of ways you could imagine using this model. Because I can access it, and because I can run it without expensive hardware, and because there's good tooling like the Diffusers library, which I can pull in and do this in eight lines of code, who knows how people will use it and hack it — in a good way, right? Hacking it for useful, pragmatic purposes. — I agree. I'm actually looking forward, as it really gets out beyond its core community and reaches all those people, and people become aware of it — because we're still in very early days — to seeing some of the ideas that come out of it: both the creative art that we've seen already, and also some of the innovative, maybe business-oriented, novel ways of using it that we are not likely to think of today. [Music] — Okay, Chris, you know I like to get into the weeds sometimes. I say we just dive into this model and see how it works a bit; we'll take the listeners along with us, and go through and figure out how this happens: how do we go
from text to image — and also, how is this thing trained? — Let's diffuse the weeds, Daniel. Let's get into it. — Diffuse the knowledge, or whatever. Yeah. And Chris, I think there are certain things to listen for as we go through this process. You and I have talked about some of these building blocks that continually show up: one of them being Transformers and the attention mechanism, which has of course been applied in a variety of ways; diffusion models, which have been applied in a variety of ways; encoder-decoder models; word embeddings, or text embeddings — all of these things show up as we go through this. So again, this is not popping up out of nowhere; it's an assembly of things that we've talked about before. — Yes. This has been a little bit of a magical past year, as we've seen things come about largely from that cross-pollination of different technologies that have arisen on different paths; now they're getting blended, and some pretty cool things are coming out of it. — Yeah. So, for the stable diffusion model — we can't show you a picture, because this is an audio podcast — but if you have in your mind going from a text input to an image output, the general process is: that text is embedded into some representation; that embedding, plus some noise, is then denoised to an image; and that image is then upscaled, or decoded, into a larger image that's not compressed. So those are the general stages of the pipeline: you've got text embedded, plus random noise, denoised, and then decoded — or upscaled — to an image. — Do you want to take a moment, for those who are just coming into it, to talk about the idea of introducing noise and then denoising? What do you get out of that, productively? What's the reason for that in the workflow? — Yeah. It doesn't have to be a text-to-image model, but this sort of denoising or diffusion-type
model is useful because it can take a noisy input and denoise it. The bigger idea here is that I could take a set of images in my training set, introduce noise into those images via a certain number of noising steps, and then train my model to denoise those images in a series of steps. This could be used for fixing corrupted images, or for upscaling images, and that sort of thing — so it doesn't have to be for text-to-image. But this is the general idea: you have an original set of images that you can corrupt intentionally, and then train your model to de-corrupt, or denoise, them; that model can then be used to perform that sort of denoising or upscaling action afterwards. — As we talk about the fact that attention is used in here — and I know in some of the discussions around it, it's referred to as cross-attention — what is cross-attention, as a form of attention? Does that just mean different modalities coming in, or how would you define it? — Yeah, so with that, I think it would be good to describe the overall components, or modules, of this system. There are three main components of stable diffusion that make it what it is. The first is a text encoder — a language model that takes your text and converts it into an embedded representation; it encodes that text. The next major component is an autoencoder — we'll come back to that, because a key piece of what makes stable diffusion different is what they did with the autoencoder — but you can basically think of the autoencoder as a way to train something to upscale your image: to go from a compressed image to a non-compressed image. And the third is the diffusion model, which is a U-Net model — that's the type of architecture it is — and this is the model that takes a noisy input and then denoises it.
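The corrupt-then-learn-to-restore training idea Daniel describes can be sketched numerically. Below is a toy, hypothetical forward (noising) step in NumPy — the linear `alpha_bar` schedule and the 8×8 "image" are made-up illustrations, not the schedule stable diffusion actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image": any array of values scaled to [-1, 1].
x0 = rng.uniform(-1, 1, size=(8, 8))

# Illustrative noise schedule: alpha_bar shrinks from ~1 toward 0,
# so larger timesteps t are noisier.
T = 1000
alpha_bar = np.linspace(0.9999, 0.0001, T)

def add_noise(x0, t):
    """Forward diffusion: blend the clean image with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    noisy = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
    return noisy, eps

early, _ = add_noise(x0, t=10)    # mostly signal
late, _ = add_noise(x0, t=990)    # mostly noise
```

A denoising network is then trained to predict `eps` given the noisy array and the timestep; at generation time, chaining those predictions backwards from pure noise is what produces an image.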
So again: the text encoder encodes your text into an embedded representation — just a series of floating-point numbers; the autoencoder is really a way to get to a decoder, which can decompress images, or upscale them; and the diffusion model, based on the U-Net architecture, takes Gaussian noise — or some noise — and denoises it to get closer to the text representation that you input. Those are the three main components. And what happens is — we mentioned this diffusion model that takes noise and denoises it into something close to your text representation — well, somehow you have to combine that noise and your text representation. So imagine text comes into your text encoder, or language model; that's converted into a series of numbers, a learned embedding for that text; and then that learned embedding is combined with the random noise — and that's where the cross-attention happens. Cross-attention is a way of mapping your text representation — your encoded text — onto this random noise. The word they use for this is "condition": it conditions the random noise with your text representation, and that's how the diffusion model, which denoises it, knows what it's after. That's how it gets to a semantically relevant image — relevant to your text input — because the text embedding has been combined with the random noise via this cross-attention mechanism. — And the diffusion model is a form of convolutional model; is that accurate? — Yeah. The diffusion model — at least the one used in this stable diffusion work — is called a U-Net. It's used for other purposes as well, but it has a series of convolutional layers: one path that takes your image and shrinks it down through the convolutions, and one that does the inverse of that.
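The conditioning step Daniel describes — queries computed from the noisy image latents, keys and values from the text embedding — can be sketched as generic scaled-dot-product cross-attention. This is an illustrative sketch with random matrices standing in for learned projection weights and made-up dimensions, not stable diffusion's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, text_emb, d=16, seed=0):
    """Condition image latents on text: Q from latents, K/V from the text."""
    rng = np.random.default_rng(seed)
    # Random projections stand in for learned weight matrices.
    Wq = rng.standard_normal((latents.shape[-1], d))
    Wk = rng.standard_normal((text_emb.shape[-1], d))
    Wv = rng.standard_normal((text_emb.shape[-1], d))
    Q, K, V = latents @ Wq, text_emb @ Wk, text_emb @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))   # (num_latents, num_tokens)
    return attn @ V, attn

latents = np.random.default_rng(1).standard_normal((64, 32))  # 64 latent positions
text_emb = np.random.default_rng(2).standard_normal((8, 48))  # 8 text tokens
out, attn = cross_attention(latents, text_emb)
```

Each latent position ends up as a weighted mix of text-token values (each row of `attn` sums to 1), which is what "conditioning the noise on the text" means mechanically.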
So this is like a down-path/up-path thing, with connections between those two paths — a series of convolutions combined in a certain way, which is what makes it a U-Net. — You know, it's interesting: as you've cataloged these different components and their workflow — we have talked about all of these things in previous episodes. These are all existing technologies, but they found a way to put them together to remarkable effect. It's very interesting that we keep returning to cross-modality as the source of the current wave of creativity in the AI space, and I think this is a great example. Individually, I know what all those things are; I would never have imagined putting them together to achieve this. So it was a pretty cool way of doing it. — Yeah, and I think the key piece to emphasize about what was done here is really the piece that we glossed over quickly, which is this autoencoder — and particularly how they trained the diffusion model and the autoencoder. It's not new to use an autoencoder to compress and decompress images; that's been done before. If you imagine you have a model that can encode an image and then decode it, the encoding is like compressing that image, and the decoding is decompressing it. You can train an encoder and a decoder jointly to do that compression and then a corresponding decompression, or decoding, and the diffusion model then operates on those compressed images. So this combination of autoencoder and diffusion model is not new. In my understanding, what is new is what the stable diffusion team did — this team from Stability and the group in Germany; I'll mention some of their names, because
Robin Rombach et al. are on the paper, which we'll link in the show notes. The thing that they wanted to do — remember the motivation they were after — was to make a more computationally efficient diffusion model; that was at least one of the accessibility goals. So what they did was, instead of jointly training the autoencoder and the diffusion model, they trained the autoencoder and the diffusion model separately. This does two things: it separates out the autoencoder and lets you train it for what it needs to be good at, which is compressing and decompressing images; and it also means that the diffusion model only operates on these compressed images during training — and those compressed images require something like 64 times less memory for your diffusion model. That's why you can run the stable diffusion model on a consumer GPU card: they strategically separated out the training of the autoencoder and the diffusion model, which allows the diffusion model to operate on compressed images while still letting you get high-quality, upscaled images out, because you're still using the decoder. And we've seen decoders and encoders used elsewhere — you see that in typical graphics software, right, or machine translation models, all sorts of things. — Yeah, they're often used to clean things up. So the diffusion model — if I'm understanding you correctly — is kind of going through that noising and then denoising; it kind of blends what is available from the trained model together, in that compressed format; and then the decoder takes the result of that and upscales it back to the uncompressed image — in a very non-technical phrase, it kind of cleans it up and makes it what it is at that point. Is that approximately fair? — Yeah. So you can imagine really, really small images, generated out of random noise by the diffusion model denoising that noise.
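The "64 times less memory" figure Daniel mentions follows from the autoencoder's spatial downsampling. A back-of-the-envelope sketch — the f=8 factor (a 512×512 image mapped to a 64×64 latent grid) is the configuration reported in the latent diffusion paper, assumed here for illustration:

```python
# Pixel space: a 512x512 image grid.
pixels = 512 * 512
# Latent space: the autoencoder downsamples each spatial dimension by f=8.
f = 8
latent = (512 // f) * (512 // f)        # 64 x 64 latent grid
spatial_reduction = pixels // latent
print(spatial_reduction)  # 64 -- the diffusion U-Net works on 64x fewer positions
```

So the denoising U-Net runs over roughly 64× fewer spatial positions than it would in pixel space, which is what brings it within reach of a consumer GPU.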
Then those really small images are decoded to a larger, inferred image, using the separately trained decoder that was trained via this autoencoder methodology. — I have a random question for you. Given that they're training these as separate components — and thinking outside of this space, in software we often mix different components together to achieve new things — do you think that'll help accelerate some of the exploration and experimentation here, by keeping those bits separate so you can combine them as you want? — Yeah. Well, there's the clear computational advantage, but as an additional advantage, separating the autoencoder out from the diffusion model means you can use the same autoencoder model for all sorts of different downstream diffusion models. This is another shift we've seen in other areas, right, where a portion of what you're doing is general-purpose, and then you bolt on what you need for the downstream tasks you care about — whether that's image-to-image tasks, or text-to-image tasks, or maybe even something like a text-to-audio task. There are all sorts of different things you could imagine doing downstream. So yeah, this decouples the two: there's a computational advantage, and there's also a sort of functional advantage. [Music] — Well, Chris, I think one last thing to mention in terms of in-the-weeds stuff: I think it is really interesting to look at how the model was trained, so it's probably worth mentioning a couple of those things. This model, again, was trained in two distinct phases. There's the universal autoencoding stage, which is trained once and can be utilized
for multiple diffusion-model trainings downstream, and then there's the second phase, which is actually training the diffusion model. This model was trained on approximately 120 million image-text pairs, drawn from a data set of approximately 6 billion image-text pairs. That data set is freely accessible, so you can look at it as well, and we'll link it in our show notes.

I think we also talked in our last conversation about how... I mean, it's expensive, but it wasn't a crazy number to actually train this model. It took 256 A100s about 150k GPU-hours, which would equal, at market price, around $600k; I'm getting that from one of the team members on Twitter. So yeah, pretty interesting. I don't know if you have $600k lying around, Chris, but it's certainly a more accessible number than, you know, training a model for $500 million or something.

No, I don't have the pocket change of $600k laying around. But as we're looking at separating these trainings out... we talked a little earlier about the idea of the magic arising out of this, and the fact that you have so much human semantics captured in the diffusion model in terms of how it was trained. There are many concepts in there; we talked earlier about the Gandalf and Yoda imagery we had seen, and clearly the training had included the concept of Yoda and the concept of Gandalf, which were combined. As we go forward, is there the idea of a kind of diffusion marketplace that arises, both open source and maybe some not open source, where depending on the cost that you want, you can get into the level of sophistication that you can support for your application?
Do you think that becomes a reality as we talk about making these accessible across a wide range of users and use cases?

Yeah, I think if you draw a parallel with what's happened with other models that have caught on in similar ways... if you think back to BERT and these large language models, part of the magic of those was that the weights were open source; you could pull down a pre-trained version and then fine-tune it for a particular task. So I have no doubt. I think people are looking into this, and there are explicit notes on the Stable Diffusion page about limitations and bias and all that, so you can read that there; certainly there's bias in the data set on which it was trained. But I think the power comes if you're able to open source the model in some sort of way, with tooling that will allow for fine-tuning it. I'm sure people will fine-tune or create different versions based off of the parent: maybe imagery for particular styles of books or publications, or inpainting for creative arts, or for video processing, or all of these different things. I think people will create their own versions of these, and probably some of those fine-tuned, purpose-built models will be commercially available for purchase, as we've seen with certain language models in the marketplace, and some will be open sourced for people's usage. Just like we've seen kind of a general-purpose BERT, and then we've got a science-document BERT and a legal-document BERT, and those are open; but there are also companies making money because they're processing legal documents with BERT, whether they have their own proprietary version or they're using the open source version and just have good tooling around it.

So to extend your answer there just a little
bit: one of the things we often ask guests when we have them on the show is to wax poetic a little and tell us where they see some of these things going. As we were diving into this topic for today's show and exploring what we wanted to share with the audience, I could see so many possibilities, as could you. So let's wax poetic for a few minutes on where this might go and what might come down the road.

You talked a little bit about the marketplace, where people can find resources to move forward with these technologies. One of the first things I thought about was that we were just talking, in the last episode or two, about how artists were getting frustrated with machine learning practitioners coming in and creating art with these things. That's a very immediate, you-can-do-it-today kind of situation. But as we've watched these multimodality evolutions coming through these models over the months, it's not hard to envision that at some point down the road this will move into video, and we'll see other modalities added to it; I think that would be consistent with the recent history we've seen. And as we do that, you're moving into a creative space that previously took a great deal of effort. If we're talking about the entertainment industry, moviemaking and special effects, since we started with The Lord of the Rings: this could really revolutionize how special effects are achieved, and make some amazingly phenomenal special effects, as we see iterations going forward, very accessible to people at home. You're no longer the big special-effects company, though those companies would have access to it too. I could see so many industries affected. Obviously there are
security concerns, art questions, business questions. What are some of the what-ifs you could see, maybe not just with this particular model, but with what we might expect to see not too far down the road?

Yeah, the two areas I'm thinking about are, one, the expansion of modalities, like you talked about: diffusion models applied to audio, for example, and what that means both for things like speech synthesis and for creative things like music generation. I think that area is quite interesting to me, and I think it will happen. But the other what-if in my mind is how this set of technologies will be combined with others that already exist and that we've seen to be very powerful. For example, I could have a dialogue system, or I could have prompts that were not created by me fed into Stable Diffusion. What if I created a prompt using GPT-3, or automated the sort of dialogue I'm having in a chatbot with language-model-generated prompts, along with imagery or video created using something like Stable Diffusion? You could even imagine creating a storybook with both language models and visual elements from something like Stable Diffusion. So I think the creativity, the uses, are also interesting in how they're integrated with existing technologies, both AI-related and maybe not AI-related. Things like chatbots could be driven by an AI model, a state-of-the-art dialogue-system model, or they could just be decision-tree-based, rule-based bots, but maybe you integrate visual elements from something like this in a more controlled way. So I think this combination of the technology with existing technologies, and with other models that are out there, is an area that I think will expand
quite a bit, and we'll see some interesting things happen.

I think we're looking at the birth of a kind of creative entrepreneurship: being able to really take this model, and other recent models, and some of the new things we expect to come in the not-so-distant future, and produce some amazing creative outputs. We started with The Lord of the Rings, so I'll make a suggestion to the Tolkien business, if you will: it would be interesting to see, maybe in a few years, when they've decided they need to refresh those stories again, maybe it's done with some of these technologies, done entirely with a set of creative technologies. And to your point, maybe it's released in many, many languages simultaneously, kind of natively, instead of being translated, so we can all share in that experience. Maybe there are even variations to adapt to different cultures, all sorts of different peoples and cultures, and you can take a storyline and make it pretty special in terms of being multimodal itself. So I can imagine a lot of pretty cool things.

Yeah, I always think back to that conversation we had with Jeff Adams from the Cobalt Speech company, talking about how his vision for the future was also this sort of more holistic treatment of both language and other things, because language touches everything. So I think that's some of what you were meaning. While you were talking, just to show you how accessible things are, I typed "map of the USA in Lord of the Rings style" into Stable Diffusion, and there are definitely, I'm sure you'll recognize, certain elements of that map, which I just posted in our Slack channel, Chris, that are Lord of the Rings-esque. Pretty interesting.

I actually... there is a
book, I don't remember what it's called right now, but there's a book of Lord of the Rings Middle-earth maps, and this looks like that. It's the eastern and central US we're looking at here, but it definitely has that Lord of the Rings style to it. So yeah, I'm enjoying that.

In terms of learning resources for people, I think what Chris and I would recommend is that you just get hands-on with this model. There are ways to do it even if you don't code: you can do it through the DreamStudio app, or on Hugging Face. You can actually download the model and use their diffusers library to run it; if you search for Stable Diffusion on Hugging Face, you can find it. Also, we'll put this in our show notes: we leveraged a blog post, which was quite useful, written by Marc Päpper, that described a lot of the things we talked about here. So if you want some visuals and that sort of thing to aid your understanding of the model, we'll link that in our show notes; definitely take a look. It's been fun to diffuse some of these ideas with you, Chris. Enjoyed it very much.

I did too, and I hope our audience enjoyed this as much. These shows where we get to explore bold new places are ones I really get excited about. I'm sure there'll be something super cool coming up that we'll be talking about again, but until then, thanks for joining today, Daniel. Bye.

[Music]

All right, that is our show for this week. If you dig it, don't forget to subscribe, and head to practicalai.fm for all the ways. If Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for serving our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you.
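As a quick aside for the notes, the training figures quoted earlier in the episode (256 A100s, roughly 150k GPU-hours, around $600k) are easy to sanity-check. The $4-per-A100-hour rate below is an assumption for illustration; actual market pricing varies:

```python
# Back-of-the-envelope check of the training cost quoted in the episode.
gpu_hours = 150_000        # ~150k A100-hours, per the episode
price_per_hour = 4.00      # assumed market rate in USD; real pricing varies
cost = gpu_hours * price_per_hour
print(cost)                # 600000.0 -> matches the ~$600k figure

# With 256 GPUs running in parallel, that is roughly 24 days of wall-clock time.
wall_clock_days = gpu_hours / 256 / 24
print(round(wall_clock_days, 1))  # 24.4
```

The wall-clock estimate also matches the episode's point that this was an expensive but not outlandish training run.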
That's all for now. We'll talk to you again on the next one.

[Music]
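The two-phase setup discussed in this episode, a diffusion model that denoises small latents plus a separately trained autoencoder that decodes them into full-size images, can be sketched with toy stand-ins. Everything here (the 16-pixel "image", the average-pool "encoder", the decay-based "denoiser") is invented for illustration; real Stable Diffusion uses a learned VAE and a U-Net:

```python
import random

# Phase 1 stand-in: a trivial "autoencoder" that maps a 16-pixel image
# to a 4-value latent and back. Trained once, reusable across many
# downstream diffusion models (the decoupling discussed in the episode).
def encode(image):
    return [sum(image[i:i + 4]) / 4 for i in range(0, 16, 4)]

def decode(latent):
    return [v for v in latent for _ in range(4)]

# Phase 2 stand-in: a "diffusion model" whose reverse process shrinks
# the noise step by step. A real model predicts and removes learned noise.
def denoise(latent, steps=10):
    for _ in range(steps):
        latent = [0.9 * v for v in latent]
    return latent

# Generation starts from pure noise in the cheap, small latent space;
# only the final decode produces the full-size image.
noise = [random.gauss(0, 1) for _ in range(4)]
image = decode(denoise(noise))
assert len(image) == 16
```

The computational advantage mentioned in the episode comes from the denoising loop running over 4 values instead of 16; at real resolutions that gap is far larger.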
Licensing & automating creativity

AI is increasingly being applied in creative and artistic ways, especially with recent tools integrating models like Stable Diffusion. This is making some artists mad. How should we be thinking about these trends more generally, and how can we as practitioners release and license models anticipating human impacts? We explore this along with other topics (like AI models detecting swimming pools 😊) in this fully connected episode.
Leave us a comment (https://changelog.com/practicalai/192/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Automation and creativity
• Goodbye, humans: Call centers ‘could save $80b’ switching to AI (https://www.theregister.com/2022/09/01/call-center-ai-gartner)
• DALL-E can now use AI to extend images as a human artist might (https://www.fastcompany.com/90783798/dall-e-image-generator-now-goes-beyond-the-frame)
• An AI-Generated Artwork Won First Place at a State Fair Fine Arts Competition, and Artists Are Pissed (https://www.vice.com/en/article/bvmvqm/an-ai-generated-artwork-won-first-place-at-a-state-fair-fine-arts-competition-and-artists-are-pissed)
Nation states and AI
• Undeclared pools in France uncovered by AI technology (https://www.bbc.com/news/world-europe-62717599)
• France reveals hidden swimming pools with AI, taxes them (https://arstechnica.com/information-technology/2022/08/france-reveals-hidden-swimming-pools-with-ai-taxes-them)
• Nvidia says U.S. government allows A.I. chip development in China (https://www.cnbc.com/2022/09/01/nvidia-says-us-government-allows-ai-chip-development-in-china.html)
• Foreign Affairs: Spirals of Delusion - How AI Distorts Decision-Making and Makes Dictators More Dangerous (https://www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making)
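The pool-detection articles above mention that the system initially confused solar panels for swimming pools at a roughly 30% error rate. A small sketch of why a single error-rate number hides which mistakes matter; all counts here are hypothetical, not from the DGFiP system:

```python
# Hypothetical detections over 10,000 land parcels.
tp = 700    # flagged as pools, actually pools
fp = 300    # flagged as pools, actually solar panels etc.
fn = 50     # real pools the model missed
tn = 8950   # correctly ignored parcels

precision = tp / (tp + fp)  # of flagged "pools", how many are real
recall = tp / (tp + fn)     # of real pools, how many were found

print(round(precision, 2))  # 0.7 -> 30% of tax notices would be wrong
print(round(recall, 2))     # 0.93 -> most undeclared pools still caught
```

Raising precision (as the agency says it has since done) is what reduces wrongly taxed solar-panel owners, even if overall "accuracy" barely moves.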
Resources
• Open RAIL Licenses (https://huggingface.co/blog/open_rail)
• NormConf (https://normconf.com/)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-192.md)

One of the big areas that we're seeing just kind of explode is, in one sense, AI and creativity, but in a broader sense, an expansion of the things AI is doing in a creative sphere that parallels a lot of what humans can do. I think the most obvious example we've seen recently is DALL-E, Stable Diffusion, all of these models that are text-to-image models. It brings up really interesting questions around the ability of maybe non-artistic people like myself, who... I'm not a designer, I'm not a painter, but the things that I can do creatively with these tools are amazing and fun. From my perspective, there's this really fun, positive element about it.

[Music]

Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive and accessible to everyone. Subscribe now if you haven't already; head to practicalai.
fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io.

[Music]

Well, welcome to another fully connected episode of Practical AI. This is where Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to dissect the latest news in the AI world and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist with Lockheed Martin. How are you doing, Chris?

I'm very well today, Daniel. It's been an interesting week in various types of AI news, and I've noticed some themes that we can jump into along the way, some fairly substantial concerns, I'd say.

Yeah, the world evolves. Well, on a slightly less on-topic note, but definitely newsworthy: I don't know if you're going to watch The Rings of Power today or tomorrow, or if you've already watched it?

Not yet.

So neither of us have watched it, so you're not going to spoil anything for me, because I haven't watched it either. But tomorrow our family is going to have a bit of a marathon and watch through Peter Jackson's Lord of the Rings...

Oh, good.

...and then we're going to watch The Rings of Power, so it's going to be an all-day affair. I'm basically ignoring anything I'm hearing about whether The Rings of Power is good or bad, reserving judgment until I see it.

So yeah... I don't know that we've ever talked about this, because it's not directly in the AI world, but I grew up as a fanatical Tolkien fan, not only of The Lord of the Rings, but The Silmarillion and all the other
stories that most people don't ever want to read. I've read them all multiple times, so I'm one of those slightly nut-job types; it's my favorite set of stories in the world. So yes, here's the conflict: my wife and my daughter aren't as enthusiastic as I am. They like it, but not like I do.

Neither is anyone in my life, really.

Yeah. So the big thing about The Rings of Power coming out was: do I just watch it, or do we do it as a family? My wife's like, yeah, we like to watch it together, but they don't have the urgency. It was released last night, Daniel, as we are recording this, and I'm sitting here waiting: when are they going to be ready to do this? Part of me wants to selfishly just go watch it, but no, I'm going to be a good family man.

I mean, for someone like you with that sort of background, you're getting this visual sense of things we haven't seen in other movies, like the dwarf kingdoms and such, which is going to be really interesting. I hope it's really awesome, but it's different.

It's definitely different. There's a lot... they've taken license.

Oh yeah, sure. Which one would probably expect.

I agree. And Peter Jackson, though he did a great job, took some license with the story, but he pulled it off pretty well in my view. They've taken more license at Amazon with this story. I'm one of those people who has read The Silmarillion many times and knows all the detail, and I'm going to be trying so hard not to let it just irritate the crap out of me. But I'm looking forward to it because of the scope and level of quality it's supposed to have.

Yes. Well, speaking of creativity and license and all of those things... and this is one of the topics you brought up to me... one of the big areas that we're seeing just kind of explode is, in one sense, AI and creativity, but in a sort of broader sense,
an expansion of the things that AI is doing in a creative sphere that parallels a lot of what humans could do. I think the most obvious one we've seen recently is both DALL-E and Stable Diffusion, all of these models that are text-to-image models.

Yep.

So it brings up really interesting questions, but first of all really fun, positive things, around the ability of maybe non-artistic people like myself... I'm not a designer, I'm not a painter necessarily, I mean, I took art in high school... but the things that I could do creatively with these tools are amazing and fun. From my perspective, there's this really fun, positive element about it. I don't know, do you see that side of things?

Yeah. And by the way, I'm looking forward to touring the Louvre at some point in future years and seeing a Daniel Whitenack AI original up there, because that's going to happen.

Well, what do you think is the attribution of these works? How should one attribute one of these AI-augmented artworks, I guess?

I don't know that I have the answer, and I can see both sides as we talk about this. One of the articles that we've seen, on vice.com, is that an AI-generated artwork won first place at a state fair fine arts competition, and artists are pissed. I think that's going to happen a lot. The question then becomes, if you want to talk about artwork, you have to be specific. I'm so biased, coming from this AI community, but it's a set of tools, just like Adobe Photoshop was for many years in the digital space; now AI is another set of tools, and I personally see it as perfectly legitimate. But there may have to be
competitions that specify "by the human hand," in a non-digital format, that kind of thing. I don't know. It's not just coming; it's here, and it's here in a big way, with only more of it to come.

Yeah. And we are Practical AI here, so people are wondering how they can practically do some of these things. One example is DALL-E: you can request access and use DALL-E, which now includes extra features around inpainting and other things. There's also... I think it's funny, Chris, we talked about DALL-E recently, and it almost seems like we've moved on from DALL-E and now Stable Diffusion is the thing; you can see how fast things move in the AI world. At the time we're recording this, Stable Diffusion is the new new, and from Stability there's dreamstudio.ai, where you can sign up and do this sort of AI-assisted creativity work within the DreamStudio app. So if you're wanting to get hands-on in a practical way with these models, it's not that hard at this point, whether you're doing that in a Hugging Face Space, or in DreamStudio, or in the DALL-E interface; there are multiple on-ramps to this.

I agree, and it's interesting. I do think this is going to open up the space in a big way. Can you talk a little bit... because we've talked about DALL-E quite a lot over recent episodes, so listeners are probably fairly familiar with it if they've been with us for a short while at least... can you talk a little bit about Stable Diffusion at a high level? What do you know of that's different? I know it may not be your specialty.

Yeah, it isn't really my specialty. I think there are differences in the sort of modeling technique and the data and the way they did it. I think, though, one of the interesting things is that Stable Diffusion took more of an open
approach, where DALL-E was released in a very, very guarded way, with the sort of waitlist-type thing that we've seen from OpenAI. I mean, last time we talked, I think I just used a public Hugging Face Space to generate Stable Diffusion output, and on Hugging Face there's this DreamStudio thing. So it's in the same stream as DALL-E and other diffusion models that we've seen; we're still in this sort of stream of diffusion models, but I think the power of openness and community is part of what has driven the Stable Diffusion model to just explode, and in some senses people have moved on from DALL-E to Stable Diffusion.

That's a really interesting point in terms of openness and how openness drives adoption of things as well, I guess. Yeah. So to turn a little bit: as we've been talking about this new intrusion, if you will, into artistic output, and the fact that the folks who have been doing it the traditional way aren't so keen about it at this point, it has implications in a much larger way than just whether we like the art and whether we care about it being AI-driven. It affects people's livelihoods; it affects jobs. This is a recurring theme that we've talked about over time, and I don't think it's a theme going away anytime soon: how these things intrude into livelihoods that were previously very human-centric and start affecting jobs. There was another article I ran across in The Register; the article's title is "Goodbye, humans: call centers 'could save $80b' switching to AI." I don't think that's news to anybody, quite honestly. I think we've seen automation occurring for quite some time, and it would be natural to have AI models moving into those situations. It calls out the fact that all of these things are driven by economics. As AI is ever
more present in the automation of previously human jobs, it's going to be the economics driving that. I don't have anything up in front of me, but I remember, in a very different industry, the thing we talked about briefly about McDonald's automating their food service a while back. So we're going to see it everywhere. And as we do, I think one of the challenges in front of us is recognizing that it's hitting almost every industry. It hits medical doctors; one of the earliest things was radiographs, but there are so many other areas, cancer diagnosis, we've talked about many of these. We've talked about obviously airline pilots; we've talked about fast food. This is really hitting the gamut of industry; there's really no safe place.

I was kind of surprised... as you know, Chris, my wife and I are vegetarians, so when we're traveling we have our sort of quick spots where we know there's something that is not meaty, and one of those places is White Castle, which has been interesting over time. I've noticed they've been early adopters of things like the Impossible meats, plant-based or non-meat alternative meats, which was kind of surprising for me. But one of the last times I was coming down from Chicago, I was going through the drive-thru, and now they have, in the drive-thru, an AI assistant that does some of the drive-thru interactions at certain times of the day... not at all times, but at certain times of the day. And I was just totally... I shouldn't have been shocked, right? Because we talk about this every week. But I was sort of surprised at just... yeah, I mean, it's there, and they're using it, and this is in rural Indiana. AI drive-thru.

It's been happening for a while; it's just gradually trickling through every industry. And it's funny that you say that, because I actually went through a Chick-fil-A... sorry,
Chick-fil-A people, because I'm about to say something not nice... I was in the drive-thru, and they had done this thing where they have the human teenagers trying to optimize the drive-thru to handle the traffic, and they're walking around...

It's like parallel processing.

Yeah, yeah. And coming from this that we do, I was sitting in the car going, "yeah, you're all going to get automated away pretty soon" in my head. I mean, I wasn't saying that to them, but that's literally what I was thinking in the drive-thru: this isn't going to last as long as you think. And we're seeing that everywhere. I mean, software developers... we who develop software now have some tools that are AI-driven, by GitHub for instance, and that is one step toward that. So where do we go from here, Daniel? Just to ask a really hard, giant, global question. At some point people are going to start to notice.

I don't know that I have the answer. I think we're still at a point where there are certain things that fit within the distributions that models know. We talked about a sort of apparent coherence in models recently, and I think there are things that fit within the distribution of what models know about. So there are probably two things. One is: what are those areas where we don't have good data sets, and models don't know the distributions of things to put together? Those will still be where we shift human focus, I think. But then, even where we do have data sets and we're automating, say, call centers, it stresses even more the biases and the openness of data and models, because those biases are going to hit really hard when they start affecting people's lives.

[Music]

So another thing that I noticed just last week: there were a lot of interesting things having to do with AI interactions that had
to do with government. And we're seeing that because, as we have talked about a number of times, the political, social, cultural and legal frameworks that we all live within, which have a lot of commonality even across different countries, still haven't kept up with the technology. So we have another big question going forward: what is the role of AI when it comes to the different types of feedback loops that are present between a government and its citizens, and how does that affect things?

One thing that was sort of humorous, I suppose, especially if it's not happening to you, was that there were some articles out about France. They're using a model in the French government to identify swimming pools from satellite imagery. Apparently, and I did not know this, if you're in France and have a pool, that pool is taxed, so it affects the tax base, and apparently there were tens of thousands of undeclared pools in France.

How dare they not declare their pool!

How dare they swim! So apparently there were more than a few French citizens who were a little surprised to receive a tax notification saying "we're now taxing your pool" for a pool they had not previously declared. And it's a little funny to me, because I don't have a pool and I'm not in France, I'm in the US, but it does raise the question that such tools could be used anywhere, by any government, for pretty much anything they can do with the data set. So that was one thing. What do you think of that? You don't have a pool, do you?

It is interesting. I think we were just talking about how AI models are affecting people's lives, and these very practical things about how a model is built show up almost immediately. One of the articles, which we can include in our show notes, talks about this pool
model: at first, the system confused solar panels for swimming pools, with an error rate of 30%, but the DGFiP, I'm assuming that's the government agency doing this, says it has since increased the accuracy. I think this is one of those cases where, in a very practical sense, outside of the implications of it in a governmental-rule sense, a model can do this, no doubt, but it doesn't do it perfectly. So as a practitioner in the French government, if you were one of those practitioners, how do you anticipate and deal with those biases in the system and your understanding of your data? I would say that if you do some work up front to understand behavior, do some human evaluation to understand how the model is behaving, analyze the biases in your data set, and then release in a way that takes those behaviors of your model into account, then people might still be angry, but they're going to be angry that the government is taxing them for their pool. They're not necessarily going to be as angry as, "hey, the government is taxing me for a pool that doesn't exist, and it's only thought to exist because I have solar panels on my house and I'm trying to save the environment." That's probably a different scenario, right? So I don't know the full details of what they did on that side of things, but I think as a practitioner there's a lesson here: as you release these models and they influence people's lives, the very fact that they won't always behave the way you assume has to be factored in up front.

I agree with you, and not only that: as we've been tracking what's happening in the world over the years we've been doing this show, we tend to gravitate naturally toward the new stuff coming out. And what I was struck by with this one is that this is, uh, I
And what struck me about this is that, while I don't know the details, this looks to me like a fairly simple, by today's standards old-fashioned convolutional model, I would guess. I don't think it's necessarily the most cutting-edge deep learning of 2022, and I'm not being critical of those practitioners, if we happen to have them as listeners. I'm just saying they're being very practical, in that they're using a technology that's been around for a while to do something really mundane, but with some fairly substantial effect. It probably did not cost very much to put this model together, train it and improve it, compared to many of the things we talk about now. And yet they note that a 30 square meter swimming pool will result in about €200 of extra taxes per year. So every pool is another €200 they can collect, and apparently that has amassed something on the order of €10 million in new revenue for the French government. Enough to easily pay for the time of the data scientists working on it, plus quite a bit more, so I would not be surprised to see a lot more of this. Well, yeah, and I wonder too, if you're generating that amount... The model I'm thinking of, Chris (I don't know if you had this happen in your town) is the Bird scooters, these electric scooters that show up in cities. I've used them. I can't comment on their whole strategy, but my impression was: we're going to show up overnight in your town, drop hundreds of scooters on the streets, and deal with the legal implications later. So they prioritized adoption above the legal implications of the problems they might see, and I think it worked. The companies that did that saw the trend coming, received a lot of adoption upfront, and likely received a bump in revenue.
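The pool-tax figures quoted a moment ago are easy to sanity-check; the pool count below is implied by the reported numbers rather than stated in the episode:

```python
# Back-of-the-envelope check on the quoted French pool-tax figures.
tax_per_pool_eur = 200        # extra tax on a ~30 m^2 pool, per year
new_revenue_eur = 10_000_000  # reported new revenue

# Implied number of newly taxed pools, consistent with the
# "tens of thousands of undeclared pools" mentioned above.
implied_pools = new_revenue_eur // tax_per_pool_eur
print(implied_pools)  # → 50000
```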
Versus the legal implications of what they're doing. So maybe, in a more pessimistic sense, one could take the perspective, if you're the French government: you say your thing's 70% accurate? Well, just release it, because we'll get enough revenue to take care of any of the legal things that come up from us doing this. That's maybe a more cynical view of the practicalities of it, but it's very much a business decision. It is. It's funny, I think I'm usually the more cynical of the two of us, but you raised a great point. Speaking of cynical, did you see the NVIDIA news over this past week? NVIDIA has been having an ongoing, very public conversation with the US government about whether or not they can sell A100 and H100 chips in China, which is relevant because there is a little-concealed competition in many areas (economically, militarily, all sorts of implications) between China and the West, with the US probably being the center of that. So for national security reasons there's this ongoing question of whether or not they can do that. I won't jump in first; what do you think of that? I'm kind of biased, I'm already in the defense industry, so I have some thoughts on it, but how did it strike you when you found out this was going on? I see a variety of perspectives here. I know the supply chain stuff is all messed up around the world, and so to some degree people are just asking which jurisdictions they need to partner with, business-wise, to run their business. But there are a lot of other implications there too, around security and that sort of thing, and listening to the IRL podcast from Mozilla recently, talking about the balances of power happening between different countries around the world, that's something to consider as well. I don't know, what did it trigger in your mind? Well, like you, I can see multiple sides to this. I'm part of this AI community, and part of me goes: that's not a very sensible thing to go do, and it impacts a lot of livelihoods. The defense side of me is very aware of that competition, and as a strategic move I don't think it could last; if it were to stand, the Chinese would go and create their own capability, though that might take time. So I can see all sides of the story. I don't know that this particular move, the way it's being done, is the most effective way for the US to secure its interests in the long term, but I wasn't at all surprised. As we've said on multiple shows, AI is a power lever in the world: for governments, for companies, for individuals. It is one of the key ways of filling power vacuums now and changing who has the power. So I think we're going to see all sorts of things all over the place, and you see the same dynamic in a non-government sense between corporations, as we saw with artists this week, as we talked about a little while ago. Yeah. One of the things I think is interesting, and I guess this gets to some of the power dynamics... One thing that was surprising to me (I don't know the stats on DALL-E, but I saw this on Twitter) is that for the new Stable Diffusion model, they were talking about pricing in terms of the market value of the training. The tweet from one of the team says that they used 256 A100s in the cloud, for 150k GPU-hours in total, so at market price Stable Diffusion cost about $600k to train. Now, $600k is by no means a small amount of money for most individual humans, but $600k in a
government sense, or in a large-company sense, is nothing, right? It's not even a rounding error. Right, exactly. So I do find it interesting that, because the infrastructure around this is being commoditized, and the tooling around workflows and MLOps and distributed training is getting better, I see this as a positive shift in terms of what's possible for smaller teams. They can still make an impact and release a thing that takes the industry by storm, for a number that's not restricted to nation-states. Before, I think you had these sorts of models being released where this was out of reach of anyone but Big Tech and a nation-state, and it's very interesting to me that something like Stable Diffusion... again, not a small amount of money; if I'm an independent researcher, $600k is a lot of money, and you don't have that in your checking account right now. I wish I did. But it's reasonable to think: well, if I really wanted to, and maybe I live in the Bay Area, I could sell my house for $600k and train a model. There's a route to that sort of money, or a small amount of seed funding in terms of a startup, something like that. Yeah, if you really are motivated, it's attainable. Yeah, and I realize I'm revealing my Western biases here; that sort of money is not within reach for the majority of the world. But it is encouraging to me that this sort of innovation can happen from small teams and isn't restricted to nation-state actors or Big Tech. That's encouraging. It's a great point, and it's what you would expect over time: with all technologies, they come out most expensive and most inaccessible at first.
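The $600k figure above is just GPU-hours times an hourly rate; the $4/hr A100 price below is an assumption chosen to match the reported total, not a figure from the episode:

```python
# Rough reconstruction of the quoted Stable Diffusion training cost.
gpu_count = 256
gpu_hours = 150_000        # total A100-hours reported in the tweet
usd_per_a100_hour = 4.0    # assumed cloud rate, picked to match the $600k figure

total_cost_usd = gpu_hours * usd_per_a100_hour
wall_clock_hours = gpu_hours / gpu_count
print(total_cost_usd, round(wall_clock_hours))  # → 600000.0 586
```

Dividing out the fleet size shows why this is within reach of a small funded team: 150k GPU-hours is roughly 586 hours, about three and a half weeks, of wall-clock time on 256 GPUs.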
And then eventually they become cheaper and more accessible, and we're seeing that happen. [Music]

Well, Chris, there are a number of things I ran across that are in some way related to what we've been talking about. One of the interesting articles that I think everyone should take a look at was posted on the Hugging Face blog on August 31st of this year, 2022, and it's about OpenRAIL licensing: toward open and responsible AI licensing frameworks. I think this is really relevant, and people should take note, because this is the licensing mechanism that was used in the recent releases of the BigScience BLOOM model, which is kind of the latest, greatest large language model, and of Stable Diffusion, which we've been talking about the whole episode. You can go to the blog post and see the links to the OpenRAIL licenses for BLOOM and for Stable Diffusion. What they argue is basically that machine learning models, or AI models, have in one way or another been licensed under licenses that were mostly applied in the past to either software or content. Software on GitHub I might license as MIT or Apache 2.0, something like that, with various implications; and if I release content, say pictures, I could do that under Creative Commons, for example, or a book, if I wanted to license it more permissively, under Creative Commons as well. And over time people have asked: which of these do I use for models? Because a model is sort of an asset, it's files, and in some ways it has similarities to content; in other ways it has code associated with it (training code, a model definition, config), which is a similarity with code. So does one or both of these fit better? And I think there are additional things with models that maybe don't fit either of those cases: things around biases, the use of the model, the limitations of the model. That's really not in common with code, because code is generally deterministic: you have an API endpoint, it does a thing, and you can look at the code and understand what it does. But a model has these behavioral and bias-related aspects that code doesn't. I don't know, have you run across these sorts of licensing questions with models over time, in terms of the fit between data sets, code and models, and what's licensed where? I have, and I think a lot of people in the community at large have; it's a natural question to ask. You and I are fond of saying that you have to have software with your model to get stuff out there, and software licensing is pretty well established, as you just pointed out. With open source licenses you have some choices, and I think they're fairly well understood today, and the same goes for Creative Commons and such. But this has been a big, ambiguous area, with a lot of people not understanding it, so I think it's long overdue, and I'm glad to see it came from Hugging Face, because they always put out fantastic stuff, as we're often saying. How would you describe the differences? Recognizing bias, for instance: how do they approach licensing that accommodates bias, for those who haven't had a chance to look at the article? What's different or new there that they might not have seen in those other categories of open source licensing? Yeah, so I think there are commonalities with other open licensing structures or mechanisms that maybe we're used to, but here there are
really two pieces that make this interesting. One is "open," the open-access aspect of it. The other side is responsible use, and that's where the RAIL component comes in. They build off this idea of RAIL, Responsible AI Licenses; there's a RAIL page that talks about the RAIL licensing effort, which has been discussed in the ACM paper "Behavioral Use Licensing for Responsible AI." Basically, these RAIL licenses include behavioral-use clauses which grant permissions for specific use cases and/or restrict certain use cases. So if you think of a model, again, it has similarities with open data or open code, but it also has this behavioral aspect. What the OpenRAIL license does is grant permissive access, redistribution and that sort of thing, as you might expect with an open license, but then it adds these clauses about responsible use. I can give an example here. That'd be good; that's the one thing I'm wondering about, because "responsible" is kind of ambiguous. It is. And the OpenRAIL license is really a structure: maybe you could adopt Stable Diffusion's OpenRAIL license if you have a similar model, but other models are going to have other implications, so you have some flexibility in the responsible-use clauses. For the Stable Diffusion license, I'm just looking here, there's Attachment A all the way at the bottom, and it lists use restrictions. This kind of clause, in my understanding, is really important here. They say you agree not to use the model or derivatives of the model, and then they list a bunch of things. One of those is "in any way that violates applicable laws." OK, that's kind of boring, but if you go down a little bit there are very interesting ones: you agree not to use the model "to defame, disparage or otherwise harass others." And there are other interesting ones, maybe specific to uses of the model they anticipated and wanted to avoid. There's one that says you agree not to use the model "to provide medical advice and medical results interpretation." Obviously you could generate images with Stable Diffusion, or maybe interpret images with such a model, that are medical-related; medical imagery is a type of imagery, so maybe that's a use they anticipated and had internal discussions about. So there are all of these use restrictions, or clauses, that go into the RAIL part of the OpenRAIL license. It's still open in terms of access and distribution, but these clauses around responsibility are explicitly included. It's interesting. Whereas you and I are very focused on AI ethics topics and the responsibilities associated with them, I find the explicit call-out potentially shortsighted in terms of unexpected outcomes and consequences, so I'll have to go through it in depth after the show and see what I think about some of those ideas, and see if I can come up with any uses that maybe weren't what they were thinking of, just as a fun exercise. Yeah, and I think that's a pre-existing problem with any license: you can only anticipate so much. I'm reading some of the frequently asked questions for the BLOOM OpenRAIL license, and one of the explicit FAQs is: does the license cover every harmful use case? This is their response; maybe it's useful for this discussion. They say: no, we recognize that
the list of use-based restrictions does not conceivably represent everything one could possibly do with our work. We focus on use cases which could be feasible for the model at this time. The license is a start by us at exploring how such RAIL licenses could be used to mitigate harm, and we hope that this first set of provisions can evolve into more comprehensive provisions over time with community engagement. So I think they recognize this explicitly: you can never anticipate everything. In some of the discussions we've had on the podcast around responsible use of AI and ethics, I think there's an obligation on the developer's part to reasonably try to anticipate harmful uses of what they're doing, with the understanding that you're not going to anticipate everything; but at least if you make an effort, some of those things are anticipated, and you're not just chucking your thing out into the world and seeing what happens. I think for each item they need a little parenthesis behind it that says "and the kinds of things that, you know, I'm referring to, etc." Yeah, that sort of thing. So I think this is a good start, and it opens a dialogue as well. On my team we're thinking a lot about these things in terms of the data sets and models that we release, so it's really good to have some sort of open dialogue around them. I agree. If this is a start, the conversation can evolve, as open source licenses did on the software side. There's still not full agreement there, but it's much more mature than it was, say, early in my career, and hopefully this is the start of the conversation on the model side. At the end here, what we've been talking about
is hopefully very practical, but we can also point people to a couple of practical learning resources that have come across our desks over the past week. One that fits right into our theme of practical AI is an upcoming conference that actually started, as far as I understand, out of a viral tweet on Twitter, called NormConf. This is a completely free, online event, but with an amazing list of speakers, a really great lineup of AI, machine learning, data science and tech speakers. What they say is: what if there was a conference all about the mundane, behind-the-scenes, how-the-sausage-is-made, middlebrow, unsexy, normcore stuff in the data and ML parts of the tech world? That's the goal: to talk about all the things that aren't "what is the latest diffusion model?" but rather "I tried to train this diffusion model, I'm having trouble with my infrastructure and data, and I can't make it work" sorts of problems. You can see that even people we've had on the show are represented in the list of speakers, and future guests for our show are surely coming from this list. I certainly hope so. Yeah, if you're out there and you're speaking at NormConf, let us know, just reach out to us. I think this is a great, focused, practical thing, and I hope it continues to happen. I'm certainly excited to listen in and see the set of content they're putting together. Me too; I'm all over this, so I'm going to register right now. Cool. Well, Chris, I've enjoyed chatting through things with you, as always, and I hope that you have a good Labor Day. It's about Labor Day here in the US, so we've got a long weekend; enjoy the holiday, and good to chat as always. You too, Daniel; have a good weekend, take care. [Music]

All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Privacy in the age of AI | In this Fully-Connected episode, Daniel and Chris discuss concerns of privacy in the face of ever-improving AI / ML technologies. Evaluating AI’s impact on privacy from various angles, they note that ethical AI practitioners and data scientists have an enormous burden, given that much of the general population may not understand the implications of the data privacy decisions of everyday life.
This intentionally thought-provoking conversation advocates consideration and action from each listener when it comes to evaluating how their own activities either protect or violate the privacy of those whom they impact.
Leave us a comment (https://changelog.com/practicalai/191/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Google’s Responsible AI Practices - Privacy (https://ai.google/responsibilities/responsible-ai-practices/?category=privacy)
• What is Data De-identification and Why is It Important? (https://www.immuta.com/blog/what-is-data-de-identification)
• Hugging Face - Stable Diffusion Demo (https://huggingface.co/spaces/stabilityai/stable-diffusion)
• floret: lightweight, robust word vectors (https://explosion.ai/blog/floret-vectors)
• Using oneAPI with Intel® FPGAs Workshop (https://plan.seek.intel.com/psg_ASMO_dcaipsgloc_LPE_EN_2022_oneAPIworkshopSep8?cid=em&source=elo&campid=psg_ASMO_dcaipsgloc_EMIE_EN_2022_oneAPI%20workshop%20invite_C-MKA-29855_T-MKA-33098&content=psg_ASMO_dcaipsgloc_EMIE_EN_2022_oneAPI%20workshop%20invite_C-MKA-29855_T-MKA-33098&elq_cid=6649892&em_id=83558&elqrid=3039f3b248fb40cc890417beb8248254&elqcampid=52978&erpm_id=9748747)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-191.md) | 11 | 1 | 0 | I think when people talk about data now, in terms of data that affects personalization and identification, I think the argument to be made now by any data scientist or AI practitioner is the argument for what you need and why you need it, and being able to justify that going forward. In general there are many exceptions to that, obviously, but yes, I think the burden has changed to us, to show not only why we need it and what we need it for, but why that's a good thing and why it does not cause damage unintentionally. We've come a far cry from the early "collect everything" days; only intelligence agencies these days collect absolutely everything, the way the world works [Music] now.

Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen, check them out at fastly.com, and to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at [Music] fly.io.

Well, welcome to another Fully Connected episode of the Practical AI podcast. This is where Chris and I keep you fully connected with everything that's going on in the AI community. We'll discuss some recent AI issues or news, and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, a tech strategist with Lockheed Martin. Hi, how are you doing, Chris? Doing very well, Daniel. It's a good day; I'm looking forward to having a fun conversation with you, and I hope our listeners are too. Yeah, have you been flying much
recently? For listeners, Chris is a pilot. Have you been up in the air much? I did; we took a vacation with my daughter a little while back and did a lot of flying for that. And then, ironically, given that you ask this today: pilots have to do what's called currency flying to keep their night rating going, every three months, and tonight is the night. So I'm going to go fly tonight, a little while after dark, and do some night landings. I always enjoy those; the lights are beautiful. Well, in terms of the things I'd like to discuss today, this might seem like a random question, but I think it's relevant. I know you're doing these certifications and you've got to keep things up. If you were told that the FAA, or whoever, wanted a camera mounted in your plane, monitoring whatever goes on in the cockpit during each of your flights to judge whether you were a good pilot or not, with constant monitoring of you, maybe an AI model identifying certain things you did wrong, how would that make you feel? Oh, not good at all. Not good at all. Aside from all the moments where maybe I take liberties that the FAA wouldn't go for... just in general, every bad landing noticed, that kind of thing. Oh boy. That doesn't appeal to me at all; it would feel like a fairly substantial invasion of my privacy. Yeah, but one could argue that if you wanted to certify only pilots that did the right things a certain percentage of the time... I guess in that case there's a balance between, on one side, an argument for some type of safety over privacy, or accuracy over privacy, and on the other end what most people would consider, in this sort of hyperbolic situation, an invasion of privacy. Yeah, I think there's a balance to be struck there, certainly. When you raise public safety, that's a legitimate concern. But in the use case you brought up, pilots do talk about that, because with current technologies the oversight of pilots is increasing, and I think that's very important: if you are an airline pilot and you have passengers in the back, that's super important. For me, though, I worry about whether I really need that level of oversight for the mountain flying that I do. I tend to do low mountain flying, and if I were to pass a hiker on a ridgetop without realizing they were there, technically I would be breaking a regulation and could get in trouble, and frankly, I think that might be a step too far. So I think the privacy concerns are something we need to figure our way through. I'm guessing there's an AI angle to this one? Yeah. I bring up this topic because in these episodes it's just you and I, and we get a chance to discuss some of the things on our minds, and this has been one of them recently. Not so much the cameras-in-the-cockpit scenario, since I'm not a pilot, but general privacy concerns: thinking, even for my own team, about what balances we need to strike and where the privacy concerns sit within our own workflows, making sure we're comfortable and responsible with the ways we handle data, the data we feed into our models, the types of data we store in certain places, and that sort of thing. That's definitely been on our minds recently. I don't know if we've talked about this, Chris, or whenever you
got into things like this, but when I got into this sort of stuff, it was the beginning of the data science hype, not so much the AI hype yet. There was this hype around data science being the new thing, and getting a job as a data scientist, and I remember at that time there was this thinking: well, you don't know what data you're going to need, so just make sure you store it all. That was the mindset; I remember it very distinctly. Do you relate to that, and how do you think that mindset has shifted over time? Oh, I remember that. You're showing your age, Daniel, by the way. That's certainly changed dramatically over the last couple of decades. In those early days of data science, everyone was pioneering their way through it, and yes, you were trying to find data to use; there often wasn't enough data around, and when you found it, you collected all you could to combine with other data. Obviously, today things are somewhat different. With today's capabilities, privacy and things like data bias (and they're all interrelated) have changed the landscape dramatically, especially when you consider all the use cases out there. Yeah. I bring this up because... let's just say that we want to strive for privacy, or a reasonable amount of privacy. Let's make that argument first; there's probably a separate argument of "well, maybe we don't need the privacy a lot of people are after," but that's another discussion. Assuming we're striving for some level of privacy, the first thing that comes to my mind in terms of making something "private" is: if you don't collect or store the data, then that's just about as private as you can get. Now, maybe there are other logs and things we wouldn't immediately think of as data that reveal certain things, but I think one principle... I even saw this term: I was looking through several things leading up to this, and one that I look at occasionally is Google's Responsible AI Practices page, and they use the term "data minimization." I know listeners are probably thinking: what would we have to learn from Google about privacy, since they know everything and have all the data? So it's kind of interesting to see Google talking about data minimization, but I find the term interesting in the sense that one way to improve privacy is to just plain not have data. Have you been in those sorts of discussions in your career, around whether we actually need to store this data or should not store it, those sorts of conversations? Yeah, I think the burden has flipped to the opposite side from those early days you talked about. When people talk about data now, in terms of data that affects personalization and identification, I think the argument to be made by any data scientist or AI practitioner is the argument for what you need and why you need it, and being able to justify that going forward. In general there are many exceptions to that, obviously, but yes, I think the burden has changed to us, to show not only why we need it and what we need it for, but why that's a good thing and why it does not cause damage unintentionally. We've come a far cry from the early "collect everything" days; I think only intelligence agencies these days collect absolutely everything, the way the world works now. Yeah, I think there has been a shift. There are a lot more conversations going on within companies about whether they should store certain pieces of data about a user.
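The "data minimization" principle described here can be made concrete as a preprocessing step that keeps only fields you can justify and pseudonymizes the direct identifier before anything is persisted. This is only a sketch: the field names are invented, and a salted hash is pseudonymization, which is weaker than simply not collecting the data at all.

```python
import hashlib

SALT = b"rotate-me-regularly"          # a secret; manage it properly in practice
KEEP = {"event", "timestamp", "city"}  # be able to justify every field you keep

def minimize(record: dict) -> dict:
    """Drop everything outside KEEP and replace the email with a salted hash,
    so records can still be grouped per user without storing who the user is."""
    out = {k: v for k, v in record.items() if k in KEEP}
    out["user_ref"] = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:16]
    return out

raw = {"email": "jane@example.com", "name": "Jane Doe", "event": "signup",
       "timestamp": "2022-09-01T12:00:00Z", "city": "Lyon"}
print(minimize(raw))  # the stored record contains no name or email
```

The design choice is that minimization happens at ingestion, before storage, so the question "do we really need this field?" is answered once, in code, rather than re-litigated per query.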
say a name or a, location right something that that is, useful and maybe, marketing purposes or whatever it is, right do we really need to store that to, do our marketing the way that we want to, do our marketing that's that's like a, question that comes up probably and it, comes up I think in relation to like, Facebook and others have or meta or, whatever I should refer to them as um, they they've changed their their apis, and other things to where you don't get, some of that data in many scenarios um, so maybe some of that is just we don't, even get it anymore but I think that as, much as I love hugging face and the, hugging face Hub and that Community I, think there is this sort of shift with, the recent AI more AI related hype, around like what are all the AI data, sets we can create right and there's, definitely bias concerns that have come, up with that I think there's probably, privacy concerns as well though I, remember very distinctly um I tried to, actually find if there was like a blog, post about this or something but I, remember Jim kukar who used to work for, um amuda I attended a talk by him and he, showed How You Could reconstruct a real, person's face from the parameters of, like a facial recognition model because, the parameter space was so large so, there's a very large model there's a lot, of information encoded in the parameters, of that model and he could sort of, reconstruct or he showed some research, where someone did I forget the exact, details but you could sort of, reconstruct something from that so even, these like very very large models that, are released and the parameter spaces of, those models could even have privacy, concerns so I think this sort of, proliferation of like let's get all the, data sets on the Hub let's get all the, models on the Hub I think that overall, is like you know 90 not well I don't, know I don't want to put a percentage of, it but I think overall like it's a very, very good thing and obviously I think if, 
you've listened to this show much, you know how much I love that effort. But with it, there's maybe a shift back in thinking towards "let's accumulate all the data, let's release all the models", and these models themselves may have concerns within them... certainly bias concerns, but privacy concerns as well. That's one thing I don't really have a definitive statement on, but that I've been thinking about as I've seen the community grow.

You know, you raise a really important point in terms of the implications of what you just described, and that's the fact that as the capabilities are evolving over time, the way we're choosing to make evaluations about how our privacy is affected is also changing. It's not a static decision. It's a decision where, if you look back a few years at where things were, you might say "I'm okay with that", but at this point the sophistication level is becoming so much higher, and the fact that you can do that reconstruction you just described makes one re-evaluate. Then you add in the fact that there are also considerations like who is doing it, and why, and that changes depending on who it is. We're all making decisions every day about what privacy compromises we're willing to make, and we all have different profiles in that capacity. If you choose to install security cameras, like the doorbells that everyone has now, you know that every time you walk in and out of your front door you're on camera, and it has a model there; it knows who you are, it's recognizing you, even before anything is done with the data. I've made that choice: I have a Nest on my doorbell, and I have other devices around my house that know who I am. So there's some level of that, but it also depends on whether or not I have some level of control of that data in terms of its usage, what the rights that I have as a consumer are, and whether it's from a public sector perspective or a private sector perspective. All those are considerations that we can delve into.

So Chris, the first term that I had run across that I wanted to bring up was that term "data minimization": maybe you need data to do something, maybe you don't. One consideration with privacy is that certainly the easiest way to deal with a privacy concern is to not have the data. In many cases, though, either we step into a project and data already exists and is stored within our organization, or we have some dataset that we're interested in working with where maybe we don't know what the identifying information within it is, or the privacy concerns with it. The next term that I ran across as I was probing this space was "data de-identification". I was reading a blog post, again from Immuta, which I think we've had on the show before; they've of course done a lot of thinking in this space, and they have a nice blog post, which we can link in the show notes, about data de-identification. They talk about various pieces of data that you might want to de-identify within datasets. For practicality's sake, since this is Practical AI, I'll just mention a few of those. They have a long list that I won't read in full, but they talk about names, dates, telephone numbers; those are probably the ones that would be immediately assumed. Ones that people might not be thinking about immediately would be a device identifier or serial number, so maybe a MAC address, or maybe a browser fingerprint. Web URLs might be identifying: there's such a proliferation of analytics data within URLs these days. That's one thing I was thinking about, all the query strings that are added onto a URL to track you in various ways, or there could be an account ID in some URL, something like that. They list out a bunch more, but those are the types of identifiers we have in mind when we refer to identifying data. As you look at that list, Chris, do these things come up in datasets that you work with?

Absolutely. You go through the process of trying to get them removed, to de-identify them, while not losing the potential value of what you're trying to create from a model. Because let's face it, in many of the models we create, humans are central to the output of those models, and so if you're going to deal with humans, you're going to be dealing with these identifying traits. But if you take out too much, sometimes you run the risk of the model not being able to be productive even for the best use. So it's a bit of a challenge for the data scientist of today: there's this balance of a bunch of hard things that we need to accomplish from an ethical standpoint, and we do the best we can with the tooling available.

Yeah. I also think that the person giving you their data needs to have agency to give you their data, but the general public doesn't understand the implications of some of the data that they might give you. So as a practitioner in the AI space, you probably can't just assume that because the user gave me this, it's going to be okay, or at least not have any issues, if I use this identifying field. I listened to a podcast about... did we talk about the boarding pass thing? This is another flight thing. I
don't think so. Go for it.

So I listened to... I think this was another Darknet Diaries; I love that podcast, and I've mentioned it a couple of times on the show. What had happened was: people go on a trip, and they post a picture of their boarding pass on Instagram or something. "I'm going on my vacation, look at my boarding pass", whatever. It's very common, you know, #boardingpass. Well, there was a guy who figured there's got to be something on this boarding pass... the airline doesn't tell you that your boarding pass is a security risk and should be private, so people post them all the time. What this guy learned was that the booking ID... it was a Qantas flight, and he saw the booking ID was on the boarding pass. What's interesting is that he found, I think it was the Australian prime minister, had posted a picture of one of his boarding passes somewhere he was going. So he took the booking ID from the Australian prime minister, took it to the Qantas website, and it turns out all you needed was the booking ID and a bit of personal information, like your name and where you were from, which is obviously all public record for a prime minister, and he just logged right into Qantas as the prime minister of Australia. Of course, at that point the flight had already happened, but then he wondered what else was there, and he did "view source" on the logged-in Qantas page, and in the source of the page there was a JSON field which included all the info about the account holder, including passport number, phone number, etc. The podcast episode is really great; maybe I'll link that in the show notes too. Who would have thought that posting a picture of a boarding pass, which the airline doesn't tell you is a security risk, would be a problem? But obviously there was a security risk there, and a privacy concern, because there's passport information and such. Sometimes the companies don't even understand how people might put this data together, which I guess influences the scope of the concern here, and why you really want to consider both data minimization and data de-identification, at least in many cases.

Yeah, you really raise the point about the burden being on us as data scientists of good will and good ethics, because the general public doesn't understand a lot of these things. With any of these documents... the whole purpose of a boarding pass is to identify you as the rightful user of that airline seat and to admit you to the plane, so by definition it's an ID thing, and anything that serves an identification purpose should be treated pretty carefully. It's hard for the public today, not only in the context of how data can be used in an AI context, but in the broader world: there are so many opportunities for data leakage that affect us in that personal way. I have probably gotten more insight into that than most people because of two things: I'm in this world that we're talking about, AI/ML and data science, but I'm also in the defense industry, and we go through classes about how to protect yourself, for obvious reasons, with nefarious folks out there. So it really does raise the need for the data science and AI/ML community to step up to meet those needs, because you can abuse it, and you can probably get away with what you want to get away with in many cases, but that hurts us all in the long run. It causes harm not only to others but to ourselves in this industry. So it's definitely something to be thinking about in every possible part of your life that has any form of identification associated with it.

Yeah, there's a big concern here, but there is a lot of good thinking and tooling around this de-identification side of things as well. In the Immuta article, they talk about a scenario where, assuming that, as you mentioned, we as practitioners want to be responsible with the data we're processing and the way we're handling it, let's say we couldn't do, or didn't do, data minimization: we have data, and we need to use it for a specific purpose, but we're also concerned, maybe it's text fields, that there are names or phone numbers, account numbers, these sorts of things. Maybe it's structured data, or maybe it's just raw data and we don't exactly know. There are de-identifying methods out there. This is a lot easier, in the language space, if you're using English, for example; you have an advantage because you could take a named entity recognition model, figure out where the names are, and replace the names with pseudonyms. Your AI model probably doesn't even care what the name is, as long as it's a name, so you can use pseudonyms, or fake phone numbers, and that sort of thing, or hash certain fields or obfuscate them in certain ways. That's using a "replace" type of method for these fields. There's Python tooling I've used... I forget what the update is on the best one to use; we've used one called scrubadub, and I think there are Python libraries to find these things and identify them or replace them. The Immuta article emphasizes this type of masking, or pseudonyms, and that sort of thing. And again, it probably depends on the data type: if you've got an image with people's faces in it, that's maybe a different scenario than if you have a text field with a name in it, where you can replace the name. It's maybe more difficult
to replace a face. I mean, there are ways now... maybe this is another positive use of the deepfake sorts of methods: you can replace faces in images and that sort of thing. But there are certain methodologies, like facial recognition, which by their very nature are identifying methodologies, right? The whole point of facial recognition is to identify someone. So there's probably a range of scenarios as well: if I'm just trying to predict a marketing campaign or something like that, maybe the obfuscation and masking methods are really relevant; but if I'm actually trying to identify a face for a security reason in my building or something, I am actually trying to identify someone, and that probably brings up other issues of how you log and store that identification, which we can talk about. Yeah, it gets complicated in that way.

And that, kind of going back and building on your last point a little bit, goes back to the use case. It goes back to who is using that data. Is the government of whatever country you happen to fall under looking for facial recognition, or is this your Nest doorbell, where you've made an accommodation? It's pretty crucial, and it's pretty hard. From an identification standpoint, I think your airline example a few minutes ago was really pertinent, in that it's very easy for a user who may be making a choice about offering their data to misunderstand. They may look at the data that they're giving up and go, "this is okay, this isn't too much", but if the model creator is combining the data that they've chosen to give up with other data, a lot of privacy can be compromised by combining different data types together. It may not be part of that initial exchange; it may be something that's already available, or from another source. So it gets challenging.

The challenge that you brought up, Chris, around the expectations users have of how their data is going to be used or combined with other things, is a really difficult one that can get really complicated. I'm thinking of even my own scenario: we've had discussions before because maybe we've got a recording from someone across the world, some language recording in our archives, and they gave permission for that data to be collected and stored in the archive for language documentation purposes or something like that. Maybe we no longer have access to that person, so we can't get their explicit permission to use it in any other way, even though we know it would be useful to add to an AI dataset. We're talking about that all the time internally, and our team's view is that the time when the data is collected is a very crucial time to help the company express to the user how their data is going to be used, and to have the user understand and have agency over that. But that brings up the additional point that you could give them a long terms-and-conditions list that no one's going to read. Is that really giving them control over how their data is being used? Any reasonable person would assume they're not going to read through all of that.

Everyone will assume they're not going to read it... except the lawyers involved, of course.

But the lawyers, yeah... the lawyers are assuming they've read every word.

I mean, you raise a great point. I confess, and I probably shouldn't do it in such a public way, that I have agreed to many terms and conditions where I have not read the full verbiage. There might have been more than a few where I didn't read any of the verbiage. So we are often making these choices of convenience that may have some fairly long-term repercussions, as you're pointing out.

The other major category within data de-identification that Immuta brings up, and many other places do as well, including that Google responsible AI practices page, is something having to do with randomization and differential privacy. A case of this that we've been talking about internally: if we have a device in the field, and we're gathering text, audio, or video, one choice for us would be to send all of that audio back to a central location, store it in S3, and do a bunch of things with it. That's probably the worst-case scenario, because now we've just got recordings of audio from some random place, and maybe people don't know... hopefully they knew that they were being recorded and understood what was happening, but still, that's a very hard situation, because you've actually got the raw data and it's sent to a central location. One thing in that scenario that is a best practice is to do any of that processing at the edge if you can. If you can push your models out to the edge... let's say I'm doing transcription of the audio, and then I'm detecting something about what is said in it. Maybe I don't even want to send the transcript back; I just want to send metadata back, like "hey, I did a transcription", and I'm not sending the audio or the transcription. That's a much better scenario, because the audio is staying on the device, the model was run at the edge, and the only thing you're sending back is metadata. Of course, that's still probably a tricky situation, because you know that maybe something was said at a certain time, at a location, from a device, which brings up this randomization piece. So the other thing you can do is take those messages that you would send back to the central location and randomize their
timestamps, or their ordering, or that sort of thing, so that, for example, if someone said something that had political implications at a certain time and location, whoever had access to that central source of data couldn't really tie it back to a certain location at a certain time, and maybe identify the person who said it and persecute them for saying it. So this sort of randomization comes in, and it can be taken as far as this idea of differential privacy, which offers a mathematical guarantee around privacy and the masking of direct identifiers. That's come up also with federated learning. So the edge computing side of this comes in, actually to a lot of benefit for the privacy situation, if you're able to do things at the edge and the things that you're communicating over a network are randomized in some way, with some guarantee around privacy, and maybe you're just communicating metadata while the raw data stays at the edge. That of course makes the infrastructure a lot harder to deal with, but it's overall a better situation.

As you were saying that, I'm struck by the fact that it takes a good actor to be willing to do these things. By way of example, so many of the laws that we have, both here in the United States and in other countries, are not sufficient to enforce the things that we're talking about in this episode as good practices and ethical practices. I know that here where I'm at physically, in the state of Georgia, I can legally record a phone call where only one party of the call has to know it's being recorded, and that's me as the recorder. So I can record a phone call without the other person having any knowledge that the call is being recorded, and that data is available to me. It has their voice, and who knows what they say on the call, kind of going back to your point about political comments, and how I use that data is up to me. What I'm getting at is that as we build this ethical framework as good actors in the data science community, we really need to find ways of having these techniques acknowledged beyond our community, and integrated as best practices into a legal framework to help enforce them. Because I know I'm not going to do anything nefarious, and I know you're not going to do anything nefarious, but there are a few people out there who might do something that is nefarious, and it raises a fairly challenging enforcement or compliance concern in terms of implementing these techniques that are going to be necessary for us to be responsible with this data going forward.

Yeah, and I think that as a person who builds tools that various clients will use... if you're in that situation, if you're creating software products that might be used by a variety of organizations, I think it's your duty to take into account how you can ensure that your software product isn't going to be used for malicious purposes, rather than assuming, or writing into terms and conditions or a licensing agreement, that users agree not to do this. For example, in that scenario of communicating audio back to a central place: if you only make it possible for your software product to communicate metadata back to a cloud location, someone could hack it and maybe do something else, but at least you're making it much harder. Whereas if you make it so that there's an option to send the audio back as well, then you're in a whole other scenario where people could do all sorts of things with that. So understanding what might be possible, whether you think you are working with good actors or bad actors, is within the duties of us as practitioners, because even our managers or executives who are promoting the things that we build might not understand the implications of what could be done with what we're building. At some point we have to own that, and hope that over time the regulations and guidance we get from governing bodies or other places will catch up to where the technology is.

That's a great point you're making: do what you have the ability to do to police the set of circumstances that are out there. If you don't have a strong legal framework to fall back on that will protect your users in that capacity, then as a data scientist, being able to say "this is the software I'm going to give you, this is the capability that I will provide", and eliminating some of those cases that could be used nefariously, is really important. I would love to hear examples from our listeners about how they might be doing some of these things, and maybe have them share some of their ideas with us on some of our social media outlets for the show, because this is important. Our community is leading the way in how to affect privacy with all of these new technologies coming through, and all of the capabilities that AI has surged forward on in the last few years. We're the vanguard; we're the tip of the spear.

Yeah, I totally agree. Well, in the last few minutes here, it may be worth quickly mentioning a learning resource and a couple of things happening in the AI community. One interesting thing, Chris, that I wanted to mention before we close out is that you can now run this Stable Diffusion model in a Hugging Face Space; it's one of these recent text-to-image models that does really amazing things when you put in a variety of text. In our Slack channel I sent you a message; I put in "two cool guys recording an AI podcast", and maybe I can post
this in our show notes or something. They don't look like Chris and me, but they're at least cooler than we are; in a couple of the photos they're wearing sunglasses. Maybe we should consider that.

I do notice that there's a trend where, in at least three of them, one of the guys is bald and the other one is not.

Well, I have very short hair; maybe I should shave my head, so we'd have one bald guy and another... yeah. Also, there's some interesting text going on: "co co". I don't even know what that means. Anyway, very interesting: AI-generated "two cool guys recording an AI podcast". They have other examples too, like "a small cabin on top of a snowy mountain in the style of Disney, ArtStation" and "an insect robot preparing a delicious meal". So, something to play with: if you don't have access to the OpenAI DALL·E 2 model yet and you're on the waitlist, wait no longer; you can use Stable Diffusion on Hugging Face and have some fun there.

I also saw a pretty cool release from spaCy, from the NLP world; they've been on the podcast before. They released floret, which is an extended version of fastText that uses Bloom embeddings. (Bloom embeddings are a hashing technique for compact embedding tables; not to be confused with BLOOM, the big collaborative language model that came out recently.) spaCy has implemented this combination of fastText embeddings and Bloom embeddings, which is efficient and built right into the spaCy ecosystem, and I'm excited to try those things out. This will allow you to compare words to other words and see their similarities, but also to build models on top of these embeddings, which could allow you to do things like text classification or named entity recognition. These embeddings are really the building blocks of modern NLP, and it's interesting that floret combines both word and subword embeddings, which can handle things like misspellings or rare occurrences of words. So it's a really cool effort from spaCy to make this cutting-edge NLP building block an easy-to-use piece of their really user-friendly packaging.

The last thing that I saw, which is more of a learning resource, is an upcoming Intel workshop on FPGAs, which seems pretty interesting to me; I'll link that in the show notes. I don't know anything about FPGAs, but I hear them mentioned occasionally, so maybe I'll join the workshop and find out more.

Sounds good. It looks interesting, and I will say, without jumping into detail, there are a lot of really cool things happening in that space with hardware and processors and single-board computers right now, and a lot of new AI capabilities are coming out from various companies. So I would imagine this workshop is a pretty cool place to go.

Cool. Well, thanks, Chris, it's been a fun discussion. Let's think more about our privacy, and I promise I won't install a camera in the cockpit of your plane.

Thank goodness. Oh boy. Thanks, Daniel. On that note... all right, see you, Chris. Talk to you later.

All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one.
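The "replace" style of de-identification described in the episode above (detect identifiers, swap in typed placeholders or pseudonyms) can be sketched in a few lines of Python. This is a minimal illustration with hand-rolled regular expressions, not the API of scrubadub or any other library; real tools add NER models for names and many more detector types.

```python
import re

# Illustrative patterns only: production de-identification tools use NER
# models and far more robust detectors (names, addresses, serial numbers...).
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(text: str) -> str:
    """Replace each matched identifier with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub("{{%s}}" % label, text)
    return text

print(deidentify("Call Dan at 555-867-5309 or mail dan@example.com"))
# → Call Dan at {{PHONE}} or mail {{EMAIL}}
```

Note that the name "Dan" survives untouched, which is exactly why the episode points to named entity recognition for name pseudonymization: names don't follow a regular pattern.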
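The randomization idea that the episode takes as far as differential privacy can be made concrete with the classic Laplace mechanism: a counting query has sensitivity 1 (one person joining or leaving changes the count by at most 1), so adding Laplace noise with scale 1/ε yields an ε-differentially-private count. A minimal sketch, using a standard inverse-CDF sampler rather than any particular library:

```python
import math
import random

def laplace_sample(scale: float, rng: random.Random) -> float:
    """Draw from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    Counting queries have sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices; smaller epsilon means more noise.
    """
    return true_count + laplace_sample(1.0 / epsilon, rng)

rng = random.Random(0)
print(dp_count(42, epsilon=0.5, rng=rng))
```

The noisy value is still centered on the true count, so aggregate statistics remain useful while any single individual's contribution is masked; that is the mathematical guarantee the episode alludes to.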
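The word-plus-subword embedding idea behind floret, mentioned at the end of the episode, hashes character n-grams into a fixed-size table ("Bloom embeddings"), so a misspelling shares most of its n-grams, and hence most of its vector, with the correct form. The hashing scheme, dimensions, and bucket count below are illustrative assumptions for a self-contained sketch, not floret's actual implementation:

```python
import math
import zlib

DIM = 32        # embedding width (illustrative)
BUCKETS = 1000  # hash-table rows; collisions are the Bloom-embedding trade-off

def bucket_vector(bucket: int) -> list:
    # Deterministic pseudo-random vector for a hash bucket, values in [-1, 1).
    return [(zlib.crc32(f"{bucket}:{i}".encode()) % 2000) / 1000.0 - 1.0
            for i in range(DIM)]

def embed(word: str, n: int = 3) -> list:
    """Sum the hashed vectors of a word's character n-grams."""
    padded = f"<{word}>"  # boundary markers, as in fastText
    vec = [0.0] * DIM
    for i in range(len(padded) - n + 1):
        gram = padded[i:i + n]
        row = bucket_vector(zlib.crc32(gram.encode()) % BUCKETS)
        vec = [v + r for v, r in zip(vec, row)]
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# A misspelling shares most n-grams with the original, so it typically
# lands much closer than an unrelated word.
print(cosine(embed("privacy"), embed("privcy")))
print(cosine(embed("privacy"), embed("table")))
```

In a trained model the bucket vectors are learned rather than pseudo-random, but the structural point is the same: subword hashing gives every string a vector, including out-of-vocabulary words and typos, with a memory footprint bounded by the bucket count.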
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | Practical, positive uses for deep fakes | Differentiating between what is real versus what is fake on the internet can be challenging. Historically, AI deepfakes have only added to the confusion and chaos, but when labeled and intended for good, deepfakes can be extremely helpful. But with all of the misinformation surrounding deepfakes, it can be hard to see the benefits they bring. Lior Hakim, CTO at Hour One, joins Chris and Daniel to shed some light on the practical uses of deepfakes. He addresses the AI technology behind deepfakes, how to make positive use of deep fakes such as breaking down communications barriers, and shares how Hour One specializes in the development of virtual humans for use in professional video communications.
Leave us a comment (https://changelog.com/practicalai/190/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Lior Hakim – LinkedIn (https://www.linkedin.com/in/hakiml) , Website (https://www.liorhakim.com)
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Hour One (https://hourone.ai)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-190.md) | 9 | 0 | 0 | I think this is really an exciting time, where creativity is basically unleashed, both in synthetic media, human faces, a person that does not exist, and also prompt-generated images and styles. All of these kinds of things are coming together, and people are getting used to consuming content that is automatically generated, and creativity is basically changing. People might be frightened, and I think we will use it for good and for misuses, and we'll adjust as we go forward, but technology is moving at a fast pace. So of course, it's a whole spectrum, and we're exploring this spectrum, and I think it's a very interesting place to be. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too. Learn more at fly.io. [Music] Welcome everyone to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? I am doing okay, although I have to confess that a few days ago I took a fall while rollerblading, so I'm on hydrocodone for the pain, so I could say absolutely anything. I'm really interested in this conversation even more now. One thing I'll finish by saying is that I discovered that when it comes to rollerblading, I am a deepfake. I am not a talented rollerblader; I'll leave it at that. Well, speaking of those deepfakes, we're going to dive into a much deeper conversation around this topic that we've mentioned. So Chris, we've talked about deepfakes a few different times, and I think even people outside the AI world are aware of this technology and some of the things that have been created. But today we're really privileged to have with us Lior Hakim from Hour One, who is the CTO at Hour One, and he's going to be talking to us all about this technology and what they're doing with it. Welcome, Lior. Thank you, thank you for having me. It's a pleasure, and we can dive right in and speak about AI and deepfakes at large and what we are doing. Yeah, for sure. Maybe just starting with that: when someone asks you what a deepfake is, as you mentioned in a conversation, what is your explanation of what a deepfake is? Yes, so basically what we're trying to do is virtual humans, to humanize the connection and how we communicate with machines, because today
we are very used to text pages and then frozen video, and that is changing; interactions with machines in the future will be different than what we experience now. Deepfakes, or other technologies, synthetic media, or whatever names these are given, are basically a bridge point for how we interact, ingest information, and communicate with machines. So I think that's exciting. Yeah, and I guess some people are probably aware of things they've seen on the internet; I think there have been ones with Elon Musk, or other people, where someone has created a video of Elon Musk saying something, but it's a synthesized video. That's maybe what comes to my mind when I think of deepfakes. Like I was saying, many people even outside of the AI community understand that AI is able to create these very powerful and compelling videos, whether for misinformation or for good purposes. AI is starting to impact the content people are viewing on social media or wherever. How have you seen that develop, and from your perspective, what is the trend in how AI is influencing the creative side of what people are doing with video and other things? I think this is really an exciting time, where creativity is basically unleashed, both in synthetic media, human faces, a person that does not exist, and also prompt-generated images and styles. All of these kinds of things are coming together, and people are getting used to consuming content that is automatically generated, and creativity is basically changing. I think, of course, as with every technology that comes about, people might be frightened or not yet used to these types of things, and I think we will use it for good and for misuses, and we'll adjust as we go forward. But technology is moving at a fast pace, and later on we will
talk about what we are doing, but I think a general thing to consider is maybe personas in the context of deepfakes. We have those personas, and I don't fully subscribe to the notion of fake and real, because me in real life and me on social media and other places might be different; I might present different things. So I think it's a whole spectrum of uses and misuses, and definitely some things can be unusual. Sometimes I might like someone else to be able to use my persona in some cases, like people in commercials, or lending my image to a character in a movie that I'm playing, having the writer's text spoken through it and having the director direct me. In other cases I might be more frightened if something happens that I didn't mean to happen, like seeing myself saying something I don't know. So of course it's a whole spectrum, and we're exploring this spectrum, and I think it's a very interesting place to be. You know, the way you started that explanation was fascinating to me, because I think so many people are introduced to the topic of deepfakes by some of the nefarious things that get popularized in media reports and news reports, but you talked about it in terms of how we're interacting with computers, that user experience, to some degree. I found that really interesting, because though I'm in the defense industry now, I spent over a decade in the creative digital marketing space, where we were all about personas, and I don't think I had really adjusted my way of thinking about deepfakes to focus on the interactions versus some of these other ways we've seen, which tend to be more on the negative side. So could you talk a little bit about that for a moment? Because you've kind of reset my perspective. How can deepfakes take us forward in the years to come in
terms of how those interactions with automated systems play out? What's different, and what should we expect as the normal as we look at some of the possibilities over the next few years? So I think your likeness, the way you look, the way you sound, the tone of your voice, your style, your gestures, everything is kind of like your set of skills that you regularly put to work for other people. You go to work, you give your content, and those types of traits you will basically be able to digitize and then put to use in the context that you feel is suitable for what you want. For example, you will be able to capture yourself, digitize yourself like an avatar; think about Bitmoji, where you directly design the avatar like yourself and then put it to use, not as yourself. The same thing can happen with real-life captured video, or with versions of yourself, filters that make you more attractive or happier. If you don't want to put on makeup every day, you can just hop into a meeting or a sales presentation looking like your best self from your home, with all the work from home and all those kinds of things. I'm just giving a few examples so as not to be too abstract. Well, thank you; I have a whole face-for-radio thing going here. So yeah, that sounds fantastic: we're sitting now, casual, and people could see us talking in a nice studio, not just listening to audio, and enjoy everything without the need for us to cut our hair and shave and do whatever we do to present ourselves. So our likeness, our tone of voice, how we present ourselves, our persona, how we perceive ourselves, can basically be digitized and then put to use. We can of course control it ourselves and have maximum control over the content that we deliver through our digitized character, and we can also lend those characters to
other people to put content through us, if we think we can affiliate ourselves with that content, if we trust those people. So it's about creating trust, creating channels for people and communities, and then putting them to use. That's one side, the creator: I own my likeness, I own what I say and how I'm perceived. The other side we are considering is, of course, the audience. The audience might want to see me in this podcast, or might want to see someone else giving this podcast with another voice that they like, with another face that fits them, or even translated into another language. All of those traits, all of those modalities, will be available in the future. You can consume the content at your pace, in your language, with what we call pleasant interactions. So by digitizing people's likenesses and the way we communicate with each other, content will be able to be delivered through machines. I am picking up on a trend that you've been kind of alluding to, which is that where this technology has maybe been misapplied is where it's not accessible to a wider audience, and there's a concentration of who can use it and who can't. But as you make tooling like the tooling you're developing, and give people who don't have the technical skills to spin up a GPU instance and run TensorFlow in a distributed way across a cluster and all this stuff, as soon as you give a wider audience the ability to create with these tools, they create their own persona and they have control over that. But if they're not able to access that technology, then there's an imbalance of who can use it and who can't, which might produce some misuse. I'm thinking even of audiobooks. Maybe this predates
the deepfake scenario, but for a long time with audiobooks, or with things I was listening to on my phone, I could switch a voice. Google Maps is a good example of this: I can change the voice that's going to talk to me from Google Maps, and that's a preference thing on my end. I'm sure there's some complicated technology behind it, but the control decision about what is pleasant to me, like you were saying, is being made by me and not by someone else. So how do you view the shift between what changes when the technology gets out of the hands of people like us talking here, who maybe know how to spin up a notebook and train a model, and into the hands of those who have no tech skills but to whom the technology is appealing? On the creative side, what changes when we get this into the hands of that kind of audience? I think it's an incredible shift in creativity, in the ability of people to communicate their ideas and basically to manifest what they know and what they think, whether through text, through prompts, with reference images, everything that is happening. I think it's something that is happening across our industry and the state of the art, not only with synthetic media and virtual humans but also with image generation and prompt-invoked image generation. I think everything is very restricted now, because the authors of these technologies know that there is risk, and they're a little bit afraid of what might happen, and they don't know. So it's growing, and it's opening up to the community. Of course, I can't avoid mentioning Stable Diffusion and everything happening there; I think it's super interesting that it's going to be open. And in the end, what you said about the audiobooks is super interesting, that people can listen. I think we're coming from the angle that not only can you choose the
voice, but you can also subscribe to have your voice read whatever books you want or are willing to read, and then Chris might listen to those books in your voice, and you might be rewarded in some way for this. So this whole marketplace of the skills and traits that we have is basically one of the things being built. And I think, generally, technology is adapting, and we find good uses and misuses, as I said before; I think it's the same thing we're experiencing with social media, with groups, moderation, and things like this. So it's gradually expanding into our culture, changing and enriching our culture, and I think the future will be exciting. [Music] Lior, I think there are a ton of things to explore on this topic. Before we get too far, I would love to give people an intuition for what is possible with this technology, and how. The scenario I have in my mind is: let's say sometimes Chris is out of town and I want to record a podcast with Chris, but he's not present. So I want to work with Chris to create a virtual Chris, and then when Chris is gone, I can just type replies to myself and talk to virtual Chris back and forth, and let's say Chris has given me permission to do this. You have very low aspirations, Daniel, I just have to say. Could you explain, Lior, what technology enables that? From a technical side, what needs to be put in place to materialize that scenario? Yes. So we have written language, which is very easy, and we have an easy way of inputting language into machines. Then we can take that and transform it into voice. On capturing the voice, there is voice cloning; a lot is happening in this field, with many great companies and open-source projects making this happen. With a few voice samples, we can have a text-to-speech engine, which basically creates
the audio of Chris in that scenario. Then with other systems, we can basically take the audio, in whatever language was generated by the text-to-speech, and create that speech on the image of Chris speaking it, in real time if we have a vlog. The field is developing, but we can basically create types of looks for Chris, and we can create more emotion or sentiment in his speech. I see it as though we are in the early days, but basically on our platform, and platforms like ours, people can just jump in, write the text, choose characters, choose voices, choose languages, hit the create button, and then, as you said, invoke a few GPUs in the cloud, and within minutes, if not seconds, you get a video or a stream of that actual experience happening. I think it's amazing. And for the audience: we can see each other, even though you're only hearing the audio, and he saw the look on my face. So the question I was wanting to ask is this. I love the picture you're painting of what's possible going forward. To get there, going back to the ideas of trust and authenticity, you have to get people to see the positive in that, because it's very easy to see the scary sides of that creativity in most people's minds, since that's been their first exposure to the field. But you've shown us that we can really take advantage of creativity and optimize situations. So how do you bring people along on that, so that they understand it's something worth engaging in? I'll give you a brief analogy that's not directly in this field, but it's close. In the field that I'm in now, I know that we are moving into an age where automation can fly airplanes much better and much more safely, and Daniel and I were just talking about this in our last episode, than human pilots will be able to, and I say that as a
human pilot myself. But getting people to trust their lives, quite literally, to that, and in the context of deepfakes, to trust that they can take that step and be part of that creative process and enrich their lives and see all the benefits, it seems like it will be a challenge to bring people fully along the path in a broad sense, not just for specific use cases. Any thoughts on how we navigate that culturally, as humans together? What we are trying to do is create positive use cases and just let people see the positive uses of such things. Actually, in our company we see a lot of people who want to become characters and have their own character, and we communicate at work with our own characters while sending Slack messages or videos and things like that, having ourselves on the platform and being able to create that. Aside from that, we have a lot of people asking, when can I be my character? Those are early adopters, and I think as things play out, and those use cases are out there, and people see other people appear in content and see that it's safe, put to good use, and rewarded, the general positivity is something that is built gradually, and the trust is built gradually. I think it's a good start that we've seen the misuses before, so we know what to watch out for, and going forward we can start to see better and better uses of the technology. It started at the top with Obama or Tom Cruise or Trump, or whatever examples frightened people, and then we can just build from there with the good uses. People really are already in a place where they might be frightened, but once they see positive uses, I think they might want to subscribe to this notion and join this character economy, or virtual human economy, that is being built. I like the idea
of thinking about this like an economy. I was thinking of my own scenario: I've recorded videos for different trainings around AI and technical subjects in the past, which I love doing, but it is a lot of work to get into an environment with the right lighting, arrange a person to record it, and then produce the video and all that. There have been a lot of times where I'll create what I think is a cool tutorial, but I'm sitting in an airport somewhere, just on my laptop, and I can't record a nice video for it in an engaging way for an audience. If I had this sort of virtual version of myself, I could see myself typing out the text associated with that tutorial and pairing an engaging video with a screencast or something, which is much easier to produce on my laptop in an airport. But then, you brought up the idea of the economy, and it does seem like there is an incentive for creators to do this. What if I had a group of people, where there was a brand around the content I was creating, but there were other people who had great tutorials and wanted to submit them to my trainings and put my face in front of them, and I liked their content, so I was getting more good content? Maybe I'd incentivize them financially with part of the trainings that people subscribe to on my platform, or whatever it is. So there's this flow, this nice exchange of value between the two, as long as I appreciate the content they're producing and they understand that their face is not going to be on it, but maybe they're recognized in some way, and I'm going to recognize them in some way, but my face is going to be on it, and that's part of the brand. So all of
this thinking is sort of flowing through my brain. Have we seen those sorts of creators finding these new ways to incentivize this yet, or would you say we're still in a stage where people are exploring potential usage? Yeah, I think people, from what we see, are in need of video content. They want rich content for their audience, and they're looking for ways to produce this content in an easy way. I think a lot of people don't know about this technology, or that it's even possible, just typing in your text, your narrative, building up the scenes. Basically, they know they can make a PowerPoint, but they don't know they can make a rich video with a character in an environment: just click a button and have a video play, and put it there, embed it, share it, upload it, whatever. So they're not there yet. I just wanted to say one thing, because you made me think about something with your tutorial example. What you're able to do is record your first tutorial, get it into the system, transcribed into text, and then you can keep it updated and change characters. The other thing is that not only would some people like to consume this content with your face, some other people might want to consume the same content at a different time, or in a different language, with another character or another presenter. Those are the things we are dealing with. So to extend that idea out: would you predict that the entertainment industry, actors and actresses and musicians and such, will be out there offering their brands as a form of interfacing with an audience? So you might have, and I'm obviously making this up, you might want to have, like the movie Grease from way, way back, young John Travolta and Olivia Newton-John teaching. I'm telling you, this is
pretty cool, I like this idea. I know it sounds a little silly, but bear with me: you're able to select something that has appeal to you, and you could then put content out there in that context, and you could actually have brands extended into user-generated content, where you have deepfake brands supporting that. I'm being a little bit silly for fun, but I'm not being too silly. Is that what you see in terms of this economy going forward? Yeah, I think, like everything with economies, there will be price fluctuations, and of course famous people, A-listers and B-listers and everyone else, will take part in this economy once it's grown, and once we've grown the trust and the ability to control where you are appearing, what you are appearing in, and at what price or reward. I think it's already happening today: celebrities, Hollywood A-listers and such, are advertising things in different countries that they might not advertise in their own countries, because it's another language and so on. I think this technology can help them expand that reach and control where I'm appearing, at what prices, when I'm appearing, and what content I'm delivering. If we can build the structures to make these transactions flow, we can definitely make it work. I think I would participate. I'm now giving my content and my voice to this podcast, of course, and I might be able to participate with my voice in other places, and give the content that I want to give to other places, not necessarily with my voice or my appearance. Then the modalities of the content and the dimensions of the content will basically just be transactions, and will be assembled to be consumed by the viewership in the best manner. This is that pleasant interaction that we mentioned before:
everything will be more programmatic, and will be consumed at the right place, at the right time, with the right deliverer of the right content. This is what we think about. [Music] So Lior, we've talked a lot about the technology in general. We've talked a lot about text and natural language processing on the podcast, and also recently about speech-to-text and text-to-speech; I know we had Josh Meyer on from Coqui, and they have great tooling around that. But this element of, once you have a synthesized voice, pairing it with an avatar that has mouth movements matching up with the voice, that's something we haven't really explored from the technical side. Could you catch us up on the state-of-the-art models related to that sort of interaction, and what sort of data you need to have available to successfully do that sort of operation? So we're gathering video data of people speaking in different languages, different people with different appearances, at different angles, and so on. Then we label the data with landmarks and with resolution, we align the data, and we prepare everything else. Then what we basically do is create a bridge, a latent space, that can encode the audio and decode the face, so we can reconnect the audio on the back end. Of course, we're mainly exploring different approaches, and in our field the main interesting thing is video and its stability, temporal stability, things that are not required in other fields, for example image generation, which we now see on different platforms with DALL-E and Stable Diffusion and others. You have a seed, and the seed creates the generation, and then morphing between seeds is not always smooth. So we are dealing with a lot of temporal stability and correctness
of the expression. Interesting. It's very interesting to me how audio is represented within AI models, often more like an image than anything else, in terms of spectrograms and that sort of thing. But when I think of audio and language and video, you're going to have a certain sampling rate for the video and a certain sampling rate for the audio; those are likely going to be different, the dimensionality of those things is going to be different, and maybe even different between different samples. So I was wondering, generally, as you've dug into this space, what are some of the data challenges that you've experienced working with audio and video data in the AI space? And for those out there digging into these newer models that process audio or video or both, what recommendations could you make in terms of the challenges you faced? Yeah, so to answer the question, the biggest challenge with data, as I see it, is data pipelines. Once the data is captured, which is usually kind of easy, it's having the infrastructure to basically normalize, clean, align, and label this data and bring it into the GPU. I think the infrastructure for doing that and updating it is super important for us. Aside from that, I think labeling and cleaning the data is of the utmost importance. Challenges that we might face are, for example, audio noises and things like that, or a different person speaking who is not the person in the video being captured or aligned. Those are the kinds of things we find interesting. But as a general suggestion for all the listeners: think about the end-to-end pipeline of how to acquire the data and then process it, all the way from the camera, or whatever you're gathering from a link on the internet, until it gets to the GPU through the data loader. We see it as one big challenge, with trade-offs along the way. Yeah, and I guess it's likely that, in terms of supporting the creation of an avatar for a specific person, for an actor in a studio, you might have a lot of control over that and be able to closely couple it, but as soon as you're passing things over the internet, I'm sure there's degradation and all sorts of things you could come across. So yeah, that's super interesting. I'm wondering: we've talked a lot about digital humans or virtual humans and this avatar creation, and you've given a few hints of what's available and what you've built right now. Could you summarize the state of what you've built, and then maybe a couple of things you're excited about looking into the next couple of years, what you think is possible with the sort of features that might be added? Yeah, first of all, we have our SaaS offering live in production. You can register, you can try the system, there's a free trial, you can create videos, you can select an avatar, you can check out the technology with voices and everything. We have a subscription model, and you can continue and make videos on the go whenever you need them. Our focus is business use. We really think the world of work is a huge opportunity to create trust, and we believe that future generations are used to consuming social media and social video and such, and are expecting the world of work to change from text to be more rich, more engaging, more pleasant, and interactive. This is where we're going, this is what we're building. You can
definitely sign up and check us out. And can you repeat the second part of the question? Yeah, I was wondering about looking to the future, and maybe Chris, you had something as well, but there's this sort of text-to-avatar creation and the variability and creativity you can do with that. What are some of those things on your mind? You're not committing to anything by any means, but what are the things where you think, if we could enable this in the product, it would level it up a lot? Yeah, definitely. So I won't expose everything that we're going to launch soon, but I'll just say exciting things are on the way. We are super excited about prompt accessibility and image generation from prompts, enabling people to add media to those videos by the text or by the narrative, recognizing the narrative and making prompts more accessible, and prompt engineering, which is a big thing now in the industry: how you create the imagery that accompanies your narrative and creates a compelling video in a rich environment. That's one thing we're very focused on. I think environments in general, 3D environments and rich videos and other things that create a whole experience, basically like watching TV; I think those experiences will get closer. And we're super excited about more geeky stuff, like textual inversion, if you're familiar with that: embedding reference words into the prompt and then creating using those objects, or basically translating your likeness to another domain, for example someone that looks like me, just with hair; you're not seeing me on the podcast, but I'm bald. All those kinds of things, and the ability to change to something that looks like you but has some of your traits. Yeah, or someone who looks like me who is able to rollerblade successfully. Well, exactly. I wonder too,
and I don't know if this is some of what you're getting at, but I could also see how some of that could be used where you have an avatar and the voice, but you could also bring in, say, if you're talking about a car driving down a street and then this happens, sort of generating almost b-roll for your video. That would seem quite interesting to me, because there's so much of that video out there, and in the same way a lot of this text-to-image stuff is happening, it seems like you could generate some really compelling transition shots, or whatever that might be. I'm wondering, as we get closer to the end here, one of the questions our listeners might ask, of an expert in the field who is working on this every day: as you mentioned, this technology is only going to get more compelling, TV quality, high resolution, or, as some would say in the language space, very coherent output. How would you recommend people think about this? It seems like we're getting into a space where I can't tell what's fake and what's real. What would you tell people as they navigate the world and look at social media and all that? Is it even relevant anymore to think about telling the difference between those two, or how would you recommend people think about that aspect of the cultural shift this technology is causing? I don't know how philosophical to get with this question, but a lot of the discussion we are having, of course, Photoshop retouching and how people appear on social media and filters and all that stuff, is a buildup to this discussion. But we're thinking about AI at large, and we like to think not only about what we are teaching AI and what it is learning; we think
about what AI is teaching us in some sense. And then we think about what it is learning, and the bias in the models is actually a reflection of the culture. So we think basically it's trying to show us, or to teach us in some sense, what we are, and then we can choose and build our culture. It's a two-way communication from these new technologies to our culture, and I think it will definitely be exciting, and as a culture we will decide where we moderate it in some sense. Yeah, that's really, really good input. I think this is one of those conversations where the possibilities seem many, and there's definitely going to be some things that, like you say, cultures, governments, societies will have to wrestle with. But I think on the whole I'm very excited to dive into some of these things. I'm really excited to jump over and create a few videos; I want to share a couple things in my own Slack channel and see what people's response is, and if they recognize that this is a generated video. But yeah, I really appreciate you taking the time, Lior, to join us. It's been an awesome conversation, and I'm looking forward to the amazing things that Hour One is coming out with, and hope to stay in contact and have you back on the show, both in real life or as a virtual human, however you prefer. Thank you for having me, it's been a pleasure talking to you guys. [Music] All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now;
we'll talk to you again on the next one. [Music] |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | CMU's AI pilot lands in the news 🗞 | Daniel and Chris cover the AI news of the day in this wide-ranging discussion. They start with Truss from Baseten while addressing how to categorize AI infrastructure and tools. Then they move on to transformers (again!), and somehow arrive at an AI pilot model from CMU that can navigate crowded airspace (much to Chris’s delight).
Leave us a comment (https://changelog.com/practicalai/189/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• Truss on GitHub (https://github.com/basetenlabs/truss)
• CMU: AI Pilot Can Navigate Crowded Airspace (https://www.cs.cmu.edu/news/2022/ai-pilot)
• Embarrassingly Parallel Training of Expert Language Models (https://arxiv.org/abs/2208.03306)
• Toucan: Learn español without even trying (https://jointoucan.com)
• 3D Vision with Transformers: A Survey (https://arxiv.org/pdf/2208.04309v1.pdf)
• Sphere: Natural Language Processing with Transformers (https://www.getsphere.com/ml-engineering/natural-language-processing-with-transformers?source=Instructor-Socials-LinkedIn-80822-announcement-post)
• SkyJack on GitHub (https://github.com/samyk/skyjack)
• NVIDIA AI Demos (https://www.nvidia.com/en-us/research/ai-demos)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-189.md) | 1 | 1 | 0 | I will say, kind of obliquely: in the military space we're doing autonomous stuff, as is reported in the general news all the time, in terms of aircraft, and I think that's fairly well understood. In the civilian space, though, if you imagine in the future being on an autonomous airliner: they say that this model can safely avoid collisions, predict the intent of other aircraft, track those aircraft and coordinate with those aircraft's actions, and communicate over the radio; it uses language processing to do that, and it has a vision system that uses six cameras. So it's a pretty cool problem to solve. I know people shudder when I say this, but I think it is not so far out that all of us will be getting on airliners that are almost entirely automated, and this is one of those big steps toward trying to do that. [Music] Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen (check them out at fastly.com) and to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io. [Music] Welcome to another Fully Connected episode of Practical AI. This is where Chris and I keep you fully connected with everything that's happening in the AI community. We'll discuss a few things that are in the AI news and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How you doing, Chris? Doing great, Daniel, how are you today? I am doing great. I don't know if our listeners listened a couple episodes ago: we had a bird family setting up camp on our deck at our apartment, having two eggs. One of those eggs unfortunately didn't make it, but one turned into a bird, and that bird flew the coop, and now they have decided to start another family on our porch, because apparently it's a great place to start a family of doves, I guess. So it's the coop, the great hatchery. Yeah, so I have another chance. I never got to get out my computer vision kit and cameras and monitoring last time around, so maybe I'll have another chance here, because I suspect that the same pattern is repeating. Gotcha, looking forward to that. We need to post pictures of it, you know, or put some video or something. Yeah, if anybody has any suggestions out there for alerting or monitoring based on activities of doves in a nest, let me know, and I can set up the Raspberry Pi and all that stuff on my deck. Excellent. Yeah, Chris, every once in a while we get to do one of these shows where we bring out an assortment of topics that have caught our attention over the
past couple weeks, and I think it's a good time to do that, because there continues to be a lot coming out related to infrastructure and new models and new products and all sorts of things. So yeah, a good time to do that. One of the first things I wanted to highlight, which came out from a company that was actually a guest on the podcast a while back, Baseten: they released a new open-source project called Truss. So if you go to GitHub under basetenlabs and then truss, you'll find this project, which they market as a seamless bridge from model development to model delivery, and an open-source standard for packaging models built in any framework for sharing and deployment in any environment, local or production. So what are your thoughts when you see this, Chris? What comes to mind? I love it. I think it's very much needed. I've been putting a lot of thought lately into the need to make all of this stuff that we talk about much easier for people to get into, and I think Truss is a fantastic way of kind of getting that going: moving from environments that are already in place, like Jupyter notebooks, out into production without having to go back to a web framework and do all that work and stuff. So that's good stuff. Yeah, and you know, they include emojis in their README, so it's friendly and accessible. Exactly. I've actually, since we talked with Baseten, used their product a little bit, and just from looking at this I'm assuming that they're sort of eating their own dog food, because some of the convenience that's built into their product (not all of it, but some of it) is released in this package, which is pretty cool. There's certainly a lot of frameworks out there to do model serving or deployment, or model registry sorts of things; some of them kind of assume that you have a bit of
infrastructure chops to start with, like figuring out how to run something on Kubernetes or something like that, and that's a big step for a lot of people. So I think this is really targeting: hey, you're running maybe something in a Jupyter notebook, how do I export this model and deliver it? I also think that some of these things around model deployment miss some key aspects. Truss talks about bundling secret management and API keys into their deployments, which I think is really, really important. It's not that difficult for people to figure out how to build, say, a Flask app with their model, but then figuring out testing and deployment and API keys and securely managing that API, that's a whole other ball game. Yeah, I'm really impressed that they've built a lot of this in; they've put a fantastic capability out in Truss in terms of being able to address that. It's something I've been thinking about. I have a slight sideline on this that is relevant, I think: I'm learning something new through beginner's eyes. You know, both you and I program in several languages, and we came together originally in the Go programming community, which is how we got to know each other, and right now, for a different thing, I'm learning Rust. So I've been diving into Rust, but it's forced me back into that beginner mindset, and I've brought that back into these other things that we talk about a lot lately. I've been looking at a lot of the AI and deep learning world through that beginner's mindset, and there's such a need; we're still leaving out a lot of people on these capabilities, and things like Truss are amazing solutions. I think we need others as well, just so that people with different skill levels and such can
can find it accessible. So Truss is one part of that solution, it looks like. Yeah, and just to give people, since you're listening, a sort of visual picture of what this might look like: if you're in Python and you create a model, you can import the truss package and then use this mk_truss command or method, point to the directory where your model or your code is, and that will serialize and package the model and freeze dependencies within a Docker image, all of which can be complicated in and of itself. Then you can call that, or deploy it, via a variety of ways, in clouds and in really simple ways to run things like ECS or GCP Cloud Run; of course you could run it in Baseten as well, on their own infrastructure. But because it's freezing all of these dependencies and your model package in a Docker image, you have the ability to run this all sorts of places, whether that's local or in these cloud solutions or in Baseten or wherever. So yeah, I think it's pretty cool, and an approachable way to get into this model delivery stuff for those that maybe are hitting that pain of "hey, I've got my model in a Jupyter notebook, but I don't know what to do next" sort of situation. So I have a random question for you. Sure. As we're looking at things like Truss, and recognizing that most of our community here is mainly Python-oriented for the development stuff, do you think that anytime soon we will start expanding some of the development, deployment and packaging tools into other languages? Do you think that's likely, or do you think we have a way to go before we get to something where we're starting to look at kind of a multi-language community, rather than the Jupyter focus we've had for so many years? Yeah, it's an interesting
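The packaging workflow described here, serializing a model and freezing its dependencies into a self-contained bundle, can be roughed out in plain Python. This is not the Truss API; the function name, file names, and the pinned requirement below are made up for illustration, and a real tool would also wrap the bundle in a Docker image and generate a serving entrypoint.

```python
import json
import pickle
import sys
from pathlib import Path

def package_model(model, bundle_dir, requirements):
    """Write a self-contained 'bundle': pickled model + frozen deps + metadata.

    Toy illustration of what packaging tools automate; `requirements` would
    normally come from `pip freeze` or a lockfile rather than a hand-written list.
    """
    bundle = Path(bundle_dir)
    bundle.mkdir(parents=True, exist_ok=True)
    # 1) Serialize the trained model artifact.
    with open(bundle / "model.pkl", "wb") as f:
        pickle.dump(model, f)
    # 2) Freeze dependencies so the serving environment matches training.
    (bundle / "requirements.txt").write_text("\n".join(requirements) + "\n")
    # 3) Record metadata a registry or CI system can key on.
    meta = {"python": sys.version.split()[0], "entrypoint": "model.pkl"}
    (bundle / "metadata.json").write_text(json.dumps(meta, indent=2))
    return bundle

# Example with a trivial "model" (any picklable object with .predict works):
class ThresholdModel:
    def predict(self, xs):
        return [1 if x > 0.5 else 0 for x in xs]

bundle = package_model(ThresholdModel(), "my_bundle", ["scikit-learn==1.3.0"])
print(sorted(p.name for p in bundle.iterdir()))
# prints ['metadata.json', 'model.pkl', 'requirements.txt']
```

The point of the bundle directory is that everything a serving environment needs travels together, which is what makes "run it local, in ECS, in Cloud Run, or in Baseten" possible.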
question. I think there are certain sets of tooling; I know that there's tooling now in Go where you can import Hugging Face Transformers models and such, so there's more interoperability. There are certainly a lot of ways to run models in various other languages, whether those be JavaScript or Go or Rust or other things, but that model development workflow, in my mind, still seems very much Python-focused. I don't see a lot of motion away from that. I do see certain trends happening: language things seem to be more focused on Python now in terms of the model development side, and maybe interoperability on the inference side. Are you seeing similar things? Yeah, that seems accurate to me as well. Going back to my Rust learning experience, as I'm having to delve out of something I know well and into something that puts me back into first grade, so to speak, I've been thinking about the fact that we're still leaving behind communities of people, and I'm really curious to see what other options some different organizations or companies, or just inventive individuals, come up with to let us be more inclusive with people that maybe don't traditionally have this space accessible to them. Yeah, just out of curiosity, Chris, what has your experience been like with Rust? Well, without taking us too far off the mainline of our topic area, just as a new thing: they take a different approach. It's one of those languages that always wins "most loved" when people are rating their languages and stuff, and I can see why, but it is a substantial learning curve, and it has made me very empathetic to people who are having to deal with other learning curves, such as this one that
we're talking about in general, because, unlike Go, which tends to be fairly small by design, kind of has one way of doing everything, and does some stuff for you that's pretty nice, Rust takes the opposite approach and gives you every possible option out there to optimize what you're doing, which is important for certain use cases, including one that I'm working on. So it's been interesting; it's put me back in the "I'm going to take a big thing and learn it new" mode, and it made me think about the fact that if you're not a Pythonista, super savvy in the juxtaposition of where Python intersects with the deep learning world in general, then we're still in a moment where this really huge, important field that's revolutionizing technology is quite inaccessible to a lot of people, I think. So it's just a reminder; that's why it's on the top of my mind today. [Music] Well, thinking about this model delivery side of things, it gets me thinking also about how we've seen an increasing number of things in the news and in conversations about MLOps and GitOps and CI/CD impacting the AI and ML world, and people thinking more about this operations side of things. I still encounter a lot of AI practitioners and data scientists who are really trying to get a grip on what CI/CD is and how that side of automation does, should, or could impact their development workflows. It might be worth talking for a second about CI/CD, what that exactly means, and how it might impact practitioners' workflows. What do you think? I totally agree with you, and we've actually talked a little bit about this on previous episodes: the general feeling that there has to be a convergence between CI/CD and MLOps, instead of them being thought of as separate subfields, if you will, because we're going into a future where they're not
two totally separate things that are always on their own tracks and their own infrastructures. Models are going to be in everything we do going forward, and so the idea of software and models being completely separate, with their own infrastructures, seems antiquated to me. I think there's a movement we're seeing right now where MLOps and CI/CD in general are starting to come together, as people realize that yes, I'm going to be deploying software, and yes, I'm going to be deploying models, and most often they will be happening together, at the same time, and therefore I must have something that works for all of the above. I think that's a bit of a challenge, if for no other reason than there are some cultural differences in how we approach things and what the priorities are; there are kind of two worlds smashing together, trying to find something that works for all. So Chris, I want to describe some of what I've been doing recently with this intersection of automation and CI/CD and machine learning models, and I'd love to get your critique of that, to help me know how I can do better, or maybe just your initial thoughts. A lot of what we've been doing recently (we're always tweaking this workflow) is: we have an MLOps solution, and we're really thinking of that MLOps solution as our experimentation and model training platform. This is where a lot of jobs will necessarily fail, because we're trying weird and crazy things, and we have queues of GPUs where we can queue up experiments and train new models or do pre-processing of new data sets. Eventually we get to a state where we figure out what we're doing, and we know there's a certain type of model that we are training successfully and would like to integrate into some system or service that
we support. So our MLOps solution provides, as output of these training jobs, a hash of a bundle: the model bundle output that we have trained. By hash I just mean a series of numbers and letters that points to a unique bundle that we've trained, so we can look up in our ML system that this model bundle, whatever the hash is, was trained on this date, with this model, and how the task went, and all that. It connects back to our git repo, too, where we have the training code, with the exact commit ID that trained that model. So we've got the code that trained it, the output of the model, and the hash of that model. But then, like the Truss project we were just talking about, that's not model delivery, right? In terms of connecting more to the CI/CD side: at that point our model is really just an artifact that is used in software, in various software functions. So we also usually have a separate GitHub repo, maybe for an API we're supporting or an application or something like that, the thing that's integrating our model. Then we use GitHub Actions (other people use Jenkins or Travis or something for CI/CD), which, for those of you that don't know, is integrated into GitHub and is a continuous integration / continuous delivery system similar to Travis or Jenkins. When we push a change into that repo, GitHub automatically runs a series of tests that we specify in GitHub Actions, so these are like unit tests for our Python code, and then deploys the updated version of the application; let's say it's an API, it deploys an updated version of the API. Now, what's interesting, where this connects with machine learning and the model bundle, is: if we update that API to use a new model, what I kind of recommend our team do, and we don't have it integrated everywhere because,
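The "hash of a bundle" idea, a short content-derived ID that uniquely points to one trained model bundle, can be sketched with the standard library. This is an illustrative stand-in, not the actual tooling Daniel's team uses; a SHA-256 over the bundle's files (in sorted order, so the result is deterministic) is one simple way to get such an ID, analogous to an abbreviated git commit SHA.

```python
import hashlib
from pathlib import Path

def bundle_hash(bundle_dir):
    """Deterministic content hash of a model bundle directory.

    Hash files in sorted order (relative path + bytes) so the same bundle
    always yields the same ID, regardless of filesystem ordering.
    """
    h = hashlib.sha256()
    root = Path(bundle_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            h.update(path.relative_to(root).as_posix().encode())
            h.update(path.read_bytes())
    return h.hexdigest()[:12]  # short ID, like an abbreviated git SHA

# Example: identical bundles hash the same; any byte change alters the ID.
a, b = Path("bundle_a"), Path("bundle_b")
for d in (a, b):
    d.mkdir(exist_ok=True)
    (d / "model.bin").write_bytes(b"\x00\x01weights")
    (d / "config.json").write_text('{"task": "sentiment"}')
print(bundle_hash(a) == bundle_hash(b))  # True
(b / "model.bin").write_bytes(b"\x00\x02weights")
print(bundle_hash(a) == bundle_hash(b))  # False
```

An ID like this is what lets the training record, the git commit, and the deployed artifact all point at exactly the same bundle.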
you know, we have limited time, but in an ideal scenario what we'd have is a sort of minimum functionality test for this updated model. So if it's a sentiment analysis model, I would have a series of tests, a table of tests, that would say: one sample is "this is a really great thing and it's so awesome", and that should be rated as positive sentiment always, regardless of what model I update; it should always get that right. That way, if I update a model and point my API to the new model bundle, in CI/CD it'll run that minimum functionality test against the functions that are calling my model. So if I accidentally point it to a really bad model that can't even pass minimum functionality, it fails the build, and it won't deploy with the new model. It's almost like a table-driven test that's used in APIs and such, except it's really a test against the functionality of the model versus the functionality of the actual application. So, have you seen it approached in other ways? I'm always curious to learn what people are doing in this respect. So I'm going to say nice things about what you're saying, and I'm going to do this despite the fact that you're my co-host on this; I would say this to anybody. I think you have the benefit of having been a software developer as long as you've been a data scientist, so you're able to see both sides of that, and that perspective is often absent. So what you've described: you've picked some specific technologies that you want to use to support CI/CD efforts, which are fine, and I think there's a whole bunch of different options there that are all more or less equally good, with some pros and cons to each one, as normal. But you've integrated it so that you're not only testing the software, but you're testing
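The minimum functionality test described above is essentially a table of (input, required output) pairs run against whatever model the API currently points to. A rough sketch follows, with a trivial keyword model standing in for the real sentiment model and `load_current_model` standing in for however the API resolves its model bundle; all names here are invented for illustration. In CI this would run as an ordinary unit test and fail the build if any row fails.

```python
# Table-driven minimum functionality test for a sentiment model.

class KeywordSentimentModel:
    """Stub model for illustration; the real thing would be a trained model
    loaded from the bundle the API is pointed at."""
    POSITIVE = {"great", "awesome", "love"}
    NEGATIVE = {"terrible", "awful", "hate"}

    def predict(self, text):
        words = {w.strip(".,!?") for w in text.lower().split()}
        if words & self.POSITIVE:
            return "positive"
        if words & self.NEGATIVE:
            return "negative"
        return "neutral"

def load_current_model():
    return KeywordSentimentModel()

# The test table: any candidate model must get these right,
# regardless of which training run produced it.
MINIMUM_FUNCTIONALITY = [
    ("This is a really great thing and it's so awesome!", "positive"),
    ("This was terrible, I hate it.", "negative"),
]

def run_minimum_functionality(model):
    """Return the failing rows; an empty list means 'safe to deploy'."""
    return [(text, expected, model.predict(text))
            for text, expected in MINIMUM_FUNCTIONALITY
            if model.predict(text) != expected]

failures = run_minimum_functionality(load_current_model())
print("PASS" if not failures else f"FAIL: {failures}")  # prints PASS
```

Wired into CI, a non-empty failure list raises an assertion, the build goes red, and the bad model bundle never reaches the deployed API.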
your data, by testing how that data is running through the model and inferencing. So that is a very cohesive system. Unfortunately, here's the bad news: I think your approach is a little bit more the exception than the rule in the broader industry out there. I've seen at a number of organizations that these skill sets and understandings are still kept in separate groups, separate departments, maybe even whole separate organizations, and I think the benefit of working for a relatively small organization, not a giant Fortune 500 thing, is that you're able to keep everything close enough together, and your expertise is able to intertwine to solve that well. I think that is a good guideline for others to look at. I know you have all this spare time on your hands, but should you ever find some, I think actually publishing a little thing on that would be a useful tool for people to see how you've approached it, other than just listening to the podcast here, because you're hitting those software best practices and the data science best practices together, and treating the model as an artifact that needs to work, per testing, in that software deployment and delivery process. So anyway, I love what you're doing there. Yeah, I think if there was a takeaway for people that might get them thinking: you don't have to do things exactly that way, and I'm sure there are better ways to do it, but one thing I've learned over time is that if your model is being used in software, and you can update your model without that software being retested in some way, there's a huge risk and problem, right? Because you could just run another training run, and there your model gets updated, and all of a sudden
your software product breaks, but for the software team or the other people working on it, it's all invisible to them. So that step, whatever it is, could even be manual, right? Like, you don't update the model in one S3 bucket until you run this script to test it, and then you update it; it could be manual in that way, if nothing else, but there needs to be some process there. Yeah, that can be brutally hard to debug, something like that, because, going to your example, if you're the software team and you haven't made any changes, and yet your deployed and delivered software has just broken, and you don't have insight into the fact that the model changed, you can waste weeks of time trying to figure out what happened. So it's a huge productivity hit not to have that point of integration and to apply those best standards. In the scheme of things it's still complicated, more so than it should be, but it's cheap, in the sense that the things you need to run and store and keep up with in that versioned manner are not hard today. That is a capability anyone can afford, and not having the discipline to do it can result in some real challenges that waste a lot of time. So I'm with you on that: having those integrated, and having the discipline to do them together and make sure it runs at the end, is pretty vital to moving as fast as we can. [Music] Well, Chris, I sort of went down the infrastructure rabbit hole, as I often do, and my team will tell you that I often go down that rabbit hole, but there is a lot going on elsewhere in the AI world that has hit our desks over the past weeks. You forwarded something to me related to some of what you've been following in the aerospace industry, or your own interest
in piloting and that sort of thing. You want to describe that a little bit? Yeah, happy to. I ran across something from Carnegie Mellon University, which is by any measure one of the top AI schools, often described as the top AI school in the world, definitely in that top half dozen without question. They released a paper on something they had been doing in their robotics organization, entitled "AI Pilot Can Navigate Crowded Airspace", and of course this appealed to me from the AI perspective, the fact that I am a pilot, and the fact that I work for an aerospace company; it hit me on a bunch of fronts. In this one, what they did was put together a model, train it, and test it in simulation, and it enables an autonomous aircraft to navigate crowded airspace. For those who don't fly as pilots: airspace around airports gets very crowded, and you really have to work hard to maintain separation and keep things safe, so this is a non-trivial problem. People that don't pilot will look up and go, "well, you've got the whole sky there, how bad can it be?", but you're in fast-moving vehicles, and you're all moving in the same patterns, so you can have a problem very quickly. This is a pretty important challenge to overcome, and it's one that we know the industry is pushing forward on. So I will say, kind of obliquely: in the military space, where you're not necessarily in air patterns and stuff, we're doing autonomous stuff, as is reported in the general news all the time, in terms of aircraft, and I think that's fairly well understood. In the civilian space, though, if you imagine in the future being on an autonomous airliner, and you and 200 of your best friends are flying around with AI models driving this: it's not as hard to move between busy airspaces as when you're moving, maybe
across the countryside or something, where you're kind of out there by yourself and there's not as much to do; but at the start of that journey and at the end of that journey there are a lot of other aircraft in close proximity to you, so the ability to do this is pretty important. They say that this model can safely avoid collisions, predict the intent of other aircraft, track those aircraft and coordinate with those aircraft's actions, and communicate over the radio; it uses natural language processing to do that, and it has a vision system that uses six cameras to visually track. One other distinction in general with flying: there are two kinds of systems for flying. One is instrument flight rules, which is what you would think of with airliners, and one is what us private pilots, the little guys so to speak, do, which is visual flight rules, and you tend to have visual flight rules lower down, near the ground. We've had automatic flight control systems in airliners for decades that fly, but those tend to be high up in the sky, where you're kind of alone traffic-wise. This is designed to do visual; it can work with instruments, can work with radios, can work with cameras to do the visual stuff, and make all the decision-making in real time, right there, to keep everybody in the sky safe. So it's a pretty cool problem to solve, and eventually, I know people shudder when I say this, but I think it is not so far out that all of us will be getting on airliners that are almost entirely automated. They might have a human in the cockpit because it makes us feel better, but eventually that just won't really be needed, and this is one of those big steps toward trying to do that. And, going back to a theme we've been talking about lately, they are combining natural language processing models with visual processing models, they're integrating those, and they're being
able to use that system across multiple domains to effect a real-world solution here. So I think this is very much in line with the kinds of innovations we've been looking at over this past year. Yeah, so I have a couple of follow-up questions on this, which is really interesting. One is, I just want to get your perspective, since you're more plugged into the space and have interest in the area. I know it's been talked about for quite a while that for the sort of short, last-mile kind of trips, in a large city you could have air taxis, right, which are basically humans in a big drone flying around the city. My understanding is any reasonable person would say, well, there need to be computer systems within that that would coordinate and manage the safety of all the routes, and if there's a bunch of things flying around in the air it gets very complicated and a crowded space. So maybe this gets us closer to that; do you have any thoughts? It does. So I'll both answer your question and make a reference that not even you know about me, not just the listeners. The specific issue there is you're talking about massively scaling up the number of platforms, as I would say in my industry, that are in that space. My specialty is: if you were to say, instead of having 10 things in a given closed space (it could be airspace, it could be on the ground), what if there were 10,000 in the same space, maybe not all big, but many very small and autonomous? How do you manage that? That's specifically where my current focus and expertise lie. So this is a control system for aircraft that can enable things like autonomous taxis and package delivery and all these other things. It doesn't solve the whole problem; it kind of solves how you make decisions from one platform. It doesn't necessarily solve all the integration things when you
have a massively scaled situation there. But it's an important step; it's really crucial to enable this future of civil aviation, which includes all of these low-flying drones, package deliveries, and so on. Kind of like when we look at a Hollywood movie, you know, these futuristic science fiction movies, maybe Star Wars, where the city is just full of flying things everywhere at every level. We're heading that way, but we need to get some of these technologies in place, and this is a key, crucial Lego in that pile of Legos to build that. Yeah, my second follow-up question to that, which is funny that this came up today when we're chatting: so I always listen to podcasts when I'm in the shower, which is a weird way to start this subject, but one of the ones I really like is Darknet Diaries. I love Darknet Diaries, and you know, just the stories there. But I listened to one this morning, actually from a while ago, with a guy named Samy, and he created this sort of proof of concept, which you can look at on GitHub, where it's called SkyJack. And essentially what he showed at the time (this is a while ago, I'm sure things have gotten better now), but he showed that he could put a drone up in the air with an antenna on it and a Raspberry Pi, and basically wherever he would fly it, he would hijack the other drones around, because he would intercept their signals and he could actually take control of them. And so it's one of these things like anything that's connected is hackable, right? And so also as you kind of
automate, yeah, as you automate things in this area, the computer can obviously do well, maybe not obviously, but I buy into the fact that a computer can do a better job at this sort of flight control than a human, but it makes it sort of hackable as well, right? It does. And if anyone doubts that computer and model together are better at this point, you can go back and Google DARPA AlphaDogfight. A couple of years ago there was a public demo (it's on YouTube, and we've talked about this briefly on the show before) where a bunch of companies brought their automated autopilots, they put them on an F-16 simulator, and they competed against each other. And then they had to compete against an Air Force instructor, and that Air Force instructor was the equivalent of what we would think of as a Navy Top Gun instructor; it was a weapons school instructor. And the top one that went against the human absolutely demolished him in a dogfight. I mean, demolished him five times in a row. It was just stunning to watch. And so yes, computers can currently do this better than humans can, even if you're the best human in the world. So that's already a done deal, and it's just being improved upon since. So when people are worried about computers flying these things, I'm that one person who would much rather have this technology flying the airliner I'm in than the human, because I know what the difference is in capability. So it doesn't solve everything, and it doesn't solve what happens at massive scale, when you're struggling to handle all of the things in the airspace together, but it's pretty crucial. And I think your point about the cybersecurity there: cybersecurity is huge when it comes to aviation and autopilot, because it's a natural thing to hack. In my space, which is defense-oriented, you would assume that your
adversary is always going to actively try to do exactly that, so it's just built into the equation. We're automatically handling that as we build solutions for it, and that will obviously roll out into the civilian space as well. Air taxis and package delivery, all of that has to have that cybersecurity capability. Super interesting. I'm glad we went down that rabbit hole. It's fun, isn't it? I love talking about that. Yeah, one thing I wanted to highlight, maybe just somewhat quickly before we get on to learning resources: I took a look recently (it popped up in my Twitter feed), but I looked through the whole set of demos, and I would recommend people go and check out some of the NVIDIA AI demos that are coming out. We'll link to that in our show notes. I don't know if I just hadn't looked at it in a while, but I looked there and I was like, oh wow, and then, oh wow, there's another, oh wow. All sorts of things I only sort of peripherally knew were going on, but really powerful demos. One being this way of taking sketches and turning them into photorealistic images, which is kind of related to some of the image stuff that we've been seeing recently. But another really cool one, I thought, was this Vid2Vid Cameo thing, where you could synthesize a talking head based on an image of yourself that you could use in, for example, Zoom calls. So you could just have your talking head synthesized based on imagery versus your actual video on the Zoom call, which I thought was really interesting, and I don't know, something I kind of want to try. No, I agree with you. I'm so glad that this podcast is audio only, because if it wasn't, we would absolutely, definitely need to implement that to make us presentable. So yeah, we have faces that are made for radio, so to speak, or podcasting in this
case, but yeah, you know, we're already kind of seeing that. I mean, a lot of folks are using Zoom and other platforms like that at work, and I always have alterations that I'm making to make it more interesting, and some of them actually do the facial fixes in real time. So yeah, that sounds like something that I definitely need. My wife, I'm sure, is always telling me, oh gosh, you need to look better than that. No comment from my end. Well, let's maybe hit a few learning resources as we close out here. Going straight off of the NVIDIA stuff, which is mostly vision and 3D things, I wanted to highlight this paper that I saw trending on Papers with Code. It's a survey paper, so it's a little bit more approachable in some ways: 3D Vision with Transformers: A Survey. This is from Jean Lahoud et al., and they go through and talk about all of these sort of 3D representations, and using Transformers on 3D data for vision is really interesting. If you're wanting to get an overall picture of some of the things going on in this space with 3D data, I think that's a really interesting place to get some of that information all in one shot. The other thing I was going to mention, which is not related to vision but back in my sort of world of NLP: there's going to be a Natural Language Processing with Transformers course. It's going to be in September through October of 2023, and there are some people from Hugging Face that are teaching it. It looks pretty awesome; it's a live teaching sort of thing. So yeah, I would definitely recommend checking that out if you're interested. It is paid, but if you're interested in that sort of paid live learning opportunity, then it seems like a really good one to learn some of the latest stuff. Yeah, good people teaching it there, in terms of, if you're going to spend money on
it, spending it for people that are at the top of their field, that are legit. Yeah, exactly. So cool, Chris. Well, that's all I had for today. I enjoyed the various rabbit holes we went down, and learned a little bit about aviation along the way, so yeah, I appreciate the conversation. Yep, absolutely. Keep flying high, Daniel. I'll talk to you next week. All right, bye-bye. [Music] Bye-bye. All right, that is our show for this week. If you dig it, don't forget to subscribe. Head to practicalai.fm for all the ways, and if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague. Word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now. We'll talk to you again on the next [Music] one |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AlphaFold is revolutionizing biology | AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment, and is accelerating research in nearly every field of biology. Daniel and Chris delve into protein folding, and explore the implications of this revolutionary and hugely impactful application of AI.
Leave us a comment (https://changelog.com/practicalai/188/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• AlphaFold (https://www.deepmind.com/research/highlighted-research/alphafold)
• AlphaFold reveals the structure of the protein universe (https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe)
• AlphaFold: Timeline of a breakthrough (https://www.deepmind.com/research/highlighted-research/alphafold/timeline-of-a-breakthrough)
• AlphaFold Protein Structure Database (https://alphafold.ebi.ac.uk)
• GitHub: deepmind / alphafold (https://github.com/deepmind/alphafold)
• Oxford Protein Informatics Group: AlphaFold 2 is here: what’s behind the structure prediction miracle (https://www.blopig.com/blog/2021/07/alphafold-2-is-here-whats-behind-the-structure-prediction-miracle)
• Nature: How AlphaFold can realize AI’s full potential in structural biology (https://www.nature.com/articles/d41586-022-02088-x)
• Nature: ‘The entire protein universe’: AI predicts shape of nearly every known protein (https://www.nature.com/articles/d41586-022-02083-2)
• Nature: Highly accurate protein structure prediction with AlphaFold (https://www.nature.com/articles/s41586-021-03819-2)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-188.md) | 14 | 0 | 0 | I do know that proteins are the foundation of all life. They can be incredibly complex. Many of our longtime listeners will know that I'm really into animal welfare causes, and particularly I handle venomous snakes quite often. But a friend of mine, Dr. Brent Seagull, he and I will often talk about snake venom, and Brent, with his expertise in chemistry, he'll go and check on the protein makeup of snake venom, and then he'll look at the protein molecules and the folds and where they're at, and he can just tell me exactly how those proteins are affecting, if someone's bitten. Protein folding may sound really esoteric to those of us who are not in biology professionally, but it's crucial to understanding chemistry and life [Music] itself. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already. Head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen; check them out at fastly.com. And to our friends at Fly.io: we deploy our app servers close to our users, and you can too. Learn more at fly.io. [Music] Welcome to another Fully Connected episode of the Practical AI podcast. In these Fully Connected episodes, Chris and I keep you fully connected with everything that's happening in the AI community. We'll take some time to dissect a little bit of the latest AI news and dig into a few learning resources to help you level up your machine learning game. I'm Daniel Whitenack, I'm a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing really well today, excited about the thing that you're about to tell our
audience we're going to talk about, and I just wanted to put a tiny bit of context around it. Sure, we've gone through the pandemic, and there are major wars that we've talked about, you know, ongoing as we record this, and monkeypox has just been designated as an emergency (I don't know the exact designations) by the WHO, and now the United States has declared it such as well, as of yesterday as we record this. And so we're going to be talking about a topic today that reminds me that we live in the most interesting time in human history, and things are changing faster than they ever have, and there's actually a lot of reason to have hope in the world. As we talk about the possibilities that we're going to talk about today, I just want to kind of remind people that there's a lot of things that are really worth being positive about, and I think today's topic is frankly one of them. Yeah, a lot of people really doing things with tech that are beneficial, or at least the intention is that they would be overwhelmingly beneficial, right? Yes. So I think that this factors in. The topic that we'll be talking about today is AlphaFold and the corresponding database that they've released of protein structures. This came up, and you know, I don't know about you, Chris, but I've seen it pop up in my news feeds various times over the past couple years, and most recently just this week I think it was coming up in the news because of some of the things that they've released. In particular, the sort of recent news is that they have this database of protein structures, and we can talk about kind of what that means and how it was generated, etc., over the course of the podcast. But this database of protein structures, they've just released and expanded that from 1 million structures to 200 million structures, so it's a
pretty big increase in terms of the size of this database. And I don't know, we were just talking even before the episode, Chris, about proteins and maybe how those can be important for the study of various things. I don't know if you want to chat about that at all, but absolutely, it was definitely interesting to look at this project and understand a little bit more about that field, which I'm not actively participating in, and it's important that we note that we're exploring this as non-experts. Yeah, come with us along our journey of learning about AlphaFold. So, you know, obviously we're here with our listeners because we all love AI and we're exploring things, but often the use cases are things that we don't have expertise in, and this is one of those episodes that we call Fully Connected, where we're just exploring and we're bringing people along on the journey as we talk about this. And I have no particular expertise; I took some biology in high school and in college, but I have no particular expertise. But I do know that proteins are the foundation of all life, and it is incredibly important to understand how they can be used in their application, their 3D structure. They can be incredibly complex. I'll actually give, and I know I've relayed this to you privately, but I'll give a quick setting on kind of why 3D structure is so important. Many of our longtime listeners will know that I'm really into animal welfare causes, and particularly I handle venomous snakes quite often, with appropriate safety gear and such. But a friend of mine named Dr. Brent Seagull, who has a chemistry PhD from Harvard, he and I will often talk, just for fun (it's not what either one of us primarily does), we'll talk about snake venom as just a fun two-guys-chatting thing. And Brent, with his expertise in chemistry, can literally look, we'll be
talking, comparing two species, and he will be able to pick up, he'll go and look at the protein makeup of snake venom, and then he'll look at the protein molecules and the folds and where they're at, and right there off the cuff he can just tell me exactly how those proteins are affecting, if someone's bitten, what that will do, and what that particular combination of proteins will do. And so protein folding may sound really esoteric to those of us who are not in biology professionally, but it's crucial to understanding chemistry and life itself. It really gave me an appreciation for this topic before we got to this episode, and so I'm pretty excited about the possibility, and I think it's going to really revolutionize medicine. Yeah, and I think in this episode, at least, what we're going to try to do is kind of talk through the context around AlphaFold, the data, how it sort of works, and what the implications are. So getting in the weeds a little bit with how this is actually operating, we'll get there at a certain point, but yeah, I think setting that context is good. I was looking through some articles, again, because I'm not a chemist or a biologist, but looking through some articles that we'll link in our show notes as also good learning resources for you, talking about the sort of reason why proteins and protein folding is useful. Well, this is from the National Library of Medicine, which sounds very official (I don't actually know a lot about the National Library of Medicine). They talk about how proteins are basic building blocks of all cells in our body and living creatures, and that we kind of often think of DNA and genes as sort of being at the core of the information needed for life, which is true, but then the sort of dynamic processes of life, like the things that happen in our bodies, like the functions and
the processes, defense mechanisms, and reproduction of certain things in our bodies, all of those sort of dynamic processes are carried out by proteins, which, you know, do this kind of folding and assembly into all of these complexes to actually perform functions, right? So it's like the functional process. I mean, to really get that tangible, and these are examples we've seen in many of these articles on this: the fact that your eye and the retina can receive light and process that light to your brain, the mere fact that it can do that, is protein-based. The fact that right now, even if you're sitting down, you're probably moving some part of your body, and that movement that you're engaged in right now is based on proteins. It's just impossible to escape that fundamental kind of function that proteins provide, times, you know, a billion different things. And so this kind of technology, it's going to be really fundamental to life going forward. And I know I was joking to you earlier that I wish I was younger than I am now, not just from an age standpoint, but because then these kinds of technologies could positively influence me for more years than they're currently going to be able to. It's like, every time I see these great advances coming out, and I'm in my early 50s, I'm looking at it kind of going, God, why couldn't that have happened in my 20s, or something like that. So this is pretty cool stuff here. One of the interesting things to me is what they're releasing. So we kind of talked about how protein structure is important and how it sort of is tied to the basic functions of life, and why that, like you're saying, is important for advances in medicine and other things. What's interesting is that all of this complicated function and process that are carried out by proteins are fundamentally driven by sequences of
what's called amino acids, and there's 20 of these amino acids. And so I was trying to think of a metaphor, and I don't know if this has been used, I'm probably stealing it from someone, but when I was going through and looking at this stuff, these sequences of amino acids, there's 20 of them: you can think about how much complexity we can see formed out of 26 letters of the Roman alphabet, in all sorts of languages; you can express innumerable things with that kind of small set of characters. Here we have this sort of sequence of amino acids, there's 20 of these acids, and that's what forms proteins and drives how they fold and how they assemble and how they do all these functions. And so when we're thinking about how does this intersect with AI, the data transformation that we can think about is: on one end you have sequences of amino acids that you might know about, and on the other end you have the folds and the assemblies and the geometric structures, the 3D protein structures that are driven by these sequences of amino acids, or that you could predict from these. So an AI model, as we've talked about many times on this show, at its core, is a data transformation, right? You take an image in and you get a label out, or something like that. Here you're taking these sequences of amino acids in, and out of it you're predicting a 3D structure of one of these proteins. That's really the fundamental kind of data transformation that we're talking about, which is what AlphaFold is addressing: sequences to 3D structure. That's at the main core of what we're talking about. And I think in some of the materials that we reviewed ahead of time, if I'm understanding them correctly, those different amino acids, the folding itself is kind of amino acid to amino acid, so even
though we're talking about sequences, and you tend to think about a line of amino acids with the word sequence, but it's being folded in 3D, with those different amino acids connecting to each other in different ways and lots of different shapes. So even one sequence can have many different possibilities there, going back to your point, even different folds with the same amino acids, is the impression I'm taking away. So there's a lot to happen there, and as I'm kind of referencing back to what I talked about before with my friend Brent, he can look at that and see, functionally, kind of what it will do after that. So it's very, very practical AI that we're talking about here. We're talking about something where the output can be put in the hands of an expert who can immediately see, in many cases, where this is going and what the effect will be. So super practical medicine we're talking about here. Yeah, definitely. And I guess to kind of bring home the importance of the methods that we're about to go into: previously, I mean, it has been known that knowing these structures and the folding process is important, and so people have done experiments over time; you can find out the structures via experiment. I don't know all the details of that, maybe we can find a link to share in our show notes, but experimentally you can find these things out. But of course anything that involves chemistry and biology experiment is going to be limited in terms of the pace and capacity that you can do, as we've all learned in terms of lab testing, you know, COVID results and that sort of thing in recent years. So there's a limiting factor on that, which means that were you to be able to predict protein structures with a computer, which still has a cost, right, in terms of computational cost and environmental cost and other things, but were you to do it, you're
no longer constrained by your sort of experimental capacity; you're constrained maybe by your computational capacity and that sort of thing, and so the scaling mechanism is quite different, I think. And to that point, I believe, correct me if I'm not remembering this accurately, but I think that it was trained on roughly 150,000 known protein folds that had all been human-determined, you know, this was before the AI was applied, so that was the baseline. And to talk about the leap that we're describing here: what was announced on July 28th, which was just a few days ago as we record this, was the fact that from that training set of roughly 150,000, they went to 200 million, which describes nearly the entire universe of known folds. And I'm sure that there are more that they're going to continue to work on, but kind of, that's everything that we currently know for all practical purposes. So you're going from a fairly small subset to most everything in this one big release that we'll talk about, with the database and everything. So I'm pretty excited about what comes [Music] next. Okay, well, let's maybe give just a little bit of context for AlphaFold and then talk about the database that they've released a little bit. So my understanding is that AlphaFold first started getting notoriety because of these shared tasks, that's what I would think of them as in the AI world, shared tasks, maybe they're called something different in the biology world, but there's these shared tasks within a certain community: the Critical Assessment of techniques for protein Structure Prediction, or CASP (I guess I'm assuming I'm saying that correctly). And they've had these over the years, and CASP 14 was one of those shared tasks where AlphaFold really stood out from the rest of the pack in terms of what it was providing, and really showed
the ability to very closely replicate the accuracy that you could achieve via experiment with predicting these structures, right? Because experiment in and of itself also has error related to it, right? So when you do an experiment to get these structures, you also don't get like 100% accuracy; there's error bars and all of those things. And so what they were showing, which is quite extraordinary, is that this AlphaFold thing, which we'll talk about more and get into the weeds of, is able to take these sequences (and a sort of database of sequences) in and output structures that are of the same kind of level of quality as experiment in many cases. Which means, hey, well, now you have a sort of choice: you could run experiments, but if you're getting about the same accuracy out of the simulation, then that scales, like you were talking about; the scale that you can achieve with that is something wildly different. Yeah, I think all of the outputs, obviously being from an AI model, they're all predictions; the accuracy of those predictions has proven to be significant enough to where further research based on those outputs can proceed, rather than a lot of kind of going back and trying to figure out if the output of the model is sufficient in terms of accuracy to be able to base further research on it. So it's not just turning out a lot of outputs; it's also the fact that they're very high quality, and those two features together are what's going to really propel things forward in the larger biology and chemistry space here, to drive medicine forward for all of us. The method that they're using has created these predictions, and so it's really this bank of predictions that is part of this release that has been getting a lot of attention. We'll link to a blog post about the release in our show notes, but one of the things that I thought was really interesting, Chris, I
don't know if you saw this, was there was a figure of one circle, which was the experiment today, like how many structures do we have in our database of experiments, and then the database when it was originally released, because they originally released the AlphaFold database with about a million structures, and then they have kind of the circle of the AlphaFold database today. And the scale, just sort of, for our listeners who aren't seeing this in front of them right now, it's like one big circle, which is the database today, and experiment is sort of like a little dot within that in terms of what it represents. Because experimental structures in a database, one of these I understand is called the PDB, has about 190k structures, and Chris, that's what you're saying, these sort of supervised examples that they used in training, and then the AlphaFold database today has 200 million plus. So that's pretty crazy. They also give these circles representing how much is from different places, and you've got kind of a circle for animals and plants and bacteria and fungi; other animals is the biggest category, but then you have plants, bacteria, fungi, and other things. So it's pretty interesting, both the diversity and the size of this, I would say. And again, I'm not in the field, but my understanding in terms of what's offered here is actually, you know, 3D structures, so you can look them up. AlphaFold itself is open-sourced, so the inference pipeline is open-sourced; as far as I know, the training pipeline isn't, but the inference pipeline is open-sourced, and you can look kind of in 3D at the structures that are coming out. So it's like 3D Cartesian coordinates that are coming out: you put this sequence of amino acids in, you get these 3D Cartesian coordinates out, which are really just this 3D structure representing the 3D structure of the proteins. Yeah, you know, as a data set, the ability to do that, and then
combined with previous technologies, you know, so if you go back a few years and you talk about how big it was to release the human genome, yeah, and that provides a different set of capabilities, in terms of understanding what our genetic predispositions are and all sorts of different use cases. But now with the protein folding, to be able to maybe start with the genome and understand what's likely to happen and what your predispositions are, and then you can go use protein folding from this database and be able to solve for some of those issues, is pretty remarkable. Yeah, I think also, when you think of the scale, 200 million, one of the other things that comes to my mind, and I'm sure people are exploring this, and our listeners, please share links with us in our Slack or Twitter or LinkedIn or wherever of studies that you know about, but you have now this data set of 200 million, I'm thinking, oh, what does it look like to do clustering sorts of techniques on top of that? Can you learn about the sorts of structures, now that all of the proteins are kind of mapped to these 3D structures? What can you learn at a more aggregate level about clusters of folding patterns or structures? What can you post-process this data set into, and maybe build models off of these 3D structures? We all know that graph neural networks are a huge thing that's coming up, and people are exploring that more and more, and obviously these are 3D sort of spatial graphs, and it would be interesting to know what people are doing with these structures on the back end after they're formed. I think that's an interesting direction to study as well. Yeah, I'm looking at these same documents that you are, and I can't help but think about the fact that hopefully this is un
revolution in this type of research. And speaking of that, I'm wondering how many high school and college kids today who have an interest that crosses over might leap into this. I think this is a moment we're going to remember, just like the release of the human genome. Yeah, and they already talk about the impact AlphaFold is having, even just a couple of months after the release. I see here that after they open-sourced AlphaFold and the database, it's already been cited more than 4,000 times in academic research, and AlphaFold predictions are referenced in publications. There's a large complex that acts as a gateway in and out of the cell nucleus; there's something to do with malaria, a protein for potential inclusion in vaccines; there's something to do with the rate of mRNA degradation, and I think a wider audience is now more familiar with mRNA after all the COVID vaccine work. There's also something to do with what causes frost damage to plants, which is obviously an agricultural thing. So even outside of medicine you could think about agriculture and other areas. That's a really good point, because our conversation has focused very much on medicine, but agriculture, food supplies... there are so many different areas. Pretty much everything in life, not just us walking around, is impacted by this. And Chris, knowing your interests, I noticed this one too, about something involved in the immune system of egg-laying animals, including honeybees. You're probably even more familiar than I am with how honeybee populations are in decline. Yeah, it's a huge crisis that we're in. So who knows how this could impact many of those things.
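To make the earlier point concrete (you put a sequence of amino acids in, you get 3D Cartesian coordinates out), here is a minimal sketch of what those inputs and outputs look like as data. This is our own illustrative toy, not DeepMind's code: the mapping of residues to integer IDs mirrors how NLP maps characters to numbers, and the coordinate array is just a placeholder with the right shape.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
AA_TO_ID = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq):
    """Map an amino-acid sequence to integer IDs, the same way NLP maps
    characters to numbers before a model can work with them."""
    return np.array([AA_TO_ID[aa] for aa in seq])

# A predicted structure is, at its simplest, one (x, y, z) coordinate
# per residue: an N x 3 array of Cartesian coordinates.
seq = "MKTAY"                      # hypothetical 5-residue fragment
ids = encode(seq)                  # model input, as integers
coords = np.zeros((len(seq), 3))   # placeholder where a model's output would go
```

Real inputs to AlphaFold are richer than this (the MSA and pair features discussed later in the episode), but the shapes above are the essential contract: a length-N sequence in, an N x 3 coordinate array out.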
Well, maybe we could jump in now and start talking about how AlphaFold does this. I think we've established that it's caught the attention of many people because it does a really good job, and they've open-sourced the inference pipeline so people can use it. But what does AlphaFold actually do? This is Practical AI, so even if we're not all doing protein folding, there may be elements of the way they process this data that are useful for our own creativity and our own problems. It's interesting that in their processing pipeline you see a number of things popping up from other domains. The Transformer architecture shows up; there's what they call an Evoformer, and we can get into why it's maybe evolution-related and how it's iterative; there's an element of joint embeddings; and in training they use supervised and semi-supervised methods, plus BERT-style masking, not as pre-training, but as part of training. We talked about this on a similar episode: this sort of innovation is built off a number of things that have been sweeping across the whole AI world, including Transformers, joint embeddings, semi-supervised methods, and masked language models. All of these elements contribute somehow to how the data is processed in this pipeline. Yeah, a few episodes back we had quite a conversation about that, and the fact that, as an analogy, you can think of these different approaches you just enumerated almost as Legos. The creativity of scientists and researchers is being able to say, well, I'm going to
try this one, combine it with that one, maybe in a completely different domain, and get these interesting outputs. Before this episode I was thinking that about a year ago we entered, looking back, a new era of AI. There was the development of those individual models for a while, but now we're seeing the mixing and matching of them, and I think this is one of the outputs of that. So, cool stuff. Okay, Chris, if I'm understanding this right (and we've looked through a bunch of material; even you and I are still learning about AlphaFold), the architecture driving AlphaFold is split into a few main components. The first takes an input sequence and develops two encodings of it: one called a multiple sequence alignment, and one which is a pair embedding or pair representation. So there's a first stage going from input sequence to encoding or embedding, then a second stage that takes that initial representation through a Transformer-inspired architecture to develop a sort of hidden representation, and those hidden representations are fed into a last stage, a structure module, which outputs the predicted Cartesian coordinates of the protein. So we've got encoding, then a Transformer-based architecture that produces a different representation or embedding, and then a structure module that produces the Cartesian coordinates. And what's interesting, and one of the reasons I think they've used terms related to evolutionary algorithms, like Evoformer, is there's
actually an iterative piece to this. Those last two stages, putting the representations through the Transformer-based architecture and then generating the structure, actually cycle. In the paper they say they do that three times: they make an initial prediction of the structure and then refine it by passing it back through the network, so it goes through this loop a few times and outputs a refined protein structure at the end. It has a recurrent-network aspect to it in the diagrams they show. Yeah, exactly, there's this looping that happens. And from what I was reading, using deep neural networks to predict protein structure is not in and of itself an innovation of this work; people have tried it for quite a while. But there are two main pieces here that really set this apart. One is the Evoformer architecture, which is unique to what they've done, and the second is the iterative process, which helps the network learn across these representations and the predicted structure in a really powerful way. We can dive into a couple of these things, but the first one reminded me a lot of NLP to some degree, because you've got this input sequence, which again is just the sequence of amino acids, and they generate two representations from it. For people more familiar with NLP: you might have a sequence of characters, and you might assign a number to each character, because you have to represent text as numbers for a computer to calculate with. Here they're in some ways doing a similar thing, taking this input sequence and representing it with numbers, but in two
really interesting ways. One tries to identify other sequences (not identical, but related ones) that have been found in living organisms, and creates what they call a multiple sequence alignment: an alignment of this sequence with other sequences. Then they have this pair representation, where they try to identify proteins with a similar structure and construct an initial pairwise representation, on the idea that there are similar things, similar proteins, in the whole database we've learned about, so maybe we can learn from those. So the initial sequence goes in, and out come these two representations: the multiple sequence alignment, which is a matrix of sequences, and the pair embedding, which is a pair representation of one sequence with another. Let me ask you a question that's more from your NLP background: would it be fair to say that going through that two-step process is sort of like pursuing the probabilities iteratively, constantly working out where it's more likely to be, between the multiple versions it's producing in that intermediate step and the other proteins that may have exhibited the same sequence, so you already have a sense of what that folding might look like? So in NLP we leverage a lot of pre-training, which isn't leveraged here; we learn that language behaves in a certain way, so we can pre-train and transfer things in. I think the idea here is slightly similar, in that proteins differ from one another, but if you have similar sequences or similar templates for your protein, they're not going to be quite the
same, but some fragments of structure are going to be conserved across them. So I think they're leveraging this existing database of knowledge and these paired representations to understand that yes, there's something unique about this single inference, but we also know a lot about other protein structures, and nothing's completely new. If the contacts between amino acids are similar in this case to another case, it's likely that some of those fragments of structure will be preserved as well. I've got to say, Dr. Whitenack, for someone who is not trained in this field, that is quite a good explanation. Oh, listeners who have some chemistry or biology background, correct me in our Slack channel or something. I should actually give a shout-out to a series of blogs I looked at from the Oxford Protein Informatics Group. If anyone from that group is listening, thank you for your blog posts and your work explaining many of these things, because they're very useful; we'll make sure to link those in the show notes as well. So you've got this initial representation, and as we've learned, that's useful basically everywhere, whether we're talking about images or text or whatever. These initial representations, the MSA or multiple sequence alignment and the pair embedding, are passed through a Transformer-based architecture, this Evoformer, which is a unique architecture; you can read more about the choices they made with it in the paper in Nature. It passes through this Evoformer architecture, which exchanges information between the two representations, between the multiple sequence alignment and the pair embedding, and then outputs an updated representation of both the multiple sequence alignment and the pair embedding, the hidden state of the model. That's what's passed into the third stage, the structure module, which takes those embeddings, that hidden representation, and maps it to 3D Cartesian coordinates, which is the output structure. And like we said, there's a looping process: the structure is fed back into the front end of the second step, the Transformer step, and you do this a couple of times, where after generating one structure, that information is passed back to refine it. I'm curious, and I'm going to throw you another tough question, and it's fine to say "too far, Chris." As you looked at the Evoformer and how it approaches things, do you have any thoughts, in this era of using different components in different ways and combining them across domains, on what an Evoformer might be used for in other contexts? And I know that's getting out there a bit. It's a very interesting question. One random idea, and this is an idea I haven't thought about until this moment, so there are probably flaws in it, but I wonder if something like this could be used for multilingual models, because you're taking these multi-sequence alignments, which are sequences of different proteins, and they're labeled accordingly. I wonder if you could have a sort of multi-language alignment between different languages and then factor that in. I don't know, that's a random thought, but I definitely think that this
sort of idea, that you would take a single input, represent it in two initial representations with slightly different character, representing different things about your problem space, and then combine the information of both representations in the Transformer, could be applied in a number of different ways. Whether it's text input or image input, you could represent it in a couple of different useful ways and then mix those representations in this sort of Evoformer-type architecture. I'm sure that some of those 4,000 citations do a much better job at postulating possibilities than I do. So maybe that wasn't too bad for off the cuff. Yeah, maybe one homework assignment for all of us would be to look at Semantic Scholar or something, go through the 4,000 citations, and see which ones relate to reuse of the Evoformer architecture; I'm sure a few things have already come out. Before we close out, I think we should say something briefly about the training of this, because that's an interesting bit; we are Practical AI after all, and I think we can learn a little from the general training structure they set up for AlphaFold. They have this initial set of supervised examples from... I was going to say PBR, but that's definitely not the right domain. The protein DB, PDB, that's it. So PDB, not Pabst Blue Ribbon, but the PDB, is this set of existing protein structures (170-some thousand, whatever it was). So they have supervised examples, but what they actually did was train the AlphaFold architecture on these supervised examples, then use the trained model to generate new structures for a bunch of different sequences, and for the high-confidence ones
they took 350,000 of those generated samples and combined them back in with the supervised, gold-standard samples to create a mixed dataset, which they then retrained AlphaFold on. So you have this mix of supervised learning with what they're calling noisy student self-distillation, which is basically the process of: I'm going to use my model to generate new examples, and I'm going to add the high-confidence ones back into my dataset. That's a really interesting setup that a lot of people could use; you don't have to be using AlphaFold to use that idea when you need to augment your dataset somehow. So that's maybe another takeaway here: they're using some creative elements in the training as well, which help boost the performance. As we wind up, I'd like to issue a challenge. We have so many practitioners in our audience, and I would love to hear about some of the novel ways they're taking these techniques, using them across other domains, and combining them; that has really been fascinating in recent months, seeing the creativity in the space across different types of use cases. I'm looking forward to hearing what people are doing with Evoformers and some of the other combinations present in this architecture to do completely new things, particularly things that benefit the world at large. Yeah, definitely excited to hear about that. I've already mentioned some learning resources, and we'll add a bunch of links to our show notes that people can explore, but if you're looking for somewhere to start, DeepMind has a really good, brief explainer video about protein folding and AlphaFold and how they fit together, so we'll include that. That's a really
good starting point. And if that sparks your curiosity, they have published a Colab version of the inference pipeline, so you can spin up Google Colab and try to predict some structures yourself. I think that might be the best way to learn about this: just try it. We'll link the AlphaFold GitHub, and you can try that Colab on your own. Okay, awesome. I'll finish with this: I started with the idea that there's a lot of reason to be optimistic about the world and the future, despite the fact that there are plenty of things to bring us down. If you've enjoyed this episode, go share some of it with the people in your life, whether they're into AI or not, just because it's worth knowing that the world is still moving forward in a really positive way, even when other things are a bit challenging. So share this with people you might not otherwise think about. And that'll be it. I'll talk to you next week, Daniel. Been good to chat, Chris. See you soon. All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or a colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for now; we'll talk to you again on the next one.
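The recycling idea discussed in this episode (Evoformer updates the representations, a structure module predicts coordinates, and the result is fed back for about three cycles) can be sketched as a simple loop. Everything below is a toy skeleton with made-up stand-in callables, not DeepMind's code; the only detail taken from the discussion is the overall cycle structure.

```python
import numpy as np

def run_folding(msa_repr, pair_repr, evoformer, structure_module, n_cycles=3):
    """Toy skeleton of AlphaFold-style recycling (names are ours).

    Each cycle refines the representations through the Evoformer stage,
    predicts 3D coordinates, and feeds the result back into the next pass.
    The paper reports three such recycling iterations.
    """
    coords = None
    for _ in range(n_cycles):
        # The Evoformer stand-in also receives the previous structure,
        # which is the "recycling" feedback path.
        msa_repr, pair_repr = evoformer(msa_repr, pair_repr, coords)
        coords = structure_module(msa_repr, pair_repr)
    return coords

# Stand-in callables so the skeleton runs end to end (purely illustrative):
evoformer = lambda m, p, c: (m + 1, p + 1)
structure_module = lambda m, p: np.zeros((m.shape[0], 3)) + m.mean()
coords = run_folding(np.zeros((8, 16)), np.zeros((8, 8)), evoformer, structure_module)
```

The point of the sketch is only the control flow: the same two stages are applied repeatedly, with the previous prediction available as extra input each time.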
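The noisy student self-distillation recipe described at the end of the episode (train on gold-standard data, predict on new examples, keep only high-confidence predictions, retrain on the mix) is generic enough to sketch without any protein-specific machinery. The function name, confidence threshold, and stand-in callables below are all our own illustrative choices.

```python
def self_distill(labeled, unlabeled, train, predict, confidence_threshold=0.9):
    """Sketch of a noisy-student loop (our naming, not DeepMind's API).

    1. Train on the gold-standard labeled set.
    2. Predict labels for unlabeled examples.
    3. Keep only high-confidence predictions and retrain on the mix.
    """
    model = train(labeled)
    pseudo = []
    for x in unlabeled:
        label, confidence = predict(model, x)
        if confidence >= confidence_threshold:
            pseudo.append((x, label))  # pseudo-labeled example joins the data
    return train(labeled + pseudo), len(pseudo)

# Toy stand-ins so the loop runs: the "model" is just the training-set size,
# and even inputs get confident pseudo-labels while odd ones are discarded.
train = lambda data: len(data)
predict = lambda model, x: (x * 2, 0.95 if x % 2 == 0 else 0.5)
model, n_pseudo = self_distill([(1, 2), (3, 6)], [2, 4, 5], train, predict)
```

In AlphaFold's case the "predict" step generated candidate structures and 350,000 high-confidence ones were mixed back in; the same augmentation pattern applies to any model with a usable confidence signal.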
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | AI IRL & Mozilla's Internet Health Report | Every year Mozilla releases an Internet Health Report that combines research and stories exploring what it means for the internet to be healthy. This year’s report is focused on AI. In this episode, Solana and Bridget from Mozilla join us to discuss the power dynamics of AI and the current state of AI worldwide. They highlight concerning trends in the application of this transformational technology along with positive signs of change.
Leave us a comment (https://changelog.com/practicalai/187/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Solana Larsen – Twitter (https://twitter.com/solanasaurus) , LinkedIn (https://www.linkedin.com/in/solana-larsen-016129)
• Bridget Todd – Twitter (https://twitter.com/BridgetMarie)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
• The Internet Health Report 2022 (https://2022.internethealthreport.org/)
• Facts of the AI Power Imbalance (https://2022.internethealthreport.org/facts/)
• Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-187.md) | 5 | 0 | 0 | even when we're talking about data futures or AI in the future, what we don't want to get into is, like, oh, robots. It's not a sci-fi future, really; it's more a you-and-me, real-life kind of future, because even though we're talking about very advanced technology, in some cases it affects people who aren't even on the internet. It affects you when you're walking down the street. There are all kinds of ways that aren't very high-tech, just basic daily life, where you encounter these technologies. So it's bringing it down to the ground level, where we can talk about it and where we can also approach it with grassroots communities when it's appropriate. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. Subscribe now if you haven't already; head to practicalai.fm for all the ways. Special thanks to our partners at Fastly for delivering our shows super fast to wherever you listen (check them out at fastly.com) and to our friends at Fly.io: we deploy our app servers close to our users, and you can too; learn more at fly.io. Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm not joined today by Chris, who is currently on a plane somewhere, taking his daughter to Disney World, I think, to have a wonderful time, so we'll give him the week off. In lieu of Chris, we have some amazing guests with us today to talk through some of what Mozilla is putting out with their IRL podcast and their latest Internet Health Report. We have with us Solana Larsen, who is the editor of the Internet Health Report, and Bridget Todd, who is host of Mozilla's IRL podcast. Welcome! Great to have you both. Thanks for having us. Yeah, so
excited to be here. I was so excited that we got to do this. Of course, you're putting out amazing content through the Internet Health Report and the IRL podcast, which this time around is focused on AI, and I'm sure we'll get into that. But maybe before we do, Solana, would you mind introducing, for those who aren't familiar with it, what the Internet Health Report is and how it came about, just for a little context? Sure. Well, it's an annual report published by Mozilla, and we started five editions ago asking the big question: what does it even mean for the internet to be healthy? What happens when we think about it as an ecosystem that can be healthy or unhealthy, or bits of both at the same time? And then the important question, of course, is how do we make it healthier? When we talk about healthy in this case, we're thinking a lot about how it acts as an ecosystem for humans, for humanity. Is it a benefit to the world? Is it something that is good for people? When we think about the things that are unhealthy, it's everything from disinformation and hate speech, but it can also be things like how many people are connected to the internet, how many women are online, whether people are able to build and code and compete. What is this ecosystem that we're building? Every year we would step back and look across a lot of different topics, everything from undersea cables to codes of conduct in open source communities and so forth. I think over the years a lot has changed in how we talk about and understand the internet, both in the media and in technical circles, and in how we think about regulation. So, in terms of moving with the times a little bit, right now is the moment to talk about AI. This is the first year that we've taken one big topic as the focus area for the Internet Health
Report and gone deep on just that. With AI, all the things that hurt or harm the health of the internet the most are magnified or amplified in a lot of ways, but there's also a lot of opportunity, and a lot of things are in flux and adaptable to what we do right now. So this is an exciting, important moment to be talking about it. That's so well put, and I appreciate Mozilla digging into this subject and covering a lot of really important aspects of it. I've listened to the first episode of the IRL podcast that's coming out with a lot of these stories. Bridget, you've been hosting that. From your perspective, as you were talking with Solana and the team and thinking about why now is the time to talk about AI, what were some of your initial thoughts or perceptions about AI at the outset of this project? Yeah, pretty much plus-plus to everything Solana said. But for me, not really having a hard tech background (I'm not an engineer, I'm not an AI expert), I'm somebody who cares about technology and sees the ways it impacts all of us, even if you don't think of yourself as a techie. So I deeply appreciate the way that Solana, with the Internet Health Report, and the entire team at Mozilla have made these conversations accessible. A lot of folks out there, probably not listening to this podcast, might say, "This has nothing to do with me. AI? What does that have to do with me?" And I think this podcast and the Internet Health Report really push back on that notion, because from the way our medical issues are diagnosed to the way we vote in elections all over the globe, AI impacts all of us. So it is imperative that we all understand the way it impacts us, the potential for harm, but also the potential for good things
too. So, not just focusing on the harmful impacts, but asking: what can be better? Why does it have to be like this? How can we have space for dreaming and hoping that things can be better than they are? I think the thing that really draws me to these stories is the way they're made so accessible, where everybody can understand, hey, this really impacts all of us. It's interesting that you brought up the concept of a lot of people maybe not thinking much about the impact of AI on their lives. Do you think part of that has been exacerbated by the futuristic Terminator scenario, the sort of hype around what comes to mind when people say AI? Maybe that's the harmful thing people imagine, not so much automated weapons systems or things that could happen in the healthcare system. From your side, with the people you've talked to, or maybe just in your day-to-day life, do you find that to be part of the issue? How should we think about the general population's perception of AI and how it maybe needs to shift a little bit? There are so many directions you could take an answer to that question, but I definitely think there is this kind of exclusionary, "it's magic" element, a dark-arts mystique around AI that really serves the people who use it to exert power in different ways. So when we ask for AI to be more transparent and more understandable, it's partly about demystifying it to the extent that we can actually get to the heart of: how do these systems work, how does it affect me, and what can I do? That kind of obscurity, elevating it to a higher art form that normal people can't understand, is part of where the power lies. You have
that with other forms of knowledge and power systems as well. So a big part of what we're doing is bringing it down to a level where we explain how it works and how things can work differently. Often, which is what we have in the podcast, it's people explaining how they were harmed by a system but decided to design it in a different way, to do something different. Sometimes, through those stories of people building something, even on a smaller scale, you get this realization: oh, I was just completely taking for granted that we have to collect data in this way, or ignore privacy in this way. It opens your horizons for the hoping and the dreaming. And even when we're talking about data futures, or AI in the future, a better future, what we don't want to get into is, like, oh, robots. It's not a sci-fi future, really; it's more a you-and-me, real-life kind of future, because even though we're talking about very advanced technology, in some cases it affects people who aren't even on the internet. It affects you when you're walking down the street. There are all kinds of ways that aren't very high-tech, just basic daily life, where you encounter these technologies. So it's bringing it down to the ground level, where we can talk about it and where we can also approach it with grassroots communities when it's appropriate. Yeah, and just to add on to that, when we were first working on the podcast script, something Solana said that will always stick with me, when we were talking about how to phrase certain things as they pertain to technology or AI, is: we don't like to say "AI does this" or "the technology does that," because that's not actually true. It's people who are programming AI
to do this, or people who are programming technology to do this. That really blew my mind, the way I had just believed this idea that the technology is going to do what the technology is going to do, like it's this mystical robot I have no insight into. That really obscured the humans with power to make decisions about AI; it obscured their role in a way that I think benefited them. So I really agree with all the points Solana made. Yeah, I think one of the themes, even in the first episode I listened to, which was wonderful (I encourage everyone to finish this episode and then immediately go listen to the IRL podcast), was the connection of this technology to the data side as well. Like you were both saying, there's a human element behind this: part of it is what humans decide to do with the technology, but another part is that this technology inherently behaves a certain way because of the data humans have generated and chosen to put into training these algorithms. And this data isn't created in a vacuum; there's a human element behind the data side too. As you were looking at the stories coming in and curating for the podcast and the report, how much of it was around the applications of AI versus the data side as well? Because I know that's a huge part of what can go wrong with these sorts of systems. In the second episode we talk about the gig economy and workers tracking their own data, taking ownership of their own data in order to reverse-engineer the algorithms of the gig platforms, to figure out if they're getting a fair
deal, or if they're even getting what they're being told they're getting, which is difficult to assess. So there are tools like that, where you're changing the perspective on who data belongs to. Data that's generated by you, or by your community, or by you as laborers: who should that belong to, and who should have power and control over it? Those are questions being asked not just in the sphere of technology, but also among regulators. We also have an episode where we look at geospatial data: who has access to geospatial data, and labeling that data in order to interpret what you see. The stories you can tell based on a place, or the people who live in a place, can change a lot depending on what your motives are. So we have a story from one of the research fellows at the DAIR Institute, who is looking at the spatial legacy of apartheid in South Africa: how have townships changed over time, how do you measure that with geospatial data, and what kinds of datasets can you create to actually document the differences in the landscape that are not being tracked or talked about by the government? So much of what we talk about on this podcast has to do with the data side of things, whether it's the practicalities (often the data side is where trying to practically build these systems is difficult) or the element of ownership, like you're talking about. I'm thinking of language communities in particular, which is the area that I work in, and this idea that in certain cases big tech, or whoever it is, is really mining these communities for the language data they have to offer, and the systems built out of that really don't provide a benefit back down to those communities where their data has been leveraged. So
yeah, I really appreciate that perspective. Before we get into a couple of the individual stories: Bridget, as you were curating these stories for the podcast, how did you decide what themes to focus on? I'm sure there were so many stories related to AI that were interesting in one respect and not another. What went into the curation process, deciding what to focus on and what stories to feature on the podcast? Oh, well, I wish I could say it was all me, but it definitely was a team effort with Solana and the rest of the folks at Mozilla, an amazing team of writers and researchers who put this together. I would say (and Solana, I would love your thoughts as well) the stories that resonated the most are the ones that really have that human element at the center. The stories where you hear, "Oh, I was an engineer at Google and I experienced this, and this is how it felt for me to be going through that experience." These stories aren't just about the tech, and the people who make it, and the policy folks who shape it, but really what brought those people there, how they wound up there, and the emotional experience of being in those situations. Having that as the focal point is really at the center of what makes the podcast tick. And again, I wish I could take all the credit, but I really cannot. Yeah, and it's hard; how do you pick stories? I think we wanted to get into different corners of the issue. So with the Internet Health Report, we have this collection of data visuals, a compilation of research, that really asks the question: who has power over AI? You scroll through that and you get some different perspectives on what we even mean when we talk about power in that context, and what are
some of the facts around how the technology is distributed and controlled: who dominates in that space, who's making money in that space. And then the podcast answers the question: well, what can be done? So it's looking at some of those areas. What areas of big tech dominance of AI could we put a question mark to? What areas of surveillance, AI-powered surveillance, could we put a question mark to? And then thinking about where the opportunities are. A lot of people are talking about the opportunities of AI in healthcare, or the opportunities of AI for addressing poverty, or whatever, but I think it's also important to figure out how you're actually critical, how you assess whether AI is trustworthy. Because a lot of people will tell you: this is really good, this is good for you, or this is good for Africa, or this is good for women, or whatever, but it's not always true. Sometimes you need more people to help assess whether that is really the case. So it's a discourse, it's a conversation, and it needs to be many-sided in order for us to get smarter about how we build AI that's better. Yeah, and I would love to add something: what you just said really jumped out at me, this idea of being critical. I love technology; technology gave me my wings when I was a young person. But part of that love is also criticism. Part of that love is challenging it to be better, and challenging the people who have power and the people who make it. I want to get to a place where being a tech critic, or even a skeptic, is seen as a form of love of technology, because you want it to be better; you want to be able to ask, why can't this be better? That's exactly it. And I think when we're talking about opening up that conversation to others, so that it's not just tech people talking among themselves about
this: the thing I've been repeating a lot in the past couple of days is, I might not be a data scientist or an engineer, but if your AI system is harming me, then I know something about it that you maybe don't. So there has to be a way for me to engage with you. Maybe it's not a conversation where I call you on the phone, but maybe there's a way I interact with your system, or I'm able to get through to a helpline, or something. Whether these systems work on a small scale or a mass scale, we need to make them adaptable to the input of people who have knowledge to contribute to them. Yeah, I'm so happy that all these things are getting brought up. You brought up the idea of this sort of power imbalance, which you highlight both in the podcast and in the report and the facts around it. I think that term gets thrown around in relation to AI a lot, but people might not have ways to think about what it means, what a power imbalance means, and what the end-user implications of it are. You show it in the facts on the website, but also in the stories. One of the ones in the episode I listened to was from Khan, I think was her name, describing some of what happens when Western entities create this technology for certain purposes that have implications for someone all the way over on the other side of the world. I'm not sure if you could bring out some of those things: how does the power imbalance play out in that sort of situation? So here's a line from her that I feel really gets it. She says: "The relationship that many of us have with technology is one-sided, especially in the global South, where a lot of this tech, the apps that we use,
the devices that we use, have been built in other places, in other contexts, by people who have not really imagined us as the end users." That is a really important issue, because if that tech is not built for you, with you in mind, or your needs in mind, that is a sign that you're excluded from those conversations. Something about that line really gets it for me: the people designing the technology that you're going to be using have not even really thought of you, let alone as an end user, the way she describes it, but as a person. They have not thought about how this might fit into your life, what kind of life you're living, what your life looks like; and so all the different ways it can be used against people. It really goes back to what you were saying before about data: so much of technology, the way it's used, is so extractive. It's just such a limited perspective that folks will design technology that takes and takes and takes from us, but doesn't really give us a lot back, or even really see us as people, as humans. Yeah, and the added context, which we don't really get into in this podcast episode, is that Khan leads research for the Digital Rights Foundation in Pakistan, an influential digital rights group there. So when things do go wrong with some of the big platforms, or with content moderation and such, they're the group that gets invited in to give advice to the big platforms. What she's saying is: we get asked to come in and fix it after it's broken, after it's caused harm, instead of these systems being designed from the beginning in a way that they're not intended to cause harm. And in terms of the big platforms: do they care about the people using their systems, the vast majority of whom are not in the United States, for instance? And then
how do they collaborate with local groups around the world: research groups, groups that also use AI to track disinformation, track hate speech, that kind of thing? How does that work? How can we make these systems work better? I'm thinking of our listeners: maybe there are people out there thinking, well, the technology we're building on our team, we have an expectation for how it's going to be used, we don't see that as harmful, and how can we possibly know all the different ways people could use our technology? What would you encourage them with, in terms of their own thinking about how this sort of technology could be used versus how they're envisioning it being used, which might be two different things? Well, one thing I would say right off the bat (and this is definitely also a question for Solana) is that I would encourage people to listen to the episode we put out last week about what we called "the tech we won't build." Laura Nolan's story of the work she refused to build at Google is such an interesting one, because she talks about how she didn't really know what she was building; the way the team she was on at Google, working on Project Maven, was set up, you really couldn't be sure what the work you were building would go on to be used for. But once she started poking around, asking the right questions, and talking to people in other departments, she did the work of finding out: oh God, I'm building something that could be used for horrible purposes, and that is not what I set out to do, and it's not what I want to be doing. Her story really resonates with me because I think it provides a really interesting blueprint for how folks can
do a little bit of investigation into the potential for harm of the technology they're working to build. I think another thing that could speak to your question, Daniel, is this idea that you can design one-size-fits-all technology solutions for everything. That's a tricky one. Sometimes there's this default imagined user, which often ends up looking a lot like the developers themselves. One example we have is the databases of images used in dermatology, for systems that diagnose skin diseases or skin cancer, where the datasets are almost entirely of people with white skin, and then don't work for people who don't have white skin. So what is the solution? Why does it have to be just that one big dataset, which is going to lead to misdiagnosis for countless numbers of people? Why don't we make other systems? Why don't we have community-based systems? Why don't we have indigenous communities, or language communities, people building their own tools and technologies and datasets that actually work for them, and then not have to deal with the arrogance of somebody telling you, "No, this works for you, this really works for you," even though you can prove that it doesn't, and the wrong people get misidentified and sent to jail? We have so many harms at this point across the use of these technologies, whether it's biometrics or facial recognition technology; it's endless. You can pick almost any topic and find some kind of harm. So if we're going to diminish that, if we're going to make systems that are more trustworthy, we need to learn from these experiences, because there are so many now. There's no reason for it to be that way; let's just make it better.
I know one of the things you draw out in some of the information you put up online (and I'm sure it will come out in the podcast as well) is: why do things have to be this way, and what tech should we not be building, even if it is possible? The other side of this is accountability, which I think you draw out too. There are a lot of people who stand to gain from AI, and a lot of applications already permeating our lives. Even on the last episode with Chris, we were talking about how quickly AI, and applications of AI, are spreading, at a rate much faster than regulation is happening. So who is really accountable here, and to whom? I guess that's connected to the power element as well. What did you learn about this accountability side of AI as you put together the material for the report? I think there's accountability in a lot of different areas, because you have accountability from businesses, or governments. Big tech accountability is the one probably most familiar to people, where we ask for more information about content that's harmful and how it's moderated, how recommendation systems work on social media platforms, that type of thing. But one example that stood out for me is from the episode we did on gig work. There's one woman who is a delivery worker, the head of an association of delivery workers in Ecuador, and she's on the streets of Quito. When she's interviewed about how these systems work and what concerns her about them, one of the things she says is that the government is so scared of falling behind with AI that they're willing to go with anything. They're so happy to have all these gig platform companies coming in from all different kinds of countries, with almost no demands
made of them for the fairness or the human rights of workers, because they want the systems to be there, and to thrive, and to be part of this happy story of AI success. And she's saying: in this eagerness to have a seat at the table, the governments are willing to overlook all kinds of things they wouldn't necessarily overlook in other areas of labor. You have a lot of that, I think, where there's this obfuscation through the technology that somehow makes people look the other way, whether it's the consumers, or the workers, or the governments, or the tech workers themselves. That's part of what we need when we're talking about transparency. It's not just "can you make a privacy policy that's really clear"; it's all these other things as well, like being really clear and honest about the limitations of a system, and who it is actually working for. So it's a difficult question to answer, because AI is everywhere. Yeah, and, let's be real, don't you both kind of feel like, as consumers or users of technology, there is a little bit of looking the other way and not asking too many questions involved? Every single time I order something from Amazon or Uber Eats, I know that if I think about it hard enough, I'm like: well, I really shouldn't be doing this, this really isn't good. I think, if we're being honest, a lot of us have experienced that in one way or the other. The work of the Internet Health Report that Solana is doing really asks us to confront that a little more clearly, and maybe not look the other way, and not just say, "Oh, this order is convenient, I won't think too much about it," but really grapple with it and its implications a little bit. Yeah, but at the same time, it shouldn't be on the
end user to navigate all those things, either. That's also why we're talking so much about the systems, and the regulation, and everything, because it can't just be on one party or another to figure this out. It has to be joint solutions, joint responsibility, to make sure things improve. Yeah, I think there are all sorts of things at play here, on the user side and on the backend system side. There are certainly things within a system; think about even something as simple as how data is transmitted. If I'm processing speech on a device, I can choose to transmit that speech up into the cloud, where it's stored somewhere and things happen with it, maybe things I intend and things I don't; or I can choose to process that speech on the device and maybe only send very anonymized metadata back up to some type of centralized system. So there are very real implications for how you design a system. Then I also think there's real, amazing work that can be done even by people who aren't technical. Like one of you was talking about earlier (I forget which), users can tell developers how a system is failing them or harming them in ways the developers never even envisioned. So there are a lot of things at play here, for sure, on both sides of this. And I guess the third side that might be interesting to discuss is the research side. People have always thought: well, we should just research things and figure out what's possible; it doesn't matter what it is, we just need to figure out what's possible and what we can do, right? Something that's always been fascinating to me is the extremely short cycle in AI between research and its application in real systems. A paper is published, maybe even before a conference
happens; it's published on arXiv, there are seven different GitHub repos implementing the thing, it's all out there already, and people can just grab it and go with whatever was literally just researched and peer reviewed. It's so strange to me how quickly that can happen. I know you highlight certain elements of that in the report and the podcast as well. From your perspective, as you were working on this, how often did the research side of things flavor the conversation, this sense that research was actually impacting users on a shorter cycle than people were expecting? I actually never thought of the cycle in particular, but it has been a real eye-opener how influential research and journals are on technology. Even without thinking of the speed, it was very confusing to me that the academic publishing cycle could be so influential on the business sector, in a way that I don't think happens in any other sector on that scale. Certainly not in other sciences, like physics or chemistry, where the cycle is much longer, from my perspective. Yeah, it's a different sort of thing, which is also why we chose to highlight and visualize results of research about the research papers themselves, which wasn't where I thought we would end up at the beginning. But it made a lot of sense when we were talking about how you get to the core of where decisions are made around AI. If the proofs of concept are being driven and funded, and the tone for what gets developed is coming from an incentive set by big tech, then you get a certain kind of research, which is different from what you would get if it were coming from a different angle, or maybe not from an elite university in the United States, but from somewhere
else, or in a different language altogether. So we look a lot at what kinds of datasets are being used for benchmarking in AI research. And again, this is not original research by us; this is a compilation of research that we've put together. Sometimes we've found research and then visualized it, made it more beautiful, made it more accessible, so that more people can enjoy it and understand some of the lessons from it. Yeah, I would encourage our listeners to check out the Internet Health Report site, which will be linked in our show notes. There's a facts page where some of these things are visualized, and I'm sure more content will be coming too, but there are some really interesting perspectives on the power imbalance at various scales, whether that's by frequency of dataset usage or by investments in AI in different parts of the world. All very interesting angles on this, each of which tells a certain aspect of the story. Maybe as we get closer to the end here, I'll ask both of you to respond. For the practitioner out there listening to this podcast, who might be thinking, "Oh, I wasn't really thinking as much about maybe having to say no to developing certain technologies," or, "There really is a lot to dig into here, in terms of thinking more about the data that I'm using and the downstream uses of the technology that I'm building": based on the stories you have told and are telling through the report and the podcast, how would you encourage them to be a positive force in this AI field, and to help shape the future of what AI is becoming? Any thoughts? Either one of you could start. Well, I'll start. I just love the question, and I think in making the podcast, one of the things
that really struck me is what a great resource we have in folks who work in tech. Whether you're an engineer or, honestly, whatever you're doing in tech, the stories of people who have pushed back and challenged power from within tech companies have been so impactful and inspiring to me. So I would just say: if you're a rank-and-file tech employee, you have so much power and so much agency, and there are so many folks using that power and wielding it in such interesting and inspiring ways. So yeah, really recognize, own, and walk in that power. These are fields where people are constantly learning and constantly pushing themselves, so it can also be time to learn from a different source, learn different things, ask different questions, be curious in other directions, particularly when you're thinking about the potential social harms or risks of these technologies. Listen to the people, because there's brilliant, magnificent research about how things can harm, but also about how things can be done better. It does require an open mind to say: okay, maybe the way I've been taught to do this, or the way I've been doing this for ten years, isn't the only way; maybe there could be a different way. But that does require some real empathy, and a willingness to listen and to engage with others. A lot of the people we highlight in the show are, I think, what we would consider heroes, whether they're working on small projects or big projects. I would say, if you're inspired or moved by what they're doing, reach out to them, support them, back them, share their work with others, elevate it. Because it's not that these things aren't happening; it's not that you don't have great datasets being created in other parts of the world. It just takes somebody to vouch for them, to help elevate them, and to help create more
diversity in the types of ideas that we consider part of this greater discourse about AI. Yeah, that's so encouraging, and I'm just so thrilled by the content you're putting out, and by the thought of a person working in tech listening to these stories and, whether at lunch one day or in a meeting, bringing up a story and saying: "Hey, I heard about this, what do you think? Have you thought about this before? What are we thinking in this area? Have you ever considered this?" It's just so encouraging to me to think that those conversations will be happening. I really appreciate both of your amazingly hard work on this, and the content you're putting out. For our listeners, we'll link everything we talked about in our show notes, so please don't wait: after you listen to this episode, just go over and start streaming the IRL podcast and catch up on it. I know I'll be watching as the episodes come out. So thank you both; really appreciate you taking the time to join us. Thanks a million from us as well. Thank you; you just described our dream. Thanks so much. All right, that is our show for this week. If you dig it, don't forget to subscribe; head to practicalai.fm for all the ways. And if Practical AI has benefited your life, pay it forward by sharing the show with a friend or colleague; word of mouth is the number one way people find shows like ours. Thanks again to Fastly for fronting our static assets, to Fly.io for backing our dynamic requests, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate it. That's all for now. We'll talk to you again on the next one. |
UCZbtcRQHGpU_tt2yM4C7k0A | NRrH9-kRMwc | Changelog | youtube#video | The geopolitics of artificial intelligence | In this Fully-Connected episode, Chris and Daniel explore the geopolitics, economics, and power-brokering of artificial intelligence. What does control of AI mean for nations, corporations, and universities? What does control or access to AI mean for conflict and autonomy? The world is changing rapidly, and the rate of change is accelerating. Daniel and Chris look behind the curtain in the halls of power.
Leave us a comment (https://changelog.com/practicalai/186/discuss)
Changelog++ (https://changelog.com/++) members support our work, get closer to the metal, and make the ads disappear. Join today!
Featuring:
• Chris Benson – Twitter (https://twitter.com/chrisbenson) , GitHub (https://github.com/chrisbenson) , LinkedIn (https://www.linkedin.com/in/chrisbenson) , Website (https://chrisbenson.com)
• Daniel Whitenack – Twitter (https://twitter.com/dwhitena) , GitHub (https://github.com/dwhitena) , Website (https://www.datadan.io/)
Show Notes:
Source articles for our conversation
• The Geopolitics Of Artificial Intelligence (https://www.forbes.com/sites/cognitiveworld/2019/01/28/the-geopolitics-of-artificial-intelligence)
• Geopolitical implications of AI and digital surveillance adoption (https://www.brookings.edu/research/geopolitical-implications-of-ai-and-digital-surveillance-adoption)
• Artificial intelligence is already upending geopolitics (https://techcrunch.com/2022/04/06/artificial-intelligence-is-already-upending-geopolitics)
• Huge “foundation models” are turbo-charging AI progress (https://www.economist.com/interactive/briefing/2022/06/11/huge-foundation-models-are-turbo-charging-ai-progress)
Learning Resource
• National Artificial Intelligence Initiative (https://www.ai.gov)
Something missing or broken? PRs welcome! (https://github.com/thechangelog/show-notes/blob/master/practicalai/practical-ai-186.md) | 36 | 2 | 0 | So we're far enough into this current AI revolution that it's a point of prestige. People have been hearing that if you don't get into it, you're going to get left way, way behind, and there's truth to that. We're starting to see that truth now in terms of different social or political groups, whether they be nations or corporations; we're already seeing power shifts. Some of that is prestige-based, some of that is the ability to drive economic interests, and obviously some of that drives military interests, but they're all related. And now that AI is touching every field there is, it's super, super important. So it's going to change all of those, and it already is. Welcome to Practical AI, a weekly podcast making artificial intelligence practical, productive, and accessible to everyone. This is where conversations around AI, machine learning, and data science happen. Join us at practicalai.fm/community and follow the show on Twitter; we're at practicalaifm. Thank you to our partners at Fastly for shipping our pods super fast all around the world; check them out at fastly.com. Welcome to another Fully Connected episode of the Practical AI podcast. This is where Chris and I keep you fully connected with everything happening in the AI community. We'll take some time to discuss some of the latest AI news and dig into some learning resources to help you level up your machine learning game. I'm Daniel Whitenack, a data scientist with SIL International, and I'm joined as always by my co-host Chris Benson, a strategist at Lockheed Martin. How are you doing, Chris? Doing very well, Daniel, how are you doing today? Doing pretty good. Lots of exciting progress on various fronts and with projects, lots of new results coming out, so I feel like
there are a lot of plates spinning, which is good, but then you have to bring things into focus sometimes. I don't know, do you ever read productivity-hack type books, that sort of thing? I hate to say it, but yeah, I get those feelings of desperation and I go, "I've got to level up," so occasionally, not constantly, but yes, I confess. Have there been any hacks that have really helped you over time? Turning off email and Slack; it works. I will put a Slack notice on some of our teams' channels saying I'm gone for a little while, just to focus. I've learned I have to set the expectation. But yeah, I'm starting to really separate time to think and get things done from time to collaborate, both of which are very, very important, but I've learned that if I try to do them all at the same time, it often is not what I wanted. Yeah, I've really liked reminders; you can set up reminders, and there are also automatic reminders in Gmail, like "remind me of this email next Tuesday" or something like that. That's been really, really helpful for me in all sorts of ways. It's like the single greatest feature, at least for my workflows, that I've seen; one of the things I've used for quite a while. I don't know if they use AI to determine when to remind you about things, or if it's all rules-based, but however they're doing it, it works for me. That sounds good. No, that works well. Speaking of redefining workflows and the power of artificial intelligence and machine learning systems: last week we had a discussion on a Fully Connected episode about large models, sentience, some new paradigms and models, and that sort of thing, which was really fun. Another side of this, though, that I think we wanted to follow up on in this episode, is maybe a more global perspective on how artificial intelligence,
how machine learning, is shifting geopolitical, social and economic change in the world, and as practitioners, how that should be on our radar as we're building systems that are contributing to that. So yeah, I know that you've put in a lot of thought about this.

I spend a lot of time on this topic, as you know.

And I'm coming at this as a person who maybe doesn't spend as much time on it... I think systematically about things, but not necessarily politically about things. I even remember around the time GDPR came out, there was a lot of discussion about regulation around algorithms and that sort of thing, but it got a lot of news mostly because it was maybe the first really, really big regulation around this sort of stuff. As you've been following this area more closely, how have you seen the discussion of AI plus politics plus economics plus social change in the world... how have you seen those progress generally over the last couple of years?

Oh, there are so many paths we can take down that. I'll actually start with the one that you just brought up, and that's GDPR. That, as you pointed out, was the first big regulation to regulate data concerns in Europe, but its scope was fairly limited, it addressed everything in a uniform manner, and it was a bit ambiguous. In European-related conversations that I've had, I've heard a lot of criticism over the subsequent years. I think the hope was that it might be the first very imperfect step, that learnings would occur, and that further regulation would follow that was a little bit more insightful and thoughtful, having learned a bit as we forged through this new landscape. I think that may have somewhat stalled at some levels. There was a recent conversation I
had where there was some strongly worded conversation against GDPR from someone I was talking with.

I guess one follow-up to that is: do you think that regulations around AI or machine learning systems are keeping up with the widespread deployment and application of them? Okay, that was sort of a rhetorical question, but I thought I would mention it to be completely transparent: there's this wide gap between the deployment and scale of AI and machine learning systems and the regulations around those.

There are so many things to go into down that rabbit hole. I mean, AI affects politics directly, and I don't mean the output of an algorithm. I'm talking about having the capability of both applying AI and conducting novel research. There are things that people don't think about... that is a form of prestige, for instance. We tend to go to things like economics and the science and all that, but the ability for a nation to do that is a point of pride, and the perceptions among nations, or large corporations (it could be any large entities), of what they're able to project in terms of their capacity... that has a huge impact on business, and on people's perception of business, and thus economics at a large scale. So that's just one little rabbit hole we can go down, but there's a lot out there.

How much of a nation state's focus on an AI, quote, "strategy", do you think... and this is in generalities, but how much of that do you think is merely for the prestige, and to not get left behind, and how much of it do you think is related to real strategies that are core to the economics or the social aspects or the political aspects within a nation?

It's a great question, and the answer is yes to everything, but on different time scales and priorities and budgets. So we're far
enough into this current AI revolution... I mean, you and I have been doing this podcast for four years now; it's July of 2022 as we record this, and we started in July 2018, and we were several years into it when we started. So it's far enough along that it's a point of prestige. People have been hearing, "If you don't get into it, you're going to get left way, way behind." There's truth to that, and we're starting to see that truth now in terms of what different social or political groups, whether they be nations or corporations or whatever social division you want to make... we're already seeing power shifts in a bunch of different areas. Some of that is prestige-based, some of that is the ability to drive economic interests, obviously some of that's to drive military interests, which is the industry I'm in in my day job, but they're all related. And academic, too: if you are a country that is trying to build its educational system, and you need your universities to be the types of destinations that will draw in not only your citizens but citizens from other countries, and you're trying to build that... well, there are not enough professors in AI out there. Not even close. Here in the United States it's a massive problem that we don't have enough instructors just to teach the basics. That spreads around the globe, and there are portions of the globe that are really struggling to find anybody who is competent to teach these areas. And so that impacts each university's ability to be reputable enough to draw a Daniel Whitenack in, or somebody with your interests. You went through this some years back: as you were going for your PhD, you had to make choices on where you were going to go, and the students of today are making those
choices, but the landscape is changing. And now that AI is touching every field there is, it's super, super important. So it's going to change all of those, and it already is.

And I think if we're looking at a global scale, there are the nation state actors, but then also global companies and organizations. I'm even just thinking of SIL in my own context, just because we're now intentionally making efforts in the AI and natural language processing area and establishing intentional projects in that area. With the pipeline of talent into SIL, some new opportunities have arisen: a pipeline of talent that maybe just wouldn't have even known about our organization were it not for those efforts. So I think there's also this pressure at a company level to have a visible AI effort, regardless of whether they really understand what their goals are. It's this "I don't want to get left behind, but also I want to make sure and get some of this talent, because it seems like everyone's trying to get this talent." And I do wonder, both at the political level and the corporate level, at those kinds of higher leadership levels... at what level do politicians, and at what level do corporate execs, actually understand the implications of establishing an AI strategy within whatever is under their purview?

So it's becoming very common to have both national-level AI strategies and corporate-level ones, and for the most part a lot of them look alike as you move across different organizations.

Which is probably a tell.

It probably is, and I think the differentiation occurs with leaders who are very forward-leaning and are spending a lot of time thinking about where they want to get to versus what they have today. I think that makes a big difference on whether or not their approach is actually going to be viable from an investment standpoint,
in terms of its outcome. But yeah, I know for a fact that there are leaders of state who are directly involved in these efforts, not because they have expertise in it, but because they understand that their national interest is hinged to it.

So this is maybe a lower-level question, but I think it's connected. If you are an AI practitioner out there, or a technical person or a tech lead or a manager, whether in a government organization or in a corporate organization where this sort of trickle-down AI strategy is reaching you, and you've got a mandate to "do something with AI", but it's unclear to you what that means or how the value comes out of that, what recommendation would you give to such a person to navigate that scenario? Because I do think it's happening in many places.

Well, it's funny that you ask that on our show called Practical AI, because my answer is incredibly practical, as you won't be surprised. And that is: for your organization or your nation, what are the challenges that you're expecting to face? I think a fantastic example of that is the AI in Africa series that we've been doing over the past year, or maybe longer now. It's been fantastic seeing these AI researchers in various African states addressing the needs of their populations. They are channeling productive AI research to address those needs, and when I have conversations with other people throughout the world, in other contexts, I actually point to that directly and say that's a fantastic way of approaching it. Because a big, fluffy AI strategy is fine, but if it's not something that makes a difference in outcomes, it's a waste of money and time and effort. So you've got to bring it all the way down to solving real needs.

[Music]

So Chris, I've been reading a few articles related to this, and we'll link some of those in our show notes,
but I think what you were just talking about is really interesting, in that in this series we've been doing about AI in Africa, we've learned that applications of AI within the local language ecology, or geopolitical situation, or nonprofit situation, or whatever situation, are very different than a parallel application in a location in a Western country, for example. The agriculture things that we talked about: the way AI is being applied in agriculture in the West is quite a bit different than the large-scale application that's needed within the African context, and we learned that with our guests on one of the previous Spotlight shows. As I've been reading these articles, they talk about new models of growth and how AI will shift power structures and that sort of thing. But one of the things that's interesting is that AI systems applied systematically and very globally, if they're coming purely from the perspective of one nation state, might try to scale out globally, but in a way that's very irrelevant to other contexts. For example, an effort to apply text machine translation to every language of the world would ignore the fact that some languages of the world have no written form, right?

So true.

So what does that mean when we say that we're creating this new structure of growth and enabling wider commerce with machine translation and that sort of thing, when actually, if your context doesn't fit into that model of growth, then you're further marginalized, in some senses?

Absolutely. I would summarize that by saying that diversity matters: diversity of experience, and diversity of the challenges of a particular culture or group of people. The complexities that arise in their experiences have to be accounted for if they want to use AI in that toolbox to
address those things. So going back to your original point, which I thought made a lot of sense: if you're not customizing how you're using AI, and the focus of your research, to the particular needs of your area, and to the issues which arise from your point in a diverse world, you'll get a substandard outcome. You can't take something that might be a good approach in the United States and drop it into a country that has a very different culture and a very different economy; it's not going to work well. It takes that thoughtfulness. So when I see somebody, and by "somebody" I mean a nation or a corporation or something like that, just copying what the others are doing, it always makes me cringe a little bit, because it shows me that either they didn't understand the need for that focus and that customization, or they simply weren't thoughtful enough about it.

Yeah, and it makes me wonder generally... if AI systems continue to be dominated, in terms of their development and the strategy around how they're developed, by a few certain actors, that runs the risk of, at the minimum, irrelevance when they're applied in a whole variety of contexts, but at the most, a sort of harm when they're applied in many contexts.

And the number of unintended consequences you can have without that is pretty key. The way that you apply them, for better or for worse, directly affects the power structures of the institutions and nations that we're talking about, so it has a very real and extensive outcome, much of which is outside the scope of what people are thinking about when they're trying to apply it. It can affect the relationships those organizations have with others, or with other nation states, and it also affects
the internals of those organizations, and where budgets and power lie going forward because of the investments people are making there. That can be in the private sector; it can be in the public sector, in terms of education, in terms of government; it can obviously be in military investments and approaches going forward. So there are so many places where it has consequences that, based on observation, I would say usually aren't expansively seen ahead of time or predicted.

And what are some of those key shifts in power, or shifts in power structures, that you think would be worth highlighting? Is one sort of nation state government versus private sector? What other ones are in your mind when you're thinking about shifts of power in various ways?

Well, at the highest level, if we're talking nation state level competition in a general sense, there are aspirations that nations have, and they compete with each other in a variety of domains. There's economic competition, there's academic competition. On this show we've talked many times... people often speak of the competition that has arisen in AI between the United States and China, and the economics involved around it, and the number of academic papers being published. All of these contribute to trying to position. Obviously, as an offshoot of that, there's the way that power projection in a military context is changing over time, and AI is certainly affecting that. So we're at a really curious moment in history right now, and I say curious not meaning good or bad, just one of those moments where you start watching. And one of those is, as we are recording this, Russia invaded Ukraine a few months ago, and the whole world has kind of banded together, thank God, and stood up for the world order of not invading your neighbors and killing your
neighbors. But if you look at how that affects non-military concerns, every nation in the world is watching how the conflict, and the economics around it, with the sanctions and everything else, are being affected, and a lot of those mechanisms are now being optimized with AI algorithms. So you have these little AI solutions sprinkled all over the place, economically and in military capability and all that, and then you have everybody in the world watching to see what happens. And before I abandon the military thing to move back to the general thing, I'll note that we are proliferating AI capability all over the place, which will proliferate autonomy all over the place, and so the nature of conflict between these nation states is also changing. In Ukraine, Ukraine is doing this heroic job of defeating big platforms, these big tanks and expensive aircraft, with little missiles that only cost a few thousand dollars, and some of those missiles have real capabilities. Over the future we will see more and more autonomy and AI enablement in those types of things. So you're seeing a world where conflict will be judged by the proliferation of many, many more mostly autonomous things than we have now, and that also changes the need for investment. As you're looking at that, countries are having to think, "If I'm going to be safe from an aggressor, in this case like Russia, going forward, how do I invest to do that?" They're having to do that in the military context, in the economic context, in the academic context. And then all of these global organizations that are household names are having to react, because they're operating in those environments. So it really has this endless web of influence going around.

And also, those that are in charge of, or have the power in, certain domains of technology oftentimes make
an even more visible impact in these sorts of conflict zones, potentially, than even nation state actors. In the Ukraine situation, in certain cases more so than any other government or state intervening, you have a lot of companies, whether that be IBM, Dell, Meta/Facebook, or Apple, who made a big impact by ceasing operations within Russia as a result of the conflict, and you see the power that has in pulling away that technological capability.

There's a huge impact from that.

And then you have even individuals, like Elon Musk, who did all the stuff with Tesla and his Starlink satellites in Ukraine. Whatever you think of Elon Musk, you must realize this had, at least publicly, a very visible impact on the effort, to see support from that type of person, from that type of technology. So whether it's a perception thing or an actual tangible impact, those that hold the technology, and I think more specifically are really plugged into this advanced technology, like AI-enabled technologies and autonomy, hold a lot of the power, maybe even over nation states, at least in certain scenarios.

Oh, indeed. I agree with that completely. The sway of powerful people with powerful corporate backing has tremendous impact on the decisions that nation states are making. So AI is an incredibly valuable national resource, or corporate resource, depending on what structure you're in, and like any valuable resource, it is now being used, and has been for some time, to change the balance of power and change future paths. This is an uncommon conversation for us. We're usually focused more on the practice of using AI, or AI research and stuff like that, but we're living in this larger context, which our community often isn't paying super close attention to, necessarily.
And we'll think about things like AI ethics, but that's at the practitioner level, as opposed to this environment above us that the activities we're all engaged in have made a huge impact on. So it's all connected; we're not working in isolation as we do these things.

[Music]

Chris, you brought up autonomy as one of the things that's at play in this whole geopolitical side of artificial intelligence. I'm wondering, as you've thought a lot about autonomy, both used by governments and used by companies and other things... as it's becoming more widespread and global in its application, what are the strategic, and maybe human security, risks associated with a wider spread of autonomy, or of systems that maybe operate with very little human input, if any?

Autonomy will be pervasive going forward, and I'm not going to put a timeline on that, and you can define "pervasive" however you want, but what I have certainly observed for a number of years now is this steady progression. You see things happening in the news... you drew Tesla into that, and there are many other companies also driving autonomy forward. We're going toward a world where many of our activities are autonomous, and it will change what it means to live day to day as a person in any culture. That's some of what we have to navigate going forward, and doing that changes the power structures associated with those cultures and who is influencing different things. The creators of the autonomy, with the ability to apply autonomy to certain points in their society, are having outsized influence compared to others. So we're definitely going in that direction: clearly military applications, and clearly many, many different industries are doing that. I've long said that there will be a point in our lifetime where it becomes uncommon for us to drive cars, and I'm not a spring chicken
anymore, because the technology is moving really, really fast there. We're already seeing it: there are Teslas all over where people are using Tesla technology to drive autonomously, and that's only going to get better and better across all autonomy manufacturers. So it will not take long to see crash reports showing that the number of autonomously caused crashes is quite tiny compared to the number of human-caused crashes, for driving cars, for instance. Same thing for aviation. The military has led the way in autonomy for aviation, mainly because it can; the civilian world is still quite frightened of assuming that a machine is going to fly the airliner for them, but the data tells a very clear story about safety there, and about capability. So yeah, that's going to be the world, whether it be robots, or vehicles, or other tools that we have in our work and in our houses. This is part of our lives, and the people who bring us those tools and allow them to happen will be the ones with the power, whether they be politicians or corporate leaders or whatever.

What do you think about companies that would explicitly put in their set of principles, "Hey, we are going to explicitly build human-in-the-loop AI systems, and we're not going to venture beyond that"? Maybe that's too broad of a statement, but how would you encourage people to think about that, both in terms of the strategy and as a sort of wider-reaching principle within an organization?

I think it depends on the application. It's funny, I have a lot of friends and colleagues that I have these debates with; this is what we're chatting about over coffee on a regular basis. And I'm going to come down with what maybe most folks may not agree with me on. I tend to come down with opinions that aren't what I want, but what I think is inevitable, and what I think is inevitable
is this: there will be many, many instances where humans and AI are interacting, because the nature of the work itself is human. It requires both human and AI, not because we want it to, but because that's fundamentally how the work gets done; it's human-centered work. But there are also many activities that don't necessarily need a human in the loop. It might make us more comfortable, it might preserve jobs, things like that, but it's not the most efficient route. So, whether I like that or not being irrelevant, I think we will see, going forward, that we get to a point where, if it's not a human-centered activity, the partnership with a human in the loop versus a human not in the loop just doesn't make sense anymore. The human becomes the big challenge, the limitation, performance-wise, in terms of speed, all sorts of things, and we will see activities that occur without a human in the loop, because at the end of the day they're going to have to. I say that a lot, and when I get into specifics it makes people very, very uncomfortable at times, but that doesn't change the fact that I think it will happen. So there is a need for us to be very, very careful with our decisions on that, but then we're also inevitably going to have to get comfortable with autonomy all over the place. In some of those cases... I know most people are terrified of the idea of getting on that airliner and flying cross-country with no one in the cockpit, and I don't think that will happen soon. I think there will be a human pilot who sits there and basically does nothing but monitor the systems, with an ability to override, but that pilot's skill will be far, far below what the autopilot can do automatically. So that will be done strictly to make the humans feel better, because your backup, your human, is going to be orders of magnitude less capable of handling that aircraft in an
emergency than your autopilot. So that's the kind of thing that is inevitable at some point.

Well, I do have to make a confession, and this is going to seem off-topic, but I've been using Vim as my editor since... I don't know, years and years and years, but now, not completely, but I'm using VS Code a lot because of Copilot.

There we go. I knew that was coming. Me too.

And this, I think, really brings it home. I just love it, and I know there are mixed opinions on it, but I would say overall, most people I've talked to who have really dug in and legitimately tried to use Copilot are pretty astounded with the efficiency gains and just what you're able to do with it. For those that aren't familiar, Copilot, from GitHub and Microsoft, is a coding assistant that's built into VS Code, and I think it actually does support other editors now, although I wasn't able to quite get it set up otherwise. But it's just amazing. As a human, I can focus on the bits that are really important for me to logically consider, in terms of how the program flows, and maybe the more complicated bits of it, and the other things, which are like "get this data from this database"... like, write a SQL query or whatever, and boom, it just does it, really well. Maybe I modify a couple of things, but often I actually don't, because it's pretty good.

Yeah, it is amazing. And I think that's a good example of...

I really don't believe that programming as a whole is going to be automated. I mean, they've been saying this since programming started, that things are going to be automated. I think there will be a lot of things that will be easy to generate, but I think that programming will not go away. That's my own opinion.

Interesting. I think programming... I mean, the model that drives
Copilot is using all of that open source code on GitHub, and there's a whole debate about whether that's an appropriate use of open source to create a business, which Microsoft has been criticized for in the last few weeks. But the fact is that that model is learning from a wealth of the best code on the planet, much like that airliner that doesn't really need the pilot flying the plane. And I am using Visual Studio Code myself, because I'm working on a project that's hands-on code, so I'm coding every day. But I've got to say, I don't know that I agree with you there. I'm starting to feel like that airline pilot who's sitting there just saying, "Yes, I'm accepting that code. Yes, I'm accepting that code." It's just doing it.

Yeah, I think the fundamental difference in my mind is, similar to what we were talking about last week, this apparent coherence that's produced by these types of models. Perception-wise, as a human coder, I never expected it to be able to produce a function like that, but that's because of this vast wealth of data out of which it's able to assemble apparent coherence. The things that I've seen in Copilot that are really specialized, logical pieces, things specific to my context, still require a lot of tweaking. And you can comment on this, because you're way more familiar with the aerospace use cases and all of that, but my impression would be that an autopilot for a 737 or something, flying between known routes in the US, is probably able to almost do everything perfectly; but if you created a completely new airplane and just put the same model in the new airplane, it's not going to work, right? So there does still need to be this fine-tuning, and I think that's where the human
element comes in; there's still an adaptation to out-of-domain data.

Right. I'm going to make a stretch here, and I'm not speaking literally, but we were just talking, I think last week, about these visual Transformers and the amazing things they can take from text input, and we were talking about addressing different domains where you're taking the same techniques. What if one of those domains is conceiving of software systems? So instead of drawing pictures of raccoons, which I was really enjoying, by the way, what if it is conceiving of software architecture for a particular problem set? And then you already have things like Copilot that can go and find just the right code to fulfill each of the things you're trying to do there. I'm not saying that we're there at this moment, but what I'm saying is I can certainly conceive of putting chocolate and peanut butter together in the context of coding, and having something that's particularly tasty. So I don't know... you have a point there, but I don't know if that point will survive very long, is what I'm getting at. And I love programming; I think it's a wonderful thing for a human to do, which is why I've stuck with it off and on for all these years, but it also won't surprise me when there's no utility for a human to be there anymore.

I think it's one of those things where the domains in which we operate continually evolve as well. As soon as I'm writing code to do a thing on Mars that I already wrote code for to do on Earth, it seems to me that there will be out-of-domain issues that are unexpected and will need human input over time. So I think maybe part of what you're saying is that the jumps, or the adaptations,
that we're able to handle now are kind of fine-tuning adjustments for domain, and with generalist models that are able to switch between different domains, the switching will probably become easier over time. But also, the domains that we're exploring are becoming increasingly different, and big, over time. So the question will be, how do both of those trends evolve over time? That's a really interesting question, I think.

So I'll speculate, as we're winding up a little bit, to bring it back to how power is shifting at the corporate or geopolitical level, and the role of AI. We're talking about these capabilities that we've been discussing over recent episodes, and those who have the creative insights to take advantage of these capabilities and see opportunities. The thing that humans still have right now is... you have a form of very limited creativity in AI. In other words, it's not self-aware. You can create those raccoon-on-a-rocket pictures now that we were talking about, which is pretty cool, but it's not sentient, it's not self-aware, and it doesn't have a special understanding of the overall world at all the different scope levels that we have.

Right, there's an apparent intent behind the model, but it's only a perception.

Yes. And so we have the real thing, and it will take a while to eclipse all that. So there's a role for humans, and the humans who learn to do that really well, and are very flexible and creative in the way they approach the world, are the ones who will have the power.

So you're saying because I switched from Vim to VS Code and Copilot, I will have the power?

You are a power monger, Daniel Whitenack. You're just grabbing power where you see it. I see this in you. I understand how this works. But yes, it will be those who take that, and recognize
something, and can go do something new that their peers are not yet able to do, that will continue — whether they be technical or non-technical people, they will have the power because of these resources that we have, and they will sway things at all levels, from the practitioner all the way up to the geopolitical leader. So yeah. Well, I think that's a good way to sort of come to a close. Maybe one more thing, Chris. Maybe there are practitioners out there who are aware of what they're doing in their own company — aware, maybe, even of sort of best practices across the industry — but they're just curious about looking at some of the conversation that's happening at the geopolitical level around artificial intelligence, just so they can learn kind of broad trends and what's being talked about regarding their industry at maybe the government level. Is there a place that they can go to at least be exposed to some of those things? Yeah, and I'll give one in a second, but I'll lead by saying that most large organizations and most nation-states now have official AI resources on their websites and such. And so whatever country you happen to be listening in — and we have listeners all over the world — your nation has resources there for you. As you and I are sitting here in the United States, I'll point to our own government's resource. A starting point is at the URL ai.gov, and if you go there, it is called the National Artificial Intelligence Initiative. It was created by a law in 2021 called the National Artificial Intelligence Initiative Act — I'm sorry, of 2020; I got the year wrong there. And so it is a site where you can start to see how the United States government at large — this is not specific to the military or DOD — has a strategy. You can Google it; most militaries also have that. So if you have an interest in your country or military or whatever, all
of these different domains or dimensions have these resources online. If you go to the ai.gov one that the US government has, it has what they call strategic pillars. It has different sections with documents such as strategy documents and different publications, and some of the laws associated with it, and then they have other resources available. So if you're interested to see how the people in power over you are thinking about AI and how it may directly influence you, your life, and your family, you should go and see what these governments are thinking. And you know what, I'm going to finish by saying: participate in the process where you're at, so that you can influence people toward the right decisions. Yeah, I know. I mean, this is happening at a local level, too. Even in my small town, we recently had a bunch of discussions locally about facial recognition in policing that were going on in our local community. So yeah, this is happening across the board. Thanks so much, Chris, for helping me learn a bunch today. It was a fun discussion. Yeah, it was. We'll talk to you soon. Take care.

[Music]

All right, that is Practical AI for this week. If this is your first time listening, subscribe now at practicalai.fm, or just search for Practical AI in your favorite podcast app — we're in there. And if you're a longtime listener, please do share the show with your friends. It is the best way you can help Practical AI succeed. Thanks again to Fastly for shipping our shows super fast all around the world, to Breakmaster Cylinder for the beats, and to you for listening. We appreciate you. That's all for this week. We'll talk to you again next time.

[Music]