If you're using large language models in production, you'll want a way to log the prompts and the responses. This is useful for evaluating performance and checking the quality of the responses. And it's also useful for gathering data that can later be used for fine-tuning. This is true whether you're using private models li...
Now there are a number of approaches, including using services like Braintrust or HumanLoop, or using libraries like LangFuse, which provide a hosted offering and also offer a way for you to do it on a self-hosted basis. But today I'm going to show you a very simple approach that allows you to take any OpenAI-style end...
I've put the scripts up on Trellis OpenAI Logger. This is an open-source and public repo on GitHub. It's also a package that has been pushed to PyPI, so you can easily install it with pip install trellis-openai-logger. I'll start by showing you a very quick example of how to use this logger to log the results or the ...
Now, I've just git cloned the repo over into Windsurf, and I've opened up the readme here. And I want to show you how to quickly start and use the Trellis OpenAI Logger. It's very much the same as if you're normally importing the OpenAI library. So instead of importing OpenAI from the original library, you import it fr...
You pass in, optionally, your API key and your base URL, if you're going to be using your own custom URL or API key. If you're just using OpenAI, you don't need to specify these. But there's one extra parameter you pass in when initializing your client, which is the Postgres URL here. And here I've just put in a local Pos...
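To make that concrete, here's a minimal sketch of the initialization. The module name trellis_openai_logger, the postgres_url parameter, and the llm_logs database name are all inferred from this walkthrough, so check the repo for the exact names:

from trellis_openai_logger import OpenAI  # assumed module name

client = OpenAI(
    api_key="sk-...",   # optional; falls back to the OPENAI_API_KEY environment variable
    base_url=None,      # optional; set this if you use a custom OpenAI-style endpoint
    postgres_url="postgres://postgres:<password>@localhost:5432/llm_logs",  # where logs are written
)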
I'm going to set up a temporary folder. I'll just make a directory here called temp, and I'll move into that temporary directory. Then I'll create a virtual environment with uv venv, and I'll install trellis-openai-logger, which is just like installing the OpenAI package, because this is a wrapper that will send and log your data to Pos...
I'll show you how to do that later, and get a response from OpenAI using GPT-4.1 Mini, and print the response. So I'm going to copy all of this. Now, before I do that, there's one more step I need to do, which is to export my OpenAI API key. So I need to paste it in there. I'm just going to do that off screen now. Okay, I...
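The snippet being copied looks roughly like this; apart from the import and the postgres_url (both assumed names, as above), it's the standard OpenAI chat-completions pattern:

from trellis_openai_logger import OpenAI  # assumed module name

client = OpenAI(postgres_url="postgres://postgres:<password>@localhost:5432/llm_logs")

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "What is 1 + 2?"}],
)
print(response.choices[0].message.content)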
I'll paste in that little snippet and press Enter. And you can see I get back a response, and it's got the answer: 1 and 2 added is 3. So that's good. And I'll exit my interpreter now with Ctrl-D. So at this point, I've just sent a query to OpenAI, and now it should be logged. We can take a look at what the log look...
If we go all the way down to a section here on querying the logs, we can connect to the Postgres database locally. And I'll go through this more slowly in a moment. And we can then make a query to inspect the last conversation, just like this. And here you can see the latest record. So you can see the time, the model, ...
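Here's a sketch of that inspection query in Python; the table and column names are my guesses based on the fields shown in this walkthrough, so treat the repo's migrations as the source of truth:

import psycopg2  # pip install psycopg2-binary

with psycopg2.connect("postgres://postgres:<password>@localhost:5432/llm_logs") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT created_at, model, latency, total_tokens "
            "FROM logs ORDER BY created_at DESC LIMIT 1;"
        )
        print(cur.fetchone())  # the latest logged request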
latency and the total tokens. So there you have a very simple example. And now I'm going to show you how you can set up that local database, or how you can point Trellis OpenAI Logger to a database that's running on a DigitalOcean droplet. So first I'll show you that local setup. Just go to the database section. And y...
This will be handy for testing, in the same way that I just tested with the quick demo above. And also, you should set a secure password; I recommend not using the default password shown here, you should put in something else. So you want to create a .env file with these variables. You can just copy-paste this code and run it i...
You can find the instructions for Windows or Ubuntu via ChatGPT. Next, we're going to create a database. Now, it'll probably fail because I've already created it. That's fine. And once it's created, we're going to run this local setup here. Now, I actually should be back here in the logger folder. And I'll run...
And just to show you what's happening with the local setup here: we're installing dbmate, which is used to update the database to the structure we want. We're then setting a database URL for dbmate, which will help us with the migrations. And then we're running the migrations. And the migrations are set here in t...
So we want to store the model name, the endpoint, the input messages (which is a list of messages), the raw response, latency, status code, prompt tokens, completion tokens, total tokens, and then metadata. So this is the format that we're going to store everything in. And then we can run a quick test just to check that the...
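If you want a feel for what that migration defines, here's a reconstruction of the table from the column list above, applied with psycopg2. The actual dbmate migration in the repo may differ in names and types:

import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS logs (
    id                BIGSERIAL PRIMARY KEY,
    created_at        TIMESTAMPTZ NOT NULL DEFAULT now(),
    model             TEXT,
    endpoint          TEXT,
    input_messages    JSONB,   -- the list of request messages
    raw_response      JSONB,
    latency           DOUBLE PRECISION,
    status_code       INTEGER,
    prompt_tokens     INTEGER,
    completion_tokens INTEGER,
    total_tokens      INTEGER,
    metadata          JSONB
);
"""

with psycopg2.connect("postgres://postgres:<password>@localhost:5432/llm_logs") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)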
And if you do want to delete that database, you can run dropdb on the llm_logs database. So this is how we create the local database. If you want to do a test, you can go to logging traces, and you can run a ready-made script I have, which is just example.py. So to run that, you first want to source the environment variables, w...
And when we check this here, this is not working, because in my environment variables I've got an old database URL. And this is actually the correct approach to use if we're going to run from a droplet, because we want to point to a remote URL. But we can just set the database URL for the local database like this here...
You can run it. And what it does is it sends a request (I'll show you the code in a moment), gets a response, and then logs it. And the request you can see here in example.py. It's getting the connection info for the database from a URL. It's testing the connection. And then it's going to load up the database URL...
And then it's getting a response to a simple question here, which just says hello. And then it's going to query the log data to make sure that it is actually present and telling us that it has successfully logged and found logs within the database. So that's just a simple script you can run to test your database is wor...
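Based on that description, example.py does something like the following. This is a sketch, not the repo's exact code, and the table and module names are assumptions:

import os
import psycopg2
from trellis_openai_logger import OpenAI  # assumed module name

db_url = os.environ["DATABASE_URL"]

psycopg2.connect(db_url).close()  # 1. test the database connection up front

client = OpenAI(postgres_url=db_url)  # 2. send a trivial request through the wrapper
client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "hello"}],
)

with psycopg2.connect(db_url) as conn, conn.cursor() as cur:  # 3. verify it was logged
    cur.execute("SELECT count(*) FROM logs;")  # table name assumed
    count = cur.fetchone()[0]
    print(f"Successfully logged; found {count} log(s) in the database.")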
And then you can run a simple SQL command like this. And it should show us. Let's just run that again. It should show us the last request. Indeed, it says hello. And here's the response and the latency and then the total tokens. So that's how you set up a local database. Pretty fast. I'll now show you ways to set up a ...
You want those logs to be stored on that server and then you can retrieve them later. And I'll talk much more about retrieving the logs and how to use them for doing evals and doing fine tuning in a later video. So let's look at how we set up a remote database this time, a remote Postgres database. And for that, we go ...
The simplest, well, I don't know which is simplest, but let me describe the two ways. The first way is to set up an Ubuntu server, install Postgres, and use that as your database. The second approach is that DigitalOcean and other providers will directly provide databases, and so you can directly set up a databas...
Now, if you set up a droplet, it will probably be a little bit cheaper, but the managed database has more capabilities, things like backups, for example. Now, you can enable backups on a standard droplet too, but the managed database probably has got more tailored functionality for running databases robustly. So maybe ...
I'll also note that to run Postgres comfortably, you probably want a minimum of two gigabytes of RAM, maybe four, ideally four to eight. So it's going to be hard to run it on the $6-per-month droplet, and therefore it's probably going to be about $12 per month if you go with specs like the ones here in m...
First, we're going to make sure that the DigitalOcean client is installed with brew install doctl. That's D-O-C-T-L, the DigitalOcean client. And we're then going to authenticate. So for this, when you run it, you'll be asked for an API key, and then you're going to log in. Now, to connect to your droplet, you're going ...
So here's how you would generate an SSH key. I've already generated this one. And here's how you would add it to DigitalOcean. Okay, and with the SSH key generated, we can now proceed to the automated scripts for creating the droplet. So I'm just going to run those, and then I'll go over to that script and explain how ...
And then it should move on to starting up the next droplet. I'll show you now what that startup script looks like; it's the create-droplet script here. It's going to start off by setting up the droplet name and the SSH key name. There's a default database password, which will be grabbed from your .env file, and I recommend cust...
So here is a quick validation and a warning if it's using the default password. Then it's going to create an SSH key if it doesn't exist already and you haven't created it. And once that is created, it'll move on to delete any droplet that has the same name. Then it's going to create a cloud initialization configuratio...
So if you do need to update the migrations, you do need to do it in two places right now, and there's a warning on that in the readme. But this format here matches the logging format we talked about before. And once this cloud-init has been set up... I should mention also that dbmate is going to be installed, because that...
So this is allowing dbmate to run the migrations. And once we've run the migrations, then we have to define the run command, which is going to involve creating this database here: first of all starting Postgres, creating the database, then updating the password, making sure the listen addresses are set, creating the c...
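The database-side steps of that run command look roughly like this when expressed through psycopg2 (on the droplet itself it's done with psql and shell commands; the password and database name here are placeholders):

import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("postgres://postgres@localhost:5432/postgres")
conn.autocommit = True  # CREATE DATABASE and ALTER SYSTEM can't run inside a transaction
cur = conn.cursor()

cur.execute(sql.SQL("ALTER USER postgres WITH PASSWORD {}").format(sql.Literal("<db-password>")))
cur.execute("SELECT 1 FROM pg_database WHERE datname = 'llm_logs';")
if cur.fetchone() is None:
    cur.execute("CREATE DATABASE llm_logs;")
cur.execute("ALTER SYSTEM SET listen_addresses = '*';")  # accept remote connections
# pg_hba.conf still needs a "host" entry for remote clients, and Postgres needs a
# restart afterwards; those are file/service edits the cloud-init script does in shell.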
And then we're going to run the setup db script, which should allow us to have the database in the right format with the correct tables. So with that script defined, this is a script that will run after initialization, after the droplet is created. We then actually create the droplet here. We're going to specify a 2GB ...
Once the droplet is ready, we'll get the IP, and we'll print out the droplet IP and the connection string. Once that's all been done, we'll have to wait a minute or two for the droplet to be fully set up before we're able to update the database URL and then run the example on that rem...
I'm just going to copy this over into my .env file now. And with that done, I can try sourcing my environment variables and running the example. So it looks like we've successfully connected to the remote and we've logged this example here. And again, we can connect manually to see the logs if we like. This time, rathe...
And again, we can use this query to check what is logged and see the contents of that latest log here. Okay. So we now have shown how to set up a remote database using a droplet and how to query the logs from it. I'll show you now the last way to set up the Postgres database, which is by directly creating a database on...
First, you need to exit from Postgres, which I can do here just with Ctrl-D or \q. So let me now clean up that droplet and move to setting up a managed database directly. I'm going to say no, I won't delete the SSH key from DigitalOcean. So for a managed database, I'll take you through the steps here.
The first thing is we're literally going to create a database using the creation command from DigitalOcean. We're going to call it llm-logger-db. And once we've got that created, we're going to try and get the connection URL. Now, actually, the connection URL is already available. So if you want to get it, again, you could ...
And I recommend putting it between double quotes. So the way you want to paste it in is something like this here. So I'm going to take that and put it into my .env file. Okay, so with that added into my .env file, I can now source that database URL and run this managed-DB setup script. And all this does is run the...
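For reference, DigitalOcean managed-Postgres URLs have roughly this shape; every value below is a placeholder, and the non-standard port plus sslmode=require are the parts that usually trip people up:

# Placeholder shape of a DigitalOcean managed-database connection URL.
DATABASE_URL = (
    "postgresql://doadmin:<password>@<cluster-name>.db.ondigitalocean.com"
    ":25060/defaultdb?sslmode=require"
)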
So this is setting up the structure again that we want the logs to be stored in. Now, I can't run this just yet because I need to wait until the database has started up. I can maybe try it and we'll see. So, yeah, it's saying no such host. And that's just because the database needs a moment to start up. Once it is star...
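If you'd rather not re-run the script by hand, a small retry loop handles that provisioning delay; here's a sketch:

import time
import psycopg2

def wait_for_db(db_url: str, attempts: int = 30, delay: float = 10.0) -> None:
    """Poll until the database accepts connections (managed databases can
    take a few minutes to provision, and DNS may not resolve at first)."""
    for i in range(attempts):
        try:
            psycopg2.connect(db_url).close()
            print("database is up")
            return
        except psycopg2.OperationalError as exc:  # covers "no such host", connection refused, etc.
            print(f"attempt {i + 1}: not ready yet ({exc})")
            time.sleep(delay)
    raise TimeoutError("database never became reachable")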
And I've just given it a moment, so I can run this script again. And I think that has run successfully. So we're in a position to run the example file. It's connecting to the remote database, and you can see it's connecting this time to the managed DB on DigitalOcean. And it's logging. And yet again, we can go down to the ...
First, by connecting to the database, by sourcing environment variables, and then by running our command here. And we should be able to now see. Again, let me just run that. And you can see here the log of the command that we've run through the example. So that rounds up the three different ways that you can create a d...
And you're now able to very easily log all of the traces, including the response times and the token usage, of any queries you make to an OpenAI-style API. I've just shown you how to use OpenAI's actual API, but you can also use any other APIs that provide you with an OpenAI-style endpoint. In the next video, or in a...
They're also very useful if you need to do some fine-tuning, because you can take these responses from customers, filter out the private data, potentially, and then improve on the answers and use those to improve the performance of your LLMs through fine-tuning. That's it for this video, folks. Hope y...
Today was day three, and the final day, of the AI Engineer World's Fair in San Francisco. I'm going to cover the highlights from the keynotes and then the two tracks I mostly attended, on reinforcement learning and on robotics. The day started off with a keynote from Google's Logan Kilpatrick, and he announced the laun...
And somewhere around Claude 4. This, he says, is the last relaunch of the Gemini 2.5 Pro model, suggesting that the next time there is a launch of the Pro, it would be number three. He also gave a few tips on what might be coming from Google over the next few months. It appears there will be some Gemini embeddings that will ...
Google probably has an excellent store of data, and so it just makes sense to release these kinds of models, the embedding models, and indeed open source models, because they have access and a competitive advantage on the data front. There's a suggestion that Gemini code might be released. Of course, OpenAI have Codex....
Interestingly, there was a survey done later in the day, and it shows that one of the biggest problems, in fact the biggest problem listed by developers in the survey, is running evals. The second biggest problem is the reliability of LLMs. Now, moving on to some analysis from George Cameron, one of the founders of Art...
Out of the top 10 open-weights models today, just half of them are US-based, whereas the others are Chinese-based, and indeed the top two, the recently re-released DeepSeek R1 and a model from Qwen here, are top of the open ...
Interestingly, NVIDIA has got two of the top five among the Western models, with Microsoft and Llama not even featuring here at the moment among the open-weights models in the reasoning category. When you move to the non-reasoning category, Llama does make an appearance here with the Maverick and the Instruct models. Of course, ...
Now, there was an interesting developer survey, I've mentioned it briefly already, done by Barr Yaron of Amplify Partners, a VC firm, I believe. And this survey of developers asked: what techniques are you using to customize your AI systems? And on the x-axis here, it goes up as far as 80%, by the way. So ...
But fine-tuning is at 40%. And you'll often hear on YouTube or at these conferences that, you know, fine-tuning is not really used so much because the models are just so strong in general. So I was kind of surprised to see this number here, that 40% of developers are using fine-tuning to customize their models. So, yo...
And you have 75%, 80% saying text, then about 20% saying image, the same for audio, and less for video. So you can see there's a long way to go here in adoption of the image and voice models, and indeed the video models. And there was a slide shown after this, which asks: do you think you will soon adopt this m...
So there are clear signs in the data that there's going to be first a jump up in audio and voice, maybe then a little bit in image, and then last up on the video front. Now, one of the last keynotes I'll highlight is from OpenRouter founder Alex Atallah, who actually was a founder of OpenSea, I believe, the NFT trading fir...
But OpenRouter is a platform that allows you, with one API, to make calls to any of the public LLMs: like OpenAI, like Gemini, like, what's still missing, Anthropic. And I thought initially, well, this is kind of silly. Why would I want to do this? Especially because they charge, I think, a 5% premium. Why don't ...
But yeah, I mean, even before this conference, I'd come to appreciate just the convenience of having one API key with a fixed spending limit, where you can hit all the models. And this is the growth here: there's been 10 to 100% growth month-on-month for the last two years. And they're processing, I think, something lik...
If you assume like a dollar per million tokens on average, that would come out to a volume of hundreds of millions of dollars flowing through in terms of compute credits every single month. So a very fast-growing platform. And I know this graph is very hard to see, but there's an added benefit to using OpenRouter I di...
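Here's the back-of-envelope math, with a made-up token volume standing in, since I didn't catch the exact figure:

tokens_per_month = 300e12        # hypothetical volume: 300 trillion tokens/month
usd_per_million_tokens = 1.00    # the $1 per million tokens assumed above

monthly_flow_usd = tokens_per_month / 1e6 * usd_per_million_tokens
print(f"~${monthly_flow_usd / 1e6:.0f}M of compute credits per month")  # ~$300M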
And because OpenRouter can route to any of those (and I should say "open rooter", because I'm Irish), it is able to provide a better uptime guarantee than any of these endpoints individually. So it's kind of at this weird advantage that it can provide higher uptime guar...
My view is reinforcement learning is still, people don't have a handle on exactly how the big labs are doing it. And there are a lot of mixed messages that are coming out about what is effective, what does work in terms of improving performance or not.
And one of the interesting aspects of improving reasoning models is that probably the simplest way to improve reasoning through tuning is simply taking high-quality reasoning traces from the best private models and not even doing reinforcement learning, just doing straight supervised fine-tuning. So running these reinf...
If you do it at a lower level, there may be cases where it improves your performance a bit, but there are so many variables, and there seem to be a lot of contradictions that come out. I'll just give you one example here. Sorry, I've been showing this slide for a few seconds now without saying anything. This is a slide...
And there are some pretty unintuitive things that were found here. A set of thoughts was generated by sampling questions, first of all, from science, maths, and other knowledge datasets, and then getting open-source models like DeepSeek and Qwen to generate high-quality answers, and using these questions and answers...
So if you took all of those traces and used some programmatic verification of the answers, either LLM-as-a-judge or just numerical verification, and you filtered out the wrong ones, that actually didn't improve the results, which is very counterintuitive. Now, granted, I looked a little bit at the paper, and 80% of the i...
Furthermore, when you are training a very weak model, if you are providing some wrong answers, just the fact that those wrong answers are structured well because they're from a much stronger model probably will help to improve the weaker model. So the message I'm seeing here is that when you train weak models to try an...
I'll point as well, maybe, to my SFT versus GRPO video, which talks about doing straight fine-tuning versus reinforcement learning, because I think some of the messages, even though that video is from earlier this year, are actually holding up fairly well. Now, Nathan Lambert, who works with Allen AI and has been...
This is the approach Allen AI takes; it's a more purist open-source approach. And Nathan Lambert gave a talk also in the reinforcement learning track. Here's a slide where he highlights how overthinking is still a major problem, not just in open-source models, but private models. A nice question you can ask is what's two...
But even o1-preview and o1-mini take 200 tokens, whereas some of these non-thinking models will just straight up answer it within five, seven, or 12 tokens. So the amount of reasoning is not something that is really controlled entirely. Actually, that was mentioned by, I think, maybe one of the founders of D...
Interestingly, in building these open source data sets like this one from Bespoke Labs, the length of the response is used as an indicator of response quality. But I mean, that's kind of a contradiction if at the same time the length is kind of a bad indicator because the models tend to overthink. So this just drives h...
He's of the view that language models are not particularly good at planning today. And when they do plan, as in Codex or in Claude Code, it's because they're being prompted very carefully in order to do the planning phase. If you ask a model, even a reasoning one, to take on a difficult question, it won't necessarily do...
And so this little piece of pseudo kind of response here shows a little planning section of the response, then a thinking section, and then an answer. And this is Nathan's hypothesis that maybe there will need to be a period where we train models to do some planning, then thinking, and then answering. Now, to round thi...
I was able to have a nice chat with Greg and with Mike, and also with Brian of ARC Prize. And here is a little video giving you a preview of ARC Prize 3, which is not out yet; it will be there next year. And it's actually based on these little computer games. So not only will you have to get these grids right, but you w...
And I know this is kind of bad quality, but, you know, think of some kind of Atari games here where you have patterns and games that need to be solved. There will be a few examples and you will have to go into the game, figure out what the rules are and solve it. Much like, you know, games that are fun for humans. But ...
Now, one of the questions I asked, I think it was Greg: you know, why doesn't something like Google's AlphaZero, which solved chess, solve these kinds of problems easily? And part of the answer is that, well, even AlphaZero had access to a lot of data, a lot of training data. Yes, the rules weren't laid out explicitly ...
You can just kind of play the game live, but you're not given a vast range of training data. So that makes it a very different challenge. Now, I'll move on to the last section here on robotics. And this started off for me with the talk from Waymo here. Sorry for the kind of dull, bright screen, but Waymo now live in qu...
Thanks to my friend who booked me one, because you need to have a phone in the region, you need to set your phone region to here, to be able to use it. San Diego, I think Vegas, Dallas, New Orleans, Nashville, Boston, and also Tokyo, Japan. So Waymo is really taking over. I believe that the number of rides, an...
And interestingly, this prompted me to look up with ChatGPT some of the features of Waymo. And there are a few things that are interesting. One is that apparently, and I would need to double-check this, there are four NVIDIA H100 GPUs in each Waymo. Yeah, correct me in the comments if that's wrong, but obviously that ...
So it can map; basically, every street is mapped to within a centimeter of resolution. And this allows, in combination with Waymo's LiDAR, so when it uses its LiDAR to detect everything around it, it's able to correspond that to the map, the pre-made map. And that gives it very good grounding in where it is. So it's not j...
That is kind of the big difference between Waymo, where they have pre-mapping plus LiDAR, and Tesla, which is just vision-based, with no mapping. So one of the big questions, of course, with Waymo or any type of autonomous driving is: well, how do you handle the 0.000...1% (with, like, 5 or 6 zeros) of cases where...
There was an example where the car was going past a red light, but that made sense because there was a man on the side of the road, like a construction worker, waving them through. So this is a weird situation where you actually should break the red light, because you're being instructed to, maybe by a police officer or so...
And the answer in today's presentation was: well, you use Gemini 2.5 Pro, because it can describe, in a very natural-language way, what's happening within images. This is maybe the image of being waved through by a construction worker at a red light. You're not going to be able to see i...
So they have literally fine-tuned a model, which I think is based on Gemini 2.5 Pro, and the fine-tuned model is called EMMA. They fine-tuned it on tons of trajectories taken from cars. And this is able to generalize extremely well. And I don't know why I didn't think of this myself. Now, the question is, how do you...
Gemini Pro, I assume, is very large, like, I don't know, definitely hundreds of billions of parameters. So you're talking about finding quantization techniques, or potentially distilling the model, so that you can fit it on those four H100s, if indeed that's how much compute they have within a Waymo. Now, th...
And the motivation you'll typically hear at these conferences is that there are a large number of vacancies. If you look at the sectors, say leisure, healthcare, construction, professional services, transportation, manufacturing: five out of six of those, all except professional services, are jobs that require physica...
And this, therefore, is, you know, one of the motivations. And I'm realizing here that this is not the right name for this presentation. I'm not quite sure who this talk was given by; it might have been the talk by NVIDIA on GR00T. So GR00T is a model in the same way Llama is a foundation langu...
Actions being the positions the motors should be in in the future, based on the language instruction you give, like pick up the apple, or move the arm, or something like this. And hopefully I'll be covering the GR00T model a bit later on this channel. So robotics was the theme of the day for one of the tracks. And just th...
Probably some of the best-known humanoid robots are Tesla's Optimus, which might retail at maybe $40,000, $50,000, or $60,000. There's also Figure, maybe a similar type of robot. Apparently the Tesla robot is particularly strong; it's suited for industrial applications. Well, this is an open-source robot, K-Bot, that comes...
It's maybe similar in the functionality it could do, but it's not got motors that are quite as strong as Optimus's. Now, maybe that's being a bit too generous to K-Scale; that's what was said at the talk here. But it's great to see there are fully open-source designs. And it's great to see this price tag here, which is not ...
But there are also setups now that are making these a lot easier to train. You don't need physical robot data to train them. You can train them in simulation environments, either just by creating simulation paths of the robot, or sometimes by using things like an Apple Vision Pro and then moving your hands kind of in t...
So simulation is making it a lot easier to gather data. Of course, one of the key problems in robotics now is: if you have this simulation data, how can you make it transfer as robustly as possible to the physical robots? But robotics, although only one of the tracks out of probably well over 20 tracks throu...
And maybe I would say even in one year there'll probably be two or three tracks on robotics, and it could go on significantly from there. I can see how the state of the models, these vision-language-action models, resembles a little bit the early multimodal models, like, say, IDEFICS 1, or even some...
There's just a lot of room, a lot of low-hanging fruit, for improving the performance of transformers as applied to robotics. So folks, that is it for this presentation. These three days of the World's Fair you can check out: I've made a playlist, and I'll put it in the description if you want to see all three days of re...
And the other, which I think is quite an instructive video, is fine-tuning a language model for multi-turn agent actions using MCP. And the MCP is actually a browser MCP, where you train it to follow a certain navigation path. So that's it from this AI World's Fair. Let me know of any comments below. Cheers, folks. Ch...
NVIDIA is one of the most valuable companies in the world because of the GPUs it sells, and so it's useful to understand how far NVIDIA is ahead of its competition. Now, there are many providers, such as Intel with their Gaudi offering; there's also AMD; there's Amazon, who has Trainium; but today I want to focus on Google's...
I'll look at the throughput performance, time to first token and also some cost comparisons. Before I show you the results I want to explain how I'm setting up this benchmarking. First of all I'll talk a little bit about these three types of hardware, two NVIDIA and one Google. Then I'll talk about how I set up the ben...
I'll then explain how to test out GPUs, namely NVIDIA GPUs, how to test out TPUs, and last of all show you some results and give some commentary. Now, I've set up this chat here with ChatGPT, I'll put a link to it in the description, and I've done a little comparison here on screen between the three types of hardware. So...
Now, first looking at VRAM, which is the memory per unit: we've got 80 gigabytes for the H100 SXM, we've got 141 for the H200, so quite a bit bigger, and for the v6e it's just 32. So Google's hardware, at least at the latest generation, and there is a newer v7 generation that will have much higher VRAM, is currently quite s...
Now, the NVIDIA B200, which I'll hopefully look at in an upcoming video, has got even more VRAM: it's got 180 gigabytes. So you can see this is increasing, to allow model providers to fit larger models on fewer GPUs. The next parameter to look at here is the high-bandwidth memory speed.
This is critical in moving the weights from the VRAM into the internal part of the GPU or TPU that does the calculations, and here you can see that NVIDIA's units are certainly faster in bringing information into the inner cores where calculations occur. The H200 is a little bit faster than the H100, maybe 30 to 40...
Often, to fit large models, you need to put multiple GPUs together. You can put them together in parallel, or you can put them together kind of in series; we'll talk about that later (it's tensor parallel and pipeline parallel). You can also see a speed here, so this is gigabytes per second, and the speed for the H100 and H2...
Now, the next figure to look at here is the FLOPS. This is the floating-point operations per second, and I've compared them here on the basis of 16-bit non-sparse FLOPS. NVIDIA reports their FLOPS on a sparse basis, which is basically just multiplying by two, but I want to normalize to compare. And interestingly, these units...
So Google has got a lot of compute for the high bandwidth memory speed and a lot of compute for the VRAM and probably that means that in quite a few cases the TPUs are going to be limited in their throughput by how fast the weights can be read into the inner part of the TPU rather than just the computing or the computi...
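To see why, here's a rough bandwidth-bound ceiling: at batch size 1, every generated token has to stream all the weights through the chip once, so tokens per second can't exceed HBM bandwidth divided by model size. The bandwidth figures below are approximate public specs, not measurements:

model_bytes = 27e9 * 2  # Gemma 27B at 16-bit precision: roughly 54 GB of weights

hbm_bandwidth = {        # approximate HBM bandwidth, bytes/second
    "H100 SXM": 3.35e12,
    "H200": 4.8e12,
    "TPU v6e": 1.6e12,   # per chip; eight chips aggregate both bandwidth and VRAM
}
for chip, bw in hbm_bandwidth.items():
    # Batch-1 ceiling only; real serving batches requests, where compute matters more.
    print(f"{chip}: <= {bw / model_bytes:.0f} tokens/s per model copy")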
Okay, so when I do a comparison, I'm going to run a Gemma 27B model, and I'll show you the table for that in a moment, but I'm going to run it on the following configurations. I'll run it on eight of these v6e units, so that's eight times 32, which is 256 gigabytes. I'll run it on one H200 and on two H100s. So I'm trying to get very roughl...
Now, when I look at the cost comparison, I'm going to take the cost of the exact numbers of GPUs or TPUs that I use and then divide by the throughput in tokens, so we can have kind of a fair comparison of cost per million input-plus-output tokens. If you didn't catch all I said, I'm going to go through it now in a bit more...
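Expressed as a formula, with hypothetical numbers standing in for the real prices and measured throughput:

def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Total hardware cost per hour divided by tokens served per hour,
    scaled to a per-million-token price."""
    return hourly_cost_usd / (tokens_per_second * 3600) * 1e6

# e.g. two H100s at a hypothetical $2.50/hr each, sustaining 2,000 tok/s:
print(f"${cost_per_million_tokens(2 * 2.50, 2000):.2f} per million tokens")  # $0.69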