episode | text | timestamp_link |
|---|---|---|
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | fission reactions would produce neutrons of the right energy to continue the reaction. | https://karpathy.ai/lexicap/0009-large.html#01:05:50.320 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | And that was demonstrated in Germany, I think in 1938, if I remember correctly. | https://karpathy.ai/lexicap/0009-large.html#01:05:57.440 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | The first nuclear weapon patent was 1939 by the French. So this was actually going on well before | https://karpathy.ai/lexicap/0009-large.html#01:06:01.440 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | World War II really got going. And then the British probably had the most advanced capability | https://karpathy.ai/lexicap/0009-large.html#01:06:16.480 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | in this area. But for safety reasons, among others, and just resources, they moved the program | https://karpathy.ai/lexicap/0009-large.html#01:06:22.640 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | from Britain to the US and then that became the Manhattan Project. So the reason why we couldn't | https://karpathy.ai/lexicap/0009-large.html#01:06:30.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | have any kind of oversight of nuclear weapons and nuclear technology | https://karpathy.ai/lexicap/0009-large.html#01:06:40.560 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | was because we were basically already in an arms race and a war. | https://karpathy.ai/lexicap/0009-large.html#01:06:46.560 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | LF But you mentioned that in the 20s and 30s. So what are the echoes? The way you've described | https://karpathy.ai/lexicap/0009-large.html#01:06:50.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | this story, I mean, there's clearly echoes. Why do you think most AI researchers, | https://karpathy.ai/lexicap/0009-large.html#01:07:00.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | folks who are really close to the metal, they really are not concerned about AI. They don't | https://karpathy.ai/lexicap/0009-large.html#01:07:06.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | think about it, whether it's they don't want to think about it. But why do you think that is, | https://karpathy.ai/lexicap/0009-large.html#01:07:11.760 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | is what are the echoes of the nuclear situation to the current AI situation? And what can we do | https://karpathy.ai/lexicap/0009-large.html#01:07:18.240 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | about it? SR I think there is a kind of motivated cognition, which is a term in psychology that means | https://karpathy.ai/lexicap/0009-large.html#01:07:27.120 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | that you believe what you would like to be true, rather than what is true. And it's unsettling | https://karpathy.ai/lexicap/0009-large.html#01:07:35.520 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | to think that what you're working on might be the end of the human race, obviously. So you would | https://karpathy.ai/lexicap/0009-large.html#01:07:46.000 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | rather instantly deny it and come up with some reason why it couldn't be true. And I have, | https://karpathy.ai/lexicap/0009-large.html#01:07:52.640 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | I collected a long list of reasons that extremely intelligent, competent AI scientists have come up | https://karpathy.ai/lexicap/0009-large.html#01:08:00.560 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | with for why we shouldn't worry about this. For example, calculators are superhuman at arithmetic | https://karpathy.ai/lexicap/0009-large.html#01:08:08.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | and they haven't taken over the world. So there's nothing to worry about. Well, okay, my five year | https://karpathy.ai/lexicap/0009-large.html#01:08:16.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | old, you know, could have figured out why that was an unreasonable and really quite weak argument. | https://karpathy.ai/lexicap/0009-large.html#01:08:22.000 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | Another one was, while it's theoretically possible that you could have superhuman AI destroy the | https://karpathy.ai/lexicap/0009-large.html#01:08:29.040 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | world, it's also theoretically possible that a black hole could materialize right next to the | https://karpathy.ai/lexicap/0009-large.html#01:08:40.320 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | earth and destroy humanity. I mean, yes, it's theoretically possible, quantum theoretically, | https://karpathy.ai/lexicap/0009-large.html#01:08:45.680 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | extremely unlikely that it would just materialize right there. But that's a completely bogus analogy, | https://karpathy.ai/lexicap/0009-large.html#01:08:50.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | because, you know, if the whole physics community on earth was working to materialize a black hole | https://karpathy.ai/lexicap/0009-large.html#01:08:58.080 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | in near earth orbit, right? Wouldn't you ask them, is that a good idea? Is that going to be safe? | https://karpathy.ai/lexicap/0009-large.html#01:09:04.240 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | You know, what if you succeed? Right. And that's the thing, right? The AI community has sort of | https://karpathy.ai/lexicap/0009-large.html#01:09:10.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | refused to ask itself, what if you succeed? And initially I think that was because it was too hard, | https://karpathy.ai/lexicap/0009-large.html#01:09:16.720 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | but, you know, Alan Turing asked himself that, and he said, we'd be toast, right? If we were lucky, | https://karpathy.ai/lexicap/0009-large.html#01:09:24.240 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | we might be able to switch off the power, but probably we'd be toast. But there's also an aspect | https://karpathy.ai/lexicap/0009-large.html#01:09:32.720 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | that because we're not exactly sure what the future holds, it's not clear exactly, | https://karpathy.ai/lexicap/0009-large.html#01:09:37.600 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | so technically what to worry about, sort of how things go wrong. And so there is something, | https://karpathy.ai/lexicap/0009-large.html#01:09:45.200 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | it feels like, maybe you can correct me if I'm wrong, but there's something paralyzing about | https://karpathy.ai/lexicap/0009-large.html#01:09:53.360 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | worrying about something that logically is inevitable, but you have to think about it, | https://karpathy.ai/lexicap/0009-large.html#01:09:58.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | logically is inevitable, but you don't really know what that will look like. | https://karpathy.ai/lexicap/0009-large.html#01:10:05.200 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | Yeah, I think that's, it's a reasonable point and, you know, it's certainly in terms of | https://karpathy.ai/lexicap/0009-large.html#01:10:10.720 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | existential risks, it's different from, you know, asteroid collides with the earth, right? Which, | https://karpathy.ai/lexicap/0009-large.html#01:10:18.480 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | again, is quite possible, you know, it's happened in the past, it'll probably happen again, | https://karpathy.ai/lexicap/0009-large.html#01:10:24.000 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | we don't know right now, but if we did detect an asteroid that was going to hit the earth | https://karpathy.ai/lexicap/0009-large.html#01:10:29.520 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | in 75 years time, we'd certainly be doing something about it. | https://karpathy.ai/lexicap/0009-large.html#01:10:34.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | Well, it's clear there's a big rock and there's, | https://karpathy.ai/lexicap/0009-large.html#01:10:39.760 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | we'll probably have a meeting and see what do we do about the big rock with AI. | https://karpathy.ai/lexicap/0009-large.html#01:10:42.080 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | Right, with AI, I mean, there are very few people who think it's not going to happen within the | https://karpathy.ai/lexicap/0009-large.html#01:10:46.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | next 75 years. I know Rod Brooks doesn't think it's going to happen, maybe Andrew Ng doesn't | https://karpathy.ai/lexicap/0009-large.html#01:10:50.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | think it's going to happen, but, you know, a lot of the people who work day to day, you know, as you say, | https://karpathy.ai/lexicap/0009-large.html#01:10:56.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | at the rock face, they think it's going to happen. I think the median estimate from AI researchers is | https://karpathy.ai/lexicap/0009-large.html#01:11:02.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | somewhere in 40 to 50 years from now, or maybe, you know, I think in Asia, they think it's going | https://karpathy.ai/lexicap/0009-large.html#01:11:10.640 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | to be even faster than that. I'm a little bit more conservative, I think it'd probably take | https://karpathy.ai/lexicap/0009-large.html#01:11:16.000 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | longer than that, but I think, you know, as happened with nuclear weapons, it can happen | https://karpathy.ai/lexicap/0009-large.html#01:11:24.080 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | overnight that you have these breakthroughs and we need more than one breakthrough, but, | https://karpathy.ai/lexicap/0009-large.html#01:11:30.720 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | you know, it's on the order of half a dozen, I mean, this is a very rough scale, but sort of | https://karpathy.ai/lexicap/0009-large.html#01:11:34.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | half a dozen breakthroughs of that nature would have to happen for us to reach the superhuman AI. | https://karpathy.ai/lexicap/0009-large.html#01:11:40.640 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | But the, you know, the AI research community is vast now, the massive investments from governments, | https://karpathy.ai/lexicap/0009-large.html#01:11:49.920 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | from corporations, tons of really, really smart people, you know, you just have to look at the | https://karpathy.ai/lexicap/0009-large.html#01:11:57.280 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | rate of progress in different areas of AI to see that things are moving pretty fast. So to say, | https://karpathy.ai/lexicap/0009-large.html#01:12:03.360 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | oh, it's just going to be thousands of years, I don't see any basis for that. You know, I see, | https://karpathy.ai/lexicap/0009-large.html#01:12:09.200 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | you know, for example, the Stanford 100 year AI project, right, which is supposed to be sort of, | https://karpathy.ai/lexicap/0009-large.html#01:12:15.920 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | you know, the serious establishment view, their most recent report actually said it's probably | https://karpathy.ai/lexicap/0009-large.html#01:12:26.400 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | not even possible. Oh, wow. | https://karpathy.ai/lexicap/0009-large.html#01:12:32.400 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | Right. Which if you want a perfect example of people in denial, that's it. Because, you know, | https://karpathy.ai/lexicap/0009-large.html#01:12:35.280 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | for the whole history of AI, we've been saying to philosophers who said it wasn't possible, | https://karpathy.ai/lexicap/0009-large.html#01:12:42.880 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | well, you have no idea what you're talking about. Of course it's possible, right? Give me an argument | https://karpathy.ai/lexicap/0009-large.html#01:12:49.520 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | for why it couldn't happen. And there isn't one, right? And now, because people are worried that | https://karpathy.ai/lexicap/0009-large.html#01:12:53.920 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | maybe AI might get a bad name, or I just don't want to think about this, they're saying, okay, | https://karpathy.ai/lexicap/0009-large.html#01:13:00.400 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | well, of course, it's not really possible. You know, imagine if, you know, the leaders of the | https://karpathy.ai/lexicap/0009-large.html#01:13:06.080 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | cancer biology community got up and said, well, you know, of course, curing cancer, | https://karpathy.ai/lexicap/0009-large.html#01:13:12.240 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | it's not really possible. There'd be complete outrage and dismay. And, you know, I find this | https://karpathy.ai/lexicap/0009-large.html#01:13:17.360 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | really a strange phenomenon. So, okay, so if you accept that it's possible, | https://karpathy.ai/lexicap/0009-large.html#01:13:28.320 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | and if you accept that it's probably going to happen, the point that you're making that, | https://karpathy.ai/lexicap/0009-large.html#01:13:35.680 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | you know, how does it go wrong? A valid question. Without that, without an answer to that question, | https://karpathy.ai/lexicap/0009-large.html#01:13:42.400 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | then you're stuck with what I call the gorilla problem, which is, you know, the problem that | https://karpathy.ai/lexicap/0009-large.html#01:13:50.160 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | the gorillas face, right? They made something more intelligent than them, namely us, a few million | https://karpathy.ai/lexicap/0009-large.html#01:13:54.320 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | years ago, and now they're in deep doo doo. So there's really nothing they can do. They've lost | https://karpathy.ai/lexicap/0009-large.html#01:14:00.480 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | the control. They failed to solve the control problem of controlling humans, and so they've | https://karpathy.ai/lexicap/0009-large.html#01:14:07.680 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | lost. So we don't want to be in that situation. And if the gorilla problem is the only formulation | https://karpathy.ai/lexicap/0009-large.html#01:14:13.760 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | you have, there's not a lot you can do, right? Other than to say, okay, we should try to stop, | https://karpathy.ai/lexicap/0009-large.html#01:14:20.240 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | you know, we should just not make the humans, or in this case, not make the AI. And I think | https://karpathy.ai/lexicap/0009-large.html#01:14:26.640 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | that's really hard to do. I'm not actually proposing that that's a feasible course of | https://karpathy.ai/lexicap/0009-large.html#01:14:31.760 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | action. I also think that, you know, if properly controlled AI could be incredibly beneficial. | https://karpathy.ai/lexicap/0009-large.html#01:14:40.320 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | But it seems to me that there's a consensus that one of the major failure modes is this | https://karpathy.ai/lexicap/0009-large.html#01:14:48.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | loss of control, that we create AI systems that are pursuing incorrect objectives. And because | https://karpathy.ai/lexicap/0009-large.html#01:14:56.720 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | the AI system believes it knows what the objective is, it has no incentive to listen to us anymore, | https://karpathy.ai/lexicap/0009-large.html#01:15:05.040 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | so to speak, right? It's just carrying out the strategy that it has computed as being the optimal | https://karpathy.ai/lexicap/0009-large.html#01:15:12.240 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | solution. And, you know, it may be that in the process, it needs to acquire more resources to | https://karpathy.ai/lexicap/0009-large.html#01:15:21.680 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | increase the possibility of success or prevent various failure modes by defending itself against | https://karpathy.ai/lexicap/0009-large.html#01:15:30.480 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | interference. And so that collection of problems, I think, is something we can address. The other | https://karpathy.ai/lexicap/0009-large.html#01:15:36.800 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | problems are, roughly speaking, you know, misuse, right? So even if we solve the control problem, | https://karpathy.ai/lexicap/0009-large.html#01:15:45.920 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | we make perfectly safe controllable AI systems. Well, why? You know, why is Dr. Evil going to | https://karpathy.ai/lexicap/0009-large.html#01:15:55.680 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | use those, right? He wants to just take over the world and he'll make unsafe AI systems that then | https://karpathy.ai/lexicap/0009-large.html#01:16:01.600 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | get out of control. So that's one problem, which is sort of a, you know, partly a policing problem, | https://karpathy.ai/lexicap/0009-large.html#01:16:06.480 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | partly a sort of a cultural problem for the profession of how we teach people what kinds | https://karpathy.ai/lexicap/0009-large.html#01:16:12.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | of AI systems are safe. You talk about autonomous weapon systems and how pretty much everybody | https://karpathy.ai/lexicap/0009-large.html#01:16:21.280 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | agrees that there's too many ways that that can go horribly wrong. This great slaughterbots movie | https://karpathy.ai/lexicap/0009-large.html#01:16:26.000 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | that kind of illustrates that beautifully. I want to talk about that. That's another, | https://karpathy.ai/lexicap/0009-large.html#01:16:32.000 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | there's another topic I'm hoping to talk about. I just want to mention that what I see is the | https://karpathy.ai/lexicap/0009-large.html#01:16:36.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | third major failure mode, which is overuse, not so much misuse, but overuse of AI that we become | https://karpathy.ai/lexicap/0009-large.html#01:16:41.200 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | overly dependent. So I call this the WALL E problem. So if you've seen WALL E, the movie, | https://karpathy.ai/lexicap/0009-large.html#01:16:49.760 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | all right, all the humans are on the spaceship and the machines look after everything for them, | https://karpathy.ai/lexicap/0009-large.html#01:16:54.960 |
Stuart Russell: Long-Term Future of Artificial Intelligence | Lex Fridman Podcast #9 | and they just watch TV and drink big gulps. And they're all sort of obese and stupid and they | https://karpathy.ai/lexicap/0009-large.html#01:17:00.240 |