workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__ |
|---|---|---|---|---|---|---|---|
pythondev | help | `MyModel._meta` usually | 2017-10-04T15:21:30.000226 | Frieda | pythondev_help_Frieda_2017-10-04T15:21:30.000226 | 1,507,130,490.000226 | 95,703 |
pythondev | help | it's not a plain-ol-Python-object, AFAIK | 2017-10-04T15:21:48.000582 | Frieda | pythondev_help_Frieda_2017-10-04T15:21:48.000582 | 1,507,130,508.000582 | 95,704 |
pythondev | help | works in REPL :confused: | 2017-10-04T15:22:15.000197 | Suellen | pythondev_help_Suellen_2017-10-04T15:22:15.000197 | 1,507,130,535.000197 | 95,705 |
pythondev | help | hmm. very well | 2017-10-04T15:22:24.000554 | Frieda | pythondev_help_Frieda_2017-10-04T15:22:24.000554 | 1,507,130,544.000554 | 95,706 |
pythondev | help | OK, while _possible_, it's _uncommon_. how about that? :smile: | 2017-10-04T15:22:34.000432 | Frieda | pythondev_help_Frieda_2017-10-04T15:22:34.000432 | 1,507,130,554.000432 | 95,707 |
pythondev | help | Maybe old-style classes from py2 couldn't do it? | 2017-10-04T15:22:38.000195 | Suellen | pythondev_help_Suellen_2017-10-04T15:22:38.000195 | 1,507,130,558.000195 | 95,708 |
pythondev | help | > while _possible_, it's _uncommon_.
Yeah, but I wish it was more common. | 2017-10-04T15:23:35.000452 | Suellen | pythondev_help_Suellen_2017-10-04T15:23:35.000452 | 1,507,130,615.000452 | 95,709 |
pythondev | help | re: py2, maybe? re: wishing it was more common, I'm not sure how I feel about that. I kinda like things to be flatter | 2017-10-04T15:24:17.000591 | Frieda | pythondev_help_Frieda_2017-10-04T15:24:17.000591 | 1,507,130,657.000591 | 95,710 |
pythondev | help | Flat is indeed better than nested.. But it just looks so cute and simple! | 2017-10-04T15:24:46.000433 | Suellen | pythondev_help_Suellen_2017-10-04T15:24:46.000433 | 1,507,130,686.000433 | 95,711 |
pythondev | help | I mean, you know what it does right away, without even having worked with nested classes before. | 2017-10-04T15:25:13.000139 | Suellen | pythondev_help_Suellen_2017-10-04T15:25:13.000139 | 1,507,130,713.000139 | 95,712 |
pythondev | help | eh, you might be surprised on that "you know what it does right away" thing | 2017-10-04T15:25:49.000262 | Frieda | pythondev_help_Frieda_2017-10-04T15:25:49.000262 | 1,507,130,749.000262 | 95,713 |
pythondev | help | i had a lot of students that didn't understand the class meta pattern | 2017-10-04T15:25:59.000036 | Frieda | pythondev_help_Frieda_2017-10-04T15:25:59.000036 | 1,507,130,759.000036 | 95,714 |
pythondev | help | <@Mallie> :taco: <@Frieda> :taco: (My first tacos) | 2017-10-04T15:38:06.000209 | Seema | pythondev_help_Seema_2017-10-04T15:38:06.000209 | 1,507,131,486.000209 | 95,715 |
pythondev | help | can someone tell me what's the difference between `json.dump` and `json.dumps`? | 2017-10-04T15:56:35.000224 | Del | pythondev_help_Del_2017-10-04T15:56:35.000224 | 1,507,132,595.000224 | 95,716 |
pythondev | help | dump = Serialize obj as a JSON formatted stream to fp (a .write()-supporting file-like object) | 2017-10-04T15:59:39.000154 | Orpha | pythondev_help_Orpha_2017-10-04T15:59:39.000154 | 1,507,132,779.000154 | 95,717 |
pythondev | help | dumps = Serialize obj to a JSON formatted str | 2017-10-04T15:59:48.000216 | Orpha | pythondev_help_Orpha_2017-10-04T15:59:48.000216 | 1,507,132,788.000216 | 95,718 |
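Orpha's two definitions can be checked with a quick stdlib sketch; `io.StringIO` stands in here for a real file object:

```python
import io
import json

obj = {"protocol": "TCP", "port": 8080}

# json.dumps: serialize obj to a JSON-formatted str and return it
text = json.dumps(obj)

# json.dump: serialize obj to fp, a .write()-supporting file-like object
buf = io.StringIO()
json.dump(obj, buf)

# Same serialization either way; only the destination differs.
print(text == buf.getvalue())  # True
```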
pythondev | help | Anyone seen an issue where pycoverage is not counting a particular branch but that branch is definitely running? | 2017-10-05T01:44:35.000039 | Marcie | pythondev_help_Marcie_2017-10-05T01:44:35.000039 | 1,507,167,875.000039 | 95,719 |
pythondev | help | The last `elif` in this branch definitely runs and the test fails if I put an exception in it but pycoverage is saying it's missing :confused:
```
if service['health_protocol'].startswith('HTTP'):
    url = '{health_protocol}://$SERVICE_IP:$SERVICE_PORT{health_endpoint}'.format(**service)
    script = 'curl --silent --fail {url}'.format(url=url)
elif service['health_protocol'] == 'TCP':
    script = 'nc -vz $SERVICE_IP $SERVICE_PORT | grep succeeded'
elif service['health_protocol'] == 'SCRIPT':
    script = service['health_endpoint']
``` | 2017-10-05T01:45:21.000093 | Marcie | pythondev_help_Marcie_2017-10-05T01:45:21.000093 | 1,507,167,921.000093 | 95,720 |
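Marcie's snippet reduces to a self-contained function for experimenting with coverage. One hedged guess: under branch coverage, coverage.py can flag the final `elif` as a *partial* branch because its never-taken False exit (there is no `else` arm) is what's missing, not the body itself. `build_health_script` is a hypothetical wrapper name, not from the original code:

```python
def build_health_script(service):
    # Hypothetical wrapper around the dispatch from the snippet above.
    script = None
    if service['health_protocol'].startswith('HTTP'):
        url = '{health_protocol}://$SERVICE_IP:$SERVICE_PORT{health_endpoint}'.format(**service)
        script = 'curl --silent --fail {url}'.format(url=url)
    elif service['health_protocol'] == 'TCP':
        script = 'nc -vz $SERVICE_IP $SERVICE_PORT | grep succeeded'
    elif service['health_protocol'] == 'SCRIPT':
        # With branch coverage on, this elif's untaken fall-through exit
        # may be reported as missing unless an else: arm is added.
        script = service['health_endpoint']
    return script

print(build_health_script({'health_protocol': 'SCRIPT',
                           'health_endpoint': '/opt/check.sh'}))
```

Running the tests with `coverage run --branch` and checking whether the report says "partial" rather than "not executed" for that line would confirm or refute the guess.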
pythondev | help | Hey guys, what technical approach would you recommend for the following situation:
My web app has notifications. When a certain global thing is created (e.g new feature added), I want every user to receive a notification.
I’m thinking of passing this message (that a new thing was created) to my `notifications` service, which would use the `Process` class to create a new process and run the function which would iterate through all the users there. Is this alright? | 2017-10-05T03:12:12.000257 | Tatum | pythondev_help_Tatum_2017-10-05T03:12:12.000257 | 1,507,173,132.000257 | 95,721 |
pythondev | help | <@Patty> :
In my code, i create connection to google-cloud-compute by creating object like below:
```
compute = discovery.build("compute", "v1", credentials=credentials)
```
if i create a `discovery.build` client for each object (as in, for each request), that would involve concurrent processing of Cloud Resources, using different resources, AND
with this - would it not fail if there are a lot of requests?
should there be a limit on how many `discovery.build` clients my code forks,
or is there no limit?
the same question can also be put as: should i use a counting semaphore or something of that sort,
to avoid creating n `discovery.build` objects? | 2017-10-05T05:19:19.000199 | Florene | pythondev_help_Florene_2017-10-05T05:19:19.000199 | 1,507,180,759.000199 | 95,722 |
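On Florene's own suggestion: a counting semaphore does cap how many clients exist at once. A minimal stdlib sketch, where `make_client` is a stand-in for `discovery.build("compute", "v1", credentials=...)` and `MAX_CLIENTS` is an assumed limit to tune against quota and memory:

```python
import threading

MAX_CLIENTS = 4  # assumed cap, not a Google-documented limit
_client_slots = threading.BoundedSemaphore(MAX_CLIENTS)

def make_client():
    # Stand-in for discovery.build("compute", "v1", credentials=credentials)
    return object()

def with_limited_client(work):
    # At most MAX_CLIENTS threads hold a client at a time; the rest block here.
    with _client_slots:
        client = make_client()
        return work(client)

results = []
threads = [threading.Thread(target=lambda: results.append(with_limited_client(lambda c: True)))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 10
```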
pythondev | help | <@Tatum> I’d just use an async task queue to do that for you | 2017-10-05T05:55:34.000230 | Meg | pythondev_help_Meg_2017-10-05T05:55:34.000230 | 1,507,182,934.00023 | 95,723 |
pythondev | help | Ah but I’d run it in a server which already uses some | 2017-10-05T05:56:10.000047 | Tatum | pythondev_help_Tatum_2017-10-05T05:56:10.000047 | 1,507,182,970.000047 | 95,724 |
pythondev | help | I figured there’s no sense to add more when I could just spawn another process | 2017-10-05T05:56:26.000317 | Tatum | pythondev_help_Tatum_2017-10-05T05:56:26.000317 | 1,507,182,986.000317 | 95,725 |
pythondev | help | Since the work is independent | 2017-10-05T05:56:34.000040 | Tatum | pythondev_help_Tatum_2017-10-05T05:56:34.000040 | 1,507,182,994.00004 | 95,726 |
pythondev | help | that’s what an async task queue is for | 2017-10-05T05:56:44.000153 | Meg | pythondev_help_Meg_2017-10-05T05:56:44.000153 | 1,507,183,004.000153 | 95,727 |
pythondev | help | ah f, it’d do it (spawn the process) itself huh | 2017-10-05T05:57:06.000253 | Tatum | pythondev_help_Tatum_2017-10-05T05:57:06.000253 | 1,507,183,026.000253 | 95,728 |
pythondev | help | celery has the concept of workers | 2017-10-05T05:57:37.000057 | Meg | pythondev_help_Meg_2017-10-05T05:57:37.000057 | 1,507,183,057.000057 | 95,729 |
pythondev | help | which are independent processes that pick up jobs from a message queue, execute and then wait for another job to arrive | 2017-10-05T05:58:12.000074 | Meg | pythondev_help_Meg_2017-10-05T05:58:12.000074 | 1,507,183,092.000074 | 95,730 |
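Meg's description of workers can be illustrated with a stdlib sketch (a stand-in for the concept, not Celery's API): independent workers pull jobs from a queue, execute them, then wait for the next one.

```python
import queue
import threading

jobs = queue.Queue()
done = []

def worker():
    # Pick up jobs from the queue, execute, then wait for another to arrive.
    while True:
        job = jobs.get()
        if job is None:      # sentinel: shut this worker down
            break
        done.append(job())
        jobs.task_done()

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()

# e.g. "notify every user" submitted as one job per user
for user_id in range(5):
    jobs.put(lambda uid=user_id: f"notified user {uid}")

jobs.join()                  # block until every submitted job is processed
for _ in workers:
    jobs.put(None)
for w in workers:
    w.join()

print(len(done))  # 5
```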
pythondev | help | yeah | 2017-10-05T05:58:56.000270 | Tatum | pythondev_help_Tatum_2017-10-05T05:58:56.000270 | 1,507,183,136.00027 | 95,731 |
pythondev | help | please don’t make your own unless you _really_ know what you’re doing. I had to clean up a big mess from a previous dev who tried doing exactly what you’re doing | 2017-10-05T05:59:00.000193 | Meg | pythondev_help_Meg_2017-10-05T05:59:00.000193 | 1,507,183,140.000193 | 95,732 |
pythondev | help | You’re completely right | 2017-10-05T05:59:13.000062 | Tatum | pythondev_help_Tatum_2017-10-05T05:59:13.000062 | 1,507,183,153.000062 | 95,733 |
pythondev | help | I have celery set up as well | 2017-10-05T05:59:16.000454 | Tatum | pythondev_help_Tatum_2017-10-05T05:59:16.000454 | 1,507,183,156.000454 | 95,734 |
pythondev | help | This is just another task | 2017-10-05T05:59:24.000138 | Tatum | pythondev_help_Tatum_2017-10-05T05:59:24.000138 | 1,507,183,164.000138 | 95,735 |
pythondev | help | yup :slightly_smiling_face: | 2017-10-05T05:59:29.000004 | Meg | pythondev_help_Meg_2017-10-05T05:59:29.000004 | 1,507,183,169.000004 | 95,736 |
pythondev | help | Talk about over-engineering | 2017-10-05T05:59:35.000139 | Tatum | pythondev_help_Tatum_2017-10-05T05:59:35.000139 | 1,507,183,175.000139 | 95,737 |
pythondev | help | now, you’d just have to have your client-side do a poll on an interval to check for notifications | 2017-10-05T05:59:51.000332 | Meg | pythondev_help_Meg_2017-10-05T05:59:51.000332 | 1,507,183,191.000332 | 95,738 |
pythondev | help | Thanks for clearing this up, I had a hunch it wasn’t the right approach | 2017-10-05T05:59:52.000198 | Tatum | pythondev_help_Tatum_2017-10-05T05:59:52.000198 | 1,507,183,192.000198 | 95,739 |
pythondev | help | :slightly_smiling_face: | 2017-10-05T05:59:56.000003 | Meg | pythondev_help_Meg_2017-10-05T05:59:56.000003 | 1,507,183,196.000003 | 95,740 |
pythondev | help | somehow, the guy got his version running moderately well, but it didn’t handle issues very well and would consistently crash. One of the first things I did was bring in Celery into the project and migrate everything over | 2017-10-05T06:01:12.000248 | Meg | pythondev_help_Meg_2017-10-05T06:01:12.000248 | 1,507,183,272.000248 | 95,741 |
pythondev | help | Yeah that’s the thing right? Handling errors and reporting back/retrying adequately | 2017-10-05T06:01:44.000085 | Tatum | pythondev_help_Tatum_2017-10-05T06:01:44.000085 | 1,507,183,304.000085 | 95,742 |
pythondev | help | _much_ more reliable, and you’ve already passed the third hurdle | 2017-10-05T06:01:49.000081 | Meg | pythondev_help_Meg_2017-10-05T06:01:49.000081 | 1,507,183,309.000081 | 95,743 |
pythondev | help | exactly | 2017-10-05T06:01:51.000244 | Meg | pythondev_help_Meg_2017-10-05T06:01:51.000244 | 1,507,183,311.000244 | 95,744 |
pythondev | help | that’s the difference between a toy project and something that’s production-ready: how it handles things you don’t expect | 2017-10-05T06:02:29.000426 | Meg | pythondev_help_Meg_2017-10-05T06:02:29.000426 | 1,507,183,349.000426 | 95,745 |
pythondev | help | so, one area I’m unclear on | 2017-10-05T06:02:57.000421 | Meg | pythondev_help_Meg_2017-10-05T06:02:57.000421 | 1,507,183,377.000421 | 95,746 |
pythondev | help | are you using polling on the client to check for updates? | 2017-10-05T06:03:09.000207 | Meg | pythondev_help_Meg_2017-10-05T06:03:09.000207 | 1,507,183,389.000207 | 95,747 |
pythondev | help | the client as in who? | 2017-10-05T06:03:19.000384 | Tatum | pythondev_help_Tatum_2017-10-05T06:03:19.000384 | 1,507,183,399.000384 | 95,748 |
pythondev | help | the one spawning the task? | 2017-10-05T06:03:22.000270 | Tatum | pythondev_help_Tatum_2017-10-05T06:03:22.000270 | 1,507,183,402.00027 | 95,749 |
pythondev | help | no. you have a web app, so I assume its consuming an API | 2017-10-05T06:03:51.000273 | Meg | pythondev_help_Meg_2017-10-05T06:03:51.000273 | 1,507,183,431.000273 | 95,750 |
pythondev | help | both browser and mobile capable | 2017-10-05T06:03:59.000004 | Meg | pythondev_help_Meg_2017-10-05T06:03:59.000004 | 1,507,183,439.000004 | 95,751 |
pythondev | help | oh that’s a whole other separate thing | 2017-10-05T06:04:08.000010 | Tatum | pythondev_help_Tatum_2017-10-05T06:04:08.000010 | 1,507,183,448.00001 | 95,752 |
pythondev | help | how are you looking for updates in that? | 2017-10-05T06:04:12.000179 | Meg | pythondev_help_Meg_2017-10-05T06:04:12.000179 | 1,507,183,452.000179 | 95,753 |
pythondev | help | because you can have the notifications in the server | 2017-10-05T06:04:25.000263 | Meg | pythondev_help_Meg_2017-10-05T06:04:25.000263 | 1,507,183,465.000263 | 95,754 |
pythondev | help | there’s a specific notifications server which accepts websockets | 2017-10-05T06:04:31.000374 | Tatum | pythondev_help_Tatum_2017-10-05T06:04:31.000374 | 1,507,183,471.000374 | 95,755 |
pythondev | help | :thumbsup: | 2017-10-05T06:04:39.000175 | Meg | pythondev_help_Meg_2017-10-05T06:04:39.000175 | 1,507,183,479.000175 | 95,756 |
pythondev | help | it is connected to the main app through RabbitMQ | 2017-10-05T06:04:42.000227 | Tatum | pythondev_help_Tatum_2017-10-05T06:04:42.000227 | 1,507,183,482.000227 | 95,757 |
pythondev | help | and is listening for notifications, once it receives one it checks if the recipient is connected and if he is - he receives a notif | 2017-10-05T06:04:59.000361 | Tatum | pythondev_help_Tatum_2017-10-05T06:04:59.000361 | 1,507,183,499.000361 | 95,758 |
pythondev | help | good idea | 2017-10-05T06:05:20.000004 | Meg | pythondev_help_Meg_2017-10-05T06:05:20.000004 | 1,507,183,520.000004 | 95,759 |
pythondev | help | and you’ve thought it through | 2017-10-05T06:05:26.000248 | Meg | pythondev_help_Meg_2017-10-05T06:05:26.000248 | 1,507,183,526.000248 | 95,760 |
pythondev | help | so what the celery task would do is create the `Notification` object and maybe send a message through rabbitmq | 2017-10-05T06:05:28.000174 | Tatum | pythondev_help_Tatum_2017-10-05T06:05:28.000174 | 1,507,183,528.000174 | 95,761 |
pythondev | help | are you using rabbit as the backend for celery? | 2017-10-05T06:05:43.000082 | Meg | pythondev_help_Meg_2017-10-05T06:05:43.000082 | 1,507,183,543.000082 | 95,762 |
pythondev | help | yeah | 2017-10-05T06:05:47.000139 | Tatum | pythondev_help_Tatum_2017-10-05T06:05:47.000139 | 1,507,183,547.000139 | 95,763 |
pythondev | help | ouch… | 2017-10-05T06:05:51.000328 | Meg | pythondev_help_Meg_2017-10-05T06:05:51.000328 | 1,507,183,551.000328 | 95,764 |
pythondev | help | they’re two separate instances though, methinks | 2017-10-05T06:05:57.000411 | Tatum | pythondev_help_Tatum_2017-10-05T06:05:57.000411 | 1,507,183,557.000411 | 95,765 |
pythondev | help | why, what's wrong with that? | 2017-10-05T06:06:02.000260 | Tatum | pythondev_help_Tatum_2017-10-05T06:06:02.000260 | 1,507,183,562.00026 | 95,766 |
pythondev | help | I really wouldn’t recommend it | 2017-10-05T06:06:04.000101 | Meg | pythondev_help_Meg_2017-10-05T06:06:04.000101 | 1,507,183,564.000101 | 95,767 |
pythondev | help | not advised in the docs | 2017-10-05T06:06:10.000187 | Meg | pythondev_help_Meg_2017-10-05T06:06:10.000187 | 1,507,183,570.000187 | 95,768 |
pythondev | help | wow really | 2017-10-05T06:06:15.000274 | Tatum | pythondev_help_Tatum_2017-10-05T06:06:15.000274 | 1,507,183,575.000274 | 95,769 |
pythondev | help | redis is more stable | 2017-10-05T06:06:15.000342 | Meg | pythondev_help_Meg_2017-10-05T06:06:15.000342 | 1,507,183,575.000342 | 95,770 |
pythondev | help | I’ll need to revisit them, then | 2017-10-05T06:06:21.000130 | Tatum | pythondev_help_Tatum_2017-10-05T06:06:21.000130 | 1,507,183,581.00013 | 95,771 |
pythondev | help | This started out as a hackathon project and my initial idea was to just get celery working | 2017-10-05T06:06:38.000202 | Tatum | pythondev_help_Tatum_2017-10-05T06:06:38.000202 | 1,507,183,598.000202 | 95,772 |
pythondev | help | I used rabbit as both the broker and backend in the first iteration | 2017-10-05T06:06:42.000084 | Meg | pythondev_help_Meg_2017-10-05T06:06:42.000084 | 1,507,183,602.000084 | 95,773 |
pythondev | help | what ended up happening, it was storing the results in memory, as well as the jobs | 2017-10-05T06:07:04.000078 | Meg | pythondev_help_Meg_2017-10-05T06:07:04.000078 | 1,507,183,624.000078 | 95,774 |
pythondev | help | and they got lost? :smile: | 2017-10-05T06:07:15.000343 | Tatum | pythondev_help_Tatum_2017-10-05T06:07:15.000343 | 1,507,183,635.000343 | 95,775 |
pythondev | help | and eventually everything clogged up so the workers weren’t picking up new jobs | 2017-10-05T06:07:20.000001 | Meg | pythondev_help_Meg_2017-10-05T06:07:20.000001 | 1,507,183,640.000001 | 95,776 |
pythondev | help | yep, and server crashed, everything was lost | 2017-10-05T06:07:33.000195 | Meg | pythondev_help_Meg_2017-10-05T06:07:33.000195 | 1,507,183,653.000195 | 95,777 |
pythondev | help | wow, how is that scenario not handled | 2017-10-05T06:07:51.000374 | Tatum | pythondev_help_Tatum_2017-10-05T06:07:51.000374 | 1,507,183,671.000374 | 95,778 |
pythondev | help | it’s not that hard to predict it clogging up and eating all the memory, wonder why Celery hasn’t done anything about it | 2017-10-05T06:08:18.000306 | Tatum | pythondev_help_Tatum_2017-10-05T06:08:18.000306 | 1,507,183,698.000306 | 95,779 |
pythondev | help | AFAIA RabbitMQ has something like durable queues which store the data ? | 2017-10-05T06:08:31.000118 | Tatum | pythondev_help_Tatum_2017-10-05T06:08:31.000118 | 1,507,183,711.000118 | 95,780 |
pythondev | help | <https://denibertovic.com/posts/celery-best-practices/> | 2017-10-05T06:09:23.000412 | Meg | pythondev_help_Meg_2017-10-05T06:09:23.000412 | 1,507,183,763.000412 | 95,781 |
pythondev | help | maybe they have, with 4.x | 2017-10-05T06:09:37.000285 | Meg | pythondev_help_Meg_2017-10-05T06:09:37.000285 | 1,507,183,777.000285 | 95,782 |
pythondev | help | I’m using 3.1.x | 2017-10-05T06:09:44.000173 | Meg | pythondev_help_Meg_2017-10-05T06:09:44.000173 | 1,507,183,784.000173 | 95,783 |
pythondev | help | mhm | 2017-10-05T06:10:24.000436 | Tatum | pythondev_help_Tatum_2017-10-05T06:10:24.000436 | 1,507,183,824.000436 | 95,784 |
pythondev | help | Thanks for the information :slightly_smiling_face: | 2017-10-05T06:10:38.000369 | Tatum | pythondev_help_Tatum_2017-10-05T06:10:38.000369 | 1,507,183,838.000369 | 95,785 |
pythondev | help | so, I moved to redis for result storage | 2017-10-05T06:12:02.000352 | Meg | pythondev_help_Meg_2017-10-05T06:12:02.000352 | 1,507,183,922.000352 | 95,786 |
pythondev | help | since that’s basically what its built for: fast, concurrent key-value storage | 2017-10-05T06:12:18.000352 | Meg | pythondev_help_Meg_2017-10-05T06:12:18.000352 | 1,507,183,938.000352 | 95,787 |
pythondev | help | wait, result storage? | 2017-10-05T06:12:38.000372 | Tatum | pythondev_help_Tatum_2017-10-05T06:12:38.000372 | 1,507,183,958.000372 | 95,788 |
pythondev | help | As in the results of a task? | 2017-10-05T06:12:46.000245 | Tatum | pythondev_help_Tatum_2017-10-05T06:12:46.000245 | 1,507,183,966.000245 | 95,789 |
pythondev | help | no, return of task | 2017-10-05T06:12:56.000093 | Meg | pythondev_help_Meg_2017-10-05T06:12:56.000093 | 1,507,183,976.000093 | 95,790 |
pythondev | help | eg, task state, return values, etc | 2017-10-05T06:13:22.000173 | Meg | pythondev_help_Meg_2017-10-05T06:13:22.000173 | 1,507,184,002.000173 | 95,791 |
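The broker/backend split Meg describes would look roughly like this in a Celery app. A config sketch only, assuming Celery is installed and both services are reachable; the URLs are placeholders:

```python
from celery import Celery

# Broker carries the jobs; the result backend stores task state and return values.
app = Celery(
    "myapp",
    broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ: job transport
    backend="redis://localhost:6379/0",            # Redis: result storage
)

@app.task
def notify_user(user_id):
    # Hypothetical task body; the return value lands in the Redis backend.
    return f"notified user {user_id}"
```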
pythondev | help | wow, look what I use | 2017-10-05T06:14:06.000174 | Tatum | pythondev_help_Tatum_2017-10-05T06:14:06.000174 | 1,507,184,046.000174 | 95,792 |
pythondev | help | <https://github.com/celery/django-celery-results> | 2017-10-05T06:14:08.000248 | Tatum | pythondev_help_Tatum_2017-10-05T06:14:08.000248 | 1,507,184,048.000248 | 95,793 |
pythondev | help | and I used to query the model haha | 2017-10-05T06:15:22.000173 | Tatum | pythondev_help_Tatum_2017-10-05T06:15:22.000173 | 1,507,184,122.000173 | 95,794 |
pythondev | help | gotcha. I use redis for that, mainly because there are a lot of short-term tasks to execute | 2017-10-05T06:15:27.000342 | Meg | pythondev_help_Meg_2017-10-05T06:15:27.000342 | 1,507,184,127.000342 | 95,795 |
pythondev | help | if I were to use that, the writes to db would be significant | 2017-10-05T06:15:52.000291 | Meg | pythondev_help_Meg_2017-10-05T06:15:52.000291 | 1,507,184,152.000291 | 95,796 |
pythondev | help | I do have tasks that write to the database | 2017-10-05T06:16:17.000048 | Meg | pythondev_help_Meg_2017-10-05T06:16:17.000048 | 1,507,184,177.000048 | 95,797 |
pythondev | help | but they do a query and update inside the task, and that’s for the more infrequent, longer-running tasks | 2017-10-05T06:16:46.000227 | Meg | pythondev_help_Meg_2017-10-05T06:16:46.000227 | 1,507,184,206.000227 | 95,798 |
pythondev | help | mine are long-running ish | 2017-10-05T06:17:55.000382 | Tatum | pythondev_help_Tatum_2017-10-05T06:17:55.000382 | 1,507,184,275.000382 | 95,799 |
pythondev | help | might be minutes too | 2017-10-05T06:18:08.000182 | Tatum | pythondev_help_Tatum_2017-10-05T06:18:08.000182 | 1,507,184,288.000182 | 95,800 |
pythondev | help | gotcha | 2017-10-05T06:18:19.000237 | Meg | pythondev_help_Meg_2017-10-05T06:18:19.000237 | 1,507,184,299.000237 | 95,801 |
pythondev | help | I’m developing a Hackerrank-esque site | 2017-10-05T06:18:21.000224 | Tatum | pythondev_help_Tatum_2017-10-05T06:18:21.000224 | 1,507,184,301.000224 | 95,802 |