workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__
|---|---|---|---|---|---|---|---|
pythondev | help | and if you’re still on the two second/item average, that’s basically four days | 2017-09-19T06:10:37.000119 | Meg | pythondev_help_Meg_2017-09-19T06:10:37.000119 | 1,505,801,437.000119 | 94,103 |
pythondev | help | which means you’ll definitely want to be able to track your progress locally so you don’t have to rerun it from scratch when it inevitably fails partway through for some reason. :slightly_smiling_face: | 2017-09-19T06:11:10.000048 | Junita | pythondev_help_Junita_2017-09-19T06:11:10.000048 | 1,505,801,470.000048 | 94,104 |
pythondev | help | yup | 2017-09-19T06:11:15.000408 | Meg | pythondev_help_Meg_2017-09-19T06:11:15.000408 | 1,505,801,475.000408 | 94,105 |
pythondev | help | Maybe batch it down to 1000s at a time | 2017-09-19T06:11:27.000293 | Tracey | pythondev_help_Tracey_2017-09-19T06:11:27.000293 | 1,505,801,487.000293 | 94,106 |
pythondev | help | not a problem if it takes 2 weeks | 2017-09-19T06:11:35.000255 | Tracey | pythondev_help_Tracey_2017-09-19T06:11:35.000255 | 1,505,801,495.000255 | 94,107 |
pythondev | help | do you have any kind of programming experience? | 2017-09-19T06:11:52.000104 | Meg | pythondev_help_Meg_2017-09-19T06:11:52.000104 | 1,505,801,512.000104 | 94,108 |
pythondev | help | I know python on a basic level and have experience in javascript | 2017-09-19T06:12:25.000370 | Tracey | pythondev_help_Tracey_2017-09-19T06:12:25.000370 | 1,505,801,545.00037 | 94,109 |
pythondev | help | well a couple things there - google often has apis you can interface with directly instead of trying to scrape a page. I’d look into that to see if that’s possible | 2017-09-19T06:13:19.000191 | Junita | pythondev_help_Junita_2017-09-19T06:13:19.000191 | 1,505,801,599.000191 | 94,110 |
pythondev | help | it’ll also help because they’ll eventually shut you off otherwise | 2017-09-19T06:13:32.000107 | Junita | pythondev_help_Junita_2017-09-19T06:13:32.000107 | 1,505,801,612.000107 | 94,111 |
pythondev | help | otherwise, I’d just sort of sketch out your approach on a piece of paper and get to building the individual pieces in isolation, as it’ll make the whole thing feel a lot more manageable | 2017-09-19T06:14:23.000044 | Junita | pythondev_help_Junita_2017-09-19T06:14:23.000044 | 1,505,801,663.000044 | 94,112 |
pythondev | help | <@Junita> the image search api is deprecated though | 2017-09-19T06:14:41.000376 | Meg | pythondev_help_Meg_2017-09-19T06:14:41.000376 | 1,505,801,681.000376 | 94,113 |
pythondev | help | thats what I thought I had read <@Meg> | 2017-09-19T06:14:56.000142 | Tracey | pythondev_help_Tracey_2017-09-19T06:14:56.000142 | 1,505,801,696.000142 | 94,114 |
pythondev | help | <https://developers.google.com/image-search/v1/devguide> | 2017-09-19T06:15:05.000017 | Meg | pythondev_help_Meg_2017-09-19T06:15:05.000017 | 1,505,801,705.000017 | 94,115 |
pythondev | help | (I feel ethics-bound to mention that this application is probably strongly against copyright law) | 2017-09-19T06:15:16.000069 | Gabriele | pythondev_help_Gabriele_2017-09-19T06:15:16.000069 | 1,505,801,716.000069 | 94,116 |
pythondev | help | ah, is it? is there not a replacement? | 2017-09-19T06:15:18.000405 | Junita | pythondev_help_Junita_2017-09-19T06:15:18.000405 | 1,505,801,718.000405 | 94,117 |
pythondev | help | use custom search instead, which apparently is an umbrella replacement | 2017-09-19T06:15:36.000109 | Meg | pythondev_help_Meg_2017-09-19T06:15:36.000109 | 1,505,801,736.000109 | 94,118 |
pythondev | help | <@Gabriele> you can usually specify licensing in the search | 2017-09-19T06:15:38.000261 | Junita | pythondev_help_Junita_2017-09-19T06:15:38.000261 | 1,505,801,738.000261 | 94,119 |
pythondev | help | <https://developers.google.com/custom-search/> | 2017-09-19T06:15:45.000018 | Meg | pythondev_help_Meg_2017-09-19T06:15:45.000018 | 1,505,801,745.000018 | 94,120 |
pythondev | help | <@Meg> thanks for that | 2017-09-19T06:16:06.000024 | Tracey | pythondev_help_Tracey_2017-09-19T06:16:06.000024 | 1,505,801,766.000024 | 94,121 |
pythondev | help | It might be possible to get results with a legit licence, yeah | 2017-09-19T06:16:11.000100 | Gabriele | pythondev_help_Gabriele_2017-09-19T06:16:11.000100 | 1,505,801,771.0001 | 94,122 |
pythondev | help | I have no idea if that will be possible through the custom search though, it is through the UI | 2017-09-19T06:16:29.000346 | Junita | pythondev_help_Junita_2017-09-19T06:16:29.000346 | 1,505,801,789.000346 | 94,123 |
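As a rough sketch of the Custom Search route linked above (not from the thread): the JSON API can return image results when asked for them. The API key, engine id (`cx`), and query below are hypothetical placeholders.
```python
# Sketch, assuming a Custom Search API key and engine id (cx) have been
# created in the Google console; all values below are placeholders.
import requests

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={
        "key": "YOUR_API_KEY",      # placeholder
        "cx": "YOUR_ENGINE_ID",     # placeholder
        "q": "red panda",
        "searchType": "image",      # ask for image results
        "num": 10,
    },
    timeout=30,
)
resp.raise_for_status()
image_urls = [item["link"] for item in resp.json().get("items", [])]
```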
pythondev | help | any particular libraries I should look at? I know Boto will handle the upload to S3 and urllib will grab the images from the search, what can I use to resize the image / create the thumb? | 2017-09-19T06:17:15.000204 | Tracey | pythondev_help_Tracey_2017-09-19T06:17:15.000204 | 1,505,801,835.000204 | 94,124 |
pythondev | help | I’d use `requests` in place of urllib | 2017-09-19T06:17:32.000302 | Meg | pythondev_help_Meg_2017-09-19T06:17:32.000302 | 1,505,801,852.000302 | 94,125 |
pythondev | help | and use `Pillow` for the image manipulation | 2017-09-19T06:17:47.000008 | Meg | pythondev_help_Meg_2017-09-19T06:17:47.000008 | 1,505,801,867.000008 | 94,126 |
pythondev | help | Ok thanks for that | 2017-09-19T06:17:52.000234 | Tracey | pythondev_help_Tracey_2017-09-19T06:17:52.000234 | 1,505,801,872.000234 | 94,127 |
pythondev | help | if you want speed, you can use imagemagick and call it via subprocess | 2017-09-19T06:18:07.000070 | Meg | pythondev_help_Meg_2017-09-19T06:18:07.000070 | 1,505,801,887.00007 | 94,128 |
pythondev | help | huh, apparently imagemagick has python bindings | 2017-09-19T06:18:44.000134 | Meg | pythondev_help_Meg_2017-09-19T06:18:44.000134 | 1,505,801,924.000134 | 94,129 |
pythondev | help | <https://wiki.python.org/moin/ImageMagick> | 2017-09-19T06:18:44.000370 | Meg | pythondev_help_Meg_2017-09-19T06:18:44.000370 | 1,505,801,924.00037 | 94,130 |
pythondev | help | or wand | 2017-09-19T06:19:42.000131 | Meg | pythondev_help_Meg_2017-09-19T06:19:42.000131 | 1,505,801,982.000131 | 94,131 |
pythondev | help | <http://docs.wand-py.org/en/0.4.4/> | 2017-09-19T06:19:42.000381 | Meg | pythondev_help_Meg_2017-09-19T06:19:42.000381 | 1,505,801,982.000381 | 94,132 |
pythondev | help | Cool, thanks for those <@Meg> | 2017-09-19T06:19:52.000245 | Tracey | pythondev_help_Tracey_2017-09-19T06:19:52.000245 | 1,505,801,992.000245 | 94,133 |
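Putting the suggestions above together, a minimal sketch of the fetch, thumbnail, and upload flow with `requests`, `Pillow`, and `boto3` might look like this (bucket and key names are placeholders):
```python
# Sketch of the pipeline discussed above; bucket/key names are placeholders.
from io import BytesIO

import boto3
import requests
from PIL import Image

def fetch_resize_upload(url, bucket="my-bucket", key="thumbs/example.jpg"):
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    img = Image.open(BytesIO(resp.content))
    img.thumbnail((200, 200))            # shrink in place, keeping aspect ratio

    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG")
    buf.seek(0)
    boto3.client("s3").upload_fileobj(buf, bucket, key)
```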
pythondev | help | <https://github.com/thumbor/thumbor> <- that's much faster than shelling out to imagemagick | 2017-09-19T06:22:07.000080 | Carri | pythondev_help_Carri_2017-09-19T06:22:07.000080 | 1,505,802,127.00008 | 94,134 |
pythondev | help | Oooh thats interesting thanks <@Carri> | 2017-09-19T06:23:39.000119 | Tracey | pythondev_help_Tracey_2017-09-19T06:23:39.000119 | 1,505,802,219.000119 | 94,135 |
pythondev | help | The image detection algorithm features look really helpful | 2017-09-19T06:24:53.000390 | Tracey | pythondev_help_Tracey_2017-09-19T06:24:53.000390 | 1,505,802,293.00039 | 94,136 |
pythondev | help | although if you were to use Pillow, and are running on a machine that supports SIMD instructions I'd recommend this drop-in replacement that's substantially faster than vanilla Pillow <https://github.com/uploadcare/pillow-simd> | 2017-09-19T06:25:28.000073 | Carri | pythondev_help_Carri_2017-09-19T06:25:28.000073 | 1,505,802,328.000073 | 94,137 |
pythondev | help | > Pillow-SIMD with AVX2 is always 16 to 40 times faster than ImageMagick and outperforms Skia, the high-speed graphics library used in Chromium. | 2017-09-19T06:26:19.000055 | Carri | pythondev_help_Carri_2017-09-19T06:26:19.000055 | 1,505,802,379.000055 | 94,138 |
pythondev | help | isn’t SIMD integrated in almost all CPUs post Pentium 3? | 2017-09-19T06:38:34.000348 | Meg | pythondev_help_Meg_2017-09-19T06:38:34.000348 | 1,505,803,114.000348 | 94,139 |
pythondev | help | but dang, nice find! <@Carri> :taco: | 2017-09-19T06:39:47.000034 | Meg | pythondev_help_Meg_2017-09-19T06:39:47.000034 | 1,505,803,187.000034 | 94,140 |
pythondev | help | yeah, in some forms. although it has evolved to SSE4 these days, and AVX support from 2011 | 2017-09-19T06:55:28.000061 | Carri | pythondev_help_Carri_2017-09-19T06:55:28.000061 | 1,505,804,128.000061 | 94,141 |
pythondev | help | Anyone know how it's possible to run a function in a class through a subclass with multiprocessing?
So I have a `class tracker` which I'm calling with a subclass called `class Track(tracker)`.
I then have a list containing many `Track` instances, and I would like to call a function with the same input (an image) on each `Track` using multiprocessing.
Currently it is working using:
```
for i in range(len(listOfTracks)):
    listOfTracks[i].update(image)
```
And then I can extract the information from each Track using a call to `pos = listOfTracks[i].get_position`
When I try to add multiprocessing, I never receive the updated position. I can see the image I pass in is different, but it seems like nothing is being saved or updated as before, since the `Track.get_position` call always returns the same position.
```
procs = []
for i in range(len(tracker)):
    p = mp.Process(target=tracker[i].renew, args=(img,))
    procs.append(p)
    p.start()
for p in procs:
    p.join()
``` | 2017-09-19T07:02:28.000150 | Adell | pythondev_help_Adell_2017-09-19T07:02:28.000150 | 1,505,804,548.00015 | 94,142 |
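On the question above: each `mp.Process` works on its own copy of the tracker, so mutations made in the child never reach the parent, which is why `get_position` keeps returning the old value. A common workaround (a sketch, assuming the trackers are picklable; the names follow the question) is to return the updated object from the worker and collect the results, e.g. with a `Pool`:
```python
# Sketch: child processes receive copies, so return the updated tracker and
# rebuild the list in the parent instead of mutating in place.
import multiprocessing as mp

def _update_track(args):
    track, image = args
    track.update(image)   # mutates the worker's copy
    return track          # ship the updated copy back to the parent

def update_all(list_of_tracks, image):
    with mp.Pool() as pool:
        return pool.map(_update_track, [(t, image) for t in list_of_tracks])

# usage (hypothetical):
# listOfTracks = update_all(listOfTracks, image)
# positions = [t.get_position() for t in listOfTracks]
```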
pythondev | help | Hi folks, I have a doubt. Does anybody have a Python script to run an SQL query, save a particular field value from the output, and pass it as an argument to a new SQL query? | 2017-09-19T07:57:36.000256 | Bell | pythondev_help_Bell_2017-09-19T07:57:36.000256 | 1,505,807,856.000256 | 94,143 |
pythondev | help | Hi <@Bell>, you could do that with sqlalchemy, but it may be possible to do it in pure SQL too | 2017-09-19T08:01:30.000281 | Fabiola | pythondev_help_Fabiola_2017-09-19T08:01:30.000281 | 1,505,808,090.000281 | 94,144 |
pythondev | help | <@Bell> what db are you using? | 2017-09-19T08:02:22.000296 | Vada | pythondev_help_Vada_2017-09-19T08:02:22.000296 | 1,505,808,142.000296 | 94,145 |
pythondev | help | <@Vada> I want to connect mysql db and have to execute querry using python code | 2017-09-19T08:03:04.000164 | Bell | pythondev_help_Bell_2017-09-19T08:03:04.000164 | 1,505,808,184.000164 | 94,146 |
pythondev | help | Hi, I am new to Python. Do you have any example or script? | 2017-09-19T08:03:59.000265 | Bell | pythondev_help_Bell_2017-09-19T08:03:59.000265 | 1,505,808,239.000265 | 94,147 |
pythondev | help | <http://docs.sqlalchemy.org/en/latest/core/connections.html> | 2017-09-19T08:05:56.000045 | Fabiola | pythondev_help_Fabiola_2017-09-19T08:05:56.000045 | 1,505,808,356.000045 | 94,148 |
pythondev | help | <@Bell> if you are just doing basic queries look at mysqlclient here <https://github.com/PyMySQL/mysqlclient-python> | 2017-09-19T08:06:06.000013 | Vada | pythondev_help_Vada_2017-09-19T08:06:06.000013 | 1,505,808,366.000013 | 94,149 |
pythondev | help | It's pretty simple and easy to use. There are examples here: <https://mysqlclient.readthedocs.io/> | 2017-09-19T08:06:27.000344 | Vada | pythondev_help_Vada_2017-09-19T08:06:27.000344 | 1,505,808,387.000344 | 94,150 |
pythondev | help | If you are doing anything more complex have a look at an ORM like sqlalchemy as <@Fabiola> has said | 2017-09-19T08:06:50.000093 | Vada | pythondev_help_Vada_2017-09-19T08:06:50.000093 | 1,505,808,410.000093 | 94,151 |
pythondev | help | Let me check on the docs and ping later. Thanks for both of you | 2017-09-19T08:08:26.000271 | Bell | pythondev_help_Bell_2017-09-19T08:08:26.000271 | 1,505,808,506.000271 | 94,152 |
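A minimal sketch of what was asked for, using mysqlclient (`MySQLdb`) as linked above; connection details and table/column names are hypothetical placeholders:
```python
# Sketch: run one query, take a field from the result, and pass it as a
# parameter to a second query. All names below are placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
try:
    cur = conn.cursor()

    cur.execute("SELECT id FROM orders WHERE status = %s", ("pending",))
    row = cur.fetchone()

    if row is not None:
        order_id = row[0]
        # feed the value from the first query into the second one
        cur.execute("SELECT * FROM order_items WHERE order_id = %s", (order_id,))
        for item in cur.fetchall():
            print(item)
finally:
    conn.close()
```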
pythondev | help | My SO question on `celerybeat` got an upvote after 7 months! There is still hope :astonished: <https://stackoverflow.com/questions/42322486/celery-beat-schedule-schedule-to-run-on-load-then-on-interval> | 2017-09-19T11:09:52.000049 | Mallie | pythondev_help_Mallie_2017-09-19T11:09:52.000049 | 1,505,819,392.000049 | 94,153 |
pythondev | help | If anyone knows the answer in here that'd be awesome. | 2017-09-19T11:10:11.000805 | Mallie | pythondev_help_Mallie_2017-09-19T11:10:11.000805 | 1,505,819,411.000805 | 94,154 |
pythondev | help | Why not schedule a task on startup? | 2017-09-19T11:21:38.000420 | Vada | pythondev_help_Vada_2017-09-19T11:21:38.000420 | 1,505,820,098.00042 | 94,155 |
pythondev | help | your project will have a main.py or something right? | 2017-09-19T11:21:55.000257 | Vada | pythondev_help_Vada_2017-09-19T11:21:55.000257 | 1,505,820,115.000257 | 94,156 |
pythondev | help | <@Mallie> | 2017-09-19T11:22:02.000370 | Vada | pythondev_help_Vada_2017-09-19T11:22:02.000370 | 1,505,820,122.00037 | 94,157 |
pythondev | help | <@Vada> Because it is a longer running task and I don't want it to overlap, nor preferably would I have to track state for objects updated (which is an option). Right now I want it to run hourly and it finishes within 30 minutes, but I don't want it to run over itself. Scheduling a task on startup would just mean enqueuing it when the service starts? That would be esp. troublesome in the way I stated because I need to restart the celery services when I deploy code (it is a production Django project). I have hooks to reset the service so it'd be great if the functionality existed so I could just delete the right file or run the right command to run the task on startup but not have the interval locked again to always run 1 hour from when it was originally scheduled. | 2017-09-19T11:29:21.000622 | Mallie | pythondev_help_Mallie_2017-09-19T11:29:21.000622 | 1,505,820,561.000622 | 94,158 |
pythondev | help | As it is now it is easily worked around by not updating code related to the task during the interval (:00 - :30) every hour, and if the code isn't related, just restarting the services. So it is managed by controlling releases. | 2017-09-19T11:30:43.000249 | Mallie | pythondev_help_Mallie_2017-09-19T11:30:43.000249 | 1,505,820,643.000249 | 94,159 |
pythondev | help | But if I did have to make a critical code update to the task, I wish I could just force it to reset after purging the queue. | 2017-09-19T11:30:59.000536 | Mallie | pythondev_help_Mallie_2017-09-19T11:30:59.000536 | 1,505,820,659.000536 | 94,160 |
pythondev | help | I see two options | 2017-09-19T11:33:33.000447 | Vada | pythondev_help_Vada_2017-09-19T11:33:33.000447 | 1,505,820,813.000447 | 94,161 |
pythondev | help | 1) You have a check (e.g. a cache lock or anything else) which the task checks - if the lock is active the task dies | 2017-09-19T11:34:04.000871 | Vada | pythondev_help_Vada_2017-09-19T11:34:04.000871 | 1,505,820,844.000871 | 94,162 |
pythondev | help | 2) The task calls itself on finish - scheduling the next run that way | 2017-09-19T11:34:16.000309 | Vada | pythondev_help_Vada_2017-09-19T11:34:16.000309 | 1,505,820,856.000309 | 94,163 |
pythondev | help | neither would use beat | 2017-09-19T11:34:19.000074 | Vada | pythondev_help_Vada_2017-09-19T11:34:19.000074 | 1,505,820,859.000074 | 94,164 |
pythondev | help | I think beat is wrong for this if you want to schedule it from after it runs on startup | 2017-09-19T11:34:35.000095 | Vada | pythondev_help_Vada_2017-09-19T11:34:35.000095 | 1,505,820,875.000095 | 94,165 |
pythondev | help | and definitely wrong if it cannot run concurrently | 2017-09-19T11:34:45.000054 | Vada | pythondev_help_Vada_2017-09-19T11:34:45.000054 | 1,505,820,885.000054 | 94,166 |
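Option (1) is often done with a short-lived cache lock; a sketch assuming a Django cache backend (`cache.add` only sets the key if it doesn't already exist). The key name and timeout are arbitrary placeholders.
```python
# Sketch of option (1): a cache-based lock so two runs cannot overlap.
# Assumes a Django cache backend; key and timeout are placeholders.
from celery import shared_task
from django.core.cache import cache

LOCK_KEY = "hourly-sync-lock"
LOCK_TIMEOUT = 60 * 60  # seconds; roughly the worst-case runtime

@shared_task
def hourly_sync():
    if not cache.add(LOCK_KEY, "locked", LOCK_TIMEOUT):
        return  # a previous run is still going, die quietly
    try:
        pass  # ... do the actual work here ...
    finally:
        cache.delete(LOCK_KEY)
```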
pythondev | help | So (2) is interesting - is there a way to do that in celery? I wasn't aware. Like, I could see the end of the task and say `run again in n minutes`? That'd be fine, I thought `celerybeat` was the only option for scheduling | 2017-09-19T11:38:04.000273 | Mallie | pythondev_help_Mallie_2017-09-19T11:38:04.000273 | 1,505,821,084.000273 | 94,167 |
pythondev | help | I've not done it myself, but considering python supports recursion, you should be able to just call the task again | 2017-09-19T11:38:34.000412 | Vada | pythondev_help_Vada_2017-09-19T11:38:34.000412 | 1,505,821,114.000412 | 94,168 |
pythondev | help | or call an api endpoint for which generates a new task | 2017-09-19T11:38:49.000054 | Meg | pythondev_help_Meg_2017-09-19T11:38:49.000054 | 1,505,821,129.000054 | 94,169 |
pythondev | help | e.g.
```
@task
def this_task():
    # do stuff
    this_task.delay()
``` | 2017-09-19T11:39:19.000841 | Vada | pythondev_help_Vada_2017-09-19T11:39:19.000841 | 1,505,821,159.000841 | 94,170 |
pythondev | help | you could also do something at the end of your run like
```
task = SomeTask()
task.apply_async(args = args, queue = 'queue')
``` | 2017-09-19T11:39:49.000577 | Meg | pythondev_help_Meg_2017-09-19T11:39:49.000577 | 1,505,821,189.000577 | 94,171 |
pythondev | help | <@Meg> that means you need another task to call that endpoint to trigger the task though :wink: | 2017-09-19T11:40:18.000135 | Vada | pythondev_help_Vada_2017-09-19T11:40:18.000135 | 1,505,821,218.000135 | 94,172 |
pythondev | help | not always | 2017-09-19T11:40:57.000402 | Meg | pythondev_help_Meg_2017-09-19T11:40:57.000402 | 1,505,821,257.000402 | 94,173 |
pythondev | help | So I didn't realize
```eta (datetime) – Absolute time and date of when the task should be executed. May not be specified if countdown is also supplied.```
existed, nor `countdown` for that matter | 2017-09-19T11:41:10.000375 | Mallie | pythondev_help_Mallie_2017-09-19T11:41:10.000375 | 1,505,821,270.000375 | 94,174 |
pythondev | help | have a view handler | 2017-09-19T11:41:11.000145 | Meg | pythondev_help_Meg_2017-09-19T11:41:11.000145 | 1,505,821,271.000145 | 94,175 |
pythondev | help | and post to said URL | 2017-09-19T11:41:22.000114 | Meg | pythondev_help_Meg_2017-09-19T11:41:22.000114 | 1,505,821,282.000114 | 94,176 |
pythondev | help | yep although if the only reason you are doing it every hour is so it doesn't overlap, you may as well just do the countdown | 2017-09-19T11:42:01.000725 | Vada | pythondev_help_Vada_2017-09-19T11:42:01.000725 | 1,505,821,321.000725 | 94,177 |
pythondev | help | Yeah just saw that too hah | 2017-09-19T11:42:25.000353 | Mallie | pythondev_help_Mallie_2017-09-19T11:42:25.000353 | 1,505,821,345.000353 | 94,178 |
pythondev | help | I thought the only options were `do now` or `celerybeat` so that's cool | 2017-09-19T11:42:42.000477 | Mallie | pythondev_help_Mallie_2017-09-19T11:42:42.000477 | 1,505,821,362.000477 | 94,179 |
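Option (2) plus `countdown` then looks roughly like this (a sketch; the one-hour delay is an arbitrary placeholder):
```python
# Sketch of option (2): the task re-enqueues itself when it finishes, so the
# next run starts a fixed delay after the previous one and runs never overlap.
from celery import shared_task

@shared_task
def hourly_sync():
    # ... do the actual work here ...

    # schedule the next run one hour after this one finished
    hourly_sync.apply_async(countdown=60 * 60)
```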
pythondev | help | <@Meg> but why would that be better than the task calling itself? | 2017-09-19T11:43:45.000398 | Vada | pythondev_help_Vada_2017-09-19T11:43:45.000398 | 1,505,821,425.000398 | 94,180 |
pythondev | help | <@Vada> :taco: | 2017-09-19T11:43:52.000190 | Mallie | pythondev_help_Mallie_2017-09-19T11:43:52.000190 | 1,505,821,432.00019 | 94,181 |
pythondev | help | I mean that is the task calling itself, but via an API endpoint right? | 2017-09-19T11:44:02.000356 | Vada | pythondev_help_Vada_2017-09-19T11:44:02.000356 | 1,505,821,442.000356 | 94,182 |
pythondev | help | ty <@Mallie> | 2017-09-19T11:44:06.000135 | Vada | pythondev_help_Vada_2017-09-19T11:44:06.000135 | 1,505,821,446.000135 | 94,183 |
pythondev | help | If this Slack existed when I first was working with celery would've been great lol | 2017-09-19T11:44:07.000155 | Mallie | pythondev_help_Mallie_2017-09-19T11:44:07.000155 | 1,505,821,447.000155 | 94,184 |
pythondev | help | ah, never mind | 2017-09-19T11:44:07.000222 | Meg | pythondev_help_Meg_2017-09-19T11:44:07.000222 | 1,505,821,447.000222 | 94,185 |
pythondev | help | Instead of my 7 mo issue and SO question | 2017-09-19T11:44:16.000454 | Mallie | pythondev_help_Mallie_2017-09-19T11:44:16.000454 | 1,505,821,456.000454 | 94,186 |
pythondev | help | I was thinking recursion depth | 2017-09-19T11:44:19.000662 | Meg | pythondev_help_Meg_2017-09-19T11:44:19.000662 | 1,505,821,459.000662 | 94,187 |
pythondev | help | and trying to avoid bottoming out | 2017-09-19T11:44:27.000124 | Meg | pythondev_help_Meg_2017-09-19T11:44:27.000124 | 1,505,821,467.000124 | 94,188 |
pythondev | help | Haha I can post an SO response as well if you want | 2017-09-19T11:44:35.000686 | Vada | pythondev_help_Vada_2017-09-19T11:44:35.000686 | 1,505,821,475.000686 | 94,189 |
pythondev | help | so when a task calls itself, does it get put in the broker? | 2017-09-19T11:44:44.000119 | Meg | pythondev_help_Meg_2017-09-19T11:44:44.000119 | 1,505,821,484.000119 | 94,190 |
pythondev | help | or does it immediately execute on the same worker | 2017-09-19T11:44:52.000465 | Meg | pythondev_help_Meg_2017-09-19T11:44:52.000465 | 1,505,821,492.000465 | 94,191 |
pythondev | help | yeah delay doesn't actually call the task, it posts it to the message broker | 2017-09-19T11:45:04.000309 | Vada | pythondev_help_Vada_2017-09-19T11:45:04.000309 | 1,505,821,504.000309 | 94,192 |
pythondev | help | gotcha | 2017-09-19T11:45:15.000437 | Meg | pythondev_help_Meg_2017-09-19T11:45:15.000437 | 1,505,821,515.000437 | 94,193 |
pythondev | help | ah, then `run` would execute immediately on the worker | 2017-09-19T11:45:26.000357 | Meg | pythondev_help_Meg_2017-09-19T11:45:26.000357 | 1,505,821,526.000357 | 94,194 |
pythondev | help | so it should become a separate stack - although I'd be very interested/surprised if that isn't the case | 2017-09-19T11:45:27.000461 | Vada | pythondev_help_Vada_2017-09-19T11:45:27.000461 | 1,505,821,527.000461 | 94,195 |
pythondev | help | FWIW, I prefer `apply_async` vs `delay` | 2017-09-19T11:45:42.000289 | Meg | pythondev_help_Meg_2017-09-19T11:45:42.000289 | 1,505,821,542.000289 | 94,196 |
pythondev | help | They're essentially the same thing right? | 2017-09-19T11:45:58.000125 | Vada | pythondev_help_Vada_2017-09-19T11:45:58.000125 | 1,505,821,558.000125 | 94,197 |
pythondev | help | because it lets me specify the args, queue and any potential tasks to run after sequence | 2017-09-19T11:46:06.000485 | Meg | pythondev_help_Meg_2017-09-19T11:46:06.000485 | 1,505,821,566.000485 | 94,198 |
pythondev | help | `delay` is a simplified `apply_async` with non-overrideable defaults | 2017-09-19T11:46:27.000746 | Meg | pythondev_help_Meg_2017-09-19T11:46:27.000746 | 1,505,821,587.000746 | 94,199 |
pythondev | help | yeah that's true | 2017-09-19T11:46:34.000025 | Vada | pythondev_help_Vada_2017-09-19T11:46:34.000025 | 1,505,821,594.000025 | 94,200 |
pythondev | help | I default to delay, but use all of them in some places. | 2017-09-19T11:46:45.000523 | Vada | pythondev_help_Vada_2017-09-19T11:46:45.000523 | 1,505,821,605.000523 | 94,201 |
pythondev | help | So another question, and this may get more into broker specifics, but does anyone know how celery/RabbitMQ would handle scheduling *a lot* of tasks (30k+)? Right now I update a large number of objects via `requests`, so I just queue them up in order once an hour - that was really the only good option with using `celerybeat`, but with what I just learned, it would be conceivable to set a timeout 60 min from the run time for _each_ object at the startup. Approximately 60 min is/will be sufficient for these objects to update, so the container task to just enqueue them (unscheduled) at the same time has been fine and i wouldn't make things more complicated without a reason. Just wondering how it might handle it if I had an interest in doing it differently in the future, e.g. I wanted objects to update at a different rate. So a few parts:
1) Would it have any issue with retaining an `eta` or `timeout` for a lot of tasks (let's just say 100k)
2) Is there a simple way to inspect if a task is currently scheduled? | 2017-09-19T11:54:35.000326 | Mallie | pythondev_help_Mallie_2017-09-19T11:54:35.000326 | 1,505,822,075.000326 | 94,202 |
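On question (2) above: the workers can be asked what they are currently holding via the inspect API; a sketch, where `app` stands in for whatever Celery application object the project defines. As far as I know, eta/countdown tasks are prefetched and held in worker memory until they are due, which is worth keeping in mind before scheduling 100k of them.
```python
# Sketch for question (2): ask running workers for their scheduled
# (eta/countdown) and reserved tasks.
from proj.celery import app  # placeholder: the project's Celery app instance

insp = app.control.inspect()

scheduled = insp.scheduled() or {}   # eta/countdown tasks held per worker
reserved = insp.reserved() or {}     # prefetched tasks without an eta

for worker, tasks in scheduled.items():
    print(worker, len(tasks))
```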