workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__ |
|---|---|---|---|---|---|---|---|
pythondev | help | for 2 | 2017-09-19T11:56:03.000679 | Vada | pythondev_help_Vada_2017-09-19T11:56:03.000679 | 1,505,822,163.000679 | 94,203 |
pythondev | help | you mean personally, or by a program? | 2017-09-19T11:56:09.000385 | Vada | pythondev_help_Vada_2017-09-19T11:56:09.000385 | 1,505,822,169.000385 | 94,204 |
pythondev | help | Programmatically, b/c it would basically be `if this object task is not scheduled, schedule this object task` because I would need to run the checker on some interval since objects may be added by users | 2017-09-19T11:57:31.000388 | Mallie | pythondev_help_Mallie_2017-09-19T11:57:31.000388 | 1,505,822,251.000388 | 94,205 |
pythondev | help | Then, have the object task schedule its next run when it is done, and so on | 2017-09-19T11:57:48.000192 | Mallie | pythondev_help_Mallie_2017-09-19T11:57:48.000192 | 1,505,822,268.000192 | 94,206 |
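The "task schedules its next run when it is done" pattern Mallie describes maps to a Celery task whose last step is re-enqueuing itself with `apply_async(countdown=...)`. A minimal stand-in sketch using the stdlib `sched` module (no broker needed; `feed-42`, the interval, and the run counter are all illustrative):

```python
# Stand-in for the "task reschedules itself at the end of its run" pattern.
# In Celery this would be a bound task calling self.apply_async(countdown=...)
# as its final step; sched.scheduler plays that role here so the loop shape
# is visible without a broker.
import sched
import time

scheduler = sched.scheduler(time.monotonic, time.sleep)
runs = []

def process_object(obj_id, interval, remaining):
    runs.append(obj_id)                  # the real per-object work goes here
    if remaining > 0:                    # schedule the next run *last*, so a
        scheduler.enter(interval, 1,     # crash mid-task never double-books
                        process_object, (obj_id, interval, remaining - 1))

# kick off the first run; each run then schedules its successor
scheduler.enter(0, 1, process_object, ("feed-42", 0.01, 2))
scheduler.run()
print(runs)
```

Because the reschedule happens after the work, a slow run naturally pushes the next one back instead of stacking up, which is one reason this beats a fixed `celerybeat` entry for per-object intervals.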
pythondev | help | I'm not sure if I could do that without tracking state | 2017-09-19T11:57:58.000396 | Mallie | pythondev_help_Mallie_2017-09-19T11:57:58.000396 | 1,505,822,278.000396 | 94,207 |
pythondev | help | so for 1, no it won't have an issue - but you may have a bottleneck in terms of your numbers of workers. | 2017-09-19T11:58:11.000402 | Vada | pythondev_help_Vada_2017-09-19T11:58:11.000402 | 1,505,822,291.000402 | 94,208 |
pythondev | help | so regardless of ETA | 2017-09-19T11:58:20.000704 | Vada | pythondev_help_Vada_2017-09-19T11:58:20.000704 | 1,505,822,300.000704 | 94,209 |
pythondev | help | if there aren't enough workers to get the job done, it'll take a while | 2017-09-19T11:58:30.000664 | Vada | pythondev_help_Vada_2017-09-19T11:58:30.000664 | 1,505,822,310.000664 | 94,210 |
pythondev | help | Well now enqueuing them all at once only takes about 30 minutes with current config, so I have an external job to check the stats to make sure it doesn't need to be tweaked and so on. Spreading them out would be perfectly acceptable, it was just unmanageable with `celerybeat` obv. | 2017-09-19T12:00:01.000312 | Mallie | pythondev_help_Mallie_2017-09-19T12:00:01.000312 | 1,505,822,401.000312 | 94,211 |
pythondev | help | for 2) You can check what tasks are in the queue and inspect the workers (e.g. <http://docs.celeryproject.org/en/latest/userguide/workers.html?highlight=revoke#inspecting-workers>) | 2017-09-19T12:00:14.000059 | Vada | pythondev_help_Vada_2017-09-19T12:00:14.000059 | 1,505,822,414.000059 | 94,212 |
pythondev | help | Ah sick | 2017-09-19T12:00:49.000686 | Mallie | pythondev_help_Mallie_2017-09-19T12:00:49.000686 | 1,505,822,449.000686 | 94,213 |
pythondev | help | One of the things we have in house is an RSS feed processor which can't be run concurrently | 2017-09-19T12:00:56.000182 | Vada | pythondev_help_Vada_2017-09-19T12:00:56.000182 | 1,505,822,456.000182 | 94,214 |
pythondev | help | celery has a cookbook for this | 2017-09-19T12:01:03.000051 | Vada | pythondev_help_Vada_2017-09-19T12:01:03.000051 | 1,505,822,463.000051 | 94,215 |
pythondev | help | <http://docs.celeryproject.org/en/latest/tutorials/task-cookbook.html#ensuring-a-task-is-only-executed-one-at-a-time> | 2017-09-19T12:01:03.000419 | Vada | pythondev_help_Vada_2017-09-19T12:01:03.000419 | 1,505,822,463.000419 | 94,216 |
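The linked cookbook recipe relies on the cache's atomic `add()` (set only if absent) as a lock. A sketch of the same idea with a plain dict standing in for memcached/Django's cache; note the real recipe also sets an expiry on the lock so a crashed worker can't wedge it forever (all names here are illustrative):

```python
# Dict-based stand-in for the cookbook's "ensure a task is only executed
# one at a time" lock. cache.add() succeeds only when the key is absent,
# which is exactly what makes it usable as a mutex across workers.
_cache = {}

def cache_add(key, value):
    """Mimic cache.add(): set the key only if missing, report success."""
    if key in _cache:
        return False
    _cache[key] = value
    return True

def cache_delete(key):
    _cache.pop(key, None)

def feeds_processed():
    lock_id = "lock:process-feeds"
    if not cache_add(lock_id, "locked"):
        return "skipped: another worker holds the lock"
    try:
        return "processed"        # the non-concurrent RSS work happens here
    finally:
        cache_delete(lock_id)     # always release, even if the work raises

print(feeds_processed())
```

Swapping `_cache` for memcached keeps the same semantics across machines, since `add` is atomic server-side.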
pythondev | help | specific to django | 2017-09-19T12:01:17.000034 | Vada | pythondev_help_Vada_2017-09-19T12:01:17.000034 | 1,505,822,477.000034 | 94,217 |
pythondev | help | so that is worth looking at as well | 2017-09-19T12:01:24.000245 | Vada | pythondev_help_Vada_2017-09-19T12:01:24.000245 | 1,505,822,484.000245 | 94,218 |
pythondev | help | Great great <@Vada> :taco: | 2017-09-19T12:01:42.000270 | Mallie | pythondev_help_Mallie_2017-09-19T12:01:42.000270 | 1,505,822,502.00027 | 94,219 |
pythondev | help | el dubble taco | 2017-09-19T12:01:45.000616 | Mallie | pythondev_help_Mallie_2017-09-19T12:01:45.000616 | 1,505,822,505.000616 | 94,220 |
pythondev | help | :slightly_smiling_face: | 2017-09-19T12:01:53.000156 | Vada | pythondev_help_Vada_2017-09-19T12:01:53.000156 | 1,505,822,513.000156 | 94,221 |
pythondev | help | That'll be great to consider later if I have an interest in tweaking things for users that may warrant more frequent updates and so on so very cool | 2017-09-19T12:02:25.000470 | Mallie | pythondev_help_Mallie_2017-09-19T12:02:25.000470 | 1,505,822,545.00047 | 94,222 |
pythondev | help | The thing I query actually may change every second, but the 60 minutes is an acceptable "free" aggregate update | 2017-09-19T12:03:00.000120 | Mallie | pythondev_help_Mallie_2017-09-19T12:03:00.000120 | 1,505,822,580.00012 | 94,223 |
pythondev | help | So that'd be cool to be able to tweak it per-object | 2017-09-19T12:03:10.000454 | Mallie | pythondev_help_Mallie_2017-09-19T12:03:10.000454 | 1,505,822,590.000454 | 94,224 |
pythondev | help | And I just added `memcached` a few weeks ago | 2017-09-19T12:03:50.000142 | Mallie | pythondev_help_Mallie_2017-09-19T12:03:50.000142 | 1,505,822,630.000142 | 94,225 |
pythondev | help | Also, up to you if you want to nab the question <@Vada>, I think the alternatives you have raised are a good solution | 2017-09-19T12:05:22.000459 | Mallie | pythondev_help_Mallie_2017-09-19T12:05:22.000459 | 1,505,822,722.000459 | 94,226 |
pythondev | help | So I'd accept it and you could get them sweet SO duckets | 2017-09-19T12:05:44.000027 | Mallie | pythondev_help_Mallie_2017-09-19T12:05:44.000027 | 1,505,822,744.000027 | 94,227 |
pythondev | help | I mean in an ideal world you'd schedule a task every time it updates as a post-commit hook | 2017-09-19T12:08:53.000004 | Vada | pythondev_help_Vada_2017-09-19T12:08:53.000004 | 1,505,822,933.000004 | 94,228 |
pythondev | help | Well our deployment isn't based on a repo action - it is rolled into a single command but it requires someone (me) to tell it to release | 2017-09-19T12:10:16.000371 | Mallie | pythondev_help_Mallie_2017-09-19T12:10:16.000371 | 1,505,823,016.000371 | 94,229 |
pythondev | help | But if I were to leave `celerybeat` and just make sure each object schedules itself out at end run, if there is a necessary change it would still affect them all after a release so should be fine | 2017-09-19T12:11:18.000536 | Mallie | pythondev_help_Mallie_2017-09-19T12:11:18.000536 | 1,505,823,078.000536 | 94,230 |
pythondev | help | Every release I just restart `celerybeat` and `celeryd` in case related code changes - I have never experienced issues there | 2017-09-19T12:12:35.000305 | Mallie | pythondev_help_Mallie_2017-09-19T12:12:35.000305 | 1,505,823,155.000305 | 94,231 |
pythondev | help | Post db commit :wink: | 2017-09-19T12:12:38.000280 | Vada | pythondev_help_Vada_2017-09-19T12:12:38.000280 | 1,505,823,158.00028 | 94,232 |
pythondev | help | not git commit | 2017-09-19T12:12:48.000615 | Vada | pythondev_help_Vada_2017-09-19T12:12:48.000615 | 1,505,823,168.000615 | 94,233 |
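In Django the usual tool for Vada's post-db-commit idea is `transaction.on_commit()`: register the task enqueue so it fires only after the transaction actually commits, never from inside it. A toy sketch with a hand-rolled registry standing in for Django's (`schedule_update` is a hypothetical task name):

```python
# Toy illustration of "enqueue the task as a post-commit hook". The
# _on_commit list stands in for Django's transaction.on_commit() registry;
# in real code you'd call transaction.on_commit(lambda: task.delay(pk)).
_on_commit = []
enqueued = []

def on_commit(callback):
    _on_commit.append(callback)

def commit():
    for cb in _on_commit:
        cb()
    _on_commit.clear()

def save_object(obj_id):
    # ... write the row inside the transaction ...
    on_commit(lambda: enqueued.append(("schedule_update", obj_id)))

save_object(7)
assert enqueued == []    # nothing fires until the transaction commits
commit()
print(enqueued)
```

Deferring until commit matters because a worker that picks the task up immediately could query for a row the (still-open or rolled-back) transaction never made visible.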
pythondev | help | oh ha | 2017-09-19T12:12:49.000175 | Mallie | pythondev_help_Mallie_2017-09-19T12:12:49.000175 | 1,505,823,169.000175 | 94,234 |
pythondev | help | but I can see how that misled you | 2017-09-19T12:13:20.000277 | Vada | pythondev_help_Vada_2017-09-19T12:13:20.000277 | 1,505,823,200.000277 | 94,235 |
pythondev | help | I was a bit confused as to how that'd work lol | 2017-09-19T12:13:50.000317 | Mallie | pythondev_help_Mallie_2017-09-19T12:13:50.000317 | 1,505,823,230.000317 | 94,236 |
pythondev | help | :stuck_out_tongue: | 2017-09-19T12:14:52.000524 | Vada | pythondev_help_Vada_2017-09-19T12:14:52.000524 | 1,505,823,292.000524 | 94,237 |
pythondev | help | It would be quite amusing if committing code affected prod in that way | 2017-09-19T12:15:13.000418 | Vada | pythondev_help_Vada_2017-09-19T12:15:13.000418 | 1,505,823,313.000418 | 94,238 |
pythondev | help | but not a good idea | 2017-09-19T12:15:16.000138 | Vada | pythondev_help_Vada_2017-09-19T12:15:16.000138 | 1,505,823,316.000138 | 94,239 |
pythondev | help | "Our software is so agile we couldn't stop it if we wanted to" | 2017-09-19T12:16:03.000223 | Mallie | pythondev_help_Mallie_2017-09-19T12:16:03.000223 | 1,505,823,363.000223 | 94,240 |
pythondev | help | Well sheesh after my experience on SO and talking within other groups I was thinking no one used celery | 2017-09-19T12:18:25.000362 | Mallie | pythondev_help_Mallie_2017-09-19T12:18:25.000362 | 1,505,823,505.000362 | 94,241 |
pythondev | help | So this has been great | 2017-09-19T12:18:37.000079 | Mallie | pythondev_help_Mallie_2017-09-19T12:18:37.000079 | 1,505,823,517.000079 | 94,242 |
pythondev | help | I use it quite a bit | 2017-09-19T12:19:25.000687 | Meg | pythondev_help_Meg_2017-09-19T12:19:25.000687 | 1,505,823,565.000687 | 94,243 |
pythondev | help | and i think <@Patty> made a few PRs to the project | 2017-09-19T12:19:39.000472 | Meg | pythondev_help_Meg_2017-09-19T12:19:39.000472 | 1,505,823,579.000472 | 94,244 |
pythondev | help | <@Collette> is one of the core contributors as well | 2017-09-19T12:20:57.000309 | Vada | pythondev_help_Vada_2017-09-19T12:20:57.000309 | 1,505,823,657.000309 | 94,245 |
pythondev | help | it's used, a lot | 2017-09-19T12:21:06.000452 | Vada | pythondev_help_Vada_2017-09-19T12:21:06.000452 | 1,505,823,666.000452 | 94,246 |
pythondev | help | Joining this Slack has already paid for itself :money_mouth_face: | 2017-09-19T12:21:10.000368 | Mallie | pythondev_help_Mallie_2017-09-19T12:21:10.000368 | 1,505,823,670.000368 | 94,247 |
pythondev | help | but is sadly completely underfunded | 2017-09-19T12:21:12.000220 | Vada | pythondev_help_Vada_2017-09-19T12:21:12.000220 | 1,505,823,672.00022 | 94,248 |
pythondev | help | <@Vada> I got that impression, which is unfortunate because is there _any_ alternative in Python? | 2017-09-19T12:21:49.000118 | Mallie | pythondev_help_Mallie_2017-09-19T12:21:49.000118 | 1,505,823,709.000118 | 94,249 |
pythondev | help | there are a few, but none so complete | 2017-09-19T12:22:08.000625 | Vada | pythondev_help_Vada_2017-09-19T12:22:08.000625 | 1,505,823,728.000625 | 94,250 |
pythondev | help | why do i need to restart server (python manage.py runserver) for my admin data changes to appear on the site | 2017-09-19T12:22:15.000032 | Domonique | pythondev_help_Domonique_2017-09-19T12:22:15.000032 | 1,505,823,735.000032 | 94,251 |
pythondev | help | <@Domonique> we have a very active <#C0LMFRMB5|django> channel which is the best place to ask django q's :wink: | 2017-09-19T12:22:30.000586 | Vada | pythondev_help_Vada_2017-09-19T12:22:30.000586 | 1,505,823,750.000586 | 94,252 |
pythondev | help | <@Mallie> I've used huey and pythonrq | 2017-09-19T12:23:12.000469 | Vada | pythondev_help_Vada_2017-09-19T12:23:12.000469 | 1,505,823,792.000469 | 94,253 |
pythondev | help | thanks , i guess I should learn to use slack , before django :slightly_smiling_face: | 2017-09-19T12:23:24.000302 | Domonique | pythondev_help_Domonique_2017-09-19T12:23:24.000302 | 1,505,823,804.000302 | 94,254 |
pythondev | help | haha it's ok, this is an appropriate channel but given the growth of the team to nearly 10k it can be hard to filter through all the different topics so we encourage people going to specialist channels | 2017-09-19T12:24:06.000749 | Vada | pythondev_help_Vada_2017-09-19T12:24:06.000749 | 1,505,823,846.000749 | 94,255 |
pythondev | help | so, I use celery a lot, and the previous dev rolled his own system using zeromq and other bits and pieces | 2017-09-19T12:24:29.000357 | Meg | pythondev_help_Meg_2017-09-19T12:24:29.000357 | 1,505,823,869.000357 | 94,256 |
pythondev | help | it was a pretty fragile async task network, tbh | 2017-09-19T12:24:44.000374 | Meg | pythondev_help_Meg_2017-09-19T12:24:44.000374 | 1,505,823,884.000374 | 94,257 |
pythondev | help | celery is _much_ more robust and featured | 2017-09-19T12:24:52.000422 | Meg | pythondev_help_Meg_2017-09-19T12:24:52.000422 | 1,505,823,892.000422 | 94,258 |
pythondev | help | celery is amazing. i built biglearn using it. | 2017-09-19T12:38:48.000652 | Johana | pythondev_help_Johana_2017-09-19T12:38:48.000652 | 1,505,824,728.000652 | 94,259 |
pythondev | help | gotta love the groups, signatures, and chains. | 2017-09-19T12:41:32.000508 | Johana | pythondev_help_Johana_2017-09-19T12:41:32.000508 | 1,505,824,892.000508 | 94,260 |
pythondev | help | how do you structure your docs dir for github pages? | 2017-09-19T13:35:09.000478 | Orpha | pythondev_help_Orpha_2017-09-19T13:35:09.000478 | 1,505,828,109.000478 | 94,261 |
pythondev | help | <@Orpha> first 3 links all have good info <http://bfy.tw/E0wi> | 2017-09-19T13:47:19.000294 | Bruno | pythondev_help_Bruno_2017-09-19T13:47:19.000294 | 1,505,828,839.000294 | 94,262 |
pythondev | help | Thnx | 2017-09-19T13:51:37.000727 | Orpha | pythondev_help_Orpha_2017-09-19T13:51:37.000727 | 1,505,829,097.000727 | 94,263 |
pythondev | help | does anybody have any experience working with `django-graphene` and GraphQL? i'm having some difficulty creating a custom queryset for a model file and would love some help: <https://stackoverflow.com/questions/46241419/annotate-with-django-graphene-and-filters> | 2017-09-19T14:40:08.000031 | Keshia | pythondev_help_Keshia_2017-09-19T14:40:08.000031 | 1,505,832,008.000031 | 94,264 |
pythondev | help | <@Keshia> there is a <#C0LMFRMB5|django> channel | 2017-09-19T15:01:14.000535 | Meg | pythondev_help_Meg_2017-09-19T15:01:14.000535 | 1,505,833,274.000535 | 94,265 |
pythondev | help | thank you | 2017-09-19T15:01:32.000775 | Keshia | pythondev_help_Keshia_2017-09-19T15:01:32.000775 | 1,505,833,292.000775 | 94,266 |
pythondev | help | I have been guided from <#C07EFN21K|random> in regards to some help I need with webpack. I'm having some scss issues that I need some guidance on. | 2017-09-19T17:23:37.000269 | Kayce | pythondev_help_Kayce_2017-09-19T17:23:37.000269 | 1,505,841,817.000269 | 94,267 |
pythondev | help | you’re going to want to require your scss in your js.
```
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";
require('./sass/main.scss');
```
in my webpack config i have this for scss loader
```
test: /\.scss$/i,
loader: ExtractTextPlugin.extract('style-loader', 'css!sass')
},
``` | 2017-09-19T17:53:48.000086 | Johana | pythondev_help_Johana_2017-09-19T17:53:48.000086 | 1,505,843,628.000086 | 94,268 |
pythondev | help | the app i’m pulling from is a bit dated so this may have changed syntactically. | 2017-09-19T17:54:39.000412 | Johana | pythondev_help_Johana_2017-09-19T17:54:39.000412 | 1,505,843,679.000412 | 94,269 |
pythondev | help | require and imports :slightly_smiling_face: | 2017-09-19T17:58:29.000078 | Meg | pythondev_help_Meg_2017-09-19T17:58:29.000078 | 1,505,843,909.000078 | 94,270 |
pythondev | help | Ok, is it absolutely necessary to have a js file for the scss import? Or is there a way to set that up in the webpack.config.js? | 2017-09-19T17:59:51.000103 | Kayce | pythondev_help_Kayce_2017-09-19T17:59:51.000103 | 1,505,843,991.000103 | 94,271 |
pythondev | help | it’s webpack that will see the import and extract the css for you. | 2017-09-19T18:01:03.000001 | Johana | pythondev_help_Johana_2017-09-19T18:01:03.000001 | 1,505,844,063.000001 | 94,272 |
pythondev | help | as it builds your project. | 2017-09-19T18:01:09.000131 | Johana | pythondev_help_Johana_2017-09-19T18:01:09.000131 | 1,505,844,069.000131 | 94,273 |
pythondev | help | my style above also uses the sass way guide. <http://thesassway.com/beginner/how-to-structure-a-sass-project> | 2017-09-19T18:01:48.000284 | Johana | pythondev_help_Johana_2017-09-19T18:01:48.000284 | 1,505,844,108.000284 | 94,274 |
pythondev | help | Mine does as well. I'll setup a js file to import the scss and see how that goes. | 2017-09-19T18:03:24.000418 | Kayce | pythondev_help_Kayce_2017-09-19T18:03:24.000418 | 1,505,844,204.000418 | 94,275 |
pythondev | help | in your plugin you are already specifying what you want the output to be. | 2017-09-19T18:04:03.000143 | Johana | pythondev_help_Johana_2017-09-19T18:04:03.000143 | 1,505,844,243.000143 | 94,276 |
pythondev | help | `new ExtractTextPlugin('./website/static/css/bundle.css')` | 2017-09-19T18:04:11.000250 | Johana | pythondev_help_Johana_2017-09-19T18:04:11.000250 | 1,505,844,251.00025 | 94,277 |
pythondev | help | you can even include hashes and shas if you want for caching. | 2017-09-19T18:04:42.000039 | Johana | pythondev_help_Johana_2017-09-19T18:04:42.000039 | 1,505,844,282.000039 | 94,278 |
pythondev | help | Got it working. I still don't see why there can't be a way to roughly do the following
```
module.exports = {
scss-entry : "./path/to/style.scss",
scss-output : "./path/to/style.css",
use : [{"sass-2-css" : {source-map : True}}, "autoprefixer"]
...
}
``` | 2017-09-19T18:36:00.000253 | Kayce | pythondev_help_Kayce_2017-09-19T18:36:00.000253 | 1,505,846,160.000253 | 94,279 |
pythondev | help | 'roughly ' as in very roughly | 2017-09-19T18:36:49.000191 | Kayce | pythondev_help_Kayce_2017-09-19T18:36:49.000191 | 1,505,846,209.000191 | 94,280 |
pythondev | help | Yea, when I've done that in the past webpack would make extra js files. I can try it later. | 2017-09-19T18:47:18.000067 | Johana | pythondev_help_Johana_2017-09-19T18:47:18.000067 | 1,505,846,838.000067 | 94,281 |
pythondev | help | Since I don't need to use any JS at the moment, I would rather not setup a js file specifically for requiring my scss | 2017-09-19T18:48:27.000059 | Kayce | pythondev_help_Kayce_2017-09-19T18:48:27.000059 | 1,505,846,907.000059 | 94,282 |
pythondev | help | hello all! I need to export a sklearn model to a flask api. I can export the trained model using pickle, but I'm stuck trying to export my fitted vectorizer.
Does anyone have an approach to resolve this? | 2017-09-19T19:10:56.000104 | Dortha | pythondev_help_Dortha_2017-09-19T19:10:56.000104 | 1,505,848,256.000104 | 94,283 |
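The usual answer here is to persist the fitted vectorizer alongside the model, either as a single sklearn `Pipeline` or by pickling both in one bundle (`joblib.dump` is what sklearn's docs recommend for the real thing). Since fitted sklearn objects pickle like any other Python object, a stdlib-only stand-in shows the round trip (`DummyVectorizer`/`DummyModel` are placeholders for the real fitted objects):

```python
# Bundle the fitted vectorizer and model into ONE pickled object, so the
# Flask app can never load a model with a mismatched vocabulary.
import pickle

class DummyVectorizer:
    def __init__(self, vocab):
        self.vocab = vocab            # fitted state lives on the instance

class DummyModel:
    def __init__(self, coef):
        self.coef = coef

bundle = {"vectorizer": DummyVectorizer({"spam": 0, "ham": 1}),
          "model": DummyModel([0.4, -0.2])}

blob = pickle.dumps(bundle)           # write this to disk at train time
restored = pickle.loads(blob)         # load it once at Flask startup
print(restored["vectorizer"].vocab)
```

At serving time you then call `restored["vectorizer"].transform(...)` before `restored["model"].predict(...)`, guaranteeing the same feature mapping used in training.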
pythondev | help | pythonistas - I'm studying the various parallel processing modules (threading, multiprocessing, concurrent.futures) and could use some help. I realize that threading allows you to share variables between threads (which apparently MP does not allow). Is there a consensus on which library actually processes concurrent tasks faster? Which module is used for most applications? Can anyone provide a few reasons why I would use one over the other? | 2017-09-19T20:46:19.000249 | Margeret | pythondev_help_Margeret_2017-09-19T20:46:19.000249 | 1,505,853,979.000249 | 94,284 |
pythondev | help | <@Margeret>, I too am a n00b with python parallel processing (p^3?) modules. After looking at some examples in `multiprocessing` and `concurrent`, I decided to go with `concurrent` mainly because it was easier for me to understand and fit my example (parallel api requests). | 2017-09-19T21:21:36.000054 | Winnifred | pythondev_help_Winnifred_2017-09-19T21:21:36.000054 | 1,505,856,096.000054 | 94,285 |
pythondev | help | <@Winnifred> can you share your code? I'm interested how you implemented `concurrent` | 2017-09-19T21:38:14.000243 | Margeret | pythondev_help_Margeret_2017-09-19T21:38:14.000243 | 1,505,857,094.000243 | 94,286 |
pythondev | help | <@Margeret>, I can, but I think the docs would be more helpful since the use context is fairly straightforward. Check that out here: <https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example>. | 2017-09-19T21:51:32.000118 | Winnifred | pythondev_help_Winnifred_2017-09-19T21:51:32.000118 | 1,505,857,892.000118 | 94,287 |
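A minimal `concurrent.futures` sketch in the spirit of the linked docs: fan a function out over inputs with a thread pool and gather results. `fetch()` is a stand-in for an API request; I/O-bound work like this is where threads shine, while CPU-bound work is where `multiprocessing` earns its keep, since the GIL lets only one thread run Python bytecode at a time:

```python
# Thread-pool fan-out: submit one future per URL, collect as they finish.
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    # placeholder for a real network round trip (e.g. requests.get(url))
    return (url, len(url))

urls = ["https://a.example", "https://bb.example", "https://ccc.example"]

with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(fetch, u): u for u in urls}
    results = dict(f.result() for f in as_completed(futures))

print(results)
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` keeps the exact same API, which is a big part of why `concurrent.futures` is the easy on-ramp: you can change your mind about threads vs. processes in one line.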
pythondev | help | Has anyone here integrated a Bitcoin wallet with Django? | 2017-09-20T02:42:12.000250 | Collen | pythondev_help_Collen_2017-09-20T02:42:12.000250 | 1,505,875,332.00025 | 94,288 |
pythondev | help | hi is there someone who can help me with django serializer | 2017-09-20T04:55:24.000156 | Meredith | pythondev_help_Meredith_2017-09-20T04:55:24.000156 | 1,505,883,324.000156 | 94,289 |
pythondev | help | ? | 2017-09-20T04:55:29.000119 | Meredith | pythondev_help_Meredith_2017-09-20T04:55:29.000119 | 1,505,883,329.000119 | 94,290 |
pythondev | help | it’s usually best just to ask your question, and if someone knows they’ll pipe up | 2017-09-20T04:56:00.000514 | Junita | pythondev_help_Junita_2017-09-20T04:56:00.000514 | 1,505,883,360.000514 | 94,291 |
pythondev | help | thanks <@Junita> for the heads up, | 2017-09-20T04:56:28.000140 | Meredith | pythondev_help_Meredith_2017-09-20T04:56:28.000140 | 1,505,883,388.00014 | 94,292 |
pythondev | help | I can't figure out how to populate the serializer fields with any model related to them | 2017-09-20T04:57:04.000553 | Meredith | pythondev_help_Meredith_2017-09-20T04:57:04.000553 | 1,505,883,424.000553 | 94,293 |
pythondev | help | <@Meredith> django rest serializers or django serializers? | 2017-09-20T05:20:19.000201 | Vada | pythondev_help_Vada_2017-09-20T05:20:19.000201 | 1,505,884,819.000201 | 94,294 |
pythondev | help | Some people who are experts in how to parallelize python code? | 2017-09-20T05:50:11.000291 | Adell | pythondev_help_Adell_2017-09-20T05:50:11.000291 | 1,505,886,611.000291 | 94,295 |
pythondev | help | what's your question? | 2017-09-20T06:17:50.000052 | Suellen | pythondev_help_Suellen_2017-09-20T06:17:50.000052 | 1,505,888,270.000052 | 94,296 |
pythondev | help | <@Suellen> I have this code; the commented part is my attempt at multiprocessing. The issue is that I can see the update function takes a new image as input, but I can't extract the position anymore from the instance. If I run the code without the multiprocessing part it works fine with extraction of position, but with it, it doesn't, and I can't really see the reason or how to fix it :thinking_face::
```
class Track(correlation_tracker):
    def __init__(self):
        correlation_tracker.__init__(self)

    def start(self, img, det):
        correlation_tracker.start_track(self, img, det)

    def renew(self, img):
        print("her")
        score = correlation_tracker.update(self, img)
        print(score)
        # p = Pool()
        # result = p.amap(correlation_tracker.update, (self, img))
        # print(result.get())

    def loc(self):
        return correlation_tracker.get_position(self)


def update(tracker, image):
    tracker.renew(image)
```
```
def main():
    tracker = [Track() for _ in range(len(points))]
    [tracker[i].start(img, rectangle(*rect)) for i, rect in enumerate(points)]
    p = Pool(8)
    while True:
        # pool = mp.Pool(mp.cpu_count())
        procs = []
        retval, img = cam.read()
        if not retval:
            print("Couldn't read frame from device")
        start = time.time()
        for i in range(len(tracker)):
            update(tracker[i], img)
            # p = mp.Process(target=update, args=(tracker[i], img))
            # p.start()
            # procs.append(p)
        # for p in procs:
        #     p.join()
        print("Time taken (tracker update):{0:.5f}".format(time.time() - start))
        for i in range(len(tracker)):
            rect = tracker[i].get_position()
            rect = tracker[i].loc()
            pt1 = (int(rect.left()), int(rect.top()))
            pt2 = (int(rect.right()), int(rect.bottom()))
            cv2.rectangle(img, pt1, pt2, (255, 255, 255), 3)
        cv2.imshow("Image", img)
``` | 2017-09-20T07:14:26.000385 | Adell | pythondev_help_Adell_2017-09-20T07:14:26.000385 | 1,505,891,666.000385 | 94,297 |
pythondev | help | What is `correlation_tracker`? Is that safe for use across multiple processes? | 2017-09-20T07:25:00.000189 | Gabriele | pythondev_help_Gabriele_2017-09-20T07:25:00.000189 | 1,505,892,300.000189 | 94,298 |
pythondev | help | And when you say you can't extract the position, what actually happens instead? | 2017-09-20T07:25:13.000204 | Gabriele | pythondev_help_Gabriele_2017-09-20T07:25:13.000204 | 1,505,892,313.000204 | 94,299 |
pythondev | help | <@Gabriele> correlation_tracker is from the dlib library <http://dlib.net/imaging.html#correlation_tracker>. I'm not sure if it's safe, but I suppose since I initialise several instances of it, each instance should be able to run in its own process? (I might be wrong here)
About extracting the position, I can execute the command but I don't get an updated position from the tracker; it stays at the same value as when I initialised it. | 2017-09-20T07:41:21.000197 | Adell | pythondev_help_Adell_2017-09-20T07:41:21.000197 | 1,505,893,281.000197 | 94,300 |
pythondev | help | It's hard to make assumptions about 3rd party software although I'd hope that each process would be sufficiently isolated. What is the actual result variable you're watching here, or the function you expect to do the work? I don't know dlib. | 2017-09-20T07:53:48.000078 | Gabriele | pythondev_help_Gabriele_2017-09-20T07:53:48.000078 | 1,505,894,028.000078 | 94,301 |
pythondev | help | I have an image with some detections on it (bounding boxes marking each person).
I'm initialising a subclass of correlation_tracker, which I called Track, for each detection.
On the next image I'm calling each subclass's update function with the new image; the function updates the tracker to a new position depending on the image input (it searches nearby its old position) and gives a confidence score for how well it did as an output.
Then I call the subclass's get_position(), which should return the tracker's position, but this position stays static (with the same numbers as when I initialised it with the `[tracker[i].start(img, rectangle(*rect)) for i, rect in enumerate(points)]` call). | 2017-09-20T07:58:58.000125 | Adell | pythondev_help_Adell_2017-09-20T07:58:58.000125 | 1,505,894,338.000125 | 94,302 |
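A likely culprit in the multiprocessing version: each worker receives a *pickled copy* of the tracker, so `update()` mutates the copy and the parent's instance never changes. A minimal illustration with `pickle` (which is effectively how `multiprocessing` ships objects to workers; the `Tracker` class here is a toy stand-in, not dlib's):

```python
# Demonstrate why in-place mutation inside a worker process is invisible
# to the parent: the worker operates on a deserialized copy.
import pickle

class Tracker:
    def __init__(self):
        self.position = (0, 0)

    def update(self):
        self.position = (5, 7)        # mutate in place...
        return self.position          # ...but also return the new state

parent = Tracker()
child = pickle.loads(pickle.dumps(parent))   # what a worker process receives
child.update()                               # mutates the copy only
print(parent.position, child.position)
```

If that is what's happening, the usual fix is to have the worker function *return* the updated position (e.g. collect the results of `pool.map`) and use those returned values in the parent, rather than relying on side effects on the original objects.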