| workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__ |
|---|---|---|---|---|---|---|---|
pythondev | help | ```
$ python2
Python 2.7.13 (default, Jul 28 2017, 10:11:31)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> reversed([1,2,3])
<listreverseiterator object at 0x10795fa10>
``` | 2017-10-02T08:02:56.000083 | Collette | pythondev_help_Collette_2017-10-02T08:02:56.000083 | 1,506,931,376.000083 | 95,303 |
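For comparison, Python 3 returns a `list_reverseiterator` from the same call; in both versions `reversed()` is lazy, and `list()` materializes it (a quick sketch):

```python
# reversed() returns a lazy iterator in both Python 2 and 3
# (listreverseiterator vs list_reverseiterator); list() materializes it
it = reversed([1, 2, 3])
print(list(it))  # [3, 2, 1]
```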
pythondev | help | ¯\_(ツ)_/¯ | 2017-10-02T08:03:23.000029 | Collette | pythondev_help_Collette_2017-10-02T08:03:23.000029 | 1,506,931,403.000029 | 95,304 |
pythondev | help | since 2.4 ;( I know some still on 2.2 :joy: | 2017-10-02T08:04:01.000145 | Carri | pythondev_help_Carri_2017-10-02T08:04:01.000145 | 1,506,931,441.000145 | 95,305 |
pythondev | help | Oh, <@Patty> I made a mistake: `[::-11]`
This is the corrected version: ```
$ python3.6 -m timeit -s 'l = [list(range(x, x+100)) for x in range(100**2,-1,-100)]' '[i for subl in l for i in subl[::-1]]'
1000 loops, best of 3: 365 usec per loop
``` | 2017-10-02T08:04:33.000208 | Collette | pythondev_help_Collette_2017-10-02T08:04:33.000208 | 1,506,931,473.000208 | 95,306 |
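Scaled down to a toy input (the small list that comes up later in the thread), the corrected comprehension behaves like this:

```python
l = [[9, 10, 11], [6, 7, 8], [3, 4, 5], [0, 1, 2]]
# flatten while reversing each sublist; since the sublists themselves are
# already ordered by descending contents, the result is fully descending
flat = [i for subl in l for i in subl[::-1]]
print(flat)  # [11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```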
pythondev | help | Even worse than `reversed` | 2017-10-02T08:04:42.000248 | Collette | pythondev_help_Collette_2017-10-02T08:04:42.000248 | 1,506,931,482.000248 | 95,307 |
pythondev | help | `[[9, 10, 11], [6, 7, 8], [3, 4, 5], [0, 1, 2]]`
If all the sub lists were linked lists, then you wouldn't need to iterate over every sub list,
because all you'd have to do is connect 8 to 9, 5 to 6, and so forth. | 2017-10-02T08:05:13.000096 | Winnie | pythondev_help_Winnie_2017-10-02T08:05:13.000096 | 1,506,931,513.000096 | 95,308 |
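The closest stdlib analogue to splicing linked lists together without touching the inner elements is a lazy view via `itertools.chain` (a sketch, not a true O(1) splice; it still visits elements when iterated):

```python
from itertools import chain

l = [[9, 10, 11], [6, 7, 8], [3, 4, 5], [0, 1, 2]]
# chain.from_iterable walks the sublists lazily, without copying them
# into a new list up front
flat = chain.from_iterable(l)
print(list(flat))  # [9, 10, 11, 6, 7, 8, 3, 4, 5, 0, 1, 2]
```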
pythondev | help | <@Patty> | 2017-10-02T08:05:48.000038 | Winnie | pythondev_help_Winnie_2017-10-02T08:05:48.000038 | 1,506,931,548.000038 | 95,309 |
pythondev | help | <@Collette> hmm maybe that changed 2 to 3 | 2017-10-02T08:06:04.000219 | Patty | pythondev_help_Patty_2017-10-02T08:06:04.000219 | 1,506,931,564.000219 | 95,310 |
pythondev | help | <@Winnie> how so? `10 -> 11`, but you need `11 -> 10` from your example output. | 2017-10-02T08:06:17.000308 | Collette | pythondev_help_Collette_2017-10-02T08:06:17.000308 | 1,506,931,577.000308 | 95,311 |
pythondev | help | @evan but they aren’t so it’s a moot point | 2017-10-02T08:06:21.000021 | Patty | pythondev_help_Patty_2017-10-02T08:06:21.000021 | 1,506,931,581.000021 | 95,312 |
pythondev | help | <@Patty> I know, but what I'm asking is, is there some way to make them so | 2017-10-02T08:06:56.000115 | Winnie | pythondev_help_Winnie_2017-10-02T08:06:56.000115 | 1,506,931,616.000115 | 95,313 |
pythondev | help | <@Winnie> not without changing how lists fundamentally work | 2017-10-02T08:07:25.000380 | Patty | pythondev_help_Patty_2017-10-02T08:07:25.000380 | 1,506,931,645.00038 | 95,314 |
pythondev | help | <@Joie> 2.4 hasn't been updated in almost 10 years, 2.2 in almost 15. It's not worth writing backwards-compatible code for these versions. Especially as you can normally simply upgrade to 2.7 without many (if any) code changes. | 2017-10-02T08:07:27.000170 | Vada | pythondev_help_Vada_2017-10-02T08:07:27.000170 | 1,506,931,647.00017 | 95,315 |
pythondev | help | <@Winnie> mind answering my question? | 2017-10-02T08:07:40.000044 | Collette | pythondev_help_Collette_2017-10-02T08:07:40.000044 | 1,506,931,660.000044 | 95,316 |
pythondev | help | <@Patty> but a deque is a double ended linked list right? | 2017-10-02T08:08:09.000330 | Winnie | pythondev_help_Winnie_2017-10-02T08:08:09.000330 | 1,506,931,689.00033 | 95,317 |
pythondev | help | <@Collette> | 2017-10-02T08:08:29.000117 | Winnie | pythondev_help_Winnie_2017-10-02T08:08:29.000117 | 1,506,931,709.000117 | 95,318 |
pythondev | help | Some are still on windows xp and RHEL 5. Does that mean we should always imply windows xp and RHEL 5 when talking about programming and such? | 2017-10-02T08:08:54.000139 | Collette | pythondev_help_Collette_2017-10-02T08:08:54.000139 | 1,506,931,734.000139 | 95,319 |
pythondev | help | RHEL 4, even. | 2017-10-02T08:09:23.000328 | Collette | pythondev_help_Collette_2017-10-02T08:09:23.000328 | 1,506,931,763.000328 | 95,320 |
pythondev | help | No, but there are considerations to make. If you're writing a library for other users, then you should consider the version boundaries you wish to support. If it's your own internal code, then it's a non-issue | 2017-10-02T08:09:41.000239 | Carri | pythondev_help_Carri_2017-10-02T08:09:41.000239 | 1,506,931,781.000239 | 95,321 |
pythondev | help | python3.5+ | 2017-10-02T08:10:12.000256 | Collette | pythondev_help_Collette_2017-10-02T08:10:12.000256 | 1,506,931,812.000256 | 95,322 |
pythondev | help | easy | 2017-10-02T08:10:15.000449 | Collette | pythondev_help_Collette_2017-10-02T08:10:15.000449 | 1,506,931,815.000449 | 95,323 |
pythondev | help | [a:[9, 10, 11]:b, c:[6, 7, 8]:d, e:[3, 4, 5]:f, g:[0, 1, 2]:h]
b is the end of the list, a then points to c, d then points to e...
you never need to access the inner elements, just the head and tail of each list | 2017-10-02T08:10:44.000109 | Winnie | pythondev_help_Winnie_2017-10-02T08:10:44.000109 | 1,506,931,844.000109 | 95,324 |
pythondev | help | <@Collette> | 2017-10-02T08:11:27.000041 | Winnie | pythondev_help_Winnie_2017-10-02T08:11:27.000041 | 1,506,931,887.000041 | 95,325 |
pythondev | help | Yep as <@Collette> says <@Carri> most people nowadays will only support 3+, 3.5+ especially with async features. A person writing a library should only have to consider what they personally need to support. It should be the businesses'/other people's duty to fulfill the requirements of the software they want to use, instead of forcing their requirements onto the provider | 2017-10-02T08:12:45.000008 | Vada | pythondev_help_Vada_2017-10-02T08:12:45.000008 | 1,506,931,965.000008 | 95,326 |
pythondev | help | especially in open source where it is free | 2017-10-02T08:12:52.000014 | Vada | pythondev_help_Vada_2017-10-02T08:12:52.000014 | 1,506,931,972.000014 | 95,327 |
pythondev | help | *when I say nowadays, I mean when starting a new library nowadays - not historical projects nowadays (although that looks to be starting to happen as well) | 2017-10-02T08:13:38.000248 | Vada | pythondev_help_Vada_2017-10-02T08:13:38.000248 | 1,506,932,018.000248 | 95,328 |
pythondev | help | <@Winnie> yes deque is linked lists under the hood in c land | 2017-10-02T08:13:47.000051 | Patty | pythondev_help_Patty_2017-10-02T08:13:47.000051 | 1,506,932,027.000051 | 95,329 |
pythondev | help | <@Patty> so I guess the real moot point is that it's n time anyway since you'd have to convert | 2017-10-02T08:15:07.000204 | Winnie | pythondev_help_Winnie_2017-10-02T08:15:07.000204 | 1,506,932,107.000204 | 95,330 |
pythondev | help | <@Winnie> to be honest, if you care about such things you almost certainly picked the wrong language | 2017-10-02T08:16:56.000214 | Collette | pythondev_help_Collette_2017-10-02T08:16:56.000214 | 1,506,932,216.000214 | 95,331 |
pythondev | help | <@Meg> lists are just c arrays? | 2017-10-02T08:17:14.000347 | Winnie | pythondev_help_Winnie_2017-10-02T08:17:14.000347 | 1,506,932,234.000347 | 95,332 |
pythondev | help | I don’t think there is any reason not to try to squeeze time out of python, or to do exercises to show best practices in terms of timing | 2017-10-02T08:18:07.000126 | Patty | pythondev_help_Patty_2017-10-02T08:18:07.000126 | 1,506,932,287.000126 | 95,333 |
pythondev | help | yeah | 2017-10-02T08:18:17.000234 | Winnie | pythondev_help_Winnie_2017-10-02T08:18:17.000234 | 1,506,932,297.000234 | 95,334 |
pythondev | help | But I'm the only one who actually tries to measure these times | 2017-10-02T08:18:36.000049 | Collette | pythondev_help_Collette_2017-10-02T08:18:36.000049 | 1,506,932,316.000049 | 95,335 |
pythondev | help | I did, too | 2017-10-02T08:19:08.000209 | Patty | pythondev_help_Patty_2017-10-02T08:19:08.000209 | 1,506,932,348.000209 | 95,336 |
pythondev | help | so have I :stuck_out_tongue: | 2017-10-02T08:19:32.000329 | Winnie | pythondev_help_Winnie_2017-10-02T08:19:32.000329 | 1,506,932,372.000329 | 95,337 |
pythondev | help | uh, slack threads | 2017-10-02T08:19:36.000004 | Collette | pythondev_help_Collette_2017-10-02T08:19:36.000004 | 1,506,932,376.000004 | 95,338 |
pythondev | help | > if you care about such things you almost certainly picked wrong language
I don't agree with this meme. | 2017-10-02T08:19:47.000192 | Suellen | pythondev_help_Suellen_2017-10-02T08:19:47.000192 | 1,506,932,387.000192 | 95,339 |
pythondev | help | <@Patty> do you really think your examples are correct? That lists of three elements are good inputs to measure performances of various algorithms? | 2017-10-02T08:20:34.000239 | Collette | pythondev_help_Collette_2017-10-02T08:20:34.000239 | 1,506,932,434.000239 | 95,340 |
pythondev | help | <@Patty>, <@Collette> got a q about using redis as a backend for celery. I have a ton of short lived tasks executing as part of a chord with a callback afterwards. Due to the way a chord works, each task needs to return a result, which is saved in the task key in redis.
Over the weekend, got this error message in redis:
`Can't save in background: fork: Cannot allocate memory`,
with corresponding errors in the celery workers:
```ResponseError: Command # 1 (SET celery-task-meta-26aa6a83-2238-4f49-b3ec-555be58bfd91 {"status": "SUCCESS", "traceback": null, "result": "True", "children": []}) of pipeline caused error: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.```
I'm running this on an AWS T2 small instance with 2GB RAM, and there are about 2.5M keys in the redis db. I could use a swap file, but is the best solution a larger instance with more RAM? | 2017-10-02T08:20:45.000207 | Meg | pythondev_help_Meg_2017-10-02T08:20:45.000207 | 1,506,932,445.000207 | 95,341 |
pythondev | help | <@Collette> it was a quick answer while I was on the train | 2017-10-02T08:21:01.000378 | Patty | pythondev_help_Patty_2017-10-02T08:21:01.000378 | 1,506,932,461.000378 | 95,342 |
pythondev | help | <@Collette>
```
evan@mint ~ $ python3.6 -m timeit -s 'from collections import deque; l = [list(range(x, x+100)) for x in range(100**2,-1,-100)]; nl = deque()' 'for i in l: nl.extendleft(reversed(i))'
10000 loops, best of 3: 61.4 usec per loop
``` | 2017-10-02T08:21:13.000064 | Winnie | pythondev_help_Winnie_2017-10-02T08:21:13.000064 | 1,506,932,473.000064 | 95,343 |
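Scaled down to the toy list from earlier, the deque approach works like this (note that `extendleft` prepends element by element, so `extendleft(reversed(i))` splices each sublist onto the front in its original internal order):

```python
from collections import deque

l = [[9, 10, 11], [6, 7, 8], [3, 4, 5], [0, 1, 2]]
nl = deque()
# extendleft() appends each element to the LEFT, so feeding it reversed(i)
# ends up prepending the sublist in its original order
for i in l:
    nl.extendleft(reversed(i))
print(list(nl))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```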
pythondev | help | <@Meg> my suggestion is always to use a real message broker, like rabbitmq | 2017-10-02T08:21:21.000154 | Collette | pythondev_help_Collette_2017-10-02T08:21:21.000154 | 1,506,932,481.000154 | 95,344 |
pythondev | help | I'm using rabbit as a broker | 2017-10-02T08:21:32.000178 | Meg | pythondev_help_Meg_2017-10-02T08:21:32.000178 | 1,506,932,492.000178 | 95,345 |
pythondev | help | and redis as the backend | 2017-10-02T08:21:36.000069 | Meg | pythondev_help_Meg_2017-10-02T08:21:36.000069 | 1,506,932,496.000069 | 95,346 |
pythondev | help | ah | 2017-10-02T08:21:49.000040 | Collette | pythondev_help_Collette_2017-10-02T08:21:49.000040 | 1,506,932,509.00004 | 95,347 |
pythondev | help | sry | 2017-10-02T08:21:54.000011 | Collette | pythondev_help_Collette_2017-10-02T08:21:54.000011 | 1,506,932,514.000011 | 95,348 |
pythondev | help | but apparently I'm overwhelming redis on the small instance I'm using | 2017-10-02T08:22:06.000158 | Meg | pythondev_help_Meg_2017-10-02T08:22:06.000158 | 1,506,932,526.000158 | 95,349 |
pythondev | help | and there are still a large number of tasks waiting in the queue, since the workers aren't picking up | 2017-10-02T08:22:26.000051 | Meg | pythondev_help_Meg_2017-10-02T08:22:26.000051 | 1,506,932,546.000051 | 95,350 |
pythondev | help | probably because of the redis issue | 2017-10-02T08:22:36.000220 | Meg | pythondev_help_Meg_2017-10-02T08:22:36.000220 | 1,506,932,556.00022 | 95,351 |
pythondev | help | > MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk
Disk IO is flying through the roof? | 2017-10-02T08:23:21.000210 | Collette | pythondev_help_Collette_2017-10-02T08:23:21.000210 | 1,506,932,601.00021 | 95,352 |
pythondev | help | Yep. Not surprising, since I have about 25 celery workers indexing web pages in solr | 2017-10-02T08:24:12.000079 | Meg | pythondev_help_Meg_2017-10-02T08:24:12.000079 | 1,506,932,652.000079 | 95,353 |
pythondev | help | and since they're running as a chord, the results need to be saved | 2017-10-02T08:24:21.000266 | Meg | pythondev_help_Meg_2017-10-02T08:24:21.000266 | 1,506,932,661.000266 | 95,354 |
pythondev | help | <@Winnie> your measurement is also incorrect. You should move `nl = deque()` from the setup section to the body of the script | 2017-10-02T08:24:39.000302 | Collette | pythondev_help_Collette_2017-10-02T08:24:39.000302 | 1,506,932,679.000302 | 95,355 |
pythondev | help | 25 workers? wow, that's huge | 2017-10-02T08:24:50.000055 | Collette | pythondev_help_Collette_2017-10-02T08:24:50.000055 | 1,506,932,690.000055 | 95,356 |
pythondev | help | each task runs for about a half second to retrieve a web page, extract text with BeautifulSoup and insert into solr | 2017-10-02T08:24:56.000437 | Meg | pythondev_help_Meg_2017-10-02T08:24:56.000437 | 1,506,932,696.000437 | 95,357 |
pythondev | help | <@Collette> why? | 2017-10-02T08:24:58.000353 | Winnie | pythondev_help_Winnie_2017-10-02T08:24:58.000353 | 1,506,932,698.000353 | 95,358 |
pythondev | help | and I have about 10M plus pages to index | 2017-10-02T08:25:47.000144 | Meg | pythondev_help_Meg_2017-10-02T08:25:47.000144 | 1,506,932,747.000144 | 95,359 |
pythondev | help | got about 15% of the way over the weekend before this cropped up | 2017-10-02T08:26:05.000330 | Meg | pythondev_help_Meg_2017-10-02T08:26:05.000330 | 1,506,932,765.00033 | 95,360 |
pythondev | help | <@Winnie> because you're re-using a single instance of deque, it's shared between timeit invocations. But the point of `setup` is only to prepare data and things that should not affect the script itself (like imports) | 2017-10-02T08:26:26.000268 | Collette | pythondev_help_Collette_2017-10-02T08:26:26.000268 | 1,506,932,786.000268 | 95,361 |
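The pitfall can be shown directly with `timeit`: objects built in `setup=` are created once and shared across every loop iteration, so a deque created there keeps growing across runs instead of starting empty, and the statement no longer measures what a single fresh run would do (sketch; absolute timings will vary):

```python
import timeit

# deque created once in setup=, mutated by every iteration
t_shared = timeit.timeit(
    "nl.extendleft(range(100))",
    setup="from collections import deque; nl = deque()",
    number=1000,
)
# deque created inside the statement, fresh for each iteration
t_fresh = timeit.timeit(
    "nl = deque(); nl.extendleft(range(100))",
    setup="from collections import deque",
    number=1000,
)
```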
pythondev | help | <@Meg> on a single box? | 2017-10-02T08:26:47.000146 | Collette | pythondev_help_Collette_2017-10-02T08:26:47.000146 | 1,506,932,807.000146 | 95,362 |
pythondev | help | <@Meg> are you on AWS? Their IOPS isn't good :confused: | 2017-10-02T08:27:05.000032 | Suellen | pythondev_help_Suellen_2017-10-02T08:27:05.000032 | 1,506,932,825.000032 | 95,363 |
pythondev | help | nope, three C4.Large instances | 2017-10-02T08:27:16.000329 | Meg | pythondev_help_Meg_2017-10-02T08:27:16.000329 | 1,506,932,836.000329 | 95,364 |
pythondev | help | redis is running on a GP2 volume type, with 450/3000 IOPS | 2017-10-02T08:28:04.000320 | Meg | pythondev_help_Meg_2017-10-02T08:28:04.000320 | 1,506,932,884.00032 | 95,365 |
pythondev | help | <@Collette> oh you're saying that I'm actually extending onto the previous queues made? | 2017-10-02T08:28:34.000365 | Winnie | pythondev_help_Winnie_2017-10-02T08:28:34.000365 | 1,506,932,914.000365 | 95,366 |
pythondev | help | I'd simply disable RDB snapshots in that case | 2017-10-02T08:28:37.000245 | Collette | pythondev_help_Collette_2017-10-02T08:28:37.000245 | 1,506,932,917.000245 | 95,367 |
pythondev | help | <@Winnie> correct | 2017-10-02T08:28:43.000259 | Collette | pythondev_help_Collette_2017-10-02T08:28:43.000259 | 1,506,932,923.000259 | 95,368 |
pythondev | help | oh I wasn't aware. thanks | 2017-10-02T08:29:43.000017 | Winnie | pythondev_help_Winnie_2017-10-02T08:29:43.000017 | 1,506,932,983.000017 | 95,369 |
pythondev | help | thanks all. I gotta get back to watching my lecture :stuck_out_tongue: | 2017-10-02T08:30:15.000321 | Winnie | pythondev_help_Winnie_2017-10-02T08:30:15.000321 | 1,506,933,015.000321 | 95,370 |
pythondev | help | i have a date like this 2017-12-01 10:24:17.807000+00:00
and im trying to compare with time right now but get an error about +00:00 | 2017-10-02T10:40:11.000194 | Georgetta | pythondev_help_Georgetta_2017-10-02T10:40:11.000194 | 1,506,940,811.000194 | 95,371 |
pythondev | help | date = 2017-12-01 10:24:17.807000+00:00
dt = datetime.datetime.strptime(date, "%Y-%m-%d %H:%M:%S.%f")
if dt > datetime.datetime.now(): .... | 2017-10-02T10:40:57.000038 | Georgetta | pythondev_help_Georgetta_2017-10-02T10:40:57.000038 | 1,506,940,857.000038 | 95,372 |
pythondev | help | i got this error : ValueError at /
unconverted data remains: +00:00 | 2017-10-02T10:41:18.000107 | Georgetta | pythondev_help_Georgetta_2017-10-02T10:41:18.000107 | 1,506,940,878.000107 | 95,373 |
pythondev | help | Your format string does not include anything that specifies what should be done with the timezone part of the string you're trying to parse | 2017-10-02T10:47:50.000145 | Antionette | pythondev_help_Antionette_2017-10-02T10:47:50.000145 | 1,506,941,270.000145 | 95,374 |
pythondev | help | If you know it will always be UTC you can probably just add `+00:00` to the end of the format string, otherwise you will need to take care of it before calling strptime | 2017-10-02T10:49:38.000533 | Antionette | pythondev_help_Antionette_2017-10-02T10:49:38.000533 | 1,506,941,378.000533 | 95,375 |
pythondev | help | There's also the `%z` formatting option but off the top of my head I don't know if that applies to both `+0000` and `+00:00` or only `+0000` | 2017-10-02T10:51:09.000570 | Antionette | pythondev_help_Antionette_2017-10-02T10:51:09.000570 | 1,506,941,469.00057 | 95,376 |
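For the record, `%z` accepts the colon form (`+00:00`) on Python 3.7 and later, so one option (assuming Python 3.7+) is:

```python
from datetime import datetime, timezone

s = "2017-12-01 10:24:17.807000+00:00"
# %z parses the UTC offset; the colon form +00:00 is accepted on 3.7+
dt = datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f%z")
# an offset-aware datetime can only be compared with another aware one,
# so use now(timezone.utc) rather than the naive now()
if dt > datetime.now(timezone.utc):
    print("in the future")
```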
pythondev | help | Hi, I'm using `celery` and have one specific periodic tasks that is running very often. I want to keep all the logging as it is, but only suppress logs for only this one specific task. How can I achieve that? | 2017-10-02T11:20:51.000462 | Mirian | pythondev_help_Mirian_2017-10-02T11:20:51.000462 | 1,506,943,251.000462 | 95,377 |
pythondev | help | <https://stackoverflow.com/questions/23101018/selectively-log-requests-using-logging-module> | 2017-10-02T11:25:44.000091 | Meg | pythondev_help_Meg_2017-10-02T11:25:44.000091 | 1,506,943,544.000091 | 95,378 |
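The approach in that answer boils down to attaching a `logging.Filter` to the relevant logger; a minimal sketch (the task name and logger name here are illustrative, not from the thread):

```python
import logging

class SuppressTaskFilter(logging.Filter):
    """Drop log records that mention one noisy periodic task."""
    def filter(self, record):
        # returning False drops the record; everything else passes through
        return "my_frequent_task" not in record.getMessage()

# celery's per-task logs typically go through the "celery.task" logger
logging.getLogger("celery.task").addFilter(SuppressTaskFilter())
```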
pythondev | help | <@Meg> :taco: | 2017-10-02T11:38:47.000079 | Mirian | pythondev_help_Mirian_2017-10-02T11:38:47.000079 | 1,506,944,327.000079 | 95,379 |
pythondev | help | I'm assuming most people here use something like SQL Alchemy opposed to writing raw SQL, is that correct? | 2017-10-02T12:02:24.000289 | Rosamaria | pythondev_help_Rosamaria_2017-10-02T12:02:24.000289 | 1,506,945,744.000289 | 95,380 |
pythondev | help | It varies. There is SQLAlchemy, someone just brought up Pony ORM but there was talk of python-mysql (name?) the other day. Others use Django. I would say ORM is probably more preferred, but that is just based on frequency of chatters. | 2017-10-02T12:04:08.000571 | Mallie | pythondev_help_Mallie_2017-10-02T12:04:08.000571 | 1,506,945,848.000571 | 95,381 |
pythondev | help | and use cases too | 2017-10-02T12:04:59.000212 | Meg | pythondev_help_Meg_2017-10-02T12:04:59.000212 | 1,506,945,899.000212 | 95,382 |
pythondev | help | sure, an ORM doesn't cover every possibility, because SQL is pretty complex at times | 2017-10-02T12:05:26.000285 | Meg | pythondev_help_Meg_2017-10-02T12:05:26.000285 | 1,506,945,926.000285 | 95,383 |
pythondev | help | and you should be familiar with the ORM library's downsides and use that to determine whether to use raw SQL or the library | 2017-10-02T12:05:55.000511 | Meg | pythondev_help_Meg_2017-10-02T12:05:55.000511 | 1,506,945,955.000511 | 95,384 |
pythondev | help | I've found that SQL alchemy ends up adding quite a lot of overhead | 2017-10-02T12:08:41.000449 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:08:41.000449 | 1,506,946,121.000449 | 95,385 |
pythondev | help | That's personal hearsay though | 2017-10-02T12:09:11.000069 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:09:11.000069 | 1,506,946,151.000069 | 95,386 |
pythondev | help | I prefer to use the native MySQL DB python library, even though it makes life harder sometimes | 2017-10-02T12:10:00.000683 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:10:00.000683 | 1,506,946,200.000683 | 95,387 |
pythondev | help | But I'm open to change | 2017-10-02T12:10:05.000493 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:10:05.000493 | 1,506,946,205.000493 | 95,388 |
pythondev | help | how maintainable is that, though? | 2017-10-02T12:10:28.000039 | Meg | pythondev_help_Meg_2017-10-02T12:10:28.000039 | 1,506,946,228.000039 | 95,389 |
pythondev | help | within a team and whatnot | 2017-10-02T12:10:37.000238 | Meg | pythondev_help_Meg_2017-10-02T12:10:37.000238 | 1,506,946,237.000238 | 95,390 |
pythondev | help | I've found creating helper functions has been helpful with that | 2017-10-02T12:13:25.000282 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:13:25.000282 | 1,506,946,405.000282 | 95,391 |
pythondev | help | But you are right that it has the potential to cause security concerns with poor data sanitization | 2017-10-02T12:13:44.000762 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:13:44.000762 | 1,506,946,424.000762 | 95,392 |
pythondev | help | I don't have the real data behind SQLAlchemy slow-downs, but if you profile, you'll see that each time you do a select for example, it has to take a general data structure in and output a string. Because it's so generic though, there seem to be a lot of extra calls | 2017-10-02T12:15:08.000188 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:15:08.000188 | 1,506,946,508.000188 | 95,393 |
pythondev | help | Sounds like a good weekend project :slightly_smiling_face: | 2017-10-02T12:15:21.000081 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:15:21.000081 | 1,506,946,521.000081 | 95,394 |
pythondev | help | Interesting ok, thanks for the input guys. One more question, not super Flask relevant but:
Do you guys have a good resource / reference on best practices when writing a RESTful backend?
To clarify, I'm wondering what most people do when
* a required parameter was not sent
- My decision so far has been to set the status code to `422`, but I don't know what the response JSON object should be... maybe `{}`?
* the required parameter that was sent was not found in the DB
- My decision so far has been to set the status code to `404`, as the ID was not found, BUT again I don't know what the response JSON object should be... | 2017-10-02T12:17:40.000127 | Rosamaria | pythondev_help_Rosamaria_2017-10-02T12:17:40.000127 | 1,506,946,660.000127 | 95,395 |
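One common convention for both cases is to pair the status code with a small JSON body that names the problem, so clients never have to special-case an empty `{}`. A framework-neutral sketch (helper names and body shape are illustrative, not a standard):

```python
import json

def unprocessable(missing_param):
    """422 for a missing required parameter, naming the field in the body."""
    return 422, json.dumps({"error": f"missing required parameter: {missing_param}"})

def not_found(resource, resource_id):
    """404 for an ID that wasn't in the DB, echoing what was looked up."""
    return 404, json.dumps({"error": f"{resource} {resource_id} not found"})

status, body = not_found("item", 7)
print(status, body)  # 404 {"error": "item 7 not found"}
```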
pythondev | help | For RESTful back-end, love both Flask and Eve | 2017-10-02T12:18:09.000430 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:18:09.000430 | 1,506,946,689.00043 | 95,396 |
pythondev | help | Eve I've found forces you to be more precise with what you want | 2017-10-02T12:18:24.000396 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:18:24.000396 | 1,506,946,704.000396 | 95,397 |
pythondev | help | There's a whole schema declaration you're doing and it comes with a MongoDB back-end basically | 2017-10-02T12:18:40.000478 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:18:40.000478 | 1,506,946,720.000478 | 95,398 |
pythondev | help | Flask I've found has more flexibility | 2017-10-02T12:18:53.000400 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:18:53.000400 | 1,506,946,733.0004 | 95,399 |
pythondev | help | Oh, I guess I'm talking not specifically about libraries/frameworks, but literally what to do when those events occur | 2017-10-02T12:19:04.000219 | Rosamaria | pythondev_help_Rosamaria_2017-10-02T12:19:04.000219 | 1,506,946,744.000219 | 95,400 |
pythondev | help | This presentation was from FOREVER ago | 2017-10-02T12:20:10.000520 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:20:10.000520 | 1,506,946,810.00052 | 95,401 |
pythondev | help | <https://www.slideshare.net/MikePearce/api-anti-patterns-4920731> | 2017-10-02T12:20:11.000331 | Marcelina | pythondev_help_Marcelina_2017-10-02T12:20:11.000331 | 1,506,946,811.000331 | 95,402 |