workspace | channel | sentences | ts | user | sentence_id | timestamp | __index_level_0__ |
|---|---|---|---|---|---|---|---|
pythondev | help | ``` # These are the final, 'correct' arrangements of sheets
# Data sets 51 - 52
['X', ['Sheet A', 'Location 1', 'Upright'],
['Sheet B', 'Location 2', 'Upright'],
['Sheet C', 'Location 3', 'Upright'],
['Sheet D', 'Location 4', 'Upright']],
['O', ['Sheet A', 'Location 1', 'Upright'],
['Sheet B', 'Location 2', 'Upright'],
['Sheet C', 'Location 3', 'Upright'],
['Sheet D', 'Location 4', 'Upright']]
]``` | 2017-09-12T03:19:12.000213 | Lanora | pythondev_help_Lanora_2017-09-12T03:19:12.000213 | 1,505,186,352.000213 | 93,303 |
pythondev | help | Upright's position is 2. You need to compare params[2] | 2017-09-12T03:20:41.000337 | Florentina | pythondev_help_Florentina_2017-09-12T03:20:41.000337 | 1,505,186,441.000337 | 93,304 |
pythondev | help | ```Traceback (most recent call last):
File "C:/Users/Peter/Desktop/IFB104 - Assessment 1/billboard-final.py", line 407, in <module>
paste_up(data_sets[52])
File "C:/Users/Peter/Desktop/IFB104 - Assessment 1/billboard-final.py", line 363, in paste_up
if params[2] == 'Upright':
IndexError: tuple index out of range``` | 2017-09-12T03:21:11.000331 | Lanora | pythondev_help_Lanora_2017-09-12T03:21:11.000331 | 1,505,186,471.000331 | 93,305 |
pythondev | help | could you show us how you call `paste_up` ? | 2017-09-12T03:22:31.000223 | Ciera | pythondev_help_Ciera_2017-09-12T03:22:31.000223 | 1,505,186,551.000223 | 93,306 |
pythondev | help | ```### Call the student's function to display the billboard
### ***** Change the number in the argument to this function
### ***** to test your code with a different data set
paste_up(data_sets[52])``` | 2017-09-12T03:23:06.000233 | Lanora | pythondev_help_Lanora_2017-09-12T03:23:06.000233 | 1,505,186,586.000233 | 93,307 |
pythondev | help | ```paste_up(data_sets[52])``` | 2017-09-12T03:23:20.000023 | Lanora | pythondev_help_Lanora_2017-09-12T03:23:20.000023 | 1,505,186,600.000023 | 93,308 |
pythondev | help | That's the extent of it. | 2017-09-12T03:23:29.000089 | Lanora | pythondev_help_Lanora_2017-09-12T03:23:29.000089 | 1,505,186,609.000089 | 93,309 |
pythondev | help | I've been trying to figure this out for hours, it's driving me mad and I'm sure it's quite basic. | 2017-09-12T03:23:52.000089 | Lanora | pythondev_help_Lanora_2017-09-12T03:23:52.000089 | 1,505,186,632.000089 | 93,310 |
pythondev | help | this prints 'Upright' | 2017-09-12T03:25:10.000099 | Florentina | pythondev_help_Florentina_2017-09-12T03:25:10.000099 | 1,505,186,710.000099 | 93,311 |
pythondev | help | ```
paste_up(*data_sets[52])
# in paste_up
if params[1][-1] == 'Upright':
``` | 2017-09-12T03:25:22.000105 | Carri | pythondev_help_Carri_2017-09-12T03:25:22.000105 | 1,505,186,722.000105 | 93,312 |
pythondev | help | No go, I don't think params is the right thing, was just trying different possibilities at this point. | 2017-09-12T03:26:20.000003 | Lanora | pythondev_help_Lanora_2017-09-12T03:26:20.000003 | 1,505,186,780.000003 | 93,313 |
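For the record, the `IndexError` in the traceback above follows from the `*params` signature rather than from the data: calling `paste_up(data_sets[52])` packs the single argument into a 1-tuple, so `params[2]` is out of range. A minimal sketch of unwrapping it first, using a cut-down, hypothetical data set in the same shape as the assignment's:

```python
# Cut-down data set shaped like the assignment's entries.
data_set = ['X', ['Sheet A', 'Location 1', 'Upright'],
                 ['Sheet B', 'Location 2', 'Upright']]

def paste_up(*params):
    # Called as paste_up(data_set), params is the 1-tuple (data_set,),
    # which is why params[2] raised IndexError. Unwrap it first.
    (data,) = params
    sheet, location, orientation = data[1]  # the first sheet entry
    return orientation

print(paste_up(data_set))  # → Upright
```

Dropping the `*` and declaring a plain `def paste_up(data_set):` avoids the unwrapping step entirely.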
pythondev | help | Hi all, I'm new to Django and Atom. Could anyone please clarify some of my doubts?
How do I create a Django project in Atom?
I'm a bit confused by the docs I have read. I'm using Windows; is PowerShell the only way to create a Django project, or is there another way to do it from Atom? | 2017-09-12T05:10:07.000227 | Rosalina | pythondev_help_Rosalina_2017-09-12T05:10:07.000227 | 1,505,193,007.000227 | 93,314 |
pythondev | help | Better luck if you ask that in <#C0LMFRMB5|django> :slightly_smiling_face: | 2017-09-12T05:16:57.000049 | Ciera | pythondev_help_Ciera_2017-09-12T05:16:57.000049 | 1,505,193,417.000049 | 93,315 |
pythondev | help | In python 3, if you have a list with unpredictable, mixed types, is there a way to efficiently order that list (actual order doesn’t matter, just that the order is stable between calls)? - In python2, I would have just used `sorted` | 2017-09-12T05:38:40.000141 | Cristy | pythondev_help_Cristy_2017-09-12T05:38:40.000141 | 1,505,194,720.000141 | 93,316 |
pythondev | help | a list is ordered | 2017-09-12T05:40:35.000394 | Ciera | pythondev_help_Ciera_2017-09-12T05:40:35.000394 | 1,505,194,835.000394 | 93,317 |
pythondev | help | ^ sort | 2017-09-12T05:40:52.000142 | Cristy | pythondev_help_Cristy_2017-09-12T05:40:52.000142 | 1,505,194,852.000142 | 93,318 |
pythondev | help | sorry, what I meant is that I want to sort the contents of two lists so that if they contain the same elements (but in different orders), the sorted versions will have the same ordering | 2017-09-12T05:44:07.000201 | Cristy | pythondev_help_Cristy_2017-09-12T05:44:07.000201 | 1,505,195,047.000201 | 93,319 |
pythondev | help | ```
>>> def key_function(x):
... if isinstance(x, int):
... x = '0%d' % x
... return x
>>> sorted([1, '1', 2, '2', 3, '3'], key=key_function)
[1, 2, 3, '1', '2', '3']
``` | 2017-09-12T05:44:53.000033 | Carri | pythondev_help_Carri_2017-09-12T05:44:53.000033 | 1,505,195,093.000033 | 93,320 |
pythondev | help | something like that ? | 2017-09-12T05:44:55.000454 | Carri | pythondev_help_Carri_2017-09-12T05:44:55.000454 | 1,505,195,095.000454 | 93,321 |
pythondev | help | oh | 2017-09-12T05:45:03.000301 | Carri | pythondev_help_Carri_2017-09-12T05:45:03.000301 | 1,505,195,103.000301 | 93,322 |
pythondev | help | yeah, but with Nones, tuples, namedtuples, lists, strings, enums, etc. | 2017-09-12T05:45:15.000274 | Cristy | pythondev_help_Cristy_2017-09-12T05:45:15.000274 | 1,505,195,115.000274 | 93,323 |
pythondev | help | you'd need an `isinstance` check for each type and determine how to handle the type. At least with a key function you don't end up casting all types to something like string in the result | 2017-09-12T05:46:37.000060 | Carri | pythondev_help_Carri_2017-09-12T05:46:37.000060 | 1,505,195,197.00006 | 93,324 |
pythondev | help | I feel like this might be an X-Y problem. Why do you want them sorted in the same way? <https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem> | 2017-09-12T05:47:19.000456 | Fabiola | pythondev_help_Fabiola_2017-09-12T05:47:19.000456 | 1,505,195,239.000456 | 93,325 |
pythondev | help | <@Fabiola> - kind-of, I’m trying to pair-up and remove common elements from two collections of unhashable objects | 2017-09-12T05:48:43.000010 | Cristy | pythondev_help_Cristy_2017-09-12T05:48:43.000010 | 1,505,195,323.00001 | 93,326 |
pythondev | help | can you give an example of the 2 lists input, and what you'd expect the output to be ? | 2017-09-12T05:49:55.000223 | Carri | pythondev_help_Carri_2017-09-12T05:49:55.000223 | 1,505,195,395.000223 | 93,327 |
pythondev | help | I'd suggest using sets, but you can't because they're unhashable? Are there duplicate items in the lists? | 2017-09-12T05:51:08.000427 | Fabiola | pythondev_help_Fabiola_2017-09-12T05:51:08.000427 | 1,505,195,468.000427 | 93,328 |
pythondev | help | <@Cristy> order doesn't matter you say? :slightly_smiling_face: | 2017-09-12T05:52:12.000289 | Suellen | pythondev_help_Suellen_2017-09-12T05:52:12.000289 | 1,505,195,532.000289 | 93,329 |
pythondev | help | And if there are duplicates, does there need to be an equal number of the objects in both lists in order to remove them all? i.e. is it one for one? | 2017-09-12T05:53:30.000336 | Fabiola | pythondev_help_Fabiola_2017-09-12T05:53:30.000336 | 1,505,195,610.000336 | 93,330 |
pythondev | help | ```
>>> from collections import namedtuple
>>> Person = namedtuple('Person', 'name age')
>>> data = [1.0, '13', 27, 'x', None, (1, 2, 3), [1337], Person('Alex', 24)]
>>> sorted(data, key=repr)
['13', 'x', (1, 2, 3), 1.0, 27, None, Person(name='Alex', age=24), [1337]]
``` | 2017-09-12T05:54:11.000051 | Suellen | pythondev_help_Suellen_2017-09-12T05:54:11.000051 | 1,505,195,651.000051 | 93,331 |
pythondev | help | nice idea there <@Suellen> | 2017-09-12T05:54:42.000298 | Ciera | pythondev_help_Ciera_2017-09-12T05:54:42.000298 | 1,505,195,682.000298 | 93,332 |
pythondev | help | I suspect it's not ideal, but more often than not an object has a stable `repr`esentation. | 2017-09-12T05:55:16.000109 | Suellen | pythondev_help_Suellen_2017-09-12T05:55:16.000109 | 1,505,195,716.000109 | 93,333 |
pythondev | help | Could you use `id` instead? | 2017-09-12T05:55:39.000245 | Fabiola | pythondev_help_Fabiola_2017-09-12T05:55:39.000245 | 1,505,195,739.000245 | 93,334 |
pythondev | help | <@Suellen> - thanks, I thought about that, the idea of repring everything emotionally hurts me, esp from a performance side, but maybe is best | 2017-09-12T05:55:56.000438 | Cristy | pythondev_help_Cristy_2017-09-12T05:55:56.000438 | 1,505,195,756.000438 | 93,335 |
pythondev | help | <https://docs.python.org/3/library/functions.html#id> | 2017-09-12T05:56:31.000167 | Fabiola | pythondev_help_Fabiola_2017-09-12T05:56:31.000167 | 1,505,195,791.000167 | 93,336 |
pythondev | help | <@Fabiola> - For duplicates I need to still pair up 1-1 | 2017-09-12T05:56:34.000426 | Cristy | pythondev_help_Cristy_2017-09-12T05:56:34.000426 | 1,505,195,794.000426 | 93,337 |
pythondev | help | id might just about work, (this is an optimization after all) but not really, as many objects will compare equal with different ids | 2017-09-12T05:57:13.000288 | Cristy | pythondev_help_Cristy_2017-09-12T05:57:13.000288 | 1,505,195,833.000288 | 93,338 |
pythondev | help | thanks for suggestions | 2017-09-12T05:57:25.000203 | Cristy | pythondev_help_Cristy_2017-09-12T05:57:25.000203 | 1,505,195,845.000203 | 93,339 |
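Putting Suellen's `repr` key together with Cristy's one-for-one constraint, the underlying task (removing the common elements of two lists of unhashable objects, pairing duplicates 1-1) doesn't actually need a sort at all. A hedged sketch; the helper name `remove_common` and the assumption that `repr` is a faithful stand-in for equality are mine:

```python
from collections import Counter

def remove_common(a, b, key=repr):
    # Count elements by their repr and intersect the counts: the minimum
    # count on each side is how many pairs can be removed one-for-one.
    common = Counter(map(key, a)) & Counter(map(key, b))

    def strip(xs):
        left = common.copy()
        out = []
        for x in xs:
            k = key(x)
            if left[k] > 0:
                left[k] -= 1  # this occurrence pairs with one on the other side
            else:
                out.append(x)
        return out

    return strip(a), strip(b)

print(remove_common([1, [2], 1], [1, 'x']))  # → ([[2], 1], ['x'])
```

This is O(n) in the sizes of the lists, and it never compares elements of different types, which sidesteps the Python 3 mixed-type ordering problem entirely.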
pythondev | help | I've made some progress. | 2017-09-12T06:24:11.000272 | Lanora | pythondev_help_Lanora_2017-09-12T06:24:11.000272 | 1,505,197,451.000272 | 93,340 |
pythondev | help | ```def paste_up(*params):
for entry in data_sets:
sheet, location, axis = entry[1]
if location == ('Location 1'):
print('Location 1')
elif location == ('Location 2'):
print('Location 2')
elif location == ('Location 3'):
print('Location 3')
elif location == ('Location 4'):
print('Location 4')
else:
print('Null')
break
``` | 2017-09-12T06:24:16.000222 | Lanora | pythondev_help_Lanora_2017-09-12T06:24:16.000222 | 1,505,197,456.000222 | 93,341 |
pythondev | help | The code now iterates through the list but I'm unsure how to get it to target a specific "data set" in the list? | 2017-09-12T06:24:49.000422 | Lanora | pythondev_help_Lanora_2017-09-12T06:24:49.000422 | 1,505,197,489.000422 | 93,342 |
pythondev | help | ``` # Data sets 51 - 52
['X', ['Sheet A', 'Location 1', 'Upright'],
['Sheet B', 'Location 2', 'Upright'],
['Sheet C', 'Location 3', 'Upright'],
['Sheet D', 'Location 4', 'Upright']],
['O', ['Sheet A', 'Location 1', 'Upright'],
['Sheet B', 'Location 2', 'Upright'],
['Sheet C', 'Location 3', 'Upright'],
['Sheet D', 'Location 4', 'Upright']]``` | 2017-09-12T06:25:02.000213 | Lanora | pythondev_help_Lanora_2017-09-12T06:25:02.000213 | 1,505,197,502.000213 | 93,343 |
pythondev | help | Paste up is triggered with the following. | 2017-09-12T06:25:28.000118 | Lanora | pythondev_help_Lanora_2017-09-12T06:25:28.000118 | 1,505,197,528.000118 | 93,344 |
pythondev | help | ```paste_up(data_sets[52])``` | 2017-09-12T06:25:32.000188 | Lanora | pythondev_help_Lanora_2017-09-12T06:25:32.000188 | 1,505,197,532.000188 | 93,345 |
pythondev | help | Am I missing something simple <@Ciera>? | 2017-09-12T06:26:36.000158 | Lanora | pythondev_help_Lanora_2017-09-12T06:26:36.000158 | 1,505,197,596.000158 | 93,346 |
pythondev | help | your first `for` loop goes through the `X` and `O` data sets; you should add a second one to iterate through the entries | 2017-09-12T06:41:24.000126 | Ciera | pythondev_help_Ciera_2017-09-12T06:41:24.000126 | 1,505,198,484.000126 | 93,347 |
pythondev | help | `sheet, location, axis = entry[1]` Here you just take the first entry but you should loop to use them all | 2017-09-12T06:42:02.000139 | Ciera | pythondev_help_Ciera_2017-09-12T06:42:02.000139 | 1,505,198,522.000139 | 93,348 |
pythondev | help | If I try any others I just get this. | 2017-09-12T06:44:09.000380 | Lanora | pythondev_help_Lanora_2017-09-12T06:44:09.000380 | 1,505,198,649.00038 | 93,349 |
pythondev | help | ``` sheet, location, axis = entry[23]
IndexError: list index out of range``` | 2017-09-12T06:44:13.000021 | Lanora | pythondev_help_Lanora_2017-09-12T06:44:13.000021 | 1,505,198,653.000021 | 93,350 |
pythondev | help | yeah, that's because the `X` and `O` sets only have 4 sheet entries each, so you get an IndexError when trying to access the 23rd | 2017-09-12T06:49:00.000095 | Ciera | pythondev_help_Ciera_2017-09-12T06:49:00.000095 | 1,505,198,940.000095 | 93,351 |
pythondev | help | Alrighty, but how does one get the variable from the command to copy over to the function? | 2017-09-12T07:06:48.000199 | Lanora | pythondev_help_Lanora_2017-09-12T07:06:48.000199 | 1,505,200,008.000199 | 93,352 |
pythondev | help | ```paste_up(data_sets[0])  # 0 being the variable``` | 2017-09-12T07:07:14.000253 | Lanora | pythondev_help_Lanora_2017-09-12T07:07:14.000253 | 1,505,200,034.000253 | 93,353 |
pythondev | help | I've tried a few different things like *params. | 2017-09-12T07:07:34.000409 | Lanora | pythondev_help_Lanora_2017-09-12T07:07:34.000409 | 1,505,200,054.000409 | 93,354 |
pythondev | help | ```def paste_up(*params):``` | 2017-09-12T07:07:40.000291 | Lanora | pythondev_help_Lanora_2017-09-12T07:07:40.000291 | 1,505,200,060.000291 | 93,355 |
pythondev | help | I do not get what you are trying to say :disappointed: | 2017-09-12T07:25:00.000376 | Florentina | pythondev_help_Florentina_2017-09-12T07:25:00.000376 | 1,505,201,100.000376 | 93,356 |
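One way to read Ciera's advice: take the data set as a plain positional parameter (no `*`) and loop over its sheet entries, skipping the leading `'X'`/`'O'` tag. A sketch under those assumptions, with a hypothetical cut-down `data_sets` (the real assignment file defines many more):

```python
# Hypothetical cut-down data_sets list in the shape shown in the chat.
data_sets = [
    ['X', ['Sheet A', 'Location 1', 'Upright'],
          ['Sheet B', 'Location 2', 'Upright'],
          ['Sheet C', 'Location 3', 'Upright'],
          ['Sheet D', 'Location 4', 'Upright']],
]

def paste_up(data_set):
    # data_set[0] is the 'X'/'O' tag; the sheet entries start at index 1.
    placed = []
    for sheet, location, orientation in data_set[1:]:
        if orientation == 'Upright':
            placed.append((sheet, location))
    return placed

print(paste_up(data_sets[0]))  # the caller picks which data set to pass in
```

The caller (here `paste_up(data_sets[0])`) selects the data set; the function itself never touches the global `data_sets` list, which is what made the earlier version loop over every set regardless of the argument.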
pythondev | help | I'd like a bit of advice. I'm using imagehash to help me find similar images. I have 20,000 to hash, so it's pretty slow. Would threading, asyncio, or multiprocessing speed up this process? I've never really used any of these before. | 2017-09-12T07:56:58.000274 | Amalia | pythondev_help_Amalia_2017-09-12T07:56:58.000274 | 1,505,203,018.000274 | 93,357 |
pythondev | help | <@Amalia> assuming you are just generating a db of hashes, then yes, they will definitely help, as you can hash the images independently | 2017-09-12T07:57:52.000404 | Vada | pythondev_help_Vada_2017-09-12T07:57:52.000404 | 1,505,203,072.000404 | 93,358 |
pythondev | help | and then use the db for lookups | 2017-09-12T07:58:09.000341 | Meg | pythondev_help_Meg_2017-09-12T07:58:09.000341 | 1,505,203,089.000341 | 93,359 |
pythondev | help | any particular one you would recommend? | 2017-09-12T07:58:31.000193 | Amalia | pythondev_help_Amalia_2017-09-12T07:58:31.000193 | 1,505,203,111.000193 | 93,360 |
pythondev | help | but if you're iterating over the images each time, expect it to take a while | 2017-09-12T07:58:32.000060 | Meg | pythondev_help_Meg_2017-09-12T07:58:32.000060 | 1,505,203,112.00006 | 93,361 |
pythondev | help | All of the above could help, although I'd recommend asyncio as it's the most up-to-date of them and definitely worth learning | 2017-09-12T07:58:43.000426 | Vada | pythondev_help_Vada_2017-09-12T07:58:43.000426 | 1,505,203,123.000426 | 93,362 |
pythondev | help | <@Amalia> are you planning to iterate over the image set each time, or are you storing the hashes for later lookup comparisons? | 2017-09-12T07:59:14.000160 | Meg | pythondev_help_Meg_2017-09-12T07:59:14.000160 | 1,505,203,154.00016 | 93,363 |
pythondev | help | i think just a one time iterate | 2017-09-12T08:00:22.000184 | Amalia | pythondev_help_Amalia_2017-09-12T08:00:22.000184 | 1,505,203,222.000184 | 93,364 |
pythondev | help | If you iterate like that then yes it will take a very long time | 2017-09-12T08:01:39.000426 | Vada | pythondev_help_Vada_2017-09-12T08:01:39.000426 | 1,505,203,299.000426 | 93,365 |
pythondev | help | just to be clear, iterate once and then save the results for later lookup | 2017-09-12T08:02:44.000032 | Amalia | pythondev_help_Amalia_2017-09-12T08:02:44.000032 | 1,505,203,364.000032 | 93,366 |
pythondev | help | Ok, with the async model you'll iterate once to add all the hashing tasks to the pool and then wait for them to finish before doing the later lookup | 2017-09-12T08:05:59.000117 | Vada | pythondev_help_Vada_2017-09-12T08:05:59.000117 | 1,505,203,559.000117 | 93,367 |
pythondev | help | Adding to the pool is basically instantaneous. It's the waiting for all the tasks to finish that will take some time | 2017-09-12T08:06:34.000218 | Vada | pythondev_help_Vada_2017-09-12T08:06:34.000218 | 1,505,203,594.000218 | 93,368 |
pythondev | help | thank you. I assume this still only uses 1 core. As there is no waiting for responses, such as across the internet, is using async much quicker? | 2017-09-12T08:13:50.000198 | Amalia | pythondev_help_Amalia_2017-09-12T08:13:50.000198 | 1,505,204,030.000198 | 93,369 |
pythondev | help | not really | 2017-09-12T08:15:55.000108 | Meg | pythondev_help_Meg_2017-09-12T08:15:55.000108 | 1,505,204,155.000108 | 93,370 |
pythondev | help | if you're using multiple cores, then yes | 2017-09-12T08:16:04.000080 | Meg | pythondev_help_Meg_2017-09-12T08:16:04.000080 | 1,505,204,164.00008 | 93,371 |
pythondev | help | but as far as using one core, you may have to use threading to help out with that. Not sure if asyncio does that implicitly | 2017-09-12T08:16:35.000234 | Meg | pythondev_help_Meg_2017-09-12T08:16:35.000234 | 1,505,204,195.000234 | 93,372 |
pythondev | help | you may want to ask in <#C07G5276F|async> about that | 2017-09-12T08:17:41.000202 | Meg | pythondev_help_Meg_2017-09-12T08:17:41.000202 | 1,505,204,261.000202 | 93,373 |
pythondev | help | :+1: | 2017-09-12T08:18:00.000218 | Amalia | pythondev_help_Amalia_2017-09-12T08:18:00.000218 | 1,505,204,280.000218 | 93,374 |
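To make the distinction above concrete: perceptual hashing is CPU-bound, so of the three options, `multiprocessing` is the one that actually uses extra cores (threads and asyncio are constrained by the GIL for CPU work). A sketch with a stand-in hash function; real code would open each image and call `imagehash` on it:

```python
from multiprocessing import Pool
import hashlib

def hash_one(path):
    # Stand-in for the real work: hash the path string itself. Actual code
    # would open the image at `path` and compute a perceptual hash.
    return path, hashlib.md5(path.encode()).hexdigest()

def hash_all(paths, workers=4):
    # Each file is hashed independently, so a process pool parallelises
    # cleanly across cores; results come back as a {path: hash} dict.
    with Pool(workers) as pool:
        return dict(pool.map(hash_one, paths))

if __name__ == '__main__':
    print(hash_all(['img1.png', 'img2.png']))
```

Iterating once and persisting the resulting dict (e.g. to a small database), as suggested above, then makes later lookups cheap.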
pythondev | help | Hey What's up guys ! I have a question regarding testing that's been on my mind lately and I was hoping you could enlighten me. So here goes: How do you effectively test a function that makes a call to a website and returns a response object ? Like in the case of a web scraper for example ? (code sample below) | 2017-09-12T08:51:51.000103 | Gwenda | pythondev_help_Gwenda_2017-09-12T08:51:51.000103 | 1,505,206,311.000103 | 93,375 |
pythondev | help | ```
import requests
from bs4 import BeautifulSoup

def make_soup(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup

make_soup("http://google.co.uk")
``` | 2017-09-12T08:52:29.000423 | Gwenda | pythondev_help_Gwenda_2017-09-12T08:52:29.000423 | 1,505,206,349.000423 | 93,376 |
pythondev | help | Is it relevant to want to assert that make_soup called `requests.get` with the correct url ? or should I pass requests as a dependency ? | 2017-09-12T08:54:32.000264 | Gwenda | pythondev_help_Gwenda_2017-09-12T08:54:32.000264 | 1,505,206,472.000264 | 93,377 |
pythondev | help | Not really sure how to tackle this | 2017-09-12T08:54:39.000550 | Gwenda | pythondev_help_Gwenda_2017-09-12T08:54:39.000550 | 1,505,206,479.00055 | 93,378 |
pythondev | help | thanks in advance ! | 2017-09-12T08:54:46.000242 | Gwenda | pythondev_help_Gwenda_2017-09-12T08:54:46.000242 | 1,505,206,486.000242 | 93,379 |
pythondev | help | to make sure `requests.get` is called with the correct `url` you should mock requests for this test | 2017-09-12T08:58:25.000164 | Ciera | pythondev_help_Ciera_2017-09-12T08:58:25.000164 | 1,505,206,705.000164 | 93,380 |
pythondev | help | I think this package <https://pypi.python.org/pypi/requests-mock> may help | 2017-09-12T08:58:49.000202 | Luana | pythondev_help_Luana_2017-09-12T08:58:49.000202 | 1,505,206,729.000202 | 93,381 |
pythondev | help | I use the excellent responses library for mocking requests: <https://github.com/getsentry/responses> | 2017-09-12T08:58:50.000381 | Fabiola | pythondev_help_Fabiola_2017-09-12T08:58:50.000381 | 1,505,206,730.000381 | 93,382 |
pythondev | help | I haven't used the library recommended by <@Luana> so I can't compare, but the key thing is to mock one way or another. Without mocking, you're not testing your code so much as testing the API / website itself. It makes your tests fragile and slow to run, as network problems will make your tests fail etc. | 2017-09-12T09:00:31.000114 | Fabiola | pythondev_help_Fabiola_2017-09-12T09:00:31.000114 | 1,505,206,831.000114 | 93,383 |
pythondev | help | Nice !! thank you very much <@Fabiola> <@Luana> <@Ciera> ! | 2017-09-12T09:01:22.000141 | Gwenda | pythondev_help_Gwenda_2017-09-12T09:01:22.000141 | 1,505,206,882.000141 | 93,384 |
pythondev | help | I suppose the same approach would apply to DB insertions ? | 2017-09-12T09:01:42.000127 | Gwenda | pythondev_help_Gwenda_2017-09-12T09:01:42.000127 | 1,505,206,902.000127 | 93,385 |
pythondev | help | definitely | 2017-09-12T09:02:05.000391 | Luana | pythondev_help_Luana_2017-09-12T09:02:05.000391 | 1,505,206,925.000391 | 93,386 |
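Both routes raised above work. As a neutral illustration of the "pass it as a dependency" option from the original question, here is a sketch using only the stdlib's `unittest.mock`; `fetch_text` is a simplified, hypothetical stand-in for `make_soup`:

```python
from unittest import mock

def fetch_text(url, getter):
    # `getter` plays the role of requests.get, injected as a dependency
    # so tests never touch the network (or even import requests).
    return getter(url).text

# In a test, hand in a Mock and assert on how it was called:
fake_get = mock.Mock()
fake_get.return_value.text = '<html></html>'
assert fetch_text('http://example.com', fake_get) == '<html></html>'
fake_get.assert_called_once_with('http://example.com')
```

The mocking libraries discussed below (`requests-mock`, `responses`, `pook`) achieve the same isolation without changing the function's signature, by intercepting `requests` or the HTTP layer itself.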
pythondev | help | I prefer `pook` myself, no need to specifically mock any lib but it mocks the network layer | 2017-09-12T09:02:31.000408 | Beula | pythondev_help_Beula_2017-09-12T09:02:31.000408 | 1,505,206,951.000408 | 93,387 |
pythondev | help | <https://pook.readthedocs.io/en/latest/examples.html> | 2017-09-12T09:02:44.000266 | Beula | pythondev_help_Beula_2017-09-12T09:02:44.000266 | 1,505,206,964.000266 | 93,388 |
pythondev | help | You can basically:
```
@pook.get('https://myurl')
def test_my_url():
my_lib_function()
assert pook.isdone()
``` | 2017-09-12T09:03:16.000128 | Beula | pythondev_help_Beula_2017-09-12T09:03:16.000128 | 1,505,206,996.000128 | 93,389 |
pythondev | help | It has a load of matchers you can use with it | 2017-09-12T09:03:30.000532 | Beula | pythondev_help_Beula_2017-09-12T09:03:30.000532 | 1,505,207,010.000532 | 93,390 |
pythondev | help | Are people writing separate functional tests for the APIs at all? | 2017-09-12T09:04:26.000631 | Gabriele | pythondev_help_Gabriele_2017-09-12T09:04:26.000631 | 1,505,207,066.000631 | 93,391 |
pythondev | help | Doesn't that typically depend if the API provides you a "sandbox" or demo env? | 2017-09-12T09:05:01.000178 | Beula | pythondev_help_Beula_2017-09-12T09:05:01.000178 | 1,505,207,101.000178 | 93,392 |
pythondev | help | You tell me. :slightly_smiling_face: This is not an area I work in much. The APIs I use are public ones and I've not encountered that sort of functionality. | 2017-09-12T09:05:39.000017 | Gabriele | pythondev_help_Gabriele_2017-09-12T09:05:39.000017 | 1,505,207,139.000017 | 93,393 |
pythondev | help | Generally speaking, I don't if I don't have a testing env for them due to rate limits and such | 2017-09-12T09:06:06.000346 | Beula | pythondev_help_Beula_2017-09-12T09:06:06.000346 | 1,505,207,166.000346 | 93,394 |
pythondev | help | I mock all the major response cases so I know if I can handle the error conditions around their API - generally speaking most places are decent about not breaking existing APIs | 2017-09-12T09:06:39.000316 | Beula | pythondev_help_Beula_2017-09-12T09:06:39.000316 | 1,505,207,199.000316 | 93,395 |
pythondev | help | Assuming their devs are decent, anyway | 2017-09-12T09:06:44.000339 | Beula | pythondev_help_Beula_2017-09-12T09:06:44.000339 | 1,505,207,204.000339 | 93,396 |
pythondev | help | The people I'm working on this with have made some functional tests. I don't know if it's necessary: <https://github.com/keboola/sapi-python-client/tree/master/tests> | 2017-09-12T09:07:03.000421 | Fabiola | pythondev_help_Fabiola_2017-09-12T09:07:03.000421 | 1,505,207,223.000421 | 93,397 |
pythondev | help | One of the APIs I use is the Nominatim endpoint... it's barely documented and the source code is spaghetti PHP, so I have little confidence in it :smile: | 2017-09-12T09:07:27.000196 | Gabriele | pythondev_help_Gabriele_2017-09-12T09:07:27.000196 | 1,505,207,247.000196 | 93,398 |
pythondev | help | oh, that's a fun one to use | 2017-09-12T09:07:57.000075 | Meg | pythondev_help_Meg_2017-09-12T09:07:57.000075 | 1,505,207,277.000075 | 93,399 |
pythondev | help | especially when the rate limiting doc says 'depends' | 2017-09-12T09:08:13.000156 | Meg | pythondev_help_Meg_2017-09-12T09:08:13.000156 | 1,505,207,293.000156 | 93,400 |
pythondev | help | with no hard limits | 2017-09-12T09:08:19.000328 | Meg | pythondev_help_Meg_2017-09-12T09:08:19.000328 | 1,505,207,299.000328 | 93,401 |
pythondev | help | yup. Unfortunately I refuse to use Google Maps/Places, and Factual are discontinuing their Javascript API in favour of an app that semi-secretly tracks their users, so my options for a public places database are very limited | 2017-09-12T09:09:53.000405 | Gabriele | pythondev_help_Gabriele_2017-09-12T09:09:53.000405 | 1,505,207,393.000405 | 93,402 |