What is the '@=' symbol for in Python?
I know @ is for decorators, but what is @= for in Python? Is it just a reservation for some future idea? This is just one of many questions I have while reading tokenizer.py.
From the documentation:

> The @ (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator.

The @ operator was introduced in Python 3.5. @= is matrix multiplication followed by assignment, as you would expect. They map to `__matmul__`, `__rmatmul__` or `__imatmul__`, similar to how + and += map to `__add__`, `__radd__` or `__iadd__`. The operator and the rationale behind it are discussed in detail in PEP 465.
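For illustration, here is a minimal sketch of a class implementing these hooks; the class and its behaviour are made up for the example (no builtin type implements them):

```python
class Matrix:
    def __init__(self, value):
        self.value = value  # stand-in for real matrix data

    def __matmul__(self, other):
        # called for: self @ other
        return Matrix(('matmul', self.value, other.value))

    def __rmatmul__(self, other):
        # called for: other @ self, when other doesn't implement __matmul__
        return Matrix(('rmatmul', self.value, other))

    def __imatmul__(self, other):
        # called for: self @= other; may update in place
        self.value = ('imatmul', self.value, other.value)
        return self

a, b = Matrix(1), Matrix(2)
a @= b  # invokes a.__imatmul__(b); requires Python 3.5+
```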
Multiple loops with asyncio
Is it possible to have multiple loops with asyncio? If the response is yes, how can I do that? My use case is:

* I extract urls from a list of websites in async
* For each "sub url list", I would crawl them in async

Example to extract urls:

```python
import asyncio
import aiohttp
from suburls import extractsuburls

@asyncio.coroutine
def extracturls(url):
    subtasks = []
    response = yield from aiohttp.request('GET', url)
    suburl_list = yield from response.text()
    for suburl in suburl_list:
        subtasks.append(asyncio.Task(extractsuburls(suburl)))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*subtasks))

if __name__ == '__main__':
    urls_list = ['http://example1.com', 'http://example2.com']
    subtasks = []
    for url in urls_list:
        subtasks.append(asyncio.Task(extractsuburls(url)))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*subtasks))
    loop.close()
```

If I execute this code I'll have an error when Python tries to launch the second loop, which says that a loop is already running.

P.S: my module "extractsuburls" uses aiohttp to perform web requests.

EDIT: Well, I've tried this solution:

```python
import asyncio
import aiohttp
from suburls import extractsuburls

@asyncio.coroutine
def extracturls(url):
    subtasks = []
    response = yield from aiohttp.request('GET', url)
    suburl_list = yield from response.text()
    jobs_loop = asyncio.new_event_loop()
    for suburl in suburl_list:
        subtasks.append(asyncio.Task(extractsuburls(suburl)))
    asyncio.set_event_loop(jobs_loop)
    jobs_loop.run_until_complete(asyncio.gather(*subtasks))
    jobs_loop.close()

if __name__ == '__main__':
    urls_list = ['http://example1.com', 'http://example2.com']
    subtasks = []
    for url in urls_list:
        subtasks.append(asyncio.Task(extractsuburls(url)))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*subtasks))
    loop.close()
```

But I get this error: loop argument must agree with Future. Any idea?
You don't need several event loops, just use `yield from asyncio.gather(*subtasks)` in the extracturls() coroutine:

```python
import asyncio
import aiohttp
from suburls import extractsuburls

@asyncio.coroutine
def extracturls(url):
    subtasks = []
    response = yield from aiohttp.request('GET', url)
    suburl_list = yield from response.text()
    for suburl in suburl_list:
        subtasks.append(extractsuburls(suburl))
    yield from asyncio.gather(*subtasks)

if __name__ == '__main__':
    urls_list = ['http://example1.com', 'http://example2.com']
    subtasks = []
    for url in urls_list:
        subtasks.append(extracturls(url))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*subtasks))
    loop.close()
```

As a result, extracturls waits for its subtasks to finish before it completes.
How does django know which migrations have been run?
How does django know whether a migration has been applied yet? It usually gets it right, but when it doesn't I don't ever know where to start troubleshooting.
Django writes a record into the table django_migrations consisting of some information like the app the migration belongs to, the name of the migration, and the date it was applied.
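If you want to inspect this yourself, the table can be queried directly (column names per Django's migration recorder):

```sql
-- inside `python manage.py dbshell`
SELECT app, name, applied FROM django_migrations ORDER BY applied;
```

You can also ask Django for the same information with `python manage.py showmigrations`, which marks applied migrations with an [X]. Comparing that output against the migration files on disk is usually the first troubleshooting step when the two get out of sync.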
django-rest-framework 3.0 create or update in nested serializer
With django-rest-framework 3.0 and having these simple models:

```python
class Book(models.Model):
    title = models.CharField(max_length=50)

class Page(models.Model):
    book = models.ForeignKey(Book, related_name='related_book')
    text = models.CharField(max_length=500)
```

And given this JSON request:

```json
{
    "book_id": 1,
    "pages": [
        {"page_id": 2, "text": "loremipsum"},
        {"page_id": 4, "text": "loremipsum"}
    ]
}
```

How can I write a nested serializer to process this JSON and, for each page of the given book, either create a new page or update it if it exists?

```python
class RequestSerializer(serializers.Serializer):
    book_id = serializers.IntegerField()
    page = PageSerializer(many=True)

class PageSerializer(serializers.ModelSerializer):
    class Meta:
        model = Page
```

I know that instantiating the serializer with an instance will update the current one, but how should I use it inside the create method of the nested serializer?
Firstly, do you want to support creating new book instances, or only updating existing ones?

If you only ever wanted to create new book instances you could do something like this...

```python
class PageSerializer(serializers.Serializer):
    page_id = serializers.IntegerField()  # declared so validated_data includes it
    text = serializers.CharField(max_length=500)

class BookSerializer(serializers.Serializer):
    pages = PageSerializer(many=True)
    title = serializers.CharField(max_length=50)

    def create(self, validated_data):
        # Create the book instance
        book = Book.objects.create(title=validated_data['title'])

        # Create or update each page instance
        for item in validated_data['pages']:
            page = Page(id=item['page_id'], text=item['text'], book=book)
            page.save()

        return book
```

Note that I haven't included the book_id here. When we're creating book instances we won't be including a book id. When we're updating book instances we'll typically include the book id as part of the URL, rather than in the request data.

If you want to support both create and update of book instances then you need to think about how you want to handle pages that are not included in the request, but are currently associated with the book instance. You might choose to silently ignore those pages and leave them as they are, you might want to raise a validation error, or you might want to delete them. Let's assume that you want to delete any pages not included in the request.

```python
    def create(self, validated_data):
        # As before.
        ...

    def update(self, instance, validated_data):
        # Update the book instance
        instance.title = validated_data['title']
        instance.save()

        # Delete any pages not included in the request
        page_ids = [item['page_id'] for item in validated_data['pages']]
        for page in instance.related_book.all():  # reverse relation per the model's related_name
            if page.id not in page_ids:
                page.delete()

        # Create or update page instances that are in the request
        for item in validated_data['pages']:
            page = Page(id=item['page_id'], text=item['text'], book=instance)
            page.save()

        return instance
```

It's also possible that you might want to only support book updates, and not support creation, in which case, only include the update() method.

There are also various ways you could reduce the number of queries, e.g. using bulk create/deletion, but the above would do the job in a fairly straightforward way.

As you can see there are subtleties in the types of behavior you might want when dealing with nested data, so think carefully about exactly what behavior you're expecting in various cases.

Also note that I've been using Serializer in the above example rather than ModelSerializer. In this case it's simpler just to include all the fields in the serializer class explicitly, rather than relying on the automatic set of fields that ModelSerializer generates by default.
multiprocessing vs threading vs asyncio in Python 3.4
I found that in Python 3.4 there are a few different libraries for multiprocessing/threading: multiprocessing vs threading vs asyncio. But I don't know which one to use, or which is the "recommended" one. Do they do the same thing, or are they different? If so, which one is used for what? I want to write a program that uses the multiple cores in my computer. But I don't know which library I should learn.
They are intended for (slightly) different purposes and/or requirements. CPython (a typical, mainline Python implementation) still has the global interpreter lock, so a multi-threaded application (a standard way to implement parallel processing nowadays) is suboptimal. That's why multiprocessing may be preferred over threading. But not every problem may be effectively split into [almost independent] pieces, so there may be a need for heavy interprocess communication. That's why multiprocessing may not be preferred over threading in general.

asyncio (this technique is available not only in Python; other languages and/or frameworks also have it, e.g. Boost.ASIO) is a method to effectively handle a lot of I/O operations from many simultaneous sources without the need for parallel code execution. So it's just a solution (a good one indeed!) for a particular task, not for parallel processing in general.
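Since the question is specifically about using multiple cores, here is a minimal sketch of the multiprocessing route; the worker function and inputs are placeholders for this example:

```python
import multiprocessing

def cpu_bound_work(n):
    # placeholder for a CPU-heavy computation
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:  # defaults to one worker per core
        results = pool.map(cpu_bound_work, [10**6, 10**7, 10**6])
    print(results)
```

Because each worker is a separate process with its own interpreter, the GIL does not prevent the work from running on multiple cores.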
Django Model MultipleChoice
I know there isn't a MultipleChoiceField for a Model; you can only use it on Forms. Today I faced an issue when analyzing a new project related to multiple choices. I would like to have a field like a CharField with choices, but with the option of multiple choice. I have solved this issue other times by creating a CharField and managing the multiple choices in the form with a forms.MultipleChoiceField, storing the choices separated by commas. In this project, due to configuration, I cannot do it as I mention above: I need to do it in the models, and I prefer NOT to edit the Django admin form nor use forms. I need a model field with a multiple-choices option. Has someone solved anything like this via models? Maybe overriding some of the model functions, or using a custom widget... I don't know, I'm kinda lost here.

Edit

I'm aware of simple choices; I would like to have something like:

```python
class MODEL(models.Model):
    MY_CHOICES = (
        ('a', 'Hola'),
        ('b', 'Hello'),
        ('c', 'Bonjour'),
        ('d', 'Boas'),
    )
    ...
    my_field = models.CharField(max_length=1, choices=MY_CHOICES)
    ...
```

but with the capability of saving multiple choices, not only 1 choice.
You need to think about how you are going to store the data at the database level. This will dictate your solution. Presumably, you want a single column in a table that stores multiple values. This will also force you to think about how you will serialize: for example, you can't simply use comma separation if you need to store strings that might contain commas. However, you are best off using an existing solution like one of the following:

https://pypi.python.org/pypi/django-multiselectfield/
https://pypi.python.org/pypi/django-select-multiple-field/
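As a sketch of how the first package is typically wired in (field name and choices reused from the question; check the package docs for the exact API):

```python
from django.db import models
from multiselectfield import MultiSelectField  # pip install django-multiselectfield

class MyModel(models.Model):
    MY_CHOICES = (
        ('a', 'Hola'),
        ('b', 'Hello'),
        ('c', 'Bonjour'),
        ('d', 'Boas'),
    )
    my_field = MultiSelectField(choices=MY_CHOICES)
```

Under the hood this stores the selected keys in a single comma-separated column, which is why the caveat above about values containing commas matters.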
What is with this change of unpacking behavior from Python2 to Python3
Yesterday I came across this odd unpacking difference between Python 2 and Python 3, and did not seem to find any explanation after a quick Google search.

Python 2.7.8:

```python
a = 257
b = 257
a is b   # False
a, b = 257, 257
a is b   # False
```

Python 3.4.2:

```python
a = 257
b = 257
a is b   # False
a, b = 257, 257
a is b   # True
```

I know it probably does not affect the correctness of a program, but it does bug me a little. Could anyone give some insights about this difference in unpacking?
This behaviour is at least in part to do with how the interpreter does constant folding and how the REPL executes code.

First, remember that CPython first compiles code (to AST and then bytecode). It then evaluates the bytecode. During compilation, the script looks for objects that are immutable and caches them. It also deduplicates them. So if it sees

```python
a = 257
b = 257
```

it will store a and b against the same object:

```python
import dis

def f():
    a = 257
    b = 257

dis.dis(f)
#>>>   4           0 LOAD_CONST               1 (257)
#>>>               3 STORE_FAST               0 (a)
#>>>
#>>>   5           6 LOAD_CONST               1 (257)
#>>>               9 STORE_FAST               1 (b)
#>>>              12 LOAD_CONST               0 (None)
#>>>              15 RETURN_VALUE
```

Note the LOAD_CONST 1. The 1 is the index into co_consts:

```python
f.__code__.co_consts
#>>> (None, 257)
```

So these both load the same 257. Why doesn't this occur with:

```
$ python2
Python 2.7.8 (default, Sep 24 2014, 18:26:21)
>>> a = 257
>>> b = 257
>>> a is b
False

$ python3
Python 3.4.2 (default, Oct 8 2014, 13:44:52)
>>> a = 257
>>> b = 257
>>> a is b
False
```

? Each line in this case is a separate compilation unit and the deduplication cannot happen across them. It works similarly to

```
compile a = 257
run     a = 257
compile b = 257
run     b = 257
compile a is b
run     a is b
```

As such, these code objects will both have unique constant caches. This implies that if we remove the line break, the is will return True:

```python
>>> a = 257; b = 257
>>> a is b
True
```

Indeed this is the case for both Python versions. In fact, this is exactly why

```python
>>> a, b = 257, 257
>>> a is b
True
```

returns True as well; it's not because of any attribute of unpacking; they just get placed in the same compilation unit.

This returns False for versions which don't fold properly; filmor links to Ideone, which shows this failing on 2.7.3 and 3.2.3. On these versions, the tuples created do not share their items with the other constants:

```python
import dis

def f():
    a, b = 257, 257
    print(a is b)

print(f.__code__.co_consts)
#>>> (None, 257, (257, 257))

n  = f.__code__.co_consts[1]
n1 = f.__code__.co_consts[2][0]
n2 = f.__code__.co_consts[2][1]

print(id(n), id(n1), id(n2))
#>>> (148384292, 148384304, 148384496)
```

Again, though, this is not about a change in how the objects are unpacked; it is only a change in how the objects are stored in co_consts.
Unexpected keyword argument "context" when using appcfg.py
I tried to update a project on Google App Engine via appcfg.py:

```
C:\> "C:\Program Files (x86)\Google\google_appengine\appcfg.py" update c:\secondApp
```

But I get the following error immediately (top lines are omitted):

```
File "C:\Python27\lib\urllib2.py", line 1240, in https_open
    context=self._context)
TypeError: do_open() got an unexpected keyword argument 'context'
```

I decided to dig into the file urllib2.py and find the problem. After a few minutes of code review, I came to the conclusion that an overload accepting a parameter named context does not exist. So, I changed the original code snippet:

```python
def https_open(self, req):
    return self.do_open(httplib.HTTPSConnection, req,
        context=self._context)
```

to

```python
def https_open(self, req):
    return self.do_open(httplib.HTTPSConnection, req)
```

and voila! It works (although another problem regarding authentication arose, but the first problem got solved). It is very strange, though, that a bug like this exists in an official release, while it is very likely other people have encountered the same problem. Surprisingly, I couldn't find this issue reported by anyone else! Is there anything wrong in the module? Am I mixing wrong versions of installed packages? Any help?

My Google App Engine SDK version: 1.9.17 x64. My installed Python version: 2.7.9 x64. My platform: Windows 8.1 x64, and I am not familiar with Python :D

Solution: As Migel Tissera mentioned, the problem is about authentication. But I tried his proposed command and got the same error. Fortunately, I executed the following command with success (I added the --noauth_local_webserver and --no_cookies switches too):

```
appcfg.py --noauth_local_webserver --oauth2 --skip_sdk_update_check --no_cookies update c:\secondApp
```
I ran into the same problem about half an hour ago. It's actually nothing to do with the urllib2 file; it's got something to do with the authentication. This fixed it for me: use appcfg.py with the --oauth2 flag. Try this:

```
appcfg.py --oauth2 update /path/to/your/app
```

The first time, you will see a browser window where you'll need to allow access to your account. Then you can deploy your app without entering email and password. I hope this helps. Glad to post my first answer here! :) Thanks, Migel
Why the performance difference between numpy.zeros and numpy.zeros_like?
I finally found a performance bottleneck in my code but am confused as to what the reason is. To solve it I changed all my calls of numpy.zeros_like to instead use numpy.zeros. But why is zeros_like sooooo much slower?

For example (note e-05 on the zeros call):

```python
>>> timeit.timeit('np.zeros((12488, 7588, 3), np.uint8)', 'import numpy as np', number=10)
5.2928924560546875e-05
>>> timeit.timeit('np.zeros_like(x)', 'import numpy as np; x = np.zeros((12488, 7588, 3), np.uint8)', number=10)
1.4402990341186523
```

But then, strangely, writing to an array created with zeros is noticeably slower than writing to an array created with zeros_like:

```python
>>> timeit.timeit('x[100:-100, 100:-100] = 1', 'import numpy as np; x = np.zeros((12488, 7588, 3), np.uint8)', number=10)
0.4310588836669922
>>> timeit.timeit('x[100:-100, 100:-100] = 1', 'import numpy as np; x = np.zeros_like(np.zeros((12488, 7588, 3), np.uint8))', number=10)
0.33325695991516113
```

My guess is zeros is using some CPU trick and not actually writing to the memory to allocate it. This is done on the fly when it's written to. But that still doesn't explain the massive discrepancy in array creation times.

I'm running Mac OS X Yosemite with the current numpy version:

```python
>>> numpy.__version__
'1.9.1'
```
My timings in IPython are (with a simpler timeit interface):

```
In [57]: timeit np.zeros_like(x)
1 loops, best of 3: 420 ms per loop

In [58]: timeit np.zeros((12488, 7588, 3), np.uint8)
100000 loops, best of 3: 15.1 µs per loop
```

When I look at the code with IPython (np.zeros_like??) I see:

```python
res = empty_like(a, dtype=dtype, order=order, subok=subok)
multiarray.copyto(res, 0, casting='unsafe')
```

while np.zeros is a black box - pure compiled code. Timings for empty are:

```
In [63]: timeit np.empty_like(x)
100000 loops, best of 3: 13.6 µs per loop

In [64]: timeit np.empty((12488, 7588, 3), np.uint8)
100000 loops, best of 3: 14.9 µs per loop
```

So the extra time in zeros_like is in that copy. In my tests, the difference in assignment times (x[...] = 1) is negligible.

My guess is that zeros, ones, empty are all early compiled creations. empty_like was added as a convenience, just drawing shape and type info from its input. zeros_like was written with more of an eye toward easy programming maintenance (reusing empty_like) than for speed. np.ones and np.full also use the np.empty ... copyto sequence, and show similar timings.

https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/array_assign_scalar.c appears to be the file that copies a scalar (such as 0) to an array. I don't see a use of memset.

https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/alloc.c has calls to malloc and calloc.

https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/ctors.c - source for zeros and empty. Both call PyArray_NewFromDescr_int, but one ends up using npy_alloc_cache_zero and the other npy_alloc_cache.

npy_alloc_cache in alloc.c calls alloc. npy_alloc_cache_zero calls npy_alloc_cache followed by a memset. Code in alloc.c is further confused with a THREAD option.

More on the calloc vs malloc+memset difference at: Why malloc+memset is slower than calloc? But with caching and garbage collection, I wonder whether the calloc/memset distinction applies.

This simple test with the memory_profiler package supports the claim that zeros and empty allocate memory 'on-the-fly', while zeros_like allocates everything up front:

```
N = (1000, 1000)
M = (slice(None, 500, None), slice(500, None, None))

Line #    Mem usage    Increment   Line Contents
================================================
     2   17.699 MiB    0.000 MiB   @profile
     3                             def test1(N, M):
     4   17.699 MiB    0.000 MiB       print(N, M)
     5   17.699 MiB    0.000 MiB       x = np.zeros(N)       # no memory jump
     6   17.699 MiB    0.000 MiB       y = np.empty(N)
     7   25.230 MiB    7.531 MiB       z = np.zeros_like(x)  # initial jump
     8   29.098 MiB    3.867 MiB       x[M] = 1              # jump on usage
     9   32.965 MiB    3.867 MiB       y[M] = 1
    10   32.965 MiB    0.000 MiB       z[M] = 1
    11   32.965 MiB    0.000 MiB       return x,y,z
```
Python coding convention "Wrong continued indentation before block" found by pylint
I used pylint to check my Python code, and it found this convention problem:

```
C:11, 0: Wrong continued indentation before block.
                + this_time <= self.max):
                ^   | (bad-continuation)
```

I tried to refine it several times but the problem is still present. Can someone help? Thanks!

```python
if len(remaining_obj_list) > 0:
    for i in a_list:
        this_time = self.__get_time(i)
        for remaining_obj in remaining_obj_list:
            if (remaining_obj.get_time()  # to fit 78 char rule
                + this_time <= self.max):
                i.append(remaining_obj)
                remaining_obj.set_used(True)
        if 0 == len(self.__get_unused_list):
            break
```
Pylint doesn't want such continuation to start on the same column as the next indentation block. Also, notice that the message includes a hint on columns that it considers correct.
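For instance, adding extra indentation to the continuation line, so it no longer lines up with the body of the if block, satisfies the checker; a sketch using the code from the question:

```python
if (remaining_obj.get_time()  # to fit 78 char rule
        + this_time <= self.max):
    i.append(remaining_obj)
    remaining_obj.set_used(True)
```

The continuation is now visually distinct from the block that follows, which is what the bad-continuation check is asking for.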
How to use Gensim doc2vec with pre-trained word vectors?
I recently came across the doc2vec addition to Gensim. How can I use pre-trained word vectors (e.g. those found on the original word2vec website) with doc2vec? Or is doc2vec getting the word vectors from the same sentences it uses for paragraph-vector training? Thanks.
Note that the "DBOW" (dm=0) training mode doesn't require or even create word-vectors as part of the training. It merely learns document vectors that are good at predicting each word in turn (much like the word2vec skip-gram mode). (Before gensim 0.12.0, there was the parameter train_words mentioned in another comment, which some documentation suggested will co-train words. However, I don't believe this ever actually worked. Starting in gensim 0.12.0, there is the parameter dbow_words, which works to skip-gram train words simultaneous with DBOW doc-vectors. Note that this makes training take longer – by a factor related to window. So if you don't need word-vectors, you may still leave this off.) In the "DM" training method (dm=1), word-vectors are inherently trained during the process along with doc-vectors, and are likely to also affect the quality of the doc-vectors. It's theoretically possible to pre-initialize the word-vectors from prior data. But I don't know any strong theoretical or experimental reason to be confident this would improve the doc-vectors. One fragmentary experiment I ran along these lines suggested the doc-vector training got off to a faster start – better predictive qualities after the first few passes – but this advantage faded with more passes. Whether you hold the word vectors constant or let them continue to adjust withe the new training is also likely an important consideration... but which choice is better may depend on your goals, dataset, and the quality/relevance of the preexisting word-vectors. (You could repeat my experiment with the intersect_word2vec_format() method available in gensim 0.12.0, and try different levels of making preloaded vectors resistant-to-new-training via the syn0_lockf values. But remember this is experimental territory: the basic doc2vec results don't rely on, or even necessarily improve with, reused word vectors.)
Why does the asyncio's event loop suppress the KeyboardInterrupt on Windows?
I have this really small test program which does nothing apart from executing an asyncio event loop:

```python
import asyncio
asyncio.get_event_loop().run_forever()
```

When I run this program on Linux and press Ctrl+C, the program terminates correctly with a KeyboardInterrupt exception. On Windows pressing Ctrl+C does nothing (tested with Python 3.4.2). A simple infinite loop with time.sleep() raises the KeyboardInterrupt correctly even on Windows:

```python
import time
while True:
    time.sleep(3600)
```

Why does the asyncio event loop suppress the KeyboardInterrupt on Windows?
This is a bug, for sure. See the issue on the Python bug tracker for progress on solving the problem.
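Until it is fixed, a commonly suggested workaround (a sketch, not taken from the linked issue) is to keep the loop waking up periodically so the interpreter gets a chance to deliver the KeyboardInterrupt:

```python
import asyncio

@asyncio.coroutine
def wakeup():
    # wake the event loop periodically so Ctrl+C can be processed on Windows
    while True:
        yield from asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.create_task(wakeup())  # schedule alongside your real tasks
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    loop.close()
```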
Apply vs transform on a group object
Consider the following dataframe:

```
     A      B         C         D
0  foo    one  0.162003  0.087469
1  bar    one -1.156319 -1.526272
2  foo    two  0.833892 -1.666304
3  bar  three -2.026673 -0.322057
4  foo    two  0.411452 -0.954371
5  bar    two  0.765878 -0.095968
6  foo    one -0.654890  0.678091
7  foo  three -1.789842 -1.130922
```

The following commands work:

```python
> df.groupby('A').apply(lambda x: (x['C'] - x['D']))
> df.groupby('A').apply(lambda x: (x['C'] - x['D']).mean())
```

but none of the following work:

```python
> df.groupby('A').transform(lambda x: (x['C'] - x['D']))
ValueError: could not broadcast input array from shape (5) into shape (5,3)

> df.groupby('A').transform(lambda x: (x['C'] - x['D']).mean())
TypeError: cannot concatenate a non-NDFrame object
```

Why? The example in the documentation seems to suggest that calling transform on a group allows one to do row-wise operation processing:

```python
# Note that the following suggests row-wise operation (x.mean is the column mean)
zscore = lambda x: (x - x.mean()) / x.std()
transformed = ts.groupby(key).transform(zscore)
```

In other words, I thought that transform is essentially a specific type of apply (the one that does not aggregate). Where am I wrong?

For reference, below is the construction of the original dataframe above:

```python
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C' : randn(8), 'D' : randn(8)})
```
As I felt similarly confused about the .transform operation vs. .apply, I found a few answers shedding some light on the issue. This answer, for example, was very helpful.

My takeaway so far is that .transform will work (or deal) with Series (columns) in isolation from each other. What this means is that in your last two calls:

```python
df.groupby('A').transform(lambda x: (x['C'] - x['D']))
df.groupby('A').transform(lambda x: (x['C'] - x['D']).mean())
```

you asked .transform to take values from two columns, and 'it' actually does not 'see' both of them at the same time (so to speak). transform will look at the dataframe columns one by one and return back a series (or group of series) 'made' of scalars which are repeated len(input_column) times.

So this scalar, that should be used by .transform to make the Series, is a result of some reduction function applied on an input Series (and only on ONE series/column at a time).

Consider this example (on your dataframe):

```python
zscore = lambda x: (x - x.mean()) / x.std()  # note that it does not reference anything outside of 'x',
                                             # and for transform, 'x' is one column
df.groupby('A').transform(zscore)
```

will yield:

```
       C      D
0  0.989  0.128
1 -0.478  0.489
2  0.889 -0.589
3 -0.671 -1.150
4  0.034 -0.285
5  1.149  0.662
6 -1.404 -0.907
7 -0.509  1.653
```

Which is exactly the same as if you would use it on only one column at a time:

```python
df.groupby('A')['C'].transform(zscore)
```

yielding:

```
0    0.989
1   -0.478
2    0.889
3   -0.671
4    0.034
5    1.149
6   -1.404
7   -0.509
```

Note that .apply in the last example (df.groupby('A')['C'].apply(zscore)) would work in exactly the same way, but it would fail if you tried to use it on a dataframe:

```python
df.groupby('A').apply(zscore)
```

gives the error:

```
ValueError: operands could not be broadcast together with shapes (6,) (2,)
```

So where else is .transform useful? The simplest case is trying to assign the results of a reduction function back to the original dataframe.

```python
df['sum_C'] = df.groupby('A')['C'].transform(sum)
df.sort('A')  # to clearly see that the scalar ('sum') applies to the whole column of the group
```

yielding:

```
     A      B      C      D  sum_C
1  bar    one  1.998  0.593  3.973
3  bar  three  1.287 -0.639  3.973
5  bar    two  0.687 -1.027  3.973
4  foo    two  0.205  1.274  4.373
2  foo    two  0.128  0.924  4.373
6  foo    one  2.113 -0.516  4.373
7  foo  three  0.657 -1.179  4.373
0  foo    one  1.270  0.201  4.373
```

Trying the same with .apply would give NaNs in sum_C, because .apply returns a reduced Series, which it does not know how to broadcast back:

```python
df.groupby('A')['C'].apply(sum)
```

giving:

```
A
bar    3.973
foo    4.373
```

There are also cases when .transform is used to filter the data:

```python
df[df.groupby(['B'])['D'].transform(sum) < -1]

     A      B      C      D
3  bar  three  1.287 -0.639
7  foo  three  0.657 -1.179
```

I hope this adds a bit more clarity.
hash function in Python 3.3 returns different results between sessions
I've implemented a BloomFilter in Python 3.3, and got different results every session. Drilling down into this weird behavior got me to the internal hash() function - it returns different hash values for the same string every session.

Example:

```python
>>> hash("235")
-310569535015251310
```

----- opening a new python console -----

```python
>>> hash("235")
-1900164331622581997
```

Why is this happening? Why is this useful?
Python uses a random hash seed to prevent attackers from tar-pitting your application by sending you keys designed to collide. See the original vulnerability disclosure. By offsetting the hash with a random seed (set once at startup) attackers can no longer predict what keys will collide.

You can set a fixed seed or disable the feature by setting the PYTHONHASHSEED environment variable; the default is random, but you can set it to a fixed positive integer value, with 0 disabling the feature altogether.

Python versions 2.7 and 3.2 have the feature disabled by default (use the -R switch or set PYTHONHASHSEED=random to enable it); it is enabled by default in Python 3.3 and up.

If you were relying on the order of keys in a Python dictionary or set, then don't. Python uses a hash table to implement these types and their order depends on the insertion and deletion history as well as the random hash seed. Also see the object.__hash__() special method documentation:

> Note: By default, the __hash__() values of str, bytes and datetime objects are "salted" with an unpredictable random value. Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.
>
> This is intended to provide protection against a denial-of-service caused by carefully-chosen inputs that exploit the worst case performance of a dict insertion, O(n^2) complexity. See http://www.ocert.org/advisories/ocert-2011-003.html for details.
>
> Changing hash values affects the iteration order of dicts, sets and other mappings. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds). See also PYTHONHASHSEED.

If you need a stable hash implementation, you probably want to look at the hashlib module; this implements cryptographic hash functions. The pybloom project uses this approach.

Since the offset consists of a prefix and a suffix (start value and final XORed value, respectively) you cannot just store the offset, unfortunately. On the plus side, this does mean that attackers cannot easily determine the offset with timing attacks either.
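A minimal sketch of the stable-hash alternative for something like a Bloom filter (the digest-to-integer conversion here is just one possible choice):

```python
import hashlib

def stable_hash(s):
    # same result in every session, on every platform
    digest = hashlib.sha1(s.encode('utf-8')).digest()
    return int.from_bytes(digest[:8], 'big')

print(stable_hash("235"))  # identical across interpreter restarts
```

Alternatively, running the interpreter with PYTHONHASHSEED=0 disables the randomization, but that forfeits the DoS protection for the whole process.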
How to update code from git to a Docker container
I have a Dockerfile trying to deploy Django code to a container:

```dockerfile
FROM ubuntu:latest
MAINTAINER { myname }

#RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sou$

RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tar git curl dialog wget net-tools nano buil$
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python python-dev python-distribute python-p$

RUN mkdir /opt/app
WORKDIR /opt/app

#Pull Code
RUN git clone git@bitbucket.org/{user}/{repo}

RUN pip install -r website/requirements.txt

#EXPOSE = ["8000"]
CMD python website/manage.py runserver 0.0.0.0:8000
```

And then I build my code as:

```
docker build -t dockerhubaccount/demo:v1 .
```

and this pulls my code from Bitbucket to the container. I run it as:

```
docker run -p 8000:8080 -td felixcheruiyot/demo:v1
```

and things appear to work fine.

Now I want to update the code. Since I used git clone ..., I have this confusion:

* How can I update my code when I have new commits, so that upon Docker container build it ships with the new code? (Note: when I run build it does not fetch it because of the cache.)
* What is the best workflow for this kind of approach?
There are a couple of approaches you can use:

1. You can use docker build --no-cache to avoid using the cache of the git clone.
2. The startup command calls git pull. So instead of running python manage.py, you'd have something like CMD cd /repo && git pull && python manage.py, or use a start script if things are more complex.

I tend to prefer 2. You can also run a cron job to update the code in your container, but that's a little more work and goes somewhat against the Docker philosophy.
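A sketch of option 2 with a small start script; the paths and branch name are placeholders based on the Dockerfile in the question:

```bash
#!/bin/sh
# start.sh - refresh the code, then launch the app
cd /opt/app/website || exit 1
git pull origin master           # fetch latest commits at container start
pip install -r requirements.txt  # pick up any new dependencies
exec python manage.py runserver 0.0.0.0:8000
```

with the Dockerfile ending in something like `CMD ["/opt/app/start.sh"]` instead of the current CMD line. The trade-off of this pattern is that the image is no longer a fixed snapshot: what runs depends on the state of the branch at start time.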
How to do Django JSON Web Token Authentication without forcing the user to re-type their password?
My Django application uses the Rest Framework JWT for authentication. It works great and is very elegant. However, I have a use-case which I am struggling to build.

I have already coded up a working solution for the "Forgot Password" workflow. I allow an un-authenticated user to reset their password if-and-only-if they click on a secret link that I send to their email address. However, I would like to modify this solution such that after the password-reset workflow is successfully completed, the user is automatically logged in without having to retype their username and (new) password. I would like to do this to make the user's experience as frictionless as possible.

The problem is I do not know how to make this work without having the user re-type their password (or storing it in clear-text in the DB, which is obviously very bad). Below is the current way I get the JWT token. You can see that on the marked line, I need the user's clear password. I don't have it. I only have the encrypted password stored in my_user.password.

How can I use the encrypted password in my_user.password instead of the clear password to obtain the JWT? If I cannot use it, then how is this workflow achieved using the Rest Framework JWT?

```python
from rest_framework_jwt.views import ObtainJSONWebToken
from rest_framework import status
from django.contrib.auth.models import User

my_user = User.objects.get(pk=1)
ojwt = ObtainJSONWebToken()

if "_mutable" in dir(request.DATA):
    mutable = request.DATA._mutable
    request.DATA._mutable = True
request.DATA['username'] = my_user.username
request.DATA['password'] = "<my_user's clear password>"  # <-- I don't have this
if "_mutable" in dir(request.DATA):
    request.DATA._mutable = mutable

token_response = ojwt.post(request)
if status.is_success(token_response.status_code):
    pass  # Tell the user login succeeded!!
else:
    pass  # Tell the user login failed. But hopefully, this shouldn't happen.
```
When working with Django REST Framework JWT, it is typically expected that the user is generating the token on their own. Because you are generating the token on behalf of the user, you can't use any of the standard views to make it work.

You are going to need to generate the token on your own, similar to how DRF JWT does it in the views. This means using something like the following for your view code:

```python
from calendar import timegm
from datetime import datetime

from rest_framework_jwt.settings import api_settings

jwt_payload_handler = api_settings.JWT_PAYLOAD_HANDLER
jwt_encode_handler = api_settings.JWT_ENCODE_HANDLER

my_user = User.objects.get(pk=1)  # replace with your existing logic

payload = jwt_payload_handler(my_user)

# Include original issued at time for a brand new token,
# to allow token refresh
if api_settings.JWT_ALLOW_REFRESH:
    payload['orig_iat'] = timegm(
        datetime.utcnow().utctimetuple()
    )

return {
    'token': jwt_encode_handler(payload)
}
```

This should allow you to manually generate the token within the view, without having to know the user's password.
Can you patch *just* a nested function with closure, or must the whole outer function be repeated?
A 3rd party library we use contains a rather long function that uses a nested function inside it. Our use of that library triggers a bug in that function, and we very much would like to solve that bug. Unfortunately, the library maintainers are somewhat slow with fixes, but we don't want to have to fork the library. We also cannot hold our release until they have fixed the issue.

We would prefer to use monkey-patching to fix this issue here, as that is easier to track than patching the source. However, to repeat a very large function where just replacing the inner function would be enough feels like overkill, and it makes it harder for others to see what exactly we changed. Are we stuck with a static patch to the library egg?

The inner function relies on closing over a variable; a contrived example would be:

```python
def outerfunction(*args):
    def innerfunction(val):
        return someformat.format(val)
    someformat = 'Foo: {}'
    for arg in args:
        yield innerfunction(arg)
```

where we would want to replace just the implementation of innerfunction(). The actual outer function is far, far longer. We'd reuse the closed-over variable and maintain the function signature, of course.
Yes, you can replace an inner function, even if it is using a closure. You'll have to jump through a few hoops though. Please take into account:

1. You need to create the replacement function as a nested function too, to ensure that Python creates the same closure. If the original function has a closure over the names foo and bar, you need to define your replacement as a nested function with the same names closed over. More importantly, you need to use those names in the same order; closures are referenced by index.
2. Monkey patching is always fragile and can break with the implementation changing. This is no exception. Retest your monkey patch whenever you change versions of the patched library.

To understand how this will work, I'll first explain how Python handles nested functions. Python uses code objects to produce function objects as needed. Each code object has an associated constants list, and the code objects for nested functions are stored in that list:

```python
>>> def outerfunction(*args):
...     def innerfunction(val):
...         return someformat.format(val)
...     someformat = 'Foo: {}'
...     for arg in args:
...         yield innerfunction(arg)
...
>>> outerfunction.__code__
<code object outerfunction at 0x105b27ab0, file "<stdin>", line 1>
>>> outerfunction.__code__.co_consts
(None, <code object innerfunction at 0x100769db0, file "<stdin>", line 2>, 'Foo: {}')
```

The co_consts sequence is a tuple, so we cannot just swap out the inner code object. I'll show later on how we'll produce a new function object with just that code object replaced.

Next, we need to cover closures. At compile time, Python determines that a) someformat is not a local name in innerfunction and that b) it is closing over the same name in outerfunction. Python not only then generates the bytecode to produce the correct name lookups, the code objects for both the nested and the outer functions are annotated to record that someformat is to be closed over:

```python
>>> outerfunction.__code__.co_cellvars
('someformat',)
>>> outerfunction.__code__.co_consts[1].co_freevars
('someformat',)
```

You want to make sure that the replacement inner code object only ever lists those same names as free variables, and does so in the same order.

Closures are created at run-time; the byte-code to produce them is part of the outer function:

```python
>>> import dis
>>> dis.dis(outerfunction)
  2           0 LOAD_CLOSURE             0 (someformat)
              3 BUILD_TUPLE              1
              6 LOAD_CONST               1 (<code object innerfunction at 0x1047b2a30, file "<stdin>", line 2>)
              9 MAKE_CLOSURE             0
             12 STORE_FAST               1 (innerfunction)

# ... rest of disassembly omitted ...
```

The LOAD_CLOSURE bytecode there creates a closure for the someformat variable; Python creates as many closures as are used by the function, in the order they are first used in the inner function. This is an important fact to remember for later. The function itself looks up these closures by position:

```python
>>> dis.dis(outerfunction.__code__.co_consts[1])
  3           0 LOAD_DEREF               0 (someformat)
              3 LOAD_ATTR                0 (format)
              6 LOAD_FAST                0 (val)
              9 CALL_FUNCTION            1
             12 RETURN_VALUE
```

The LOAD_DEREF opcode picked the closure at position 0 here to gain access to the someformat closure. In theory this also means you can use entirely different names for the closures in your inner function, but for debugging purposes it makes much more sense to stick to the same names. It also makes verifying that the replacement function will slot in properly easier, as you can just compare the co_freevars tuples if you use the same names.

Now for the swapping trick.
Functions are objects like any other in Python, instances of a specific type. The type isn't exposed normally, but the type() call still returns it. The same applies to code objects, and both types even have documentation:

```python
>>> type(outerfunction)
<type 'function'>
>>> print type(outerfunction).__doc__
function(code, globals[, name[, argdefs[, closure]]])

Create a function object from a code object and a dictionary.
The optional name string overrides the name from the code object.
The optional argdefs tuple specifies the default argument values.
The optional closure tuple supplies the bindings for free variables.
>>> type(outerfunction.__code__)
<type 'code'>
>>> print type(outerfunction.__code__).__doc__
code(argcount, nlocals, stacksize, flags, codestring, constants, names,
      varnames, filename, name, firstlineno, lnotab[, freevars[, cellvars]])

Create a code object.  Not for the faint of heart.
```

We'll use these type objects to produce a new code object with updated constants, and then a new function object with the updated code object:

```python
def replace_inner_function(outer, new_inner):
    """Replace a nested function code object used by outer with new_inner

    The replacement new_inner must use the same name and must at most use
    the same closures as the original.

    """
    if hasattr(new_inner, '__code__'):
        # support both functions and code objects
        new_inner = new_inner.__code__

    # find original code object so we can validate the closures match
    ocode = outer.__code__
    function, code = type(outer), type(ocode)
    iname = new_inner.co_name
    orig_inner = next(
        const for const in ocode.co_consts
        if isinstance(const, code) and const.co_name == iname)

    # you can ignore later closures, but since they are matched by position
    # the new sequence must match the start of the old.
    assert (orig_inner.co_freevars[:len(new_inner.co_freevars)] ==
            new_inner.co_freevars), 'New closures must match originals'

    # replace the code object for the inner function
    new_consts = tuple(
        new_inner if const is orig_inner else const
        for const in outer.__code__.co_consts)

    # create a new function object with the new constants
    return function(
        code(ocode.co_argcount, ocode.co_nlocals, ocode.co_stacksize,
             ocode.co_flags, ocode.co_code, new_consts, ocode.co_names,
             ocode.co_varnames, ocode.co_filename, ocode.co_name,
             ocode.co_firstlineno, ocode.co_lnotab, ocode.co_freevars,
             ocode.co_cellvars),
        outer.__globals__, outer.__name__, outer.__defaults__,
        outer.__closure__)
```

The above function validates that the new inner function (which can be passed in as either a code object or as a function) will indeed use the same closures as the original. It then creates new code and function objects to match the old outer function object, but with the nested function (located by name) replaced with your monkey patch.

To demonstrate that the above all works, let's replace innerfunction with one that increments each formatted value by 2:

```python
>>> def create_inner():
...     someformat = None  # the actual value doesn't matter
...     def innerfunction(val):
...         return someformat.format(val + 2)
...     return innerfunction
...
>>> new_inner = create_inner()
```

The new inner function is created as a nested function too; this is important as it ensures that Python will use the correct bytecode to look up the someformat closure. I used a return statement to extract the function object, but you could also look at create_inner.__code__.co_consts to grab the code object.
Now we can patch the original outer function, swapping out just the inner function:

```python
>>> new_outer = replace_inner_function(outerfunction, new_inner)
>>> list(outerfunction(6, 7, 8))
['Foo: 6', 'Foo: 7', 'Foo: 8']
>>> list(new_outer(6, 7, 8))
['Foo: 8', 'Foo: 9', 'Foo: 10']
```

The original function echoed out the original values, but the new function returned values incremented by 2.

You can even create new replacement inner functions that use fewer closures:

```python
>>> def demo_outer():
...     closure1 = 'foo'
...     closure2 = 'bar'
...     def demo_inner():
...         print closure1, closure2
...     demo_inner()
...
>>> def create_demo_inner():
...     closure1 = None
...     def demo_inner():
...         print closure1
...
>>> replace_inner_function(demo_outer, create_demo_inner.__code__.co_consts[1])()
foo
```

So, to complete the picture:

1. Create your monkey-patch inner function as a nested function with the same closures
2. Use replace_inner_function() to produce a new outer function
3. Monkey patch the original outer function to use the new outer function produced in step 2.
Why does creating a list from a list make it larger?
I'm seeing some inconsistencies when using sys.getsizeof on what should be identical lists. (Python 2.7.5)

```python
>>> lst = [0,1,2,3,4,5,6,7,8,9]
>>> sys.getsizeof(lst)
76
>>> lst2 = list(lst)
>>> sys.getsizeof(lst2)
104
>>> lst3 = list(lst2)
>>> sys.getsizeof(lst3)
104
>>> sys.getsizeof(lst[:])
76
>>> sys.getsizeof(lst2[:])
76
```

Does anybody have a simple explanation?
With a list literal, the VM creates the list with a set length. When passing a sequence to the list() constructor the elements are added one by one (via list.extend()) and as such the list is resized when appropriate. Since the resize operation overallocates in order to amortize the cost, the final list will usually be larger than the source list.
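A small demonstration of the overallocation (the exact sizes are implementation details and vary by platform and Python version):

```python
import sys

lst = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
print(sys.getsizeof(lst))    # exact fit: built from a literal with known length

grown = []
for i in range(10):
    grown.append(i)          # resized repeatedly, so overallocated
print(sys.getsizeof(grown))  # usually larger than the literal's size
```

A slice like lst[:] is again created with a known length, which is why it matches the literal's size in the question.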
Is it better to store big number in list?
Is it memory efficient to store a big number in a list? Why does the following happen?

```python
>>> A = 100**100
>>> sys.getsizeof(A)
102
>>> B = [100**100]
>>> sys.getsizeof(B)
40
```

Why are the sizes of A and B not equal?

```python
>>> C = [1,100**100]
>>> sys.getsizeof(C)
44
>>> D = [1000**1000, 100**100]
>>> sys.getsizeof(D)
44
```

Why are the sizes of C and D equal?
sys.getsizeof() returns the shallow size, i.e. the size of the list object itself but not of the objects it contains. From the documentation:

> Only the memory consumption directly attributed to the object is accounted for, not the memory consumption of objects it refers to.

If you'd like to compute the deep size, it might be worth giving Pympler a try:

```python
>>> from pympler.asizeof import asizeof
>>> A = 100**100
>>> asizeof(A)
120
>>> B = [100**100]
>>> asizeof(B)
200
```

Thus, on my computer, placing the long inside a list adds 80 bytes of overhead.
What kind of python magic does dir() perform with __getattr__?
The following is in Python 2.7 with MySQLdb 1.2.3.

I needed a class wrapper to add some attributes to objects which don't support them (classes with __slots__ and/or some classes written in C), so I came up with something like this:

```python
class Wrapper(object):
    def __init__(self, obj):
        self._wrapped_obj = obj
    def __getattr__(self, attr):
        return getattr(self._wrapped_obj, attr)
```

I was expecting that the dir() builtin called on my instance of Wrapper would return just the names inherited from object plus _wrapped_obj, and I discovered that this is actually the case most of the time, but not always. I tried this with a custom old-style class, a custom new-style class, and some builtin classes, and it always worked this way: the only exception that I found is when the wrapped object was an instance of the class _mysql.connection. In this case, dir() on my object happens to also know all the method names attached to the wrapped connection object.

I read the Python documentation about dir, and this behaviour appears to be legit: dir is supposed to return a list of "interesting names", not the "real" content of the instance. But I really can't figure out how it does this: does it actually understand the implementation of my __getattr__ and resolve to the attached item? If this is true, why only with that connection class and not, for instance, with a simpler dict?

Here is some pasted code as an example of this curious behaviour:

```python
>>> from _mysql import connection
>>> c = connection(**connection_parameters)
>>> c
<_mysql.connection open to '127.0.0.1' at a16920>
>>>
>>> dir(c)
['affected_rows', 'autocommit', 'change_user', 'character_set_name', 'close', 'commit', 'dump_debug_info', 'errno', 'error', 'escape', 'escape_string', 'field_count', 'get_character_set_info', 'get_host_info', 'get_proto_info', 'get_server_info', 'info', 'insert_id', 'kill', 'next_result', 'ping', 'query', 'rollback', 'select_db', 'set_character_set', 'set_server_option', 'shutdown', 'sqlstate', 'stat', 'store_result', 'string_literal', 'thread_id', 'use_result', 'warning_count']
>>>
>>> w = Wrapper(c)
>>> dir(w)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_wrapped_obj', 'affected_rows', 'autocommit', 'change_user', 'character_set_name', 'close', 'commit', 'dump_debug_info', 'errno', 'error', 'escape', 'escape_string', 'field_count', 'get_character_set_info', 'get_host_info', 'get_proto_info', 'get_server_info', 'info', 'insert_id', 'kill', 'next_result', 'ping', 'query', 'rollback', 'select_db', 'set_character_set', 'set_server_option', 'shutdown', 'sqlstate', 'stat', 'store_result', 'string_literal', 'thread_id', 'use_result', 'warning_count']
>>>
>>> d = Wrapper({})
>>> dir(d)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_wrapped_obj']
```
There are two deprecated attributes in Python 2, object.__members__ and object.__methods__; these were aimed at supporting dir() on extension types (C-defined objects):

> object.__methods__
> Deprecated since version 2.2: Use the built-in function dir() to get a list of an object's attributes. This attribute is no longer available.
>
> object.__members__
> Deprecated since version 2.2: Use the built-in function dir() to get a list of an object's attributes. This attribute is no longer available.

These were removed from Python 3, but because your connection object (at least in the older version you are using) still provides a __methods__ attribute that is found through your __getattr__ hook, it is used by dir() here. If you add a print statement to the __getattr__ method you'll see the attributes being accessed:

```python
>>> class Wrapper(object):
...     def __init__(self, obj):
...         self._wrapped_obj = obj
...     def __getattr__(self, attr):
...         print 'getattr', attr
...         return getattr(self._wrapped_obj, attr)
...
>>> dir(Wrapper({}))
getattr __members__
getattr __methods__
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_wrapped_obj']
```

For new-style objects, the newer __dir__ method supported by dir() is properly looked up on the type only, so you don't see that being accessed here.

The project HISTORY file suggests the attributes were removed in the big Python 3 compatibility update for 1.2.4 beta 1.
Anaconda vs miniconda space
On my desktop PC I have Anaconda installed, and on my laptop - to save space - I thought I'd install Miniconda and be selective about the modules I install. So I installed a handful: numpy, scipy, etc. I didn't install anything which isn't part of the default Anaconda install, but I just realized my Miniconda install is taking up more space than the Anaconda install! (1.8GB vs 2.2GB; no environments in either.)

The bulk of the difference comes from the pkgs folder. The Miniconda install seems to have the tar.bz2 of all of the installed packages as well as the exploded versions.

* Are these safe to delete?
* Will they be deleted automatically after a while?
* Is there an option to not cache these?

P.S. I'm developing on both Windows and Mac (I've tried installing Anaconda and Miniconda on both Mac and Windows to see, and I get very similar results).
You can safely delete the tar.bz2 files. They are only used as a cache. The command conda clean -t will clean them automatically.
Deterministic python script behaves in non-deterministic way
I have a script which uses no randomisation that gives me different answers when I run it. I expect the answer to be the same every time I run the script. The problem appears to only happen for certain (ill-conditioned) input data.

The snippet comes from an algorithm to compute a specific type of controller for a linear system, and it mostly consists of doing linear algebra (matrix inversions, Riccati equation, eigenvalues). Obviously, this is a major worry for me, as I now cannot trust my code to give me the right results. I know the result can be wrong for poorly conditioned data, but I expect it to be consistently wrong. Why is the answer not always the same on my Windows machine? Why do the Linux and Windows machines not give the same results?

I'm using Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32, with Numpy version 1.8.2 and Scipy 0.14.0 (Windows 8, 64-bit).

The code is below. I've also tried running the code on two Linux machines, and there the script always gives the same answer (but the machines gave differing answers). One was running Python 2.7.8, with Numpy 1.8.2 and Scipy 0.14.0. The second was running Python 2.7.3 with Numpy 1.6.1 and Scipy 0.12.0.

I solve the Riccati equation three times and then print the answers. I expect the same answer every time; instead I get the sequence '1.75305103767e-09; 3.25501787302e-07; 3.25501787302e-07'.

```python
import numpy as np
import scipy.linalg

matrix = np.matrix

A = matrix([[  0.00000000e+00,   2.96156260e+01,   0.00000000e+00,  -1.00000000e+00],
            [ -2.96156260e+01,  -6.77626358e-21,   1.00000000e+00,  -2.11758237e-22],
            [  0.00000000e+00,   0.00000000e+00,   2.06196064e+00,   5.59422224e+01],
            [  0.00000000e+00,   0.00000000e+00,   2.12407340e+01,  -2.06195974e+00]])
B = matrix([[    0.        ,     0.        ,     0.        ],
            [    0.        ,     0.        ,     0.        ],
            [ -342.35401351, -14204.86532216,    31.22469724],
            [ 1390.44997337,   342.33745324,  -126.81720597]])
Q = matrix([[ 5.00000001,  0.        ,  0.        ,  0.        ],
            [ 0.        ,  5.00000001,  0.        ,  0.        ],
            [ 0.        ,  0.        ,  0.        ,  0.        ],
            [ 0.        ,  0.        ,  0.        ,  0.        ]])
R = matrix([[ -3.75632852e+04,  -0.00000000e+00,   0.00000000e+00],
            [ -0.00000000e+00,  -3.75632852e+04,   0.00000000e+00],
            [  0.00000000e+00,   0.00000000e+00,   4.00000000e+00]])

counter = 0
while counter < 3:
    counter += 1
    X = scipy.linalg.solve_continuous_are(A, B, Q, R)
    print(-3449.15531628 - X[0,0])
```

My numpy config is as below:

```
>>> print np.show_config()
lapack_opt_info:
    libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
    library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
blas_opt_info:
    libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
    library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
openblas_info:
    NOT AVAILABLE
lapack_mkl_info:
    libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
    library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
blas_mkl_info:
    libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
    library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
mkl_info:
    libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
    library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
    define_macros = [('SCIPY_MKL_H', None)]
    include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
None
```

(edited to trim the question down)
In general, linalg libraries on Windows give different answers on different runs at machine-precision level. I have never heard an explanation of why this happens only, or mainly, on Windows.

If your matrix is ill-conditioned, then the inverse will be largely numerical noise. On Windows the noise is not always the same in consecutive runs; on other operating systems the noise might always be the same, but it can differ depending on the details of the linear algebra library, on threading options, cache usage, and so on.

I've seen several examples of this on Windows, and posted them to the scipy mailing list; I was using mostly the official 32-bit binaries with ATLAS BLAS/LAPACK.

The only solution is to make the outcome of your calculation not depend so much on floating-point precision issues and numerical noise: for example, regularize the matrix inverse, use the generalized inverse (pinv), reparameterize, or similar.
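For instance, a pseudo-inverse with an explicit cutoff on small singular values; the rcond threshold here is an arbitrary placeholder to be tuned to the problem's conditioning:

```python
import numpy as np

A = np.random.rand(4, 4)                 # stand-in for an ill-conditioned matrix
A_pinv = np.linalg.pinv(A, rcond=1e-10)  # singular values below rcond * largest are dropped
```

Discarding the tiny singular values removes exactly the directions in which the result is dominated by numerical noise.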
Running ansible-playbook using Python API
How can I run a playbook from a Python script? What is the equivalent of the following, using the ansible module in Python: ansible -i hosts dbservers -m setup ansible-playbook -i hosts -vvvv -k site.yml I was looking at their documentation at http://docs.ansible.com/developing_api.html but they have very limited examples.
This is covered in the Ansible documentation under "Python API." For example, ansible -i hosts dbservers -m setup is implemented via: import ansible.runner runner = ansible.runner.Runner( module_name='setup', module_args='', pattern='dbservers', ) dbservers_get_facts = runner.run() There are a bunch of non-documented parameters in the __init__ method of Runner (from ansible.runner). There are too many to list inline, but I've included some of the parameters in this post as a guess at what you're specifically looking for. class Runner(object): ''' core API interface to ansible ''' # see bin/ansible for how this is used... def __init__(self, host_list=C.DEFAULT_HOST_LIST, # ex: /etc/ansible/hosts, legacy usage module_path=None, # ex: /usr/share/ansible module_name=C.DEFAULT_MODULE_NAME, # ex: copy module_args=C.DEFAULT_MODULE_ARGS, # ex: "src=/tmp/a dest=/tmp/b" ... pattern=C.DEFAULT_PATTERN, # which hosts? ex: 'all', 'acme.example.org' remote_user=C.DEFAULT_REMOTE_USER, # ex: 'username' remote_pass=C.DEFAULT_REMOTE_PASS, # ex: 'password123' or None if using key remote_port=None, # if SSH on different ports private_key_file=C.DEFAULT_PRIVATE_KEY_FILE, # if not using keys/passwords sudo_pass=C.DEFAULT_SUDO_PASS, # ex: 'password123' or None ... sudo=False, # whether to run sudo or not sudo_user=C.DEFAULT_SUDO_USER, # ex: 'root' module_vars=None, # a playbooks internals thing play_vars=None, # play_file_vars=None, # role_vars=None, # role_params=None, # default_vars=None, # extra_vars=None, # extra vars specified with the playbook(s) is_playbook=False, # running from playbook or not? inventory=None, # reference to Inventory object ... su=False, # Are we running our command via su? su_user=None, # User to su to when running command, ex: 'root' su_pass=C.DEFAULT_SU_PASS, vault_pass=None, ... ): For instance, the above command that specifies a sudo user and pass would be: runner = ansible.runner.Runner( module_name='setup', module_args='', pattern='dbservers', remote_user='some_user', remote_pass='some_pass_or_python_expression_that_returns_a_string' ) For playbooks, look into playbook.PlayBook, which takes a similar set of initializers: class PlayBook(object): ''' runs an ansible playbook, given as a datastructure or YAML filename. ... ''' # ***************************************************** def __init__(self, playbook = None, host_list = C.DEFAULT_HOST_LIST, module_path = None, .... and can be executed with the .run() method. e.g.: from ansible.playbook import PlayBook pb = PlayBook(playbook='/path/to/book.yml', --other initializers--) pb.run() More robust usage can be found in the ansible-playbook file. As far as I know, translating playbooks to Python modules is a bit more involved, but the documentation listed above should get you covered and you can reuse the YAML parser built into Ansible to convert playbooks to variables.
Order of Serializer Validation in Django REST Framework
Situation While working with validation in the Django REST Framework's ModelSerializer, I have noticed that the Meta.model fields are always validated, even when it does not necessarily make sense to do so. Take the following example for a User model's serialization: I have an endpoint that creates a user. As such, there is a password field and a confirm_password field. If the two fields do not match, the user cannot be created. Likewise, if the requested username already exists, the user cannot be created. The user POSTs improper values for each of the fields mentioned above An implementation of validate has been made in the serializer (see below), catching the non-matching password and confirm_password fields Implementation of validate: def validate(self, data): if data['password'] != data.pop('confirm_password'): raise serializers.ValidationError("Passwords do not match") return data Problem Even when the ValidationError is raised by validate, the ModelSerializer still queries the database to check to see if the username is already in use. This is evident in the error-list that gets returned from the endpoint; both the model and non-field errors are present. Consequently, I would like to know how to prevent model validation until after non-field validation has finished, saving me a call to my database. Attempt at solution I have been trying to go through the DRF's source to figure out where this is happening, but I have been unsuccessful in locating what I need to override in order to get this to work.
Since most likely your username field has unique=True set, Django REST Framework automatically adds a validator that checks to make sure the new username is unique. You can actually confirm this by doing repr(serializer()), which will show you all of the automatically generated fields, including the validators. Validation is run in a specific, undocumented order: 1. Field deserialization is called (serializer.to_internal_value and field.run_validators) 2. serializer.validate_[field] is called for each field 3. Serializer-level validators are called (serializer.run_validation followed by serializer.run_validators) 4. serializer.validate is called So the problem that you are seeing is that the field-level validation is called before your serializer-level validation. While I wouldn't recommend it, you can remove the field-level validator by setting extra_kwargs in your serializer's meta. class Meta: extra_kwargs = { "username": { "validators": [], }, } You will need to re-implement the unique check in your own validation though, along with any additional validators that have been automatically generated.
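If you do go that route, here is a minimal sketch of what re-implementing the check could look like; the serializer layout and field names are assumed from the question, so treat it as an illustration rather than the canonical implementation:

from rest_framework import serializers
from django.contrib.auth.models import User  # assuming Django's stock User model

class UserSerializer(serializers.ModelSerializer):
    confirm_password = serializers.CharField(write_only=True)

    class Meta:
        model = User
        fields = ("username", "password", "confirm_password")
        extra_kwargs = {"username": {"validators": []}}  # drop the automatic unique check

    def validate(self, data):
        # Cheap check first: no database hit if the passwords don't match
        if data["password"] != data.pop("confirm_password"):
            raise serializers.ValidationError("Passwords do not match")
        # Re-implement the uniqueness check that the removed validator performed
        if User.objects.filter(username=data["username"]).exists():
            raise serializers.ValidationError("Username is already in use")
        return data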
Browser performance tests through selenium
We are using protractor for testing internal AngularJS applications. Besides functional tests, we check for performance regressions with the help of protractor-perf, which is based on the nodejs browser-perf library. Because "Performance is a feature". With protractor-perf we can measure and assert different performance characteristics while making browser actions, for example: browser.get('http://www.angularjs.org'); perf.start(); // Start measuring the metrics element(by.model('todoText')).sendKeys('write a protractor test'); element(by.css('[value="add"]')).click(); perf.stop(); // Stop measuring the metrics if (perf.isEnabled) { // Is perf measuring enabled ? // Check for perf regressions, just like you check for functional regressions expect(perf.getStats('meanFrameTime')).toBeLessThan(60); }; Now, for another internal application we have a set of selenium-based tests written in Python. Is it possible to check for performance regressions with selenium-python, or should I rewrite the tests using protractor to be able to write browser performance tests?
It is possible to get closer to what browser-perf is doing by collecting the chrome performance logs and analyzing them. To get them, turn on performance logging by tweaking the loggingPrefs desired capability: import json from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities caps = DesiredCapabilities.CHROME caps['loggingPrefs'] = {'performance': 'ALL'} driver = webdriver.Chrome(desired_capabilities=caps) driver.get('https://stackoverflow.com') logs = [json.loads(log['message'])['message'] for log in driver.get_log('performance')] with open('devtools.json', 'wb') as f: json.dump(logs, f) driver.close() At this point, the devtools.json file would contain a bunch of trace records: [ { "params": { "timestamp": 1419571233.19293, "frameId": "16639.1", "requestId": "16639.1", "loaderId": "16639.2", "type": "Document", "response": { "mimeType": "text/plain", "status": 200, "fromServiceWorker": false, "encodedDataLength": -1, "headers": { "Access-Control-Allow-Origin": "*", "Content-Type": "text/plain;charset=US-ASCII" }, "url": "data:,", "statusText": "OK", "connectionId": 0, "connectionReused": false, "fromDiskCache": false } }, "method": "Network.responseReceived" }, { "params": { "timestamp": 1419571233.19294, "encodedDataLength": 0, "requestId": "16639.1" }, "method": "Network.loadingFinished" }, .. ] Now, the question is, what to do with it. One option that was initially suggested during the Google Test Automation Conference is to submit the logs to webpagetest.org. There is an example in java available here, but, at the moment, I have had no luck implementing it in Python. In theory, webpagetest.org would then generate its usual UI report from the submitted log. They also provide the metrics in JSON/XML and other formats that can be further analyzed. This is really something; thanks to Vivek Singh for pointing this out in the comments. browser-perf also uses the logging functionality to pick up the tracing logs, and analyzes the data.
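What you do with the dump depends on what you want to assert. As a hypothetical starting point, here is a small sketch that tallies the network payload recorded in the captured entries, relying only on the record structure shown above:

import json

with open('devtools.json') as f:
    logs = json.load(f)

# Network.loadingFinished events carry the encoded size of each response
finished = [e for e in logs if e.get('method') == 'Network.loadingFinished']
total_bytes = sum(e['params'].get('encodedDataLength', 0) for e in finished)

print('requests finished:', len(finished))
print('total encoded bytes:', total_bytes)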
Why does numpy std() give a different result to matlab std()?
I am trying to convert MATLAB code to NumPy and found that the std function gives a different result: in MATLAB std([1,3,4,6]) ans = 2.0817 in NumPy np.std([1,3,4,6]) 1.8027756377319946 Is this normal? And how should I handle this?
The NumPy function np.std takes an optional parameter ddof: "Delta Degrees of Freedom". By default, this is 0. Set it to 1 to get the MATLAB result: >>> np.std([1,3,4,6], ddof=1) 2.0816659994661326 To add a little more context, in the calculation of the variance (of which the standard deviation is the square root) we typically divide by the number of values we have. But if we select a random sample of N elements from a larger distribution and calculate the variance, division by N can lead to an underestimate of the actual variance. To fix this, we can lower the number we divide by (the degrees of freedom) to a number less than N (usually N-1). The ddof parameter allows us to change the divisor by the amount we specify. Unless told otherwise, NumPy will calculate the biased estimator for the variance (ddof=0, dividing by N). This is what you want if you are working with the entire distribution (and not a subset of values which have been randomly picked from a larger distribution). If the ddof parameter is given, NumPy divides by N - ddof instead. The default behaviour of MATLAB's std is to correct the bias for sample variance by dividing by N-1. This gets rid of some (but probably not all) of the bias in the standard deviation. This is likely to be what you want if you're using the function on a random sample of a larger distribution. The nice answer by @hbaderts gives further mathematical details.
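To see exactly where the two numbers come from, here is the same calculation done by hand on the sample from the question:

import numpy as np

x = np.array([1, 3, 4, 6], dtype=float)
mean = x.mean()                     # 3.5
ss = ((x - mean) ** 2).sum()        # 13.0, the sum of squared deviations

print(np.sqrt(ss / len(x)))         # 1.8027... same as np.std(x) (ddof=0)
print(np.sqrt(ss / (len(x) - 1)))   # 2.0816... same as np.std(x, ddof=1) and MATLAB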
Why does Python return True when checking if an empty string is in another?
My limited brain cannot understand why this happens: >>> print '' in 'lolsome' True In PHP, an equivalent comparison returns false: var_dump(strpos('', 'lolsome'));
From the documentation: For the Unicode and string types, x in y is true if and only if x is a substring of y. An equivalent test is y.find(x) != -1. Note, x and y need not be the same type; consequently, u'ab' in 'abc' will return True. Empty strings are always considered to be a substring of any other string, so "" in "abc" will return True. From looking at your print call, you're using 2.x. To go deeper, look at the bytecode: >>> def answer(): ... '' in 'lolsome' >>> dis.dis(answer) 2 0 LOAD_CONST 1 ('') 3 LOAD_CONST 2 ('lolsome') 6 COMPARE_OP 6 (in) 9 POP_TOP 10 LOAD_CONST 0 (None) 13 RETURN_VALUE COMPARE_OP is where we are doing our boolean operation and looking at the source code for in reveals where the comparison happens: TARGET(COMPARE_OP) { w = POP(); v = TOP(); if (PyInt_CheckExact(w) && PyInt_CheckExact(v)) { /* INLINE: cmp(int, int) */ register long a, b; register int res; a = PyInt_AS_LONG(v); b = PyInt_AS_LONG(w); switch (oparg) { case PyCmp_LT: res = a < b; break; case PyCmp_LE: res = a <= b; break; case PyCmp_EQ: res = a == b; break; case PyCmp_NE: res = a != b; break; case PyCmp_GT: res = a > b; break; case PyCmp_GE: res = a >= b; break; case PyCmp_IS: res = v == w; break; case PyCmp_IS_NOT: res = v != w; break; default: goto slow_compare; } x = res ? Py_True : Py_False; Py_INCREF(x); } else { slow_compare: x = cmp_outcome(oparg, v, w); } Py_DECREF(v); Py_DECREF(w); SET_TOP(x); if (x == NULL) break; PREDICT(POP_JUMP_IF_FALSE); PREDICT(POP_JUMP_IF_TRUE); DISPATCH(); } and where cmp_outcome is in the same file, it's easy to find our next clue: res = PySequence_Contains(w, v); which is in abstract.c: { Py_ssize_t result; if (PyType_HasFeature(seq->ob_type, Py_TPFLAGS_HAVE_SEQUENCE_IN)) { PySequenceMethods *sqm = seq->ob_type->tp_as_sequence; if (sqm != NULL && sqm->sq_contains != NULL) return (*sqm->sq_contains)(seq, ob); } result = _PySequence_IterSearch(seq, ob, PY_ITERSEARCH_CONTAINS); return Py_SAFE_DOWNCAST(result, Py_ssize_t, int); } and to come up for air from the source, we find this next function in the documentation: objobjproc PySequenceMethods.sq_contains This function may be used by PySequence_Contains() and has the same signature. This slot may be left to NULL, in this case PySequence_Contains() simply traverses the sequence until it finds a match. and further down in the same documentation: int PySequence_Contains(PyObject *o, PyObject *value) Determine if o contains value. If an item in o is equal to value, return 1, otherwise return 0. On error, return -1. This is equivalent to the Python expression value in o. Where '' isn't null, the sequence 'lolsome' can be thought to contain it.
Case insensitive argparse choices
Is it possible to check argparse choices in a case-insensitive manner? import argparse choices = ["win64", "win32"] parser = argparse.ArgumentParser() parser.add_argument("-p", choices=choices) print(parser.parse_args(["-p", "Win32"])) results in: usage: choices.py [-h] [-p {win64,win32}] choices.py: error: argument -p: invalid choice: 'Win32' (choose from 'win64','win32')
Transform the argument into lowercase by using type = lambda s : s.lower() for the -p switch. As pointed out by chepner in the comments, since str.lower is already an appropriate function, the lambda wrapper is not necessarily needed and you could instead simply use type = str.lower directly.
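Putting it together, here is the script from the question with the case-insensitive switch; the type conversion runs before the choices check, so any capitalisation is accepted:

import argparse

choices = ["win64", "win32"]
parser = argparse.ArgumentParser()
# Lower-case the value before it is compared against choices
parser.add_argument("-p", type=str.lower, choices=choices)

print(parser.parse_args(["-p", "Win32"]))   # Namespace(p='win32')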
Matplotlib issue on OS X ("ImportError: cannot import name _thread")
At some point in the last few days, Matplotlib stopped working for me on OS X. Here's the error I get when trying to import matplotlib: Traceback (most recent call last): File "/my/path/to/script/my_script.py", line 15, in <module> import matplotlib.pyplot as plt File "/Library/Python/2.7/site-packages/matplotlib/pyplot.py", line 34, in <module> from matplotlib.figure import Figure, figaspect File "/Library/Python/2.7/site-packages/matplotlib/figure.py", line 40, in <module> from matplotlib.axes import Axes, SubplotBase, subplot_class_factory File "/Library/Python/2.7/site-packages/matplotlib/axes/__init__.py", line 4, in <module> from ._subplots import * File "/Library/Python/2.7/site-packages/matplotlib/axes/_subplots.py", line 10, in <module> from matplotlib.axes._axes import Axes File "/Library/Python/2.7/site-packages/matplotlib/axes/_axes.py", line 22, in <module> import matplotlib.dates as _ # <-registers a date unit converter File "/Library/Python/2.7/site-packages/matplotlib/dates.py", line 126, in <module> from dateutil.rrule import (rrule, MO, TU, WE, TH, FR, SA, SU, YEARLY, File "/Library/Python/2.7/site-packages/dateutil/rrule.py", line 14, in <module> from six.moves import _thread ImportError: cannot import name _thread The only system change I can think of was the Apple-forced NTP update and maybe some permission changes I did in /usr/local to get Brew working again. I tried reinstalling both Matplotlib and Python-dateutil via Pip, but this did not help. Also tried a reboot. I'm running Python 2.7.6, which is located in /usr/bin/python. I'm running Yosemite (OS X 10.10.1).
sudo pip uninstall python-dateutil sudo pip install python-dateutil==2.2 I had the same error message this afternoon as well, although I did recently upgrade to Yosemite. I'm not totally sure I understand why reverting dateutil to a previous version works for me, but since running the above I'm having no trouble (I generally use pyplot inline in an ipython notebook).
Python/Selenium incognito/private mode
I cannot seem to find any documentation on how to make Selenium open the browser in incognito mode. Do I have to set up a custom profile in the browser?
First of all, since selenium by default starts up a browser with a clean, brand-new profile, you are actually already browsing privately. Referring to: Python - Start firefox with Selenium in private mode How might I simulate a private browsing experience in Watir? (Selenium) But you can strictly enforce/turn on incognito/private mode anyway. For Chrome, pass the --incognito command-line argument: --incognito Causes the browser to launch directly in incognito mode. from selenium import webdriver chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--incognito") driver = webdriver.Chrome(chrome_options=chrome_options) driver.get('https://google.com') For Firefox, set browser.privatebrowsing.autostart to True: from selenium import webdriver firefox_profile = webdriver.FirefoxProfile() firefox_profile.set_preference("browser.privatebrowsing.autostart", True) driver = webdriver.Firefox(firefox_profile=firefox_profile) FYI, this corresponds to the private-browsing checkbox in Firefox's privacy settings.
Can't connect to S3 buckets with periods in their name, when using Boto on Heroku
We're getting a certificate error when trying to connect to our S3 bucket using Boto. Strangely, this only manifests itself when accessing a bucket with periods in its name WHILE running on Heroku. from boto.s3.connection import S3Connection conn = S3Connection({our_s3_key}, {our_s3_secret}) bucket = conn.get_bucket('ourcompany.images') Raises the following error: CertificateError: hostname 'ourcompany.images.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com' But the same code works fine when run locally, and would also work on Heroku if the bucket name were 'ourcompany-images' instead of 'ourcompany.images'
According to the relevant github issue, add this to the configuration: [s3] calling_format = boto.s3.connection.OrdinaryCallingFormat Or, specify the calling_format while instantiating an S3Connection: from boto.s3.connection import OrdinaryCallingFormat conn = S3Connection(access_key, secret_key, calling_format=OrdinaryCallingFormat()) The code worked for you locally but didn't work on Heroku most likely because of the different Python versions used. I suspect you are using the 2.7.9 runtime on Heroku, which has enabled certificate checks for stdlib http clients.
How to use Python requests to fake a browser visit?
I want to get the content from the website below. If I use a browser like Firefox or Chrome, I get the real website page I want, but if I use the Python requests package (or the wget command) to get it, it returns a totally different HTML page. I thought the developer of the website had put some blocks in place for this, so the question is: how do I fake a browser visit using Python requests or the wget command? http://www.ichangtou.com/#company:data_000008.html
Provide a User-Agent header: import requests url = 'http://www.ichangtou.com/#company:data_000008.html' headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} response = requests.get(url, headers=headers) print(response.content) FYI, here is a list of User-Agent strings for different browsers: List of all Browsers As a side note, there is a pretty useful third-party package called fake-useragent that provides a nice abstraction layer over user agents: fake-useragent Up to date simple useragent faker with real world database Demo: >>> from fake_useragent import UserAgent >>> ua = UserAgent() >>> ua.chrome u'Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1667.0 Safari/537.36' >>> ua.random u'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36'
member variable string gets treated as Tuple in Python
I am currently learning Python with the help of CodeAcademy. My problem may be related to their web application, but my suspicion is I am just wrong on a very fundamental level here. If you want to follow along I am referring to CodeAcademy.com -> Python -> Classes 6/11 My code looks like this: class Car(object): condition = "new" def __init__(self, model, color, mpg): self.model = model, self.color = color, self.mpg = mpg my_car = Car("DeLorean", "silver", 88) print my_car.model print my_car.color print my_car.mpg print my_car.condition What is supposed to happen is that every member variable of the object my_car gets printed on screen. I was expecting that, like condition, the color and model attributes would be treated as strings, but instead they get treated as tuples. The output looks like this: ('DeLorean',) #Tuple ('silver',) #Tuple 88 new #String None This leads to the validation failing, because CA expects "silver" but the code returns ('silver',). Where is the error in my code?
In your __init__, you have: self.model = model, self.color = color, which is how you define a tuple. Change the lines to self.model = model self.color = color without the comma: >>> a = 2, >>> a (2,) vs >>> a = 2 >>> a 2
Python: lists and copies of them
I cannot explain the following behaviour: l1 = [1, 2, 3, 4] l1[:][0] = 888 print(l1) # [1, 2, 3, 4] l1[:] = [9, 8, 7, 6] print(l1) # [9, 8, 7, 6] It seems that l1[:][0] refers to a copy, whereas l1[:] refers to the object itself.
This is caused by Python's feature that allows you to assign a list to a slice of another list, i.e. l1 = [1,2,3,4] l1[:2] = [9, 8] print l1 will set l1's first two values to 9 and 8 respectively. Similarly, l1[:] = [9, 8, 7, 6] assigns new values to all elements of l1. More info about assignments is in the docs.
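To tie this back to the snippet in the question, here is a small demonstration of both behaviours side by side (the is operator shows whether two names refer to the same object):

l1 = [1, 2, 3, 4]

dup = l1[:]            # a slice on the right-hand side builds a brand-new list
print(dup is l1)       # False: a distinct object
dup[0] = 888           # mutates only the copy, so l1 is untouched
print(l1)              # [1, 2, 3, 4]

l1[:] = [9, 8, 7, 6]   # a slice as the assignment target writes into l1 itself
print(l1)              # [9, 8, 7, 6]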
How to add package data recursively in Python setup.py?
I have a new library that has to include a lot of subfolders of small datafiles, and I'm trying to add them as package data. Imagine I have my library like so: library - foo.py - bar.py data subfolderA subfolderA1 subfolderA2 subfolderB subfolderB1 .... I want to add all of the data in all of the subfolders through setup.py, but it seems like I manually have to go into every single subfolder (there are 100 or so) and add an __init__.py file. Furthermore, will setup.py find these files recursively, or do I need to manually add all of these in setup.py like: package_data={ 'mypackage.data.folderA': ['*'], 'mypackage.data.folderA.subfolderA1': ['*'], 'mypackage.data.folderA.subfolderA2': ['*'] }, I can do this with a script, but it seems like a super pain. How can I achieve this in setup.py? PS, the hierarchy of these folders is important because this is a database of material files and we want the filetree to be preserved when we present them in a GUI to the user, so it would be to our advantage to keep this file structure intact.
Use Setuptools instead of distutils. Use data files instead of package data. These do not require __init__.py. Generate the lists of files and directories using standard Python code, instead of writing it literally: import glob data_files = [] directories = glob.glob('data/subfolder?/subfolder??/') for directory in directories: files = glob.glob(directory+'*') data_files.append((directory, files)) # then pass data_files to setup()
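Since the question mentions a hierarchy of ~100 nested folders, here is a sketch of a variant using os.walk that collects every subdirectory regardless of depth; the top-level 'data' path is assumed from the question's layout:

import os

data_files = []
for root, dirs, files in os.walk('data'):
    if files:
        # One (install_dir, [source files]) pair per directory,
        # preserving the on-disk hierarchy
        data_files.append((root, [os.path.join(root, f) for f in files]))

# then pass data_files to setup()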
Is .ix() always better than .loc() and .iloc() since it is faster and supports integer and label access?
I'm learning the Python pandas library. Coming from an R background, the indexing and selecting functions seem more complicated than they need to be. My understanding is that .loc() is only label based and .iloc() is only integer based. Why should I ever use .loc() and .iloc() if .ix() is faster and supports integer and label access?
Please refer to the doc Different Choices for Indexing; it states clearly when and why you should use .loc and .iloc over .ix. It's about being explicit about your use case: .ix supports mixed integer and label based access. It is primarily label based, but will fall back to integer positional access unless the corresponding axis is of integer type. .ix is the most general and will support any of the inputs in .loc and .iloc. .ix also supports floating point label schemes. .ix is exceptionally useful when dealing with mixed positional and label based hierarchical indexes. However, when an axis is integer based, ONLY label based access and not positional access is supported. Thus, in such cases, it’s usually better to be explicit and use .iloc or .loc. Hope this helps.
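A small illustration of the pitfall, with made-up labels: on an integer-typed index, .ix behaves strictly label-based (in the pandas versions this question is about), which is exactly why being explicit with .loc/.iloc pays off:

import pandas as pd

s = pd.Series(['a', 'b', 'c'], index=[10, 20, 30])

print(s.loc[20])    # 'b' -- label-based, unambiguous
print(s.iloc[1])    # 'b' -- position-based, unambiguous
print(s.ix[20])     # 'b' -- label access; no positional fallback here
# s.ix[1] would raise a KeyError: the axis is integer-typed,
# so .ix refuses to treat 1 as a position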
How to print Y axis label horizontally in a matplotlib / pylab chart?
I'm creating very simple charts with the matplotlib / pylab Python module. The letter "y" that labels the Y axis is on its side. You would expect this if the label were longer, such as a word, so that the graph does not extend too far to the left. But for a one-letter label this doesn't make sense; the label should be upright. My searches have come up blank. How can I print the "y" horizontally?
It is very simple. After creating the label, you can simply change its rotation: from matplotlib import pyplot as plt plt.ion() plt.plot([1,2,3]) h = plt.ylabel('y') h.set_rotation(0) plt.draw() Alternatively, you can pass the rotation as an argument, i.e. plt.ylabel('y', rotation=0)
Non-blocking I/O with asyncio
I'm trying to write a networked game with Pygame and asyncio, but I can't work out how to avoid hanging on reads. Here is my code for the client: @asyncio.coroutine def handle_client(): print("Connected!") reader, writer = yield from asyncio.open_connection('localhost', 8000) while True: mouse_up = False for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() elif event.type == pygame.MOUSEBUTTONUP: mouse_up = True if mouse_up: print("Writing") writer.write(b"Mouse up") print("Waiting to read") line = yield from reader.read(2**12) print(line.decode()) writer.close() This hangs on the line line = yield from reader.read(2**12). I previously thought that the point of asyncio was that it was non-blocking, and so if there wasn't any data to read it would just continue executing. I see now that this isn't the case. How do I integrate the asyncio networking code with the Pygame drawing and event code?
The point of yield from is to switch the execution to asyncio's event loop and to block the current coroutine until the result is available. To schedule a task without blocking the current coroutine, you could use asyncio.async(). To print read-so-far data without blocking the pygame loop: @asyncio.coroutine def read(reader, callback): while True: data = yield from reader.read(2**12) if not data: # EOF break callback(data) @asyncio.coroutine def echo_client(): reader, ... chunks = [] asyncio.async(read(reader, chunks.append)) while True: pygame.event.pump() # advance pygame event loop ... if chunks: # print read-so-far data print(b''.join(chunks).decode()) del chunks[:] yield from asyncio.sleep(0.016) # advance asyncio loop There should be no blocking calls inside the while loop. The read() and sleep() coroutines run concurrently in the same thread (obviously you could run other coroutines concurrently too).
Why does `if Exception` work in Python
In this answer http://stackoverflow.com/a/27680814/3456281, the following construct is presented: a=[1,2] while True: if IndexError: print ("Stopped.") break print(a[2]) which actually prints "Stopped." and breaks (tested with Python 3.4.1). Why?! Why is if IndexError even legal? Why does a[2] not raise an IndexError with no try ... except around?
All objects have a boolean value. If not otherwise defined, that boolean value is True; the exception class IndexError is just an object, so it is truthy. This code is therefore simply the equivalent of doing if True: execution prints "Stopped." and reaches the break statement on the first iteration, and the final print(a[2]), the only line that could actually raise an IndexError, is never reached.
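Two quick demonstrations: first that the class object is truthy, then the construct the linked answer was presumably aiming for, using try/except so that a[2] actually runs:

print(bool(IndexError))   # True: the class object itself is truthy

a = [1, 2]
while True:
    try:
        print(a[2])       # this is the line that raises IndexError
    except IndexError:
        print("Stopped.")
        break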
Custom unique_together key name
I have a model with a unique_together defined for 3 fields to be unique together: class MyModel(models.Model): clid = models.AutoField(primary_key=True, db_column='CLID') csid = models.IntegerField(db_column='CSID') cid = models.IntegerField(db_column='CID') uuid = models.CharField(max_length=96, db_column='UUID', blank=True) class Meta(models.Meta): unique_together = [ ["csid", "cid", "uuid"], ] Now, if I attempt to save a MyModel instance with an existing csid+cid+uuid combination, I would get: IntegrityError: (1062, "Duplicate entry '1-1-1' for key 'CSID'") Which is correct. But, is there a way to customize that key name? (CSID in this case) In other words, can I provide a name for a constraint listed in unique_together? As far as I understand, this is not covered in the documentation.
It's not well documented, but depending on whether you are using Django 1.6 or 1.7, there are two ways you can do this: In Django 1.6 you can override the unique_error_message, like so: class MyModel(models.Model): clid = models.AutoField(primary_key=True, db_column='CLID') csid = models.IntegerField(db_column='CSID') cid = models.IntegerField(db_column='CID') # .... def unique_error_message(self, model_class, unique_check): if model_class == type(self) and unique_check == ("csid", "cid", "uuid"): return _('Your custom error') else: return super(MyModel, self).unique_error_message(model_class, unique_check) Or in Django 1.7: class MyModel(models.Model): clid = models.AutoField(primary_key=True, db_column='CLID') csid = models.IntegerField(db_column='CSID') cid = models.IntegerField(db_column='CID') uuid = models.CharField(max_length=96, db_column='UUID', blank=True) class Meta(models.Meta): unique_together = [ ["csid", "cid", "uuid"], ] error_messages = { NON_FIELD_ERRORS: { 'unique_together': "%(model_name)s's %(field_labels)s are not unique.", } }
Is there an analysis speed or memory usage advantage to using HDF5 for large array storage (instead of flat binary files)?
I am processing large 3D arrays, which I often need to slice in various ways to do a variety of data analysis. A typical "cube" can be ~100GB (and will likely get larger in the future). It seems that the typical recommended file format for large datasets in Python is to use HDF5 (either h5py or pytables). My question is: is there any speed or memory usage benefit to using HDF5 to store and analyze these cubes over storing them in simple flat binary files? Is HDF5 more appropriate for tabular data, as opposed to large arrays like what I am working with? I see that HDF5 can provide nice compression, but I am more interested in processing speed and dealing with memory overflow. I frequently want to analyze only one large subset of the cube. One drawback of both pytables and h5py, it seems, is that when I take a slice of the array, I always get a numpy array back, using up memory. However, if I slice a numpy memmap of a flat binary file, I can get a view, which keeps the data on disk. So, it seems that I can more easily analyze specific sectors of my data without overrunning my memory. I have explored both pytables and h5py, and haven't seen the benefit of either so far for my purpose.
HDF5 Advantages: Organization, flexibility, interoperability Some of the main advantages of HDF5 are its hierarchical structure (similar to folders/files), optional arbitrary metadata stored with each item, and its flexibility (e.g. compression). This organizational structure and metadata storage may sound trivial, but it's very useful in practice. Another advantage of HDF is that the datasets can be either fixed-size or flexibly sized. Therefore, it's easy to append data to a large dataset without having to create an entire new copy. Additionally, HDF5 is a standardized format with libraries available for almost any language, so sharing your on-disk data between, say, Matlab, Fortran, R, C, and Python is very easy with HDF. (To be fair, it's not too hard with a big binary array, too, as long as you're aware of the C vs. F ordering and know the shape, dtype, etc. of the stored array.) HDF advantages for a large array: Faster I/O of an arbitrary slice Just as a TL;DR: For an ~8GB 3D array, reading a "full" slice along any axis took ~20 seconds with a chunked HDF5 dataset, and 0.3 seconds (best-case) to over three hours (worst case) for a memmapped array of the same data. Beyond the things listed above, there's another big advantage to a "chunked"* on-disk data format such as HDF5: Reading an arbitrary slice (emphasis on arbitrary) will typically be much faster, as the on-disk data is more contiguous on average. *(HDF5 doesn't have to be a chunked data format. It supports chunking, but doesn't require it. In fact, the default for creating a dataset in h5py is not to chunk, if I recall correctly.) Basically, your best case disk-read speed and your worst case disk read speed for a given slice of your dataset will be fairly close with a chunked HDF dataset (assuming you chose a reasonable chunk size or let a library choose one for you). With a simple binary array, the best-case is faster, but the worst-case is much worse. One caveat: if you have an SSD, you likely won't notice a huge difference in read/write speed. With a regular hard drive, though, sequential reads are much, much faster than random reads. (i.e. A regular hard drive has long seek time.) HDF still has an advantage on an SSD, but it's more due to its other features (e.g. metadata, organization, etc) than due to raw speed. First off, to clear up confusion, accessing an h5py dataset returns an object that behaves fairly similarly to a numpy array, but does not load the data into memory until it's sliced. (Similar to memmap, but not identical.) Have a look at the h5py introduction for more information. Slicing the dataset will load a subset of the data into memory, but presumably you want to do something with it, at which point you'll need it in memory anyway. If you do want to do out-of-core computations, you can do so fairly easily for tabular data with pandas or pytables. It is possible with h5py (nicer for big N-D arrays), but you need to drop down to a touch lower level and handle the iteration yourself. However, the future of numpy-like out-of-core computations is Blaze. Have a look at it if you really want to take that route.
The "unchunked" case First off, consider a 3D C-ordered array written to disk (I'll simulate it by calling arr.ravel() and printing the result, to make things more visible): In [1]: import numpy as np In [2]: arr = np.arange(4*6*6).reshape(4,6,6) In [3]: arr Out[3]: array([[[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [ 12, 13, 14, 15, 16, 17], [ 18, 19, 20, 21, 22, 23], [ 24, 25, 26, 27, 28, 29], [ 30, 31, 32, 33, 34, 35]], [[ 36, 37, 38, 39, 40, 41], [ 42, 43, 44, 45, 46, 47], [ 48, 49, 50, 51, 52, 53], [ 54, 55, 56, 57, 58, 59], [ 60, 61, 62, 63, 64, 65], [ 66, 67, 68, 69, 70, 71]], [[ 72, 73, 74, 75, 76, 77], [ 78, 79, 80, 81, 82, 83], [ 84, 85, 86, 87, 88, 89], [ 90, 91, 92, 93, 94, 95], [ 96, 97, 98, 99, 100, 101], [102, 103, 104, 105, 106, 107]], [[108, 109, 110, 111, 112, 113], [114, 115, 116, 117, 118, 119], [120, 121, 122, 123, 124, 125], [126, 127, 128, 129, 130, 131], [132, 133, 134, 135, 136, 137], [138, 139, 140, 141, 142, 143]]]) The values would be stored on-disk sequentially as shown on line 4 below. (Let's ignore filesystem details and fragmentation for the moment.) In [4]: arr.ravel(order='C') Out[4]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143]) In the best case scenario, let's take a slice along the first axis. Notice that these are just the first 36 values of the array. This will be a very fast read! (one seek, one read) In [5]: arr[0,:,:] Out[5]: array([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35]]) Similarly, the next slice along the first axis will just be the next 36 values. To read a complete slice along this axis, we only need one seek operation. If all we're going to be reading is various slices along this axis, then this is the perfect file structure. However, let's consider the worst-case scenario: A slice along the last axis. In [6]: arr[:,:,0] Out[6]: array([[ 0, 6, 12, 18, 24, 30], [ 36, 42, 48, 54, 60, 66], [ 72, 78, 84, 90, 96, 102], [108, 114, 120, 126, 132, 138]]) To read this slice in, we need 36 seeks and 36 reads, as all of the values are separated on disk. None of them are adjacent! This may seem pretty minor, but as we get to larger and larger arrays, the number and size of the seek operations grows rapidly. For a large-ish (~10Gb) 3D array stored in this way and read in via memmap, reading a full slice along the "worst" axis can easily take tens of minutes, even with modern hardware. At the same time, a slice along the best axis can take less than a second. For simplicity, I'm only showing "full" slices along a single axis, but the exact same thing happens with arbitrary slices of any subset of the data. Incidentally there are several file formats that take advantage of this and basically store three copies of huge 3D arrays on disk: one in C-order, one in F-order, and one in the intermediate between the two. 
(An example of this is Geoprobe's D3D format, though I'm not sure it's documented anywhere.) Who cares if the final file size is 4TB? Storage is cheap! The crazy thing about that is that because the main use case is extracting a single sub-slice in each direction, the reads you want to make are very, very fast. It works very well! The simple "chunked" case Let's say we store 2x2x2 "chunks" of the 3D array as contiguous blocks on disk. In other words, something like: nx, ny, nz = arr.shape slices = [] for i in range(0, nx, 2): for j in range(0, ny, 2): for k in range(0, nz, 2): slices.append((slice(i, i+2), slice(j, j+2), slice(k, k+2))) chunked = np.hstack([arr[chunk].ravel() for chunk in slices]) So the data on disk would look like chunked: array([ 0, 1, 6, 7, 36, 37, 42, 43, 2, 3, 8, 9, 38, 39, 44, 45, 4, 5, 10, 11, 40, 41, 46, 47, 12, 13, 18, 19, 48, 49, 54, 55, 14, 15, 20, 21, 50, 51, 56, 57, 16, 17, 22, 23, 52, 53, 58, 59, 24, 25, 30, 31, 60, 61, 66, 67, 26, 27, 32, 33, 62, 63, 68, 69, 28, 29, 34, 35, 64, 65, 70, 71, 72, 73, 78, 79, 108, 109, 114, 115, 74, 75, 80, 81, 110, 111, 116, 117, 76, 77, 82, 83, 112, 113, 118, 119, 84, 85, 90, 91, 120, 121, 126, 127, 86, 87, 92, 93, 122, 123, 128, 129, 88, 89, 94, 95, 124, 125, 130, 131, 96, 97, 102, 103, 132, 133, 138, 139, 98, 99, 104, 105, 134, 135, 140, 141, 100, 101, 106, 107, 136, 137, 142, 143]) And just to show that they're 2x2x2 blocks of arr, notice that these are the first 8 values of chunked: In [9]: arr[:2, :2, :2] Out[9]: array([[[ 0, 1], [ 6, 7]], [[36, 37], [42, 43]]]) To read in any slice along an axis, we'd read in either 6 or 9 contiguous chunks (twice as much data as we need) and then only keep the portion we wanted. That's a worst-case maximum of 9 seeks vs a maximum of 36 seeks for the non-chunked version. (But the best case is still 6 seeks vs 1 for the memmapped array.) Because sequential reads are very fast compared to seeks, this significantly reduces the amount of time it takes to read an arbitrary subset into memory. Once again, this effect becomes larger with larger arrays. HDF5 takes this a few steps further. The chunks don't have to be stored contiguously, and they're indexed by a B-Tree. Furthermore, they don't have to be the same size on disk, so compression can be applied to each chunk. Chunked arrays with h5py By default, h5py doesn't create chunked HDF files on disk (I think pytables does, by contrast). If you specify chunks=True when creating the dataset, however, you'll get a chunked array on disk. As a quick, minimal example: import numpy as np import h5py data = np.random.random((100, 100, 100)) with h5py.File('test.hdf', 'w') as outfile: dset = outfile.create_dataset('a_descriptive_name', data=data, chunks=True) dset.attrs['some key'] = 'Did you want some metadata?' Note that chunks=True tells h5py to automatically pick a chunk size for us. If you know more about your most common use-case, you can optimize the chunk size/shape by specifying a shape tuple (e.g. (2,2,2) in the simple example above). This allows you to make reads along a particular axis more efficient or optimize for reads/writes of a certain size. I/O Performance comparison Just to emphasize the point, let's compare reading in slices from a chunked HDF5 dataset and a large (~8GB), Fortran-ordered 3D array containing the same exact data. I've cleared all OS caches between each run, so we're seeing the "cold" performance.
For each file type, we'll test reading in a "full" x-slice along the first axis and a "full" z-slice along the last axis. For the Fortran-ordered memmapped array, the "x" slice is the worst case, and the "z" slice is the best case. The code used is in a gist (including creating the hdf file). I can't easily share the data used here, but you could simulate it by an array of zeros of the same shape (621, 4991, 2600) and type np.uint8. The chunked_hdf.py looks like this: import sys import h5py def main(): data = read() if sys.argv[1] == 'x': x_slice(data) elif sys.argv[1] == 'z': z_slice(data) def read(): f = h5py.File('/tmp/test.hdf5', 'r') return f['seismic_volume'] def z_slice(data): return data[:,:,0] def x_slice(data): return data[0,:,:] main() memmapped_array.py is similar, but has a touch more complexity to ensure the slices are actually loaded into memory (by default, another memmapped array would be returned, which wouldn't be an apples-to-apples comparison). import numpy as np import sys def main(): data = read() if sys.argv[1] == 'x': x_slice(data) elif sys.argv[1] == 'z': z_slice(data) def read(): big_binary_filename = '/data/nankai/data/Volumes/kumdep01_flipY.3dv.vol' shape = 621, 4991, 2600 header_len = 3072 data = np.memmap(filename=big_binary_filename, mode='r', offset=header_len, order='F', shape=shape, dtype=np.uint8) return data def z_slice(data): dat = np.empty(data.shape[:2], dtype=data.dtype) dat[:] = data[:,:,0] return dat def x_slice(data): dat = np.empty(data.shape[1:], dtype=data.dtype) dat[:] = data[0,:,:] return dat main() Let's have a look at the HDF performance first: jofer at cornbread in ~ $ sudo ./clear_cache.sh jofer at cornbread in ~ $ time python chunked_hdf.py z python chunked_hdf.py z 0.64s user 0.28s system 3% cpu 23.800 total jofer at cornbread in ~ $ sudo ./clear_cache.sh jofer at cornbread in ~ $ time python chunked_hdf.py x python chunked_hdf.py x 0.12s user 0.30s system 1% cpu 21.856 total A "full" x-slice and a "full" z-slice take about the same amount of time (~20sec). Considering this is an 8GB array, that's not too bad. And if we compare this to the memmapped array times (it's Fortran-ordered: A "z-slice" is the best case and an "x-slice" is the worst case.): jofer at cornbread in ~ $ sudo ./clear_cache.sh jofer at cornbread in ~ $ time python memmapped_array.py z python memmapped_array.py z 0.07s user 0.04s system 28% cpu 0.385 total jofer at cornbread in ~ $ sudo ./clear_cache.sh jofer at cornbread in ~ $ time python memmapped_array.py x python memmapped_array.py x 2.46s user 37.24s system 0% cpu 3:35:26.85 total Yes, you read that right. 0.3 seconds for one slice direction and ~3.5 hours for the other. The time to slice in the "x" direction is far longer than the amount of time it would take to load the entire 8GB array into memory and select the slice we wanted! (Again, this is a Fortran-ordered array. The opposite x/z slice timing would be the case for a C-ordered array.) However, if we're always wanting to take a slice along the best-case direction, the big binary array on disk is very good. (~0.3 sec!) With a memmapped array, you're stuck with this I/O discrepancy (or perhaps anisotropy is a better term). However, with a chunked HDF dataset, you can choose the chunksize such that access is either equal or is optimized for a particular use-case. It gives you a lot more flexibility. In summary Hopefully that helps clear up one part of your question, at any rate.
HDF5 has many other advantages over "raw" memmaps, but I don't have room to expand on all of them here. Compression can speed some things up (the data I work with doesn't benefit much from compression, so I rarely use it), and OS-level caching often plays more nicely with HDF5 files than with "raw" memmaps. Beyond that, HDF5 is a really fantastic container format. It gives you a lot of flexibility in managing your data, and can be used from more or less any programming language. Overall, try it and see if it works well for your use case. I think you might be surprised.
Socket won't bind: no such device
So I have this piece of Python 3 code: import socket s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW) s.bind(('eth0', 0)) s.send(eth_packet) This code works on my Raspberry Pi, but not on my external server. When I try to run it on my external server I get: # sudo python3 test.py s.send(eth_packet) socket.error: [Errno 19] No such device And I checked the network interfaces output (via a Python script): External server (debian): ['lo [index=1, IPv4=127.0.0.1, IPv6=::1]', 'eth0:0 [index=2, IPv4=xxxxx, IPv6=None]', 'eth0 [index=2, IPv4=yyyyyy, IPv6=zzzzzzz]'] Raspberry pi: ['lo [index=1, IPv4=127.0.0.1, IPv6=None]', 'eth0 [index=2, IPv4=rrrrr, IPv6=None]'] Can someone explain what is going on? I just want to send a handcrafted message, but this error keeps bugging me. Can this be a problem with the drivers of my server? This is the same result as ifconfig. Edit: OK, I used strace for this example: #!/usr/bin/env python3 import socket import binascii import struct test= '000a959d6816' packet= struct.pack("!6s", binascii.unhexlify(bytes(test, 'UTF-8'))) s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW) s.bind(('eth0', 0)) s.send(packet) And this is the important part about strace: socket(PF_PACKET, SOCK_RAW, 0) = 3 ioctl(3, SIOCGIFINDEX, {ifr_name="eth0", ifr_index=2}) = 0 bind(3, {sa_family=AF_PACKET, proto=0000, if2, pkttype=PACKET_HOST, addr(0)={0, }, 20) = 0 sendto(3, "\0\n\225\235h\26", 6, 0, NULL, 0) = -1 ENXIO (No such device or address) open("test.py", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=247, ...}) = 0 ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff86c5f090) = -1 ENOTTY (Inappropriate ioctl for device) fstat(4, {st_mode=S_IFREG|0644, st_size=247, ...}) = 0 lseek(4, 0, SEEK_CUR) = 0 dup(4) = 5 fcntl(5, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE) fstat(5, {st_mode=S_IFREG|0644, st_size=247, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc1251c2000 lseek(5, 0, SEEK_CUR) = 0 read(5, "#!/usr/bin/env python3\n\nimport s"..., 4096) = 247 close(5) = 0 munmap(0x7fc1251c2000, 4096) = 0 lseek(4, 0, SEEK_SET) = 0 lseek(4, 0, SEEK_CUR) = 0 read(4, "#!/usr/bin/env python3\n\nimport s"..., 4096) = 247 close(4) = 0 write(2, "Traceback (most recent call last"..., 143Traceback (most recent call last): File "test.py", line 11, in <module> s.send(packet) socket.error: [Errno 6] No such device or address ) = 143 rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7fc1264050a0}, {0x428787, [], SA_RESTORER, 0x7fc1264050a0}, 8) = 0 close(3) = 0
When you bind a RAW socket with family PACKET on an interface, you need a tuple with 2 objects: (interfaceName, protoNumber) or 5 objects: (interfaceName, protoNumber, pkttype, hatype, haddr) You specify 0 as protoNumber, but maybe protocol number 0 doesn't exist on your system. Documentation about the packet family: packet(7): sll_protocol is the standard ethernet protocol type in network byte order as defined in the linux/if_ether.h include file. Try to find the right protocol number in linux/if_ether.h.
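As a sketch of one common pattern, here is the question's snippet rewritten to use ETH_P_ALL (0x0003, from linux/if_ether.h) instead of 0; note that the protocol passed to socket() must be converted to network byte order, and raw sockets require root privileges on Linux:

import socket

ETH_P_ALL = 0x0003   # protocol number from linux/if_ether.h

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(('eth0', 0))
s.send(eth_packet)   # eth_packet built as in the question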
How to `pip install` a package that has non-Python dependencies?
Many python packages have build dependencies on non-Python packages. I'm specifically thinking of lxml and cffi, but this dilemma applies to a lot of packages on PyPI. Both of these packages have unadvertised build dependencies on non-Python packages like libxml2-dev, libxslt-dev, zlib1g-dev, and libffi-dev. The websites for lxml and cffi declare some of these dependencies, but it appears that there is no way to figure this out from the command line. As a result, there are hundreds of questions on SO that take this general form: pip install foo fails with an error: "fatal error: bar.h: No such file or directory". How do I fix it? Is this a misuse of pip or is this how it is intended to work? Is there a sane way to know what build dependencies to install before running pip? My current approach is: I want to install a package called foo. pip install foo foo has a dependency on a Python package bar. If the bar build fails, then look at the error message and guess/google what non-Python dependency I need to install. sudo apt-get install libbaz-dev sudo pip install bar Repeat until bar succeeds. sudo pip uninstall foo Repeat entire process until no error messages. Step #4 is particularly annoying. Apparently pip (version 1.5.4) installs the requested package first, before any dependencies. So if any dependencies fail, you can't just ask pip to install it again, because it thinks it's already installed. There's also no option to install just the dependencies, so you must uninstall the package and then reinstall it. Is there some more intelligent process for using pip?
This is actually a comment about the answer suggesting using apt-get but I don't have enough reputation points to leave one. If you use virtualenv a lot, then installing the Python packages through apt-get can become a pain, as you can get mysterious errors when the Python packages installed system-wide and the Python packages installed in your virtualenv try to interact with each other. One thing that I have found that does help is to use the build-dep feature. To build the matplotlib dependencies, for example: sudo apt-get build-dep python-matplotlib And then activate your virtual environment and do pip install matplotlib. It will still go through the build process but many of the dependencies will be taken care of for you. This is sort of what the CRAN repositories suggest when installing R packages on Ubuntu.
pip cffi package installation failed on osx
I am installing the cffi package for a cryptography and Jasmin installation. I did some research before posting this question and found the following option, but it does not seem to work: System Mac OSx 10.9.5 python2.7 Error c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. Please guide me on the following issue. Thanks! Command env DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi LOG bhushanvaiude$ env DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib/ ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future" LDFLAGS="-L/usr/local/opt/openssl/lib" CFLAGS="-I/usr/local/opt/openssl/include" sudo -E pip install cffi Password: Downloading/unpacking cffi Downloading cffi-0.8.6.tar.gz (196kB): 196kB downloaded Running setup.py egg_info for package cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. Downloading/unpacking pycparser (from cffi) Downloading pycparser-2.10.tar.gz (206kB): 206kB downloaded Running setup.py egg_info for package pycparser Installing collected packages: cffi, pycparser Running setup.py install for cffi warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. building '_cffi_backend' extension cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 Complete output from command /Users/****project path***/bin/python -c "import setuptools;__file__='/Users/****project path***/build/cffi/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/7w/8z_mn3g120n34bv0w780gnd00000gn/T/pip-e6d6Ay-record/install-record.txt --single-version-externally-managed --install-headers /Users/****project path***/include/site/python2.7: warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] 1 warning generated. 
running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/cffi copying cffi/__init__.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/api.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/backend_ctypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/commontypes.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/cparser.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/ffiplatform.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/gc_weakref.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/lock.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/model.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_cpy.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/vengine_gen.py -> build/lib.macosx-10.9-intel-2.7/cffi copying cffi/verifier.py -> build/lib.macosx-10.9-intel-2.7/cffi running build_ext building '_cffi_backend' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/c cc -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -I/usr/local/opt/openssl/include -Qunused-arguments -pipe -Wno-error=unused-command-line-argument-hard-error-in-future -DUSE__THREAD -I@@HOMEBREW_CELLAR@@/libffi/3.0.13/lib/libffi-3.0.13/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c c/_cffi_backend.c -o build/temp.macosx-10.9-intel-2.7/c/_cffi_backend.o warning: unknown warning option '-Werror=unused-command-line-argument-hard-error-in-future'; did you mean '-Werror=unused-command-line-argument'? [-Wunknown-warning-option] c/_cffi_backend.c:13:10: fatal error: 'ffi.h' file not found #include <ffi.h> ^ 1 warning and 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up...
In your terminal, try running: xcode-select --install After that, try installing the package again. By default, XCode installs itself as the IDE and does not set up the environment for use by command-line tools; for example, the /usr/include folder will be missing. Running the above command will install the tools necessary to run compilation from the command line and create the required symbolic links. Since Python packages compile native code parts using the command-line interface of XCode, this step is required to install Python packages that include native components. You only need to do this once per XCode install/upgrade, or if you see a similar error.
Check if key exists in a Python dict in Jinja2 templates
I have a python dictionary: settings = { "foo" : "baz", "hello" : "world" } This variable settings is then available in the Jinja2 template. I want to check if a key myProperty exists in the settings dict within my template, and if so take some action: {% if settings.hasKey(myProperty) %} takeSomeAction(); {% endif %} What is the equivalent of hasKey that I can use?
As Mihai and karelv have noted, this works: {% if 'blabla' in item %} ... {% endif %} By contrast, I get "'dict object' has no attribute 'blabla'" if I use {% if item.blabla %} and item does not contain a 'blabla' key.
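To see the same check outside of a web framework, here is a minimal standalone sketch; the settings dict and the myProperty key are simply the names from the question, nothing else is assumed:
from jinja2 import Template

settings = {"foo": "baz", "hello": "world"}
template = Template(
    "{% if 'myProperty' in settings %}"
    "{{ settings['myProperty'] }}"
    "{% else %}"
    "myProperty is missing"
    "{% endif %}"
)
print(template.render(settings=settings))  # prints: myProperty is missing
The in test works for any mapping-like object passed into the template, not just plain dicts.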
Is Python's == an equivalence relation on the floats?
In native Python, without using NumPy (for which numpy.nan != numpy.nan) there is no NaN, so am I right in thinking that Python's floating point == is reflexive? Then since it is symmetric (a == b implies b == a) and transitive (if a==b and b==c then a==c), can we say that Python's == is an equivalence relation on the floats? EDIT: OK, so I learned that there is a NaN: float('nan') (thanks @unutbu) which will propagate through various operations, but does any native Python method return it (rather than raising an Exception) without me introducing it by this assignment?
== is reflexive for all numbers: zero, -zero, infinity, and -infinity, but not for nan. You can get inf, -inf, and nan in native Python just by arithmetic operations on literals, like below. These behave correctly, as specified by IEEE 754, and without raising a math domain error: >>> 1e1000 == 1e1000 True >>> 1e1000/1e1000 == 1e1000/1e1000 False 1e1000 is a very big number, so float and double represent it as an infinity. infinity is equal to infinity infinity divided by infinity is not a number not a number != not a number Floating-point arithmetic in Python also works OK for infinity minus infinity etc.: >>> x = 1e1000 >>> x inf >>> x+x inf >>> x-x nan >>> x*2 inf >>> x == x True >>> x-x == x-x False >>> And for the zero and minus zero case: >>> inf = float("inf") >>> 1/inf 0.0 >>> -1/inf -0.0 >>> -1/inf == 1/inf True >>>
Finding consecutive consonants in a word
I need a code that will show me the consecutive consonants in a word. for example for suiteConsonnes("concertation") I need to obtain ["c","nc","rt","t","n"] here is my code: def SuiteConsonnes(mot): consonnes=[] for x in mot: if x in "bcdfghjklmnprstvyz": consonnes += x + '' return consonnes I manage to find the consonants, but I don't see how to find them consecutively. Can anybody tell me what I need to do?
You can use regular expressions, implemented in the re module. Better solution >>> re.findall(r'[bcdfghjklmnpqrstvwxyz]+', "concertation", re.IGNORECASE) ['c', 'nc', 'rt', 't', 'n'] [bcdfghjklmnpqrstvwxyz]+ matches any sequence of one or more characters from the character class. re.IGNORECASE enables a case-insensitive match on the characters. That is >>> re.findall(r'[bcdfghjklmnpqrstvwxyz]+', "CONCERTATION", re.IGNORECASE) ['C', 'NC', 'RT', 'T', 'N'] Another solution >>> import re >>> re.findall(r'[^aeiou]+', "concertation") ['c', 'nc', 'rt', 't', 'n'] [^aeiou] is a negated character class: it matches any character other than the ones in the class, which in short means it matches the consonants in the string. The + quantifier matches one or more occurrences of the pattern in the string. Note: this will also include non-alphabetic adjacent characters in the result, since the character class is anything other than vowels. Example >>> re.findall(r'[^aeiou]+', "123concertation") ['123c', 'nc', 'rt', 't', 'n'] If you are sure that the input always contains only alphabetic characters, this solution is fine. re.findall(pattern, string, flags=0) Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found. If you are curious about how the result is obtained for re.findall(r'[bcdfghjklmnpqrstvwxyz]+', "concertation"): concertation | c concertation | # o is not in the character class. Matching ends here. Adds match 'c' to output list concertation | n concertation | c concertation | # Match ends again. Adds match 'nc' to list # And so on
Redis Pubsub and Message Queueing
My overall question is: Using Redis for PubSub, what happens to messages when publishers push messages into a channel faster than subscribers are able to read them? For example, let's say I have: A simple publisher publishing messages at the rate of 2 msg/sec. A simple subscriber reading messages at the rate of 1 msg/sec. My naive assumption would be the subscriber would only see 50% of the messages published onto Redis. To test this theory, I wrote two scripts: pub.py queue = redis.StrictRedis(host='localhost', port=6379, db=0) channel = queue.pubsub() for i in range(10): queue.publish("test", i) time.sleep(0.5) sub.py r = redis.StrictRedis(host='localhost', port=6379, db=0) p = r.pubsub() p.subscribe('test') while True: message = p.get_message() if message: print "Subscriber: %s" % message['data'] time.sleep(1) Results When I ran sub.py first, immediately followed by pub.py, I found that sub.py actually displayed all the messages (1-10), one after another with a delay of 1 second in between. My initial assumption was wrong, Redis is queuing messages. More tests needed. When I ran pub.py first, then waited 5 seconds before running sub.py, I found that sub.py only displayed the second half of the messages (5-10). I would have assumed this originally, but given my previous results, I would have thought messages were queued, which led me to the following conclusion... Conclusions Redis server appears to queue messages for each client, for each channel. As long as a client is listening, it doesn't matter how fast it reads messages. As long as it's connected, messages will remain queued for that client, for that channel. Remaining Questions Are these conclusions valid? If so, how long will client/channel messages remained queued? If so, is there a redis-cli info command to see how many messages are queued (for each client/channel)?
The tests are valid, but the conclusions are partially wrong. Redis does not queue anything on pub/sub channels. On the contrary, it tends to read the item from the publisher socket, and write the item in all the subscriber sockets, ideally in the same iteration of the event loop. Nothing is kept in Redis data structures. Now, as you demonstrated, there is still some kind of buffering. It is due to the usage of TCP/IP sockets, and Redis communication buffers. Sockets have buffers, and of course TCP comes with flow-control mechanisms that avoid the loss of data when buffers are full. If a subscriber is not fast enough, data will accumulate in its socket buffer. When it is full, TCP will block the communication and prevent Redis from pushing more information into the socket. Redis also manages output communication buffers (on top of the socket buffers) to generate data formatted with the Redis protocol. So when the output buffer of the socket is full, the event loop will mark the socket as non-writable, and data will remain in Redis output buffers. Provided the TCP connection is still valid, data can remain in the buffers for a very long time. Now, both the socket and Redis output buffers are bounded. If the subscribers are really too slow and a lot of data accumulates, Redis will ultimately close the connection with the slow subscribers (as a safety mechanism). By default, for pub/sub, Redis has a soft limit at 8 MB, and a hard limit at 32 MB, per connection buffer. If the output buffer reaches the hard limit, or if it remains between the soft and hard limit for more than 60 seconds, the connection with the slow subscriber will be closed. Knowing the number of pending messages is not easy. It can be evaluated by looking at the size of the pending information in the socket buffers and the Redis output buffers. For Redis output buffers, you can use the CLIENT LIST command (from redis-cli). The size of the output buffer is returned in the obl field (in bytes), and the number of replies queued beyond that buffer in the oll field. For socket buffers, there is no Redis command. However, on Linux, it is possible to build a script to interpret the content of the /proc/net/tcp file. See an example here. This script probably needs to be adapted to your system.
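The Redis-side half of that estimate can also be scripted instead of read off redis-cli; here is a minimal sketch with redis-py that just prints the relevant CLIENT LIST fields per connection (it assumes a local Redis on the default port, and it tells you nothing about the kernel socket buffers, which still have to be inspected at the OS level):
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)
for client in r.client_list():
    # obl = output buffer length in bytes, oll = number of replies queued beyond that buffer
    print(client['addr'], client['cmd'], 'obl=%s' % client['obl'], 'oll=%s' % client['oll'])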
Python matplotlib Cairo error
I'm using something simpler than the sample code on the pyplot tutorial website: import matplotlib.pyplot as plt plt.plot([1,2,3,4,5]) plt.show() but when I run it, I get the error: TypeError: Couldn't find foreign struct converter for 'cairo.Context' I'm on Python 3.4.2, and have a fresh install of matplotlib, cairocfft>0.6, numpy>=1.6 from pip. I tried uninstalling cairocfft and grabbing python-cairo from the Arch repositories, but now I have the error: NotImplementedError: Surface.create_for_data: Not Implemented yet. Is there a way to draw a basic line graph without installing many libraries? I'm not enthusiastic on installing pyqt4, as this blogpost recommends. This github issue suggests installing gi-cairo, but gi-cairo is not on the Arch repositories, nor could I find it on PyPI (my own search fail?) I remember this being a breeze on Python2, but have migrated to Python3 now.
This is in case someone is having the same problem on Ubuntu 14.04, as I did using Python 3.4.3. By using bits and hints from JDong's answer, I've solved the problem as follows. (Basically change the MatPlotLib backend to qt5agg.) Install python3-pyqt5. sudo apt-get install python3-pyqt5 Find out where the matplotlibrc file is so you can edit it. This can be done using the following in Python console. import matplotlib matplotlib.matplotlib_fname() Edit the matplotlibrc file (you'll probably require sudo), find the line beginning with backend :, and change it to backend : qt5agg. If such a line doesn't exist, just create one. The above steps have solved it for me on Ubuntu 14.04. I hope that helps.
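If you would rather not edit matplotlibrc (for example because you only need the fix in one script), the backend can also be selected programmatically; this is a minimal sketch assuming the Qt5 bindings from the step above are installed, and matplotlib.use() must run before pyplot is imported:
import matplotlib
matplotlib.use('Qt5Agg')  # choose the backend before pyplot is imported
import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4, 5])
plt.show()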
Why is hardcoding this list slower than calculating it?
I want to assign the first 1024 terms of this sequence to a list. I initially guessed that hardcoding this list would be the fastest way. I also tried generating the list algorithmically and found this to be faster than hardcoding. I therefore tested various compromise approaches, using increasingly long hardcoded lists and algorithmically extending to 1024 items. The fastest way I found involved hardcoding the first 128 items and generating the rest. I'd like to understand why hardcoding the first 128 items in the sequence and calculating the rest is faster than hardcoding all 1024 items. Code and profile results are shown below, using Python 3.4.2 Shell (IDLE) and cProfile timeit ( Thanks to Veedrac's answer for the improved profiling code). I've left the hardcoded lists on one very long line to avoid cluttering the question with rows of numbers, but apart from this the code does not require horizontal scrolling. Code def hardcoded(): m = [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,9,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,10] def softcoded(): m = [0] for k in range(10): m += m m[-1] += 1 def hybrid(): m = [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7] for k in range(3): m += m m[-1] += 1 from timeit import Timer def p_time(func, n=10000): print(func.__name__) print(min(Timer(func).repeat(10, n)) / n) p_time(hardcoded) p_time(softcoded) p_time(hybrid) 
Timings hardcoded 1.593102711162828e-05 softcoded 1.1183638458442147e-05 hybrid 9.69246251002005e-06 I ran all of the timings several times and pasted in the lowest.
Firstly, your timing is done in a high overhead way. You should use this instead: def hardcoded(): m = [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,9,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,10] def softcoded(): m = [0] for k in range(10): m += m m[-1] += 1 def hybrid(): m = [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7] for k in range(3): m += m m[-1] += 1 from timeit import Timer def p_time(func, n=10000): print(func.__name__) print(min(Timer(func).repeat(10, n)) / n) p_time(hardcoded) p_time(softcoded) p_time(hybrid) My timings on Python 3.4 are hardcoded 4.190810100408271e-06 softcoded 4.509894398506731e-06 hybrid 3.4970380016602578e-06 which doesn't quite agree with yours. This is probably because cProfile will add quite a bit of overhead to the first case (hence to move to timeit.Timer). To understand why, let's look at the disassembly for hardcoded: 0 LOAD_CONST 1 (0) 3 LOAD_CONST 2 (1) 6 LOAD_CONST 1 (0) 9 LOAD_CONST 3 (2) 12 LOAD_CONST 1 (0) ... 
3063 LOAD_CONST 2 (1) 3066 LOAD_CONST 1 (0) 3069 LOAD_CONST 11 (10) 3072 BUILD_LIST 1024 3075 STORE_FAST 0 (m) 3078 LOAD_CONST 0 (None) 3081 RETURN_VALUE This is several kilobytes of code that needs to go through a big switch that decodes each LOAD_CONST instruction, reads its argument, finds the argument from hardcoded.__code__.co_consts (which is (None, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)) and add that to the list. It is done this way because a list is mutable so you need to build a new one each time. CPython will do this in the opcodes because it's fast for most lists; it's only slow when building large, low entropy lists. In contrary, m += m is able to just duplicate the list (an internal C routine), increment a few reference counts and return. This requires no expensive decode stage, no reading elements from tuples and is very cache friendly. However, the call m += m is only low-overhead when the constant-time bytecode dispatch, type-lookup and internal bookkeeping is amortized out over the whole operation. When m is small, this is not the case so it ends up being slower than the LOAD_CONST loop from before. This is why the hybrid approach is faster but the full loop is slower; the first 7 iterations are slower than just using LOAD_CONST. We can improve hybrid by unrolling: def hybrid(): m = [0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7] m *= 2 m[-1] += 1 m *= 2 m[-1] += 1 m *= 2 m[-1] += 1 return m and again with a bit of compression: def hybrid(): m = [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 6] m *= 16 m[127::128] = 7, 8, 7, 9, 7, 8, 7, 10 return m But really you should just cache the result and copy it: M = 
[0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,9,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,10] def copy(): m = M[:] In fact, if possible you should try tuples, as immutable values don't need to be copied or regenerated at all: def immutable(): m = 
(0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,9,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,10) Timings on Python 3.4: hybrid 2.64736270182766e-06 copy 2.3061631014570593e-06 immutable 1.1302909697405994e-07 Note that the hybrid being timed is my improved version.
Eigenvectors computed with numpy's eigh and svd do not match
Consider singular value decomposition M=USV*. Then the eigenvalue decomposition of M* M gives M* M= V (S* S) V*=VS* U* USV*. I wish to verify this equality with numpy by showing that the eigenvectors returned by eigh function are the same as those returned by svd function: import numpy as np np.random.seed(42) # create mean centered data A=np.random.randn(50,20) M= A-np.array(A.mean(0),ndmin=2) # svd U1,S1,V1=np.linalg.svd(M) S1=np.square(S1) V1=V1.T # eig S2,V2=np.linalg.eigh(np.dot(M.T,M)) indx=np.argsort(S2)[::-1] S2=S2[indx] V2=V2[:,indx] # both Vs are in orthonormal form assert np.all(np.isclose(np.linalg.norm(V1,axis=1), np.ones(V1.shape[0]))) assert np.all(np.isclose(np.linalg.norm(V1,axis=0), np.ones(V1.shape[1]))) assert np.all(np.isclose(np.linalg.norm(V2,axis=1), np.ones(V2.shape[0]))) assert np.all(np.isclose(np.linalg.norm(V2,axis=0), np.ones(V2.shape[1]))) assert np.all(np.isclose(S1,S2)) assert np.all(np.isclose(V1,V2)) The last assertion fails. Why?
Just play with small numbers to debug your problem. Start with A=np.random.randn(3,2) instead of your much larger matrix with size (50,20). In my random case, I find that v1 = array([[-0.33872745, 0.94088454], [-0.94088454, -0.33872745]]) and for v2: v2 = array([[ 0.33872745, -0.94088454], [ 0.94088454, 0.33872745]]) They only differ by a sign, and obviously, even when normalized to have unit norm, a vector can differ by a sign. Now if you try the trick assert np.all(np.isclose(V1, -1*V2)) for your original big matrix, it fails... again, this is OK. What happens is that some vectors have been multiplied by -1 and some others haven't. A correct way to check for equality between the vectors is: assert np.allclose(abs((V1*V2).sum(0)), 1.) and indeed, to get a feeling for how this works you can print this quantity: (V1*V2).sum(0) which indeed is either +1 or -1 depending on the vector: array([ 1., -1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., -1., 1., 1., 1., -1., -1.]) EDIT: This will happen in most cases, especially if starting from a random matrix. Notice however that this test will likely fail if one or more eigenvalues has an eigenspace of dimension larger than 1, as pointed out by @Sven Marnach in his comment below: There might be other differences than just vectors multiplied by -1. If any of the eigenvalues has a multi-dimensional eigenspace, you might get an arbitrary orthonormal basis of that eigenspace, and two such bases might be rotated against each other by an arbitrary unitary matrix
numpy division with RuntimeWarning: invalid value encountered in double_scalars
I wrote the following script: import numpy d = numpy.array([[1089, 1093]]) e = numpy.array([[1000, 4443]]) answer = numpy.exp(-3 * d) answer1 = numpy.exp(-3 * e) res = answer.sum()/answer1.sum() print res But I got this result, and the following warning occurred: nan C:\Users\Desktop\test.py:16: RuntimeWarning: invalid value encountered in double_scalars res = answer.sum()/answer1.sum() It seems that the inputs were so small that Python rounded them to zero, even though the division should mathematically have a well-defined result. How can I solve this kind of problem?
You can't solve it directly: answer1.sum() is simply 0, and you can't perform a division by zero. This happens because answer1 is the exponential of 2 very large, negative numbers, so that the result is rounded to zero. nan is returned in this case because of the division by zero. Now to solve your problem you could: go for a library for high-precision mathematics, like mpmath. But that's less fun. as an alternative to a bigger weapon, do some math manipulation, as detailed below. go for a tailored scipy/numpy function that does exactly what you want! Check out @Warren Weckesser's answer. Here I explain how to do some math manipulation that helps on this problem. We have that for the numerator: exp(-x)+exp(-y) = exp(log(exp(-x)+exp(-y))) = exp(log(exp(-x)*[1+exp(-y+x)])) = exp(log(exp(-x)) + log(1+exp(-y+x))) = exp(-x + log(1+exp(-y+x))) where above x=3*1089 and y=3*1093. Now, the argument of this exponential is -x + log(1+exp(-y+x)) = -x + 6.1441934777474324e-06 For the denominator you can proceed similarly (with z=3*1000 and k=3*4443), but there log(1+exp(-k+z)) is already rounded to 0, so that the argument of the exponential function at the denominator is simply rounded to -z=-3000. You then have that your result is exp(-x + log(1+exp(-y+x)))/exp(-z) = exp(-x+z+log(1+exp(-y+x))) = exp(-266.99999385580668) which is already extremely close to the result that you would get if you were to keep only the 2 leading terms (i.e. the first number 1089 in the numerator and the first number 1000 at the denominator): exp(-3*(1089-1000)) = exp(-267) For the sake of it, let's see how close we are to the solution from Wolfram Alpha (link): Log[(exp[-3*1089]+exp[-3*1093])/(exp[-3*1000]+exp[-3*4443])] -> -266.999993855806522267194565420933791813296828742310997510523 The difference between this number and the exponent above is +1.7053025658242404e-13, so the approximation we made at the denominator was fine. The final result is exp(-266.99999385580668) = 1.1050349147204485e-116 From Wolfram Alpha it is (link) 1.105034914720621496.. × 10^-116 # Wolfram alpha. and again, it is safe to use numpy here too.
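For completeness, the "tailored numpy function" route mentioned above is essentially the log-sum-exp trick; a minimal sketch with numpy's logaddexp ufunc, applied to the (flattened) data from the question, could look like this:
import numpy as np

d = np.array([1089, 1093])
e = np.array([1000, 4443])

# log(sum(exp(-3*d))) and log(sum(exp(-3*e))), computed without underflowing to 0
log_num = np.logaddexp.reduce(-3 * d)
log_den = np.logaddexp.reduce(-3 * e)
res = np.exp(log_num - log_den)
print(res)  # ~1.105e-116, matching the Wolfram Alpha value above
The subtraction happens in log space, so only the final, representable ratio is ever exponentiated.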
When using asyncio, how do you allow all running tasks to finish before shutting down the event loop
I have the following code: @asyncio.coroutine def do_something_periodically(): while True: asyncio.async(my_expensive_operation()) yield from asyncio.sleep(my_interval) if shutdown_flag_is_set: print("Shutting down") break I run this function until complete. The problem occurs when shutdown is set - the function completes and any pending tasks are never run. (You see this as an error task: <Task pending coro=<report() running at script.py:33> wait_for=<Future pending cb=[Task._wakeup()]>> ). How do I schedule a shutdown correctly? To give some context, I'm writing a system monitor which reads from /proc/stat every 5 seconds, computes the cpu usage in that period, and then sends the result to a server. I want to keep scheduling these monitoring jobs until I receive sigterm, when I stop scheduling, wait for all current jobs to finish, and exit gracefully.
You can retrieve the unfinished tasks and run the loop again until they have finished, then close the loop or exit your program. pending = asyncio.Task.all_tasks() loop.run_until_complete(asyncio.gather(*pending)) pending is a list of pending tasks. asyncio.gather() allows you to wait on several tasks at once. If you want to ensure all the tasks are completed inside a coroutine (maybe you have a "main" coroutine), you can do it this way, for instance: @asyncio.coroutine def do_something_periodically(): while True: asyncio.async(my_expensive_operation()) yield from asyncio.sleep(my_interval) if shutdown_flag_is_set: print("Shutting down") break yield from asyncio.gather(*asyncio.Task.all_tasks()) Also, in this case, since all the tasks are created in the same coroutine, you already have access to the tasks: @asyncio.coroutine def do_something_periodically(): tasks = [] while True: tasks.append(asyncio.async(my_expensive_operation())) yield from asyncio.sleep(my_interval) if shutdown_flag_is_set: print("Shutting down") break yield from asyncio.gather(*tasks)
Ipython console in Spyder stuck on "connecting to kernel"
I am new to python and coming from Matlab and I have installed the latest version of Python(x,y) (2.7.9.0) on my Win 8 64 bit PC. The problem that I have is that, each time I start Spyder, the default IPython console gets stuck on "connecting to kernel". I can see that a new kernel is launched each time because a new .json file appears in the directory ".ipython\profile_default\security". I can access this kernel by opening a new IPython console by clicking on "connect to an existing kernel" and then browsing to find it, then it works fine (except that the variables I create do not appear in the variable explorer). I can also quit the kernel from this new IPython console but this does not solve my problem because when I launch a new IPython console by clicking on "open an IPython console" or restarting Spyder, it still hangs on "connecting to kernel" and creates a new .json file. The closest issue that I could find on a forum is this one, the only difference being that I do not have the "import sitecustomize" error in the internal console. I have tried uninstalling Python(x,y) and python but to no avail. Any hint would be really appreciated.
I run "Reset Spyder Settings" from the Windows Menu in the Anaconda section.
Python Urllib2 SSL error
Python 2.7.9 is now much more strict about SSL certificate verification. Awesome! I'm not surprised that programs that were working before are now getting CERTIFICATE_VERIFY_FAILED errors. But I can't seem to get them working (without disabling certificate verification entirely). One program was using urllib2 to connect to Amazon S3 over https. I download the root CA certificate into a file called "verisign.pem" and try this: import urllib2, ssl context = ssl.create_default_context() context.load_verify_locations(cafile = "./verisign.pem") print context.get_ca_certs() urllib2.urlopen("https://bucket.s3.amazonaws.com/", context=context) and I still get CERTIFICATE_VERIFY_FAILED errors, even though the root CA is printed out correctly in line 4. openssl can connect to this server fine. In fact, here is the command I used to get the CA cert: openssl s_client -showcerts -connect bucket.s3.amazonaws.com:443 < /dev/null I took the last cert in the chain and put it in a PEM file, which openssl reads fine. It's a Verisign certificate with: Serial number: 35:97:31:87:f3:87:3a:07:32:7e:ce:58:0c:9b:7e:da Subject key identifier: 7F:D3:65:A7:C2:DD:EC:BB:F0:30:09:F3:43:39:FA:02:AF:33:31:33 SHA1 fingerprint: F4:A8:0A:0C:D1:E6:CF:19:0B:8C:BC:6F:BC:99:17:11:D4:82:C9:D0 Any ideas how to get this working with validation enabled?
To summarize the comments about the cause of the problem and explain the real problem in more detail: If you check the trust chain for the OpenSSL client you get the following: [0] 54:7D:B3:AC:BF:... /CN=*.s3.amazonaws.com [1] 5D:EB:8F:33:9E:... /CN=VeriSign Class 3 Secure Server CA - G3 [2] F4:A8:0A:0C:D1:... /CN=VeriSign Class 3 Public Primary Certification Authority - G5 [OT] A1:DB:63:93:91:... /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority The first certificate [0] is the leaf certificate sent by the server. The following certificates [1] and [2] are chain certificates sent by the server. The last certificate [OT] is the trusted root certificate, which is not sent by the server but is in the local store of trusted CAs. Each certificate in the chain is signed by the next one and the last certificate [OT] is trusted, so the trust chain is complete. If you check the trust chain instead with a browser (e.g. Google Chrome using the NSS library) you get the following chain: [0] 54:7D:B3:AC:BF:... /CN=*.s3.amazonaws.com [1] 5D:EB:8F:33:9E:... /CN=VeriSign Class 3 Secure Server CA - G3 [NT] 4E:B6:D5:78:49:... /CN=VeriSign Class 3 Public Primary Certification Authority - G5 Here [0] and [1] are again sent by the server, but [NT] is the trusted root certificate. While, judging from the subject, this looks exactly like the chain certificate [2], the fingerprint says that the certificates are different. If you take a closer look at certificates [2] and [NT] you will see that the public key inside the certificate is the same, and thus both [2] and [NT] can be used to verify the signature for [1] and therefore to build the trust chain. This means that, while the server sends the same certificate chain in all cases, there are multiple ways to verify the chain up to a trusted root certificate. How this is done depends on the SSL library and on the known trusted root certificates: [0] (*.s3.amazonaws.com) | [1] (Verisign G3) --------------------------\ | | /------------------ [2] (Verisign G5 F4:A8:0A:0C:D1...) | | | | certificates sent by server | .....|...............................................................|................ | locally trusted root certificates | | | [OT] Public Primary Certification Authority [NT] Verisign G5 4E:B6:D5:78:49 OpenSSL library Google Chrome (NSS library) But the question remains why your verification was unsuccessful. What you did was to take the trusted root certificate used by the browser (Verisign G5 4E:B6:D5:78:49) together with OpenSSL. But the verification in the browser (NSS) and in OpenSSL works slightly differently: NSS: build the trust chain from the certificates sent by the server. Stop building the chain once we reach a certificate signed by any of the locally trusted root certificates. OpenSSL: build the trust chain from the certificates sent by the server. After this is done, check whether we have a trusted root certificate signing the last certificate in the chain. Because of this subtle difference OpenSSL is not able to verify the chain [0],[1],[2] against root certificate [NT], because this certificate does not sign the last element in the chain [2], but instead signs [1]. If the server instead sent only the chain [0],[1], the verification would succeed. This is a long-known bug; patches exist, and hopefully the issue is finally addressed in OpenSSL 1.0.2 with the introduction of the X509_V_FLAG_TRUSTED_FIRST option.
"SSL: CERTIFICATE_VERIFY_FAILED" Error
I am getting this error Exception in thread Thread-3: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner self.run() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run self.__target(*self.__args, **self.__kwargs) File "/Users/Matthew/Desktop/Skypebot 2.0/bot.py", line 271, in process info = urllib2.urlopen(req).read() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 431, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 449, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1240, in https_open context=self._context) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1197, in do_open raise URLError(err) URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)> This is the code that is causing this error: if input.startswith("!web"): input = input.replace("!web ", "") url = "https://domainsearch.p.mashape.com/index.php?name=" + input req = urllib2.Request(url, headers={ 'X-Mashape-Key': 'XXXXXXXXXXXXXXXXXXXX' }) info = urllib2.urlopen(req).read() Message.Chat.SendMessage ("" + info) The API Im using requires me to use the https. How can I make it bypass the verification?
If you just want to bypass verification, you can create a new SSLContext. By default newly created contexts use CERT_NONE. Be careful with this, as stated in section 17.3.7.2.1: When calling the SSLContext constructor directly, CERT_NONE is the default. Since it does not authenticate the other peer, it can be insecure, especially in client mode where most of the time you would like to ensure the authenticity of the server you're talking to. Therefore, when in client mode, it is highly recommended to use CERT_REQUIRED. But if you just want it to work now for some other reason you can do the following; you'll have to import ssl as well: input = input.replace("!web ", "") url = "https://domainsearch.p.mashape.com/index.php?name=" + input req = urllib2.Request(url, headers={ 'X-Mashape-Key': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' }) gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1) # Only for gangstars info = urllib2.urlopen(req, context=gcontext).read() Message.Chat.SendMessage ("" + info) This should get round your problem, but you're not really solving any of the issues; you just won't see the [SSL: CERTIFICATE_VERIFY_FAILED] error because you now aren't verifying the cert! To add to the above, if you want to know more about why you are seeing these issues you will want to have a look at PEP 476. This PEP proposes to enable verification of X509 certificate signatures, as well as hostname verification for Python's HTTP clients by default, subject to opt-out on a per-call basis. This change would be applied to Python 2.7, Python 3.4, and Python 3.5. There is an advised opt-out which isn't dissimilar to my advice above: import ssl # This restores the same behavior as before. context = ssl._create_unverified_context() urllib.urlopen("https://no-valid-cert", context=context) It also features a highly discouraged option via monkeypatching, which you don't often see in Python: import ssl ssl._create_default_https_context = ssl._create_unverified_context This overrides the default function for context creation with the function to create an unverified context. This highly discouraged option can be seen in the wild here!
pandas groupby sort within groups
I want to group my dataframe by two columns and then sort the aggregated results within the groups. In [167]: df Out[167]: count job source 0 2 sales A 1 4 sales B 2 6 sales C 3 3 sales D 4 7 sales E 5 5 market A 6 3 market B 7 2 market C 8 4 market D 9 1 market E In [168]: df.groupby(['job','source']).agg({'count':sum}) Out[168]: count job source market A 5 B 3 C 2 D 4 E 1 sales A 2 B 4 C 6 D 3 E 7 I would now like to sort the count column in descending order within each of the groups. And then take only the top three rows. To get something like: count job source market A 5 D 4 B 3 sales E 7 C 6 B 4
What you want to do is actually again a groupby (on the result of the first groupby): sort and take the first three elements per group. Starting from the result of the first groupby: In [60]: df_agg = df.groupby(['job','source']).agg({'count':sum}) We group by the first level of the index: In [63]: g = df_agg['count'].groupby(level=0, group_keys=False) Then we want to sort ('order') each group and take the first three elements: In [64]: res = g.apply(lambda x: x.order(ascending=False).head(3)) However, for this, there is a shortcut function to do this, nlargest: In [65]: g.nlargest(3) Out[65]: job source market A 5 D 4 B 3 sales E 7 C 6 B 4 dtype: int64
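A small version note: Series.order() used in the apply above was later deprecated and removed in favour of sort_values(), so on recent pandas the same line would read res = g.apply(lambda x: x.sort_values(ascending=False).head(3)) while the g.nlargest(3) shortcut keeps working unchanged.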
Heroku TypeError: parse_requirements() missing 1 required keyword argument: 'session'
I am trying to migeate an app to the cedar-14 stack from cedar on Heroku. In my requirements.txt file I have: .... robobrowser==0.5.1 .... When I try to deploy by pushing the project to heroku I get: Collecting robobrowser==0.5.1 (from -r requirements.txt (line 17)) Downloading robobrowser-0.5.1.tar.gz Traceback (most recent call last): File "<string>", line 20, in <module> File "/tmp/pip-build-PqCF2A/robobrowser/setup.py", line 38, in <module> for requirement in parse_requirements('requirements.txt') File "/app/.heroku/python/lib/python2.7/site-packages/pip-6.0.6-py2.7.egg/pip/req/req_file.py", line 19, in parse_requirements "parse_requirements() missing 1 required keyword argument: " TypeError: parse_requirements() missing 1 required keyword argument: 'session' Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 20, in <module> File "/tmp/pip-build-PqCF2A/robobrowser/setup.py", line 38, in <module> for requirement in parse_requirements('requirements.txt') File "/app/.heroku/python/lib/python2.7/site-packages/pip-6.0.6-py2.7.egg/pip/req/req_file.py", line 19, in parse_requirements "parse_requirements() missing 1 required keyword argument: " TypeError: parse_requirements() missing 1 required keyword argument: 'session' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-PqCF2A/robobrowser How can I fix this?
I ran into this problem installing wabbit_wappa for Python. I 'fixed' it by changing a line in setup.py from: install_reqs = parse_requirements('requirements.txt') to install_reqs = parse_requirements('requirements.txt', session=False) and it installed just fine.
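Passing session=False mainly just satisfies the newly required keyword argument; another workaround commonly used with that pip generation is to hand in a real session object. This sketch assumes pip 6.x/7.x, where these private modules are still importable at these paths (they moved in later pip releases):
from pip.download import PipSession
from pip.req import parse_requirements

install_reqs = parse_requirements('requirements.txt', session=PipSession())
reqs = [str(ir.req) for ir in install_reqs]
Either way, relying on pip internals from setup.py is fragile, which is exactly why it broke here.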
How to post/put json data to ListSerializer
I'm reading about customizing multiple update here and I haven't figured out in what case the custom ListSerializer update method is called. I would like to update multiple objects at once, I'm not worried about multiple create or delete at the moment. From the example in the docs: # serializers.py class BookListSerializer(serializers.ListSerializer): def update(self, instance, validated_data): # custom update logic ... class BookSerializer(serializers.Serializer): ... class Meta: list_serializer_class = BookListSerializer And my ViewSet # api.py class BookViewSet(ModelViewSet): queryset = Book.objects.all() serializer_class = BookSerializer And my url setup using DefaultRouter # urls.py router = routers.DefaultRouter() router.register(r'Book', BookViewSet) urlpatterns = patterns('', url(r'^api/', include(router.urls)), ... So I have this set up using the DefaultRouter so that /api/Book/ will use the BookSerializer. Is the general idea that if I POST/PUT/PATCH an array of JSON objects to /api/Book/ then the serializer should switch over to BookListSerializer? I've tried POST/PUT/PATCH JSON data list to this /api/Book/ that looks like: [ {id:1,title:thing1}, {id:2, title:thing2} ] but it seems to still treat the data using BookSerializer instead of BookListSerializer. If I submit via POST I get Invalid data. Expected a dictionary, but got list. and if I submit via PATCH or PUT then I get a Method 'PATCH' not allowed error. Question: Do I have to adjust the allowed_methods of the DefaultRouter or the BookViewSet to allow POST/PATCH/PUT of lists? Are the generic views not set up to work with the ListSerializer? I know I could write my own list deserializer for this, but I'm trying to stay up to date with the new features in DRF 3 and it looks like this should work but I'm just missing some convention or some option.
Django REST framework by default assumes that you are not dealing with bulk data creation, updates, or deletion. This is because 99% of people are not dealing with bulk data creation, and DRF leaves the other 1% to third-party libraries. In Django REST framework 2.x and 3.x, a third-party package exists for this. Now, you are trying to do bulk creation but you are getting an error back that says Invalid data. Expected a dictionary, but got list. This is because you are sending in a list of objects to create, instead of just sending in one. You can get around this a few ways, but the easiest is to just override get_serializer on your view to add the many=True flag to the serializer when the incoming data is a list. def get_serializer(self, *args, **kwargs): if "data" in kwargs: data = kwargs["data"] if isinstance(data, list): kwargs["many"] = True return super(MyViewSet, self).get_serializer(*args, **kwargs) This will allow Django REST framework to know to automatically use the ListSerializer when creating objects in bulk. Now, for other operations such as updating and deleting, you are going to need to override the default routes. I'm going to assume that you are using the routes provided by Django REST framework bulk, but you are free to use whatever method names you want. You are going to need to add methods for bulk PUT and PATCH to the view as well. from rest_framework.response import Response def bulk_update(self, request, *args, **kwargs): partial = kwargs.pop("partial", False) queryset = self.filter_queryset(self.get_queryset()) serializer = self.get_serializer(instance=queryset, data=request.data, many=True, partial=partial) serializer.is_valid(raise_exception=True) self.perform_update(serializer) return Response(serializer.data) def partial_bulk_update(self, *args, **kwargs): kwargs["partial"] = True return self.bulk_update(*args, **kwargs) This won't work out of the box as Django REST framework doesn't support bulk updates by default. This means you also have to implement your own bulk updates. The current code will handle bulk updates as though you are trying to update the entire list, which is how the old bulk updating package previously worked. While you didn't ask for bulk deletion, that wouldn't be particularly difficult to do. def bulk_delete(self, request, *args, **kwargs): queryset = self.filter_queryset(self.get_queryset()) self.perform_delete(queryset) return Response(status=204) This has the same effect of removing all objects, the same as the old bulk plugin. None of this code was tested. If it doesn't work, consider it as a detailed example.
pip install: Please check the permissions and owner of that directory
When installing pip and Python I have run into a snag that says: The directory '/Users/Parthenon/Library/Logs/pi' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag. because I now have to install using sudo. I had Python and a handful of libraries already installed on my Mac; I'm running Yosemite. I recently had to do a clean wipe and then reinstall of the OS. Now I'm getting this prompt and I'm having trouble figuring out how to change it. Before, my command line was Parthenon$; now it's Philips-MBP:~ Parthenon$. I am the sole owner of this computer and this is the only account on it. This seems to be a problem when upgrading to Python 3.4: nothing seems to be in the right place, virtualenv isn't going where I expect it to, etc.
I also saw this change on my Mac when I went from running 'pip' to 'sudo pip'. Adding '-H' to sudo causes the message to go away for me. E.g. sudo -H pip install foo 'man sudo' tells me that '-H' causes sudo to set $HOME to the target user's (root in this case). So it appears pip is looking into $HOME/Library/Logs and sudo by default isn't setting HOME to ~root. Not surprisingly, ~/Library/Logs is owned by you as a user rather than root. I suspect this is some recent change in pip. I'll run it with 'sudo -H' for now as a workaround.
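If you would rather avoid sudo entirely, a per-user install sidesteps the permission problem as well; on a Mac the files typically land under ~/Library/Python/2.7/: pip install --user foo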
MkDocs and MathJax
I'm new to MkDocs and am writing some technical documentation that requires latex. I've successfully built a small website with one of the MkDocs themes, however it won't properly display the latex equations. I followed the instructions at: http://www.vlfeat.org/matconvnet/developers/ as well as the instructions following the python-markdown-mathjax link from that page. I have also tinkered with adding appropriate lines to my mkdocs.yaml file, similar to: https://github.com/EdyJ/vehicle-physics-docs/blob/master/mkdocs.yml However, issuing the command 'mkdocs build' still results in a site that doesn't render the equations. I've also tried adding a -x mathjax flag with the mkdocs build command. I've scoured the web and have been tinkering for quite a bit of time now. Can anyone shed light on what I need to do to get these two playing together?
This is actually easier than I expected. First I installed the Python-Markdown-Math Extension: pip install https://github.com/mitya57/python-markdown-math/archive/master.zip Then I created a new MkDocs project: mkdocs new test_math Next I edited the test_math/docs/index.md file to be as follows (sample borrowed from the MathJax documentation): # MathJax Test Page When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are $$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$ Finally, I edited the test_math/config.yaml file to be as follows: site_name: Test Math extra_javascript: - http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML markdown_extensions: - mdx_math I was unsure if this would work, but I ran the test server to see: mkdocs serve I then opened my browser and loaded http://127.0.0.1:8000/. The page displayed with the sample equations properly formatted: Then I remembered that the OP asked for this to work with ReadTheDocs, so I added the following line to the config: theme: readthedocs My browser reloaded and the following (properly formatted equations) displayed: I should note that I'm getting some weird error about fontawesome not loading. With the MkdDocs' theme, the equations disappear after a minute (when the error appears in the browser's console). However, in the ReadTheDocs theme, the equations display properly, even with the error. Either way, I believe this error is related to some other issue on my local machine. Finally, the Bounty is... Looking for an answer drawing from credible and/or official sources I don't normally advertise this, but since you asked, I am the lead developer of Python-Markdown, I work regularly with mitya57 (the creator of Python-Markdown-Math Extension) as he is one of two other developers with commit access to Python-Markdown, and I am a contributor to MkDocs (one of those contributions being support for Python-Markdown Extensions).
Return Pandas dataframe from PostgreSQL query with sqlalchemy
I want to query a PostgreSQL database and return the output as a Pandas dataframe. I use sqlalchemy to create a connection to the database: from sqlalchemy import create_engine engine = create_engine('postgresql://user@localhost:5432/mydb') I write a Pandas dataframe to a database table: i=pd.read_csv(path) i.to_sql('Stat_Table',engine,if_exists='replace') Based upon the docs, it looks like pd.read_sql_query() should accept a SQLAlchemy engine: a=pd.read_sql_query('select * from Stat_Table',con=engine) But it throws an error: ProgrammingError: (ProgrammingError) relation "stat_table" does not exist I'm using Pandas version 0.14.1. What's the right way to do this?
You are bitten by the case (in)sensitivity issues with PostgreSQL. If you quote the table name in the query, it will work: df = pd.read_sql_query('select * from "Stat_Table"',con=engine) But personally, I would advise to just always use lower case table names (and column names), also when writing the table to the database to prevent such issues. From the PostgreSQL docs (http://www.postgresql.org/docs/8.0/static/sql-syntax.html#SQL-SYNTAX-IDENTIFIERS): Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case To explain a bit more: you have written a table with the name Stat_Table to the database (and sqlalchemy will quote this name, so it will be written as "Stat_Table" in the postgres database). When doing the query 'select * from Stat_Table' the unquoted table name will be converted to lower case stat_table, and so you get the message that this table is not found. See eg also Are PostgreSQL column names case-sensitive?
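Following that advice, a sketch of the same round trip with an all-lowercase name (reusing the engine and dataframe from the question) would be:
i.to_sql('stat_table', engine, if_exists='replace')
df = pd.read_sql_query('select * from stat_table', con=engine)
With a lowercase name there is nothing to quote on either side.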
How do I install a Python package with a .whl file?
I'm having trouble installing a Python package (specifically, JPype1 0.5.7) on my Windows machine, and would like to install it with Christoph Gohlke's Window binaries. (Which, to my experience, alleviated much of the fuss for many other package installations.) However, while Christoph used to provide .exe files in the past, recently it seems he's uploading .whl files instead. http://www.lfd.uci.edu/~gohlke/pythonlibs/#jpype But how do I install .whl files? Notes: I've found documents on wheel, but they don't seem so staightforward in explaining how to install .whl files. This question is a duplicate with this question, which wasn't directly answered.
I just used the following which was quite simple. First open a console then cd to where you've downloaded your file like some-package.whl and use pip install some-package.whl Note: if pip.exe is not recognized, you may find it in the "Scripts" directory from where python has been installed. If pip is not installed, this page can help: How do I install pip on Windows? Note: for clarification If you copy the *.whl file to your local drive (ex. C:\some-dir\some-file.whl) use the following command line parameters -- pip install C:/some-dir/some-file.whl
AttributeError using pyBrain _splitWithPortion - object type changed?
I'm testing out pybrain following the basic classification tutorial here and a different take on it with some more realistic data here. However I receive this error when applying trndata._convertToOneOfMany() with the error: AttributeError: 'SupervisedDataSet' object has no attribute '_convertToOneOfMany The data set is created as a classification.ClassificationDataSet object however calling splitWithProportion seems to change it supervised.SupervisedDataSet object, so being fairly new to Python this error doesn't seem such a surprise as the supervised.SupervisedDataSet doesn't have that method, classification.ClassificationDataSet does. Code here. However the same exact code is used across so many tutorials I feel that I must be missing something as plenty of other people have it working. I've looked at changes to the codebase on github and there's nothing around this function, I've also tried running under Python 3 vs 2.7 but no difference. If anyone has any pointers to get me back on the right path and that would be very much appreciated. #flatten the 64x64 data in to one dimensional 4096 ds = ClassificationDataSet(4096, 1 , nb_classes=40) for k in xrange(len(X)): #length of X is 400 ds.addSample(np.ravel(X[k]),y[k]) # a new sample consisting of input and target print(type(ds)) tstdata, trndata = ds.splitWithProportion( 0.25 ) print(type(trndata)) trndata._convertToOneOfMany() tstdata._convertToOneOfMany()
I had the same problem. I added the following code to make it work on my machine. tstdata_temp, trndata_temp = alldata.splitWithProportion(0.25) tstdata = ClassificationDataSet(2, 1, nb_classes=3) for n in xrange(0, tstdata_temp.getLength()): tstdata.addSample( tstdata_temp.getSample(n)[0], tstdata_temp.getSample(n)[1] ) trndata = ClassificationDataSet(2, 1, nb_classes=3) for n in xrange(0, trndata_temp.getLength()): trndata.addSample( trndata_temp.getSample(n)[0], trndata_temp.getSample(n)[1] ) This converts tstdata and trndata back to the ClassificationDataSet type.
Is it possible to save datetime to DynamoDB?
I have the following code: users_table = Table(users_table_name, connection=Core.aws_dynamodb_connection) users_table.put_item(data={ "login": login, "password": hashlib.sha256(password.encode("utf-8")).hexdigest(), "profile": profile, "registration_date": datetime.now() # PROBLEM IS HERE }) But when I run it, it fails with this error: TypeError: Unsupported type "< type 'datetime.datetime' >" for value "2015-01-12 05:02:57.053131" I've tried a lot of ways, but it seems that it isn't possible to save a datetime to DynamoDB. By the way, it works fine in MongoDB. Is there any solution?
Okay, I see that DynamoDB does not support any date types. So the only solution is to store a Unix-style timestamp as a number, or to save the date as a string.
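For example, a minimal sketch reusing the variables from the question (the only change is converting the datetime before the put; an ISO 8601 string has the nice property that lexicographic order matches chronological order):

from datetime import datetime
import time

# either an ISO 8601 string ...
registration_date = datetime.now().isoformat()   # e.g. '2015-01-12T05:02:57.053131'
# ... or an integer epoch timestamp
# registration_date = int(time.time())

users_table.put_item(data={
    "login": login,
    "password": hashlib.sha256(password.encode("utf-8")).hexdigest(),
    "profile": profile,
    "registration_date": registration_date,
})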
How can you slice with string keys instead of integers on a python OrderedDict?
Since an OrderedDict has the features of both a list (with ordered elements), and a dictionary (with keys instead of indexes), it would seem natural that you could slice using keys. >>> from collections import OrderedDict >>> cities = OrderedDict((('san francisco', 650), ('new york', 212), ('shanghai', 8621), ('barcelona', 42423))) >>> cities['shanghai':] # I want all the cities from shanghai to the end of the list TypeError: unhashable type What's interesting about this is that it's not the error you'd see due to OrderedDict.__getslice__ not being implemented. I tried adding my own __getslice__ method to OrderedDict, but I keep running into this TypeError problem. It seems like Python is doing some kind of type checking to enforce that slice keys are only integers, before they even get passed to the __getslice__ function, how unpythonic! >>> class BetterOrderedDict(OrderedDict): def __getslice__(self, start=None, end=None, step=1): return 'potato' >>> test = BetterOrderedDict((('one', 1), ('two', 2), ('three', 3), ('four', 4))) >>> print test[1:4] 'potato' # ok this makes sense so far >>> test['one':'four'] TypeError: unhashable type # WTF, strings are hashable! So my question is, why can't I implement non-int slices, what kind of type-checking is preventing the slice keys from even reaching my __getslice__ function, and can I override it by implementing my BetterOrderedDict in C with bindings?
__getslice__ is deprecated way of implementing slicing. Instead you should handle slice objects with __getitem__: from collections import OrderedDict class SlicableDict(OrderedDict): def __getitem__(self, key): if isinstance(key, slice): return 'potato({},{},{})'.format(key.start, key.stop, key.step) return super(SlicableDict, self).__getitem__(key) >>> s = SlicableDict(a=1, b=2, c=3) >>> s SlicableDict([('a', 1), ('c', 3), ('b', 2)]) >>> s['a'] 1 >>> s['a':'c'] 'potato(a,c,None)' And if you need more than potato, than you can implement all three slicing operations following way: def _key_slice_to_index_slice(items, key_slice): try: if key_slice.start is None: start = None else: start = next(idx for idx, (key, value) in enumerate(items) if key == key_slice.start) if key_slice.stop is None: stop = None else: stop = next(idx for idx, (key, value) in enumerate(items) if key == key_slice.stop) except StopIteration: raise KeyError return slice(start, stop, key_slice.step) class SlicableDict(OrderedDict): def __getitem__(self, key): if isinstance(key, slice): items = self.items() index_slice = _key_slice_to_index_slice(items, key) return SlicableDict(items[index_slice]) return super(SlicableDict, self).__getitem__(key) def __setitem__(self, key, value): if isinstance(key, slice): items = self.items() index_slice = _key_slice_to_index_slice(items, key) items[index_slice] = value.items() self.clear() self.update(items) return return super(SlicableDict, self).__setitem__(key, value) def __delitem__(self, key): if isinstance(key, slice): items = self.items() index_slice = _key_slice_to_index_slice(items, key) del items[index_slice] self.clear() self.update(items) return return super(SlicableDict, self).__delitem__(key)
Selenium Element not visible exception
I have been tasked with writing a parser to click a button on a website and I am having trouble clicking one particular button. The following code works on every button except one. Here's the html: http://pastebin.com/6dLF5ru8 here's the source html: http://pastebin.com/XhsedGLb python code: driver = webdriver.Firefox() ... el = driver.find_element_by_id("-spel-nba") actions.move_to_element(el) actions.sleep(.1) actions.click() actions.perform() I am getting this error: ElementNotVisibleException: Message: Element is not currently visible and so may not be interacted with As per Saifur's suggestion, I also tried explicit waits and got the same element-not-visible exception: wait = WebDriverWait(driver, 10) wait.until(EC.presence_of_element_located((By.XPATH, "//input[contains(@id,'spsel')][@value='nba']"))).click()
If you look at the page source, you'll understand that almost all of the SELECT and DIV elements are faked and created from JavaScript, that is why webdriver cannot SEE them. There's a workaround though, by using ActionChains to open your developer console, and inject an artificial CLICK on the desired element, which in fact, is the Label triggering the NBA data loading... here's a working example: from selenium import webdriver from selenium.webdriver.common import action_chains, keys import time driver = webdriver.Firefox() driver.get('Your URL here...') assert 'NBA' in driver.page_source action = action_chains.ActionChains(driver) # open up the developer console, mine on MAC, yours may be diff key combo action.send_keys(keys.Keys.COMMAND+keys.Keys.ALT+'i') action.perform() time.sleep(3) # this ENTER is to get rid of the "i" typed above action.send_keys(keys.Keys.ENTER) # inject the JavaScript... action.send_keys("document.querySelectorAll('label.boxed')[1].click()"+keys.Keys.ENTER) action.perform() Alternatively, to replace all the ActionChains commands, you can simply run execute_script like this: driver.execute_script("document.querySelectorAll('label.boxed')[1].click()") There you go, at least on my local file anyway... Hope this helps!
Find p-value (significance) in scikit-learn LinearRegression
How can I find the p-value (significance) of each coefficient? lm = sklearn.linear_model.LinearRegression() lm.fit(x,y)
scikit-learn's LinearRegression doesn't calculate this information but you can easily extend the class to do it: from sklearn import linear_model from scipy import stats import numpy as np class LinearRegression(linear_model.LinearRegression): """ LinearRegression class after sklearn's, but calculate t-statistics and p-values for model coefficients (betas). Additional attributes available after .fit() are `t` and `p` which are of the shape (y.shape[1], X.shape[1]) which is (n_features, n_coefs) This class sets the intercept to 0 by default, since usually we include it in X. """ def __init__(self, *args, **kwargs): if not "fit_intercept" in kwargs: kwargs['fit_intercept'] = False super(LinearRegression, self)\ .__init__(*args, **kwargs) def fit(self, X, y, n_jobs=1): self = super(LinearRegression, self).fit(X, y, n_jobs) sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1]) se = np.array([ np.sqrt(np.diagonal(sse[i] * np.linalg.inv(np.dot(X.T, X)))) for i in range(sse.shape[0]) ]) self.t = self.coef_ / se self.p = 2 * (1 - stats.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1])) return self Stolen from here. You should take a look at statsmodels for this kind of statistical analysis in Python.
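If you just want the p-values without subclassing anything, the statsmodels route mentioned above looks roughly like this (a sketch, assuming x and y are the same arrays you passed to lm.fit):

import statsmodels.api as sm

X = sm.add_constant(x)      # statsmodels does not add an intercept by default
model = sm.OLS(y, X).fit()
print(model.summary())      # full coefficient table with t-statistics and p-values
print(model.pvalues)        # just the p-values, in the same order as the columns of X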
Dynamically exclude or include a field in Django REST framework serializer
I have a serializer in Django REST framework defined as follows: class QuestionSerializer(serializers.Serializer): id = serializers.CharField() question_text = QuestionTextSerializer() topic = TopicSerializer() Now I have two API views that use the above serializer: class QuestionWithTopicView(generics.RetrieveAPIView): # I wish to include all three fields - id, question_text # and topic in this API. serializer_class = QuestionSerializer class QuestionWithoutTopicView(generics.RetrieveAPIView): # I want to exclude topic in this API. serializer_class = ExamHistorySerializer One solution is to write two different serializers. But there must be an easier way to conditionally exclude a field from a given serializer.
Have you tried this technique? class QuestionSerializer(serializers.Serializer): def __init__(self, *args, **kwargs): remove_fields = kwargs.pop('remove_fields', None) super(QuestionSerializer, self).__init__(*args, **kwargs) if remove_fields: # for multiple fields in a list for field_name in remove_fields: self.fields.pop(field_name) class QuestionWithoutTopicView(generics.RetrieveAPIView): serializer_class = QuestionSerializer(remove_fields=['field_to_remove1', 'field_to_remove2']) If not, give it a try.
How can I write unit tests against code that uses matplotlib?
I'm working on a python (2.7) program that produces a lot of different matplotlib figures (the data are not random). I would like to implement some tests (using unittest) to be sure that the generated figures are correct. For instance, I store the expected figure (data or image) somewhere, I run my function and compare the result with the reference. Is there a way to do this?
In my experience, image comparison tests end up bring more trouble than they are worth. This is especially the case if you want to run continuous integration across multiple systems (like TravisCI) that may have slightly different fonts or available drawing backends. It can be a lot of work to keep the tests passing even when the functions work perfectly correctly. Furthermore, testing this way requires keeping images in your git repository, which can quickly lead to repository bloat if you're changing the code often. A better approach in my opinion is to (1) assume matplotlib is going to actually draw the figure correctly, and (2) run numerical tests against the data returned by the plotting functions. (You can also always find this data inside the Axes object if you know where to look.) For example, say you want to test a simple function like this: import numpy as np import matplotlib.pyplot as plt def plot_square(x, y): y_squared = np.square(y) return plt.plot(x, y_squared) Your unit test might then look like def test_plot_square1(): x, y = [0, 1, 2], [0, 1, 2] line, = plot_square(x, y) x_plot, y_plot = line.get_xydata().T np.testing.assert_array_equal(y_plot, np.square(y)) Or, equivalently, def test_plot_square2(): f, ax = plt.subplots() x, y = [0, 1, 2], [0, 1, 2] plot_square(x, y) x_plot, y_plot = ax.lines[0].get_xydata().T np.testing.assert_array_equal(y_plot, np.square(y))
How to write __getitem__ cleanly?
In Python, when implementing a sequence type, I often (relatively speaking) find myself writing code like this: class FooSequence(collections.abc.Sequence): # Snip other methods def __getitem__(self, key): if isinstance(key, int): # Get a single item elif isinstance(key, slice): # Get a whole slice else: raise TypeError('Index must be int, not {}'.format(type(key).__name__)) The code checks the type of its argument explicitly with isinstance(). This is regarded as an antipattern within the Python community. How do I avoid it? I cannot use functools.singledispatch, because that's quite deliberately incompatible with methods (it will attempt to dispatch on self, which is entirely useless since we're already dispatching on self via OOP polymorphism). It works with @staticmethod, but what if I need to get stuff out of self? Casting to int() and then catching the TypeError, checking for a slice, and possibly re-raising is still ugly, though perhaps slightly less so. It might be cleaner to convert integers into one-element slices and handle both situations with the same code, but that has its own problems (return 0 or [0]?).
As much as it seems odd, I suspect that the way you have it is the best way to go about things. Patterns generally exist to encompass common use cases, but that doesn't mean that they should be taken as gospel when following them makes life more difficult. The main reason that PEP 443 gives for balking at explicit typechecking is that it is "brittle and closed to extension". However, that mainly applies to custom functions that take a number of different types at any time. From the Python docs on __getitem__: For sequence types, the accepted keys should be integers and slice objects. Note that the special interpretation of negative indexes (if the class wishes to emulate a sequence type) is up to the __getitem__() method. If key is of an inappropriate type, TypeError may be raised; if of a value outside the set of indexes for the sequence (after any special interpretation of negative values), IndexError should be raised. For mapping types, if key is missing (not in the container), KeyError should be raised. The Python documentation explicitly states the two types that should be accepted, and what to do if an item that is not of those two types is provided. Given that the types are provided by the documentation itself, it's unlikely to change (doing so would break far more implementations than just yours), so it's likely not worth the trouble to go out of your way to code against Python itself potentially changing. If you're set on avoiding explicit typechecking, I would point you toward this SO answer. It contains a concise implementation of a @methdispatch decorator (not my name, but i'll roll with it) that lets @singledispatch work with methods by forcing it to check args[1] (arg) rather than args[0] (self). Using that should allow you to use custom single dispatch with your __getitem__ method. Whether or not you consider either of these "pythonic" is up to you, but remember that while The Zen of Python notes that "Special cases aren't special enough to break the rules", it then immediately notes that "practicality beats purity". In this case, just checking for the two types that the documentation explicitly states are the only things __getitem__ should support seems like the practical way to me.
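For reference, the methdispatch decorator from that linked answer is short enough to sketch here; the following is an untested adaptation (Python 3.4+), so treat it as an illustration rather than a drop-in:

import collections.abc
from functools import singledispatch, update_wrapper

def methdispatch(func):
    # like functools.singledispatch, but dispatches on args[1] (the first
    # real argument of a method) instead of args[0] (self)
    dispatcher = singledispatch(func)
    def wrapper(*args, **kwargs):
        return dispatcher.dispatch(args[1].__class__)(*args, **kwargs)
    wrapper.register = dispatcher.register
    update_wrapper(wrapper, func)
    return wrapper

class FooSequence(collections.abc.Sequence):
    def __init__(self, data):
        self._data = list(data)

    def __len__(self):
        return len(self._data)

    @methdispatch
    def __getitem__(self, key):
        raise TypeError('Index must be int or slice, not {}'.format(type(key).__name__))

    @__getitem__.register(int)
    def _(self, key):
        return self._data[key]              # single item

    @__getitem__.register(slice)
    def _(self, key):
        return type(self)(self._data[key])  # whole slice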
Using Python+Theano with OpenCL in an AMD GPU
I'm trying to use Python with Theano to accelerate some code with OpenCL. I installed libgpuarray and pygpu as instructed (I think), and got no errors. The installation detected the OpenCL runtime installed. I just cannot run the Theano example for OpenCL, mainly because I don't know how to specify my GPU. My GPU is a Radeon HD 5340/5450/5470, according to inxi. All code in the Theano documentation uses device=cuda0 and the only place where OpenCL is mentioned, it says device=openclN where N is a number. I tried device=opencl0 and got a pygpu error saying that the correct format is opencl<int>:<int>. I have since tried any number of combinations of numbers (opencl0:0 and such), and always an GpuArrayException: Unknown error. My system is Ubuntu 14.04 x64 and my hardware is a Toshiba Satellite, 15". I installed Theano with pip, and later installed libgpuarray following the instructions on their site. What am I doing wrong?
opencl0:0 is correct. Could you confirm that pyopencl works? You may have a problem with your opencl (or drivers/cl compiler). However, I think Theano does not quite work with OpenCL at the moment. The current state is there is partial support, enough for "hello world", but not enough to run any significant code. See: https://github.com/Theano/Theano/issues/2189 https://github.com/Theano/Theano/issues/1471 https://github.com/Theano/Theano/issues/2190 https://github.com/Theano/Theano/pull/1732 To summarize, no, most stuff is not ported (including Elemwise, for example, which is a common op). I would really like to see Theano on OpenCL though. That would be a great thing for AMD to pitch in on. Soon :)
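A quick way to confirm that Python can see your OpenCL platforms and devices at all (and to find the numbers to use in the opencl<int>:<int> device string) is a small pyopencl check, assuming pyopencl is installed:

from __future__ import print_function
import pyopencl as cl

for p_idx, platform in enumerate(cl.get_platforms()):
    print(p_idx, platform.name)
    for d_idx, device in enumerate(platform.get_devices()):
        print('   ', d_idx, device.name)

If this fails or lists no GPU device, the problem is in the OpenCL driver/runtime rather than in Theano or libgpuarray.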
How to PATCH a single field using Django Rest Framework?
I have a model 'MyModel' with many fields and I would like to update a field 'status' using PATCH method. I'm using class based views. Is there any way to implement PATCH?
Serializers allow partial updates by specifying partial=True when initializing the serialzer. This is how PATCH requests are handled by default in the generic views. serializer = CommentSerializer(comment, data=request.data, partial=True) This will allow you to update individual fields in a serializer, or all of the fields if you want, without any of the restrictions of a standard PUT request.
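So a minimal class-based setup could look like the sketch below (the model, serializer and view names are placeholders; the serializer limits which fields are writable, and RetrieveUpdateAPIView gives you GET, PUT and PATCH for free):

from rest_framework import serializers, generics

class MyModelStatusSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyModel
        fields = ('id', 'status')

class MyModelStatusView(generics.RetrieveUpdateAPIView):
    queryset = MyModel.objects.all()
    serializer_class = MyModelStatusSerializer

A PATCH request with a body like {"status": "done"} will then update only that field, because the generic view instantiates the serializer with partial=True for PATCH.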
Suppress InsecureRequestWarning: Unverified HTTPS request is being made in Python2.6
I am writing scripts in Python2.6 with use of pyVmomi and while using one of the connection methods: service_instance = connect.SmartConnect(host=args.ip, user=args.user, pwd=args.password) I get the following warning: /usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py:734: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html InsecureRequestWarning) What's interesting is that I do not have urllib3 installed with pip (but it's there in /usr/lib/python2.6/site-packages/requests/packages/urllib3/). I have tried as suggested here import urllib3 ... urllib3.disable_warnings() but that didn't change anything.
The reason doing urllib3.disable_warnings() didn't work for you is because it looks like you're using a separate instance of urllib3 vendored inside of requests. I gather this based on the path here: /usr/lib/python2.6/site-packages/requests/packages/urllib3/connectionpool.py To disable warnings in requests' vendored urllib3, you'll need to import that specific instance of the module: import requests from requests.packages.urllib3.exceptions import InsecureRequestWarning requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
How do I dissolve a pattern in a numpy array?
Excuse the strange title, I couldn't really think of a suitable wording. Say I have an array like: arr = [[0 1 1 1 1 1 1 1 0], [0 0 1 1 1 1 1 0 0], [0 0 0 1 1 1 0 0 0], [0 0 0 0 1 0 0 0 0], [0 0 0 0 0 0 0 0 0]] I'm looking to "etch" away the 1s that touch 0s, which would result in: arr = [[0 0 1 1 1 1 1 0 0], [0 0 0 1 1 1 0 0 0], [0 0 0 0 1 0 0 0 0], [0 0 0 0 0 0 0 0 0], [0 0 0 0 0 0 0 0 0]] . I've tried a few things with the likes of np.roll but it seems inefficient (and has edge effects). Is there a nice short way of doing this?
Morphological erosion can be used here. Morphological erosion sets a pixel at (i, j) to the minimum over all pixels in the neighborhood centered at (i, j). import numpy as np from scipy.ndimage import binary_erosion data = np.array([[0, 1, 1, 1, 1, 1, 1, 1, 0], [0, 0, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]]) structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]) eroded = binary_erosion(data, structure, border_value=1).astype(int) eroded then contains: array([[0, 0, 1, 1, 1, 1, 1, 0, 0], [0, 0, 0, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]])
CPU Flame Graphs for Python
Brendan Gregg's CPU Flame Graphs are a way of visualising CPU usage over a period of time based on call stacks. His FlameGraph github project provides a language-independent way to plot these graphs: For each language, FlameGraph requires a way of providing stack input in the form of lines like this: grandparent_func;parent_func;func 42 This means that the instrumented program was observed running function func, where that was called from parent_func, in turn called from top-level function grandparent_func. It says that call stack was observed 42 times. How can I gather stack information from Python programs and provide it to FlameGraph? For bonus points: How can that be extended so that both the C and Python stack is shown, or even down to the kernel on Linux (in a similar way to some of the Java and node.js flame graphs on Brendan's website)?
Maybe you can try sys.setprofile, which is the core for the standard python profiler profile and cProfile. This method sets a hook to the "call" and "return" events of every function, including those functions of C-API. The system’s profile function is called similarly to the system’s trace function (see settrace()), but it isn’t called for each executed line of code (only on call and return, but the return event is reported even when an exception has been set). Below is a working example: from time import clock t0 = clock() def getFun(frame): code = frame.f_code return code.co_name+' in '+code.co_filename+':'+str(code.co_firstlineno) def trace_dispatch(frame, event, arg): if event in [ "c_call" , 'call', 'return', 'c_return']: t = int((clock()-t0)*1000) f = frame stack=[] while(f): stack.insert( 0,getFun(f) ) f = f.f_back print event, '\t', '; '.join(stack), '; ', t import sys sys.setprofile(trace_dispatch) try: execfile('test.py') finally: sys.setprofile(None) Test.py def f(x): return x+1 def main(x): return f(x) main(10) This will print out c_call 0 call <module> in test.py:2 ; 1 call <module> in test.py:2; main in test.py:5 ; 1 call <module> in test.py:2; main in test.py:5; f in test.py:2 ; 5 return <module> in test.py:2; main in test.py:5; f in test.py:2 ; 8 return <module> in test.py:2; main in test.py:5 ; 11 return <module> in test.py:2 ; 14 c_return 18 c_call 21 See a more comprehensive profiling function here. C stack in python You cannot access the C stack within the python interpreter. It is necessary to use a debugger or profiler that supports C/C++. I would recommand gdb python.
QPython or Kivy for Android programming with Python - producing installable apk
Having read several Q&A's on SO, I realize that one has 2 options i.e. QPython and Kivy to do programming for Android, however, apparently both take different approaches. I am trying to validate my understanding and see if I am missing some key piece of information. QPython allows usage of Kivy library for developing graphical applications QPython and Kivy both use SL4A, while QPython has expanded standard SL4A (or it's bindings for Python) by adding some NFC and similar functions QPython is used to create python scripts that can use wide range of modules, libraries, but they need QPython installed to be executed on target device. There is no way to package script into an apk. Kivy OTOH, allows developer to write applications that compile to apk, using their cloud based build system (alternative - local build system can be set up on Ubuntu Linux) [However, I noticed that most of the sample apk's that use Kivy are pretty large, in the 40MB range. Did I miss anything ?] QPython apk has 2 version i.e. one for Python-2.7 and another one for Python-3.x. For Kivy, I'm not sure which version it is. QPython example script (HelloWorld.py) doesn't seem to behave as expected, from latest QPython-3.x from Market, on an Android Kitkat (4.4.2) system. I get the dialog to enter text, but then I expect a Toast to popup, but nothing happens. Get the impression that both QPython and Kivy are developed by a single developer each (or only one person is really active at present), and don't yet have a biggish community. [This is my biggest concern] I notice that there are 3-4 questions with 'qpython' tag on SO, and more than thousand with 'kivy'! Also get the impression that at this moment Kivy development is somewhat more active (perhaps quite active), but for QPython I don't have a clear picture. Kivy seems to be trying to expand the nature of application that could possibly be written using it, compare to QPython. There are API's like plyer and pyjnius that help expand the possibilities. Perhaps quite significantly, compared to QPython. Both QPython and Kivy seem to be heavily under development. Program (/ script) crashes (/ failures) seem to be reported on both set of tools. Overall, the opinion as a result (of above points) appears to swing in favour of Kivy, a bit more. Is the understanding correct ? Did I miss any crucial point ? This is not a rhetorical question, and I am looking for factual answers only.
QPython allows usage of Kivy library for developing graphical applications Yes, qpython is an interpreter + associated tools, and has some nice kivy integration. You can't compile the kivy code to a standalone apk with qpython+android alone though. QPython and Kivy both use SL4A, while QPython has expanded standard SL4A (or it's bindings for Python) by adding some NFC and similar functions Kivy does not use SL4A. We achieve android api integration mainly through pyjnius, a library for automatically wrapping java classes with python, which lets you call the java api directly. We also have abstracted some standard things to a pythonic interface with plyer. (I saw later that you already have found these) QPython is used to create python scripts that can use wide range of modules, libraries, but they need QPython installed to be executed on target device. There is no way to package script into an apk. I don't use qpython much, but I think this is correct, although there may be some tools turn scripts to apks in some circumstances (e.g. you could use kivy's build tools if you have a kivy interface, or maybe sl4a has something for this). Kivy OTOH, allows developer to write applications that compile to apk, using their cloud based build system (alternative - local build system can be set up on Ubuntu Linux) [However, I noticed that most of the sample apk's that use Kivy are pretty large, in the 40MB range. Did I miss anything ?] We have a basic cloud based build system but nothing else like that right now, almost everyone builds apks on their own machine using our build tools for android. These run on linux or OSX, and can easily be run in a virtual machine if necessary. A minimal app has about 7MB APK size due to the necessity of bundling the python interpreter and a lot of modules. QPython apk has 2 version i.e. one for Python-2.7 and another one for Python-3.x. For Kivy, I'm not sure which version it is. Kivy itself supports python3, but our android build tools only support python2.7 for now. Get the impression that both QPython and Kivy are developed by a single developer each (or only one person is really active at present), and don't yet have a biggish community. [This is my biggest concern] I notice that there are 3-4 questions with 'qpython' tag on SO, and more than thousand with 'kivy'! Kivy development is quite active with several regular contributors plus more sporadic ones. You can see for example recent commit activity on github. It's certainly a lot more than a single developer! Kivy seems to be trying to expand the nature of application that could possibly be written using it, compare to QPython. There are API's like plyer and pyjnius that help expand the possibilities. Perhaps quite significantly, compared to QPython. I think qpython includes pyjnius. Not sure about plyer. Both QPython and Kivy seem to be heavily under development. Program (/ script) crashes (/ failures) seem to be reported on both set of tools. I'm not sure what you're looking at, but any non-trivial project will have crashes/failures/bugs reported. I don't think kivy is particularly inherently unstable. Overall, the opinion as a result (of above points) appears to swing in favour of Kivy, a bit more. I would have said that qpython and kivy are quite different things. Qpython lets you write and run scripts, while kivy is a graphical framework and associated tools to create standalone apps. 
There's some overlap with tasks that could be achieved with both of them, but also plenty of things where they are not both suitable - for instance, I think qpython is probably much more convenient to make quick scripts with no gui for e.g. simple automation (I think this is possible), whereas I think kivy is a far better choice for creating standalone apps with non-trivial guis. Of course you can write kivy code in qpython, as discussed, but this isn't a great user experience for anything non-trivial and you need a desktop/laptop machine anyway to make a standalone apk.
Wheel file installation
How do I install a .whl file? I have the Wheel library but I don't know how to use it to install those files. I have the .whl file but I don't know how to run it. Please help.
You normally use a tool like pip to install wheels. Leave it to the tool to discover and download the file if this is for a project hosted on PyPI. For this to work, you do need to install the wheel package: pip install wheel You can then tell pip to install the project (and it'll download the wheel if available), or the wheel file directly: pip install project_name # discover, download and install pip install wheel_file.whl # directly install the wheel The wheel module, once installed, also is runnable from the command line, you can use this to install already-downloaded wheels: python -m wheel install wheel_file.whl Also see the wheel project documentation.
sudo pip install django
So this is my first attempt at trying to install Django, and when I ran it, it successfully installed Django-1.7.3 but I received these warnings below. I wasn't able to find any information about it online so I was hoping someone could clarify what they mean, if I need to fix them, and how I could go about doing that? Thanks! below is the output from my terminal macbook:~ Asif$ sudo pip install Django Password: The directory '/Users/Asif/Library/Logs/pip' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag. The directory '/Users/Asif/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag. The directory '/Users/Asif/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag. Collecting Django Downloading Django-1.7.3-py2.py3-none-any.whl (7.4MB) 100% |################################| 7.4MB 2.3MB/s Installing collected packages: Django Successfully installed Django-1.7.3
These messages are just telling you that after issuing sudo the current user has changed to root and root isn't the owner of those directories or one of the parent directories. sudo -H sets the $HOME environment variable to /root and would probably hide these but the way you did it is perfectly fine. I'm pretty sure these messages are nothing to be concerned about, but its always good to see that people are reading them and making sure. Also, to verify this, you can try: $ sudo env | less and $ sudo -H env | less and pay attention to the $HOME and $USER variables
Pandas DataFrame to List of Lists
It's easy to turn a list of lists into a pandas dataframe: import pandas as pd df = pd.DataFrame([[1,2,3],[3,4,5]]) But how do I turn df back into a list of lists? lol = df.what_to_do_now? print lol # [[1,2,3],[3,4,5]]
You could access the underlying array and call its tolist method: >>> df = pd.DataFrame([[1,2,3],[3,4,5]]) >>> lol = df.values.tolist() >>> lol [[1L, 2L, 3L], [3L, 4L, 5L]]
Will pandas dataframe object work with sklearn kmeans clustering?
dataset is a pandas DataFrame. This is how I use sklearn.cluster.KMeans: km = KMeans(n_clusters = n_Clusters) km.fit(dataset) prediction = km.predict(dataset) This is how I decide which entity belongs to which cluster: for i in range(len(prediction)): cluster_fit_dict[dataset.index[i]] = prediction[i] This is how dataset looks: A 1 2 3 4 5 6 B 2 3 4 5 6 7 C 1 4 2 7 8 1 ... where A, B, C are indices Is this the correct way of using kmeans? Any help is appreciated. I hope I don't have to pull an all-nighter. Thanks in advance.
Assuming all the values in the dataframe are numeric, import pandas import sklearn.cluster # Convert DataFrame to matrix mat = dataset.as_matrix() # Using sklearn km = sklearn.cluster.KMeans(n_clusters=5) km.fit(mat) # Get cluster assignment labels labels = km.labels_ # Format results as a DataFrame results = pandas.DataFrame([dataset.index,labels]).T Alternatively, you could try KMeans++ for Pandas.
Should I be adding the Django migration files in the .gitignore file?
Should I be adding the Django migration files in the .gitignore file? I've recently been getting a lot of git issues due to migration conflicts and was wondering if I should be marking migration files as ignore. If so, how would I go about adding all of the migrations that I have in my apps, and adding them to the .gitignore file?
Quoting from the Django migrations documentation: The migration files for each app live in a “migrations” directory inside of that app, and are designed to be committed to, and distributed as part of, its codebase. You should be making them once on your development machine and then running the same migrations on your colleagues’ machines, your staging machines, and eventually your production machines. If you follow this process, you shouldn't be getting any merge conflicts in the migration files. To mitigate any issues you currently have, you should specify which repository or branch has the authoritative version of the migration files, and then use git's attribute mechanism to specify the merge strategy "ours" for these files. This will tell git to always ignore external changes to these files and prefer the local version.
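A rough sketch of that last suggestion (the path pattern is an assumption, adjust it to wherever your migrations live; the merge driver has to be configured once per machine, since git does not version configuration):

# .gitattributes in the repository root
**/migrations/*.py merge=ours

# one-time setup on each clone
git config merge.ours.driver true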
Django REST Framework upload image: "The submitted data was not a file"
I am leaning how to upload file in Django, and here I encounter a should-be-trivial problem, with the error: The submitted data was not a file. Check the encoding type on the form. Below is the detail. Note: I also looked at Django Rest Framework ImageField, and I tried serializer = ImageSerializer(data=request.data, files=request.FILES) but I get TypeError: __init__() got an unexpected keyword argument 'files' I have a Image model which I would like to interact with via Django REST framework: models.py class Image(models.Model): image = models.ImageField(upload_to='item_images') owner = models.ForeignKey( User, related_name='uploaded_item_images', blank=False, ) time_created = models.DateTimeField(auto_now_add=True) serializers.py class ImageSerializer(serializers.ModelSerializer): image = serializers.ImageField( max_length=None, use_url=True, ) class Meta: model = Image fields = ("id", 'image', 'owner', 'time_created', ) settings.py 'DEFAULT_PARSER_CLASSES': ( 'rest_framework.parsers.JSONParser', 'rest_framework.parsers.FormParser', 'rest_framework.parsers.MultiPartParser', ), The front end (using AngularJS and angular-restmod or $resource) send JSON data with owner and image of the form: Input: {"owner": 5, "image": "data:image/jpeg;base64,/9j/4QqdRXhpZgAATU0A..."} In the backend, request.data shows {u'owner': 5, u'image': u'data:image/jpeg;base64,/9j/4QqdRXhpZgAATU0AKgAAA..."} But then ImageSerializer(data=request.data).errors shows the error ReturnDict([('image', [u'The submitted data was not a file. Check the encoding type on the form.'])]) I wonder what I should do to fix the error? EDIT: JS part The related front end codes consists of two parts: a angular-file-dnd directive (available here) to drop the file onto the page and angular-restmod, which provides CRUD operations: <!-- The template: according to angular-file-dnd, --> <!-- it will store the dropped image into variable $scope.image --> <div file-dropzone="[image/png, image/jpeg, image/gif]" file="image" class='method' data-max-file-size="3" file-name="imageFileName"> <div layout='row' layout-align='center'> <i class="fa fa-upload" style='font-size:50px;'></i> </div> <div class='text-large'>Drap & drop your photo here</div> </div> # A simple `Image` `model` to perform `POST` $scope.image_resource = Image.$build(); $scope.upload = function() { console.log("uploading"); $scope.image_resource.image = $scope.image; $scope.image_resource.owner = Auth.get_profile().user_id; return $scope.image_resource.$save(); }; An update concerning the problem: right now I switched to using ng-file-upload, which sends image data in proper format.
The problem that you are hitting is that Django REST framework expects files to be uploaded as multipart form data, through the standard file upload methods. This is typically a file field, but the JavaScript Blob object also works for AJAX. You are looking to upload the files using a base64 encoded string, instead of the raw file, which is not supported by default. There are implementations of a Base64ImageField out there, but the most promising one came by a pull request. Since these were mostly designed for Django REST framework 2.x, I've improved upon the one from the pull request and created one that should be compatible with DRF 3. serializers.py from rest_framework import serializers class Base64ImageField(serializers.ImageField): """ A Django REST framework field for handling image-uploads through raw post data. It uses base64 for encoding and decoding the contents of the file. Heavily based on https://github.com/tomchristie/django-rest-framework/pull/1268 Updated for Django REST framework 3. """ def to_internal_value(self, data): from django.core.files.base import ContentFile import base64 import six import uuid # Check if this is a base64 string if isinstance(data, six.string_types): # Check if the base64 string is in the "data:" format if 'data:' in data and ';base64,' in data: # Break out the header from the base64 content header, data = data.split(';base64,') # Try to decode the file. Return validation error if it fails. try: decoded_file = base64.b64decode(data) except TypeError: self.fail('invalid_image') # Generate file name: file_name = str(uuid.uuid4())[:12] # 12 characters are more than enough. # Get the file name extension: file_extension = self.get_file_extension(file_name, decoded_file) complete_file_name = "%s.%s" % (file_name, file_extension, ) data = ContentFile(decoded_file, name=complete_file_name) return super(Base64ImageField, self).to_internal_value(data) def get_file_extension(self, file_name, decoded_file): import imghdr extension = imghdr.what(file_name, decoded_file) extension = "jpg" if extension == "jpeg" else extension return extension This should be used in replacement of the standard ImageField provided by Django REST framework. So your serializer would become class ImageSerializer(serializers.ModelSerializer): image = Base64ImageField( max_length=None, use_url=True, ) class Meta: model = Image fields = ("id", 'image', 'owner', 'time_created', ) This should allow you to either specify a base64-encoded string, or the standard Blob object that Django REST framework typically expects.
Understanding execute async script in Selenium
I've been using selenium (with python bindings and through protractor mostly) for a rather long time and every time I needed to execute a javascript code, I've used execute_script() method. For example, for scrolling the page (python): driver.execute_script("window.scrollTo(0, document.body.scrollHeight);") Or, for infinite scrolling inside an another element (protractor): var div = element(by.css('div.table-scroll')); var lastRow = element(by.css('table#myid tr:last-of-type')); browser.executeScript("return arguments[0].offsetTop;", lastRow.getWebElement()).then(function (offset) { browser.executeScript('arguments[0].scrollTop = arguments[1];', div.getWebElement(), offset).then(function() { // assertions }); }); Or, for getting a dictionary of all element attributes (python): driver.execute_script('var items = {}; for (index = 0; index < arguments[0].attributes.length; ++index) { items[arguments[0].attributes[index].name] = arguments[0].attributes[index].value }; return items;', element) But, WebDriver API also has execute_async_script() which I haven't personally used. What use cases does it cover? When should I use execute_async_script() instead of the regular execute_script()? The question is selenium-specific, but language-agnostic.
When should I use execute_async_script() instead of the regular execute_script()? When it comes to checking conditions on the browser side, all checks you can perform with execute_async_script can be performed with execute_script. Even if what you are checking is asynchronous. I know because once upon a time there was a bug with execute_async_script that made my tests fail if the script returned results too quickly. As far as I can tell, the bug is gone now so I've been using execute_async_script but for months beforehand, I used execute_script for tasks where execute_async_script would have been more natural. For instance, performing a check that requires loading a module with RequireJS to perform the check: driver.execute_script(""" // Reset in case it's been used already. window.__selenium_test_check = undefined; require(["foo"], function (foo) { window.__selenium_test_check = foo.computeSomething(); }); """) result = driver.wait(lambda driver: driver.execute_script("return window.__selenium_test_check;")) The require call is asynchronous. The problem with this though, besides leaking a variable into the global space, is that it multiplies the network requests. Each execute_script call is a network request. The wait method works by polling: it runs the test until the returned value is true. This means one network request per check that wait performs (in the code above). When you test locally it is not a big deal. If you have to go through the network because you are having the browsers provisioned by a service like Sauce Labs (which I use, so I'm talking from experience), each network request slows down your test suite. So using execute_async_script not only allows writing a test that looks more natural (call a callback, as we normally do with asynchronous code, rather than leak into the global space) but it also helps the performance of your tests. result = driver.execute_async_script(""" var done = arguments[0]; require(["foo"], function (foo) { done(foo.computeSomething()); }); """) The way I see it now is that if a test is going to hook into asynchronous code on the browser side to get a result, I use execute_async_script. If it is going to do something for which there is no asynchronous method available, I use execute_script.
Is list[i:j] guaranteed to be an empty list if list[j] precedes list[i]?
The Python tutorial explains slice behavior when indices are negative, but I can't find documentation describing the behavior when the end index precedes the start index. (I've also looked at Explain Python's slice notation, and perhaps I'm not reading carefully enough, but the answers there don't seem to address this point.) The behavior that I observe is that an empty list is returned, which seems reasonable to me. However, it also would seem reasonable to me for this to return a list of items between i and j in reversed order or to simply raise an exception. Is list[i:j] guaranteed to be an empty list if list[j] precedes list[i]?
Yes, if j <= i is true, the resulting slice is empty, for standard Python types. To get the results in reverse order, you need to add a negative stride: list[i:j:-1] because explicit is better than implicit. This is documented in Common Sequence Operations, footnote 4: The slice of s from i to j is defined as the sequence of items with index k such that i <= k < j. If i or j is greater than len(s), use len(s). If i is omitted or None, use 0. If j is omitted or None, use len(s). If i is greater than or equal to j, the slice is empty. Bold emphasis mine. Custom types are free to interpret this differently.
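A quick interactive illustration:

>>> lst = ['a', 'b', 'c', 'd', 'e']
>>> lst[3:1]
[]
>>> lst[3:1:-1]
['d', 'c']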
In Python, why is 'r+' but not 'rw' used to mean "read & write"?
In Python, when opening a file, we use 'r' to indicate read-only and 'w' to indicate write-only. Then we use 'r+' to mean "read and write". Why not use 'rw'? Doesn't 'rw' look more natural than 'r+'? Edit on Jan. 25th: Oh.. I guess my question looks a little confusing.. What I was trying to ask is: 'r' is the first letter of 'read' and 'w' the first letter of 'write', so 'r' and 'w' look natural to map to 'read' and 'write'. However, when it comes to 'read and write', Python uses 'r+' instead of 'rw'. So the question is actually about the naming rationale rather than the behavior differences between them.
Python copies the modes from C's fopen() call. r+ is what C uses, and Python stuck with the 40-year-old convention.
Why does `False is False is False` evaluate to `True`?
Why in Python it is evaluated this way: >>> False is False is False True but when tried with parenthesis is behaving as expected: >>> (False is False) is False False
Chaining operators like a is b is c is equivalent to a is b and b is c. So the first example is False is False and False is False, which evaluates to True and True which evaluates to True Having parenthesis leads to the result of one evaluation being compared with the next variable (as you say you expect), so (a is b) is c compares the result of a is b with c.
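In other words:

>>> a = b = c = False
>>> a is b is c
True
>>> (a is b) and (b is c)   # what the chained form means
True
>>> (a is b) is c           # compares True with False
False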
Is django prefetch_related supposed to work with GenericRelation
UPDATE: An Open Ticked about this issue: 24272 What's all about? Django has a GenericRelation class, which adds a “reverse” generic relationship to enable an additional API. It turns out we can use this reverse-generic-relation for filtering or ordering, but we can't use it inside prefetch_related. I was wondering if this is a bug, or its not supposed to work, or its something that can be implemented in the feature. Let me show you with some examples what I mean. Lets say we have two main models: Movies and Books. Movies have a Director Books have an Author And we want to assign tags to our Movies and Books, but instead of using MovieTag and BookTag models, we want to use a single TaggedItem class with a GFK to Movie or Book. Here is the model structure: from django.db import models from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation from django.contrib.contenttypes.models import ContentType class TaggedItem(models.Model): tag = models.SlugField() content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = GenericForeignKey('content_type', 'object_id') def __unicode__(self): return self.tag class Director(models.Model): name = models.CharField(max_length=100) def __unicode__(self): return self.name class Movie(models.Model): name = models.CharField(max_length=100) director = models.ForeignKey(Director) tags = GenericRelation(TaggedItem, related_query_name='movies') def __unicode__(self): return self.name class Author(models.Model): name = models.CharField(max_length=100) def __unicode__(self): return self.name class Book(models.Model): name = models.CharField(max_length=100) author = models.ForeignKey(Author) tags = GenericRelation(TaggedItem, related_query_name='books') def __unicode__(self): return self.name And some initial data: >>> from tags.models import Book, Movie, Author, Director, TaggedItem >>> a = Author.objects.create(name='E L James') >>> b1 = Book.objects.create(name='Fifty Shades of Grey', author=a) >>> b2 = Book.objects.create(name='Fifty Shades Darker', author=a) >>> b3 = Book.objects.create(name='Fifty Shades Freed', author=a) >>> d = Director.objects.create(name='James Gunn') >>> m1 = Movie.objects.create(name='Guardians of the Galaxy', director=d) >>> t1 = TaggedItem.objects.create(content_object=b1, tag='roman') >>> t2 = TaggedItem.objects.create(content_object=b2, tag='roman') >>> t3 = TaggedItem.objects.create(content_object=b3, tag='roman') >>> t4 = TaggedItem.objects.create(content_object=m1, tag='action movie') So as the docs show we can do stuff like this. >>> b1.tags.all() [<TaggedItem: roman>] >>> m1.tags.all() [<TaggedItem: action movie>] >>> TaggedItem.objects.filter(books__author__name='E L James') [<TaggedItem: roman>, <TaggedItem: roman>, <TaggedItem: roman>] >>> TaggedItem.objects.filter(movies__director__name='James Gunn') [<TaggedItem: action movie>] >>> Book.objects.all().prefetch_related('tags') [<Book: Fifty Shades of Grey>, <Book: Fifty Shades Darker>, <Book: Fifty Shades Freed>] >>> Book.objects.filter(tags__tag='roman') [<Book: Fifty Shades of Grey>, <Book: Fifty Shades Darker>, <Book: Fifty Shades Freed>] But, if we try to prefetch some related data of TaggedItem via this reverse generic relation, we are going to get an AttributeError. >>> TaggedItem.objects.all().prefetch_related('books') Traceback (most recent call last): ... AttributeError: 'Book' object has no attribute 'object_id' Some of you may ask, why I just don't use content_object instead of books here? 
The reason is, because this only work when we want to: 1) prefetch only one level deep from querysets containing different type of content_object. >>> TaggedItem.objects.all().prefetch_related('content_object') [<TaggedItem: roman>, <TaggedItem: roman>, <TaggedItem: roman>, <TaggedItem: action movie>] 2) prefetch many levels but from querysets containing only one type of content_object. >>> TaggedItem.objects.filter(books__author__name='E L James').prefetch_related('content_object__author') [<TaggedItem: roman>, <TaggedItem: roman>, <TaggedItem: roman>] But, if we want both 1) and 2) (to prefetch many levels from queryset containing different types of content_objects, we can't use content_object. >>> TaggedItem.objects.all().prefetch_related('content_object__author') Traceback (most recent call last): ... AttributeError: 'Movie' object has no attribute 'author_id' Django thinks that all content_objects are Books, and thus they have an Author. Now imagine the situation where we want to prefetch not only the books with their author, but also the movies with their director. Here are few attempts. The silly way: >>> TaggedItem.objects.all().prefetch_related( ... 'content_object__author', ... 'content_object__director', ... ) Traceback (most recent call last): ... AttributeError: 'Movie' object has no attribute 'author_id' Maybe with custom Prefetch object? >>> >>> TaggedItem.objects.all().prefetch_related( ... Prefetch('content_object', queryset=Book.objects.all().select_related('author')), ... Prefetch('content_object', queryset=Movie.objects.all().select_related('director')), ... ) Traceback (most recent call last): ... ValueError: Custom queryset can't be used for this lookup. Some solutions of this problem are shown here. But that's a lot of massage over the data which I want to avoid. I really like the API coming from the reversed generic relations, it would be very nice to be able to do prefetchs like that: >>> TaggedItem.objects.all().prefetch_related( ... 'books__author', ... 'movies__director', ... ) Traceback (most recent call last): ... AttributeError: 'Book' object has no attribute 'object_id' Or like that: >>> TaggedItem.objects.all().prefetch_related( ... Prefetch('books', queryset=Book.objects.all().select_related('author')), ... Prefetch('movies', queryset=Movie.objects.all().select_related('director')), ... ) Traceback (most recent call last): ... AttributeError: 'Book' object has no attribute 'object_id' But as you can see, we aways get that AttributeError. I'm using Django 1.7.3 and Python 2.7.6. And i'm curious why Django is throwing that error? Why is Django searching for an object_id in the Book model? Why I think this may be a bug? Usually when we ask prefetch_related to resolve something it can't, we see: >>> TaggedItem.objects.all().prefetch_related('some_field') Traceback (most recent call last): ... AttributeError: Cannot find 'some_field' on TaggedItem object, 'some_field' is an invalid parameter to prefetch_related() But here, it is different. Django actually tries to resolve the relation... and fails. Is this a bug which should be reported? I have never reported anything to Django so that's why I'm asking here first. I'm unable to trace the error and decide for myself if this is a bug, or a feature which could be implemented.
If you want to retrieve Book instances and prefetch the related tags use Book.objects.prefetch_related('tags'). No need to use the reverse relation here. You can also have a look at the related tests in the Django source code. Also the Django documentation states that prefetch_related() is supposed to work with GenericForeignKey and GenericRelation: prefetch_related, on the other hand, does a separate lookup for each relationship, and does the ‘joining’ in Python. This allows it to prefetch many-to-many and many-to-one objects, which cannot be done using select_related, in addition to the foreign key and one-to-one relationships that are supported by select_related. It also supports prefetching of GenericRelation and GenericForeignKey. UPDATE: To prefetch the content_object for a TaggedItem you can use TaggedItem.objects.all().prefetch_related('content_object'), if you want to limit the result to only tagged Book objects you could additionally filter for the ContentType (not sure if prefetch_related works with the related_query_name). If you also want to get the Author together with the book you need to use select_related() not prefetch_related() as this is a ForeignKey relationship, you can combine this in a custom prefetch_related() query: from django.contrib.contenttypes.models import ContentType from django.db.models import Prefetch book_ct = ContentType.objects.get_for_model(Book) TaggedItem.objects.filter(content_type=book_ct).prefetch_related( Prefetch( 'content_object', queryset=Book.objects.all().select_related('author') ) )
Why doesn't the namedtuple module use a metaclass to create nt class objects?
I spent some time investigating the collections.namedtuple module a few weeks ago. The module uses a factory function which populates the dynamic data (the name of the new namedtuple class, and the class attribute names) into a very large string. Then exec is executed with the string (which represents the code) as the argument, and the new class is returned. Does anyone know why it was done this way, when there is a specific tool for this kind of thing readily available, i.e. the metaclass? I haven't tried to do it myself, but it seems like everything that is happening in the namedtuple module could have been easily accomplished using a namedtuple metaclass, like so: class namedtuple(type): etc etc.
There are some hints in the issue 3974. The author proposed a new way to create named tuples, which was rejected with the following comments: It seems the benefit of the original version is that it's faster, thanks to hardcoding critical methods. - Antoine Pitrou There is nothing unholy about using exec. Earlier versions used other approaches and they proved unnecessarily complex and had unexpected problems. It is a key feature for named tuples that they are exactly equivalent to a hand-written class. - Raymond Hettinger Additionally, here is the part of the description of the original namedtuple recipe: ... the recipe has evolved to its current exec-style where we get all of Python's high-speed builtin argument checking for free. The new style of building and exec-ing a template made both the __new__ and __repr__ functions faster and cleaner than in previous versions of this recipe. If you're looking for some alternative implementations: abstract base class + mix-in for named tuples recipe by Jan Kaliszewski metaclass-based implementation by Aaron Iles (see his blog post)
Windows Scipy Install: No Lapack/Blas Resources Found
I am trying to install python and a series of packages onto a 64bit windows 7 desktop. I have installed Python 3.4, have Microsoft Visual Studio C++ installed, and have successfully installed numpy, pandas and a few others. I am getting the following error when trying to install scipy; numpy.distutils.system_info.NotFoundError: no lapack/blas resources found I am using pip install offline, the install command I am using is; pip install --no-index --find-links="S:\python\scipy 0.15.0" scipy I have read the posts on here about requiring a compiler which if I understand correctly is the VS C++ compiler. I am using the 2010 version as I am using Python 3.4. This has worked for other packages. Do I have to use the window binary or is there a way I can get pip install to work? Many thanks for the help
The following link should solve all problems with Windows and SciPy; just choose the appropriate download. I was able to pip install the package with no problems. Every other solution I have tried gave me big headaches. Source: http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy Command: pip install [Local File Location][Your specific file such as scipy-0.16.0-cp27-none-win_amd64.whl] This assumes you have installed the following already: 1) Install Visual Studio 2015/2013 with Python Tools (Is integrated into the setup options on install of 2015) 2) Install Visual Studio C++ Compiler for Python Source: http://www.microsoft.com/en-us/download/details.aspx?id=44266 File Name: VCForPython27.msi 3) Install Python Version of choice Source: python.org File Name (e.g.): python-2.7.10.amd64.msi
Unpickling a python 2 object with python 3
I'm wondering if there is a way to load an object that was pickled in Python 2.4, with Python 3.4. I've been running 2to3 on a large amount of company legacy code to get it up to date. Having done this, when running the file I get the following error: File "H:\fixers - 3.4\addressfixer - 3.4\trunk\lib\address\address_generic.py" , line 382, in read_ref_files d = pickle.load(open(mshelffile, 'rb')) UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 1: ordinal not in range(128) Looking at the pickled object in contention, it's a dict in a dict, containing keys and values of type str. So my question is: is there a way to load an object, originally pickled in Python 2.4, with Python 3.4?
You'll have to tell pickle.load() how to convert Python bytestring data to Python 3 strings, or you can tell pickle to leave them as bytes. The default is to try and decode all string data as ASCII, and that decoding fails. See the pickle.load() documentation: Optional keyword arguments are fix_imports, encoding and errors, which are used to control compatibility support for pickle stream generated by Python 2. If fix_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. The encoding and errors tell pickle how to decode 8-bit string instances pickled by Python 2; these default to ‘ASCII’ and ‘strict’, respectively. The encoding can be ‘bytes’ to read these 8-bit string instances as bytes objects. Setting the encoding to latin1 allows you to import the data directly: with open(mshelffile, 'rb') as f: d = pickle.load(f, encoding='latin1') but you'll need to verify that none of your strings are decoded using the wrong codec; Latin-1 works for any input as it maps the byte values 0-255 to the first 256 Unicode codepoints directly. The alternative would be to load the data with encoding='bytes', and decode all bytes keys and values afterwards.
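As a rough sketch of that alternative, assuming the Python 2 data was written as UTF-8 (adjust the codec if not); decode_bytes is a hypothetical helper, not part of the pickle module:

import pickle

def decode_bytes(obj, codec='utf-8'):
    # Hypothetical helper: recursively turn the bytes objects produced by
    # encoding='bytes' back into str, assuming a known source codec.
    if isinstance(obj, bytes):
        return obj.decode(codec)
    if isinstance(obj, dict):
        return {decode_bytes(k, codec): decode_bytes(v, codec)
                for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(decode_bytes(v, codec) for v in obj)
    return obj

with open(mshelffile, 'rb') as f:
    d = decode_bytes(pickle.load(f, encoding='bytes'))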
How to iterate over columns of pandas dataframe to run regression
I'm sure this is simple, but as a complete newbie to python, I'm having trouble figuring out how to iterate over variables in a pandas dataframe and run a regression with each. Here's what I'm doing: all_data = {} for ticker in ['FIUIX', 'FSAIX', 'FSAVX', 'FSTMX']: all_data[ticker] = web.get_data_yahoo(ticker, '1/1/2010', '1/1/2015') prices = DataFrame({tic: data['Adj Close'] for tic, data in all_data.iteritems()}) returns = prices.pct_change() I know I can run a regression like this: regs = sm.OLS(returns.FIUIX,returns.FSTMX).fit() but suppose I want to do this for each column in the dataframe. In particular, I want to regress FIUIX on FSTMX, and then FSAIX on FSTMX, and then FSAVX on FSTMX. After each regression I want to store the residuals. I've tried various versions of the following, but I must be getting the syntax wrong: resids = {} for k in returns.keys(): reg = sm.OLS(returns[k],returns.FSTMX).fit() resids[k] = reg.resid I think the problem is I don't know how to refer to the returns column by key, so returns[k] is probably wrong. Any guidance on the best way to do this would be much appreciated. Perhaps there's a common pandas approach I'm missing.
Iterating over a DataFrame yields the column labels, so you can loop over the columns directly: for column in df: print(df[column])
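Applied to the regression in the question, that same pattern might look like the sketch below; it assumes returns is the DataFrame built above and statsmodels is imported as sm, and uses missing='drop' because pct_change() leaves a NaN in the first row:

import statsmodels.api as sm

resids = {}
for column in returns:
    if column == 'FSTMX':
        continue  # skip regressing the benchmark on itself
    # OLS of each fund's returns on FSTMX; drop the NaN row from pct_change()
    reg = sm.OLS(returns[column], returns['FSTMX'], missing='drop').fit()
    resids[column] = reg.resid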
Determine if an attribute is a `DeferredAttribute` in django
The Context I have located a rather critical bug in Django Cache Machine that causes its invalidation logic to lose its mind after upgrading from Django 1.4 to 1.7. The bug is localized to invocations of only() on models that extend Cache Machine's CachingMixin. It results in deep recursions that occasionally bust the stack, but otherwise create huge flush_lists that Cache Machine uses for bidirectional invalidation for models in ForeignKey relationships. class MyModel(CachingMixin): id = models.CharField(max_length=50, blank=True) nickname = models.CharField(max_length=50, blank=True) favorite_color = models.CharField(max_length=50, blank=True) content_owner = models.ForeignKey(OtherModel) m = MyModel.objects.only('id').all() The Bug The bug occurs in the following lines (https://github.com/jbalogh/django-cache-machine/blob/f827f05b195ad3fc1b0111131669471d843d631f/caching/base.py#L253-L254). In this case self is an instance of MyModel with a mix of deferred and undeferred attributes: fks = dict((f, getattr(self, f.attname)) for f in self._meta.fields if isinstance(f, models.ForeignKey)) Cache Machine does bidirectional invalidation across ForeignKey relationships. It does this by looping over all the fields in a Model and storing a series of pointers in cache that point to objects that need to be invalidated when the object in question is invalidated. The use of only() in the Django ORM does some metaprogramming magic that overrides the unfetched attributes with Django's DeferredAttribute implementation. Under normal circumstances an access to favorite_color would invoke DeferredAttribute.__get__ (https://github.com/django/django/blob/18f3e79b13947de0bda7c985916d5a04e28936dc/django/db/models/query_utils.py#L121-L146) and fetch the attribute either from the result cache or the data source. It does this by fetching the undeferred representation of the Model in question and calling another only() query on it. This is the problem: when looping over the foreign keys in the Model and accessing their values, Cache Machine introduces an unintentional recursion. getattr(self, f.attname) on an attribute that is deferred induces a fetch of a Model that has the CachingMixin applied and has deferred attributes. This starts the whole caching process over again. The Question I would like to open a PR to fix this and I believe the answer is as simple as skipping over the deferred attributes, but I'm not sure how to do it, because accessing the attribute causes the fetch process to start. If all I have is a handle on an instance of a Model with a mix of deferred and undeferred attributes, is there a way to determine if an attribute is a DeferredAttribute without accessing it? fks = dict((f, getattr(self, f.attname)) for f in self._meta.fields if (isinstance(f, models.ForeignKey) and <f's value isn't a Deferred attribute))
Here is how to check if a field is deferred: from django.db.models.query_utils import DeferredAttribute is_deferred = isinstance(model_instance.__class__.__dict__.get(field.attname), DeferredAttribute) Taken from: https://github.com/django/django/blob/1.9.4/django/db/models/base.py#L393
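Plugged into the loop from the question, the check might look like this untested sketch; the deferred test is the one from the Django source above, and everything else is the question's own code:

from django.db.models.query_utils import DeferredAttribute

fks = dict(
    (f, getattr(self, f.attname))
    for f in self._meta.fields
    if isinstance(f, models.ForeignKey)
    # skip attributes that only() left deferred, so getattr can't recurse
    and not isinstance(self.__class__.__dict__.get(f.attname), DeferredAttribute)
)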
import vs __import__( ) vs importlib.import_module( )?
I noticed Flask was using Werkzeug to __import__ a module, and I was a little confused. I went and checked out the docs on it and saw that it seems to give you more control somehow in terms of where it looks for the module, but I'm not sure exactly how and I have zero idea how it's different from importlib.import_module. The odd thing in the Werkzeug example is that it just says __import__(import_name), so I don't see how that's any different from just using the import statement, since it's ignoring the optional extra parameters. Can anyone explain? I looked at other people having asked similar questions on SO previously but they weren't very clearly phrased questions and the answers didn't address this at all.
__import__ is a low-level hook function that's used to import modules; it can be used to import a module dynamically by giving the module name to import as a variable, something the import statement won't let you do. importlib.import_module() is a wrapper around that hook to produce a nice API for the functionality; it was added in Python 2.7 and has been more fleshed out in Python 3. Codebases that use __import__ generally do so because they want to remain compatible with older Python 2 releases, e.g. anything before Python 2.7. One side-effect of using __import__ can be that it returns the imported module and doesn't add anything to the namespace; you can import with it without then having to delete the new name if you didn't want it; using import somename will add somename to your namespace, but __import__('somename') instead returns the imported module, which you can then ignore. Werkzeug uses the hook for that reason in one location. All other uses are to do with dynamic imports. Werkzeug still supports Python 2.6, so it cannot use importlib.
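A short sketch of the difference in behaviour (the module names here are just examples):

import importlib

name = 'json'  # module name only known at runtime
mod1 = __import__(name)              # returns the module, binds no name
mod2 = importlib.import_module(name)
assert mod1 is mod2

# For dotted names the two differ:
top = __import__('os.path')               # returns the top-level package: os
sub = importlib.import_module('os.path')  # returns os.path itself
print(top.__name__)  # 'os'
print(sub.__name__)  # 'posixpath' (or 'ntpath' on Windows)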
Explain the aggregate functionality in Spark
I am looking for a better explanation of the aggregate functionality that is available via Spark in Python. The example I have is as follows (using pyspark from Spark 1.2.0): sc.parallelize([1,2,3,4]).aggregate( (0, 0), (lambda acc, value: (acc[0] + value, acc[1] + 1)), (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1]))) Output: (10, 4) I get the expected result (10,4), which is the sum of 1+2+3+4 and a count of 4 elements. If I change the initial value passed to the aggregate function to (1,0) from (0,0), I get the following result: sc.parallelize([1,2,3,4]).aggregate( (1, 0), (lambda acc, value: (acc[0] + value, acc[1] + 1)), (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1]))) Output: (19, 4) The value increases by 9. If I change it to (2,0), the value goes to (28,4) and so on. Can someone explain to me how this value is calculated? I expected the value to go up by 1, not by 9; I expected to see (11,4), but instead I am seeing (19,4).
Aggregate lets you transform and combine the values of the RDD at will. It uses two functions: The first one transforms and adds the elements of the original collection [T] into a local aggregate [U] and takes the form: (U,T) => U. You can see it as a fold and therefore it also requires a zero for that operation. This operation is applied locally to each partition in parallel. Here is where the key of the question lies: The only value that should be used here is the ZERO value for the reduction operation. This operation is executed locally on each partition, therefore, anything you add to that zero value is added to the result once per partition of the RDD, plus once more when the partition results are combined (which is why the question's result grew by 9: presumably 8 partitions, plus the final combine). The second operation takes 2 values of the result type of the previous operation [U] and combines them into one value. This operation will reduce the partial results of each partition and produce the actual total. For example: Given an RDD of Strings: val rdd:RDD[String] = ??? Let's say you want to aggregate the lengths of the strings in that RDD, so you would do: 1) The first operation will transform strings into size (int) and accumulate the values for size. val stringSizeCummulator: (Int, String) => Int = (total, string) => total + string.length 2) provide the ZERO for the addition operation (0) val ZERO = 0 3) an operation to add two integers together: val add: (Int, Int) => Int = _ + _ Putting it all together: rdd.aggregate(ZERO, stringSizeCummulator, add) So, why is the ZERO needed? When the cummulator function is applied to the first element of a partition, there's no running total. ZERO is used here. E.g. my RDD is: - Partition 1: ["Jump", "over"] - Partition 2: ["the", "wall"] This will result in: P1: stringSizeCummulator(ZERO, "Jump") = 4 stringSizeCummulator(4, "over") = 8 P2: stringSizeCummulator(ZERO, "the") = 3 stringSizeCummulator(3, "wall") = 7 Reduce: add(P1, P2) = 15
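You can see this from PySpark itself by fixing the number of partitions explicitly; a small sketch (it assumes a SparkContext sc, as in the question):

rdd = sc.parallelize([1, 2, 3, 4], 4)  # force exactly 4 partitions
seq = lambda acc, value: (acc[0] + value, acc[1] + 1)
comb = lambda a, b: (a[0] + b[0], a[1] + b[1])

print(rdd.aggregate((0, 0), seq, comb))  # (10, 4)
# The zero value seeds seq once per partition and comb once at the end,
# so with 4 partitions a zero of (1, 0) is added 4 + 1 = 5 times:
print(rdd.aggregate((1, 0), seq, comb))  # (15, 4)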
Flask app "Restarting with stat"
I've built a few Flask apps, but on my latest project I noticed something a little strange in development mode. The second line of the usual message in the terminal, which always reads: * Running on http://127.0.0.1:5000/ * Restarting with reloader has been replaced by: * Restarting with stat I don't think I've done anything different; in fact, I started by cloning a starter-kit project that I have used many times, which itself does not display this behavior. I also notice that this project consumes about 15% CPU steadily, whereas my other projects are barely a blip. Any ideas why this is happening?
Check your version of Werkzeug. Version 0.10 was just released and numerous changes went into the reloader. One change is that a default polling reloader is used; the old pyinotify reloader was apparently inaccurate. If you want more efficient polling, install the watchdog package. You can see the code related to this here. When Werkzeug can't find watchdog, it uses the stat reloader, otherwise it uses whatever reloader watchdog uses, which can vary by platform. This message is just so you know which one is in use.
Correct way to obtain confidence interval with scipy
I have a 1-dimensional array of data: a = np.array([1,2,3,4,4,4,5,5,5,5,4,4,4,6,7,8]) for which I want to obtain the 68% confidence interval (i.e. the 1-sigma interval). The first comment in this answer states that this can be achieved using scipy.stats.norm.interval, via: from scipy import stats import numpy as np mean, sigma = np.mean(a), np.std(a) conf_int = stats.norm.interval(0.68, loc=mean, scale=sigma) But a comment in this post states that the actual correct way of obtaining the confidence interval is: conf_int = stats.norm.interval(0.68, loc=mean, scale=sigma / np.sqrt(len(a))) that is, a 1/np.sqrt(len(a)) factor is applied to sigma. The question is: which version is the correct one?
The 68% confidence interval for a single draw from a normal distribution with mean mu and std deviation sigma is stats.norm.interval(0.68, loc=mu, scale=sigma) The 68% confidence interval for the mean of N draws from a normal distribution with mean mu and std deviation sigma is stats.norm.interval(0.68, loc=mu, scale=sigma/sqrt(N)) Intuitively, these formulas make sense, since if you hold up a jar of jelly beans and ask a large number of people to guess the number of jelly beans, each individual may be off by a lot -- the same std deviation sigma -- but the average of the guesses will do a remarkably fine job of estimating the actual number and this is reflected by the standard deviation of the mean shrinking by a factor of 1/sqrt(N). If a single draw has variance sigma**2, then by the Bienaymé formula, the sum of N uncorrelated draws has variance N*sigma**2. The mean is equal to the sum divided by N. When you multiply a random variable (like the sum) by a constant, the variance is multiplied by the constant squared. That is Var(cX) = c**2 * Var(X) So the variance of the mean equals (variance of the sum)/N**2 = N * sigma**2 / N**2 = sigma**2 / N and so the standard deviation of the mean (which is the square root of the variance) equals sigma/sqrt(N). This is the origin of the sqrt(N) in the denominator. Here is some example code, based on Tom's code, which demonstrates the claims made above: import numpy as np from scipy import stats N = 10000 a = np.random.normal(0, 1, N) mean, sigma = a.mean(), a.std(ddof=1) conf_int_a = stats.norm.interval(0.68, loc=mean, scale=sigma) print('{:0.2%} of the single draws are in conf_int_a' .format(((a >= conf_int_a[0]) & (a < conf_int_a[1])).sum() / float(N))) M = 1000 b = np.random.normal(0, 1, (N, M)).mean(axis=1) conf_int_b = stats.norm.interval(0.68, loc=0, scale=1 / np.sqrt(M)) print('{:0.2%} of the means are in conf_int_b' .format(((b >= conf_int_b[0]) & (b < conf_int_b[1])).sum() / float(N))) prints 68.03% of the single draws are in conf_int_a 67.78% of the means are in conf_int_b Beware that if you define conf_int_b with the estimates for mean and sigma based on the sample a, the mean may not fall in conf_int_b with the desired frequency. If you take a sample from a distribution and compute the sample mean and std deviation, mean, sigma = a.mean(), a.std() be careful to note that there is no guarantee that these will equal the population mean and standard deviation and that we are assuming the population is normally distributed -- those are not automatic givens! If you take a sample and want to estimate the population mean and standard deviation, you should use mean, sigma = a.mean(), a.std(ddof=1) since this value for sigma is the unbiased estimator for the population standard deviation.
python equivalent of get() in R (= use string to retrieve value of symbol)
In R, the get(s) function retrieves the value of the symbol whose name is stored in the character variable (vector) s, e.g. X <- 10 r <- "XVI" s <- substr(r,1,1) ## "X" get(s) ## 10 takes the first symbol of the Roman numeral r and translates it to its integer equivalent. Despite spending a while poking through R-Python dictionaries and Googling various combinations of "metaprogramming", "programming on the language", "symbol", "string", etc., I haven't come up with anything. (I am a very experienced R user and a novice Python user.) (I know the example above is a (very!) poor way to approach the problem. I'm interested in the general answer to this question, not specifically in converting Roman numerals to integers ...)
You can use locals(): s = 1 locals()['s'] EDIT: Actually, get in R is more versatile: get('as.list') will give you back as.list. For class members, in Python, we can use getattr, and for built-in things like len, getattr(__builtins__, 'len') works.
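As a sketch, the Roman numeral example from the question translates roughly like this; at module level, globals() plays the role of R's environment:

X = 10
r = "XVI"
s = r[0]             # "X", like substr(r, 1, 1) in R
print(globals()[s])  # 10, the equivalent of get(s)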
Problems in implementing Horner's method in Python
So I have written code for evaluating a polynomial using three different methods. Horner's method should be the fastest, while the naive method should be the slowest, right? But how come the computation times are not what I expect? And the time for calculation sometimes turns out to be exactly the same for the itera and naive methods. What's wrong with it? import numpy.random as npr import time def Horner(c,x): p=0 for i in c[-1::-1]: p = p*x+i return p def naive(c,x): n = len(c) p = 0 for i in range(len(c)): p += c[i]*x**i return p def itera(c,x): p = 0 xi = 1 for i in range(len(c)): p += c[i]*xi xi *= x return p c=npr.uniform(size=(500,1)) x=-1.34 start_time=time.time() print Horner(c,x) print time.time()-start_time start_time=time.time() print itera(c,x) print time.time()-start_time start_time=time.time() print naive(c,x) print time.time()-start_time Here are some of the results: [ 2.58646959e+69] 0.00699996948242 [ 2.58646959e+69] 0.00600004196167 [ 2.58646959e+69] 0.00600004196167 [ -3.30717922e+69] 0.00899982452393 [ -3.30717922e+69] 0.00600004196167 [ -3.30717922e+69] 0.00600004196167 [ -2.83469309e+69] 0.00999999046326 [ -2.83469309e+69] 0.00999999046326 [ -2.83469309e+69] 0.0120000839233
Your profiling can be much improved. Plus, we can make your code run 200-500x faster. (1) Rinse and repeat You can't run just one iteration of a performance test, for two reasons. Your time resolution might not be good enough. This is why you sometimes got the same time for two implementations: the time for one run was near the resolution of your timing mechanism, so you recorded only one "tick". There are all sorts of factors that affect performance. Your best bet for a meaningful comparison will be a lot of iterations. You don't need gazillions of runs (though, of course, that doesn't hurt), but you estimate and adjust the number of iterations until the variance is within a level acceptable to your purpose. timeit is a nice little module for profiling Python code. I added this to the bottom of your script. import timeit n = 1000 print 'Horner', timeit.timeit( number = n, setup='from __main__ import Horner, c, x', stmt='Horner(c,x)' ) print 'naive', timeit.timeit( number = n, setup='from __main__ import naive, c, x', stmt='naive(c,x)', ) print 'itera', timeit.timeit( number = n, setup='from __main__ import itera, c, x', stmt='itera(c,x)', ) Which produces: Horner 1.8656351566314697 naive 2.2408010959625244 itera 1.9751169681549072 Horner is the fastest, but it's not exactly blowing the doors off the other two. (2) Look at what is happening...very carefully Python has operator overloading, so it's easy to miss seeing this. npr.uniform(size=(500,1)) is giving you a 500 x 1 numpy structure of random numbers. So what? Well, c[i] isn't a number. It's a numpy array with one element. Numpy overloads the operators so you can do things like multiply an array by a scalar. That's fine, but using an array for every element is a lot of overhead, so it's harder to see the difference between the algorithms. Instead, let's try a simple Python list: import random c = [random.random() for _ in range(500)] And now, Horner 0.034661054611206055 naive 0.12771987915039062 itera 0.07331395149230957 Whoa! All the times just got faster (by 10-60x). Proportionally, the Horner implementation got even faster than the other two. We removed the overhead on all three, and can now see the "bare bones" difference. Horner is 4x faster than naive and 2x faster than itera. (3) Alternate runtimes You're using Python 2. I assume 2.7. Let's see how Python 3.4 fares. (Syntax adjustment: you'll need to put parentheses around the argument list to print.) Horner 0.03298933599944576 naive 0.13706714100044337 itera 0.06771054599812487 About the same. Let's try PyPy, a JIT implementation of Python. (The "normal" Python implementation is called CPython.) Horner 0.006507158279418945 naive 0.07541298866271973 itera 0.005059003829956055 Nice! Each implementation is now running 2-5x faster. Horner is now 10x the speed of naive, but slightly slower than itera. JIT runtimes are more difficult to profile than interpreters. Let's increase the number of iterations to 50000, and try it just to make sure. Horner 0.12749004364013672 naive 3.2823100090026855 itera 0.06546688079833984 (Note that we have 50x the iterations, but only 20x the time...the JIT hadn't taken full effect for many of the first 1000 runs.) Same conclusions, but the differences are even more pronounced. Granted, the idea of JIT is to profile, analyze, and rewrite the program at runtime, so if your goal is to compare algorithms, this is going to add a lot of non-obvious implementation detail. Nonetheless, comparing runtimes can be useful in giving a broader perspective.
There are a few more things. For example, your naive implementation computes a variable it never uses. You use range instead of xrange. You could try iterating backwards with an index rather than a reverse slice. Etc. None of these changed the results much for me, but they were worth considering.
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application
I am working on a Django project with virtualenv and connecting it to a local postgres database. When I run the project, it says: ImportError: No module named psycopg2.extensions Then I used this command to install it: pip install psycopg2 but during the installation it gives the following error: Downloading/unpacking psycopg2==2.4.4 Downloading psycopg2-2.4.4.tar.gz (648kB): 648kB downloaded Running setup.py (path:/home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2/setup.py) egg_info for package psycopg2 Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application. Complete output from command python setup.py egg_info: running egg_info creating pip-egg-info/psycopg2.egg-info writing pip-egg-info/psycopg2.egg-info/PKG-INFO writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found Error: You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application. ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 in /home/muhammadtaqi/Projects/MyProjects/OnlineElectionCampaign/venv/build/psycopg2 Storing debug log for failure in /home/muhammadtaqi/.pip/pip.log
Use the following commands; this will solve the error: sudo apt-get install postgresql then: sudo apt-get install python-psycopg2 and last: sudo apt-get install libpq-dev
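Once libpq-dev is in place, re-running the original pip command inside the virtualenv should then build the extension cleanly; a sketch (python-dev is an assumption, only needed if the Python headers are missing):

sudo apt-get install libpq-dev python-dev
pip install psycopg2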