How to dynamically build a JSON object with Python?
I am new to Python and I am playing with JSON data. I would like to dynamically build a JSON object by adding some key-value to an existing JSON object. I tried the following but I get TypeError: 'str' object does not support item assignment: import json json_data = json.dumps({}) json_data["key"] = "value" print 'JSON: ', json_data
You build the object before encoding it to a JSON string: data = {} data['key'] = 'value' json_data = json.dumps(data) JSON is a serialization format, textual data representing a structure. It is not, itself, that structure.
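If the starting point is an existing JSON string rather than an empty dict, the same idea applies: decode it to a dict, modify the dict, then re-encode. A minimal sketch (the "existing" key is just an illustration, not from the question):

import json

json_string = '{"existing": 1}'
data = json.loads(json_string)    # JSON text -> Python dict
data['key'] = 'value'             # modify the structure, not the text
json_data = json.dumps(data)      # Python dict -> JSON text
print 'JSON: ', json_data         # e.g. {"existing": 1, "key": "value"} (key order may vary)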
Websockets in Flask
I'm currently researching websocket support in Python and am a bit confused with the offerings. On one hand it's possible to use Flask + gevent. On the other hand, uwsgi has socket support and at last there is an extension that bundles both uwsgi and gevent. What's the problem with implementing websockets with only one of these? What do I win by mixing them? Changing the question What does adding gevent do that threaded uwsgi won't?
In regular HTTP requests the connections between client and server are short-lived: a client connects to the server, sends a request, receives the response and then closes the connection. In this model the server can serve a large number of clients using a small number of workers. The concurrency model in this situation is typically based on threads, processes or a combination of both. When you use websockets the problem is more complex, because a websocket connection is open for a long period of time, so the server cannot use a small pool of workers to serve a large number of clients; each client needs to get its own dedicated worker. If you use threads and/or processes then your app will not scale to support a large number of clients, because you can't have a large number of threads/processes. This is where gevent enters the picture. Gevent has a concurrency model based on greenlets, which scale much better than threads/processes. So serving websocket connections with a gevent-based server allows you to support more clients, due to the lightweight nature of greenlets. With uWSGI you have a choice of concurrency models to use with websockets, and that includes the greenlet-based model from gevent. You can also use gevent's web server standalone if you want. But note that gevent does not know anything about websockets; it is just a server. To use websocket connections you have to add an implementation of the websocket server. There are two extensions for Flask that simplify the use of websockets. The Flask-Sockets extension by Kenneth Reitz is a wrapper for gevent and gevent-websocket. The Flask-SocketIO extension (shameless plug, as I'm the author) is a wrapper for gevent and gevent-socketio on the server, plus Socket.IO on the client. Socket.IO is a higher-level socket protocol that can use WebSocket as a transport if available, but can also fall back to other transport mechanisms on older browsers. I hope this helps!
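For illustration, a minimal Flask-Sockets echo endpoint looks roughly like the sketch below (it assumes gevent and gevent-websocket are installed; the route name and echo logic are arbitrary examples, not part of the question):

from flask import Flask
from flask_sockets import Sockets

app = Flask(__name__)
sockets = Sockets(app)

@sockets.route('/echo')
def echo_socket(ws):
    # each connected client keeps this loop running in its own greenlet
    while not ws.closed:
        message = ws.receive()
        if message is not None:
            ws.send(message)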
Python super __init__ inheritance
I have the following Python 2.7 code: class Frame: def __init__(self, image): self.image = image class Eye(Frame): def __init__(self, image): super(Eye, self).__init__() self.some_other_defined_stuff() I'm trying to extend the __init__() method so that when I instantiate an 'Eye' it does a bunch of other stuff (self.some_other_defined_stuff()), in addition to what Frame sets up. Frame.__init__() needs to run first. I get the following error: super(Eye, self).__init__() TypeError: must be type, not classobj Which I do not understand the logical cause of. Can someone explain please? I'm used to just typing 'super' in ruby.
There are two errors here: super() only works for new-style classes; use object as a base class for Frame to make it use new-style semantics. You still need to call the overridden method with the right arguments; pass in image to the __init__ call. So the correct code would be: class Frame(object): def __init__(self, image): self.image = image class Eye(Frame): def __init__(self, image): super(Eye, self).__init__(image) self.some_other_defined_stuff()
How to install python opencv through Conda?
I'm trying to install opencv for python through anaconda, but I can't seem to figure this out. I tried conda install opencv conda install cv2 I also tried searching conda search cv No cigar. I ran across this which lists opencv as an included package: http://docs.continuum.io/anaconda/pkgs.html After running conda info I noticed my version is 3.4.1 but I couldn't seem to find any information about this version online. I'm pretty confused about this. Am I missing something pretty obvious here? If opencv was available for a previous version of anaconda, then why wouldn't it be available for the newer version? And why does that link only show me docs for version 1.9.2?
You can install it using binstar: conda install -c https://conda.binstar.org/menpo opencv
python requests - POST Multipart/form-data without filename in HTTP request
I am trying to replicate the following POST request using the requests module in python: POST /example/asdfas HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Content-Type: multipart/form-data; boundary=---------------------------241652170216373 Content-Length: 279 -----------------------------241652170216373 Content-Disposition: form-data; name="value_1" 12345 -----------------------------241652170216373 Content-Disposition: form-data; name="value_2" 67890 -----------------------------241652170216373-- The documentation for requests suggests that the files argument should be used. When I attempt the following call: import requests requests.post('http://example.com/example/asdfas', files={'value_1': '12345', 'value_2': '67890'}) I get the following HTTP request: 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, compress', 'Content-Length': '264', 'User-Agent': 'python-requests/2.2.1 CPython/3.3.2 Windows/7', 'Content-Type': 'multipart/form-data; boundary=273f13699c02429db4eb95c97f757d38' --273f13699c02429db4eb95c97f757d38 Content-Disposition: form-data; name="value_1"; filename="value_1" 12345 --273f13699c02429db4eb95c97f757d38 Content-Disposition: form-data; name="value_2"; filename="value_2" 67890 --273f13699c02429db4eb95c97f757d38-- I have also tried to use the data argument: import requests requests.post('http://example.com/example/asdfas', data={'value_1': '12345', 'value_2': '67890'}) resulting in the following HTTP request: 'Content-Type': 'application/x-www-form-urlencoded', 'Content-Length': '27', 'User-Agent': 'python-requests/2.2.1 CPython/3.3.2 Windows/7', 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, compress' value_2=67890&value_1=12345 The issue I'm having is that using the files argument results in a call that the server doesn't recognize, presumably due to the unexpected "filename" information sent in the HTTP request. Using the data argument sends the wrong Content-Type header. The first request is known to be working on the server I wish to send the request to - what is the correct function call to identically replicate the first HTTP request? Edit: Sample HTML form to replicate the working request: <!DOCTYPE html> <html> <head> <title></title> </head> <body> <form action="http://example.com/example/asdfas" method="post" enctype="multipart/form-data"> <label for="v1">Value 1</label> <input id="v1" type="text" name="value_1"> <label for="v2">Value 2</label> <input id="v2" type="text" name="value_2"> <input type="submit"> </form> </body> </html>
The solution is to use tuples when passing parameters to the files argument: import requests requests.post('http://example.com/example/asdfas', files={'value_1': (None, '12345'), 'value_2': (None, '67890')}) Works as expected: 'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate, compress', 'Content-Length': '228', 'User-Agent': 'python-requests/2.2.1 CPython/3.3.2 Windows/7', 'Content-Type': 'multipart/form-data; boundary=85e90a4bbb05474ca1e23dbebdd68ed9' --85e90a4bbb05474ca1e23dbebdd68ed9 Content-Disposition: form-data; name="value_1" 12345 --85e90a4bbb05474ca1e23dbebdd68ed9 Content-Disposition: form-data; name="value_2" 67890 --85e90a4bbb05474ca1e23dbebdd68ed9--
How do I find out what key failed in Python KeyError?
If I catch a KeyError, how can I tell what lookup failed? def POIJSON2XML (location_node, POI_JSON): try: man_JSON = POI_JSON["FastestMan"] woman_JSON = POI_JSON["FastestWoman"] except KeyError: # How can I tell what key ("FastestMan" or "FastestWoman") caused the error? LogErrorMessage ("POIJSON2XML", "Can't find mandatory key in JSON")
Take the current exception (I used it as e in this case); then for a KeyError the first argument is the key that raised the exception. Therefore we can do: except KeyError as e: # One would do it as 'KeyError, e:' in Python 2. cause = e.args[0] With that, you have the offending key stored in cause.
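Applied to the function from the question (LogErrorMessage is the asker's own helper), a sketch looks like this:

def POIJSON2XML(location_node, POI_JSON):
    try:
        man_JSON = POI_JSON["FastestMan"]
        woman_JSON = POI_JSON["FastestWoman"]
    except KeyError as e:
        missing_key = e.args[0]  # whichever of "FastestMan"/"FastestWoman" was absent
        LogErrorMessage("POIJSON2XML", "Can't find mandatory key '%s' in JSON" % missing_key)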
Django Get All Users
I am just starting out with Django and I am messing around just trying to pull a full list of users from postgres. I used the following code: group = Group.objects.get(name="Admins") usersList = group.user_set.all() How could you pull all users? I don't want to have to pick or assign a group. group = Group.objects.get() #Doesn't Work. usersList = group.user_set.all()
from django.contrib.auth.models import User users = User.objects.all()
iterrows pandas get next rows value
I have a df in pandas import pandas as pd df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value']) I want to iterate over rows in df. For each row I want the row's value and the next row's value. Something like (it does not work): for i, row in df.iterrows(): print row['value'] i1, row1 = next(df.iterrows()) print row1['value'] As a result I want 'AA' 'BB' 'BB' 'CC' 'CC' *Wrong index error here At this point I have a messy way to solve this: for i in range(0, df.shape[0]): print df.irow(i)['value'] print df.irow(i+1)['value'] Is there a more efficient way to solve this issue?
Firstly, your "messy way" is ok, there's nothing wrong with using indices into the dataframe, and this will not be too slow. iterrows() itself isn't terribly fast. A version of your first idea that would work would be: row_iterator = df.iterrows() _, last = row_iterator.next() # take first item from row_iterator for i, row in row_iterator: print(row['value']) print(last['value']) last = row The second method could do something similar, to save one index into the dataframe: last = df.irow(0) for i in range(1, df.shape[0]): print(last) print(df.irow(i)) last = df.irow(i) When speed is critical you can always try both and time the code.
What happens if an object's __hash__ changes?
In Python, I know that the value __hash__ returns for a given object is supposed to be the same for the lifetime of that object. But, out of curiosity, what happens if it isn't? What sort of havoc would this cause? class BadIdea(object): def __hash__(self): return random.randint(0, 10000) I know __contains__ and __getitem__ would behave strangely, and dicts and sets would act odd because of that. You also might end up with "orphaned" values in the dict/set. What else could happen? Could it crash the interpreter, or corrupt internal structures?
Your main problem would indeed be with dicts and sets. If you insert an object into a dict/set, and that object's hash changes, then when you try to retrieve that object you will end up looking in a different spot in the dict/set's underlying array and hence won't find the object. This is precisely why dict keys should always be immutable. Here's a small example: let's say we put o into a dict, and o's initial hash is 3. We would do something like this (a slight simplification but gets the point across): Hash table: 0 1 2 3 4 5 6 7 +---+---+---+---+---+---+---+---+ | | | | o | | | | | +---+---+---+---+---+---+---+---+ ^ we put o here, since it hashed to 3 Now let's say the hash of o changes to 6. If we want to retrieve o from the dict, we'll look at spot 6, but there's nothing there! This will cause a false negative when querying the data structure. In reality, each element of the array above could have a "value" associated with it in the case of a dict, and there could be multiple elements in a single spot (e.g. a hash collision). Also, we'd generally take the hash value modulo the size of the array when deciding where to put the element. Irrespective of all these details, though, the example above still accurately conveys what could go wrong when the hash code of an object changes. Could it crash the interpreter, or corrupt internal structures? No, this won't happen. When we say an object's hash changing is "dangerous", we mean dangerous in the sense that it essentially defeats the purpose of hashing and makes the code difficult if not impossible to reason about. We don't mean dangerous in the sense that it could cause crashes.
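A short demonstration of that false negative, using the class from the question (the "almost always" hedging is because a freshly generated random hash could in principle match the stored one):

import random

class BadIdea(object):
    def __hash__(self):
        return random.randint(0, 10000)

o = BadIdea()
d = {o: "some value"}
# The lookup recomputes the hash, which almost certainly differs from the one
# stored at insertion time, so the probe misses and the key looks absent.
print(o in d)      # almost always False
print(d.get(o))    # almost always None, even though o was just inserted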
lxml runtime error: Reason: Incompatible library version: etree.so requires version 12.0.0 or later, but libxml2.2.dylib provides version 10.0.0
I have a perplexing problem. I have used mac version 10.9, anaconda 3.4.1, python 2.7.6. Developing web application with python-amazon-product-api. i have overcome an obstacle about installing lxml, referencing clang error: unknown argument: '-mno-fused-madd' (python package installation failure). but another runtime error happened. Here is the output from webbrowser. Exception Type: ImportError Exception Value: dlopen(/Users/User_Name/Documents/App_Name/lib/python2.7/site-packages/lxml/etree.so, 2): Library not loaded: libxml2.2.dylib Referenced from: /Users/User_Name/Documents/App_Name/lib/python2.7/site-packages/lxml/etree.so Reason: Incompatible library version: etree.so requires version 12.0.0 or later, but libxml2.2.dylib provides version 10.0.0 Not sure how to proceed and have searched here and elsewhere for this particular error. Any help is much appreciated!
This worked for me: brew install libxml2 brew install libxslt brew link libxml2 --force brew link libxslt --force
TypeError - Translate takes one argument.(2 given) Python
I have the following code import nltk, os, json, csv, string, cPickle from scipy.stats import scoreatpercentile lmtzr = nltk.stem.wordnet.WordNetLemmatizer() def sanitize(wordList): answer = [word.translate(None, string.punctuation) for word in wordList] answer = [lmtzr.lemmatize(word.lower()) for word in answer] return answer words = [] for filename in json_list: words.extend([sanitize(nltk.word_tokenize(' '.join([tweet['text'] for tweet in json.load(open(filename,READ))])))]) I've tested lines 2-4 in a separate testing.py file when I wrote import nltk, os, json, csv, string, cPickle from scipy.stats import scoreatpercentile wordList= ['\'the', 'the', '"the'] print wordList wordList2 = [word.translate(None, string.punctuation) for word in wordList] print wordList2 answer = [lmtzr.lemmatize(word.lower()) for word in wordList2] print answer freq = nltk.FreqDist(wordList2) print freq and the command prompt returns ['the','the','the'], which is what I wanted (removing punctuation). However, when I put the exact same code in a different file, python returns a TypeError stating that File "foo.py", line 8, in <module> for tweet in json.load(open(filename, READ))])))]) File "foo.py", line 2, in sanitize answer = [word.translate(None, string.punctuation) for word in wordList] TypeError: translate() takes exactly one argument (2 given) json_list is a list of all the file paths (I printed and check that this list is valid). I'm confused on this TypeError because everything works perfectly fine when I'm just testing it in a different file.
I suspect your issue has to do with the differences between str.translate and unicode.translate (these are also the differences between str.translate on Python 2 versus Python 3). I suspect your original code is being sent unicode instances while your test code is using regular 8-bit str instances. I don't suggest converting Unicode strings back to regular str instances, since unicode is a much better type for handling text data (and it is the future!). Instead, you should just adapt to the new unicode.translate syntax. With regular str.translate (on Python 2), you can pass an optional deletechars argument and the characters in it would be removed from the string. For unicode.translate (and str.translate on Python 3), the extra argument is no longer allowed, but translation table entries with None as their value will be deleted from the output. To solve the problem you'll need to create an appropriate translation table. A translation table is a dictionary mapping from Unicode ordinals (that is, ints) to ordinals, strings or None. A helper function for making them exists in Python 2 as string.maketrans (and Python 3 as a method of the str type), but the Python 2 version of it doesn't handle the case we care about (putting None values into the table). You can build an appropriate dictionary yourself with something like {ord(c): None for c in string.punctuation}.
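A sketch of a unicode-friendly version of the asker's sanitize helper built on that idea (punctuation removal only; the lemmatization step is left out):

import string

# map each punctuation character's ordinal to None, meaning "delete it"
PUNCT_TABLE = {ord(c): None for c in string.punctuation}

def sanitize(word_list):
    return [word.translate(PUNCT_TABLE) for word in word_list]

print(sanitize([u"'the", u'the', u'"the']))   # [u'the', u'the', u'the']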
ImportError: No module named _io in ubuntu 14.04
I just fresh installed ubuntu 14.04LTS and i am trying to use pip but i am getting the following traceback: (nlmanagement)psychok7@Ultrabook:~/code/work/nlmanagement$ pip freeze Traceback (most recent call last): File "/home/psychok7/code/work/venv/nlmanagement/bin/pip", line 9, in <module> load_entry_point('pip==1.1', 'console_scripts', 'pip')() File "/home/psychok7/code/work/venv/nlmanagement/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 337, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/home/psychok7/code/work/venv/nlmanagement/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 2279, in load_entry_point return ep.load() File "/home/psychok7/code/work/venv/nlmanagement/local/lib/python2.7/site-packages/distribute-0.6.24-py2.7.egg/pkg_resources.py", line 1989, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) File "/home/psychok7/code/work/venv/nlmanagement/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/__init__.py", line 10, in <module> from pip.backwardcompat import walk_packages, console_to_str File "/home/psychok7/code/work/venv/nlmanagement/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg/pip/backwardcompat.py", line 77, in <module> from urllib2 import URLError, HTTPError File "/usr/lib/python2.7/urllib2.py", line 94, in <module> import httplib File "/usr/lib/python2.7/httplib.py", line 79, in <module> import mimetools File "/usr/lib/python2.7/mimetools.py", line 6, in <module> import tempfile File "/usr/lib/python2.7/tempfile.py", line 32, in <module> import io as _io File "/usr/lib/python2.7/io.py", line 51, in <module> import _io ImportError: No module named _io any ideas?? i have tried sudo apt-get install python3-pip and sudo apt-get install python-pip
I had the same problem. This happened on Ubuntu 14.04 with the virtual environment's Python version being 2.7.3. After spending a day debugging, I'm posting my answer here; I hope it helps future visitors. I found that io.py invokes the _io module. I think the _io module is internal to the interpreter, so replacing the Python binary alone with the newer version fixes it. (In my case, Cloudera Manager 5.x Agent's virtualenv on Ubuntu 14.04 was using Python interpreter 2.7.3; I replaced it with 2.7.6.) echo "Using " && python2.7 --version mv myenv/bin/python myenv/bin/python.bak; cp `which python2.7` myenv/bin/python When a new virtualenv is created, it uses the updated Python binary. Hence, that explains the accepted answer in the thread.
Python: Checking if a 'Dictionary' is empty doesn't seem to work
I am trying to check if a dictionary is empty, but it doesn't behave properly. It just skips the check and displays ONLINE without anything, instead of displaying the message. Any ideas why? def isEmpty(self, dictionary): for element in dictionary: if element: return True return False def onMessage(self, socket, message): if self.isEmpty(self.users) == False: socket.send("Nobody is online, please use REGISTER command" \ " in order to register into the server") else: socket.send("ONLINE " + ' ' .join(self.users.keys()))
Empty dictionaries evaluate to False in Python: >>> dct = {} >>> bool(dct) False >>> not dct True >>> Thus, your isEmpty function is unnecessary. All you need to do is: def onMessage(self, socket, message): if not self.users: socket.send("Nobody is online, please use REGISTER command" \ " in order to register into the server") else: socket.send("ONLINE " + ' ' .join(self.users.keys()))
How can a plug-in enhance Anki's JavaScript?
Anki enables cards to use JavaScript. For example, a card can contain something like: <script> //JavaScript code here </script> and the JavaScript code will be executed when the card is shown. In order to allow more flexibility by enabling such scripts to interact with the Anki back-end (for example in order to change the values of the note's fields, to add tags, to affect the scheduling etc), I would like to write a plug-in for Anki (version 2) that would implement some back-end functions and enable a card's JavaScript script to invoke them. For example, say I have a (Python) function in my plug-in that interacts with Anki's objects: def myFunc(): # use plug-in's ability to interact with Anki's objects to do stuff I want to be able to allow cards' JavaScript to invoke that function, for example to have something like this in a card: <script> myFunc(); // This should invoke the plug-in's myFunc(). </script> I know how to add hooks so that various Anki events invoke my plug-in's functions, but I want to allow the JavaScript from within a card to do so. Can this at all be done, and if so then how? Thanks!
Having read the post linked to by @Louis, and discussed the issue with some colleagues, and messed around trying various things out, I've finally managed to come up with a solution: The idea can be summarised in these two key points (and two sub-key points): The plug-in can create one or more objects that will be "exposed" to the cards' JavaScript scripts, so that card scripts can access these objects - their fields and methods - as if they were part of the scripts' scope. In order to do it the objects must be instances of a specific class (or subclass thereof), and each method and property that is to be exposed to card scripts must be declared as such with a proper PyQt decorator. And PyQt provides the functionality to "inject" such objects into a webview. The plug-in has to ensure this injection occurs every time Anki's reviewer's webview is (re-)initialised. The following code shows how to achieve this. It provides card scripts with a way to check the current state ("question" or "answer") and with a way to access (read, and - more importantly - write) the note's fields. from aqt import mw # Anki's main window object from aqt.qt import QObject # Our exposed object will be an instance of a subclass of QObject. from aqt.qt import pyqtSlot # a decorator for exposed methods from aqt.qt import pyqtProperty # a decorator for exposed properties from anki.hooks import wrap # We will need this to hook to specific Anki functions in order to make sure the injection happens in time. # a class whose instance(s) we can expose to card scripts class CardScriptObject(QObject): # some "private" fields - card scripts cannot access these directly _state = None _card = None _note = None # Using pyqtProperty we create a property accessible from the card script. # We have to provide the type of the property (in this case str). # The second argument is a getter method. # This property is read-only. To make it writeable we would add a setter method as a third argument. state = pyqtProperty(str, lambda self: self._state) # The following methods are exposed to the card script owing to the pyqtSlot decorator. # Without it they would be "private". @pyqtSlot(str, result = str) # We have to provide the argument type(s) (excluding self), # as well as the type of the return value - with the named result argument, if a value is to be returned. def getField(self, name): return self._note[name] # Another method, without a return value: @pyqtSlot(str, str) def setField(self, name, value): self._note[name] = value self._note.flush() # An example of a method that can be invoked with two different signatures - # pyqtSlot has to be used for each possible signature: # (This method replaces the above two. # All three have been included here for the sake of the example.) @pyqtSlot(str, result = str) @pyqtSlot(str, str) def field(self, name, value = None): # sets a field if value given, gets a field otherwise if value is None: return self._note[name] self._note[name] = value self._note.flush() cardScriptObject = CardScriptObject() # the object to expose to card scripts flag = None # This flag is used in the injection process, which follows. # This is a hook to Anki's reviewer's _initWeb method. # It lets the plug-in know the reviewer's webview is being initialised. # (It would be too early to perform the injection here, as this method is called before the webview is initialised. # And it would be too late to do it after _initWeb, as the first card would have already been shown. # Hence this mechanism.) 
def _initWeb(): global flag flag = True # This is a hook to Anki's reviewer's _showQuestion method. # It populates our cardScriptObject's "private" fields with the relevant values, # and more importantly, it exposes ("injects") the object to the webview's JavaScript scope - # but only if this is the first card since the last initialisation, otherwise the object is already exposed. def _showQuestion(): global cardScriptObject, flag if flag: flag = False # The following line does the injection. # In this example our cardScriptObject will be accessible from card scripts # using the name pluginObject. mw.web.page().mainFrame().addToJavaScriptWindowObject("pluginObject", cardScriptObject) cardScriptObject._state = "question" cardScriptObject._card = mw.reviewer.card cardScriptObject._note = mw.reviewer.card.note() # The following hook to Anki's reviewer's _showAnswer is not necessary for the injection, # but in this example it serves to update the state. def _showAnswer(): global cardScriptObject cardScriptObject._state = "answer" # adding our hooks # In order to already have our object injected when the first card is shown (so that its scripts can "enjoy" this plug-in), # and in order for the card scripts to have access to up-to-date information, # our hooks must be executed _before_ the relevant Anki methods. mw.reviewer._initWeb = wrap(mw.reviewer._initWeb, _initWeb, "before") mw.reviewer._showQuestion = wrap(mw.reviewer._showQuestion, _showQuestion, "before") mw.reviewer._showAnswer = wrap(mw.reviewer._showAnswer, _showAnswer, "before") This is it! With such a plug-in installed a JavaScript script from within a card can use pluginObject.state to check whether it is run as part of the question or as part of the answer (could also be acheived by wrapping the question part in the answer template with a script that sets a variable, but this is neater), pluginObject.field(name) to get the value of a field from the note (could also be acheived by injecting the field directly into the JavaScript code with Anki's pre-processor) and pluginObject.field(name, value) to set the value of a field in the note (couldn't be done till now, as far as I know). Of course, many other pieces of functionality could be programmed into our CardScriptObject to allow card scripts to do much more (read/change configuration, implement another question/answer mechanism, interact with the scheduler, etc...). If anybody can suggest improvements I'd be interested to hear. Specifically, I'm interested in: whether there is a neater way to expose methods and properties, so as to allow more signature flexibility; and whether there is a less cumbersome way to perform the injection.
What is the meaning of "<" for Python dictionaries?
I've noticed that Python lets me do this: >>> {1: "foo"} < {2: "bar"} True It lets me do the same thing for lists, deques, etc. What are the semantics of < when applied to dictionaries in Python? In general where can I find out the semantics of < for any given type of collection? In most cases it seems not to be found in the documentation. For example: >>> help(dict.__cmp__) Help on wrapper_descriptor: __cmp__(...) x.__cmp__(y) <==> cmp(x,y) >>> help(cmp) Help on built-in function cmp in module __builtin__: cmp(...) cmp(x, y) -> integer Return negative if x<y, zero if x==y, positive if x>y. I ask because I have a list of tuples of the form (int, dict). I want to sort this array based on the first element, but if the first elements are equal for two items then I don't care about the second. I'd like to know if myArray.sort() will do something complicated involving recursing through the dicts in this case, or if it will just return an arbitrary value.
Quoting from the comparison docs: Tuples and Lists Tuples and lists are compared lexicographically using comparison of corresponding elements. This means that to compare equal, each element must compare equal and the two sequences must be of the same type and have the same length. If not equal, the sequences are ordered the same as their first differing elements. For example, cmp([1,2,x], [1,2,y]) returns the same as cmp(x,y). If the corresponding element does not exist, the shorter sequence is ordered first (for example, [1,2] < [1,2,3]). Dictionaries Mappings (dictionaries) compare equal if and only if their sorted (key, value) lists compare equal. (The implementation computes this efficiently, without constructing lists or sorting.) Outcomes other than equality are resolved consistently, but are not otherwise defined. (Earlier versions of Python [prior to 2.7.6] used lexicographic comparison of the sorted (key, value) lists, but this was very expensive for the common case of comparing for equality. An even earlier version of Python compared dictionaries by identity only, but this caused surprises because people expected to be able to test a dictionary for emptiness by comparing it to {}.) Also, see this part of the documentation, which specifically covers comparing sequence types with themselves and with other types: Sequence objects may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses the ASCII ordering for individual characters. Note that comparing objects of different types is legal. The outcome is deterministic but arbitrary: the types are ordered by their name. Thus, a list is always smaller than a string, a string is always smaller than a tuple, etc. (The rules for comparing objects of different types should not be relied upon; they may change in a future version of the language.) Mixed numeric types are compared according to their numeric value, so 0 equals 0.0, etc. Actual dictionary comparison, as per the Python 2.7 source code, goes like this: 1. Compare the number of keys first (-1 is returned if the first dict has fewer keys, 1 if the second has fewer). 2. If they are the same, it tries to find a key for which the key is either missing in the other dict or mapped to a different value (this is called characterizing the dict). 3. Step 2 is done both ways, for (a, b) and for (b, a); if either characterization comes up empty, the dictionaries are considered equal. 4. Otherwise, the differences obtained from characterizing the dictionaries are compared to get the actual comparison result.
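If the only goal is to order the (int, dict) pairs by their first element, a safer route than relying on dict comparison at all is an explicit sort key, e.g.:

from operator import itemgetter

myArray.sort(key=itemgetter(0))   # compares only the ints; the dicts are never compared

Python's sort is stable, so pairs with equal first elements simply keep their original relative order.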
How to convert string to uppercase / lowercase in Jinja2?
I am trying to convert to upper case a string in a Jinja template I am working on. In the template documentation, I read: upper(s) Convert a value to uppercase. So I wrote this code: {% if student.departament == "Academy" %} Academy {% elif upper(student.department) != "MATHS DEPARTMENT" %} Maths department {% endif %} But I am getting this error: UndefinedError: 'upper' is undefined So, how do you convert a string to uppercase in Jinja2?
Filters are used with the |filter syntax: {% elif student.department|upper != "MATHS DEPARTMENT" %} Maths department {% endif %} or you can use the str.upper() method: {% elif student.department.upper() != "MATHS DEPARTMENT" %} Maths department {% endif %} Jinja syntax is Python-like, not actual Python. :-)
Detect and exclude outliers in Pandas dataframe
I have a pandas dataframe with a few columns. Now I know that certain rows are outliers based on a certain column value. For instance, column 'Vol' has all values around 12.xx and one value which is 4000. Now I would like to exclude those rows that have a Vol value like this. So essentially I need to put a filter such that we select all rows where the values of a certain column are within, say, 3 standard deviations from the mean. What's an elegant way to achieve this?
Use boolean indexing as you would do in numpy.array df=pd.DataFrame({'Data':np.random.normal(size=200)}) #example dataset of normally distributed data. df[np.abs(df.Data-df.Data.mean())<=(3*df.Data.std())] #keep only the ones that are within +3 to -3 standard deviations in the column 'Data'. df[~(np.abs(df.Data-df.Data.mean())>(3*df.Data.std()))] #or if you prefer the other way around For a series it is similar: S=pd.Series(np.random.normal(size=200)) S[~((S-S.mean()).abs()>3*S.std())]
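Applied to the column named in the question (assuming it is literally called 'Vol'), keeping only the rows within three standard deviations of the mean:

df[np.abs(df['Vol'] - df['Vol'].mean()) <= (3 * df['Vol'].std())]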
sqlalchemy : executing raw sql with parameter bindings
I'm trying to run this simple raw sql statement with parameters with SQLALchemy (within an alembic script) : from alembic import op t = {"code": "123", "description": "one two three"} op.execute("insert into field_tags (id, field_id, code, description) "+ "values (1,'zasz', :code ,:description')", t) And I get the following error : sqlalchemy.exc.StatementError: A value is required for bind parameter 'description' (original cause: InvalidRequestError: A value is required for bind parameter 'description') "insert into field_tags (id, field_id, code, description) values (1, 'math', %(code)s ,%(description)s)" [] The solution: t = {"code": "123", "description": "one two three"} from sqlalchemy.sql import text op.get_bind().execute(text("insert into field_tags (id, field_id, code, description) "+ "values (1,'zasz', :code ,:description')"), **t)
You need to get the connection object, call execute() on it and pass query parameters as keyword arguments: from alembic import op from sqlalchemy.sql import text conn = op.get_bind() conn.execute(text("""insert into field_tags (id, field_id, code, description) values (1, 'zasz', :code, :description)"""), **t) Also see: How to execute raw SQL in SQLAlchemy-flask app.
Check if current thread is main thread, in Python
This has been answered for Android, Objective C and C++ before, but apparently not for Python. How do I reliably determine whether the current thread is the main thread? I can think of a few approaches, none of which really satisfy me, considering it could be as easy as comparing to threading.MainThread if it existed. Check the thread name The main thread is instantiated in threading.py like this: Thread.__init__(self, name="MainThread") so one could do if threading.current_thread().name == 'MainThread' but is this name fixed? Other codes I have seen checked whether MainThread is contained anywhere in the thread's name. Store the starting thread I could store a reference to the starting thread the moment the program starts up, i.e. while there are no other threads yet. This would be absolutely reliable, but way too cumbersome for such a simple query? Is there a more concise way of doing this?
The problem with threading.current_thread().name == 'MainThread' is that one can always do: threading.current_thread().name = 'MyName' assert threading.current_thread().name == 'MainThread' # will fail Perhaps the following is more solid: threading.current_thread().__class__.__name__ == '_MainThread' Having said that, one may still cunningly do: threading.current_thread().__class__.__name__ = 'Grrrr' assert threading.current_thread().__class__.__name__ == '_MainThread' # will fail But this option still seems better; "after all, we're all consenting adults here." UPDATE: Python 3.4 introduced threading.main_thread() which is much prettier than the above: assert threading.current_thread() == threading.main_thread() UPDATE 2: As mentioned in the comment below, by user1300959, another viable and clean option is: isinstance(threading.current_thread(), threading._MainThread)
Celery with RabbitMQ: AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'
I'm running the First Steps with Celery Tutorial. We define the following task: from celery import Celery app = Celery('tasks', broker='amqp://guest@localhost//') @app.task def add(x, y): return x + y Then call it: >>> from tasks import add >>> add.delay(4, 4) But I get the following error: AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for' I'm running both the celery worker and the rabbit-mq server. Rather strangely, celery worker reports the task as succeeding: [2014-04-22 19:12:03,608: INFO/MainProcess] Task test_celery.add[168c7d96-e41a-41c9-80f5-50b24dcaff73] succeeded in 0.000435483998444s: 19 Why isn't this working?
Just keep reading the tutorial. It will be explained in the Keep Results chapter. To start Celery you need to provide just the broker parameter, which is required to send messages about tasks. If you want to retrieve information about the state and results returned by finished tasks, you need to set the backend parameter. You can find the full list with descriptions in the Configuration docs: CELERY_RESULT_BACKEND.
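For example, a sketch of the tutorial's task with a result backend configured (the amqp backend is just one option; Redis or a database backend work the same way):

from celery import Celery

app = Celery('tasks',
             backend='amqp',                       # where task states/results are stored
             broker='amqp://guest@localhost//')    # where task messages are sent

@app.task
def add(x, y):
    return x + y

With the backend set, result = add.delay(4, 4) followed by result.get() should return 8 instead of raising the DisabledBackend error.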
Unable to perform collectstatic
I am new to Django! When I use the command python manage.py collectstatic I get this error: django.core.exceptions.ImproperlyConfigured: You're using the staticfiles app without having set the STATIC_ROOT setting to a filesystem path But I can successfully run the server. My static files declarations are: STATIC_ROOT = '' STATIC_URL = '/static/' STATICFILES_DIRS = ( ('assets', os.path.join(PROJECT_DIR, '../static')), ) and debug is set to true DEBUG = True How can I fix this? Or am I missing any installation packages?
Try this, PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__)) STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static') Look at https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-STATIC_ROOT
Python Equality Check Difference
Suppose we want some block of code to be executed when both 'a' and 'b' are equal to say 5. Then we can write like : if a == 5 and b == 5: # do something But a few days ago, I just involuntarily wrote a similar condition check as : if a == b and b == 5: # do something which made me think, is there any difference between the two ? Also, there is one other way, if a == b == 5: # do something Is there any difference, any difference in terms of process of evaluation or execution or time taken ? and also which one is the better or which is better to use? Is it related to the concept of transitivity ?
Since they are basically equivalent, you could also consider the way you read/think about the code: if a == 5 and b == 5: # do something can be read as "if a equals 5 and b equals 5, then do ...". You have to think/conclude that then also a will be equal to b. This is opposite to the next example: if a == b and b == 5: # do something This reads as "if a is equal to b and b is equal to 5" and you have to conclude that then also a will be equal to 5. This is why I prefer the last example: if a == b == 5: # do something If you are familiar with Python (thanks to Itzkata) it is immediately clear that all three things must be equal (to 5). If however people with less experience in Python (but programming skills in other languages) see this, they might evaluate this to if (a == b) == 5: which would compare the boolean result of the first comparison with the integer 5, which is not what Python does and might lead to different results (consider for example a=0, b=0: a==b==0 is true while (a==b) == 0 is not!). The manual says: There are eight comparison operations in Python. They all have the same priority (which is higher than that of the Boolean operations). Comparisons can be chained arbitrarily; for example, x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false). There might even be a difference, for example if evaluating b in your example would have a side effect. Regarding transitivity, you are right.
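The chained-comparison point is easy to verify interactively:

>>> a = b = 0
>>> a == b == 0        # chained: (a == b) and (b == 0)
True
>>> (a == b) == 0      # compares True with 0
False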
Does range() really create lists?
Both my professor and this guy claim that range creates a list of values. "Note: The range function simply returns a list containing the numbers from x to y-1. For example, range(5, 10) returns the list [5, 6, 7, 8, 9]." I believe this is to be inaccurate because: type(range(5, 10)) <class 'range'> Furthermore, the only apparent way to access the integers created by range is to iterate through them, which leads me to believe that labeling range as a lists is incorrect.
In Python 2.x, range returns a list, but in Python 3.x range returns an immutable sequence, of type range. Python 2.x: >>> type(range(10)) <type 'list'> >>> type(xrange(10)) <type 'xrange'> Python 3.x: >>> type(range(10)) <class 'range'> In Python 2.x, if you want to get an iterable object, like in Python 3.x, you can use xrange function, which returns an immutable sequence of type xrange. Advantage of xrange over range in Python 2.x: The advantage of xrange() over range() is minimal (since xrange() still has to create the values when asked for them) except when a very large range is used on a memory-starved machine or when all of the range’s elements are never used (such as when the loop is usually terminated with break). Note: Furthermore, the only apparent way to access the integers created by range() is to iterate through them, Nope. Since range objects in Python 3 are immutable sequences, they support indexing as well. Quoting from the range function documentation, Ranges implement all of the common sequence operations except concatenation and repetition ... Range objects implement the collections.abc.Sequence ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices. For example, >>> range(10, 20)[5] 15 >>> range(10, 20)[2:5] range(12, 15) >>> list(range(10, 20)[2:5]) [12, 13, 14] >>> list(range(10, 20, 2)) [10, 12, 14, 16, 18] >>> 18 in range(10, 20) True >>> 100 in range(10, 20) False All these are possible with that immutable range sequence. Recently, I faced a problem and I think it would be appropriate to include here. Consider this Python 3.x code from itertools import islice numbers = range(100) items = list(islice(numbers, 10)) while items: items = list(islice(numbers, 10)) print(items) One would expect this code to print every ten numbers as a list, till 99. But, it would run infinitely. Can you reason why? Solution Because the range returns an immutable sequence, not an iterator object. So, whenever islice is done on a range object, it always starts from the beginning. Think of it as a drop-in replacement for an immutable list. Now the question comes, how will you fix it? Its simple, you just have to get an iterator out of it. Simply change numbers = range(100) to numbers = iter(range(100)) Now, numbers is an iterator object and it remembers how long it has been iterated before. So, when the islice iterates it, it just starts from the place where it previously ended.
No distributions at all found for some package
I get an error when installing some packages even though they actually exist, for example django-ajax-filtered-fields==0.5: Downloading/unpacking django-ajax-filtered-fields==0.5 (from -r requirements.example.pip (line 13)) Could not find any downloads that satisfy the requirement django-ajax-filtered-fields==0.5 (from -r requirements.example.pip (line 13)) No distributions at all found for django-ajax-filtered-fields==0.5 Storing debug log for failure in /home/pd/.pip/pip.log (peecs)pd@admin:~/proj/django/peecs$ pip install django-ajax-filtered-fields==0.5 --allow-unverified django-ajax-filtered-fields==0.5 Downloading/unpacking django-ajax-filtered-fields==0.5 Could not find any downloads that satisfy the requirement django-ajax-filtered-fields==0.5 Some externally hosted files were ignored (use --allow-external django-ajax-filtered-fields to allow). Cleaning up... No distributions at all found for django-ajax-filtered-fields==0.5 Storing debug log for failure in /home/pd/.pip/pip.log
I found the solution: try with --allow-unverified. Syntax: pip install packagename==version --allow-unverified packagename Some packages contain insecure and unverifiable files, which pip will not download by default; passing --allow-unverified for that package allows the installation anyway. E.g.: pip install django-ajax-filtered-fields==0.5 --allow-unverified django-ajax-filtered-fields
django nginx static files 404
Here are my settings : STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(BASE_DIR, "static"), ) STATIC_ROOT = '/home/django-projects/tshirtnation/staticfiles' Here's my nginx configuration: server { server_name 77.241.197.95; access_log off; location /static/ { alias /home/django-projects/tshirtnation/staticfiles/; } location / { proxy_pass http://127.0.0.1:8001; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header X-Real-IP $remote_addr; add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"'; } } I've run python manage.py collectstatic and it has copied all static files. I run my server with gunicorn_django --bind:my-ip:8001 and everything seems to be working except for static files. EDIT: I've run sudo tail /var/log/nginx/error.log and there seems to be no errors of static files not found :/
I encountered the same problem and was able to fix my nginx configuration by removing the trailing / from the /static/ location. location /static { # "/static" NOT "/static/" # ... }
How do you find the IQR in Numpy?
Is there a baked-in Numpy/Scipy function to find the interquartile range? I can do it pretty easily myself, but mean() exists which is basically sum/len... def IQR(dist): return np.percentile(dist, 75) - np.percentile(dist, 25)
np.percentile takes multiple percentile arguments, and you are slightly better off doing: q75, q25 = np.percentile(x, [75 ,25]) iqr = q75 - q25 or iqr = np.subtract(*np.percentile(x, [75, 25])) than making two calls to percentile: In [8]: x = np.random.rand(1e6) In [9]: %timeit q75, q25 = np.percentile(x, [75 ,25]); iqr = q75 - q25 10 loops, best of 3: 24.2 ms per loop In [10]: %timeit iqr = np.subtract(*np.percentile(x, [75, 25])) 10 loops, best of 3: 24.2 ms per loop In [11]: %timeit iqr = np.percentile(x, 75) - np.percentile(x, 25) 10 loops, best of 3: 33.7 ms per loop
Getting console.log output from Firefox with Selenium
I'm trying to get a web page's console.log output from Firefox via the python Selenium API bindings. Based on the code for Chrome, and some advice from the documentation, I tried the following: from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities d = DesiredCapabilities.FIREFOX d['loggingPrefs'] = { 'browser':'ALL' } fp = webdriver.FirefoxProfile() fp.set_preference('webdriver.log.file', '/tmp/firefox_console') driver = webdriver.Firefox(capabilities=d,firefox_profile=fp) driver.set_window_size(1280,1024) driver.get('http://foo.com') try: WebDriverWait(driver,10).until(lambda driver: driver.execute_script("return document.readyState") == "complete") for entry in driver.get_log('browser'): print entry finally: driver.quit() But, for even a simple example page that calls console.log("foo"), I don't see "foo" either in the log entries returned via the API or in the /tmp/firefox_console file. Am I doing something wrong? Or is this a Selenium limitation?
Your code is correct when it comes to the get_log function, just add a print statement at the end like so: from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities # enable browser logging d = DesiredCapabilities.FIREFOX d['loggingPrefs'] = {'browser': 'ALL'} driver = webdriver.Firefox(capabilities=d) # load some site driver.get('http://foo.com') # print messages for entry in driver.get_log('browser'): print entry print driver.quit() In fact: print len(driver.get_log('browser')) returns 53 in my example with this as a sample entry in the list: {u'timestamp': 1407591650751, u'message': u"Expected ':' but found '}'. Declaration dropped.", u'level': u'WARNING'} Seems like a bad char problem. As for why there is no output in the /tmp/firefox_console file, I have no clue, the logger seems to throw some webdriver debug info but no console.log output. EDIT: Apparently the above code does not return data from console.log. It's not a Selenium bug as far as I can tell but a problem with Firefox. I managed to get around it by installing the Firebug along with ConsoleExport plugin for Firebug, then point it to some logging server. See also this SO answer for details on how to enable Firebug programmatically from Selenium. See this gist for more details: https://gist.github.com/CGenie/fc63536a8467ae6ef945
Broken references in Virtualenvs
I recently installed a bunch of dotfiles on my Mac along with some other applications (I changed to iTerm instead of Terminal, and Sublime as my default text editor) but ever since, all my virtual environments have stopped working, although their folders inside .virtualenvs are still there and they give the following error whenever I try to run anything in them: dyld: Library not loaded: @executable_path/../.Python Referenced from: /Users/[user]/.virtualenvs/modclass/bin/python Reason: image not found Trace/BPT trap: 5 I have removed all the files related to dotfiles and have restored my .bash_profile to what it was before, but the problem persists. Is there any way to diagnose the problem or solve it in an easy way (e.g. not requiring to create all the virtualenvs all over again)?
I found the solution to the problem here, so all credit goes to the author. The gist is that when you create a virtualenv, many symlinks are created to the Homebrew installed Python. Here is one example: $ ls -la ~/.virtualenvs/my-virtual-env ... lrwxr-xr-x 1 ryan staff 78 Jun 25 13:21 .Python -> /usr/local/Cellar/python/2.7.7/Frameworks/Python.framework/Versions/2.7/Python ... When you upgrade Python using Homebrew and then run brew cleanup, the symlinks in the virtualenv point to paths that no longer exist (because Homebrew deleted them). The symlinks needs to point to the newly installed Python: lrwxr-xr-x 1 ryan staff 78 Jun 25 13:21 .Python -> /usr/local/Cellar/python/2.7.8_1/Frameworks/Python.framework/Versions/2.7/Python The solution is to remove the symlinks in the virtualenv and then recreate them: find ~/.virtualenvs/my-virtual-env/ -type l -delete virtualenv ~/.virtualenvs/my-virtual-env It's probably best to check what links will be deleted first before deleting them: find ~/.virtualenvs/my-virtual-env/ -type l In my opinion, it's even better to only delete broken symlinks. You can do this using GNU find: gfind ~/.virtualenvs/my-virtual-env/ -type l -xtype l -delete You can install GNU find with Homebrew if you don't already have it: brew install findutils Notice that by default, GNU programs installed with Homebrew tend to be prefixed with the letter g. This is to avoid shadowing the find binary that ships with OS X.
Set LD_LIBRARY_PATH before importing in python
Python uses the PYTHONPATH environment-variable to determine in which folders it should look for modules. You can play around with it by modifying sys.path, which works nicely for pure Python-Modules. But when a module uses shared object files or static libraries, it looks for those in LD_LIBRARY_PATH (on linux), but this can't be changed as easily and is platform dependent as far as I know. The quick-fix for this problem is of course to set the environment-variable or invoke the script like LD_LIBRARY_PATH=. ./script.py, but then you'll have to set it again for every new shell you open. Also, the .so files in my case will always be in the same directory as the .py file, but may very well be moved to another absolute path, so I'd like to set them automatically every time I invoke the script. How can I edit the path in which the Python interpreter looks for libraries platform-independently on runtime? EDIT: I already tried os.environ['LD_LIBRARY_PATH'] = os.getcwd(), but to no avail.
I would use: import os os.environ['LD_LIBRARY_PATH'] = os.getcwd() # or whatever path you want This sets the LD_LIBRARY_PATH environment variable for the duration/lifetime of the execution of the current process only. EDIT: it looks like this needs to be set before starting Python: Changing LD_LIBRARY_PATH at runtime for ctypes So I'd suggest going with a wrapper .sh (or .py if you insist) script. Also, as @chepner pointed out, you might want to consider installing your .so files in a standard location (within the virtualenv). See also Setting LD_LIBRARY_PATH from inside Python
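If you want to stay in Python rather than write a shell wrapper, one sketch of the re-exec idea is below (the assumption here is that the .so files sit next to the script, as in the question; adjust the path logic as needed):

import os
import sys

lib_dir = os.path.dirname(os.path.abspath(__file__))

if os.environ.get('LD_LIBRARY_PATH') != lib_dir:
    os.environ['LD_LIBRARY_PATH'] = lib_dir
    # restart the same interpreter with the same arguments so the dynamic
    # linker sees the variable; the check above prevents an endless loop
    os.execve(sys.executable, [sys.executable] + sys.argv, os.environ)

# imports that need the shared libraries go below this point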
Cannot find reference 'xxx' in __init__.py - Python / Pycharm
I have a project in Pycharm organized as follows: -- Sources |--__init__.py |--Calculators |--__init__.py |--Filters.py |--Controllers |--__init__.py |--FiltersController.py |--Viewers |--__init__.py |--DataVisualization.py |--Models |--__init__.py |--Data All of my __init__.py, except for the one right above Sources are blank files. I am receiving a lot of warnings of the kind: Cannot find reference 'xxx' in init.py For example, my FiltersController.py has this piece of code: import numpy.random as npr bootstrap = npr.choice(image_base.data[max(0, x-2):x+3, max(0, y-2):y+3].flatten(), size=(3, 3), replace=True) And I get this warning: Cannot find reference 'choice' in __init__.py I'm googling wondering what does this mean and what should I do to code properly in Python. Thank you in advance.
This is a bug in pycharm. PyCharm seems to be expecting the referenced module to be included in an __all__ = [] statement. For proper coding etiquette, should you include the __all__ statement from your modules? ..this is actually the question we hear young Spock answering while he was being tested, to which he responded: "It is morally praiseworthy but not morally obligatory." To get around it, you can simply disable that (extremely non-critical) (highly useful) inspection globally, or suppress it for the specific function or statement. To do so: put the caret over the erroring text ('choice', from your example above) Bring up the intention menu (alt-enter by default, mine is set to alt-backspace) hit the right arrow to open the submenu, and select the relevant action PyCharm has its share of small bugs like this, but in my opinion its benefits far outweigh its drawbacks. If you'd like to try another good IDE, there's also Spyder/Spyderlib. I know this is quite a bit after you asked your question, but I hope this helps (you, or someone else). Edited: Originally, I thought that this was specific to checking __all__, but it looks like it's the more general 'Unresolved References' check, which can be very useful. It's probably best to use statement-level disabling of the feature, either by using the menu as mentioned above, or by specifying # noinspection PyUnresolvedReferences on the line preceding the statement.
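For example, statement-level suppression applied to the line from the question would look like:

# noinspection PyUnresolvedReferences
bootstrap = npr.choice(image_base.data[max(0, x-2):x+3, max(0, y-2):y+3].flatten(),
                       size=(3, 3), replace=True)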
FIll between two vertical lines in matplotlib
I went through the examples in the matplotlib documentation, but it wasn't clear to me how I can make a plot that fills the area between two specific vertical lines. For example, say I want to create a plot between x=0.2 and x=4 (for the full y range of the plot). Should I use fill_between, fill or fill_betweenx? Can I use the where condition for this?
It sounds like you want axvspan, rather than one of the fill between functions. The difference is that axvspan (and axhspan) will fill up the entire y (or x) extent of the plot regardless of how you zoom. For example, let's use axvspan to highlight the x-region between 8 and 14: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(20)) ax.axvspan(8, 14, alpha=0.5, color='red') plt.show() You could use fill_betweenx to do this, but the extents (both x and y) of the rectangle would be in data coordinates. With axvspan, the y-extents of the rectangle default to 0 and 1 and are in axes coordinates (in other words, percentages of the height of the plot). To illustrate this, let's make the rectangle extend from 10% to 90% of the height (instead of taking up the full extent). Try zooming or panning, and notice that the y-extents stay fixed in display space, while the x-extents move with the zoom/pan: import matplotlib.pyplot as plt fig, ax = plt.subplots() ax.plot(range(20)) ax.axvspan(8, 14, ymin=0.1, ymax=0.9, alpha=0.5, color='red') plt.show()
Python3 rounding to nearest even
Python3.4 rounds to the nearest even (in the tie-breaker case). >>> round(1.5) 2 >>> round(2.5) 2 But it only seems to do this when rounding to an integer. >>> round(2.75, 1) 2.8 >>> round(2.85, 1) 2.9 In the final example above, I would have expected 2.8 as the answer when rounding to the nearest even. Why is there a discrepancy between the two behaviors?
Floating point numbers are only approximations; 2.85 cannot be represented exactly: >>> format(2.85, '.53f') '2.85000000000000008881784197001252323389053344726562500' It is slightly over 2.85. 0.5 and 0.75 can be represented exactly with binary fractions (1/2 and 1/2 + 1/4, respectively). The round() function documents this explicitly: Note: The behavior of round() for floats can be surprising: for example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This is not a bug: it’s a result of the fact that most decimal fractions can’t be represented exactly as a float. See Floating Point Arithmetic: Issues and Limitations for more information.
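If what you actually need is decimal (rather than binary-float) round-half-to-even, a small sketch using the standard-library decimal module, where 2.85 really is 2.85:

from decimal import Decimal, ROUND_HALF_EVEN

# both ties resolve to the even final digit, 8
print(Decimal('2.85').quantize(Decimal('0.1'), rounding=ROUND_HALF_EVEN))  # 2.8
print(Decimal('2.75').quantize(Decimal('0.1'), rounding=ROUND_HALF_EVEN))  # 2.8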
How to determine the probability distribution function from a numpy array?
I have searched around and to my surprise it seems that this question has not been answered. I have a Numpy array containing 10000 values from measurements. I have plotted a histogram with Matplotlib, and by visual inspection the values seem to be normally distributed: However, I would like to validate this. I have found a normality test implemented under scipy.stats.mstats.normaltest, but the result says otherwise. I get this output: (masked_array(data = [1472.8855375088663], mask = [False], fill_value = 1e+20) , masked_array(data = [ 0.], mask = False, fill_value = 1e+20) ) which means that the chances that the dataset is normally distributed are 0. I have re-run the experiments and tested them again obtaining the same outcome, and in the "best" case the p value was 3.0e-290. I have tested the function with the following code and it seems to do what I want: import numpy import scipy.stats as stats mu, sigma = 0, 0.1 s = numpy.random.normal(mu, sigma, 10000) print stats.normaltest(s) (1.0491016699730547, 0.59182113002186942) If I have understood and used the function correctly, it means that the values are not normally distributed. (And honestly I have no idea why there is a difference in the output, i.e. less detail.) I was pretty sure that it is a normal distribution (although my knowledge of statistics is basic), and I don't know what the alternative could be. How can I check what the probability distribution function in question is? EDIT: My Numpy array containing 10000 values is generated like this (I know that's not the best way to populate a Numpy array), and afterwards the normaltest is run: values = numpy.empty(shape=(10000, 1)) for i in range(0, 10000): values[i] = measurement(...) # The function returns a float print normaltest(values) EDIT 2: I have just realised that the discrepancy between the outputs is because I have inadvertently used two different functions (scipy.stats.normaltest() and scipy.stats.mstats.normaltest()), but it does not make a difference since the relevant part of the output is the same regardless of the used function. EDIT 3: Fitting the histogram with the suggestion from askewchan: plt.plot(bin_edges, scipy.stats.norm.pdf(bin_edges, loc=values.mean(), scale=values.std())) results in this: EDIT 4: Fitting the histogram with the suggestion from user user333700: scipy.stats.t.fit(data) results in this:
Assuming you have used the test correctly, my guess is that you have a small deviation from a normal distribution and, because your sample size is so large, even small deviations will lead to a rejection of the null hypothesis of a normal distribution. One possibility is to visually inspect your data by plotting a normed histogram with a large number of bins and the pdf with loc=data.mean() and scale=data.std(). There are alternative tests for testing normality; statsmodels has Anderson-Darling and Lilliefors (Kolmogorov-Smirnov) tests for when the distribution parameters are estimated. However, I expect that the results will not differ much given the large sample size. The main question is whether you want to test whether your sample comes "exactly" from a normal distribution, or whether you are just interested in whether your sample comes from a distribution that is very close to the normal distribution, close in terms of practical usage. To elaborate on the last point: http://jpktd.blogspot.ca/2012/10/tost-statistically-significant.html http://www.graphpad.com/guides/prism/6/statistics/index.htm?testing_for_equivalence2.htm As the sample size increases, a hypothesis test gains more power, which means that the test will be able to reject the null hypothesis of equality even for smaller and smaller differences. If we keep our significance level fixed, then eventually we will reject tiny differences that we don't really care about. An alternative type of hypothesis test is one where we want to show that our sample is close to the given point hypothesis, for example that two samples have almost the same mean. The problem is that we have to define what our equivalence region is. In the case of goodness-of-fit tests we need to choose a distance measure and define a threshold for the distance measure between the sample and the hypothesized distribution. I have not found any explanation where intuition would help to choose this distance threshold. stats.normaltest is based on deviations of skew and kurtosis from those of the normal distribution. Anderson-Darling is based on an integral of the weighted squared differences between the cdfs. Kolmogorov-Smirnov is based on the maximum absolute difference between the cdfs. chisquare for binned data would be based on the weighted sum of squared bin probabilities. And so on. I only ever tried equivalence testing with binned or discretized data, where I used a threshold from some reference cases, which was still rather arbitrary. In medical equivalence testing there are some predefined standards to specify when two treatments can be considered equivalent, or similarly as inferior or superior in the one-sided version.
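As a sketch of the alternative tests mentioned above, using functions that ship with scipy (note that a plain KS test with parameters estimated from the same sample is exactly what the Lilliefors correction addresses, so treat its p-value with caution):

import numpy as np
import scipy.stats as stats

values = np.random.normal(0, 0.1, 10000)  # stand-in for your measurement array

# Anderson-Darling: compare the statistic against the returned critical values
print(stats.anderson(values, dist='norm'))

# Kolmogorov-Smirnov against a normal with the sample's own mean and std
print(stats.kstest(values, 'norm', args=(values.mean(), values.std())))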
Overloading __dict__() on python class
I have a class where I want to get the object back as a dictionary, so I implemented __dict__(). Is this correct? I figured once I did that, I could then call dict() on the custom object and get back the object as a dictionary, but that does not work. Should you overload __dict__()? How can you make it so a custom object can be converted to a dictionary using dict()?
__dict__ is not a special method on Python objects. Instead, it is used for the attribute dictionary. dict() never uses it. Instead, you could support iteration; when dict() is passed an iterable that produces key-value pairs, a new dictionary object with those key-value pairs is produced. You can provide an iterable by implementing a __iter__ method, which should return an iterator. Implementing that method as a generator function suffices: class Foo(object): def __init__(self, *values): self.some_sequence = values def __iter__(self): for key in self.some_sequence: yield (key, 'Value for {}'.format(key)) Demo: >>> class Foo(object): ... def __init__(self, *values): ... self.some_sequence = values ... def __iter__(self): ... for key in self.some_sequence: ... yield (key, 'Value for {}'.format(key)) ... >>> f = Foo('bar', 'baz', 'eggs', 'ham') >>> dict(f) {'baz': 'Value for baz', 'eggs': 'Value for eggs', 'bar': 'Value for bar', 'ham': 'Value for ham'} You could also subclass dict, or implement the Mapping abstract class, and dict() would recognize either and copy keys and values over to a new dictionary object. This is a little more work, but may be worth it too if you want your custom class to act like a mapping everywhere else too.
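For completeness, a rough sketch of the Mapping route mentioned in the last paragraph (the class and attribute names are illustrative, not from the question):

from collections.abc import Mapping  # just collections.Mapping on Python 2

class Bar(Mapping):
    def __init__(self, **values):
        self._data = dict(values)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

# dict() recognises the mapping interface and copies the keys and values over
print(dict(Bar(spam=1, ham=2)))  # {'spam': 1, 'ham': 2}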
importing pyspark in python shell
This is a copy of someone else's question on another forum that was never answered, so I thought I'd re-ask it here, as I have the same issue. (See http://geekple.com/blogs/feeds/Xgzu7/posts/351703064084736) I have Spark installed properly on my machine and am able to run python programs with the pyspark modules without error when using ./bin/pyspark as my python interpreter. However, when I attempt to run the regular Python shell, when I try to import pyspark modules I get this error: from pyspark import SparkContext and it says "No module named pyspark". How can I fix this? Is there an environment variable I need to set to point Python to the pyspark headers/libraries/etc.? If my spark installation is /spark/, which pyspark paths do I need to include? Or can pyspark programs only be run from the pyspark interpreter?
If it prints such error: ImportError: No module named py4j.java_gateway Please add $SPARK_HOME/python/build to PYTHONPATH: export SPARK_HOME=/Users/pzhang/apps/spark-1.1.0-bin-hadoop2.4 export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/build:$PYTHONPATH
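If you would rather not manage PYTHONPATH by hand, one option (a separate third-party package, not part of Spark itself) is findspark, which locates the installation and patches sys.path before the import. A sketch, assuming /spark is your install directory as in the question:

# pip install findspark
import findspark
findspark.init('/spark')  # or findspark.init() if SPARK_HOME is already set

from pyspark import SparkContext
sc = SparkContext(appName='test')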
Python3.4 on Sublime Text 3
I followed these steps to get Python 3 on Sublime Text 3 Select the menu Tools > Build > New Build System and enter the following: { "cmd": ["python3", "$file"] , "selector": "source.python" , "file_regex": "file \"(...*?)\", line ([0-9]+)" } After that, saved it to the following (Mac-specific) directory: ~/Library/Application Support/Sublime Text 3/Packages/User but I'm getting this error when I try to run my code on Python 3 in Sublime [Errno 2] No such file or directory: 'python3'
You need to provide the full path to python3, since Sublime Text does not read your ~/.bash_profile file. Open up Terminal, type which python3, and use that full path: { "cmd": ["path/to/python3", "$file"], "selector": "source.python", "file_regex": "file \"(...*?)\", line ([0-9]+)" }
Python 3.x BaseHTTPServer or http.server
I am trying to make a BaseHTTPServer program. I prefer to use Python 3.3 or 3.2 for it. I find the doc hard to understand regarding what to import but tried changing the import from: from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer to: from http.server import BaseHTTPRequestHandler,HTTPServer and then the import works and the program starts and awaits a GET request. BUT when the request arrives an exception is raised: File "C:\Python33\lib\socket.py", line 317, in write return self._sock.send(b) TypeError: 'str' does not support the buffer interface Question: Is there a version of BaseHTTPServer or http.server that works out of the box with Python 3.x or am I doing something wrong? This is "my" program that I try running in Python 3.3 and 3.2: #!/usr/bin/python # from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer from http.server import BaseHTTPRequestHandler,HTTPServer PORT_NUMBER = 8080 # This class will handle any incoming request from # a browser class myHandler(BaseHTTPRequestHandler): # Handler for the GET requests def do_GET(self): print ('Get request received') self.send_response(200) self.send_header('Content-type','text/html') self.end_headers() # Send the html message self.wfile.write("Hello World !") return try: # Create a web server and define the handler to manage the # incoming request server = HTTPServer(('', PORT_NUMBER), myHandler) print ('Started httpserver on port ' , PORT_NUMBER) # Wait forever for incoming http requests server.serve_forever() except KeyboardInterrupt: print ('^C received, shutting down the web server') server.socket.close() The program works partly in Python 2.7 but gives this exception after 2-8 requests: error: [Errno 10054] An existing connection was forcibly closed by the remote host
Your program in Python 3.x does work right out of the box - except for one minor problem. The issue is not in your code structure but in this line: self.wfile.write("Hello World !") You are trying to write a string there, but bytes should go there. So you need to convert your string to bytes. Here, see my code, which is almost the same as yours and works perfectly. It's written in Python 3.4: from http.server import BaseHTTPRequestHandler, HTTPServer import time hostName = "localhost" hostPort = 9000 class MyServer(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.send_header("Content-type", "text/html") self.end_headers() self.wfile.write(bytes("<html><head><title>Title goes here.</title></head>", "utf-8")) self.wfile.write(bytes("<body><p>This is a test.</p>", "utf-8")) self.wfile.write(bytes("<p>You accessed path: %s</p>" % self.path, "utf-8")) self.wfile.write(bytes("</body></html>", "utf-8")) myServer = HTTPServer((hostName, hostPort), MyServer) print(time.asctime(), "Server Starts - %s:%s" % (hostName, hostPort)) try: myServer.serve_forever() except KeyboardInterrupt: pass myServer.server_close() print(time.asctime(), "Server Stops - %s:%s" % (hostName, hostPort)) Please notice the way I convert the strings to bytes using the "utf-8" encoding. Once you make this change in your program, it should work fine.
Why can a dictionary be unpacked as a tuple?
Today, I saw one statement which didn't throw an exception. Can anyone explain the theory behind it? >>> x, y = {'a': 2, 'b': 5} >>> x 'a' >>> y 'b'
In Python, every iterable can be unpacked1: >>> x,y,z = [1, 2, 3] # A list >>> x,y,z (1, 2, 3) >>> x,y,z = 1, 2, 3 # A tuple >>> x,y,z (1, 2, 3) >>> x,y,z = {1:'a', 2:'b', 3:'c'} # A dictionary >>> x,y,z (1, 2, 3) >>> x,y,z = (a for a in (1, 2, 3)) # A generator >>> x,y,z (1, 2, 3) >>> Moreover, because iterating over a dictionary returns only its keys: >>> for i in {1:'a', 2:'b', 3:'c'}: ... print i ... 1 2 3 >>> unpacking a dictionary (which iterates over it) likewise unpacks only its keys. 1Actually, I should say that every iterable can be unpacked as long as the names to unpack into equals the length of the iterable: >>> a,b,c = [1, 2, 3] # Number of names == len(iterable) >>> >>> a,b = [1, 2, 3] # Too few names Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: too many values to unpack (expected 2) >>> >>> a,b,c,d = [1, 2, 3] # Too many names Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: need more than 3 values to unpack >>> But this is only the case for Python 2.x. In Python 3.x, you have extended iterable unpacking, which allows you to unpack an iterable of any (finite) size into just the names you need: >>> # Python 3.x interpreter ... >>> a, *b, c = [1, 2, 3, 4] >>> a, b, c (1, [2, 3], 4) >>> >>> a, *b = [1, 2, 3, 4] >>> a, b (1, [2, 3, 4]) >>> >>> *a, b, c = [1, 2, 3, 4] >>> a, b, c ([1, 2], 3, 4) >>>
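If you actually want the key-value pairs rather than just the keys, unpack the dictionary's items instead. A small extra sketch (before Python 3.7 the order of the pairs is not guaranteed, so which pair lands in which name may vary):

>>> x, y = {'a': 2, 'b': 5}.items()
>>> x
('a', 2)
>>> y
('b', 5)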
Principal components analysis using pandas dataframe
How can I calculate Principal Components Analysis from data in a pandas dataframe?
Most sklearn objects work with pandas dataframes just fine; would something like this work for you? import pandas as pd import numpy as np from sklearn.decomposition import PCA df = pd.DataFrame(data=np.random.normal(0, 1, (20, 10))) pca = PCA(n_components=5) pca.fit(df) You can access the components themselves with pca.components_
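A small follow-up sketch, continuing from the fitted pca above, in case you also want the projected data back as a DataFrame (the column names are arbitrary):

import pandas as pd

scores = pd.DataFrame(pca.transform(df),
                      index=df.index,
                      columns=['PC%d' % (i + 1) for i in range(5)])
print(pca.explained_variance_ratio_)  # variance explained by each component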
Convert a list to a list of tuples in Python
I am a newbie to Python and need to convert a list to a dictionary. I know that we can convert a list of tuples to a dictionary. This is the input list: L = [1,term1, 3, term2, x, term3,... z, termN] and I want to convert this list to a list of tuples (OR to a dictionary) like this: [(1, term1), (3, term2), (x, term3), ...(z, termN)] How can we do that easily in Python?
>>> L = [1, "term1", 3, "term2", 4, "term3", 5, "termN"] # Create an iterator >>> it = iter(L) # zip the iterator with itself >>> zip(it, it) [(1, 'term1'), (3, 'term2'), (4, 'term3'), (5, 'termN')] You want to group three items at a time? >>> zip(it, it, it) You want to group N items at a time? # Create N copies of the same iterator it = [iter(L)] * N # Unpack the copies of the iterator, and pass them as parameters to zip >>> zip(*it)
Converting to (not from) ipython Notebook format
IPython Notebook comes with nbconvert, which can export notebooks to other formats. But how do I convert text in the opposite direction? I ask because I already have materials, and a good workflow, in a different format, but I would like to take advantage of Notebook's interactive environment. A likely solution: A notebook can be created by importing a .py file, and the documentation states that when nbconvert exports a notebook as a python script, it embeds directives in comments that can be used to recreate the notebook. But the information comes with a disclaimer about the limitations of this method, and the accepted format is not documented anywhere that I could find. (A sample is shown, oddly enough, in the section describing notebook's JSON format). Can anyone provide more information, or a better alternative? Edit (1 March 2016): The accepted answer no longer works, because for some reason this input format is not supported by version 4 of the Notebook API. I have added a self-answer showing how to import a notebook with the current (v4) API. (I am not un-accepting the current answer, since it solved my problem at the time and pointed me to the resources I used in my self-answer.)
The IPython API has functions for reading and writing notebook files. You should use this API and not create JSON directly. For example, the following code snippet converts a script test.py into a notebook test.ipynb. import IPython.nbformat.current as nbf nb = nbf.read(open('test.py', 'r'), 'py') nbf.write(nb, open('test.ipynb', 'w'), 'ipynb') Regarding the format of the .py file understood by nbf.read, it is best to simply look into the parser class IPython.nbformat.v3.nbpy.PyReader. The code can be found here (it is not very large): https://github.com/ipython/ipython/blob/master/jupyter_nbformat/v3/nbpy.py Edit: This answer was originally written for IPython 3. I don't know how to do this properly with IPython 4. Here is an updated version of the link above, pointing to the version of nbpy.py from the IPython 3.2.1 release: https://github.com/ipython/ipython/blob/rel-3.2.1/IPython/nbformat/v3/nbpy.py Basically you use special comments such as # <codecell> or # <markdowncell> to separate the individual cells. Look at the line.startswith statements in PyReader.to_notebook for a complete list.
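For the version-4 notebook format (IPython/Jupyter 4 and later), the relevant helpers live in the standalone nbformat package. A minimal sketch, with placeholder cell contents, that builds a notebook programmatically and writes it out:

import nbformat
from nbformat import v4 as nbf

nb = nbf.new_notebook()
nb.cells.append(nbf.new_markdown_cell('# A heading'))
nb.cells.append(nbf.new_code_cell("print('hello')"))

nbformat.write(nb, 'test.ipynb')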
Asking the user for input until they give a valid response
I am writing a program that must accept input from the user. #note: Python 2.7 users should use `raw_input`, the equivalent of 3.X's `input` age = int(input("Please enter your age: ")) if age >= 18: print("You are able to vote in the United States!") else: print("You are not able to vote in the United States.") This works as expected if the user enters sensible data. C:\Python\Projects> canyouvote.py Please enter your age: 23 You are able to vote in the United States! But if they make a mistake, then it crashes: C:\Python\Projects> canyouvote.py Please enter your age: dickety six Traceback (most recent call last): File "canyouvote.py", line 1, in <module> age = int(input("Please enter your age: ")) ValueError: invalid literal for int() with base 10: 'dickety six' Instead of crashing, I would like it to try getting the input again. Like this: C:\Python\Projects> canyouvote.py Please enter your age: dickety six Sorry, I didn't understand that. Please enter your age: 26 You are able to vote in the United States! How can I accomplish this? What if I also wanted to reject values like -1, which is a valid int, but nonsensical in this context?
The simplest way to accomplish this would be to put the input method in a while loop. Use continue when you get bad input, and break out of the loop when you're satisfied. When Your Input Might Raise an Exception Use try and catch to detect when the user enters data that can't be parsed. while True: try: # Note: Python 2.x users should use raw_input, the equivalent of 3.x's input age = int(input("Please enter your age: ")) except ValueError: print("Sorry, I didn't understand that.") #better try again... Return to the start of the loop continue else: #age was successfully parsed! #we're ready to exit the loop. break if age >= 18: print("You are able to vote in the United States!") else: print("You are not able to vote in the United States.") Implementing Your Own Validation Rules If you want to reject values that Python can successfully parse, you can add your own validation logic. while True: data = input("Please enter a loud message (must be all caps): ") if not data.isupper(): print("Sorry, your response was not loud enough.") continue else: #we're happy with the value given. #we're ready to exit the loop. break while True: data = input("Pick an answer from A to D:") if data.lower() not in ('a', 'b', 'c', 'd'): print("Not an appropriate choice.") else: break Combining Exception Handling and Custom Validation Both of the above techniques can be combined into one loop. while True: try: age = int(input("Please enter your age: ")) except ValueError: print("Sorry, I didn't understand that.") continue if age < 0: print("Sorry, your response must not be negative.") continue else: #age was successfully parsed, and we're happy with its value. #we're ready to exit the loop. break if age >= 18: print("You are able to vote in the United States!") else: print("You are not able to vote in the United States.") Encapsulating it All in a Function If you need to ask your user for a lot of different values, it might be useful to put this code in a function, so you don't have to retype it every time. def get_non_negative_int(prompt): while True: try: value = int(input(prompt)) except ValueError: print("Sorry, I didn't understand that.") continue if value < 0: print("Sorry, your response must not be negative.") continue else: break return value age = get_non_negative_int("Please enter your age: ") kids = get_non_negative_int("Please enter the number of children you have: ") salary = get_non_negative_int("Please enter your yearly earnings, in dollars: ") Putting it all together You can extend this idea to make a very generic input function: def sanitised_input(prompt, type_=None, min_=None, max_=None, range_=None): if min_ is not None and max_ is not None and max_ < min_: raise ValueError("min_ must be less than or equal to max_.") while True: ui = input(prompt) if type_ is not None: try: ui = type_(ui) except ValueError: print("Input type must be {0}.".format(type_.__name__)) continue if max_ is not None and ui > max_: print("Input must be less than or equal to {0}.".format(max_)) elif min_ is not None and ui < min_: print("Input must be greater than or equal to {0}.".format(min_)) elif range_ is not None and ui not in range_: if isinstance(range_, range): template = "Input must be between {0.start} and {0.stop}." print(template.format(range_)) else: template = "Input must be {0}." 
if len(range_) == 1: print(template.format(*range_)) else: print(template.format(" or ".join((", ".join(map(str, range_[:-1])), str(range_[-1]))))) else: return ui With usage such as: age = sanitised_input("Enter your age: ", int, 1, 101) answer = sanitised_input("Enter your answer", str.lower, range_=('a', 'b', 'c', 'd')) Common Pitfalls, and Why you Should Avoid Them The Redundant Use of Redundant input Statements This method works but is generally considered poor style: data = input("Please enter a loud message (must be all caps): ") while not data.isupper(): print("Sorry, your response was not loud enough.") data = input("Please enter a loud message (must be all caps): ") It might look attractive initially because it's shorter than the while True method, but it violates the Don't Repeat Yourself principle of software development. This increases the likelihood of bugs in your system. What if you want to backport to 2.7 by changing input to raw_input, but accidentally change only the first input above? It's a SyntaxError just waiting to happen. Recursion Will Blow Your Stack If you've just learned about recursion, you might be tempted to use it in get_non_negative_int so you can dispose of the while loop. def get_non_negative_int(prompt): try: value = int(input(prompt)) except ValueError: print("Sorry, I didn't understand that.") return get_non_negative_int(prompt) if value < 0: print("Sorry, your response must not be negative.") return get_non_negative_int(prompt) else: return value This appears to work fine most of the time, but if the user enters invalid data enough times, the script will terminate with a RuntimeError: maximum recursion depth exceeded. You may think "no fool would make 1000 mistakes in a row", but you're underestimating the ingenuity of fools!
What rules does Pandas use to generate a view vs a copy?
I'm confused about the rules Pandas uses when deciding that a selection from a dataframe is a copy of the original dataframe, or a view on the original. If I have, for example, df = pd.DataFrame(np.random.randn(8,8), columns=list('ABCDEFGH'), index=[1, 2, 3, 4, 5, 6, 7, 8]) I understand that a query returns a copy so that something like foo = df.query('2 < index <= 5') foo.loc[:,'E'] = 40 will have no effect on the original dataframe, df. I also understand that scalar or named slices return a view, so that assignments to these, such as df.iloc[3] = 70 or df.ix[1,'B':'E'] = 222 will change df. But I'm lost when it comes to more complicated cases. For example, df[df.C <= df.B] = 7654321 changes df, but df[df.C <= df.B].ix[:,'B':'E'] does not. Is there a simple rule that Pandas is using that I'm just missing? What's going on in these specific cases; and in particular, how do I change all values (or a subset of values) in a dataframe that satisfy a particular query (as I'm attempting to do in the last example above)? Note: This is not the same as this question; and I have read the documentation, but am not enlightened by it. I've also read through the "Related" questions on this topic, but I'm still missing the simple rule Pandas is using, and how I'd apply it to — for example — modify the values (or a subset of values) in a dataframe that satisfy a particular query.
Here are the rules (subsequent ones override the earlier ones): All operations generate a copy If inplace=True is provided, it will modify in-place; only some operations support this An indexer that sets, e.g. .loc/.ix/.iloc/.iat/.at, will set inplace. An indexer that gets on a single-dtyped object is almost always a view (depending on the memory layout it may not be, which is why this is not reliable). This is mainly for efficiency. (the example from above is for .query; this will always return a copy as it's evaluated by numexpr) An indexer that gets on a multiple-dtyped object is always a copy. Your example of chained indexing df[df.C <= df.B].ix[:,'B':'E'] is not guaranteed to work (and thus you should never do this). Instead do: df.ix[df.C <= df.B, 'B':'E'] as this is faster and will always work The chained indexing is 2 separate Python operations and thus cannot be reliably intercepted by pandas (you will oftentimes get a SettingWithCopyWarning, but that is not 100% detectable either). The dev docs, which you pointed to, offer a much fuller explanation.
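So for the last example in the question, changing the subset of values that satisfy a query, the reliable pattern is one indexer that selects and sets in a single operation. A sketch using .loc, the label-based equivalent of .ix in current pandas:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(8, 8), columns=list('ABCDEFGH'),
                  index=[1, 2, 3, 4, 5, 6, 7, 8])

# boolean row mask and column slice in one indexing call, so the assignment sticks
df.loc[df.C <= df.B, 'B':'E'] = 7654321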
pyodbc and python 3.4 on Windows
pyodbc is a very nice thing, but the Windows installers only work with their very specific python version. With the release of Python 3.4, the only available installers just stop once they don't see 3.3 in the registry (though 3.4 is certainly there). Copying the .pyd and .egg-info files from a 3.3 installation into the 3.4 site-packages directory doesn't seem to do the trick. When importing pyodbc, an ImportError is thrown: ImportError: DLL load failed: %1 is not a valid Win32 application. Is there a secret sauce that can be added to make the 3.3 file work correctly? Or do we just need to wait for a 3.4 installer version?
The different versions of Python are (for the most part) not binary-compatible, and thus any compiled extensions (such as pyodbc) will only work for a specific version. Note that pure-Python packages (the ones that are completely written in Python, and have no non-Python dependencies) do not need to be compiled, and thus can be written to support multiple Python versions. Also note that it is technically possible for a compiled extension to be written such that it works for Python 3.2 as well as 3.3, 3.4, and the future 3.x's to come, but they have to limit themselves to the "stable ABI" as specified by PEP 384, and most extensions do not do this. As far as I know, pyodbc is not limited to the stable ABI and must be compiled separately for each Python version. That said, it is also possible to compile your own version of pyodbc from source, as long as you have the required tools and expertise. (But I'm guessing if you're asking this question, you don't. I don't either, otherwise I'd include some tips in this answer.) As you have already commented, pypyodbc may be your best bet, as it is a pure-Python package. Installing pypyodbc can be done via the commandline: C:\Python34\Scripts>pip install pypyodbc Using it as drop-in replacement of pyodbc can be done using: import pypyodbc as pyodbc [The current version of pyodbc at the time of this edit is 3.0.10, and it does support Python 3.4. Of course, it's still useful to be aware of pypyodbc in case pyodbc falls behind again when future versions of Python are released.]
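Because pypyodbc mirrors the pyodbc interface, existing connection code should run unchanged. A hedged sketch with placeholder connection details:

import pypyodbc as pyodbc

conn = pyodbc.connect('DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret')
cursor = conn.cursor()
cursor.execute('SELECT 1')
print(cursor.fetchone())
conn.close()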
Invalid transaction persisting across requests
Summary One of our threads in production hit an error and is now producing InvalidRequestError: This session is in 'prepared' state; no further SQL can be emitted within this transaction. errors, on every request with a query that it serves, for the rest of its life! It's been doing this for days, now! How is this possible, and how can we prevent it going forward? Background We are using a Flask app on uWSGI (4 processes, 2 threads), with Flask-SQLAlchemy providing us DB connections to SQL Server. The problem seemed to start when one of our threads in production was tearing down its request, inside this Flask-SQLAlchemy method: @teardown def shutdown_session(response_or_exc): if app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN']: if response_or_exc is None: self.session.commit() self.session.remove() return response_or_exc ...and somehow managed to call self.session.commit() when the transaction was invalid. This resulted in sqlalchemy.exc.InvalidRequestError: Can't reconnect until invalid transaction is rolled back getting output to stdout, in defiance of our logging configuration, which makes sense since it happened during the app context tearing down, which is never supposed to raise exceptions. I'm not sure how the transaction got to be invalid without response_or_exec getting set, but that's actually the lesser problem AFAIK. The bigger problem is, that's when the "'prepared' state" errors started, and haven't stopped since. Every time this thread serves a request that hits the DB, it 500s. Every other thread seems to be fine: as far as I can tell, even the thread that's in the same process is doing OK. Wild guess The SQLAlchemy mailing list has an entry about the "'prepared' state" error saying it happens if a session started committing and hasn't finished yet, and something else tries to use it. My guess is that the session in this thread never got to the self.session.remove() step, and now it never will. I still feel like that doesn't explain how this session is persisting across requests though. We haven't modified Flask-SQLAlchemy's use of request-scoped sessions, so the session should get returned to SQLAlchemy's pool and rolled back at the end of the request, even the ones that are erroring (though admittedly, probably not the first one, since that raised during the app context tearing down). Why are the rollbacks not happening? I could understand it if we were seeing the "invalid transaction" errors on stdout (in uwsgi's log) every time, but we're not: I only saw it once, the first time. But I see the "'prepared' state" error (in our app's log) every time the 500s occur. Configuration details We've turned off expire_on_commit in the session_options, and we've turned on SQLALCHEMY_COMMIT_ON_TEARDOWN. We're only reading from the database, not writing yet. We're also using Dogpile-Cache for all of our queries (using the memcached lock since we have multiple processes, and actually, 2 load-balanced servers). The cache expires every minute for our major query. Updated 2014-04-28: Resolution steps Restarting the server seems to have fixed the problem, which isn't entirely surprising. That said, I expect to see it again until we figure out how to stop it. benselme (below) suggested writing our own teardown callback with exception handling around the commit, but I feel like the bigger problem is that the thread was messed up for the rest of its life. The fact that this didn't go away after a request or two really makes me nervous!
Edit 2016-06-05: A PR that solves this problem has been merged on May 26, 2016. Flask PR 1822 Edit 2015-04-13: Mystery solved! TL;DR: Be absolutely sure your teardown functions succeed, by using the teardown-wrapping recipe in the 2014-12-11 edit! Started a new job also using Flask, and this issue popped up again, before I'd put in place the teardown-wrapping recipe. So I revisited this issue and finally figured out what happened. As I thought, Flask pushes a new request context onto the request context stack every time a new request comes down the line. This is used to support request-local globals, like the session. Flask also has a notion of "application" context which is separate from request context. It's meant to support things like testing and CLI access, where HTTP isn't happening. I knew this, and I also knew that that's where Flask-SQLA puts its DB sessions. During normal operation, both a request and an app context are pushed at the beginning of a request, and popped at the end. However, it turns out that when pushing a request context, the request context checks whether there's an existing app context, and if one's present, it doesn't push a new one! So if the app context isn't popped at the end of a request due to a teardown function raising, not only will it stick around forever, it won't even have a new app context pushed on top of it. That also explains some magic I hadn't understood in our integration tests. You can INSERT some test data, then run some requests and those requests will be able to access that data despite you not committing. That's only possible since the request has a new request context, but is reusing the test application context, so it's reusing the existing DB connection. So this really is a feature, not a bug. That said, it does mean you have to be absolutely sure your teardown functions succeed, using something like the teardown-function wrapper below. That's a good idea even without that feature to avoid leaking memory and DB connections, but is especially important in light of these findings. I'll be submitting a PR to Flask's docs for this reason. (Here it is) Edit 2014-12-11: One thing we ended up putting in place was the following code (in our application factory), which wraps every teardown function to make sure it logs the exception and doesn't raise further. This ensures the app context always gets popped successfully. Obviously this has to go after you're sure all teardown functions have been registered. # Flask specifies that teardown functions should not raise. # However, they might not have their own error handling, # so we wrap them here to log any errors and prevent errors from # propagating. def wrap_teardown_func(teardown_func): @wraps(teardown_func) def log_teardown_error(*args, **kwargs): try: teardown_func(*args, **kwargs) except Exception as exc: app.logger.exception(exc) return log_teardown_error if app.teardown_request_funcs: for bp, func_list in app.teardown_request_funcs.items(): for i, func in enumerate(func_list): app.teardown_request_funcs[bp][i] = wrap_teardown_func(func) if app.teardown_appcontext_funcs: for i, func in enumerate(app.teardown_appcontext_funcs): app.teardown_appcontext_funcs[i] = wrap_teardown_func(func) Edit 2014-09-19: Ok, turns out --reload-on-exception isn't a good idea if 1.) you're using multiple threads and 2.) terminating a thread mid-request could cause trouble. 
I thought uWSGI would wait for all requests for that worker to finish, like uWSGI's "graceful reload" feature does, but it seems that's not the case. We started having problems where a thread would acquire a dogpile lock in Memcached, then get terminated when uWSGI reloads the worker due to an exception in a different thread, meaning the lock is never released. Removing SQLALCHEMY_COMMIT_ON_TEARDOWN solved part of our problem, though we're still getting occasional errors during app teardown during session.remove(). It seems these are caused by SQLAlchemy issue 3043, which was fixed in version 0.9.5, so hopefully upgrading to 0.9.5 will allow us to rely on the app context teardown always working. Original: How this happened in the first place is still an open question, but I did find a way to prevent it: uWSGI's --reload-on-exception option. Our Flask app's error handling ought to be catching just about anything, so it can serve a custom error response, which means only the most unexpected exceptions should make it all the way to uWSGI. So it makes sense to reload the whole app whenever that happens. We'll also turn off SQLALCHEMY_COMMIT_ON_TEARDOWN, though we'll probably commit explicitly rather than writing our own callback for app teardown, since we're writing to the database so rarely.
Usefulness of def __init__(self)?
I am fairly new to python, and noticed these posts: Python __init__ and self what do they do? and Python Classes without using def __init__(self) After playing around with it, however, I noticed that these two classes give apparently equivalent results- class A(object): def __init__(self): self.x = 'Hello' def method_a(self, foo): print self.x + ' ' + foo (from this question) and class B(object): x = 'Hello' def method_b(self,foo): print self.x + ' ' + foo Is there any real difference between these two? Or, more generally, does __init__ change anything inherently about the attributes of a class? In the documentation it is mentioned that __init__ is called when the instance is created. Does this mean that x in class B is established before instantiation?
Yeah, check this out: class A(object): def __init__(self): self.lst = [] class B(object): lst = [] and now try: >>> x = B() >>> y = B() >>> x.lst.append(1) >>> y.lst.append(2) >>> x.lst [1, 2] >>> x.lst is y.lst True and this: >>> x = A() >>> y = A() >>> x.lst.append(1) >>> y.lst.append(2) >>> x.lst [1] >>> x.lst is y.lst False Does this mean that x in class B is established before instantiation? Yes, it's a class attribute (it is shared between instances). While in class A it's an instance attribute. It just happens that strings are immutable, thus there is no real difference in your scenario (except that class B uses less memory, because it defines only one string for all instances). But there is a huge one in my example.
How do I run graphx with Python / pyspark?
I am attempting to run Spark graphx with Python using pyspark. My installation appears correct, as I am able to run the pyspark tutorials and the (Java) GraphX tutorials just fine. Presumably since GraphX is part of Spark, pyspark should be able to interface it, correct? Here are the tutorials for pyspark: http://spark.apache.org/docs/0.9.0/quick-start.html http://spark.apache.org/docs/0.9.0/python-programming-guide.html Here are the ones for GraphX: http://spark.apache.org/docs/0.9.0/graphx-programming-guide.html http://ampcamp.berkeley.edu/big-data-mini-course/graph-analytics-with-graphx.html Can anyone convert the GraphX tutorial to be in Python?
It looks like the python bindings to GraphX are delayed at least to Spark 1.4 1.5 ∞. It is waiting behind the Java API. You can track the status at SPARK-3789 GRAPHX Python bindings for GraphX - ASF JIRA
Pandas: Replacing column values in dataframe
I'm trying to replace the values in one column of a dataframe. The column ('female') only contains the values 'female' and 'male'. I have tried the following: w['female']['female']='1' w['female']['male']='0' But receive the exact same copy of the previous results. I would ideally like to get some output which resembles the following loop element-wise. if w['female'] =='female': w['female'] = '1'; else: w['female'] = '0'; I've looked through the gotchas documentation (http://pandas.pydata.org/pandas-docs/stable/gotchas.html) but cannot figure out why nothing happens. Any help will be appreciated.
If I understand right, you want something like this: w['female'] = w['female'].map({'female': 1, 'male': 0}) (Here I convert the values to numbers instead of strings containing numbers. You can convert them to "1" and "0", if you really want, but I'm not sure why you'd want that.) The reason your code doesn't work is because using ['female'] on a column (the second 'female' in your w['female']['female']) doesn't mean "select rows where the value is 'female'". It means to select rows where the index is 'female', of which there may not be any in your DataFrame.
Trying to figure out how the 'with..as' construct works in python
I am trying to learn python and I landed on the with..as construct, that used like this: with open("somefile.txt", 'rt') as file: print(file.read()) # at the end of execution file.close() is called automatically. So as a learning strategy I tried to do the following: class Derived(): def __enter__(self): print('__enter__') def __exit__(self, exc_type, exc_value, traceback): print('__exit__') with Derived() as derived: print(derived) and I got this output: __enter__ None __exit__ My question is then: why did print(derived) return a None object and not a Derived object?
The name derived is bound to the object returned by the __enter__ method, which is None. Try: def __enter__(self): print('__enter__') return self Docs: object.__enter__(self) Enter the runtime context related to this object. The with statement will bind this method’s return value to the target(s) specified in the as clause of the statement, if any.
Pandas Dataframe: split column into multiple columns, right-align inconsistent cell entries
I have a pandas dataframe with a column named 'City, State, Country'. I want to separate this column into three new columns, 'City, 'State' and 'Country'. 0 HUN 1 ESP 2 GBR 3 ESP 4 FRA 5 ID, USA 6 GA, USA 7 Hoboken, NJ, USA 8 NJ, USA 9 AUS Splitting the column into three columns is trivial enough: location_df = df['City, State, Country'].apply(lambda x: pd.Series(x.split(','))) However, this creates left-aligned data: 0 1 2 0 HUN NaN NaN 1 ESP NaN NaN 2 GBR NaN NaN 3 ESP NaN NaN 4 FRA NaN NaN 5 ID USA NaN 6 GA USA NaN 7 Hoboken NJ USA 8 NJ USA NaN 9 AUS NaN NaN How would one go about creating the new columns with the data right-aligned? Would I need to iterate through every row, count the number of commas and handle the contents individually?
I'd do something like the following: foo = lambda x: pd.Series([i for i in reversed(x.split(','))]) rev = df['City, State, Country'].apply(foo) print rev 0 1 2 0 HUN NaN NaN 1 ESP NaN NaN 2 GBR NaN NaN 3 ESP NaN NaN 4 FRA NaN NaN 5 USA ID NaN 6 USA GA NaN 7 USA NJ Hoboken 8 USA NJ NaN 9 AUS NaN NaN I think that gets you what you want but if you also want to pretty things up and get a City, State, Country column order, you could add the following: rev.rename(columns={0:'Country',1:'State',2:'City'},inplace=True) rev = rev[['City','State','Country']] print rev City State Country 0 NaN NaN HUN 1 NaN NaN ESP 2 NaN NaN GBR 3 NaN NaN ESP 4 NaN NaN FRA 5 NaN ID USA 6 NaN GA USA 7 Hoboken NJ USA 8 NaN NJ USA 9 NaN NaN AUS
Update a dataframe in pandas while iterating row by row
I have a pandas data frame that looks like this (it's a pretty big one) date exer exp ifor mat 1092 2014-03-17 American M 528.205 2014-04-19 1093 2014-03-17 American M 528.205 2014-04-19 1094 2014-03-17 American M 528.205 2014-04-19 1095 2014-03-17 American M 528.205 2014-04-19 1096 2014-03-17 American M 528.205 2014-05-17 Now I would like to iterate row by row, and as I go through each row, the value of ifor in each row can change depending on some conditions, and I need to look up another dataframe. Now, how do I update this as I iterate? I tried a few things; none of them worked. for i, row in df.iterrows(): if <something>: row['ifor'] = x else: row['ifor'] = y df.ix[i]['ifor'] = x None of these approaches seem to work. I don't see the values updated in the dataframe.
You can assign values in the loop using df.set_value: for i, row in df.iterrows(): ifor_val = something if <condition>: ifor_val = something_else df.set_value(i,'ifor',ifor_val) if you don't need the row values you could simply iterate over the indices of df, but I kept the original for-loop in case you need the row value for something not shown here.
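On later pandas releases set_value is deprecated; the equivalent scalar setter is .at, so the same loop becomes (the condition and value below are placeholders standing in for your real lookup logic):

for i, row in df.iterrows():
    ifor_val = row['ifor']
    if row['exer'] == 'American':  # stand-in for your real condition
        ifor_val = 999.0
    df.at[i, 'ifor'] = ifor_val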
Is the use of del bad?
I commonly use del in my code to delete objects: >>> array = [4, 6, 7, 'hello', 8] >>> del(array[array.index('hello')]) >>> array [4, 6, 7, 8] >>> But I have heard many people say that the use of del is unpythonic. Is using del bad practice? >>> array = [4, 6, 7, 'hello', 8] >>> array[array.index('hello'):array.index('hello')+1] = '' >>> array [4, 6, 7, 8] >>> If not, why are there many ways to accomplish the same thing in python? Is one better than the others? Option 1: using del >>> arr = [5, 7, 2, 3] >>> del(arr[1]) >>> arr [5, 2, 3] >>> Option 2: using list.remove() >>> arr = [5, 7, 2, 3] >>> arr.remove(7) >>> arr [5, 2, 3] >>> Option 3: using list.pop() >>> arr = [5, 7, 2, 3] >>> arr.pop(1) 7 >>> arr [5, 2, 3] >>> Option 4: using slicing >>> arr = [5, 7, 2, 3] >>> arr[1:2] = '' >>> arr [5, 2, 3] >>> I am sorry if this question appears to be opinion-based, but I am looking for a reasonable answer to my question, and I will add a bounty after 2 days if I don't get a suitable answer. Edit: Since there are many alternates to using del to delete certain parts of objects, the one unique factor left of del is its ability to remove objects completely: >>> a = 'hello' >>> b = a >>> del(a) >>> a Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'a' is not defined >>> b 'hello' >>> However, what is the point of using it to 'undefine' objects? Also, why does the following code change both variables: >>> a = [] >>> b = a >>> a.append(9) >>> a [9] >>> b [9] >>> But the del statement does not achieve the same effect? >>> a = [] >>> b = a >>> del(a) >>> a Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'a' is not defined >>> b [] >>>
The other answers are looking at it from a technical point of view (i.e. what's the best way to modify a list), but I would say the (much) more important reason people are suggesting e.g. slicing is that it doesn't modify the original list. The reason for this in turn is that usually, the list came from somewhere. If you modify it, you can unknowningly cause very bad and hard-to-detect side effects, which can cause bugs elsewhere in the program. Or even if you don't cause a bug immediately, you'll make your program overall harder to understand and reason about, and debug. For example, list comprehensions/generator expressions are nice in that they never mutate the "source" list they are passed: [x for x in lst if x != "foo"] # creates a new list (x for x in lst if x != "foo") # creates a lazy filtered stream This is of course often more expensive (memory wise) because it creates a new list but a program that uses this approach is mathematically purer and easier to reason about. And with lazy lists (generators and generator expressions), even the memory overhead will disappear, and computations are only executed on demand; see http://www.dabeaz.com/generators/ for an awesome introduction. And you should not think too much about optimization when designing your program (see http://programmers.stackexchange.com/questions/80084/is-premature-optimization-really-the-root-of-all-evil). Also, removing an item from a list is quite expensive, unless it's a linked list (which Python's list isn't; for linked list, see collections.deque). In fact, side-effect free functions and immutable data structures are the basis of Functional Programming, a very powerful programming paradigm. However, under certain circumstances, it's OK to modify a data structure in place (even in FP, if the language allows it), such as when it's a locally created one, or copied from the function's input: def sorted(lst): ret = list(lst) # make a copy # mutate ret return ret — this function appears to be a pure function from the outside because it doesn't modify its inputs (and also only depends on its arguments and nothing else (i.e. it has no (global) state), which is another requirement for something to be a Pure Function). So as long as you know what you're doing, del is by no means bad; but use any sort of data mutation with extreme care and only when you have to. Always start out with a possibly less efficient but more correct and mathematically elegant code. ...and learn Functional Programming :) P.S. note that del can also be used to delete local variables and thus eliminate references to objects in memory, which is often useful for whatever GC related purposes. Answer to your second question: As to the second part of your question about del removing objects completely — that's not the case: in fact in Python, it is not even possible to tell the interpreter/VM to remove an object from memory because Python is a garbage collected language (like Java, C#, Ruby, Haskell etc) and it's the runtime that decides what to remove and when. Instead, what del does when called on a variable (as opposed to a dictionary key or list item) like this: del a is that it only removes the local (or global) variable and not what the variable points to (every variable in Python holds a pointer/reference to its contents not the content itself). In fact, since locals and globals are stored as a dictionary under the hood (see locals() and globals()), del a is equivalent to: del locals()['a'] or del globals()['a'] when applied to a global. 
so if you have: a = [] b = a you're making a list, storing a reference to it in a and then making another copy of that reference and storing it into b without copying/touching the list object itself. Therefore, these two calls affect one and the same object: a.append(1) b.append(2) # the list will be [1, 2] whereas deleting b is in no way related to touching what b points to: a = [] b = a del b # a is still untouched and points to a list Also, even when you call del on an object attribute (e.g. del self.a), you're still actually modifying a dictionary self.__dict__ just like you are actually modifying locals()/globals() when you do del a. P.S. as Sven Marcnah has pointed out that del locals()['a'] does not actually delete the local variable a when inside a function, which is correct. This is probably due to locals() returning a copy of the actual locals. However, the answer is still generally valid.
Is there "Edit and Continue" in PyCharm? Reload code into running program like in Eclipse / PyDev?
Hi all Python developers! In Eclipse with PyDev it is possible to edit a Python file while debugging. On save, the PyDev debugger will reload the updated code into the running program and uses my new code. How can I do the same thing in JetBrains PyCharm (using Community Edition)? Eclipse / PyDev writes an output like this when I do that: pydev debugger: Start reloading module: "MyWidget" ... pydev debugger: Updated function code: <function close at 0x055F4E70> pydev debugger: reload finished I searched settings and web and could not find any hint. Very glad about any idea. Thx. Edit: I found out in Eclipse/PyDev one has to be in debug mode to be able to use this feature. I tested in PyCharm, but there was no reload done.
After all, I found a useful and acceptable workaround for my question. It works in PyCharm Community Edition 3.1.2 and I assume it will work in the commercial edition as well. I tested on a mid-scale project using Python 2.7.6, PySide (Qt) with one main window and 20+ widgets, tabs, whatever. Follow these steps... Work in PyCharm on a Python project :-) Execute your code in Debug mode (I have not tried Release so far) Edit some code in one of your modules imported during the life of your program Make your program pause. To achieve this, you can click the "Pause" button in PyCharm's Debug view and then click any place in your application's main window where it would need to do something (for example on a tab header). If you have a long-running task and no UI, you may place a breakpoint in a place your program often comes by. In the Debug view, switch to the Console tab. There is a button on the left, Show command line. Click this. In the console, type in reload(MyModifiedModule) if this call fails, write import MyModifiedModule and try again. Click resume in PyCharm. Try the code you fixed. There are some restrictions on this... It won't fix changes in your main method or main window, because it won't be created again. In my tests I could not reload widgets from Qt. But it worked for classes like data containers or workers. May the force be with you as you try this and do not hesitate to add your experiences. Thank you.
How to properly assert that exception raises in pytest?
Code: # coding=utf-8 import pytest def whatever(): return 9/0 def test_whatever(): try: whatever() except ZeroDivisionError as exc: pytest.fail(exc, pytrace=True) Output: ================================ test session starts ================================= platform linux2 -- Python 2.7.3 -- py-1.4.20 -- pytest-2.5.2 plugins: django, cov collected 1 items pytest_test.py F ====================================== FAILURES ====================================== ___________________________________ test_whatever ____________________________________ def test_whatever(): try: whatever() except ZeroDivisionError as exc: > pytest.fail(exc, pytrace=True) E Failed: integer division or modulo by zero pytest_test.py:12: Failed ============================== 1 failed in 1.16 seconds ============================== How can I make pytest print the traceback, so I can see where in the whatever function the exception was raised?
pytest.raises(Exception) is what you need. Code import pytest def test_passes(): with pytest.raises(Exception) as e_info: x = 1 / 0 def test_passes_without_info(): with pytest.raises(Exception): x = 1 / 0 def test_fails(): with pytest.raises(Exception) as e_info: x = 1 / 1 def test_fails_without_info(): with pytest.raises(Exception): x = 1 / 1 # Don't do this. Assertions are caught as exceptions. def test_passes_but_should_not(): try: x = 1 / 1 assert False except Exception: assert True # Even if the appropriate exception is caught, it is bad style, # because the test result is less informative # than it would be with pytest.raises(e) # (it just says pass or fail.) def test_passes_but_bad_style(): try: x = 1 / 0 assert False except ZeroDivisionError: assert True def test_fails_but_bad_style(): try: x = 1 / 1 assert False except ZeroDivisionError: assert True Output ============================================================================================= test session starts ============================================================================================== platform linux2 -- Python 2.7.6 -- py-1.4.26 -- pytest-2.6.4 collected 7 items test.py ..FF..F =================================================================================================== FAILURES =================================================================================================== __________________________________________________________________________________________________ test_fails __________________________________________________________________________________________________ def test_fails(): with pytest.raises(Exception) as e_info: > x = 1 / 1 E Failed: DID NOT RAISE test.py:13: Failed ___________________________________________________________________________________________ test_fails_without_info ____________________________________________________________________________________________ def test_fails_without_info(): with pytest.raises(Exception): > x = 1 / 1 E Failed: DID NOT RAISE test.py:17: Failed ___________________________________________________________________________________________ test_fails_but_bad_style ___________________________________________________________________________________________ def test_fails_but_bad_style(): try: x = 1 / 1 > assert False E assert False test.py:43: AssertionError ====================================================================================== 3 failed, 4 passed in 0.02 seconds ======================================================================================
Argparse with required subparser
I'm using Python 3.4, I'm trying to use argparse with subparsers, and I want to have a similar behavior to the one in Python 2.x where if I don't supply a positional argument (to indicate the subparser/subprogram) I'll get a helpful error message. I.e., with python2 I'll get the following error message: $ python2 subparser_test.py usage: subparser_test.py [-h] {foo} ... subparser_test.py: error: too few arguments I'm setting the required attribute as suggested in http://stackoverflow.com/a/22994500/3061818, however that gives me an error with Python 3.4.0: TypeError: sequence item 0: expected str instance, NoneType found - full traceback: $ python3 subparser_test.py Traceback (most recent call last): File "subparser_test.py", line 17, in <module> args = parser.parse_args() File "/usr/local/Cellar/python3/3.4.0/Frameworks/Python.framework/Versions/3.4/lib/python3.4/argparse.py", line 1717, in parse_args args, argv = self.parse_known_args(args, namespace) File "/usr/local/Cellar/python3/3.4.0/Frameworks/Python.framework/Versions/3.4/lib/python3.4/argparse.py", line 1749, in parse_known_args namespace, args = self._parse_known_args(args, namespace) File "/usr/local/Cellar/python3/3.4.0/Frameworks/Python.framework/Versions/3.4/lib/python3.4/argparse.py", line 1984, in _parse_known_args ', '.join(required_actions)) TypeError: sequence item 0: expected str instance, NoneType found This is my program subparser_test.py - adapted from https://docs.python.org/3.2/library/argparse.html#sub-commands: import argparse # sub-command functions def foo(args): print('"foo()" called') # create the top-level parser parser = argparse.ArgumentParser() subparsers = parser.add_subparsers() subparsers.required = True # create the parser for the "foo" command parser_foo = subparsers.add_parser('foo') parser_foo.set_defaults(func=foo) args = parser.parse_args() args.func(args) Related question: Why does this argparse code behave differently between Python 2 and 3?
You need to give subparsers a dest. parser = argparse.ArgumentParser() subparsers = parser.add_subparsers(dest='cmd') subparsers.required = True Now: 1909:~/mypy$ argdev/python3 stack23349349.py usage: stack23349349.py [-h] {foo} ... stack23349349.py: error: the following arguments are required: cmd In order to issue this 'missing arguments' error message, the code needs to give that argument a name. For a positional argument (like subparses), that name is (by default) the 'dest'. There's a (minor) note about this in the SO answer you linked. One of the few 'patches' to argparse in the last Python release changed how it tests for 'required' arguments. Unfortunately it introduced this bug regarding subparsers. This needs to be fixed in the next release (if not sooner).
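Putting the fix together with the original script from the question, a minimal sketch looks like this (the dest name 'cmd' is arbitrary; giving the subparsers a dest and marking them required is what restores the Python 2 behaviour):

import argparse

def foo(args):
    print('"foo()" called')

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest='cmd')  # dest gives the positional a name
subparsers.required = True                      # force a subcommand to be supplied

parser_foo = subparsers.add_parser('foo')
parser_foo.set_defaults(func=foo)

args = parser.parse_args()
args.func(args)

Running it without arguments should now print "the following arguments are required: cmd" instead of the TypeError.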
django-admin.py makemessages not working
I am trying to translate a string. {% load i18n %} {% trans "Well, Hello there, how are you?" %} to... Hola amigo, ¿que tal? My settings.py file has this: LOCALE_PATHS = ( os.path.join(BASE_DIR, 'translations'), ) And I am getting this: (env)glitch:translations nathann$ django-admin.py compilemessages CommandError: Can't find msgfmt. Make sure you have GNU gettext tools 0.15 or newer installed. I also don't understand this error message. (env)glitch:ipals nathann$ django-admin.py makemessages -l es CommandError: This script should be run from the Django Git tree or your project or app tree. If you did indeed run it from the Git checkout or your project or application, maybe you are just missing the conf / locale(in the django tree) or locale(for project and application) directory? It is not created automatically, you have to create it by hand if you want to enable i18n for your project or application. The docs: https://docs.djangoproject.com/en/1.6/ref/django-admin/#django-admin-makemessages And for bonus upvotes, a related question: gettext wasn't linked when I installed it... Any help with this one? Should I force it? glitch:translations nathann$ brew link gettext Warning: gettext is keg-only and must be linked with --force Note that doing so can interfere with building software. Thanks! UPDATES: I have since changed the name of translations to locale and updated my settings.py accordingly. then I ran this again and it's still complaining about gettext: (env)glitch:ipals nathann$ mv translations/ locale (env)glitch:ipals nathann$ django-admin.py makemessages -l es CommandError: Can't find xgettext. Make sure you have GNU gettext tools 0.15 or newer installed. I also found this: Understand homebrew and keg-only dependencies after reading this: (env)glitch:ipals nathann$ brew install gettext Warning: gettext-0.18.3.2 already installed (env)glitch:ipals nathann$ brew link gettext Warning: gettext is keg-only and must be linked with --force Note that doing so can interfere with building software.
After making sure I had this in settings: LOCALE_PATHS = ( os.path.join(BASE_DIR, 'locale'), ) print(LOCALE_PATHS) I double checked I had the locale directory in the right place with its name spelled correctly. I ended up linking gettext (after asking about that on superuser): brew link gettext --force manage.py compilemessages django-admin.py makemessages -l es And BAM. I've got my po file. But the doctor says: Warning: Some keg-only formula are linked into the Cellar. Linking a keg-only formula, such as gettext, into the cellar with `brew link <formula>` will cause other formulae to detect them during the `./configure` step. This may cause problems when compiling those other formulae. Binaries provided by keg-only formulae may override system binaries with other strange results. You may wish to `brew unlink` these brews: gettext
How can I "unpivot" specific columns from a pandas DataFrame?
I have a pandas DataFrame, eg: x = DataFrame.from_dict({'farm' : ['A','B','A','B'], 'fruit':['apple','apple','pear','pear'], '2014':[10,12,6,8], '2015':[11,13,7,9]}) ie: 2014 2015 farm fruit 0 10 11 A apple 1 12 13 B apple 2 6 7 A pear 3 8 9 B pear How can I convert it to this: ? farm fruit value year 0 A apple 10 2014 1 B apple 12 2014 2 A pear 6 2014 3 B pear 8 2014 4 A apple 11 2015 5 B apple 13 2015 6 A pear 7 2015 7 B pear 9 2015 I have tried stack and unstack but haven't been able to make it work. Thanks!
This can be done with pd.melt(): # value_name is 'value' by default, but setting it here to make it clear pd.melt(x, id_vars=['farm', 'fruit'], var_name='year', value_name='value') Result: farm fruit year value 0 A apple 2014 10 1 B apple 2014 12 2 A pear 2014 6 3 B pear 2014 8 4 A apple 2015 11 5 B apple 2015 13 6 A pear 2015 7 7 B pear 2015 9 [8 rows x 4 columns] I'm not sure how common "melt" is as the name for this kind of operation, but that's what it's called in R's reshape2 package, which probably inspired the name here.
How to convert webpage into PDF by using Python
I was looking for a solution to print a webpage into a local PDF file, using Python. One good solution is to use Qt, found here: https://bharatikunal.wordpress.com/2010/01/. It didn't work at the beginning as I had problems with the installation of PyQt4 because it gave error messages such as 'ImportError: No module named PyQt4.QtCore', and 'ImportError: No module named PyQt4.QtCore'. It was because PyQt4 was not installed properly. I used to have the libraries located at C:\Python27\Lib, however that is not where PyQt4 goes. In fact, it simply needs to be downloaded from http://www.riverbankcomputing.com/software/pyqt/download (mind the correct Python version you are using) and installed to C:\Python27 (in my case). That's it. Now the script runs fine so I want to share it. For more options in using QPrinter, please refer to http://qt-project.org/doc/qt-4.8/qprinter.html#Orientation-enum.
You also can use pdfkit: import pdfkit pdfkit.from_url('http://google.com', 'out.pdf')
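pdfkit is a thin wrapper around the wkhtmltopdf binary, which has to be installed separately. A slightly fuller sketch, with the binary path passed explicitly for the case where wkhtmltopdf is not on PATH (the path shown is only an example):

import pdfkit

# point pdfkit at the wkhtmltopdf binary if it is not on PATH (example path)
config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')

pdfkit.from_url('http://google.com', 'from_url.pdf', configuration=config)
pdfkit.from_file('page.html', 'from_file.pdf', configuration=config)
pdfkit.from_string('<h1>Hello</h1>', 'from_string.pdf', configuration=config)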
django - comparing old and new field value before saving
I have a Django model, and I need to compare the old and new values of a field BEFORE saving. I've tried save() inheritance and the pre_save signal. It was triggered correctly, but I can't find the list of actually changed fields and can't compare the old and new values. Is there a way? I need it for optimization of pre-save actions. Thank you!
There is very simple django way for doing it. "Memorise" the values in model init like this: def __init__(self, *args, **kwargs): super(MyClass, self).__init__(*args, **kwargs) self.initial_parametername = self.parametername --- self.initial_parameternameX = self.parameternameX Real life example: At class: def __init__(self, *args, **kwargs): super(MyClass, self).__init__(*args, **kwargs) self.__important_fields = ['target_type', 'target_id', 'target_object', 'number', 'chain', 'expiration_date'] for field in self.__important_fields: setattr(self, '__original_%s' % field, getattr(self, field)) def has_changed(self): for field in self.__important_fields: orig = '__original_%s' % field if getattr(self, orig) != getattr(self, field): return True return False And then in modelform save method: def save(self, force_insert=False, force_update=False, commit=True): # Prep the data obj = super(MyClassForm, self).save(commit=False) if obj.has_changed(): # If we're down with commitment, save this shit if commit: obj.save(force_insert=True) return obj
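If you would rather not touch __init__, the same comparison can be done in the pre_save signal mentioned in the question by re-fetching the current row from the database. This costs one extra query per save; MyClass and parametername below are just placeholders for your own model and field:

from django.db.models.signals import pre_save
from django.dispatch import receiver

@receiver(pre_save, sender=MyClass)
def detect_change(sender, instance, **kwargs):
    if not instance.pk:
        return  # new object, nothing to compare against
    old = sender.objects.get(pk=instance.pk)
    if old.parametername != instance.parametername:
        # the field is about to change; react here
        pass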
AWS aws.push ImportError: No module named boto in Ubuntu
I'm trying to follow this tutorial: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html in order to deploy a Ruby on Rails app in AWS with Ubuntu. Everything went OK (I can run my app locally), until the final step. When I run aws.push I get the following error. roberto@ubuntu:~/dev/myapp$ git aws.push Traceback (most recent call last): File ".git/AWSDevTools/aws.elasticbeanstalk.push", line 21, in <module> from aws.dev_tools import * File "/home/roberto/dev/myapp/.git/AWSDevTools/aws/dev_tools.py", line 5, in <module> import boto ImportError: No module named boto I have read this post git aws.push: No module named boto and run: pip install boto pip freeze > requirements.txt git add . git commit -m "Added boto" git aws.push But still the same result. UPDATE: I think the problem is related to my Python versions. When I run which python I get /usr/bin/python. If I do ls this folder I see python, python2, python2.7, python3, python3.4. When I run python I get: Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> I don't know what to do. The problem was that the first boto installation didn't work due to permission problems, and I didn't realize it. I ran sudo pip install boto and everything went OK this time.
What happened is that the eb command line interface available in that specific AWS tutorial (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html) does not provide the latest version of boto. When you did pip install boto you installed the latest version from https://github.com/boto/boto which solved the issue.
Python 3.4.0 with MySQL database
I have installed Python version 3.4.0 and I would like to do a project with MySQL database. I downloaded and tried installing MySQLdb, but it wasn't successful for this version of Python. Any suggestions how could I fix this problem and install it properly?
MySQLdb does not support Python 3 but it is not the only MySQL driver for Python. mysqlclient is essentially just a fork of MySQLdb with Python 3 support merged in (and a few other improvements). PyMySQL is a pure python MySQL driver, which means it is slower, but it does not require a compiled C component or MySQL libraries and header files to be installed on client machines. It has Python 3 support. Another option is simply to use another database system like PostgreSQL.
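A minimal PyMySQL sketch for Python 3 (the connection credentials are placeholders; the install_as_MySQLdb() call is the usual trick for code, such as Django, that still does "import MySQLdb"):

import pymysql
pymysql.install_as_MySQLdb()  # optional: lets "import MySQLdb" resolve to PyMySQL

conn = pymysql.connect(host='localhost', user='dbuser', passwd='secret', db='mydb')
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
cursor.close()
conn.close()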
Pandas percentage of total with groupby
This is obviously simple, but as a numpy newbe I'm getting stuck. I have a CSV file that contains 3 columns, the State, the Office ID, and the Sales for that office. I want to calculate the percentage of sales per office in a given state (total of all percentages in each state is 100%). df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3, 'office_id': range(1, 7) * 2, 'sales': [np.random.randint(100000, 999999) for _ in range(12)]}) df.groupby(['state', 'office_id']).agg({'sales': 'sum'}) This returns: sales state office_id AZ 2 839507 4 373917 6 347225 CA 1 798585 3 890850 5 454423 CO 1 819975 3 202969 5 614011 WA 2 163942 4 369858 6 959285 I can't seem to figure out how to "reach up" to the state level of the groupby to total up the sales for the entire state to calculate the fraction.
Paul H's answer is right that you will have to make a second groupby object, but you can calculate the percentage in a simpler way -- just group the state_office frame by the state level and divide the sales column by its sum. Starting with the state_office df in Paul H's answer: state_pcts = state_office.groupby(level=0).apply(lambda x: 100*x/float(x.sum()))
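For completeness, here is the whole pipeline in one place, combining the groupby from the question with the division above (column names as in the question):

import numpy as np
import pandas as pd

df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3,
                   'office_id': list(range(1, 7)) * 2,
                   'sales': [np.random.randint(100000, 999999) for _ in range(12)]})

state_office = df.groupby(['state', 'office_id']).agg({'sales': 'sum'})
# divide each office's sales by its state's total
state_pcts = state_office.groupby(level=0).apply(lambda x: 100 * x / float(x.sum()))
print(state_pcts)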
python BeautifulSoup parsing table
Folks, I am learning Python requests and BeautifulSoup. For the exercise, I've chosen to write a quick NYC parking ticket parser. I am able to get an HTML response which is quite ugly. I need to grab the lineItemsTable and parse all the tickets. You can reproduce the page by going here: https://paydirect.link2gov.com/NYCParking-Plate/ItemSearch and entering a NY plate T630134C soup = BeautifulSoup(plateRequest.text) #print(soup.prettify()) #print soup.find_all('tr') table = soup.find("table", { "class" : "lineItemsTable" }) for row in table.findAll("tr"): cells = row.findAll("td") print cells Can someone please help me out? Simply looking for all tr does not get me anywhere. Thanks!!!
Here you go: data = [] table = soup.find('table', attrs={'class':'lineItemsTable'}) table_body = table.find('tbody') rows = table_body.find_all('tr') for row in rows: cols = row.find_all('td') cols = [ele.text.strip() for ele in cols] data.append([ele for ele in cols if ele]) # Get rid of empty values This gives you: [ [u'1359711259', u'SRF', u'08/05/2013', u'5310 4 AVE', u'K', u'19', u'125.00', u'$'], [u'7086775850', u'PAS', u'12/14/2013', u'3908 6th Ave', u'K', u'40', u'125.00', u'$'], [u'7355010165', u'OMT', u'12/14/2013', u'3908 6th Ave', u'K', u'40', u'145.00', u'$'], [u'4002488755', u'OMT', u'02/12/2014', u'NB 1ST AVE @ E 23RD ST', u'5', u'115.00', u'$'], [u'7913806837', u'OMT', u'03/03/2014', u'5015 4th Ave', u'K', u'46', u'115.00', u'$'], [u'5080015366', u'OMT', u'03/10/2014', u'EB 65TH ST @ 16TH AV E', u'7', u'50.00', u'$'], [u'7208770670', u'OMT', u'04/08/2014', u'333 15th St', u'K', u'70', u'65.00', u'$'], [u'$0.00\n\n\nPayment Amount:'] ] Couple of things to note: The last row in the output above, the Payment Amount is not a part of the table but that is how the table is laid out. You can filter it out by checking if the length of the list is less than 7. The last column of every row will have to be handled separately since it is an input text box.
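If the table has header cells, it can be convenient to zip them with each row so you get dictionaries instead of bare lists. This sketch assumes the lineItemsTable actually contains th elements, which is worth verifying against the real page:

table = soup.find('table', attrs={'class': 'lineItemsTable'})
headers = [th.text.strip() for th in table.find_all('th')]

records = []
for tr in table.find('tbody').find_all('tr'):
    cells = [td.text.strip() for td in tr.find_all('td')]
    if len(cells) == len(headers):        # skip summary/footer rows
        records.append(dict(zip(headers, cells)))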
Python Scipy FFT wav files
I have a handful of wav files. I'd like to use SciPy FFT to plot the frequency spectrum of these wav files. How would I go about doing this?
Python provides several api to do this fairly quickly. I download the sheep-bleats wav file from this link. You can save it on the desktop and cd there within terminal. These lines in the python prompt should be enough: (omit >>>) import matplotlib.pyplot as plt from scipy.fftpack import fft from scipy.io import wavfile # get the api fs, data = wavfile.read('test.wav') # load the data a = data.T[0] # this is a two channel soundtrack, I get the first track b=[(ele/2**8.)*2-1 for ele in a] # this is 8-bit track, b is now normalized on [-1,1) c = fft(b) # calculate fourier transform (complex numbers list) d = len(c)/2 # you only need half of the fft list (real signal symmetry) plt.plot(abs(c[:(d-1)]),'r') plt.show() Here is a plot for the input signal: Here is the spectrum For the correct output, you will have to convert the xlabelto the frequency for the spectrum plot. k = arange(len(data)) T = len(data)/fs # where fs is the sampling frequency frqLabel = k/T If you are have to deal with a bunch of files, you can implement this as a function: put these lines in the test2.py: import matplotlib.pyplot as plt from scipy.io import wavfile # get the api from scipy.fftpack import fft from pylab import * def f(filename): fs, data = wavfile.read(filename) # load the data a = data.T[0] # this is a two channel soundtrack, I get the first track b=[(ele/2**8.)*2-1 for ele in a] # this is 8-bit track, b is now normalized on [-1,1) c = fft(b) # create a list of complex number d = len(c)/2 # you only need half of the fft list plt.plot(abs(c[:(d-1)]),'r') savefig(filename+'.png',bbox_inches='tight') Say, I have test.wav and test2.wav in the current working dir, the following command in python prompt interface is sufficient: import test2 map(test2.f, ['test.wav','test2.wav']) Assuming you have 100 such files and you do not want to type their names individually, you need the glob package: import glob import test2 files = glob.glob('./*.wav') for ele in files: f(ele) quit() You will need to add getparams in the test2.f if your .wav files are not of the same bit.
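To label the x axis in hertz directly, numpy's rfft/rfftfreq pair keeps only the non-negative frequencies and hands you the matching axis, which avoids the manual half-spectrum slicing above. A sketch on the same normalized signal b and sample rate fs from the answer:

import numpy as np
import matplotlib.pyplot as plt

freqs = np.fft.rfftfreq(len(b), d=1.0 / fs)   # frequency axis in Hz
spectrum = np.abs(np.fft.rfft(b))             # magnitude of the one-sided FFT
plt.plot(freqs, spectrum, 'r')
plt.xlabel('Frequency (Hz)')
plt.show()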
How to avoid the spell check on string in Pycharm
Where is the option to disable the spell check on the strings for the PyCharm IDE? I hate the jagged line under my comments and strings.
You can go to File -> Settings -> Editor -> Inspections. You then expand the list under "Spelling" in the middle window and uncheck the option Typo. That is it. However, in practice most of the jagged lines are caused by violations of the "PEP 8 coding style" (see here for more info). So, if you would like to disable this option too, you can, in the same window, expand the list under "Python" and uncheck the option "PEP 8 coding style violation".
AttributeError when using "import dateutil" and "dateutil.parser.parse()" but no problems when using "from dateutil import parser"
I was playing with the dateutil module in Python 2.7.3. I simply wanted to use: import dateutil dateutil.parser.parse("01-02-2013") But I got an error: AttributeError: 'module' object has no attribute 'parser' I checked what attributes dateutil does have: print dir(dateutil) # output: ['__author__', '__builtins__', '__doc__', '__file__', '__license__', # '__name__', '__package__', '__path__', '__version__'] The thing is, when I try to import parser from dateutil directly, it does seem to exist: from dateutil import parser print parser.parse("01-02-2013") # output: 2013-01-02 00:00:00 After the from dateutil import parser, parser has also magically appeared in the imported dateutil itself: print dir(dateutil) # output: ['__author__', '__builtins__', '__doc__', '__file__', '__license__', # '__name__', '__package__', '__path__', '__version__', 'parser', # 'relativedelta', 'tz'] Note that some other attributes (like rrule) are still missing from this list. Anyone knows what's going on?
You haven't imported dateutil.parser. You can see it, but you have to somehow import it. >>> import dateutil.parser >>> dateutil.parser.parse("01-02-2013") datetime.datetime(2013, 1, 2, 0, 0) That's because the parser.py is a module in the dateutil package. It's a separate file in the folder structure. Answer to the question you asked in the comments, the reason why relativedelta and tz appear in the namespace after you've from dateutil import parser is because parser itself imports relativedelta and tz. If you look at the source code of dateutil/parser.py, you can see the imports. # -*- coding:iso-8859-1 -*- """ Copyright (c) 2003-2007 Gustavo Niemeyer <gustavo@niemeyer.net> This module offers extensions to the standard Python datetime module. """ ... snip ... from . import relativedelta from . import tz
How to start a Celery worker from a script/module __main__?
I've define a Celery app in a module, and now I want to start the worker from the same module in its __main__, i.e. by running the module with python -m instead of celery from the command line. I tried this: app = Celery('project', include=['project.tasks']) # do all kind of project-specific configuration # that should occur whenever this module is imported if __name__ == '__main__': # log stuff about the configuration app.start(['worker', '-A', 'project.tasks']) but now Celery thinks I'm running the worker without arguments: Usage: worker <command> [options] Show help screen and exit. Options: -A APP, --app=APP app instance to use (e.g. module.attr_name) [snip] The usage message is the one you get from celery --help, as if it didn't get a command. I've also tried app.worker_main(['-A', 'project.tasks']) but that complains about the -A not being recognized. So how do I do this? Or alternatively, how do I pass a callback to the worker to have it log information about its configuration?
using app.worker_main method (v3.1.12): ± cat start_celery.py #!/usr/bin/python from myapp import app if __name__ == "__main__": argv = [ 'worker', '--loglevel=DEBUG', ] app.worker_main(argv)
What is the purpose of collections.ChainMap?
In Python 3.3 a ChainMap class was added to the collections module: A ChainMap class is provided for quickly linking a number of mappings so they can be treated as a single unit. It is often much faster than creating a new dictionary and running multiple update() calls. Example: >>> from collections import ChainMap >>> x = {'a': 1, 'b': 2} >>> y = {'b': 10, 'c': 11} >>> z = ChainMap(y, x) >>> for k, v in z.items(): print(k, v) a 1 c 11 b 10 It was motivated by this issue and made public by this one (no PEP was created). As far as I understand, it is an alternative to having an extra dictionary and maintaining it with update()s. The questions are: What use cases does ChainMap cover? Are there any real world examples of ChainMap? Is it used in third-party libraries that switched to python3? Bonus question: is there a way to use it on Python2.x? I've heard about it in Transforming Code into Beautiful, Idiomatic Python PyCon talk by Raymond Hettinger and I'd like to add it to my toolkit, but I lack in understanding when should I use it.
I like @b4hand's examples, and indeed I have used in the past ChainMap-like structures (but not ChainMap itself) for the two purposes he mentions: multi-layered configuration overrides, and variable stack/scope emulation. I'd like to point out two other motivations/advantages/differences of ChainMap, compared to using a dict-update loop, thus only storing the "final" version": More information: since a ChainMap structure is "layered", it supports answering question like: Am I getting the "default" value, or an overridden one? What is the original ("default") value? At what level did the value get overridden (borrowing @b4hand's config example: user-config or command-line-overrides)? Using a simple dict, the information needed for answering these questions is already lost. Speed tradeoff: suppose you have N layers and at most M keys in each, constructing a ChainMap takes O(N) and each lookup O(N) worst-case[*], while construction of a dict using an update-loop takes O(NM) and each lookup O(1). This means that if you construct often and only perform a few lookups each time, or if M is big, ChainMap's lazy-construction approach works in your favor. [*] The analysis in (2) assumes dict-access is O(1), when in fact it is O(1) on average, and O(M) worst case. See more details here.
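A concrete example of the layered-configuration use case, close to the one Raymond Hettinger shows in the talk mentioned in the question: command-line arguments override environment variables, which override built-in defaults. (For Python 2 there is a chainmap backport package on PyPI, though I have not pinned a specific version here.)

import argparse
import os
from collections import ChainMap

defaults = {'color': 'red', 'user': 'guest'}

parser = argparse.ArgumentParser()
parser.add_argument('--color')
parser.add_argument('--user')
cli_args = {k: v for k, v in vars(parser.parse_args()).items() if v is not None}

# first mapping wins on lookup: CLI > environment > defaults
config = ChainMap(cli_args, os.environ, defaults)
print(config['color'], config['user'])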
Spark Context Textfile: load multiple files
I need to process multiple files scattered across various directories. I would like to load all these up in a single RDD and then perform map/reduce on it. I see that SparkContext is able to load multiple files from a single directory using wildcards. I am not sure how to load up files from multiple folders. The following code snippet fails: for fileEntry in files: fileName = basePath + "/" + fileEntry lines = sc.textFile(fileName) if retval == None: retval = lines else: retval = sc.union(retval, lines) This fails on the third loop with the following error message: retval = sc.union(retval, lines) TypeError: union() takes exactly 2 arguments (3 given) Which is bizarre given I am providing only 2 arguments. Any pointers appreciated.
How about this phrasing instead? sc.union([sc.textFile(basepath + "/" + f) for f in files]) In Scala SparkContext.union() has two variants, one that takes vararg arguments, and one that takes a list. Only the second one exists in Python (since Python does not have method overloading). UPDATE You can use a single textFile call to read multiple files. sc.textFile(','.join(files))
Why isn't IEnumerable consumed?/how do generators work in c# compared to python
So I thought I understood c# yield return as being largely the same as pythons yield which I thought that I understood. I thought that the compiler transforms a function into an object with a pointer to where execution should be resumed and when a request for the next value comes along the object runs up to the next yield where it updates the pointer of where to resume execution and returns a value. In python this works sort of similarly to lazy evaluation in that it produces values as needed but once the values are used once they can be gc'ed if not save in another variable. Trying to iterate over the result of such a function twice returns an empty iterable unless you transform it to a list. ex. def y(): list = [1,2,3,4] for i in list: yield str(i) ys = y() print "first ys:" print ",".join(ys) print "second ys:" print ",".join(ys) outputs first ys: 1,2,3,4 second ys: Until recently I thought the same thing was true for c# but trying it out in dotnetfiddle failed. http://dotnetfiddle.net/W5Cbv6 using System; using System.Linq; using System.Collections.Generic; public class Program { public static IEnumerable<string> Y() { var list = new List<string> {"1","2","3","4","5"}; foreach(var i in list) { yield return i; } } public static void Main() { var ys = Y(); Console.WriteLine("first ys"); Console.WriteLine(string.Join(",", ys)); Console.WriteLine("second ys"); Console.WriteLine(string.Join(",", ys)); } } outputs first ys 1,2,3,4,5 second ys 1,2,3,4,5 What is happening here? Is it caching the result? It can't be right, otherwise File.ReadLines would blow up on huge files? Is it simply restarting the function from the top a second time? note: I'm a bit uncertain about some of the terminology of generators and coroutines so I've tried to avoid labelling.
You're very close. An IEnumerable is an object capable of creating an iterator (an IEnumerator). An IEnumerator behaves exactly as you've described. So the IEnumerable generates generators. Unless you go out of your way to generate some sort of state shared between the generated iterators, IEnumerator objects won't affect each other, whether they are from separate calls to the iterator block or another IEnumerator generated by the same IEnumerable.
Circular list iterator in Python
I need to iterate over a circular list, possibly many times, each time starting with the last visited item. The use case is a connection pool. A client asks for connection, an iterator checks if pointed-to connection is available and returns it, otherwise loops until it finds one that is available. Is there a neat way to do it in Python?
Use itertools.cycle, that's its exact purpose: from itertools import cycle lst = ['a', 'b', 'c'] pool = cycle(lst) for item in pool: print item, Output: a b c a b c ... (Loops forever, obviously) In order to manually advance the iterator and pull values from it one by one, simply call next(pool): >>> next(pool) 'a' >>> next(pool) 'b'
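Applied to the connection-pool use case from the question, a rough sketch could look like this; is_available() is a hypothetical method on your connection objects, and note that the loop never terminates if no connection ever becomes free, so a real pool would want a retry limit or a wait:

from itertools import cycle

class ConnectionPool(object):
    def __init__(self, connections):
        self._round_robin = cycle(connections)

    def acquire(self):
        for conn in self._round_robin:       # resumes where the last call left off
            if conn.is_available():          # hypothetical availability check
                return conn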
Python import error: 'No module named' does exist
I am getting this stack trace when I start pyramid pserve: % python $(which pserve) ../etc/development.ini Traceback (most recent call last): File "/home/hughdbrown/.local/bin/pserve", line 9, in <module> load_entry_point('pyramid==1.5', 'console_scripts', 'pserve')() File "/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/pyramid-1.5-py2.7.egg/pyramid/scripts/pserve.py", line 51, in main return command.run() File "/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/pyramid-1.5-py2.7.egg/pyramid/scripts/pserve.py", line 316, in run global_conf=vars) File "/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/pyramid-1.5-py2.7.egg/pyramid/scripts/pserve.py", line 340, in loadapp return loadapp(app_spec, name=name, relative_to=relative_to, **kw) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 247, in loadapp return loadobj(APP, uri, name=name, **kw) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 271, in loadobj global_conf=global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 320, in _loadconfig return loader.get_context(object_type, name, global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 454, in get_context section) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 476, in _context_from_use object_type, name=use, global_conf=global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 406, in get_context global_conf=global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 337, in _loadfunc return loader.get_context(object_type, name, global_conf) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/loadwsgi.py", line 681, in get_context obj = lookup_object(self.spec) File "/home/hughdbrown/.virtualenvs/ponder/lib/python2.7/site-packages/PasteDeploy-1.5.2-py2.7.egg/paste/deploy/util.py", line 68, in lookup_object module = __import__(parts) File "/home/hughdbrown/.virtualenvs/ponder/local/lib/python2.7/site-packages/ponder-0.0.40-py2.7.egg/ponder/server/__init__.py", line 10, in <module> from ponder.server.views import Endpoints, route ImportError: No module named views This works fine from a python REPL: % python Python 2.7.5+ (default, Feb 27 2014, 19:37:08) [GCC 4.8.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> from ponder.server.views import Endpoints, route >>> and from a command line import: % python -c "from ponder.server.views import Endpoints, route" An abridged tree output shows what I am working with: % tree ├── __init__.py ├── ponder │   ├── __init__.py │   ├── server │   │   ├── __init__.py │   │   └── views │   │   ├── environment_templates.py │   │   ├── groups.py │   │   ├── __init__.py │   │   ├── instances.py │   │   ├── tasks.py │   │   └── users.py My PYTHONPATH is set to the root of this tree: % echo $PYTHONPATH /home/hughdbrown/workspace/ept/ponder/lib I am running this in a virtualenv that uses python 2.7. I have had this working off and on today but I can't figure out where the problem is. For one thing, the __init__.py seems to be okay with some imports that come just before: from .database import get_db from .config import parser from .views import Endpoints, route (I changed the last line to an absolute import. No luck.) Things that I have tried: Rebuilding virtualenv Setting PYTHONPATH Using absolute paths in code Looking for circular imports I am open to further suggestions in how to debug this error. So the mistake I made was to look only at the source tree. The problem was really in the runtime environmment, in my virtualenv. And when I looked there, I found that the desired files were not being installed. The problem, at root, was the setup.py.
My usual trick is to simply print sys.path in the actual context where the import problem happens. In your case it would seem that the place for the print is in /home/hughdbrown/.local/bin/pserve. Then check the dirs and files in the places that path shows.
subprocess.check_output return code
I am using: grepOut = subprocess.check_output("grep " + search + " tmp", shell=True) To run a terminal command, I know that I can use a try/except to catch the error but how can I get the value of the error code? I found this on the official documentation: exception subprocess.CalledProcessError Exception raised when a process run by check_call() or check_output() returns a non-zero exit status. returncode Exit status of the child process. But there are no examples given and Google was of no help.
You can get the error code and results from the exception that is raised. This can be done through the fields returncode and output. For example: import subprocess try: grepOut = subprocess.check_output("grep " + "test" + " tmp", shell=True) except subprocess.CalledProcessError as grepexc: print "error code", grepexc.returncode, grepexc.output
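For grep specifically, it is worth remembering that a return code of 1 only means "no lines matched", while codes above 1 indicate a real error, so you may want to branch on returncode in the handler (same Python 2 style and same search variable as in the question):

import subprocess

try:
    grepOut = subprocess.check_output("grep " + search + " tmp", shell=True)
    print "matches:", grepOut
except subprocess.CalledProcessError as exc:
    if exc.returncode == 1:
        print "no matches found"
    else:
        print "grep failed with code", exc.returncode, exc.output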
Python: how to kill child process(es) when parent dies?
The child process is started with subprocess.Popen(arg) Is there a way to ensure it is killed when parent terminates abnormally? I need this to work both on Windows and Linux. I am aware of this solution for Linux. Edit: the requirement of starting a child process with subprocess.Popen(arg) can be relaxed, if a solution exists using a different method of starting a process.
Heh, I was just researching this myself yesterday! Assuming you can't alter the child program: On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.) On Windows, the most reliable options is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), then you place the child process into the Job, and you set the magic option that says "when no-one holds a 'handle' for this Job, then kill the processes that are in it". By default, the only 'handle' to the job is the one that your parent process holds, and when the parent process dies, the OS will go through and close all its handles, and then notice that this means there are no open handles for the Job. So then it kills the child, as requested. (If you have multiple child processes, you can assign them all to the same job.) This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child, instead of subprocess.Popen. The reason is that they need to get a "process handle" for the spawned child, and CreateProcess returns this by default. If you'd rather use subprocess.Popen, then here's an (untested) copy of the code from that answer, that uses subprocess.Popen and OpenProcess instead of CreateProcess: import subprocess import win32api import win32con import win32job hJob = win32job.CreateJobObject(None, "") extended_info = win32job.QueryInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation) extended_info['BasicLimitInformation']['LimitFlags'] = win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE win32job.SetInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation, extended_info) child = subprocess.Popen(...) # Convert process id to process handle: perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA hProcess = win32api.OpenProcess(perms, False, child.pid) win32job.AssignProcessToJobObject(hJob, hProcess) Technically, there's a tiny race condition here in case the child dies in between the Popen and OpenProcess calls, you can decide whether you want to worry about that. One downside to using a job object is that when running on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking on an icon), then there will probably already be a job object assigned and trying to create a new job object will fail. Win8 fixes this (by allowing job objects to be nested), or if your program is run from the command line then it should be fine. If you can modify the child (e.g., like when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g. as a command line argument, or in the args= argument to multiprocessing.Process), and then: On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the pid passed in from the parent, then call os._exit(). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific.) On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. 
Example using ctypes: from ctypes import WinDLL, WinError from ctypes.wintypes import DWORD, BOOL, HANDLE # Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx SYNCHRONIZE = 0x00100000 kernel32 = WinDLL("kernel32.dll") kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD) kernel32.OpenProcess.restype = HANDLE parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid) # Block until parent exits os.waitpid(parent_handle, 0) os._exit(0) This avoids any of the possible issues with job objects that I mentioned. If you want to be really, really sure, then you can combine all these solutions. Hope that helps!
Numpy selecting specific column index per row by using a list of indexes
I'm struggling to select the specific columns per row of a NumPy matrix. I'm after a good way of solving the following issue. Suppose I have the following matrix which I would call X [1, 2, 3] [4, 5, 6] [7, 8, 9] I also have a list of column indexes per every row which I would call Y [1, 0, 2] I need to get the values [2] [4] [9] Instead of a list with indexes Y I can also produce a matrix with the same shape as X where every entry is a bool/int (in the range 0-1) value indicating whether this is the required column. [0, 1, 0] [1, 0, 0] [0, 0, 1] I know this can be done by iterating over the array and selecting the column values I need, however I was wondering if there is a better solution. This will be executed frequently on big arrays of data and that's why it has to run as fast as it can. Thank you
If you've got a boolean array you can do direct selection based on that like so: >>> a = np.array([True, True, True, False, False]) >>> b = np.array([1,2,3,4,5]) >>> b[a] array([1, 2, 3]) To go along with your initial example you could do the following: >>> a = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> b = np.array([[False,True,False],[True,False,False],[False,False,True]]) >>> a[b] array([2, 4, 9]) You can also add in an arange and do direct selection on that, though depending on how you're generating your boolean array and what your code looks like YMMV. >>> a = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> a[np.arange(len(a)), [1,0,2]] array([2, 4, 9]) Hope that helps, let me know if you've got any more questions.
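And if you start from the 0/1 mask described in the question rather than from the index list, one option is to turn the mask back into per-row indexes first; this assumes exactly one marked column per row:

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
mask = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=bool)

cols = mask.argmax(axis=1)                 # index of the True entry in each row
vals = a[np.arange(a.shape[0]), cols]      # -> array([2, 4, 9])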
how to call python function from NodeJS
I have an Express NodeJS application, but I have a machine learning algorithm to use in Python. Is there any way I can call Python functions from my NodeJS application to make use of the power of machine learning libraries?
Easiest way I know of is to use "child_process" package which comes packaged with node. Then you can do something like: var spawn = require("child_process").spawn; var process = spawn('python',["path/to/script.py", arg1, arg2, ...]); Then all you have to do is make sure that you import sys in your python script, and then you can access arg1 using sys.argv[1], arg2 using sys.argv[2], and so on. To send data back to node just do the following the in python script: print(dataToSendBack) sys.stdout.flush() And then node can listen for data using: process.stdout.on('data', function (data){ // Do something with the data returned from python script }); Since this allows multiple arguments to be passed to a script using spawn, you can restructure a python script so that one of the arguments decides which function to call, and the other argument gets passed to that function, etc. Hope this was clear. Let me know if something needs clarification.
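For the Python side, a small sketch of what script.py might look like; the predict() function is a stand-in for your real machine-learning call, and emitting JSON makes the data easy to parse back in Node:

# script.py (hypothetical companion to the Node snippet above)
import sys
import json

def predict(x):
    return float(x) * 2   # placeholder for the real ML model

if __name__ == '__main__':
    arg1 = sys.argv[1]
    print(json.dumps({'result': predict(arg1)}))
    sys.stdout.flush()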
Is there a reason Python 3 enumerates slower than Python 2?
Python 3 appears to be slower in enumerations for a minimum loop than Python 2 by a significant margin, which appears to be getting worse with newer versions of Python 3. I have Python 2.7.6, Python 3.3.3, and Python 3.4.0 installed on my 64-bit Windows machine (Intel i7-2700K - 3.5 GHz) with both 32-bit and 64-bit versions of each Python installed. While there is no significant difference in execution speed between 32-bit and 64-bit for a given version within its limitations as to memory access, there is a very significant difference between different version levels. I'll let the timing results speak for themselves as follows: C:\Python34_64\python -mtimeit -n 5 -r 2 -s"cnt = 0" "for i in range(10000000): cnt += 1" 5 loops, best of 2: 900 msec per loop C:\Python33_64\python -mtimeit -n 5 -r 2 -s"cnt = 0" "for i in range(10000000): cnt += 1" 5 loops, best of 2: 820 msec per loop C:\Python27_64\python -mtimeit -n 5 -r 2 -s"cnt = 0" "for i in range(10000000): cnt += 1" 5 loops, best of 2: 480 msec per loop Since the Python 3 "range" is not the same as Python 2's "range", and is functionally the same as Python 2's "xrange", I also timed that as follows: C:\Python27_64\python -mtimeit -n 5 -r 2 -s"cnt = 0" "for i in xrange(10000000): cnt += 1" 5 loops, best of 2: 320 msec per loop One can easily see that version 3.3 is almost twice as slow as version 2.7 and Python 3.4 is about 10% slower than that again. My question: Is there an environment option or setting that corrects this, or is it just inefficient code or the interpreter doing more for the Python 3 version? The answer seems to be that Python 3 uses the "infinite precision" integers that used to be called "long" in Python 2.x as its default "int" type, without any option to use the Python 2 fixed bit-length "int", and it is the processing of these variable-length "int"s that is taking the extra time as discussed in the answers and comments below. It may be that Python 3.4 is somewhat slower than Python 3.3 because of changes to memory allocation to support synchronization that slightly slow memory allocation/deallocation, which is likely the main reason that the current version of "long" processing runs slower.
The difference is due to the replacement of the int type with the long type. Obviously operations with long integers are going to be slower because the long operations are more complex. If you force python2 to use longs by setting cnt to 0L the difference goes away: $python2 -mtimeit -n5 -r2 -s"cnt=0L" "for i in range(10000000): cnt += 1L" 5 loops, best of 2: 1.1 sec per loop $python3 -mtimeit -n5 -r2 -s"cnt=0" "for i in range(10000000): cnt += 1" 5 loops, best of 2: 686 msec per loop $python2 -mtimeit -n5 -r2 -s"cnt=0L" "for i in xrange(10000000): cnt += 1L" 5 loops, best of 2: 714 msec per loop As you can see on my machine python3.4 is faster than both python2 using range and using xrange when using longs. The last benchmark with python's 2 xrange shows that the difference in this case is minimal. I don't have python3.3 installed, so I cannot make a comparison between 3.3 and 3.4, but as far as I know nothing significant changed between these two versions (regarding range), so the timings should be about the same. If you see a significant difference try to inspect the generated bytecode using the dis module. There was a change about memory allocators (PEP 445) but I have no idea whether the default memory allocators were modified and which consequences there were performance-wise.
Scikit-learn balanced subsampling
I'm trying to create N balanced random subsamples of my large unbalanced dataset. Is there a way to do this simply with scikit-learn / pandas or do I have to implement it myself? Any pointers to code that does this? These subsamples should be random and can be overlapping as I feed each to separate classifier in a very large ensemble of classifiers. In Weka there is tool called spreadsubsample, is there equivalent in sklearn? http://wiki.pentaho.com/display/DATAMINING/SpreadSubsample (I know about weighting but that's not what I'm looking for.)
Here is my first version that seems to be working fine, feel free to copy or make suggestions on how it could be more efficient (I have quite a long experience with programming in general but not that long with python or numpy) This function creates single random balanced subsample. edit: The subsample size now samples down minority classes, this should probably be changed. def balanced_subsample(x,y,subsample_size=1.0): class_xs = [] min_elems = None for yi in np.unique(y): elems = x[(y == yi)] class_xs.append((yi, elems)) if min_elems == None or elems.shape[0] < min_elems: min_elems = elems.shape[0] use_elems = min_elems if subsample_size < 1: use_elems = int(min_elems*subsample_size) xs = [] ys = [] for ci,this_xs in class_xs: if len(this_xs) > use_elems: np.random.shuffle(this_xs) x_ = this_xs[:use_elems] y_ = np.empty(use_elems) y_.fill(ci) xs.append(x_) ys.append(y_) xs = np.concatenate(xs) ys = np.concatenate(ys) return xs,ys For anyone trying to make the above work with a Pandas DataFrame, you need to make a couple of changes: Replace the np.random.shuffle line with this_xs = this_xs.reindex(np.random.permutation(this_xs.index)) Replace the np.concatenate lines with xs = pd.concat(xs) ys = pd.Series(data=np.concatenate(ys),name='target')
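On newer pandas versions (DataFrame.sample exists from 0.16.1 on) the same idea can be written much more compactly with a groupby. This is only a sketch, assuming the labels live in a column of a DataFrame rather than in a separate y array:

def balanced_subsample_df(df, label_col, n=None, seed=None):
    # n rows per class; default to the size of the smallest class
    n = n or df[label_col].value_counts().min()
    return (df.groupby(label_col, group_keys=False)
              .apply(lambda g: g.sample(n, random_state=seed)))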
sorting by a custom list in pandas
After reading through: http://pandas.pydata.org/pandas-docs/version/0.13.1/generated/pandas.DataFrame.sort.html I still can't seem to figure out how to sort a column by a custom list. Obviously, the default sort is alphabetical. I'll give an example. Here is my (very abridged) dataframe: Player Year Age Tm G 2967 Cedric Hunter 1991 27 CHH 6 5335 Maurice Baker 2004 25 VAN 7 13950 Ratko Varda 2001 22 TOT 60 6141 Ryan Bowen 2009 34 OKC 52 6169 Adrian Caldwell 1997 31 DAL 81 I want to be able to sort by Player, Year and then Tm. The default sort by Player and Year is fine for me, in normal order. However, I do not want Team sorted alphabetically b/c I want TOT always at the top. Here is the list I created: sorter = ['TOT', 'ATL', 'BOS', 'BRK', 'CHA', 'CHH', 'CHI', 'CLE', 'DAL', 'DEN', 'DET', 'GSW', 'HOU', 'IND', 'LAC', 'LAL', 'MEM', 'MIA', 'MIL', 'MIN', 'NJN', 'NOH', 'NOK', 'NOP', 'NYK', 'OKC', 'ORL', 'PHI', 'PHO', 'POR', 'SAC', 'SAS', 'SEA', 'TOR', 'UTA', 'VAN', 'WAS', 'WSB'] After reading through the link above, I thought this would work but it didn't: df.sort(['Player', 'Year', 'Tm'], ascending = [True, True, sorter]) It still has ATL at the top, meaning that it sorted alphabetically and not according to my custom list. Any help would really be greatly appreciated, I just can't figure this out.
I just discovered that with pandas 15.1 it is possible to use categorical series (http://pandas.pydata.org/pandas-docs/stable/10min.html#categoricals) As for your example, lets define the same data-frame and sorter: import pandas as pd # Create DataFrame df = pd.DataFrame( {'id':[2967, 5335, 13950, 6141, 6169],\ 'Player': ['Cedric Hunter', 'Maurice Baker' ,\ 'Ratko Varda' ,'Ryan Bowen' ,'Adrian Caldwell'],\ 'Year': [1991 ,2004 ,2001 ,2009 ,1997],\ 'Age': [27 ,25 ,22 ,34 ,31],\ 'Tm':['CHH' ,'VAN' ,'TOT' ,'OKC' ,'DAL'],\ 'G':[6 ,7 ,60 ,52 ,81]}) # Define the sorter sorter = ['TOT', 'ATL', 'BOS', 'BRK', 'CHA', 'CHH', 'CHI', 'CLE', 'DAL','DEN',\ 'DET', 'GSW', 'HOU', 'IND', 'LAC', 'LAL', 'MEM', 'MIA', 'MIL',\ 'MIN', 'NJN', 'NOH', 'NOK', 'NOP', 'NYK', 'OKC', 'ORL', 'PHI',\ 'PHO', 'POR', 'SAC', 'SAS', 'SEA', 'TOR', 'UTA', 'VAN',\ 'WAS', 'WSB'] With the data-frame and sorter, which is a category-order, we can do the following in pandas 15.1: # Convert Tm-column to category and in set the sorter as categories hierarchy # Youc could also do both lines in one just appending the cat.set_categories() df.Tm = df.Tm.astype("category") df.Tm.cat.set_categories(sorter, inplace=True) print(df.Tm) Out[48]: 0 CHH 1 VAN 2 TOT 3 OKC 4 DAL Name: Tm, dtype: category Categories (38, object): [TOT < ATL < BOS < BRK ... UTA < VAN < WAS < WSB] df.sort_values(["Tm"]) ## 'sort' changed to 'sort_values' Out[49]: Age G Player Tm Year id 2 22 60 Ratko Varda TOT 2001 13950 0 27 6 Cedric Hunter CHH 1991 2967 4 31 81 Adrian Caldwell DAL 1997 6169 3 34 52 Ryan Bowen OKC 2009 6141 1 25 7 Maurice Baker VAN 2004 5335
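Since the question also wants to sort by Player and Year at the same time, another option is to map the custom order onto an integer rank column and sort on that; this avoids categoricals entirely (sort_values is the newer spelling, older pandas uses .sort()):

sorter_index = {team: rank for rank, team in enumerate(sorter)}

df['Tm_rank'] = df['Tm'].map(sorter_index)
df = (df.sort_values(['Player', 'Year', 'Tm_rank'])
        .drop('Tm_rank', axis=1))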
python: deque vs list performance comparison
In python docs I can see that deque is a special collection highly optimized for poping/adding items from left or right sides. E.g. documentation says: Deques are a generalization of stacks and queues (the name is pronounced “deck” and is short for “double-ended queue”). Deques support thread-safe, memory efficient appends and pops from either side of the deque with approximately the same O(1) performance in either direction. Though list objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for pop(0) and insert(0, v) operations which change both the size and position of the underlying data representation. I decided to make some comparisons using ipython. Could anyone explain me what I did wrong here: In [31]: %timeit range(1, 10000).pop(0) 10000 loops, best of 3: 114 us per loop In [32]: %timeit deque(xrange(1, 10000)).pop() 10000 loops, best of 3: 181 us per loop In [33]: %timeit deque(range(1, 10000)).pop() 1000 loops, best of 3: 243 us per loop
Could anyone explain me what I did wrong here Yes, your timing is dominated by the time to create the list or deque. The time to do the pop is insignificant in comparison. Instead you should isolate the thing you're trying to test (the pop speed) from the setup time: In [1]: from collections import deque In [2]: s = range(1000) In [3]: d = deque(s) In [4]: s_append, s_pop = s.append, s.pop In [5]: d_append, d_pop = d.append, d.pop In [6]: %timeit s_pop(); s_append(None) 10000000 loops, best of 3: 115 ns per loop In [7]: %timeit d_pop(); d_append(None) 10000000 loops, best of 3: 70.5 ns per loop That said, the real differences between deques and list in terms of performance are: Deques have O(1) speed for appendleft() and popleft() while lists have O(n) performance for insert(0, value) and pop(0). List append performance is hit and miss because it uses realloc() under the hood. As a result, it tends to have over-optimistic timings in simple code (because the realloc doesn't have to move data) and really slow timings in real code (because fragmentation forces realloc to move all the data). In contrast, deque append performance is consistent because it never reallocs and never moves data.
Know the depth of a dictionary
Supposing we have this dict: d = {'a':1, 'b': {'c':{}}} What would be the most straightforward way of knowing the nesting depth of it?
You'll have to recurse: def depth(d, level=1): if not isinstance(d, dict) or not d: return level return max(depth(d[k], level + 1) for k in d) max() is needed to pick the greatest depth for the current dictionary under scrutiny at each level, a dictionary with 3 keys of each different depths should reflect the greatest depth at that level. Demo: >>> d = {'a':1, 'b': {'c':{}}} >>> depth(d) 3 >>> d = {'foo': {'bar': {'baz': 0}, 'spam': {'ham': {'monty': 1}, 'eric': 'idle'}}, 'john': 'cleese'} >>> depth(d) 5
Dot-boxplots from DataFrames
Dataframes in Pandas have a boxplot method, but is there any way to create dot-boxplots in Pandas, or otherwise with seaborn? By a dot-boxplot, I mean a boxplot that shows the actual data points (or a relevant sample of them) inside the plot, e.g. like the example below (obtained in R).
For a more precise answer related to OP's question (with Pandas): import pandas as pd import numpy as np import matplotlib.pyplot as plt data = pd.DataFrame({ "A":np.random.normal(0.8,0.2,20), "B":np.random.normal(0.8,0.1,20), "C":np.random.normal(0.9,0.1,20)} ) data.boxplot() for i,d in enumerate(data): y = data[d] x = np.random.normal(i+1, 0.04, len(y)) plt.plot(x, y, mfc = ["orange","blue","yellow"][i], mec='k', ms=7, marker="o", linestyle="None") plt.hlines(1,0,4,linestyle="--") Old version (more generic) : With matplotlib : import numpy as np import matplotlib.pyplot as plt a = np.random.normal(0,2,1000) b = np.random.normal(-2,7,100) data = [a,b] plt.boxplot(data) # Or you can use the boxplot from Pandas for i in [1,2]: y = data[i-1] x = np.random.normal(i, 0.02, len(y)) plt.plot(x, y, 'r.', alpha=0.2) Which gives that : Inspired from this tutorial Hope this helps !
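Since the question also asks about seaborn: on recent seaborn versions the combination of boxplot plus stripplot gives the same dot-boxplot in two lines, using the data DataFrame built above (stripplot only exists from seaborn 0.6 on, so treat the version requirement as an assumption):

import seaborn as sns
import matplotlib.pyplot as plt

sns.boxplot(data=data, color='white')          # the boxes
sns.stripplot(data=data, jitter=True, size=5)  # the individual points on top
plt.show()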
Django: ValueError: Lookup failed for model referenced by field account.UserProfile.user: auth.User
Getting this error when running python manage.py migrate: ValueError: Lookup failed for model referenced by field account.UserProfile.user: auth.User Steps I did: 1. Created project and added new app: $ django-admin.py startproject djdev $ cd djdev $ python manage.py startapp account 2. I added new app to INSTALLED_APPS in djdev/settings.py: ... 'django.contrib.staticfiles', 'account', ) ... 3. Created a new UserProfile model class in account/models.py: from django.db import models from django.contrib.auth.models import User class UserProfile(models.Model): """ User Profile having one-to-one relations with User """ class Meta: db_table = 'user_profile' ordering = ['id'] user = models.OneToOneField(User, db_column='id_user', related_name='profile') mobile_no = models.CharField('Mobile no.', db_column='contact_no_home', max_length=16, blank=True, null=True) address_line_1 = models.CharField('Address Line 1', db_column='contact_address_line_1_home', max_length=140, blank=True, null=True) address_line_2 = models.CharField('Address Line 2', db_column='contact_address_line_2_home', max_length=140, blank=True, null=True) office_mobile_no = models.CharField('Mobile no.', db_column='contact_no_office', max_length=16, blank=True, null=True) office_address_line_1 = models.CharField('Address Line 1', db_column='contact_address_line_1_office', max_length=140, blank=True, null=True) office_address_line_2 = models.CharField('Address Line 2', db_column='contact_address_line_2_office', max_length=140, blank=True, null=True) about = models.TextField('About me', blank=True, null=True) note = models.CharField('Note', max_length=255, blank=True, null=True) def __unicode__(self): return self.user.name 4. Started migrating: $ python manage.py makemigrations account $ python manage.py migrate After executing last command python manage.py migrate I'm getting this error: Operations to perform: Synchronize unmigrated apps: (none) Apply all migrations: admin, contenttypes, account, auth, sessions Synchronizing apps without migrations: Creating tables... Installing custom SQL... Installing indexes... 
Running migrations: Applying account.0001_initial...Traceback (most recent call last): File "./manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/home/vinay/python_webapps/django-trunk/django/core/management/__init__.py", line 427, in execute_from_command_line utility.execute() File "/home/vinay/python_webapps/django-trunk/django/core/management/__init__.py", line 419, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/vinay/python_webapps/django-trunk/django/core/management/base.py", line 288, in run_from_argv self.execute(*args, **options.__dict__) File "/home/vinay/python_webapps/django-trunk/django/core/management/base.py", line 337, in execute output = self.handle(*args, **options) File "/home/vinay/python_webapps/django-trunk/django/core/management/commands/migrate.py", line 146, in handle executor.migrate(targets, plan, fake=options.get("fake", False)) File "/home/vinay/python_webapps/django-trunk/django/db/migrations/executor.py", line 62, in migrate self.apply_migration(migration, fake=fake) File "/home/vinay/python_webapps/django-trunk/django/db/migrations/executor.py", line 90, in apply_migration if self.detect_soft_applied(migration): File "/home/vinay/python_webapps/django-trunk/django/db/migrations/executor.py", line 134, in detect_soft_applied apps = project_state.render() File "/home/vinay/python_webapps/django-trunk/django/db/migrations/state.py", line 83, in render model=lookup_model ValueError: Lookup failed for model referenced by field account.UserProfile.user: auth.User NOTE: Django Version I'm using: 1.8.dev20140507130401
This is already fixed in the master branch. Fixed in commits: https://code.djangoproject.com/changeset/8f6dff372b174e772920de6d82bd085f1a74eaf2 https://code.djangoproject.com/changeset/35c2a14a49ac3cb25dcff818b280bf0b4c290287 You can install from the master branch until a proper release is published: pip install https://github.com/django/django/zipball/master Test: models.py from django.db import models from django.contrib.auth.models import User class Test(models.Model): user = models.OneToOneField(User) Results [__env] $ ./manage.py makemigrations Migrations for 'data': 0001_initial.py: - Create model Test [__env] $ ./manage.py migrate Operations to perform: Synchronize unmigrated apps: admin, contenttypes, auth, sessions (... omitted ...) Running migrations: Applying data.0001_initial... OK
Capturing output of python script run inside a docker container
The aim here is to use a docker container as a secure sandbox to run untrusted python scripts in, but to do so from within python using the docker-py module, and be able to capture the output of that script. I'm running a python script foo.py inside a docker container (it's set as the ENTRYPOINT command in my Dockerfile, so it's executed as soon as the container is run) and am unable to capture the output of that script. When I run the container via the normal CLI using docker run -v /host_dirpath:/cont_dirpath my_image (host_dirpath is the directory containing foo.py) I get the expected output of foo.py printed to stdout, which is just a dictionary of key-value pairs. However, I'm trying to do this from within python using the docker-py module, and somehow the script output is not being captured by the logs method. Here's the python code I'm using: from docker import Client docker = Client(base_url='unix://var/run/docker.sock', version='1.10', timeout=10) contid = docker.create_container('my_image', volumes={"/cont_dirpath":""}) docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"} }) print "Docker logs: " + str(docker.logs(contid)) Which just results in "Docker logs: " - nothing is being captured in the logs, neither stdout nor stderr (I tried raising an exception inside foo.py to test this). The results I'm after are calculated by foo.py and are currently just printed to stdout with a python print statement. How can I get this to be included in the docker container logs so I can read it from within python? Or capture this output some other way from outside the container? Any help would be greatly appreciated. Thanks in advance! EDIT: Still no luck with docker-py, but it is working well when running the container with the normal CLI using subprocess.Popen - the output is indeed correctly grabbed by stdout when doing this.
You are experiencing this behavior because python buffers its outputs by default. Take this example: vagrant@docker:/vagrant/tmp$ cat foo.py #!/usr/bin/python from time import sleep while True: print "f00" sleep(1) then observing the logs from a container running as a daemon does not show anything: vagrant@docker:/vagrant/tmp$ docker logs -f $(docker run -d -v $(pwd):/app dockerfile/python python /app/foo.py) but if you disable the python buffered output with the -u command line parameter, everything shows up: vagrant@docker:/vagrant/tmp$ docker logs -f $(docker run -d -v $(pwd):/app dockerfile/python python -u /app/foo.py) f00 f00 f00 f00 You can also inject the PYTHONUNBUFFERED environment variable: vagrant@docker:/vagrant/tmp$ docker logs -f $(docker run -d -v $(pwd):/app -e PYTHONUNBUFFERED=0 dockerfile/python python /app/foo.py) f00 f00 f00 f00 Note that this behavior affects only containers running without the -t or --tty parameter.
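Applying this to docker-py specifically: the snippet below is only a sketch, assuming the same image and bind mount as in the question; it injects PYTHONUNBUFFERED through the environment parameter of create_container and waits for the container to finish before reading the logs.

from docker import Client

docker = Client(base_url='unix://var/run/docker.sock', version='1.10', timeout=10)
# PYTHONUNBUFFERED=1 disables Python's output buffering inside the container
contid = docker.create_container('my_image',
                                 volumes={"/cont_dirpath": ""},
                                 environment={"PYTHONUNBUFFERED": "1"})
docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"}})
docker.wait(contid)        # block until foo.py exits so the logs are complete
print "Docker logs: " + str(docker.logs(contid))

If the logs are still empty, running the entrypoint as python -u foo.py in the Dockerfile achieves the same thing from the image side.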
Why does backward recursion execute faster than forward recursion in python
I made an algorithm in Python for counting the number of ways of getting an amount of money with different coin denominations: @measure def countChange(n, coin_list): maxIndex = len(coin_list) def count(n, current_index): if n>0 and maxIndex>current_index: c = 0 current = coin_list[current_index] max_coeff = int(n/current) for coeff in range(max_coeff+1): c+=count(n-coeff*current, current_index+1) elif n==0: return 1 else: return 0 return c return count(n, 0) My algorithm uses an index to get a coin denomination and, as you can see, my index is increasing in each stack frame I get in. I realized that the algorithm could be written in this way also: @measure def countChange2(n, coin_list): maxIndex = len(coin_list) def count(n, current_index): if n>0 and 0<=current_index: c = 0 current = coin_list[current_index] max_coeff = int(n/current) for coeff in range(max_coeff+1): c+=count(n-coeff*current, current_index-1) elif n==0: return 1 else: return 0 return c return count(n, maxIndex-1) This time, the index is decreasing each stack frame I get in. I compared the execution time of the functions and I got a very noteworthy difference: print(countChange(30, range(1, 31))) print(countChange2(30, range(1, 31))) >> Call to countChange took 0.9956174254208345 secods. >> Call to countChange2 took 0.037631815734429974 secods. Why is there a great difference in the execution times of the algorithms if I'm not even caching the results? Why does the increasing order of the index affect this execution time?
This doesn't really have anything to do with dynamic programming, as I understand it. Just reversing the indices shouldn't make something "dynamic". What's happening is that the algorithm is input sensitive. Try feeding the input in reversed order. For example, print(countChange(30, list(reversed(range(1, 31))))) print(countChange2(30, list(reversed(range(1, 31))))) Just as some sorting algorithms are extremely fast with already sorted data and very slow with reversed data, you've got that kind of algorithm here. In the case where the input is increasing, countChange needs a lot more iterations to arrive at its final answer, and thus seems a lot slower. However, when the input is decreasing, the performance characteristics are reversed.
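To see the input sensitivity directly, a small timing harness such as the one below can compare both functions on both orderings; this is a sketch assuming countChange and countChange2 are defined as in the question, without the @measure decorator.

import time

def timed(f, *args):
    # return the wall-clock time taken by one call
    start = time.time()
    f(*args)
    return time.time() - start

increasing = list(range(1, 31))
decreasing = list(reversed(increasing))

for coins in (increasing, decreasing):
    print(timed(countChange, 30, coins), timed(countChange2, 30, coins))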
getting sheet names from openpyxl
I have a moderately large xlsx file (around 14 MB) and OpenOffice hangs trying to open it. I was trying to use openpyxl to read the content, following this tutorial. The code snippet is as follows: from openpyxl import load_workbook wb = load_workbook(filename = 'large_file.xlsx', use_iterators = True) ws = wb.get_sheet_by_name(name = 'big_data') The problem is, I don't know the sheet name, and Sheet1/Sheet2.. etc. didn't work (returned NoneType object). I could not find documentation telling me how to get the sheet names for an xlsx file using openpyxl. Can anyone help me?
Use the get_sheet_names() method: Returns the list of the names of worksheets in the workbook. Names are returned in the worksheets order. print wb.get_sheet_names() You can also get worksheet objects from wb.worksheets: ws = wb.worksheets[0]
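When the sheet name is unknown, a short sketch like the following (using the same load_workbook call as in the question) prints every sheet name so you can pick the right one:

from openpyxl import load_workbook

wb = load_workbook(filename='large_file.xlsx', use_iterators=True)
print wb.get_sheet_names()              # actual names depend on the file
for name in wb.get_sheet_names():
    ws = wb.get_sheet_by_name(name=name)
    print name, ws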
Plotting pandas timedelta
I have a pandas dataframe that has two datetime64 columns and one timedelta64 column that is the difference between the two columns. I'm trying to plot a histogram of the timedelta column to visualize the time differences between the two events. However, just using df['time_delta'] results in: TypeError: ufunc add cannot use operands with types dtype('<m8[ns]') and dtype('float64') Trying to convert the timedelta column to float with df2 = df1['time_delta'].astype(float) results in: TypeError: cannot astype a timedelta from [timedelta64[ns]] to [float64] How would one create a histogram of pandas timedelta data?
Here are ways to convert timedeltas, docs are here In [2]: pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s') Out[2]: 0 0 days, 00:00:01 1 1 days, 00:00:01 2 2 days, 00:00:01 3 3 days, 00:00:01 4 4 days, 00:00:01 dtype: timedelta64[ns] Convert to seconds (is an exact conversion) In [3]: (pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')).astype('timedelta64[s]') Out[3]: 0 1 1 86401 2 172801 3 259201 4 345601 dtype: float64 Convert using astype will round to that unit In [4]: (pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')).astype('timedelta64[D]') Out[4]: 0 0 1 1 2 2 3 3 4 4 dtype: float64 Division will give an exact repr In [5]: (pd.to_timedelta(np.arange(5),unit='d')+pd.to_timedelta(1,unit='s')) / np.timedelta64(1,'D') Out[5]: 0 0.000012 1 1.000012 2 2.000012 3 3.000012 4 4.000012 dtype: float64
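Applied to the original question, any of these conversions yields plain floats that can be histogrammed; a minimal sketch, assuming matplotlib is available and df['time_delta'] is the timedelta64 column:

import numpy as np
import matplotlib.pyplot as plt

# exact conversion to seconds, then a histogram
seconds = df['time_delta'] / np.timedelta64(1, 's')
seconds.hist(bins=50)
plt.xlabel('time difference (seconds)')
plt.show()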
Loading text file containing both float and string using numpy.loadtxt
I have a text file: data.txt which contains 5.1,3.5,1.4,0.2,Iris-setosa 4.9,3.0,1.4,0.2,Iris-setosa 5.8,2.7,4.1,1.0,Iris-versicolor 6.2,2.2,4.5,1.5,Iris-versicolor 6.4,3.1,5.5,1.8,Iris-virginica 6.0,3.0,4.8,1.8,Iris-virginica How do I load this data using numpy.loadtxt() so that I get a numpy array after loading such as [['5.1' '3.5' '1.4' '0.2' 'Iris-setosa'] ['4.9' '3.0' '1.4' '0.2' 'Iris-setosa'] ...]? I tried np.loadtxt(open("data.txt"), 'r', dtype={ 'names': ( 'sepal length', 'sepal width', 'petal length', 'petal width', 'label'), 'formats': ( np.float, np.float, np.float, np.float, np.str)}, delimiter= ',', skiprows=0)
If you use np.genfromtxt, you could specify dtype=None, which will tell genfromtxt to intelligently guess the dtype of each column. Most conveniently, it relieves you of the burden of specifying the number of bytes required for the string column. (Omitting the number of bytes, by specifying e.g. np.str, does not work.) In [58]: np.genfromtxt('data.txt', delimiter=',', dtype=None, names=('sepal length', 'sepal width', 'petal length', 'petal width', 'label')) Out[58]: array([(5.1, 3.5, 1.4, 0.2, 'Iris-setosa'), (4.9, 3.0, 1.4, 0.2, 'Iris-setosa'), (5.8, 2.7, 4.1, 1.0, 'Iris-versicolor'), (6.2, 2.2, 4.5, 1.5, 'Iris-versicolor'), (6.4, 3.1, 5.5, 1.8, 'Iris-virginica'), (6.0, 3.0, 4.8, 1.8, 'Iris-virginica')], dtype=[('sepal_length', '<f8'), ('sepal_width', '<f8'), ('petal_length', '<f8'), ('petal_width', '<f8'), ('label', 'S15')]) If you do want to use np.loadtxt, then to fix your code with minimal changes, you could use: np.loadtxt("data.txt", dtype={'names': ('sepal length', 'sepal width', 'petal length', 'petal width', 'label'), 'formats': (np.float, np.float, np.float, np.float, '|S15')}, delimiter=',', skiprows=0) The main difference is simply changing np.str to |S15 (a 15-byte string). Also note that open("data.txt"), 'r' should be open("data.txt", 'r'). But since np.loadtxt can accept a filename, you don't really need to use open at all.
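Since genfromtxt with dtype=None returns a structured array, the individual columns can then be accessed by field name; note that the spaces in the supplied names are converted to underscores, as the dtype in the output above shows. A short sketch:

import numpy as np

data = np.genfromtxt('data.txt', delimiter=',', dtype=None,
                     names=('sepal length', 'sepal width',
                            'petal length', 'petal width', 'label'))

print data['sepal_length']   # float64 column
print data['label']          # string column, e.g. 'Iris-setosa'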
check if a value exists in pandas dataframe index
I am sure there is an obvious way to do this but can't think of anything slick right now. Basically, instead of raising an exception I would like to get True or False to see if a value exists in a pandas df index. df = pandas.DataFrame({'test':[1,2,3,4]}, index=['a','b','c','d']) df.loc['g'] # (should give False) What I have working now is the following: sum(df.index == 'g')
This should do the trick: 'g' in df.index
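For example, a small sketch with the DataFrame from the question:

import pandas

df = pandas.DataFrame({'test': [1, 2, 3, 4]}, index=['a', 'b', 'c', 'd'])

print 'g' in df.index    # False
print 'a' in df.index    # True

# use the check to avoid the KeyError that df.loc['g'] would raise
value = df.loc['g', 'test'] if 'g' in df.index else None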
'FigureCanvasAgg' object has no attribute 'invalidate'? python plotting
I've been following 'python for data analysis'. On pg. 345, you get to this code to plot returns across a variety of stocks. However, the plotting function does not work for me. I get FigureCanvasAgg' object has no attribute 'invalidate' ? names = ['AAPL','MSFT', 'DELL', 'MS', 'BAC', 'C'] #goog and SF did not work def get_px(stock, start, end): return web.get_data_yahoo(stock, start, end)['Adj Close'] px = pd.DataFrame({n: get_px(n, '1/1/2009', '6/1/2012') for n in names}) #fillna method pad uses last valid observation to fill px = px.asfreq('B').fillna(method='pad') rets = px.pct_change() df2 = ((1 + rets).cumprod() - 1) df2.ix[0] = 1 df2.plot() UPDATE: full traceback --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-122-df192c0432be> in <module>() 6 df2.ix[0] = 1 7 ----> 8 df2.plot() //anaconda/lib/python2.7/site-packages/pandas/tools/plotting.pyc in plot_frame(frame, x, y, subplots, sharex, sharey, use_index, figsize, grid, legend, rot, ax, style, title, xlim, ylim, logx, logy, xticks, yticks, kind, sort_columns, fontsize, secondary_y, **kwds) 1634 logy=logy, sort_columns=sort_columns, 1635 secondary_y=secondary_y, **kwds) -> 1636 plot_obj.generate() 1637 plot_obj.draw() 1638 if subplots: //anaconda/lib/python2.7/site-packages/pandas/tools/plotting.pyc in generate(self) 854 self._compute_plot_data() 855 self._setup_subplots() --> 856 self._make_plot() 857 self._post_plot_logic() 858 self._adorn_subplots() //anaconda/lib/python2.7/site-packages/pandas/tools/plotting.pyc in _make_plot(self) 1238 if not self.x_compat and self.use_index and self._use_dynamic_x(): 1239 data = self._maybe_convert_index(self.data) -> 1240 self._make_ts_plot(data, **self.kwds) 1241 else: 1242 lines = [] //anaconda/lib/python2.7/site-packages/pandas/tools/plotting.pyc in _make_ts_plot(self, data, **kwargs) 1319 self._maybe_add_color(colors, kwds, style, i) 1320 -> 1321 _plot(data[col], i, ax, label, style, **kwds) 1322 1323 self._make_legend(lines, labels) //anaconda/lib/python2.7/site-packages/pandas/tools/plotting.pyc in _plot(data, col_num, ax, label, style, **kwds) 1293 def _plot(data, col_num, ax, label, style, **kwds): 1294 newlines = tsplot(data, plotf, ax=ax, label=label, -> 1295 style=style, **kwds) 1296 ax.grid(self.grid) 1297 lines.append(newlines[0]) //anaconda/lib/python2.7/site-packages/pandas/tseries/plotting.pyc in tsplot(series, plotf, **kwargs) 79 80 # set date formatter, locators and rescale limits ---> 81 format_dateaxis(ax, ax.freq) 82 left, right = _get_xlim(ax.get_lines()) 83 ax.set_xlim(left, right) //anaconda/lib/python2.7/site-packages/pandas/tseries/plotting.pyc in format_dateaxis(subplot, freq) 258 subplot.xaxis.set_major_formatter(majformatter) 259 subplot.xaxis.set_minor_formatter(minformatter) --> 260 pylab.draw_if_interactive() //anaconda/lib/python2.7/site-packages/IPython/utils/decorators.pyc in wrapper(*args, **kw) 41 def wrapper(*args,**kw): 42 wrapper.called = False ---> 43 out = func(*args,**kw) 44 wrapper.called = True 45 return out //anaconda/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.pyc in draw_if_interactive() 227 figManager = Gcf.get_active() 228 if figManager is not None: --> 229 figManager.canvas.invalidate() 230 231 AttributeError: 'FigureCanvasAgg' object has no attribute 'invalidate'
I found this error to be due to a combination of: using pandas plotting with a series or dataframe member method plotting with a date index using %matplotlib inline magic in ipython importing the pylab module before the matplotlib magic So the following will fail on a newly started kernel in an ipython notebook: # fails import matplotlib.pylab %matplotlib inline import pandas ser = pandas.Series(range(10), pandas.date_range(end='2014-01-01', periods=10)) ser.plot() The best way to solve this is to move the magic up to the top: # succeeds %matplotlib inline # moved up import matplotlib.pylab import pandas ser = pandas.Series(range(10), pandas.date_range(end='2014-01-01', periods=10)) ser.plot() However the problem also goes away if you pass the series to a matplotlib plotting method, don't use a date index, or simply don't import the matplotlib.pylab module.
Why does pycharm propose to change method to static
The new PyCharm release (3.1.3 Community Edition) proposes to convert methods that don't work with the current object's state to static. What is the practical reason for that? Some kind of micro-performance(-or-memory) optimization?
PyCharm "thinks" that you might have wanted to have a static method, but you forgot to declare it to be static. PyCharm proposes this because your function does not change the classes state - e.g. you don't use self in its body - and might be intended to be callable without object creation.
Python: Selenium with PhantomJS empty page source
I'm having trouble with Selenium and PhantomJS on Windows 7 when I want to get the source of the page of a URL. browser.page_source returns only <html><head></head></html>. I've put a sleep before browser.page_source but it didn't help. This is my code: from selenium import webdriver browser = webdriver.PhantomJS('phantomjs-1.9.7-windows\phantomjs.exe') url = 'myurl' browser.get(url) print browser.page_source On Linux with the same version of PhantomJS it works perfectly. Also it works on Windows Server 2003.
By default PhantomJS uses SSLv3, but since the SSLv3 vulnerability many sites have migrated to TLS. That's why you get a blank page. Use service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any']: browser = webdriver.PhantomJS('phantomjs-1.9.7-windows\phantomjs.exe', service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any'])
Python pickle protocol choice?
I am using Python 2.7 and trying to pickle an object. I am wondering what the real difference is between the pickle protocols. import numpy as np import pickle class data(object): def __init__(self): self.a = np.zeros((100, 37000, 3), dtype=np.float32) d = data() print "data size: ", d.a.nbytes/1000000. print "highest protocol: ", pickle.HIGHEST_PROTOCOL pickle.dump(d,open("noProt", 'w')) pickle.dump(d,open("prot0", 'w'), protocol=0) pickle.dump(d,open("prot1", 'w'), protocol=1) pickle.dump(d,open("prot2", 'w'), protocol=2) out >> data size: 44.4 out >> highest protocol: 2 then I found that the saved files have different sizes on disk: noProt: 177.6MB prot0: 177.6MB prot1: 44.4MB prot2: 44.4MB I know that prot0 is a human readable text file, so I don't want to use it. I guess protocol 0 is the one given by default. I wonder what's the difference between protocols 1 and 2, and is there a reason why I should choose one or the other? Which is better to use, pickle or cPickle?
From the pickle module data format documentation: There are currently 3 different protocols which can be used for pickling. Protocol version 0 is the original ASCII protocol and is backwards compatible with earlier versions of Python. Protocol version 1 is the old binary format which is also compatible with earlier versions of Python. Protocol version 2 was introduced in Python 2.3. It provides much more efficient pickling of new-style classes. [...] If a protocol is not specified, protocol 0 is used. If protocol is specified as a negative value or HIGHEST_PROTOCOL, the highest protocol version available will be used. Stick with protocol version 2, especially if you are using custom classes derived from object (new-style classes). Most modern code does, these days. Unless you need to maintain backwards compatibility with older Python versions, it's easiest to just stick with the highest protocol version you can lay your hands on: with open("prot2", 'wb') as pfile: pickle.dump(d, pfile, protocol=pickle.HIGHEST_PROTOCOL) Because this is a binary format, make sure to use 'wb' as the file mode! cPickle and pickle are mostly compatible; the differences lie in the API offered. For most use-cases, just stick with cPickle; it is faster. Quoting the documentation again: First, cPickle can be up to 1000 times faster than pickle because the former is implemented in C. Second, in the cPickle module the callables Pickler() and Unpickler() are functions, not classes. This means that you cannot use them to derive custom pickling and unpickling subclasses. Most applications have no need for this functionality and should benefit from the greatly improved performance of the cPickle module.
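For completeness, a small sketch of writing and reading the object back with cPickle and matching binary file modes (same d object as in the question):

import cPickle as pickle

# write with the highest protocol, then read back; note the binary modes
with open("prot2", 'wb') as pfile:
    pickle.dump(d, pfile, protocol=pickle.HIGHEST_PROTOCOL)

with open("prot2", 'rb') as pfile:
    d2 = pickle.load(pfile)

print d2.a.shape    # (100, 37000, 3)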
Is it possible to wrap a function from a shared library using F2PY?
I'm developing a package that requires Python bindings for the dgtsv subroutine from the LAPACK Fortran library. At the moment, I'm distributing the Fortran source file, dgtsv.f, alongside my Python code, and using numpy.distutils to automatically wrap it and compile it into a shared library, _gtsv.so, that is callable from Python. Here's what my setup.py file looks like at the moment: from numpy.distutils.core import setup, Extension, build_ext import os fortran_sources = ["dgtsv.f"] gtsv = Extension( name="pyfnnd._gtsv", sources=[os.path.join("pyfnnd", "LAPACK", ff) for ff in fortran_sources], extra_link_args=['-llapack'] ) setup( name='pyfnnd', py_modules=['_fnndeconv', 'demo', '_tridiag_solvers'], cmdclass={'build_ext': build_ext.build_ext}, ext_modules=[gtsv], ) Note that in order to actually use _gtsv.so, I still have to link against a pre-existing LAPACK shared library (extra_link_args=['-llapack']). Since this library should already contain the dgtsv subroutine, it seems to me that it would be cleaner to just wrap the function in the existing shared library, rather than having to distribute the actual Fortran source. However I've never come across any examples of using F2PY to wrap functions that are part of a shared library rather than just raw Fortran source code. Is this possible?
I think you just need ctypes; there is a complete example of calling a LAPACK function on this page: http://www.sagemath.org/doc/numerical_sage/ctypes.html You get your function like this: import ctypes from ctypes.util import find_library lapack = ctypes.cdll.LoadLibrary(find_library("lapack")) dgtsv = lapack.dgtsv_
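Calling it then takes a little care because Fortran passes every argument by reference. The following is a sketch (assuming the system LAPACK exports the usual underscore-suffixed symbol and that the arrays are contiguous float64) that solves a small tridiagonal system; dgtsv overwrites b with the solution:

import ctypes
from ctypes.util import find_library
import numpy as np

lapack = ctypes.cdll.LoadLibrary(find_library("lapack"))
dgtsv = lapack.dgtsv_

n, nrhs = 5, 1
dl = np.random.rand(n - 1)   # sub-diagonal
d = np.random.rand(n)        # main diagonal
du = np.random.rand(n - 1)   # super-diagonal
b = np.random.rand(n)        # right-hand side; overwritten with the solution
info = ctypes.c_int(0)

c_int = ctypes.c_int
c_double_p = ctypes.POINTER(ctypes.c_double)

dgtsv(ctypes.byref(c_int(n)), ctypes.byref(c_int(nrhs)),
      dl.ctypes.data_as(c_double_p), d.ctypes.data_as(c_double_p),
      du.ctypes.data_as(c_double_p), b.ctypes.data_as(c_double_p),
      ctypes.byref(c_int(n)), ctypes.byref(info))

print info.value   # 0 means success
print b            # solution vector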
How do I get interactive plots again in Spyder/IPython/matplotlib?
I upgraded from Python(x,y) 2.7.2.3 to 2.7.6.0 in Windows 7 (and was happy to see that I can finally type function_name? and see the docstring in the Object Inspector again) but now the plotting doesn't work as it used to. Previously (Spyder 2.1.9, IPython 0.10.2, matplotlib 1.2.1), when I plotted this script, for instance, it would plot the subplots side-by-side in an interactive window: Now (Spyder 2.2.5, IPython 1.2.0, Matplotlib 1.3.1) when I try to plot things, it does the subplots as tiny inline PNGs, which is a change in IPython: So I went into options and found this: which seems to say that I can get the old interactive plots back, with the 4 subplots displayed side-by-side, but when I switch to "Automatic", and try to plot something, it does nothing. No plots at all. If I switch this drop-down to Qt, or uncheck "Activate support", it only plots the first subplot, or part of it, and then stops: How do I get the old behavior of 4 side-by-side subplots in a single figure that I can interact with?
After selecting the "Automatic" option in the Preferences window, I was able to make interactive plots by closing and reopening Spyder.
Plotting with a transparent marker but non-transparent edge
I'm trying to make a plot in matplotlib with transparent markers which have a fixed color edge. However, I can't seem to achieve a marker with transparent fill. I have a minimum working example here: import numpy as np import matplotlib.pyplot as plt x = np.arange(10) y1 = 2*x + 1 y2 = 3*x - 5 plt.plot(x,y1, 'o-', lw=6, ms=14) plt.plot(x,y2, 'o', ms=14, markerfacecolor=None, alpha=0.5, markeredgecolor='red', markeredgewidth=5) plt.show() I tried two techniques I found online to achieve this: 1) Setting the alpha parameter. However, this makes the marker edge transparent too, which is not the desired effect. 2) Setting markerfacecolor=None, although this has no effect on my plot. Is there a solution to this, please?
This is tricky in Matplotlib... you have to use a string "None" instead of the value None, then you can just do: plt.plot(x,y2, 'o', ms=14, markerfacecolor="None", markeredgecolor='red', markeredgewidth=5)
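As far as I know, passing fillstyle='none' is another way to get a hollow marker with a visible edge; a small sketch with the same data as in the question:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)
y2 = 3*x - 5

# fillstyle='none' skips drawing the marker face, leaving only the edge
plt.plot(x, y2, 'o', ms=14, fillstyle='none',
         markeredgecolor='red', markeredgewidth=5)
plt.show()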
How to use flake8 for Python 3?
In this code snippet, def add(x:int, y:int) -> int: return x + y there are function annotations that are only supported after Python 3.0. When I execute flake8 for this Python code: $ flake8 7.3.py -vv checking 7.3.py def add(x: int, y: int) -> int: return x + y 7.3.py:1:11: E901 SyntaxError: invalid syntax I got the invalid syntax error, but it should be valid syntax. How can I use flake8 to check the syntax that is only supported in Python 3.x?
See: https://bugs.launchpad.net/pyflakes/+bug/989203 NB: Whilst this bug report indicates some level of resolution, testing the latest version of pyflakes (0.8.1) shows that the lack of support for Python 3 annotations still exists. I guess you'd have to file a separate feature request on the pyflakes bug tracker. $ cat - > foo.py def add(x:int, y:int) -> int: return x + y ^D $ pyflakes --version 0.8.1 $ pyflakes foo.py foo.py:1:10: invalid syntax def add(x:int, y:int) -> int: ^ UPDATE (20140514): As it turns out, the actual answer to this problem is to run pyflakes or flake8 under Python 3.x instead of Python 2.x. It makes sense :) So do something like this: /usr/bin/python3 -m pyflakes foo.py See: http://codepad.org/9BKxSZaD
Delete cell ipython 2.0
There is probably a very easy solution to this but I'm not sure what it is. The delete cell command in IPython 1.2 used to be: ctrl-m d According to the help menu in version 2.0 it says, Shift-d: delete cell (press twice) I've tried holding shift then pressing d twice and various combinations with shift and d, and nothing seems to work. Am I just doing this wrong, or does the command actually not work? Any help would be appreciated.
In the new IPython, cells have two states: when a cell has a green selection rectangle around it you can edit what's inside; when it has a grey rectangle around it you edit the cell itself (copy/paste/delete). Enter/Return makes it go green, Esc makes it go grey. When it is grey, pressing 'dd' will delete it.
Is it acceptable practice to unit-test a program in a different language?
I have a static library I created from C++, and would like to test this using a Driver code. I noticed one of my professors like to do his tests using python, but he simply executes the program (not a library in this case, but an executable) using random test arguments. I would like to take this approach, but I realized that this is a library and doesn't have a main function; that would mean I should either create a Driver.cpp class, or wrap the library into python using SWIG or boost python. I’m planning to do the latter because it seems more fun, but logically, I feel that there is going to be more bugs when trying to wrap a library to a different language just to test it, rather than test it in its native language. Is testing programs in a different language an accepted practice in the real world, or is this bad practice?
I'd say that it's best to test the API that your users will be exposed to. Other tests are good to have as well, but that's the most important aspect. If your users are going to write C/C++ code linking to your library, then it would be good to have tests making use of your library the same way. If you are going to ship a Python wrapper (why not?) then you should have Python tests. Of course, there is a convenience aspect to this, as well. It may be easier to write tests in Python, and you might have time constraints that make it more appealing, etc. I guess what I'm saying is: There's nothing inherently wrong with tests being in a different language from the code under test (that's totally normal for testing a REST API, for instance), but make sure you have tests for the public-facing API at a minimum. Aside, on terminology: I don't think the types of tests you are describing are "unit tests" in the usual sense of the term. Probably "functional test" would be more accurate. A unit test typically tests a very small component - such as a function call - that might be one piece of larger functionality. Unit tests like these are often "white box" tests, so you can see the inner workings of your code. Testing something from a user's point-of-view (such as your professor's commandline tests) are "black box" tests, and in these examples are at a more functional level rather than "unit" level. I'm sure plenty of people may disagree with that, though - it's not a rigidly-defined set of terms.
Generate random numbers replicating arbitrary distribution
I have data wherein I have a variable z that contains around 4000 values (from 0.0 to 1.0) for which the histogram looks like this. Now I need to generate a random variable, call it random_z which should replicate the above distribution. What I have tried so far is to generate a normal distribution centered at 1.0 so that I can remove all those above 1.0 to get a distribution that will be similar. I have been using numpy.random.normal but the problem is that I cannot set the range from 0.0 to 1.0, because usually normal distribution has a mean = 0.0 and std dev = 1.0. Is there another way to go about generating this distribution in Python?
If you want to bootstrap you could use random.choice() on your observed series. Here I'll assume you'd like to smooth a bit more than that and you aren't concerned with generating new extreme values. Use pandas.Series.quantile() and a uniform [0,1] random number generator, as follows. Training Put your random sample into a pandas Series, call this series S Production Generate a random number u between 0.0 and 1.0 the usual way, e.g., random.random() return S.quantile(u) If you'd rather use numpy than pandas, from a quick reading it looks like you can substitute numpy.percentile() in step 2. Principle of Operation: From the sample S, pandas.series.quantile() or numpy.percentile() is used to calculate the inverse cumulative distribution function for the method of Inverse transform sampling. The quantile or percentile function (relative to S) transforms a uniform [0,1] pseudo random number to a pseudo random number having the range and distribution of the sample S. Simple Sample Code If you need to minimize coding and don't want to write and use functions that only returns a single realization, then it seems numpy.percentile bests pandas.Series.quantile. Let S be a pre-existing sample. u will be the new uniform random numbers newR will be the new randoms drawn from a S-like distribution. >>> import numpy as np I need a sample of the kind of random numbers to be duplicated to put in S. For the purposes of creating an example, I am going to raise some uniform [0,1] random numbers to the third power and call that the sample S. By choosing to generate the example sample in this way, I will know in advance -- from the mean being equal to the definite integral of (x^3)(dx) evaluated from 0 to 1 -- that the mean of S should be 1/(3+1) = 1/4 = 0.25 In your application, you would need to do something else instead, perhaps read a file, to create a numpy array S containing the data sample whose distribution is to be duplicated. >>> S = pow(np.random.random(1000),3) # S will be 1000 samples of a power distribution Here I will check that the mean of S is 0.25 as stated above. >>> S.mean() 0.25296623781420458 # OK get the min and max just to show how np.percentile works >>> S.min() 6.1091277680105382e-10 >>> S.max() 0.99608676594692624 The numpy.percentile function maps 0-100 to the range of S. >>> np.percentile(S,0) # this should match the min of S 6.1091277680105382e-10 # and it does >>> np.percentile(S,100) # this should match the max of S 0.99608676594692624 # and it does >>> np.percentile(S,[0,100]) # this should send back an array with both min, max [6.1091277680105382e-10, 0.99608676594692624] # and it does >>> np.percentile(S,np.array([0,100])) # but this doesn't.... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 2803, in percentile if q == 0: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() This isn't so great if we generate 100 new values, starting with uniforms: >>> u = np.random.random(100) because it will error out, and the scale of u is 0-1, and 0-100 is needed. This will work: >>> newR = np.percentile(S, (100*u).tolist()) which works fine but might need its type adjusted if you want a numpy array back >>> type(newR) <type 'list'> >>> newR = np.array(newR) Now we have a numpy array. Let's check the mean of the new random values. >>> newR.mean() 0.25549728059744525 # close enough
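To package the training and production steps above into something reusable, a small helper along these lines (same approach, numpy only) may be convenient:

import numpy as np

def sample_like(S, size):
    """Draw `size` new values whose distribution mimics the observed sample S."""
    u = np.random.random(size)                       # uniform [0, 1) randoms
    return np.array(np.percentile(S, (100 * u).tolist()))

# usage with the example sample from above
S = pow(np.random.random(1000), 3)
new_values = sample_like(S, 200)
print new_values.mean()                              # should be close to S.mean()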