Virtual environment in R?
I've found several posts about best practice, reproducibility and workflow in R, for example:

How to increase longer term reproducibility of research (particularly using R and Sweave)
Complete substantive examples of reproducible research using R

One of the major preoccupations is ensuring portability of code, in the sense that moving it to a new machine (possibly running a different OS) is relatively straightforward and gives the same results. Coming from a Python background, I'm used to the concept of a virtual environment. When coupled with a simple list of required packages, this goes some way to ensuring that the installed packages and libraries are available on any machine without too much fuss. Sure, it's no guarantee - different OSes have their own foibles and peculiarities - but it gets you 95% of the way there. Does such a thing exist within R? Even if it's not as sophisticated. For example, simply maintaining a plain text list of required packages and a script that will install any that are missing? I'm about to start using R in earnest for the first time, probably in conjunction with Sweave, and would ideally like to start in the best way possible! Thanks for your thoughts.
I'm going to use the comment posted by @cboettig in order to resolve this question.

Packrat

Packrat is a dependency management system for R. It gives you three important advantages (all of them focused on your portability needs):

Isolated: Installing a new or updated package for one project won't break your other projects, and vice versa. That's because packrat gives each project its own private package library.

Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.

Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.

What's next?

Walkthrough guide: http://rstudio.github.io/packrat/walkthrough.html
Most common commands: http://rstudio.github.io/packrat/commands.html
Using Packrat with RStudio: http://rstudio.github.io/packrat/rstudio.html
Limitations and caveats: http://rstudio.github.io/packrat/limitations.html
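A minimal sketch of the basic Packrat workflow in R, assuming packrat has been installed from CRAN (the project path is a placeholder):

install.packages("packrat")           # one-time install from CRAN
packrat::init("~/projects/myproj")    # give the project its own private library
packrat::snapshot()                   # record the exact package versions in use
packrat::restore()                    # reinstall those versions on another machine

After init(), install.packages() calls made inside the project land in the project's private library rather than the global one.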
How to use pip with python 3.4 on windows?
Just installed a fresh copy of Python 3.4.1 on Windows 7. The docs here say that pip is included by default, but I could not find pip.exe or any pip command. Do I need to install it separately, or does it exist somewhere?
I have Windows 7 and Python 3.4.1; the following command, suggested by Guss, worked well:

C:\Users> py -m pip install requests

Output:

Downloading/unpacking requests
Installing collected packages: requests
Successfully installed requests
Cleaning up...
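If py -m pip fails because pip really is missing, it can usually be bootstrapped with the standard-library ensurepip module that ships with Python 3.4; a quick sketch:

py -m ensurepip --upgrade
py -m pip --version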
Searching PyPI by topic
For every Python package you can specify a list of classifiers. Among others, there is a Topic classifier, which puts the package in the specified categories that can be browsed on PyPI. For example, numpy has the following topics:

Topic :: Software Development
Topic :: Scientific/Engineering

Is there a way to search by topic programmatically, using pip search or other third-party libraries?
You can search PyPI by classifier via the XMLRPC API, using the browse() method:

try:
    import xmlrpclib  # Python 2
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3

pypi = xmlrpclib.ServerProxy('http://pypi.python.org/pypi')
packages = pypi.browse([
    "Topic :: Software Development",
    "Topic :: Scientific/Engineering",
])

In the example above, packages contains a list of [package, version] lists for all packages which satisfy both the "Topic :: Software Development" and "Topic :: Scientific/Engineering" classifiers:

>>> {pkg: ver for pkg, ver in packages if "numpy" in pkg}
{'nose-numpyseterr': '0.1', 'msgpack-numpy': '0.3.2', 'numpy': '1.8.1', 'idx2numpy': '1.0b'}

From there, you can retrieve more information about a given release:

>>> release = pypi.release_data('numpy', '1.8.1')
>>> release['download_url']
'http://sourceforge.net/projects/numpy/files/NumPy/'
>>> release['platform']
'Windows,Linux,Solaris,Mac OS-X,Unix'
>>> release['downloads']
{'last_day': 5818, 'last_month': 187688, 'last_week': 44764}

... etc.
Python Flask Intentional Empty Response
Is there a way to return a response (from make_response() or similar) with certain properties so that it doesn't render the page again and doesn't do anything else either? I am trying to run code on the server without generating any output. A simple 'return None' produces:

ValueError: View function did not return a response

This should be possible, because the following only downloads a file and doesn't render the template:

myString = "First line of a document"
response = make_response(myString)
response.headers["Content-Disposition"] = "attachment; filename=myFile.txt"
return response

Any help will be greatly appreciated!
You are responding to a request, so your HTTP server must return something. The HTTP 'empty response' response is 204 No Content:

return ('', 204)

Note that returning a file to the browser is not an empty response, just different from an HTML response.
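A minimal sketch of a complete view along those lines (the route and helper names are illustrative, not from the question):

from flask import Flask

app = Flask(__name__)

def do_server_side_work():
    # stand-in for the real server-side operations
    pass

@app.route('/run-job')
def run_job():
    do_server_side_work()
    # 204 No Content: the browser stays on the current page
    return '', 204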
Implementing a "Kurtosis filter" using scipys generic_filter
I have a 5000*5000 numpy array on which I want to calculate the kurtosis for windows of size 25. I tried putting scipy's own kurtosis function in the generic_filter found in ndimage.filters, like so:

import numpy as np
from scipy.stats import kurtosis
from scipy.ndimage.filters import generic_filter

mat = np.random.random_sample((5000, 5000))
kurtosis_filter = generic_filter(mat, kurtosis, size=25, mode='reflect')

This never ends, and I'm not sure it even gives the correct answer. So my first question is whether this is a correct way to use generic_filter with a scipy function. If it happens to be correct, then it is too slow to be of any use to me. So my next question is whether there's a faster way to achieve this. For example, for a standard deviation you can simply do something like:

usual_mean = uniform_filter(mat, size=25, mode='reflect')
mean_of_squared = uniform_filter(np.multiply(mat, mat), size=25, mode='reflect')
standard_deviation = (mean_of_squared - np.multiply(usual_mean, usual_mean))**.5

This is blazing fast and simply comes from the fact that $\sigma^2 = E[(X -\mu)^2] = E[X^2] - (E[X])^2$.
Your approach is correct, but as you note, it is way too slow for the task at hand. Consider how large your task is in the numerically best implementation (not bothering about boundary values):

def kurt(X, w):
    n, m = X.shape
    K = np.zeros_like(X)
    for i in xrange(w, n-w):                      # 5000 iterations
        for j in xrange(w, m-w):                  # 5000 iterations
            x = X[i-w:i+w+1,j-w:j+w+1].flatten()  # copy 25*25 = 625 values
            x -= x.mean()                         # calculate and subtract mean
            x /= np.sqrt((x**2).mean())           # normalize by stddev (625 mult.)
            K[i,j] = (x**4).mean() - 3.           # 2*625 = 1250 multiplications
    return K

So we have 5000*5000*1875 ~ 47 billion (!) multiplications. This would be too slow to be useful even in a plain C implementation, let alone when passing a Python function kurtosis() to the inner loop of generic_filter(). The latter is actually calling a C extension function, but there are negligible benefits since it must call back into Python at each iteration, which is very expensive.

So, the actual problem is that you need a better algorithm. Since scipy doesn't have it, let's develop it step by step here.

The key observation that permits acceleration of this problem is that the kurtosis calculations for successive windows are based on mostly the same values, except one row (25 values) which is replaced. So, instead of recalculating the kurtosis from scratch using all 625 values, we attempt to keep track of previously calculated sums and update them such that only the 25 new values need to be processed. This requires expanding the (x - mu)**4 factor, since only the running sums over x, x**2, x**3 and x**4 can be easily updated. There is no nice cancellation as in the formula for the standard deviation that you mentioned, but it is entirely feasible:

def kurt2(X, w):
    n, m = X.shape
    K = np.zeros_like(X)
    W = 2*w + 1
    for j in xrange(m-W+1):
        for i in xrange(n-W+1):
            x = X[i:i+W,j:j+W].flatten()
            x2 = x*x
            x3 = x2*x
            x4 = x2*x2
            M1 = x.mean()
            M2 = x2.mean()
            M3 = x3.mean()
            M4 = x4.mean()
            M12 = M1*M1
            V = M2 - M12
            K[w+i,w+j] = (M4 - 4*M1*M3 + 3*M12*(M12 + 2*V)) / (V*V) - 3
    return K

Note: The algorithm written in this form is numerically less stable, since we let numerator and denominator become individually very large, while previously we were dividing early to prevent this (even at the cost of a sqrt). However, I found that for the kurtosis this was never an issue in practical applications.

In the code above, I have tried to minimize the number of multiplications. The running means M1, M2, M3 and M4 can now be updated rather easily, by subtracting the contributions of the row that is no longer part of the window and adding the contributions of the new row.
Let's implement this:

def kurt3(X, w):
    n, m = X.shape
    K = np.zeros_like(X)
    W = 2*w + 1
    N = W*W
    Xp = np.zeros((4, W, W), dtype=X.dtype)
    xp = np.zeros((4, W), dtype=X.dtype)
    for j in xrange(m-W+1):
        # reinitialize every time we reach row 0
        Xp[0] = x1 = X[:W,j:j+W]
        Xp[1] = x2 = x1*x1
        Xp[2] = x3 = x2*x1
        Xp[3] = x4 = x2*x2
        s = Xp.sum(axis=2)  # make sure we sum along the fastest index
        S = s.sum(axis=1)   # the running sums
        s = s.T.copy()      # circular buffer of row sums
        M = S / N
        M12 = M[0]*M[0]
        V = M[1] - M12
        # kurtosis at row 0
        K[w,w+j] = (M[3] - 4*M[0]*M[2] + 3*M12*(M12 + 2*V)) / (V*V) - 3
        for i in xrange(n-W):
            xp[0] = x1 = X[i+W,j:j+W]  # the next row
            xp[1] = x2 = x1*x1
            xp[2] = x3 = x2*x1
            xp[3] = x4 = x2*x2
            k = i % W              # index in circular buffer
            S -= s[k]              # remove cached contribution of old row
            s[k] = xp.sum(axis=1)  # cache new row
            S += s[k]              # add contributions of new row
            M = S / N
            M12 = M[0]*M[0]
            V = M[1] - M12
            # kurtosis at row != 0
            K[w+1+i,w+j] = (M[3] - 4*M[0]*M[2] + 3*M12*(M12 + 2*V)) / (V*V) - 3
    return K

Now that we have a good algorithm, we note that the timing results are still rather disappointing. Our problem is now that Python + numpy is the wrong language for such a number-crunching job. Let's write a C extension! Here is _kurtosismodule.c:

#include <Python.h>
#include <numpy/arrayobject.h>

static inline void add_line(double *b, double *S, const double *x, size_t W)
{
    size_t l;
    double x1, x2;
    b[0] = b[1] = b[2] = b[3] = 0.;
    for (l = 0; l < W; ++l) {
        b[0] += x1 = x[l];
        b[1] += x2 = x1*x1;
        b[2] += x2*x1;
        b[3] += x2*x2;
    }
    S[0] += b[0]; S[1] += b[1]; S[2] += b[2]; S[3] += b[3];
}

static PyObject* py_kurt(PyObject* self, PyObject* args)
{
    PyObject *objK, *objX, *objB;
    int w;
    PyArg_ParseTuple(args, "OOOi", &objK, &objX, &objB, &w);
    double *K = PyArray_DATA(objK);
    double *X = PyArray_DATA(objX);
    double *B = PyArray_DATA(objB);
    size_t n = PyArray_DIM(objX, 0);
    size_t m = PyArray_DIM(objX, 1);
    size_t W = 2*w + 1, N = W*W, i, j, k, I, J;
    double *S = B + 4*W;
    double *x, *b, M, M2, V;
    for (j = 0, J = m*w + w; j < m-W+1; ++j, ++J) {
        S[0] = S[1] = S[2] = S[3] = 0.;
        for (k = 0, x = X + j, b = B; k < W; ++k, x += m, b += 4) {
            add_line(b, S, x, W);
        }
        M = S[0] / N;
        M2 = M*M;
        V = S[1] / N - M2;
        K[J] = ((S[3] - 4*M*S[2]) / N + 3*M2*(M2 + 2*V)) / (V*V) - 3;
        for (i = 0, I = J + m; i < n-W; ++i, x += m, I += m) {
            b = B + 4*(i % W);  // row in circular buffer
            S[0] -= b[0]; S[1] -= b[1]; S[2] -= b[2]; S[3] -= b[3];
            add_line(b, S, x, W);
            M = S[0] / N;
            M2 = M*M;
            V = S[1] / N - M2;
            K[I] = ((S[3] - 4*M*S[2]) / N + 3*M2*(M2 + 2*V)) / (V*V) - 3;
        }
    }
    Py_RETURN_NONE;
}

static PyMethodDef methods[] = {
    {"kurt", py_kurt, METH_VARARGS, ""},
    {0}
};

PyMODINIT_FUNC init_kurtosis(void)
{
    Py_InitModule("_kurtosis", methods);
    import_array();
}

Build with:

python setup.py build_ext --inplace

where setup.py is:

from distutils.core import setup, Extension

module = Extension('_kurtosis', sources=['_kurtosismodule.c'])
setup(ext_modules=[module])

Note that we don't allocate any memory in the C extension. This way, we don't have to get into any mess with reference counts/garbage collection. We just use an entry point in Python:

import _kurtosis

def kurt4(X, w):
    # add type/size checking if you like
    K = np.zeros(X.shape, np.double)
    scratch = np.zeros(8*(w + 1), np.double)
    _kurtosis.kurt(K, X, scratch, w)
    return K

Finally, let's do the timing:

In [1]: mat = np.random.random_sample((5000, 5000))

In [2]: %timeit K = kurt4(mat, 12)  # 2*12 + 1 = 25
1 loops, best of 3: 5.25 s per loop

A very reasonable performance given the size of the task!
Sending messages with Telegram - APIs or CLI?
I would like to be able to send a message to a group chat in Telegram. I want to run a Python script (which performs some operations that already work) and then, if some parameters have certain values, the script should send a message to a group chat through Telegram. I am using Ubuntu and Python 2.7.

I think, if I am not wrong, that I have two ways to do this:

Way One: make the Python script connect to the Telegram APIs directly and send the message (https://core.telegram.org/api).

Way Two: make the Python script call Telegram's CLI (https://github.com/vysheng/tg), pass some values to it, and have the message sent by the CLI.

I think that the first way is longer, so a good idea might be to use Way Two. In this case I really don't know how to proceed. I don't know much about scripting on Linux, but I tried this:

#!/bin/bash
cd /home/username/tg
echo "msg user#******** messagehere" | ./telegram
sleep 10
echo "quit" | ./telegram

This half-works: it sends the message correctly, but then the process remains open. The second problem is that I have no clue how to call this from Python and how to pass a value to the script. The value that I would like to pass is the "messagehere" variable: a 100-200 character message defined inside the Python script. Does anyone have any clues on this? Thanks for the replies; I hope this might be useful for someone else.
Telegram recently released their new Bot API, which makes sending/receiving messages trivial. I suggest you also take a look at that and see if it fits your needs; it beats wrapping the client library or integrating with their MTProto API.

import urllib
import urllib2

# Generate a bot ID here: https://core.telegram.org/bots#botfather
bot_id = "{YOUR_BOT_ID}"

# Request latest messages
result = urllib2.urlopen("https://api.telegram.org/bot" + bot_id + "/getUpdates").read()
print result

# Send a message to a chat room (chat room ID retrieved from getUpdates)
result = urllib2.urlopen("https://api.telegram.org/bot" + bot_id + "/sendMessage",
                         urllib.urlencode({"chat_id": 0, "text": 'my message'})).read()
print result

Unfortunately I haven't seen any Python libraries you can interact with directly, but here is a NodeJS equivalent I worked on, for reference.
Why does a function that returns itself max out recursion in python 3
Why does this code give the error RuntimeError: maximum recursion depth exceeded during compilation? print_test never calls itself, so I would think it isn't a recursive function.

def print_test():
    print("test")
    return print_test

print_test()  # prints 'test'
print()

# a quick way of writing "print_test()()()()()()()()()()()()()..."
eval("print_test" + "()"*10000)  # should print 'test' 10000 times

When I tested it, it worked in Python 2.7.7rc1 but gave the error in Python 3.3.5. Pdb gives a short call stack, unlike the tall one that normally appears when exceeding the maximum recursion depth:

Traceback (most recent call last):
  File "/usr/lib/python3.3/pdb.py", line 1662, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib/python3.3/pdb.py", line 1543, in _runscript
    self.run(statement)
  File "/usr/lib/python3.3/bdb.py", line 405, in run
    exec(cmd, globals, locals)
  File "<string>", line 1, in <module>
  File "/home/beet/overflow.py", line 1, in <module>
    def print_test():

I am wondering this out of curiosity, and realize this would not be best programming practice.
I believe this has to do with Issue #5765: "Apply a hard recursion limit in the compiler" [as of 3.3]. Not 100% sure, but this code runs on 3.2.3:

def f():
    return f

eval("f" + "()" * 10000)

but fails on my 3.4.1, which leads me to suspect this change caused it. If someone would confirm or deny this, that would be pretty cool.
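As a workaround sketch: the same call chain can be built with an ordinary loop, which never hands the compiler a deeply nested expression (equivalent in effect to the eval line above):

def print_test():
    print("test")
    return print_test

f = print_test
for _ in range(10000):
    f = f()  # each iteration calls the function and keeps the returned function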
How do you add additional files to a wheel?
How do I control what files are included in a wheel? It appears MANIFEST.in isn't used by python setup.py bdist_wheel.

UPDATE: I was wrong about the difference between installing from a source tarball vs a wheel. The source distribution includes files specified in MANIFEST.in, but the installed package only has Python files. Steps are needed to identify additional files that should be installed, whether the install is via source distribution, egg, or wheel. Namely, package_data is needed for additional package files, and data_files for files outside your package, like command line scripts or system config files.

Original Question

I have a project where I've been using python setup.py sdist to build my package, MANIFEST.in to control the files included and excluded, and pyroma and check-manifest to confirm my settings. I recently converted it to dual Python 2 / 3 code, and added a setup.cfg with:

[bdist_wheel]
universal = 1

I can build a wheel with python setup.py bdist_wheel, and it appears to be a universal wheel as desired. However, it doesn't include all of the files specified in MANIFEST.in.

What gets installed?

I dug deeper, and now know more about packaging and wheel. Here's what I learned: I upload two package files to the multigtfs project on PyPI:

multigtfs-0.4.2.tar.gz - the source tarball, which includes all the files in MANIFEST.in.
multigtfs-0.4.2-py2.py3-none-any.whl - the binary distribution in question.

I created two new virtual environments, both with Python 2.7.5, and installed each package (pip install multigtfs-0.4.2.tar.gz). The two environments are almost identical. They have different .pyc files, which are the "compiled" Python files. There are log files which record the different paths on disk. The install from the source tarball includes a folder multigtfs-0.4.2-py27.egg-info detailing the installation, and the wheel install has a multigtfs-0.4.2.dist-info folder with the details of that process. However, from the point of view of code using the multigtfs project, there is no difference between the two installation methods. Explicitly, neither has the .zip files used by my tests, so the test suite will fail:

$ django-admin startproject demo
$ cd demo
$ pip install psycopg2  # DB driver for PostGIS project
$ createdb demo  # Create PostgreSQL database
$ psql -d demo -c "CREATE EXTENSION postgis"  # Make it a PostGIS database
$ vi demo/settings.py  # Add multigtfs to INSTALLED_APPS,
#   Update DATABASE to set ENGINE to django.contrib.gis.db.backends.postgis
#   Update DATABASE to set NAME to test
$ ./manage.py test multigtfs.tests  # Run the tests
...
IOError: [Errno 2] No such file or directory: u'/Users/john/.virtualenvs/test/lib/python2.7/site-packages/multigtfs/tests/fixtures/test3.zip'

Specifying additional files

Using the suggestions from the answers, I added some additional directives to setup.py:

from __future__ import unicode_literals  # setup.py now requires some funky binary strings
...
setup(
    name='multigtfs',
    packages=find_packages(),
    package_data={b'multigtfs': ['test/fixtures/*.zip']},
    include_package_data=True,
    ...
)

This installs the zip files (as well as the README) to the folder, and the tests now run correctly. Thanks for the suggestions!
Have you tried using package_data in your setup.py? MANIFEST.in seems targeted at Python versions <= 2.6; I'm not sure if higher versions even look at it. After exploring https://github.com/pypa/sampleproject, their MANIFEST.in says:

# If using Python 2.6 or less, then have to include package data, even though
# it's already declared in setup.py
include sample/*.dat

which seems to imply this method is outdated. Meanwhile, in setup.py they declare:

setup(
    name='sample',
    ...
    # If there are data files included in your packages that need to be
    # installed, specify them here. If using Python 2.6 or less, then these
    # have to be included in MANIFEST.in as well.
    package_data={
        'sample': ['package_data.dat'],
    },
    ...
)

(I'm not sure why they chose a wildcard in MANIFEST.in and a filename in setup.py; they refer to the same file.) This, along with being simpler, again seems to imply that the package_data route is superior to the MANIFEST.in method. Well, unless you have to support 2.6, in which case my prayers go out to you.
Very strange behavior of operator 'is' with methods
Why is the first result False? Should it not be True?

>>> from collections import OrderedDict
>>> OrderedDict.__repr__ is OrderedDict.__repr__
False
>>> dict.__repr__ is dict.__repr__
True
For user-defined functions, in Python 2 unbound and bound methods are created on demand, through the descriptor protocol; OrderedDict.__repr__ is such a method object, as the wrapped function is implemented as a pure-Python function. The descriptor protocol will call the __get__ method on objects that support it, so __repr__.__get__() is called whenever you try to access OrderedDict.__repr__; for classes, None (no instance) and the class object itself are passed in. Because you get a new method object each time the function's __get__ method is invoked, is fails: it is not the same method object.

dict.__repr__ is not a custom Python function but a C function, and its __get__ descriptor method essentially just returns self when accessed on the class. Accessing the attribute gives you the same object each time, so is works:

>>> dict.__repr__.__get__(None, dict) is dict.__repr__  # None means no instance
True

Methods have a __func__ attribute referencing the wrapped function; use that to test for identity:

>>> OrderedDict.__repr__
<unbound method OrderedDict.__repr__>
>>> OrderedDict.__repr__.__func__
<function __repr__ at 0x102c2f1b8>
>>> OrderedDict.__repr__.__func__.__get__(None, OrderedDict)
<unbound method OrderedDict.__repr__>
>>> OrderedDict.__repr__.__func__ is OrderedDict.__repr__.__func__
True

Python 3 does away with unbound methods; function.__get__(None, classobj) returns the function object itself (so it behaves like dict.__repr__ does). But you will see the same behaviour with bound methods, i.e. methods retrieved from an instance.
HTTPError 403 (Forbidden) with Django and python-social-auth connecting to Google with OAuth2
Using python-social-auth, I get a 403: Forbidden error message after accepting access from Google.
You need to add the Google+ API to the list of enabled APIs on the Google Developer Console (under APIs).

Note: If you want to see the real error message, use the traceback to look at the content of the response variable (response.text). I use werkzeug for that (django-extensions + python manage.py runserver_plus).
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?
...
    soup = BeautifulSoup(html, "lxml")
  File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 152, in __init__
    % ",".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?

The above outputs on my terminal. I am on Mac OS 10.7.x. I have Python 2.7.1, and followed this tutorial to get Beautiful Soup and lxml, which both installed successfully and work with a separate test file located here. In the Python script that causes this error, I have included this line:

from pageCrawler import comparePages

And in the pageCrawler file I have included the following two lines:

from bs4 import BeautifulSoup
from urllib2 import urlopen

Any help in figuring out what the problem is and how it can be solved would be much appreciated.
I have a suspicion that this is related to the parser that BS will use to read the HTML. They document it here, but if you're like me (on OSX) you might be stuck with something that requires a bit of work.

You'll notice that in the BS4 documentation page above, they point out that by default BS4 will use the Python built-in HTML parser. Assuming you are on OSX, the Apple-bundled version of Python is 2.7.2, which is not lenient with character formatting. I hit this same problem, so I upgraded my version of Python to work around it. Doing this in a virtualenv will minimize disruption to other projects.

If doing that sounds like a pain, you can switch over to the LXML parser:

pip install lxml

And then try:

soup = BeautifulSoup(html, "lxml")

Depending on your scenario, that might be good enough. I found this annoying enough to warrant upgrading my version of Python. Using virtualenv, you can migrate your packages fairly easily.
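If installing lxml is not an option either, a small sketch of the standard-library fallback, which needs no compiled dependencies (the HTML string here is a placeholder):

from bs4 import BeautifulSoup

html = "<html><body><p>Hello</p></body></html>"
# "html.parser" ships with Python itself, so no extra install is required
soup = BeautifulSoup(html, "html.parser")
print(soup.p.text)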
Pandas read_sql with parameters
Are there any examples of how to pass parameters with an SQL query in Pandas? In particular, I'm using an SQLAlchemy engine to connect to a PostgreSQL database. So far I've found that the following works:

df = psql.read_sql(('select "Timestamp","Value" from "MyTable" '
                    'where "Timestamp" BETWEEN %s AND %s'),
                   db,
                   params=[datetime(2014,6,24,16,0), datetime(2014,6,24,17,0)],
                   index_col=['Timestamp'])

The Pandas documentation says that params can also be passed as a dict, but I can't seem to get this to work, having tried for instance:

df = psql.read_sql(('select "Timestamp","Value" from "MyTable" '
                    'where "Timestamp" BETWEEN :dstart AND :dfinish'),
                   db,
                   params={"dstart": datetime(2014,6,24,16,0), "dfinish": datetime(2014,6,24,17,0)},
                   index_col=['Timestamp'])

What is the recommended way of running these types of queries from Pandas?
The read_sql docs say the params argument can be a list, tuple or dict (see docs). To pass values in the SQL query, different syntaxes are possible: ?, :1, :name, %s, %(name)s (see PEP 249). But not all of these possibilities are supported by all database drivers; which syntax is supported depends on the driver you are using (psycopg2 in your case, I suppose).

In your second case, when using a dict, you are using 'named arguments', and according to the psycopg2 documentation, they support the %(name)s style (and so not :name, I suppose), see http://initd.org/psycopg/docs/usage.html#query-parameters. So using that style should work:

df = psql.read_sql(('select "Timestamp","Value" from "MyTable" '
                    'where "Timestamp" BETWEEN %(dstart)s AND %(dfinish)s'),
                   db,
                   params={"dstart": datetime(2014,6,24,16,0), "dfinish": datetime(2014,6,24,17,0)},
                   index_col=['Timestamp'])
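Alternatively, since an SQLAlchemy engine is available, the :name style can be made driver-independent by wrapping the query in sqlalchemy.text(). A hedged sketch (the connection string is a placeholder):

from datetime import datetime
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine('postgresql://user:password@localhost/mydb')  # placeholder DSN

# text() lets SQLAlchemy translate :name parameters for the driver in use
query = text('select "Timestamp","Value" from "MyTable" '
             'where "Timestamp" BETWEEN :dstart AND :dfinish')
df = pd.read_sql(query, engine,
                 params={"dstart": datetime(2014, 6, 24, 16, 0),
                         "dfinish": datetime(2014, 6, 24, 17, 0)},
                 index_col=['Timestamp'])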
python : can reduce be translated into list comprehensions like map, lambda and filter?
When programming in Python, I now avoid map, lambda and filter by using list comprehensions, because they are easier to read and faster in execution. But can reduce be replaced as well?

E.g. an object has a method union() that works on another object, a1.union(a2), and gives a third object of the same type. I have a list of objects:

L = [a1, a2, a3, ...]

How do I get the union() of all these objects with list comprehensions, the equivalent of:

result = reduce(lambda a, b: a.union(b), L[1:], L[0])
It is no secret that reduce is not among the favored functions of the Pythonistas. Generically, reduce is a left fold on a list. It is conceptually easy to write a fold in Python that will fold left or right on an iterable:

def fold(func, iterable, initial=None, reverse=False):
    x = initial
    if reverse:
        iterable = reversed(iterable)
    for e in iterable:
        x = func(x, e) if x is not None else e
    return x

Without some atrocious hack, this cannot be replicated in a comprehension, because there is no accumulator-type function in a comprehension. Just use reduce -- or write a fold that makes more sense to you.
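For completeness, a small sketch of the reduce version from the question (sets stand in here for any objects exposing a union() method; note that in Python 3 reduce lives in functools):

from functools import reduce

L = [{1, 2}, {2, 3}, {3, 4}]
result = reduce(lambda a, b: a.union(b), L)
print(result)  # {1, 2, 3, 4}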
Automating pydrive verification process
I am trying to automate the GoogleAuth process when using the pydrive library (https://pypi.python.org/pypi/PyDrive). I've set up pydrive and the Google API such that my secret_client.json works, but it requires web authentication for Google Drive access every time I run my script:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()

drive = GoogleDrive(gauth)
textfile = drive.CreateFile()
textfile.SetContentFile('eng.txt')
textfile.Upload()
print textfile

drive.CreateFile({'id':textfile['id']}).GetContentFile('eng-dl.txt')

eng.txt is just a text file. Moreover, when I try to use the above script while I am logged into another account, it doesn't upload eng.txt into the Drive account that generated the secret_client.json, but into the account that was logged in when I authorized the authentication.

From the previous post, I've tried the following to automate the verification process, but it gives error messages:

import base64, httplib2
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

#gauth = GoogleAuth()
#gauth.LocalWebserverAuth()

# from google API console - convert private key to base64 or load from file
id = "464269119984-j3oh4aj7pd80mjae2sghnua3thaigugu.apps.googleusercontent.com"
key = base64.b64decode('COaV9QUlO1OdqtjMiUS6xEI8')

credentials = SignedJwtAssertionCredentials(id, key, scope='https://www.googleapis.com/auth/drive')
credentials.authorize(httplib2.Http())

gauth = GoogleAuth()
gauth.credentials = credentials

drive = GoogleDrive(gauth)
textfile = drive.CreateFile()
textfile.SetContentFile('eng.txt')
textfile.Upload()
print textfile

drive.CreateFile({'id':textfile['id']}).GetContentFile('eng-dl.txt')

Error:

Traceback (most recent call last):
  File "/home/alvas/git/SeedLing/cloudwiki.py", line 29, in <module>
    textfile.Upload()
  File "/usr/local/lib/python2.7/dist-packages/pydrive/files.py", line 216, in Upload
    self._FilesInsert(param=param)
  File "/usr/local/lib/python2.7/dist-packages/pydrive/auth.py", line 53, in _decorated
    self.auth.Authorize()
  File "/usr/local/lib/python2.7/dist-packages/pydrive/auth.py", line 422, in Authorize
    self.service = build('drive', 'v2', http=self.http)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/util.py", line 132, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/apiclient/discovery.py", line 192, in build
    resp, content = http.request(requested_url)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/util.py", line 132, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/client.py", line 475, in new_request
    self._refresh(request_orig)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/client.py", line 653, in _refresh
    self._do_refresh_request(http_request)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/client.py", line 677, in _do_refresh_request
    body = self._generate_refresh_request_body()
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/client.py", line 861, in _generate_refresh_request_body
    assertion = self._generate_assertion()
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/client.py", line 977, in _generate_assertion
    private_key, self.private_key_password), payload)
  File "/usr/local/lib/python2.7/dist-packages/oauth2client/crypt.py", line 131, in from_string
    pkey = crypto.load_pkcs12(key, password).get_privatekey()
OpenSSL.crypto.Error: [('asn1 encoding routines', 'ASN1_get_object', 'header too long')]

My authentication on the Google Drive API looks like this: [screenshot not included]

How can I use pydrive such that I do not need to authenticate every time I use it? How can I allow automatic authentication such that the Python script using pydrive will only upload to the account that generated the secret_client.json, and not the account currently logged in in the browser?
First, you're misunderstanding one very important bit of how this works:

"when I try to use the above script while I am logged into another account. It doesn't upload the eng.txt into my gdrive that generated the secret_client.json but the account that was logged in when I authorize the authentication"

This is exactly how it's supposed to work. You, as the developer, distribute client_secret.json with your application, and that file is used by PyDrive to authenticate the application with Google. Google wants to know how many API requests are being made by each application out there for all sorts of reasons (metrics, charging the account, revoking access, etc.), so it requires the application to authenticate itself.

Now, when your application runs LocalWebserverAuth, it's authenticating the client with Google. The client, of course, is the person actually using your application. In this case, the developer and client are the same person (you), but imagine you want to distribute your application to a million different people. They need to be able to authenticate themselves and upload files to their own Drive account, rather than having them all end up in yours (the developer's), who provided client_secret.json.

That said, it's really just a very minor change to make it so your app doesn't have to ask the client to authenticate every time you run the app. You just need to use LoadCredentialsFile and SaveCredentialsFile:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
# Try to load saved client credentials
gauth.LoadCredentialsFile("mycreds.txt")
if gauth.credentials is None:
    # Authenticate if they're not there
    gauth.LocalWebserverAuth()
elif gauth.access_token_expired:
    # Refresh them if expired
    gauth.Refresh()
else:
    # Initialize the saved creds
    gauth.Authorize()
# Save the current credentials to a file
gauth.SaveCredentialsFile("mycreds.txt")

drive = GoogleDrive(gauth)

textfile = drive.CreateFile()
textfile.SetContentFile('eng.txt')
textfile.Upload()
print textfile

drive.CreateFile({'id':textfile['id']}).GetContentFile('eng-dl.txt')
What are Flask Blueprints, exactly?
I have read the official Flask documentation on Blueprints and even one or two blog posts on using them. I've even used them in my web app, but I don't completely understand what they are or how they fit into my app as a whole. How is it similar to an instance of my app but not quite? The documentation is comprehensive but I seek a layman explanation or an enlightening analogy to spark it for me. I was sufficiently perplexed when a colleague asked me to explain a Flask blueprint to them that I elected to ask here.
A blueprint is a template for generating a "section" of a web application. Think of it like a mold: you can take the blueprint and apply it to your application in several places. Each time you apply it, the blueprint will create a new version of its structure in the plaster of your application.

# An example
from flask import Blueprint

tree_mold = Blueprint("mold", __name__)

@tree_mold.route("/leaves")
def leaves():
    return "This tree has leaves"

@tree_mold.route("/roots")
def roots():
    return "And roots as well"

@tree_mold.route("/rings")
@tree_mold.route("/rings/<int:year>")
def rings(year=None):
    return "Looking at the rings for {year}".format(year=year)

This is a simple mold for working with trees - it says that any application that deals with trees should provide access to its leaves, its roots, and its rings (by year). By itself, it is a hollow shell - it cannot route, it cannot respond, until it is impressed upon an application:

from tree_workshop import tree_mold

app.register_blueprint(tree_mold, url_prefix="/oak")
app.register_blueprint(tree_mold, url_prefix="/fir")
app.register_blueprint(tree_mold, url_prefix="/ash")

Once it is created, it may be "impressed" on the application by using the register_blueprint function - this "impresses" the mold of the blueprint on the application at the locations specified by url_prefix.
How to deal with Pylint's "too-many-instance-attributes" message?
I have just tried to lint some code with Pylint, and the last remaining error is:

R0902: too-many-instance-attributes (8/7)

I understand the rationale behind limiting the number of instance attributes, but seven seems a bit low. I also realise that the linter should not have the last word. However, I would like to know what I should be doing instead of:

def __init__(self, output_file=None, output_dir=None):
    """
    Set the frobnicator up, along with default geometries
    """
    self.margin = 30
    self.pos = [0, 0]
    self.sep = [5, 5]
    self.cell = [20, 20]
    self.frobbr = library.Frobbr()
    page = self.frobbr.get_settings('page')
    self.lim = [page.get_width() - self.margin,
                page.get_height() - self.margin]
    self.filename = output_file
    self.moddir = output_dir

Should I package the geometries up into a dict, do something else to stop Pylint complaining, or just ignore it (which I don't really want to do)?
A linter's job is to make you aware of potential issues with your code, and as you say in your question, it should not have the last word. If you've considered what pylint has to say and decided that for this class the attributes you have are appropriate (which seems reasonable to me), you can both suppress the error and indicate that you've considered the issue by adding a disabling comment to your class:

class Frobnicator:
    """All frobnication, all the time."""
    # pylint: disable=too-many-instance-attributes
    # Eight is reasonable in this case.

    def __init__(self):
        self.one = 1
        self.two = 2
        self.three = 3
        self.four = 4
        self.five = 5
        self.six = 6
        self.seven = 7
        self.eight = 8

That way, you're neither ignoring Pylint nor a slave to it; you're using it as the helpful but fallible tool it is. By default, Pylint will produce an informational message when you locally disable a check:

Locally disabling too-many-instance-attributes (R0902) (locally-disabled)

You can prevent that message from appearing in one of two ways:

Add a disable= flag when running pylint:

$ pylint --disable=locally-disabled frob.py

Add a directive to a pylintrc config file:

[MESSAGES CONTROL]
disable = locally-disabled
lxml installation error ubuntu 14.04 (internal compiler error)
I am having problems installing lxml. I have tried the solutions from related questions on this site and other sites, but could not fix the problem. I need some suggestions/solutions on this. I am providing the full log after executing pip install lxml:

Downloading/unpacking lxml
  Downloading lxml-3.3.5.tar.gz (3.5MB): 3.5MB downloaded
  Running setup.py (path:/tmp/pip_build_root/lxml/setup.py) egg_info for package lxml
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
      warnings.warn(msg)
    Building lxml version 3.3.5.
    Building without Cython.
    Using build configuration of libxslt 1.1.28
    warning: no previously-included files found matching '*.py'
Installing collected packages: lxml
  Running setup.py install for lxml
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
      warnings.warn(msg)
    Building lxml version 3.3.5.
    Building without Cython.
    Using build configuration of libxslt 1.1.28
    building 'lxml.etree' extension
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip_build_root/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
    x86_64-linux-gnu-gcc: internal compiler error: Killed (program cc1)
    Please submit a full bug report, with preprocessed source if appropriate.
    See <file:///usr/share/doc/gcc-4.8/README.Bugs> for instructions.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 4
    Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-KUq9VD-record/install-record.txt --single-version-externally-managed --compile:
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url'
      warnings.warn(msg)
    Building lxml version 3.3.5.
    Building without Cython.
    Using build configuration of libxslt 1.1.28
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-2.7
    creating build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/pyclasslookup.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/doctestcompare.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/builder.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/cssselect.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/sax.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/_elementpath.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/ElementInclude.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/__init__.py -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/usedoctest.py -> build/lib.linux-x86_64-2.7/lxml
    creating build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/__init__.py -> build/lib.linux-x86_64-2.7/lxml/includes
    creating build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/html5parser.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/ElementSoup.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/builder.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/clean.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/_diffcommand.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/diff.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/_html5builder.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/_setmixin.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/defs.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/formfill.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/soupparser.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/__init__.py -> build/lib.linux-x86_64-2.7/lxml/html
    copying src/lxml/html/usedoctest.py -> build/lib.linux-x86_64-2.7/lxml/html
    creating build/lib.linux-x86_64-2.7/lxml/isoschematron
    copying src/lxml/isoschematron/__init__.py -> build/lib.linux-x86_64-2.7/lxml/isoschematron
    copying src/lxml/lxml.etree.h -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/lxml.etree_api.h -> build/lib.linux-x86_64-2.7/lxml
    copying src/lxml/includes/config.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/xpath.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/tree.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/dtdvalid.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/xmlparser.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/etreepublic.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/xslt.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/schematron.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/uri.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/c14n.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/xinclude.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/xmlschema.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/xmlerror.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/relaxng.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/htmlparser.pxd -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/etree_defs.h -> build/lib.linux-x86_64-2.7/lxml/includes
    copying src/lxml/includes/lxml-version.h -> build/lib.linux-x86_64-2.7/lxml/includes
    creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources
    creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/rng
    copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/rng
    creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl
    copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl
    copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl
    creating build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.linux-x86_64-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
    running build_ext
    building 'lxml.etree' extension
    creating build/temp.linux-x86_64-2.7
    creating build/temp.linux-x86_64-2.7/src
    creating build/temp.linux-x86_64-2.7/src/lxml
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/libxml2 -I/tmp/pip_build_root/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w
    x86_64-linux-gnu-gcc: internal compiler error: Killed (program cc1)
    Please submit a full bug report, with preprocessed source if appropriate.
    See <file:///usr/share/doc/gcc-4.8/README.Bugs> for instructions.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 4
    ----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-KUq9VD-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/lxml
Storing debug log for failure in /root/.pip/pip.log

Also, the pip.log file looks like this:

    status = self.run(options, args)
  File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 283, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 1435, in install
    requirement.install(install_options, global_options, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/pip/req.py", line 706, in install
    cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
  File "/usr/lib/python2.7/dist-packages/pip/util.py", line 697, in call_subprocess
    % (command_desc, proc.returncode, cwd))
InstallationError: Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-KUq9VD-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/lxml

The dmesg | tail command shows output like this:

[1744367.676147] Out of memory: Kill process 25518 (cc1) score 388 or sacrifice child
[1744367.676665] Killed process 25518 (cc1) total-vm:242352kB, anon-rss:200608kB, file-rss:0kB

It seems like a memory issue. I am taking reference from this question.
A possible solution (if you have no ability to increase memory on that machine) is to add a swap file:

sudo dd if=/dev/zero of=/swapfile bs=1024 count=524288
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

From https://github.com/pydata/pandas/issues/1880#issuecomment-9920484 — this worked for me on the smallest Digital Ocean machine.
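Once the build has succeeded, the temporary swap file can be removed again (a small follow-up sketch; the path matches the commands above):

sudo swapoff /swapfile
sudo rm /swapfile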
Why do the Python docs say I need to define __ne__ when I define __eq__?
According to the Python docs: "when defining __eq__(), one should also define __ne__() so that the operators will behave as expected". However, it appears that Python computes __ne__ as not __eq__ automatically:

In [8]: class Test:
   ...:     def __eq__(self, other):
   ...:         print("calling __eq__")
   ...:         return isinstance(other, Test)

In [9]: a = Test()

In [10]: b = Test()

In [11]: a == b
calling __eq__
Out[11]: True

In [12]: a != b
calling __eq__
Out[12]: False

In [13]: a == 1
calling __eq__
Out[13]: False

In [14]: a != 1
calling __eq__
Out[14]: True

So what's the point of defining __ne__ if it's just going to be return not self.__eq__(other)? And furthermore, where is this behavior actually documented?

EDIT: Apparently it matters that I am using Python 3. In Python 2, I get:

In [1]: class Test(object):
   ...:     def __eq__(self, other):
   ...:         print("calling __eq__")
   ...:         return isinstance(other, Test)

In [2]: a = Test()

In [3]: b = Test()

In [4]: a == b
calling __eq__
Out[4]: True

In [5]: a != b
Out[5]: True

In [6]: a == 1
calling __eq__
Out[6]: False

In [7]: a != 1
Out[7]: True

But the docs I referenced are the Python 3 docs. Were they just not updated?
Python 3 changed behaviour for the != case; see Python 3, What's New: "!= now returns the opposite of ==, unless == returns NotImplemented." It was deemed a useful change. The fact that the documentation has not been updated is indeed a long-standing bug. However, as a comment on the report points out, if you inherit from a class that has already defined __ne__, overriding just __eq__ is not enough; you'll also have to override the __ne__ method.
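A small sketch of that caveat: when a base class supplies its own __ne__, a subclass that overrides only __eq__ keeps the inherited __ne__, so the two operators can disagree:

class Base:
    def __eq__(self, other):
        return True
    def __ne__(self, other):
        return True  # deliberately inconsistent, to make the effect visible

class Child(Base):
    def __eq__(self, other):
        return True  # __ne__ is still inherited from Base

a, b = Child(), Child()
print(a == b)  # True
print(a != b)  # True -- Base.__ne__ wins; Python does not negate __eq__ here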
Label encoding across multiple columns in scikit-learn
I'm trying to use scikit-learn's LabelEncoder to encode a pandas DataFrame of string labels. As the dataframe has many (50+) columns, I want to avoid creating a LabelEncoder object for each column; I'd rather just have one big LabelEncoder object that works across all my columns of data. Throwing the entire DataFrame into LabelEncoder creates the below error. Please bear in mind that I'm using dummy data here; in actuality I'm dealing with about 50 columns of string-labeled data, so I need a solution that doesn't reference any columns by name.

import pandas
from sklearn import preprocessing

df = pandas.DataFrame({
    'pets': ['cat', 'dog', 'cat', 'monkey', 'dog', 'dog'],
    'owner': ['Champ', 'Ron', 'Brick', 'Champ', 'Veronica', 'Ron'],
    'location': ['San_Diego', 'New_York', 'New_York', 'San_Diego', 'San_Diego', 'New_York']
})

le = preprocessing.LabelEncoder()
le.fit(df)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/preprocessing/label.py", line 103, in fit
    y = column_or_1d(y, warn=True)
  File "/Users/bbalin/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.py", line 306, in column_or_1d
    raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape (6, 3)

Any thoughts on how to get around this problem?
You can easily do this, though:

df.apply(LabelEncoder().fit_transform)

EDIT: Since this answer is over a year old and generated many upvotes (including a bounty), I should probably extend this further. For inverse_transform and transform, you have to do a little bit of a hack:

from collections import defaultdict
d = defaultdict(LabelEncoder)

With this, you now retain each column's LabelEncoder in a dictionary:

# Encoding the variable
fit = df.apply(lambda x: d[x.name].fit_transform(x))

# Inverse the encoded
fit.apply(lambda x: d[x.name].inverse_transform(x))

# Using the dictionary to label future data
df.apply(lambda x: d[x.name].transform(x))
TypeError: get() takes no keyword arguments
I'm new at Python, and I'm trying to basically make a hash table that checks if a key points to a value in the table and, if not, initializes it to an empty array. The offending part of my code is the line:

converted_comments[submission.id] = converted_comments.get(submission.id, default=0)

I get the error:

TypeError: get() takes no keyword arguments

But in the documentation (and various pieces of example code), I can see that it does take a default argument:

https://docs.python.org/2/library/stdtypes.html#dict.get
http://www.tutorialspoint.com/python/dictionary_get.htm

"Following is the syntax for get() method: dict.get(key, default=None)"

There's nothing about this on The Stack, so I assume it's a beginner mistake?
Due to the way the Python C-level APIs developed, a lot of built-in functions and methods don't actually have names for their arguments. Even if the documentation calls the argument default, the function doesn't recognize the name default as referring to the optional second argument. You have to provide the argument positionally:

>>> d = {1: 2}
>>> d.get(0, default=0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: get() takes no keyword arguments
>>> d.get(0, 0)
0
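For the stated goal — initializing a missing key to an empty list — a short sketch of two common idioms that sidestep the keyword issue entirely (the key string is a placeholder):

converted_comments = {}

# setdefault returns the existing value, or inserts the default and returns it
converted_comments.setdefault('some_submission_id', []).append('a comment')

# alternatively, a defaultdict creates the empty list on first access
from collections import defaultdict
converted_comments = defaultdict(list)
converted_comments['some_submission_id'].append('a comment')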
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 23: ordinal not in range(128)
When I try to concatenate this, I get a UnicodeDecodeError when a field contains 'ñ' or '´'. If the field that contains the 'ñ' or '´' is the last one, I get no error.

#...
nombre = fabrica
nombre = nombre.encode("utf-8") + '-' + sector.encode("utf-8")
nombre = nombre.encode("utf-8") + '-' + unidad.encode("utf-8")
#...
return nombre

Any ideas? Many thanks!
You are encoding to UTF-8, then re-encoding to UTF-8. Python can only do this if it first decodes again to Unicode, but it has to use the default ASCII codec:

>>> u'ñ'
u'\xf1'
>>> u'ñ'.encode('utf8')
'\xc3\xb1'
>>> u'ñ'.encode('utf8').encode('utf8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)

Don't keep encoding; leave encoding to UTF-8 to the last possible moment instead, and concatenate Unicode values. You can use str.join() (or, rather, unicode.join()) here to concatenate the three values with dashes in between (note that join() takes a single iterable):

nombre = u'-'.join([fabrica, sector, unidad])
return nombre.encode('utf-8')

but even encoding here might be too early. Rule of thumb: decode the moment you receive the value (if the API does not already supply Unicode values), encode only when you have to (if the destination API does not handle Unicode values directly).
gunicorn.errors.HaltServer: django
I have a Django app and am trying to set it up with gunicorn first, and later with supervisor and nginx. The app runs perfectly with the normal Django command python manage.py runserver. I installed gunicorn using pip (pip install gunicorn), and my Django version is 1.5.3. When I run the command below inside the virtualenv:

gunicorn hello.wsgi:application -b xx.xxx.xxx.xx:8000

I face this error:

Traceback (most recent call last):
  File "/root/Envs/proj/bin/gunicorn", line 9, in <module>
    load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run
    super(Application, self).run()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run
    Arbiter(self).run()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run
    self.manage_workers()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers
    self.spawn_workers()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers
    time.sleep(0.1 * random.random())
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld
    self.reap_workers()
  File "/root/Envs/proj/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers
    raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>

So why is the above error encountered, and what's the fix?
Actually the problem here was the WSGI file itself. In older Django versions the WSGI file was named with an extension of .wsgi, but in recent versions it is created with an extension of .py — that is, the WSGI file must be a Python module. So the file should be hello_wsgi.py, and the command should be:

gunicorn hello_wsgi:application -b xx.xxx.xxx.xx:8000
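For reference, a minimal sketch of what such a WSGI module typically contains in Django 1.4+ (the settings module name is a placeholder for your project's):

# hello_wsgi.py
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hello.settings")
application = get_wsgi_application()

gunicorn then loads the application callable from this module via the module:callable syntax shown above.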
Pandas: Get unique MultiIndex level values by label
Say you have this MultiIndex-ed DataFrame: df = pd.DataFrame({'co':['DE','DE','FR','FR'], 'tp':['Lake','Forest','Lake','Forest'], 'area':[10,20,30,40], 'count':[7,5,2,3]}) df = df.set_index(['co','tp']) Which looks like this: area count co tp DE Lake 10 7 Forest 20 5 FR Lake 30 2 Forest 40 3 I would like to retrieve the unique values per index level. This can be accomplished using df.index.levels[0] # returns ['DE', 'FR'] df.index.levels[1] # returns ['Lake', 'Forest'] What I would really like to do is to retrieve these lists by addressing the levels by their name, i.e. 'co' and 'tp'. The shortest two ways I could find look like this: list(set(df.index.get_level_values('co'))) # returns ['DE', 'FR'] df.index.levels[df.index.names.index('co')] # returns ['DE', 'FR'] But none of them is very elegant. Is there a shorter way?
I guess you want the unique values in a certain level (addressed by level name) of a multiindex. I usually do the following, which is a bit long. In [11]: df.index.get_level_values('co').unique() Out[11]: array(['DE', 'FR'], dtype=object)
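If you need this for every level at once, a quick sketch that loops over the level names:

for name in df.index.names:
    print(name, df.index.get_level_values(name).unique())
# e.g. co ['DE' 'FR']
#      tp ['Lake' 'Forest']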
How to deal with certificates using Selenium?
I am using Selenium to launch a browser. How can I deal with the webpages (URLs) that will ask the browser to accept a certificate or not? In Firefox, I may have a website that asks me to accept its certificate, like this: On the Internet Explorer browser, I may get something like this: On Google Chrome: I repeat my question: How can I automate the acceptance of a website's certificate when I launch a browser (Internet Explorer, Firefox and Google Chrome) with Selenium (Python programming language)?
For Firefox, you need to set the accept_untrusted_certs FirefoxProfile() option to True: from selenium import webdriver profile = webdriver.FirefoxProfile() profile.accept_untrusted_certs = True driver = webdriver.Firefox(firefox_profile=profile) driver.get('https://cacert.org/') driver.close() For Chrome, you need to add the --ignore-certificate-errors ChromeOptions() argument: from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument('--ignore-certificate-errors') driver = webdriver.Chrome(chrome_options=options) driver.get('https://cacert.org/') driver.close() For Internet Explorer, you need to set the acceptSslCerts desired capability: from selenium import webdriver capabilities = webdriver.DesiredCapabilities().INTERNETEXPLORER capabilities['acceptSslCerts'] = True driver = webdriver.Ie(capabilities=capabilities) driver.get('https://cacert.org/') driver.close() Actually, according to the Desired Capabilities documentation, setting the acceptSslCerts capability to True should work for all browsers since it is a generic read/write capability: acceptSslCerts boolean Whether the session should accept all SSL certs by default. Working demo for Firefox: >>> from selenium import webdriver Setting acceptSslCerts to False: >>> capabilities = webdriver.DesiredCapabilities().FIREFOX >>> capabilities['acceptSslCerts'] = False >>> driver = webdriver.Firefox(capabilities=capabilities) >>> driver.get('https://cacert.org/') >>> print(driver.title) Untrusted Connection >>> driver.close() Setting acceptSslCerts to True: >>> capabilities = webdriver.DesiredCapabilities().FIREFOX >>> capabilities['acceptSslCerts'] = True >>> driver = webdriver.Firefox(capabilities=capabilities) >>> driver.get('https://cacert.org/') >>> print(driver.title) Welcome to CAcert.org >>> driver.close()
Python official installer missing python27.dll
I installed Python 2.7.7 32-bit on Windows from official website and it is missing python27.dll. How can I get this DLL?
This holds at least for the ActiveState Python distribution, and for the official Python distribution: https://docs.python.org/2/faq/windows.html#id7 The DLL is in C:\Windows\System\PythonNN.dll where NN is the version number. On a 64-bit system, a 32-bit DLL will be installed here: %SystemRoot%\SysWoW64 and a running 32-bit application will magically translate this to the proper path, http://en.wikipedia.org/wiki/WoW64 When I link against Python27, I use the lib file here: C:\Python27\libs\python27.lib It is somewhat disappointing that the DLL gets put in system directories, since it means that you can have only one 32-bit and one 64-bit distribution with the same version number.
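If you want to verify which DLL a running interpreter actually loaded, here is a Windows-only sketch using sys.dllhandle (the module handle of the interpreter's DLL):

import sys
import ctypes

buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
ctypes.windll.kernel32.GetModuleFileNameW(sys.dllhandle, buf, 260)
print(buf.value)  # e.g. C:\Windows\SysWOW64\python27.dll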
Run Python script at startup in Ubuntu
I have a short Python script that needs to run at startup - Ubuntu 13.10. I have tried everything I can think of but can't get it to run. The script: #!/usr/bin/python import time with open("/home/username/Desktop/startup.txt", 'a') as f: f.write(str(time.time()) + " It worked!") (The actual script is a bit different, as I'm just using this for testing purposes, but you get the idea.) I've tried all of the following, with no luck: Put the command python startuptest.py in crontab, as @reboot python /home/username/Documents/startuptest.py, both as the regular user and as sudo Put the command python /home/username/Documents/startuptest.py in /etc/rc.local Opened Ubuntu's Startup Applications and put the command there Done all of the preceding, putting the command into a shell script and calling that shell script instead Nothing works. I get the feeling I'm missing something simple. Any ideas? (The script runs fine if I just run the command from a terminal.)
Put this in /etc/init (Use /etc/systemd in Ubuntu 15.x) mystartupscript.conf start on runlevel [2345] stop on runlevel [!2345] exec /path/to/script.py By placing this conf file there you hook into ubuntu's upstart service that runs services on startup. manual starting/stopping is done with sudo service mystartupscript start and sudo service mystartupscript stop
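On systemd-based releases (Ubuntu 15.x and later) the equivalent is a unit file; a sketch, with the script path carried over from the question as an assumption:

# /etc/systemd/system/mystartupscript.service
[Unit]
Description=Run my Python script at boot

[Service]
ExecStart=/usr/bin/python /home/username/Documents/startuptest.py

[Install]
WantedBy=multi-user.target

Enable it once with sudo systemctl enable mystartupscript.service, and start/stop it manually with sudo systemctl start mystartupscript and sudo systemctl stop mystartupscript.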
Python program hangs forever when called from subprocess
The pip test suite employs subprocess calls to run integration tests. Recently a PR was opened which removed some older compatibility code. Specifically, it replaced a b() function with explicit uses of the b"" literal. However this has seemingly broken something to where a particular subprocess call will hang forever. To make matters worse it only hangs forever on Python 3.3 (maybe only Python 3.3.5) and it cannot easily be reproduced outside of Travis. Relevant Pull Requests: https://github.com/pypa/pip/pull/1901 https://github.com/pypa/pip/pull/1900 https://github.com/pypa/pip/pull/1878 A similar problem occurs with other Pull Requests, however they fail on different versions of Python and different test cases. These Pull Requests are: https://github.com/pypa/pip/pull/1882 https://github.com/pypa/pip/pull/1912 (Python 3.3 again) Another user has reported a similar issue to me today in IRC; they say they can reproduce it locally on Ubuntu 14.04 with Python 3.3 from deadsnakes (but not on OSX), and not only on Travis like I've mostly been able to thus far. They've sent me steps to reproduce, which are: $ git clone git@github.com:xavfernandez/pip.git $ cd pip $ git checkout debug_stuck $ pip install pytest==2.5.2 scripttest==1.3 virtualenv==1.11.6 mock==1.0.1 pretend==1.0.8 setuptools==4.0 $ # The below should pass just fine $ py.test -k test_env_vars_override_config_file -v -s $ # Now edit pip/req/req_set.py and remove method remove_me_to_block or change its content to print('KO') or pass $ # The below should hang forever $ py.test -k test_env_vars_override_config_file -v -s In the above example, the remove_me_to_block method is not called anywhere; just the mere existence of it is enough to make the test not block, and the non-existence of it (or changing its contents) is enough to make the test block forever. Most of the debugging has been with the changes in this PR (https://github.com/pypa/pip/pull/1901). Having pushed one commit at a time, the tests passed until this particular commit was applied - https://github.com/dstufft/pip/commit/d296df620916b4cd2379d9fab988cbc088e28fe0. Specifically either the change to use b'\r\n' or (entry + endline).encode("utf-8") will trigger it, however neither of these things is in the execution path for pip install -vvv INITools, which is the command that it fails to execute. In attempting to trace down the problem I've noticed that if I replace at least one call to "something".encode("utf8") with (lambda: "something")().encode("utf8") it works. Another issue while attempting to debug this has been that various things I've tried (adding print statements, no-op atexit functions, using trollius for async subprocess) will simply shift the problem from a particular test case on a particular Python version to different test cases on different versions of Python. I am aware of the fact that the subprocess module can deadlock if you read/write from subprocess.Popen().stdout/stderr/stdin directly. However, this code is using the communicate() method, which is supposed to work around these issues. It is inside the wait() call that communicate() makes that the process hangs forever, waiting for the pip process to exit. Other information: It is very heisenbug-ey; I've managed to make it go away or shift based on various things that should not have any effect on it. I've traced the execution inside of pip itself all the way through to the end of the code paths until sys.exit() is called.
Replacing sys.exit() with os._exit() fixes all the hanging issues, however I'd rather not do that as we're then skipping the clean-up that the Python interpreter does. There are no additional threads running (verified with threading.enumerate). I've had some combinations of changes which have had it hang even without subprocess.PIPE being used for stdout/stderr/stdin, however other combinations will have it not hang if those are not used (or it'll shift to a different test case/Python version). It does not appear to be timing related; any particular commit will either fail 100% of the time on the affected test cases/Pythons or fail 0% of the time. Often the code that was changed isn't even being executed by that particular code path in the pip subprocess, however the mere existence of the change seems to break it. I've tried disabling bytecode generation using PYTHONDONTWRITEBYTECODE=1 and that had an effect in one combination, but in others it's had no effect. The command that the subprocess calls does not hang in every invocation (similar commands are issued through the test suite), however it does always hang in the exact same place for a particular commit. So far I've been completely unable to reproduce this outside of being called via subprocess in the test suite, however I don't know for a fact if it is or isn't related to that. I'm completely at a loss for what could be causing this. UPDATE #1 Using faulthandler.dump_traceback_later() I got this result: Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/requests/packages/urllib3/response.py", line 287 in closed Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/requests/packages/urllib3/response.py", line 287 in closed Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ Timeout (0:00:05)! 
Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ Timeout (0:00:05)! Current thread 0x00007f417bd92740: File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/requests/packages/urllib3/response.py", line 285 in closed File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ [ Duplicate Lines Snipped ] File "/tmp/pytest-10/test_env_vars_override_config_file0/pip_src/pip/_vendor/cachecontrol/filewrapper.py", line 24 in __getattr__ This suggests to me that maybe the problem is something to do with the garbage collection and urllib3? The Filewrapper in pip._vendor.cachecontrol.filewrapper is used as a wrapper around a urllib3 response object (which subclasses io.IOBase) so that we can tee the read() method to store the results of each read call in a buffer as well as returning it, and then once the file has been completely consumed run a callback with the contents of that buffer so that we can store the item in the cache. Could this be interacting with the GC in some way? Update #2 If I add a def __del__(self): pass method to the Filewrapper class, then everything works correctly in the cases I've tried. I tested to ensure that this wasn't because I just happened to define a method (which "fixes" it sometimes) by changing that to def __del2__(self): pass and it started failing again. I'm not sure why this works exactly and a no-op __del__ method seems like it's less than optimal. 
Update #3 Adding a import gc; gc.set_debug(gc.DEBUG_UNCOLLECTABLE) printed stuff to stderr twice during the execution of the pip command that has been hanging, they are: gc: uncollectable <CallbackFileWrapper 0x7f66385c1cd0> gc: uncollectable <dict 0x7f663821d5a8> gc: uncollectable <functools.partial 0x7f663831de10> gc: uncollectable <_io.BytesIO 0x7f663804dd50> gc: uncollectable <method 0x7f6638219170> gc: uncollectable <tuple 0x7f663852bd40> gc: uncollectable <HTTPResponse 0x7f663831c7d0> gc: uncollectable <PreparedRequest 0x7f66385c1a90> gc: uncollectable <dict 0x7f663852cb48> gc: uncollectable <dict 0x7f6637fdcab8> gc: uncollectable <HTTPHeaderDict 0x7f663831cb90> gc: uncollectable <CaseInsensitiveDict 0x7f66385c1ad0> gc: uncollectable <dict 0x7f6638218ab8> gc: uncollectable <RequestsCookieJar 0x7f663805d7d0> gc: uncollectable <dict 0x7f66382140e0> gc: uncollectable <dict 0x7f6638218680> gc: uncollectable <list 0x7f6638218e18> gc: uncollectable <dict 0x7f6637f14878> gc: uncollectable <dict 0x7f663852c5a8> gc: uncollectable <dict 0x7f663852cb00> gc: uncollectable <method 0x7f6638219d88> gc: uncollectable <DefaultCookiePolicy 0x7f663805d590> gc: uncollectable <list 0x7f6637f14518> gc: uncollectable <list 0x7f6637f285a8> gc: uncollectable <list 0x7f6637f144d0> gc: uncollectable <list 0x7f6637f14ab8> gc: uncollectable <list 0x7f6637f28098> gc: uncollectable <list 0x7f6637f14c20> gc: uncollectable <list 0x7f6637f145a8> gc: uncollectable <list 0x7f6637f14440> gc: uncollectable <list 0x7f663852c560> gc: uncollectable <list 0x7f6637f26170> gc: uncollectable <list 0x7f663821e4d0> gc: uncollectable <list 0x7f6637f2d050> gc: uncollectable <list 0x7f6637f14fc8> gc: uncollectable <list 0x7f6637f142d8> gc: uncollectable <list 0x7f663821d050> gc: uncollectable <list 0x7f6637f14128> gc: uncollectable <tuple 0x7f6637fa8d40> gc: uncollectable <tuple 0x7f66382189e0> gc: uncollectable <tuple 0x7f66382183f8> gc: uncollectable <tuple 0x7f663866cc68> gc: uncollectable <tuple 0x7f6637f1e710> gc: uncollectable <tuple 0x7f6637fc77a0> gc: uncollectable <tuple 0x7f6637f289e0> gc: uncollectable <tuple 0x7f6637f19f80> gc: uncollectable <tuple 0x7f6638534d40> gc: uncollectable <tuple 0x7f6637f259e0> gc: uncollectable <tuple 0x7f6637f1c7a0> gc: uncollectable <tuple 0x7f6637fc8c20> gc: uncollectable <tuple 0x7f6638603878> gc: uncollectable <tuple 0x7f6637f23440> gc: uncollectable <tuple 0x7f663852c248> gc: uncollectable <tuple 0x7f6637f2a0e0> gc: uncollectable <tuple 0x7f66386a6ea8> gc: uncollectable <tuple 0x7f663852f9e0> gc: uncollectable <tuple 0x7f6637f28560> and then gc: uncollectable <CallbackFileWrapper 0x7f66385c1350> gc: uncollectable <dict 0x7f6638c33320> gc: uncollectable <HTTPResponse 0x7f66385c1590> gc: uncollectable <functools.partial 0x7f6637f03ec0> gc: uncollectable <_io.BytesIO 0x7f663804d600> gc: uncollectable <dict 0x7f6637f1f680> gc: uncollectable <method 0x7f663902d3b0> gc: uncollectable <tuple 0x7f663852be18> gc: uncollectable <HTTPMessage 0x7f66385c1c10> gc: uncollectable <HTTPResponse 0x7f66385c1450> gc: uncollectable <PreparedRequest 0x7f66385cac50> gc: uncollectable <dict 0x7f6637f2f248> gc: uncollectable <dict 0x7f6637f28b90> gc: uncollectable <dict 0x7f6637f1e638> gc: uncollectable <list 0x7f6637f26cb0> gc: uncollectable <list 0x7f6637f2f638> gc: uncollectable <HTTPHeaderDict 0x7f66385c1f90> gc: uncollectable <CaseInsensitiveDict 0x7f66385b2890> gc: uncollectable <dict 0x7f6638bd9200> gc: uncollectable <RequestsCookieJar 0x7f663805da50> gc: uncollectable <dict 0x7f6637f28a28> gc: 
uncollectable <dict 0x7f663853aa28> gc: uncollectable <list 0x7f663853a6c8> gc: uncollectable <dict 0x7f6638ede5f0> gc: uncollectable <dict 0x7f6637f285f0> gc: uncollectable <dict 0x7f663853a4d0> gc: uncollectable <method 0x7f663911f710> gc: uncollectable <DefaultCookiePolicy 0x7f663805d210> gc: uncollectable <list 0x7f6637f28ab8> gc: uncollectable <list 0x7f6638215050> gc: uncollectable <list 0x7f663853a200> gc: uncollectable <list 0x7f6638215a28> gc: uncollectable <list 0x7f663853a950> gc: uncollectable <list 0x7f663853a998> gc: uncollectable <list 0x7f6637f21638> gc: uncollectable <list 0x7f6637f0cd40> gc: uncollectable <list 0x7f663853ac68> gc: uncollectable <list 0x7f6637f22c68> gc: uncollectable <list 0x7f663853a170> gc: uncollectable <list 0x7f6637fa6a28> gc: uncollectable <list 0x7f66382153b0> gc: uncollectable <list 0x7f66386a5e60> gc: uncollectable <list 0x7f663852f2d8> gc: uncollectable <list 0x7f66386a3320> [<pip._vendor.cachecontrol.filewrapper.CallbackFileWrapper object at 0x7f66385c1cd0>, <pip._vendor.cachecontrol.filewrapper.CallbackFileWrapper object at 0x7f66385c1350>] Is that useful information? I've never used that flag before so I have no idea if that is unusual or not.
In Python 2, if a set of objects are linked together in a reference cycle and at least one of them has a __del__ method, the garbage collector will not delete these objects; instead it parks them in the gc.garbage list, which is exactly what your gc.DEBUG_UNCOLLECTABLE output shows. If you have a reference cycle, adding a __del__() method may just hide bugs (work around them rather than fix them). According to your update #3, it looks like you have such an issue.
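A tiny Python 2 sketch of the behaviour described above; the cycle whose objects define __del__ ends up in gc.garbage instead of being freed:

import gc

class Node(object):
    def __del__(self):
        pass  # the mere presence of __del__ blocks collection of cycles in Python 2

a = Node()
b = Node()
a.other = b
b.other = a   # reference cycle
del a, b
gc.collect()
print gc.garbage   # both Node objects are stranded here, uncollectable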
How can I get the output of a matplotlib plot as an SVG?
I need to take the output of a matplotlib plot and turn it into an SVG path that I can use on a laser cutter. import matplotlib.pyplot as plt import numpy as np x = np.arange(0,100,0.00001) y = x*np.sin(2*np.pi*x) plt.plot(y) plt.show() For example, below you see a waveform. I would like to be able to output or save this waveform as an SVG path that I can later work with in a program such as Adobe Illustrator. I am aware of an SVG library called "Cairo" that matplotlib can use (matplotlib.use('Cairo')), however it's not clear to me that this will give me access to the SVG path that I need, even though matplotlib will now be using Cairo to generate the plot. I do have cairo working on my system, and can successfully draw an example composed of SVG paths that I can indeed edit in Illustrator, but I don't have a way to take my equation above into an SVG path. import cairo from cairo import SVGSurface, Context, Matrix from math import pi s = SVGSurface('example1.svg', WIDTH, HEIGHT) c = Context(s) # Transform to normal cartesian coordinate system m = Matrix(yy=-1, y0=HEIGHT) c.transform(m) # Set a background color c.save() c.set_source_rgb(0.3, 0.3, 1.0) c.paint() c.restore() # Draw some lines c.move_to(0, 0) c.line_to(2 * 72, 2* 72) c.line_to(3 * 72, 1 * 72) c.line_to(4 * 72, 2 * 72) c.line_to(6 * 72, 0) c.close_path() c.save() c.set_line_width(6.0) c.stroke_preserve() c.set_source_rgb(0.3, 0.3, 0.3) c.fill() c.restore() # Draw a circle c.save() c.set_line_width(6.0) c.arc(1 * 72, 3 * 72, 0.5 * 72, 0, 2 * pi) c.stroke_preserve() c.set_source_rgb(1.0, 1.0, 0) c.fill() c.restore() # Save as a SVG and PNG s.write_to_png('example1.png') s.finish() (note that the image displayed here is a png, as stackoverflow doesn't accept svg graphics for display)
You will most probably want to fix the image size and get rid of all sorts of backgrounds and axis markers: import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=[6,6]) x = np.arange(0,100,0.00001) y = x*np.sin(2*np.pi*x) plt.plot(y) plt.axis('off') plt.gca().set_position([0, 0, 1, 1]) plt.savefig("test.svg") The resulting SVG file contains only one extra element, as savefig really wants to save the figure background. The color of this background is easy to change to 'none', but it does not seem to get rid of it. Anyway, the SVG is very clean otherwise and in the correct scale (1/72" per unit).
Building Cython-compiled python code with PyInstaller
I am trying to build a Python multi-file code with PyInstaller. For that I have compiled the code with Cython, and am using .so files generated in place of .py files. Assuming the 1st file is main.py and the imported ones are file_a.py and file_b.py, I get file_a.so and file_b.so after Cython compilation. When I put main.py, file_a.so and file_b.so in a folder and run it by "python main.py", it works. But when I build it with PyInstaller and try to run the executable generated, it throws errors for imports done in file_a and file_b. How can this be fixed? One solution is to import all standard modules in main.py and this works. But if I do not wish to change my code, what can be the solution?
So I got this to work for you. Please have a look at Bundling Cython extensions with Pyinstaller Quick Start: git clone https://github.com/prologic/pyinstaller-cython-bundling.git cd pyinstaller-cython-bundling ./dist/build.sh This produces a static binary: $ du -h dist/hello 4.2M dist/hello $ ldd dist/hello not a dynamic executable And produces the output: $ ./dist/hello Hello World! FooBar Basically this came down to producing a simple setup.py that builds the extensions file_a.so and file_b.so and then uses pyinstaller to bundle the application and the extensions into a single executable. Example setup.py: from glob import glob from setuptools import setup from Cython.Build import cythonize setup( name="test", scripts=glob("bin/*"), ext_modules=cythonize("lib/*.pyx") ) Building the extensions: $ python setup.py develop Bundling the application: $ pyinstaller -r file_a.so,dll,file_a.so -r file_b.so,dll,file_b.so -F ./bin/hello
Split a generator into chunks without pre-walking it
(This question is related to this one and this one, but those are pre-walking the generator, which is exactly what I want to avoid) I would like to split a generator in chunks. The requirements are: do not pad the chunks: if the number of remaining elements is less than the chunk size, the last chunk must be smaller. do not walk the generator beforehand: computing the elements is expensive, and it must only be done by the consuming function, not by the chunker which means, of course: do not accumulate in memory (no lists) I have tried the following code: def head(iterable, max=10): for cnt, el in enumerate(iterable): yield el if cnt >= max: break def chunks(iterable, size=10): i = iter(iterable) while True: yield head(i, size) # Sample generator: the real data is much more complex, and expensive to compute els = xrange(7) for n, chunk in enumerate(chunks(els, 3)): for el in chunk: print 'Chunk %3d, value %d' % (n, el) And this somehow works: Chunk 0, value 0 Chunk 0, value 1 Chunk 0, value 2 Chunk 1, value 3 Chunk 1, value 4 Chunk 1, value 5 Chunk 2, value 6 ^CTraceback (most recent call last): File "xxxx.py", line 15, in <module> for el in chunk: File "xxxx.py", line 2, in head for cnt, el in enumerate(iterable): KeyboardInterrupt Buuuut ... it never stops (I have to press ^C) because of the while True. I would like to stop that loop whenever the generator has been consumed, but I do not know how to detect that situation. I have tried raising an Exception: class NoMoreData(Exception): pass def head(iterable, max=10): for cnt, el in enumerate(iterable): yield el if cnt >= max: break if cnt == 0 : raise NoMoreData() def chunks(iterable, size=10): i = iter(iterable) while True: try: yield head(i, size) except NoMoreData: break # Sample generator: the real data is much more complex, and expensive to compute els = xrange(7) for n, chunk in enumerate(chunks(els, 2)): for el in chunk: print 'Chunk %3d, value %d' % (n, el) But then the exception is only raised in the context of the consumer, which is not what I want (I want to keep the consumer code clean) Chunk 0, value 0 Chunk 0, value 1 Chunk 0, value 2 Chunk 1, value 3 Chunk 1, value 4 Chunk 1, value 5 Chunk 2, value 6 Traceback (most recent call last): File "xxxx.py", line 22, in <module> for el in chunk: File "xxxx.py", line 9, in head if cnt == 0 : raise NoMoreData __main__.NoMoreData() How can I detect that the generator is exhausted in the chunks function, without walking it?
One way would be to peek at the first element, if any, and then create and return the actual generator. def head(iterable, max=10): first = next(iterable) # raise exception when depleted def head_inner(): yield first # yield the extracted first element for cnt, el in enumerate(iterable): yield el if cnt + 1 >= max: # cnt + 1 to include first break return head_inner() Just use this in your chunk generator and catch the StopIteration exception like you did with your custom exception. Update: Here's another version, using itertools.islice to replace most of the head function, and a for loop. This simple for loop in fact does exactly the same thing as that unwieldy while-try-next-except-break construct in the original code, so the result is much more readable. from itertools import chain, islice def chunks(iterable, size=10): iterator = iter(iterable) for first in iterator: # stops when iterator is depleted def chunk(): # construct generator for next chunk yield first # yield element from for loop for more in islice(iterator, size - 1): yield more # yield more elements from the iterator yield chunk() # in outer generator, yield next chunk And we can get even shorter than that, using itertools.chain to replace the inner generator: def chunks(iterable, size=10): iterator = iter(iterable) for first in iterator: yield chain([first], islice(iterator, size - 1))
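A quick usage sketch of the final version against the question's sample data, in the same Python 2 style the question uses:

for n, chunk in enumerate(chunks(xrange(7), 3)):
    for el in chunk:
        print 'Chunk %3d, value %d' % (n, el)
# prints chunks 0..2 and then stops cleanly; no KeyboardInterrupt needed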
Why does Django South 1.0 use iteritems()?
I've just installed South 1.0 and when I was about to do my first migration I got this error message: ... /lib/python3.4/site-packages/south/migration/migrators.py", line 186, in _run_migration for name, db in south.db.dbs.iteritems(): AttributeError: 'dict' object has no attribute 'iteritems' I've fixed it by replacing two occurrences of iteritems() by items(). My questions are: If South 1.0 is compatible with Python3, why is this happening? Will my fix cause me any trouble? I'm amazed by the fact that I couldn't find anyone experiencing the same problem.
Update: South 1.0.1 was released on October 27th, 2014, which includes the fix for this issue: South 1.0.1 This is a small bugfix release of South with two changes: Python 3 compatability has been fixed (it was broken in 1.0 by an accidental introduction of iteritems()) South will explicitly error if it detects Django 1.7 or above rather than failing with cryptic errors. My original answer follows below. It appears Django-South does not really support Python 3, at least the project doesn't verify support before releasing. See this issue in the South tracker: Has there been any progress with Python3 support? What is the current status? I see a python3 related commit before the 0.8.3 release, wondering if it is complete or not. It should sort of work, but if you want ongoing Python 3 support then you're better off using django 1.7's migrations, which are South's successor and have a full python 3 test suite. Clearly the 'sorta' is 'not really'; the maintainer has shifted focus to the new migration infrastructure of Django 1.7. 0.8.4 was compatible, but this commit broke the compatibility again; your change should suffice to fix that issue. There is a pull request that addresses just that, but it has not yet been merged. As 1.0 was only released yesterday, the reason you haven't found anyone else with the problem is because not many people have run into it yet. Django 1.7's migration support is basically an integration of South into the mainline, and it supports Python 3 fully (automated tests are run for the full Django tree catching regressions like these). The first release candidate is out (see https://www.djangoproject.com/download/ for instructions on how to install it).
How to mock os.walk in python with a temporary filesystem?
I'm trying to test some code that uses os.walk. I want to create a temporary, in-memory filesystem that I can populate with sample (empty) files and directories that os.walk will then return. This should save me the complexity of mocking os.walk calls to simulate recursion. Specifically, the code I want to test is: if recursive: log.debug("Recursively searching for files under %s" % path) for (dir_path, dirs, files) in os.walk(path): log.debug("Found %d files in %s: %s" % (len(files), path, files)) for f in [os.path.join(dir_path, f) for f in files if not re.search(exclude, f)]: yield f else: log.debug("Non-recursively searching for files under %s" % path) for (dir_path, dirs, files) in os.walk(path): log.debug("Found %d files in %s: %s" % (len(files), path, files)) for f in [os.path.join(dir_path, f) for f in files if not re.search(exclude, f)]: yield f Is this possible in python?
No. os.walk() is constructed entirely around os.listdir(), with assistance of os.path.islink() and os.path.isdir(). These are essentially system calls, so you'd have to mock your filesystem at the system level. Unless you want to write a FUSE plugin this is not going to be easy to mock. All os.walk() needs to return is a list of tuples, really. Unless you are testing code that manipulates the dirs component, it couldn't be simpler: with mock.patch('os.walk') as mockwalk: mockwalk.return_value = [ ('/foo', ('bar',), ('baz',)), ('/foo/bar', (), ('spam', 'eggs')), ]
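A sketch of how that mock drives code under test; any call to os.walk() inside the with block sees the fake tree:

import os
import mock

with mock.patch('os.walk') as mockwalk:
    mockwalk.return_value = [
        ('/foo', ('bar',), ('baz',)),
        ('/foo/bar', (), ('spam', 'eggs')),
    ]
    for dir_path, dirs, files in os.walk('/foo'):
        print(dir_path, files)
# ('/foo', ('baz',))
# ('/foo/bar', ('spam', 'eggs'))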
How to go back in PyCharm while browsing code like we have a back button in eclipse?
While browsing code in PyCharm (Community Edition), how can I go back to the previously browsed section? I am looking for Eclipse back-button-type functionality in PyCharm.
In PyCharm, open the View menu and make sure Toolbar is checked; the Back and Forward navigation buttons will then appear in the toolbar. The same actions are also available from the Navigate menu (Navigate → Back / Forward).
python numpy ValueError: operands could not be broadcast together with shapes
In numpy, I have two "arrays", X is (m,n) and y is a vector (n,1) using X*y # or, even np.dot(X,y) I am getting the error ValueError: operands could not be broadcast together with shapes (97,2) (2,1) When (97,2)x(2,1) is clearly a legal matrix operation and should give me a (97,1) vector EDIT: I have corrected this using X.dot(y) but the original question still remains.
We have two arrays: X, shape (97,2) y, shape (2,1) With Numpy arrays the operation X * y is done element-wise, but one or both of the values can be expanded in one or more dimensions to make them compatible. This operation is called broadcasting. Dimensions where the size is 1, or which are missing, can be used in broadcasting. In the example above the dimensions are incompatible, because: 97 2 2 1 Here there are conflicting numbers in the first dimension (97 and 2). That is what the ValueError above is complaining about. The second dimension would be ok, as the number 1 does not conflict with anything. For more information on broadcasting rules: http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html (Please note that if X and y are of type numpy.matrix, then the asterisk can be used as matrix multiplication. My recommendation is to keep away from numpy.matrix, it tends to complicate more than simplify things.) However, your arrays should be fine with numpy.dot, so something else must have happened. If numpy.dot throws an exception, it complains: ValueError: matrices are not aligned If you still get this error, please post a minimal example of the problem. This, for instance, works: import numpy as np np.dot(np.ones((97,2)), np.ones((2,1))) returns a (97,1) array.
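A small shape sketch that contrasts the two operations:

import numpy as np

X = np.ones((97, 2))
y = np.ones((2, 1))

print(X.dot(y).shape)   # (97, 1): matrix multiplication
print((X * y.T).shape)  # (97, 2): elementwise, the (1, 2) row broadcasts over all rows
# X * y raises ValueError: shapes (97, 2) and (2, 1) cannot be broadcast together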
Bulk-fetching emails in the new Gmail API
I'm using the python version of the newly released Gmail API by Google. The following call returns just a list of message ids: service.users().messages().list(userId = 'me').execute() But then I just have a list of message ids and need to iterate over them and fetch them one-by-one. Is there a way to get the whole message content for a list of ids, in a single call ? (Similar to how it's done in the Google Calendar API) ? And if not supported yet, is this something that Google would like to consider adding in the API ? Update Here is the solution that worked for me: batch = BatchHttpRequest() for msg_id in message_ids: batch.add(service.users().messages().get(userId = 'me', id = msg_id['id']), callback = mycallbackfunc) batch.execute()
Here is an example of a batch request in Java where I get all the threads using thread ids. This can be easily adapted for your need. BatchRequest b = service.batch(); //callback function. (Can also define different callbacks for each request, as required) JsonBatchCallback<Thread> bc = new JsonBatchCallback<Thread>() { @Override public void onSuccess(Thread t, HttpHeaders responseHeaders) throws IOException { System.out.println(t.getMessages().get(0).getPayload().getBody().getData()); } @Override public void onFailure(GoogleJsonError e, HttpHeaders responseHeaders) throws IOException { } }; // queuing requests on the batch requests for (Thread thread : threads) { service.users().threads().get("me", thread.getId()).queue(b, bc); } b.execute();
Save the "Out[]" table of a pandas dataframe as a figure
This may seem to be a useless feature but it would be very helpful for me. I would like to save the output I get inside Canopy IDE. I would not think this is specific to Canopy, but for the sake of clarity that is what I use. For example, my console Out[2] is what I would want from this: I think that the formatting is quite nice, and reproducing it each time instead of just saving the output would be a waste of time. So my question is, how can I get a handle on this figure? Ideally the implementation would be similar to standard methods, such that it could be done like this: from matplotlib.backends.backend_pdf import PdfPages pp = PdfPages('Output.pdf') fig = plt.figure() ax = fig.add_subplot(1, 1, 1) df.plot(how='table') pp.savefig() pp.close() NOTE: I realize that a very similar question has been asked before ( How to save the Pandas dataframe/series data as a figure? ) but it never received an answer and I think I have stated the question more clearly.
Here is a somewhat hackish solution but it gets the job done. You wanted a .pdf but you get a bonus .png. :) import numpy as np import pandas as pd from matplotlib.backends.backend_pdf import PdfPages import matplotlib.pyplot as plt from PySide.QtGui import QImage from PySide.QtGui import QPainter from PySide.QtCore import QSize from PySide.QtWebKit import QWebPage arrays = [np.hstack([ ['one']*3, ['two']*3]), ['Dog', 'Bird', 'Cat']*2] columns = pd.MultiIndex.from_arrays(arrays, names=['foo', 'bar']) df =pd.DataFrame(np.zeros((3,6)),columns=columns,index=pd.date_range('20000103',periods=3)) h = "<!DOCTYPE html> <html> <body> <p> " + df.to_html() + " </p> </body> </html>"; page = QWebPage() page.setViewportSize(QSize(5000,5000)) frame = page.mainFrame() frame.setHtml(h, "text/html") img = QImage(1000,700, QImage.Format(5)) painter = QPainter(img) frame.render(painter) painter.end() a = img.save("html.png") pp = PdfPages('html.pdf') fig = plt.figure(figsize=(8,6),dpi=1080) ax = fig.add_subplot(1, 1, 1) img2 = plt.imread("html.png") plt.axis('off') ax.imshow(img2) pp.savefig() pp.close() Edits welcome.
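For simple frames, a lighter-weight sketch that skips Qt entirely and uses matplotlib's own table artist; the rounding and figure size here are arbitrary choices, and it will not reproduce the IPython Out[] styling:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(3, 3), columns=list('ABC'))
fig, ax = plt.subplots(figsize=(6, 2))
ax.axis('off')
ax.table(cellText=df.round(2).values, colLabels=df.columns, loc='center')
fig.savefig('table.pdf')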
Infinite integer in Python
Python 3 has float('inf') and Decimal('Infinity') but no int('inf'). So, why is a number representing integer infinity missing from the language? Is int('inf') unreasonable?
Taken from here: https://www.gnu.org/software/libc/manual/html_node/Infinity-and-NaN.html IEEE 754 floating point numbers can represent positive or negative infinity, and NaN (not a number) That is, the representations of float and Decimal can store these special values. However, there is nothing within the basic type int that can store the same. In languages with fixed-width integers, exceeding the limit of an unsigned 32-bit int simply rolls over to 0 again; Python's int, by contrast, grows without bound, so there is no spare bit pattern left to reserve for infinity. If you want, you could create a class containing an integer which offers the possibility of infinite values.
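Note that in practice float('inf') already compares correctly against ints of any size, which often removes the need for an integer infinity; a quick sketch:

big = 10 ** 100
print(float('inf') > big)    # True
print(-float('inf') < -big)  # True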
How to cache sql alchemy calls with Flask-cache and redis?
I have a flask app that takes parameters from a web form, queries a DB with SQLAlchemy and returns a jinja-generated template showing a table with the results. I want to cache the calls to the DB. I looked into redis (Using redis as an LRU cache for postgres), which led me to http://pythonhosted.org/Flask-Cache/ Now I am trying to use redis + flask-cache to cache the calls to the DB. Based on the Flask-Cache docs it seems like I need to set up a custom redis cache. class RedisCache(BaseCache): def __init__(self, servers, default_timeout=500): pass def redis(app, config, args, kwargs): args.append(app.config['REDIS_SERVERS']) return RedisCache(*args, **kwargs) From there I would need to do something like: cache = redis(app, config={'CACHE_TYPE': 'redis'}) //not sure what to put for args or kwargs? app = Flask(__name__) cache.init_app(app) I have two questions: First: what do I put for args and kwargs? What do these mean? How do I set up a redis cache with flask-cache? Second: once the cache is set up it seems like I would want to somehow "memoize" the calls to the DB so that if the method gets the same query it has the output cached. How do I do this? My best guess would be to wrap the calls to SQLAlchemy in a method that could then be given the memoize decorator. That way if two identical queries were passed to the method, flask-cache would recognize this and return the appropriate response. I'm guessing that it would look like this: @cache.memoize(timeout=50) def queryDB(q): return q.all() This seems like a fairly common use of redis + flask + flask-cache + sql alchemy, but I am unable to find a complete example to follow. If someone could post one, that would be super helpful, both for me and for others down the line.
You don't need to create a custom RedisCache class. The docs are just showing how you would create new backends that are not available in flask-cache. But RedisCache is already available in werkzeug >= 0.7, which you might already have installed because it is one of the core dependencies of flask. This is how I could run flask-cache with the redis backend: import time from flask import Flask from flask.ext.cache import Cache app = Flask(__name__) cache = Cache(app, config={'CACHE_TYPE': 'redis'}) @cache.memoize(timeout=60) def query_db(): time.sleep(5) return "Results from DB" @app.route('/') def index(): return query_db() app.run(debug=True) The reason you're getting "ImportError: redis is not a valid FlaskCache backend" is probably because you don't have redis (the python library) installed, which you can simply install by: pip install redis.
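If your Redis server is not on localhost, Flask-Cache accepts extra config keys for the backend; a sketch with assumed host and port values:

cache = Cache(app, config={
    'CACHE_TYPE': 'redis',
    'CACHE_REDIS_HOST': 'localhost',   # assumed
    'CACHE_REDIS_PORT': 6379,          # assumed
    'CACHE_KEY_PREFIX': 'myapp',       # keeps this app's keys separate in Redis
})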
How can I use functools.singledispatch with instance methods?
Python 3.4 added single-dispatch generic functions, a form of function overloading based on the type of the first argument. This is essentially the example from the documentation: from functools import singledispatch class TestClass(object): @singledispatch def test_method(arg, verbose=False): if verbose: print("Let me just say,", end=" ") print(arg) @test_method.register(int) def _(arg): print("Strength in numbers, eh?", end=" ") print(arg) @test_method.register(list) def _(arg): print("Enumerate this:") for i, elem in enumerate(arg): print(i, elem) if __name__ == '__main__': TestClass.test_method(55555) TestClass.test_method([33, 22, 11]) In its purest form, the singledispatch implementation relies on the first argument to identify type, therefore making it tricky to extend this functionality to instance methods. Does anyone have any advice for how to use (or jerry-rig) this functionality to get it to work with instance methods?
Looking at the source for singledispatch, we can see that the decorator returns a function wrapper(), which selects a function to call from those registered based on the type of args[0] ... def wrapper(*args, **kw): return dispatch(args[0].__class__)(*args, **kw) ... which is fine for a regular function, but not much use for an instance method, whose first argument is always going to be self. We can, however, write a new decorator methdispatch, which relies on singledispatch to do the heavy lifting, but instead returns a wrapper function that selects which registered function to call based on the type of args[1]: from functools import singledispatch, update_wrapper def methdispatch(func): dispatcher = singledispatch(func) def wrapper(*args, **kw): return dispatcher.dispatch(args[1].__class__)(*args, **kw) wrapper.register = dispatcher.register update_wrapper(wrapper, func) return wrapper Here's a simple example of the decorator in use: class Patchwork(object): def __init__(self, **kwargs): for k, v in kwargs.items(): setattr(self, k, v) @methdispatch def get(self, arg): return getattr(self, arg, None) @get.register(list) def _(self, arg): return [self.get(x) for x in arg] Notice that both the decorated get() method and the method registered to list have an initial self argument as usual. Testing the Patchwork class: >>> pw = Patchwork(a=1, b=2, c=3) >>> pw.get("b") 2 >>> pw.get(["a", "c"]) [1, 3]
How do I execute inserts and updates in an Alembic upgrade script?
I need to alter data during an Alembic upgrade. I currently have a 'players' table in a first revision: def upgrade(): op.create_table('player', sa.Column('id', sa.Integer(), nullable=False), sa.Column('name', sa.Unicode(length=200), nullable=False), sa.Column('position', sa.Unicode(length=200), nullable=True), sa.Column('team', sa.Unicode(length=100), nullable=True), sa.PrimaryKeyConstraint('id') ) I want to introduce a 'teams' table. I've created a second revision: def upgrade(): op.create_table('teams', sa.Column('id', sa.Integer(), nullable=False), sa.Column('name', sa.String(length=80), nullable=False) ) op.add_column('players', sa.Column('team_id', sa.Integer(), nullable=False)) I would like the second migration to also add the following data: Populate teams table: INSERT INTO teams (name) SELECT DISTINCT team FROM players; Update players.team_id based on players.team name: UPDATE players AS p JOIN teams AS t SET p.team_id = t.id WHERE p.team = t.name; How do I execute inserts and updates inside the upgrade script?
What you are asking for is a data migration, as opposed to the schema migration that is most prevalent in the Alembic docs. This answer assumes you are using declarative (as opposed to class-Mapper-Table or core) to define your models. It should be relatively straightforward to adapt this to the other forms. Note that Alembic provides some basic data functions: op.bulk_insert() and op.execute(). If the operations are fairly minimal, use those. If the migration requires relationships or other complex interactions, I prefer to use the full power of models and sessions as described below. The following is an example migration script that sets up some declarative models that will be used to manipulate data in a session. The key points are: Define the basic models you need, with the columns you'll need. You don't need every column, just the primary key and the ones you'll be using. Within the upgrade function, use op.get_bind() to get the current connection, and make a session with it. Use the models and session as you normally would in your application. """create teams table Revision ID: 169ad57156f0 Revises: 29b4c2bfce6d Create Date: 2014-06-25 09:00:06.784170 """ revision = '169ad57156f0' down_revision = '29b4c2bfce6d' from alembic import op from flask_sqlalchemy import _SessionSignalEvents import sqlalchemy as sa from sqlalchemy import event from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker, Session as BaseSession, relationship Session = sessionmaker() event.remove(BaseSession, 'before_commit', _SessionSignalEvents.session_signal_before_commit) event.remove(BaseSession, 'after_commit', _SessionSignalEvents.session_signal_after_commit) event.remove(BaseSession, 'after_rollback', _SessionSignalEvents.session_signal_after_rollback) Base = declarative_base() class Player(Base): __tablename__ = 'players' id = sa.Column(sa.Integer, primary_key=True) name = sa.Column(sa.String, nullable=False) team_name = sa.Column('team', sa.String, nullable=False) team_id = sa.Column(sa.Integer, sa.ForeignKey('teams.id'), nullable=False) team = relationship('Team', backref='players') class Team(Base): __tablename__ = 'teams' id = sa.Column(sa.Integer, primary_key=True) name = sa.Column(sa.String, nullable=False, unique=True) def upgrade(): bind = op.get_bind() session = Session(bind=bind) # create the teams table and the players.team_id column Team.__table__.create(bind) op.add_column('players', sa.Column('team_id', sa.ForeignKey('teams.id'), nullable=False)) # create teams for each team name teams = {name: Team(name=name) for name, in session.query(Player.team_name).distinct()} session.add_all(teams.values()) # set player team based on team name for player in session.query(Player): player.team = teams[player.team_name] session.commit() # don't need team name now that team relationship is set op.drop_column('players', 'team') def downgrade(): bind = op.get_bind() session = Session(bind=bind) # re-add the players.team column op.add_column('players', sa.Column('team', sa.String, nullable=False)) # set players.team based on team relationship for player in session.query(Player): player.team_name = player.team.name session.commit() op.drop_column('players', 'team_id') op.drop_table('teams') The event.remove lines are to deal with Flask-SQLAlchemy. The extension adds some events to the session, but since the migration happens outside of the app context, these events produce exceptions. 
If you are not using that extension or are using a version >=2.0 (not released yet), you don't need those lines (or the related imports).
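If you don't need relationships, the same migration can also be written with the minimal data functions mentioned above; a sketch using plain SQL through op.execute() (the correlated-subquery UPDATE assumes a backend that supports it, such as PostgreSQL or SQLite):

from alembic import op

def upgrade():
    # ... after creating the teams table and the players.team_id column:
    op.execute("INSERT INTO teams (name) SELECT DISTINCT team FROM players")
    op.execute(
        "UPDATE players SET team_id = "
        "(SELECT id FROM teams WHERE teams.name = players.team)"
    )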
Sklearn SGDClassifier partial fit
I'm trying to use SGD to classify a large dataset. As the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. I have selected a sample of the dataset (100,000 rows) that fits into memory to test fit vs. partial_fit: from sklearn.linear_model import SGDClassifier def batches(l, n): for i in xrange(0, len(l), n): yield l[i:i+n] clf1 = SGDClassifier(shuffle=True, loss='log') clf1.fit(X, Y) clf2 = SGDClassifier(shuffle=True, loss='log') n_iter = 60 for n in range(n_iter): for batch in batches(range(len(X)), 10000): clf2.partial_fit(X[batch[0]:batch[-1]+1], Y[batch[0]:batch[-1]+1], classes=numpy.unique(Y)) I then test both classifiers with an identical test set. In the first case I get an accuracy of 100%. As I understand it, SGD by default passes 5 times over the training data (n_iter = 5). In the second case, I have to pass 60 times over the data to reach the same accuracy. Why this difference (5 vs. 60)? Or am I doing something wrong?
I have finally found the answer. You need to shuffle the training data between each iteration, as setting shuffle=True when instantiating the model will NOT shuffle the data when using partial_fit (it only applies to fit). Note: it would have been helpful to find this information on the sklearn.linear_model.SGDClassifier page. The amended code reads as follows: from sklearn.linear_model import SGDClassifier import random clf2 = SGDClassifier(loss='log') # shuffle=True is useless here shuffledRange = range(len(X)) n_iter = 5 for n in range(n_iter): random.shuffle(shuffledRange) shuffledX = [X[i] for i in shuffledRange] shuffledY = [Y[i] for i in shuffledRange] for batch in batches(range(len(shuffledX)), 10000): clf2.partial_fit(shuffledX[batch[0]:batch[-1]+1], shuffledY[batch[0]:batch[-1]+1], classes=numpy.unique(Y))
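Equivalently, scikit-learn ships a shuffle helper that returns shuffled copies of several arrays in unison, which tidies up the manual index shuffling; a sketch reusing the names from the code above:

from sklearn.utils import shuffle

for n in range(n_iter):
    shuffledX, shuffledY = shuffle(X, Y)
    for batch in batches(range(len(shuffledX)), 10000):
        clf2.partial_fit(shuffledX[batch[0]:batch[-1]+1],
                         shuffledY[batch[0]:batch[-1]+1],
                         classes=numpy.unique(Y))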
How to print to console in Py Test?
I'm trying to use Test-Driven Development with the pytest module. pytest will not print to the console when I write print. I use py.test my_tests.py to run it... The documentation seems to say that it should work by default: http://pytest.org/latest/capture.html But: import myapplication as tum class TestBlogger: @classmethod def setup_class(self): self.user = "alice" self.b = tum.Blogger(self.user) print "This should be printed, but it won't be!" def test_inherit(self): assert issubclass(tum.Blogger, tum.Site) links = self.b.get_links(posts) print len(links) # This won't print either. Nothing gets printed to my standard output console (just the normal progress and how many tests passed/failed). And the script that I'm testing contains print: class Blogger(Site): def get_links(self, posts): print len(posts) # It won't get printed in the test. In the unittest module, everything gets printed by default, which is exactly what I need. However, I wish to use pytest for other reasons. It seems like such basic functionality that perhaps I'm missing it!? Does anyone know how to make the print statements get shown?
By default, py.test captures the result of standard out so that it can control how it prints it out. If it didn't do this, it would spew out a lot of text without the context of what test printed that text. However, if a test fails, it will include a section in the resulting report that shows what was printed to standard out in that particular test. For example, def test_good(): for i in range(1000): print(i) def test_bad(): print('this should fail!') assert False Results in the following output: >>> py.test tmp.py ============================= test session starts ============================== platform darwin -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 plugins: cache, cov, pep8, xdist collected 2 items tmp.py .F =================================== FAILURES =================================== ___________________________________ test_bad ___________________________________ def test_bad(): print('this should fail!') > assert False E assert False tmp.py:7: AssertionError ------------------------------- Captured stdout -------------------------------- this should fail! ====================== 1 failed, 1 passed in 0.04 seconds ====================== Note the Captured stdout section. If you would like to see print statements as they are executed, you can pass the -s flag to py.test. However, note that this can sometimes be difficult to parse. >>> py.test tmp.py -s ============================= test session starts ============================== platform darwin -- Python 2.7.6 -- py-1.4.20 -- pytest-2.5.2 plugins: cache, cov, pep8, xdist collected 2 items tmp.py 0 1 2 3 ... and so on ... 997 998 999 .this should fail! F =================================== FAILURES =================================== ___________________________________ test_bad ___________________________________ def test_bad(): print('this should fail!') > assert False E assert False tmp.py:7: AssertionError ====================== 1 failed, 1 passed in 0.02 seconds ======================
Fatal error in launcher: Unable to create process using ""C:\Program Files (x86)\Python33\python.exe" "C:\Program Files (x86)\Python33\pip.exe""
Searching the net, this seems to be a problem caused by spaces in the Python installation path. How do I get pip to work without having to reinstall everything in a path without spaces?
It seems that python -m pip install XXX will work anyway (it worked for me). Running pip through the interpreter bypasses the broken pip.exe launcher, which cannot cope with the space in the installation path.
gunicorn.errors.HaltServer:
I've installed gunicorn inside my virtualenv. From this directory: manage.py /onbytes/wsgi.py I run the following: gunicorn onbytes.wsgi:application And I get the following error: Traceback (most recent call last): File "/home/ymorin007/.virtualenvs/onbytes.com/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run self.manage_workers() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers self.spawn_workers() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 537, in spawn_workers time.sleep(0.1 * random.random()) File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld self.reap_workers() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers raise HaltServer(reason, self.WORKER_BOOT_ERROR) gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> running gunicorn onbytes.wsgi:application --preload will get me this error: Traceback (most recent call last): File "/home/ymorin007/.virtualenvs/onbytes.com/bin/gunicorn", line 9, in <module> load_entry_point('gunicorn==19.0.0', 'console_scripts', 'gunicorn')() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 166, in run super(Application, self).run() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run Arbiter(self).run() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 57, in __init__ self.setup(app) File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 113, in setup self.app.wsgi() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 66, in wsgi self.callable = self.load() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load return self.load_wsgiapp() File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp return util.import_app(self.app_uri) File "/home/ymorin007/.virtualenvs/onbytes.com/local/lib/python2.7/site-packages/gunicorn/util.py", line 356, in import_app __import__(module) File "/home/ymorin007/sites/onbytes.com/src/onbytes/wsgi.py", line 8, in <module> from django.core.wsgi import get_wsgi_application ImportError: No module named django.core.wsgi
Probably there is an issue in your application, not in gunicorn. Try: gunicorn --log-file=- onbytes.wsgi:application Since version R19, Gunicorn doesn't log to the console by default, and the --debug option was deprecated.
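As a hedged aside (not part of the original answer): the --preload traceback ends in ImportError: No module named django.core.wsgi, which usually means the interpreter gunicorn runs under cannot see Django. A quick check, run from the same virtualenv that launches gunicorn:

which gunicorn    # should point into the virtualenv's bin/ directory
python -c "import django; print(django.get_version())"
pip install django    # only if the import above fails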
How to print a dataframe without the index
I want to print the whole dataframe, but I don't want to print the index. Besides, one column is datetime type, and I just want to print the time, not the date. The dataframe looks like: User ID Enter Time Activity Number 0 123 2014-07-08 00:09:00 1411 1 123 2014-07-08 00:18:00 893 2 123 2014-07-08 00:49:00 1041 I want it to print as: User ID Enter Time Activity Number 123 00:09:00 1411 123 00:18:00 893 123 00:49:00 1041
print df.to_string(index=False)
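The one-liner above handles the index; for the time-only display the question also asks about, here is a hedged sketch (assuming the Enter Time column is datetime64 dtype, as it appears to be):

import pandas as pd

df = pd.DataFrame({
    'User ID': [123, 123, 123],
    'Enter Time': pd.to_datetime(['2014-07-08 00:09:00',
                                  '2014-07-08 00:18:00',
                                  '2014-07-08 00:49:00']),
    'Activity Number': [1411, 893, 1041],
}, columns=['User ID', 'Enter Time', 'Activity Number'])

# keep only the time-of-day portion, then print without the index
df['Enter Time'] = df['Enter Time'].apply(lambda ts: ts.strftime('%H:%M:%S'))
print(df.to_string(index=False))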
pandas dataframe columns scaling with sklearn
I have a pandas dataframe with mixed type columns, and I'd like to apply sklearn's min_max_scaler to some of the columns. Ideally, I'd like to do these transformations in place, but haven't figured out a way to do that yet. I've written the following code that works: import pandas as pd import numpy as np from sklearn import preprocessing scaler = preprocessing.MinMaxScaler() dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']}) min_max_scaler = preprocessing.MinMaxScaler() def scaleColumns(df, cols_to_scale): for col in cols_to_scale: df[col] = pd.DataFrame(min_max_scaler.fit_transform(pd.DataFrame(dfTest[col])),columns=[col]) return df dfTest A B C 0 14.00 103.02 big 1 90.20 107.26 small 2 90.95 110.35 big 3 96.27 114.23 small 4 91.21 114.68 small scaled_df = scaleColumns(dfTest,['A','B']) scaled_df A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small I'm curious if this is the preferred/most efficient way to do this transformation. Is there a way I could use df.apply that would be better? I'm also surprised I can't get the following code to work: bad_output = min_max_scaler.fit_transform(dfTest['A']) If I pass an entire dataframe to the scaler it works: dfTest2 = dfTest.drop('C', axis = 1) good_output = min_max_scaler.fit_transform(dfTest2) good_output I'm confused why passing a series to the scaler fails. In my full working code above I had hoped to just pass a series to the scaler then set the dataframe column = to the scaled series. I've seen this question asked a few other places, but haven't found a good answer. Any help understanding what's going on here would be greatly appreciated!
Like this? dfTest = pd.DataFrame({ 'A':[14.00,90.20,90.95,96.27,91.21], 'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small'] }) dfTest[['A','B']] = dfTest[['A','B']].apply( lambda x: MinMaxScaler().fit_transform(x)) dfTest A B C 0 0.000000 0.000000 big 1 0.926219 0.363636 small 2 0.935335 0.628645 big 3 1.000000 0.961407 small 4 0.938495 1.000000 small
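A hedged aside for later library versions (not part of the original answer): recent scikit-learn releases reject 1-D input to fit_transform, which breaks the column-by-column apply above; passing the 2-D frame slice in a single call sidesteps that:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

dfTest = pd.DataFrame({
    'A': [14.00, 90.20, 90.95, 96.27, 91.21],
    'B': [103.02, 107.26, 110.35, 114.23, 114.68],
    'C': ['big', 'small', 'big', 'small', 'small'],
})

# fit_transform sees a 2-D array, so no 1-D deprecation warning or error
dfTest[['A', 'B']] = MinMaxScaler().fit_transform(dfTest[['A', 'B']])
print(dfTest)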
ImportError: No module named request
I am trying to install the Python SpeechRecognition package on my machine. When I try to install it with pip install SpeechRecognition, I get the following error: import json, urllib.request ImportError: No module named request I then installed requests with pip install requests, but I get "Requirement already satisfied", and I am still unable to install SpeechRecognition. Please let me know what mistake I am making. Thanks in advance.
The SpeechRecognition library requires Python 3.3 or up: Requirements [...] The first software requirement is Python 3.3 or better. This is required to use the library. and from the Trove classifiers: Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.3 Programming Language :: Python :: 3.4 The urllib.request module is part of the Python 3 standard library; in Python 2 you'd use urllib2 here.
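For reference, a hedged sketch of the compatibility shim libraries use when they do support both major versions (SpeechRecognition itself does not; it simply requires Python 3.3+):

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

# either way: data = urlopen('http://example.com').read()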
Necessity of explicit cursor.close()
From time to time, I'm executing raw queries using connection.cursor() instead of using the ORM (since it is definitely not a silver bullet). I've noticed that in several places I don't call an explicit cursor.close() after I'm done with the database. So far, this hasn't resulted in any errors or performance issues. I'm wondering what kind of problems I could possibly have without closing the cursor explicitly; what can go wrong? As far as I understand, connection and cursor in Django follow the "Python Database API Specification v2.0" (PEP 249), and according to it, the cursor would be automatically closed whenever the __del__() method is called. I guess the question could also be: is there a use case when it is not called? FYI, I'm using Python 2.7 and Django 1.6.5.
Django's cursor class is just a wrapper around the underlying DB's cursor, so the effect of leaving the cursor open is basically tied to the underlying DB driver. According to the FAQ of psycopg2 (the DB driver Django uses for PostgreSQL), its cursors are lightweight, but will cache the data returned from queries you made using the cursor object, which could potentially waste memory: Cursors are lightweight objects and creating lots of them should not pose any kind of problem. But note that cursors used to fetch result sets will cache the data and use memory in proportion to the result set size. Our suggestion is to almost always create a new cursor and dispose old ones as soon as the data is not required anymore (call close() on them.) The only exception are tight loops where one usually use the same cursor for a whole bunch of INSERTs or UPDATEs. Django uses MySQLdb as the backend for MySQL, which has several different types of cursors, including some that actually store their result sets on the server side. The MySQLdb documentation for Cursor.close makes a point of noting that it's very important to close server-side cursors when you're done with them: If you are using server-side cursors, it is very important to close the cursor when you are done with it and before creating a new one. However, this isn't relevant for Django, because it uses the default Cursor class provided by MySQLdb, which stores results on the client side. Leaving a used cursor open just risks wasting the memory used by the stored result set, just like with psycopg2. The close method on the cursor just deletes the internal reference to the db connection and exhausts the stored result set: def close(self): """Close the cursor. No further queries will be possible.""" if not self.connection: return while self.nextset(): pass self.connection = None As best as I can tell from looking at their source, the remaining backends used by Django (cx_oracle, sqlite3/pysqlite2) all follow the same pattern: free memory by deleting/resetting stored results/object references. The sqlite3 docs don't even mention that the Cursor class has a close method, and it's only used sporadically in the included example code. You are right that a cursor will be closed when __del__() is called on the cursor object, so the need to explicitly close is only an issue if you're keeping a long-lived reference to the cursor, e.g. a self.cursor object that you're keeping as an instance attribute of a class.
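If you do want the explicit close without boilerplate, a hedged sketch using the stdlib contextlib.closing helper (run inside a configured Django project; this guarantees close() runs even if the query raises):

from contextlib import closing
from django.db import connection

with closing(connection.cursor()) as cursor:
    cursor.execute("SELECT 1")
    row = cursor.fetchone()   # cursor.close() runs on block exit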
Python import csv to list
I have a CSV file with about 2000 records. Each record has a string and a category: This is the first line, Line1 This is the second line, Line2 This is the third line, Line3 I need to read this file into a list that looks like this: List = [('This is the first line', 'Line1'), ('This is the second line', 'Line2'), ('This is the third line', 'Line3')] How can I import this csv into the list I need using Python?
Use the csv module: import csv with open('file.csv', 'rb') as f: reader = csv.reader(f) your_list = list(reader) print your_list # [['This is the first line', 'Line1'], # ['This is the second line', 'Line2'], # ['This is the third line', 'Line3']] If you need tuples: import csv with open('test.csv', 'rb') as f: reader = csv.reader(f) your_list = map(tuple, reader) print your_list # [('This is the first line', ' Line1'), # ('This is the second line', ' Line2'), # ('This is the third line', ' Line3')] Python 3.x version (by @seokhoonlee below) import csv with open('file.csv', 'r') as f: reader = csv.reader(f) your_list = list(reader) print(your_list) # [['This is the first line', 'Line1'], # ['This is the second line', 'Line2'], # ['This is the third line', 'Line3']]
Why can I use the same name for iterator and sequence in a Python for loop?
This is more of a conceptual question. I recently saw a piece of code in Python (it worked in 2.7, and it might also have been run in 2.5 as well) in which a for loop used the same name for both the list that was being iterated over and the item in the list, which strikes me as both bad practice and something that should not work at all. For example: x = [1,2,3,4,5] for x in x: print x print x Yields: 1 2 3 4 5 5 Now, it makes sense to me that the last value printed would be the last value assigned to x from the loop, but I fail to understand why you'd be able to use the same variable name for both your parts of the for loop and have it function as intended. Are they in different scopes? What's going on under the hood that allows something like this to work?
What does dis tell us: Python 3.4.1 (default, May 19 2014, 13:10:29) [GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from dis import dis >>> dis("""x = [1,2,3,4,5] ... for x in x: ... print(x) ... print(x)""") 1 0 LOAD_CONST 0 (1) 3 LOAD_CONST 1 (2) 6 LOAD_CONST 2 (3) 9 LOAD_CONST 3 (4) 12 LOAD_CONST 4 (5) 15 BUILD_LIST 5 18 STORE_NAME 0 (x) 2 21 SETUP_LOOP 24 (to 48) 24 LOAD_NAME 0 (x) 27 GET_ITER >> 28 FOR_ITER 16 (to 47) 31 STORE_NAME 0 (x) 3 34 LOAD_NAME 1 (print) 37 LOAD_NAME 0 (x) 40 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 43 POP_TOP 44 JUMP_ABSOLUTE 28 >> 47 POP_BLOCK 4 >> 48 LOAD_NAME 1 (print) 51 LOAD_NAME 0 (x) 54 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 57 POP_TOP 58 LOAD_CONST 5 (None) 61 RETURN_VALUE The key bits are sections 2 and 3 - we load the value out of x (24 LOAD_NAME 0 (x)) and then we get its iterator (27 GET_ITER) and start iterating over it (28 FOR_ITER). Python never goes back to load the iterator again. Aside: It wouldn't make any sense to do so, since it already has the iterator, and as Abhijit points out in his answer, Section 7.3 of Python's specification actually requires this behavior). When the name x gets overwritten to point at each value inside of the list formerly known as x Python doesn't have any problems finding the iterator because it never needs to look at the name x again to finish the iteration protocol.
Is it good practice to yield from within a context manager?
I recently wrote a method which returned a sequence of open files; in other words, something like this: # this is very much simplified, of course # the actual code returns file-like objects, not necessarily files def _iterdir(self, *path): dr = os.path.join(*path) paths = imap(lambda fn: os.path.join(dr, fn), os.listdir(dr)) return imap(open, paths) Syntactically, I do not expect to have to close the resulting objects if I do something like: for f in _iterdir('/', 'usr'): make_unicorns_from(f) # ! f.close() As a result, I decided to wrap _iterdir in a context manager: def iterdir(self, *path): it = self._iterdir(*path) while 1: with it.next() as f: yield f This appears to be working correctly. What I'm interested in is whether doing this is good practice. Will I run into any issues following this pattern (perhaps if exceptions are thrown)?
There are two problems I see. One is that if you try to use more than one file at a time, things break: list(iterdir('/', 'usr')) # Doesn't work; they're all closed. The second is unlikely to happen in CPython, but if you have a reference cycle, or if your code is ever run on a different Python implementation, the problem can manifest. If an exception happens in make_unicorns_from(f): for f in iterdir('/', 'usr'): make_unicorns_from(f) # Uh oh, not enough biomass. The file you were using won't be closed until the generator is garbage-collected. At that point, the generator's close method will be called, throwing a GeneratorExit exception at the point of the last yield, and the exception will cause the context manager to close the file. With CPython's reference counting, this usually happens immediately. However, on a non-reference-counted implementation or in the presence of a reference cycle, the generator might not be collected until a cycle-detecting GC pass is run. This could take a while. My gut says to leave closing the files to the caller. You can do for f in _iterdir('/', 'usr'): with f: make_unicorns_from(f) and they'll all be closed promptly, even without a with in the generator, and even if an exception is thrown. I don't know whether or not this is actually a better idea than having the generator take charge of closing the files.
Is there an equivalent to the "for ... else" Python loop in C++?
Python has an interesting for statement which lets you specify an else clause. In a construct like this one: for i in foo: if bar(i): break else: baz() the else clause is executed after the for, but only if the for terminates normally (not by a break). I wondered if there was an equivalent in C++? Can I use for ... else?
A simpler way to express your actual logic is with std::none_of: if (std::none_of(std::begin(foo), std::end(foo), bar)) baz(); If the range proposal for C++17 gets accepted, hopefully this will simplify to if (std::none_of(foo, bar)) baz();.
Python ftplib Optimal Block Size?
I'm using python's ftplib to transfer lots and lots of data (~100 files X 2GB) across a local network to an FTP server. This code is running on Ubuntu. Here is my call (self is my FtpClient object, which is a wrapper around ftplib client): # Store file. self.ftpClient.storbinary('STOR ' + destination, fileHandle, blocksize = self.blockSize, callback = self.__UpdateFileTransferProgress) My question is, how do I choose an optimal block size? My understanding is that the optimal block size is dependent on a number of things, not the least of which are connection speed and latency. My code will be running on many different networks with different speeds and varying amounts of congestion throughout the day. Ideally, I would like to compute the optimal block size at run time. Would the optimal FTP transfer block size be the same as the optimal TCP window size? If this is true, and TCP window scaling is turned on, is there a way to get the optimal TCP window size from the kernel? How/when does the linux kernel determine optimal window size? Ideally I could ask the linux kernel for the optimal block size, so as to avoid reinventing the wheel.
This is an interesting question and I had to dive in a bit deeper ;) Anyway, here is a good example of how to determine the MTU: http://erlerobotics.gitbooks.io/erle-robotics-python-gitbook-free/content/udp_and_tcp/udp_fragmentation.html But you should also think about the following: the MTU is a local phenomenon and may apply to only a part of your local network. What you are thinking about is the Path MTU, the minimal MTU over the complete transport path: http://en.wikipedia.org/wiki/Path_MTU_Discovery So you'd have to know the MTU of every involved component. This can be a problem: for example, if you're using Jumbo Frames and a switch is not, the switch has to split the frames. I have already had the problem that a switch did not understand jumbo frames and dropped them. Now the most interesting question: the optimal blocksize. A lot of Python functions take arguments like blocksize or chunksize, but they don't address the block size of the underlying transport protocol. The blocksize defines a read buffer that will contain the data to be sent/read. The standard size in ftplib is 8K (8192 bytes). So adjusting the blocksize should not really affect the speed of the transfer; controlling the MTU of the underlying transport protocol is handled by the operating system and its kernel. Finally, some words about FTP. FTP is an old dinosaur which is easy to set up and use but is also not always the best method to transfer files, especially if you transfer a lot of small files. I don't know your exact use case, so thinking about other transfer protocols like rsync or bbcp could make sense; the latter seems to increase the copy speed drastically. You really should have a look at http://moo.nac.uci.edu/~hjm/HOWTO_move_data.html Just my two cents...
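If you still want to probe the blocksize empirically, here is a hedged micro-benchmark sketch (the host, credentials and file name are hypothetical; storbinary's signature is from the stdlib ftplib docs):

import time
from ftplib import FTP

def time_upload(path, blocksize):
    ftp = FTP('ftp.example.com')       # hypothetical server
    ftp.login('user', 'password')      # hypothetical credentials
    with open(path, 'rb') as fh:
        t0 = time.time()
        ftp.storbinary('STOR test.bin', fh, blocksize=blocksize)
    elapsed = time.time() - t0
    ftp.quit()
    return elapsed

for bs in (8192, 65536, 1048576):
    print(bs, time_upload('test.bin', bs))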
random forest with categorical features in sklearn
Say I have a categorical feature, color, which takes the values ['red', 'blue', 'green', 'orange'], and I want to use it to predict something in a random forest. If I one-hot encode it (i.e. I change it to four dummy variables), how do I tell sklearn that the four dummy variables are really one variable? Specifically, when sklearn is randomly selecting features to use at different nodes, it should either include the red, blue, green and orange dummies together, or it shouldn't include any of them. I've heard that there's no way to do this, but I'd imagine there must be a way to deal with categorical variables without arbitrarily coding them as numbers or something like that.
No, there isn't. Somebody's working on this and the patch might be merged into mainline some day, but right now there's no support for categorical variables in scikit-learn except dummy (one-hot) encoding.
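For completeness, a hedged sketch of the dummy-encoding workaround the question itself mentions (frame and column names are made up for illustration):

import pandas as pd

df = pd.DataFrame({'color': ['red', 'blue', 'green', 'orange']})

# one 0/1 indicator column per category level
dummies = pd.get_dummies(df['color'], prefix='color')
df = df.drop('color', axis=1).join(dummies)
print(df.columns.tolist())
# ['color_blue', 'color_green', 'color_orange', 'color_red']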
I don't understand python MANIFEST.in
Reading this: Python Distribute it tells me to include doc/txt files and .py files are excluded in MANIFEST.in file Reading this It tells me only sdist uses MANIFEST.in and only includes file you specify and to include .py files. It also tells me to use: python setup.py sdist --manifest-only to generate a MANIFEST, but python tells me this doesn't exist I appreciate these are from different versions of python and the distribution system is in a complete mess, but assuming I am using python 3 and setuptools (the new one that includes distribute but now called setuptools, not the old setuptools that was deprecated for distribute tools only to be brought back into distribute and distribute renamed to setuptools.....) and I'm following the 'standard' folder structure and setup.py file, Do I need a MANIFEST.in ? What should be in it ? When will all these different package systems and methods be made into one single simple process ?
Re: "Do I need a MANIFEST.in? No, you do not have to use MANIFEST.in. Both, distutils and setuptools are including in source distribution package all the files mentioned in setup.py - modules, package python files, README.txt and test/test*.py. If this is all you want to have in distribution package, you do not have to use MANIFEST.in. If you want to manipulate (add or remove) default files to include, you have to use MANIFEST.in. Re: What should be in it? The procedure is simple: Make sure, in your setup.py you include (by means of setup arguments) all the files you feel important for the program to run (modules, packages, scripts ...) Clarify, if there are some files to add or some files to exclude. If neither is needed, then there is no need for using MANIFEST.in. If MANIFEST.in is needed, create it. Usually, you add there tests*/*.py files, README.rst if you do not use README.txt, docs files and possibly some data files for test suite, if necessary. For example: include README.rst include COPYING.txt To test it, run python setup.py sdist, and examine the tarball created under dist/. When will all these different package systems ... Comparing the situation today and 2 years ago - the situation is much much better - setuptools is the way to go. You can ignore the fact, distutils is a bit broken and is low level base for setuptools as setuptools shall take care of hiding these things from you. EDIT: Last few projects I use pbr for building distribution packages with three line setup.py and rest being in setup.cfg and requirements.txt. No need to care about MANIFEST.in and other strange stuff. Even though the package would deserve a bit more documentation. See http://docs.openstack.org/developer/pbr/
Python: What does the slash mean in the output of help(range)?
What does the / mean in Python 3.4's help output for range before the closing parenthesis? >>> help(range) Help on class range in module builtins: class range(object) | range(stop) -> range object | range(start, stop[, step]) -> range object | | Return a virtual sequence of numbers from start to stop by step. | | Methods defined here: | | __contains__(self, key, /) | Return key in self. | | __eq__(self, value, /) | Return self==value. ...
It signifies the end of the positional only parameters, parameters you cannot use as keyword parameters. Such parameters can only be specified in the C API. It means the key argument to __contains__ can only be passed in by position (range(5).__contains__(3)), not as a keyword argument (range(5).__contains__(key=3)), something you can do with positional arguments in pure-python functions. Also see the Argument Clinic documentation: To mark all parameters as positional-only in Argument Clinic, add a / on a line by itself after the last parameter, indented the same as the parameter lines. The syntax has also been defined for possible future inclusion in Python, see PEP 457 - Syntax For Positional-Only Parameters. At the moment the PEP acts as a reservation on the syntax, there are no actual plans to implement it as such.
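A quick interactive illustration of the restriction described above (the exact TypeError wording varies across CPython versions):

>>> range(5).__contains__(3)       # passed by position: allowed
True
>>> range(5).__contains__(key=3)   # passed by keyword: rejected
Traceback (most recent call last):
  ...
TypeError: ... takes no keyword arguments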
Installing h5py on an Ubuntu server
I was installing h5py on an Ubuntu server. However it seems to return an error that h5py.h is not found. It gives the same error message when I install it using pip or the setup.py file. What am I missing here? I have Numpy version 1.8.1, which higher than the required version of 1.6 or above. The complete output is as follows: van@Hulk:~/h5py-2.3.1⟫ sudo python setup.py install libhdf5.so: cannot open shared object file: No such file or directory HDF5 autodetection failed; building for 1.8.4+ running install running bdist_egg running egg_info writing h5py.egg-info/PKG-INFO writing top-level names to h5py.egg-info/top_level.txt writing dependency_links to h5py.egg-info/dependency_links.txt reading manifest file 'h5py.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.c' under directory 'win_include' warning: no files found matching '*.h' under directory 'win_include' writing manifest file 'h5py.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py creating build creating build/lib.linux-x86_64-2.7 creating build/lib.linux-x86_64-2.7/h5py copying h5py/ipy_completer.py -> build/lib.linux-x86_64-2.7/h5py copying h5py/__init__.py -> build/lib.linux-x86_64-2.7/h5py copying h5py/version.py -> build/lib.linux-x86_64-2.7/h5py copying h5py/highlevel.py -> build/lib.linux-x86_64-2.7/h5py creating build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/group.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/files.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/selections.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/__init__.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/filters.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/base.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/dims.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/datatype.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/dataset.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/selections2.py -> build/lib.linux-x86_64-2.7/h5py/_hl copying h5py/_hl/attrs.py -> build/lib.linux-x86_64-2.7/h5py/_hl creating build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_selections.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_group.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_h5.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_attrs.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_objects.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_slicing.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_h5t.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_datatype.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/__init__.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_dimension_scales.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_base.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_dataset.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_file.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_h5p.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_attrs_data.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/common.py -> build/lib.linux-x86_64-2.7/h5py/tests copying h5py/tests/test_h5f.py -> build/lib.linux-x86_64-2.7/h5py/tests running build_ext skipping 'h5py/defs.c' Cython 
extension (up-to-date) building 'h5py.defs' extension creating build/temp.linux-x86_64-2.7 creating build/temp.linux-x86_64-2.7/h5py x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DH5_USE_16_API -I/home/govinda/h5py-2.3.1/lzf -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c h5py/defs.c -o build/temp.linux-x86_64-2.7/h5py/defs.o In file included from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/arrayobject.h:4, from h5py/api_compat.h:26, from h5py/defs.c:342: /usr/lib/python2.7/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ In file included from h5py/defs.c:342:0: h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory #include "hdf5.h" ^ compilation terminated. error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
You need to install libhdf5-dev to get the required header files. Just run sudo apt-get install libhdf5-dev and it should install it and its dependencies automatically. Don't worry about the NumPy warning, it just means that the package developers are using an old version of the API, but everything will still work.
Python / Pillow: How to scale an image
Suppose I have an image which is 2322px x 4128px. How do I scale it so that both the width and height are less than 1028px? I won't be able to use Image.resize (http://pillow.readthedocs.org/en/latest/reference/Image.html#PIL.Image.size) since that requires me to give both the new width and height. What I plan to do is (pseudo code): if (image.width or image.height) > 1028: if image.width > image.height: image.scale(make width of image 1028) #since the height is less than the width and I am scaling the image and making the width less than 1028px, #the height will surely be less than 1028px else: #image's height is greater than its width image.scale(make height of image 1028) I am guessing I need to use Image.thumbnail but according to this example (http://pillow.readthedocs.org/en/latest/reference/Image.html#create-thumbnails) and this answer (How do I resize an image using PIL and maintain its aspect ratio?), both the width and the height are provided in order to create the thumbnail. Is there any function which takes either the new width or the new height (not both) and scales the entire image?
No need to reinvent the wheel; the Image.thumbnail method is available for this: maxsize = (1028, 1028) image.thumbnail(maxsize, PIL.Image.ANTIALIAS) This ensures the resulting size is not bigger than the given bounds while maintaining the aspect ratio. Specifying PIL.Image.ANTIALIAS applies a high-quality downsampling filter for a better resize result; you probably want that too.
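A minimal runnable sketch (file names are hypothetical; note that in Pillow 10+ the ANTIALIAS constant was removed, so there you would use Image.LANCZOS instead):

from PIL import Image

im = Image.open('input.jpg')                 # e.g. a 2322px x 4128px photo
im.thumbnail((1028, 1028), Image.ANTIALIAS)  # scales in place, keeps aspect ratio
im.save('output.jpg')
print(im.size)                               # both dimensions are now <= 1028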
Python "raise from" usage
What's the difference between raise and raise from in Python? try: raise ValueError except Exception as e: raise IndexError which yields Traceback (most recent call last): File "tmp.py", line 2, in <module> raise ValueError ValueError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "tmp.py", line 4, in <module> raise IndexError IndexError and try: raise ValueError except Exception as e: raise IndexError from e which yields Traceback (most recent call last): File "tmp.py", line 2, in <module> raise ValueError ValueError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "tmp.py", line 4, in <module> raise IndexError from e IndexError
The difference is that when you use from, the __cause__ attribute is set and the message states that the exception was directly caused by the earlier one. If you omit the from then no __cause__ is set, but the __context__ attribute may be set as well, and the traceback then reports that during handling of the first exception, another one occurred. Setting the __context__ happens if you used raise in an exception handler; if you used raise anywhere else no __context__ is set either. If a __cause__ is set, a __suppress_context__ = True flag is also set on the exception; when __suppress_context__ is set to True, the __context__ is ignored when printing a traceback. When raising from an exception handler where you don't want to show the context (where you don't want a "during handling of the above exception, another exception occurred" message), use raise ... from None to set __suppress_context__ to True. In other words, Python sets a context on exceptions so you can introspect where an exception was raised, letting you see if another exception was replaced by it. You can also add a cause to an exception, making the traceback explicit about the other exception (using different wording), and the context is ignored (but can still be introspected when debugging). Using raise ... from None lets you suppress the context being printed. See the raise statement documentation: The from clause is used for exception chaining: if given, the second expression must be another exception class or instance, which will then be attached to the raised exception as the __cause__ attribute (which is writable). If the raised exception is not handled, both exceptions will be printed: >>> try: ... print(1 / 0) ... except Exception as exc: ... raise RuntimeError("Something bad happened") from exc ... Traceback (most recent call last): File "<stdin>", line 2, in <module> ZeroDivisionError: int division or modulo by zero The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 4, in <module> RuntimeError: Something bad happened A similar mechanism works implicitly if an exception is raised inside an exception handler: the previous exception is then attached as the new exception's __context__ attribute: >>> try: ... print(1 / 0) ... except: ... raise RuntimeError("Something bad happened") ... Traceback (most recent call last): File "<stdin>", line 2, in <module> ZeroDivisionError: int division or modulo by zero During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 4, in <module> RuntimeError: Something bad happened Also see the Built-in Exceptions documentation for details on the context and cause information attached to exceptions.
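A short sketch of the from None suppression described above:

try:
    1 / 0
except ZeroDivisionError:
    raise RuntimeError('bad input') from None

# The printed traceback now shows only the RuntimeError; the
# ZeroDivisionError context is suppressed (__suppress_context__ is True).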
Unsupervised pre-training for convolutional neural network in theano
I would like to design a deep net with one (or more) convolutional layers (CNN) and one or more fully connected hidden layers on top. For deep network with fully connected layers there are methods in theano for unsupervised pre-training, e.g., using denoising auto-encoders or RBMs. My question is: How can I implement (in theano) an unsupervised pre-training stage for convolutional layers? I do not expect a full implementation as an answer, but I would appreciate a link to a good tutorial or a reliable reference.
This paper describes an approach for building a stacked convolutional autoencoder. Based on that paper and some Google searches I was able to implement the described network. Basically, everything you need is described in the Theano convolutional network and denoising autoencoder tutorials with one crucial exception: how to reverse the max-pooling step in the convolutional network. I was able to work that out using a method from this discussion - the trickiest part is figuring out the right dimensions for W_prime as these will depend on the feed forward filter sizes and the pooling ratio. Here is my inverting function: def get_reconstructed_input(self, hidden): """ Computes the reconstructed input given the values of the hidden layer """ repeated_conv = conv.conv2d(input = hidden, filters = self.W_prime, border_mode='full') multiple_conv_out = [repeated_conv.flatten()] * np.prod(self.poolsize) stacked_conv_neibs = T.stack(*multiple_conv_out).T stretch_unpooling_out = neibs2images(stacked_conv_neibs, self.pl, self.x.shape) rectified_linear_activation = lambda x: T.maximum(0.0, x) return rectified_linear_activation(stretch_unpooling_out + self.b_prime.dimshuffle('x', 0, 'x', 'x'))
functools.partial wants to use a positional argument as a keyword argument
So I am trying to understand partials: import functools def f(x,y) : print x+y g0 = functools.partial( f, 3 ) g0(1) 4 # Works as expected In: g1 = functools.partial( f, y=3 ) g1(1) 4 # Works as expected In: g2 = functools.partial( f, x=3 ) g2(1) TypeError: f() got multiple values for keyword argument 'x' The TypeError disappears if I use y as a keyword argument: In: g2( y=1 ) 4 What causes the TypeError?
This has nothing to do with functools.partial, really. You are essentially calling your function like this: f(1, x=3) Python first fulfils the positional arguments, and your first argument is x. Then the keyword arguments are applied, and you again supplied x. functools.partial() has no means to detect that you already supplied the first positional argument as a keyword argument instead. It will not augment your call by replacing the positional argument with a y= keyword argument. When mixing positional and keyword arguments, you must take care not to use the same argument twice.
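A minimal sketch of the takeaway: once a parameter has been pre-filled by keyword, any later argument that would collide with it must also be passed by keyword:

import functools

def f(x, y):
    print(x + y)

g2 = functools.partial(f, x=3)
g2(y=1)    # prints 4
# g2(1)    # TypeError: f() got multiple values for keyword argument 'x'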
Upgrade python packages from requirements.txt using pip command
How do I upgrade all my python packages from requirements.txt file using pip command? tried with below command $ pip install --upgrade -r requirements.txt Since, the python packages are suffixed with the version number (Django==1.5.1) they don't seem to upgrade. Is there any better approach than manually editing requirements.txt file? EDIT As Andy mentioned in his answer packages are pinned to a specific version, hence it is not possible to upgrade packages through pip command. But, we can achieve this with pip-tools using the following command. $ pip-review --auto this will automatically upgrade all packages from requirements.txt (make sure to install pip-tools using pip install command).
No. Your requirements file has been pinned to specific versions. If your requirements are set to that version, you should not be trying to upgrade beyond those versions. If you need to upgrade, then you need to switch to unpinned versions in your requirements file. Example: lxml>=2.2.0 This would upgrade lxml to any version newer than 2.2.0 lxml>=2.2.0,<2.3.0 This would upgrade lxml to the most recent version between 2.2.0 and 2.3.0.
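A hedged illustration tying this to the question's setup (the version ranges below are examples, not recommendations). A relaxed requirements.txt:

Django>=1.5.1,<1.6   # allow patch releases only
lxml>=2.2.0          # allow any newer version

After which the original command upgrades in place: pip install --upgrade -r requirements.txt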
Individual alpha values in scatter plot / Matplotlib
I'm wondering if it is possible to have individual alpha values for each point to be plotted using the scatter function of Matplotlib. I need to plot a set of points, each one with its own alpha value. For example, I have this code to plot some points def plot_singularities(points_x, p, alpha_point, file_path): plt.figure() plt.scatter(points_x, points_y, alpha=alpha_point) plt.savefig(file_path + '.png', dpi=100) plt.close() Here, all my points_x, points_y and alpha_point have n values. However, I can't assign an array to the alpha parameter in scatter(). I'm wondering how I can have a different alpha value for each point. I know that I can make a loop and plot point by point with its specific alpha value, but this does not seem like a good approach. Thank you.
tcaswell's suggestion is correct, you can do it like this: import numpy as np import matplotlib.pylab as plt x = np.arange(10) y = np.arange(10) alphas = np.linspace(0.1, 1, 10) rgba_colors = np.zeros((10,4)) # for red the first column needs to be one rgba_colors[:,0] = 1.0 # the fourth column needs to be your alphas rgba_colors[:, 3] = alphas plt.scatter(x, y, color=rgba_colors) plt.show()
How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin?
There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere. Am I missing something obvious?
With the Python plugin installed, navigate to File > Project Structure. Under the Project menu for Project SDK, select "New" and select "Python SDK", then select "Local". Provided you have a Python SDK installed, the flow should be natural from there - navigate to the location your Python installation lives.
Who runs the callback when using apply_async method of a multiprocessing pool?
I'm trying to understand a little bit of what's going on behind the scenes when using the apply_async method of a multiprocessing pool. Who runs the callback method? Is it the main process that called apply_async? Let's say I send out a whole bunch of apply_async commands with callbacks and then continue with my program. My program is still doing things when the apply_async's start to finish. How does the callback get run by the "main process" while the main process is still busy with the script? Here's an example. import multiprocessing import time def callback(x): print '{} running callback with arg {}'.format(multiprocessing.current_process().name, x) def func(x): print '{} running func with arg {}'.format(multiprocessing.current_process().name, x) return x pool = multiprocessing.Pool() args = range(20) for a in args: pool.apply_async(func, (a,), callback=callback) print '{} going to sleep for a minute'.format(multiprocessing.current_process().name) t0 = time.time() while time.time() - t0 < 60: pass print 'Finished with the script' The output is something like PoolWorker-1 running func with arg 0 PoolWorker-2 running func with arg 1 PoolWorker-3 running func with arg 2 MainProcess going to sleep for a minute <-- main process is busy PoolWorker-4 running func with arg 3 PoolWorker-1 running func with arg 4 PoolWorker-2 running func with arg 5 PoolWorker-3 running func with arg 6 PoolWorker-4 running func with arg 7 MainProcess running callback with arg 0 <-- main process running callback while it's still in the while loop!! MainProcess running callback with arg 1 MainProcess running callback with arg 2 MainProcess running callback with arg 3 MainProcess running callback with arg 4 PoolWorker-1 running func with arg 8 ... Finished with script How is MainProcess running the callback while it's in the middle of that while loop?? There is this statement about the callback in the documentation for multiprocessing.Pool that seems like a hint but I don't understand it. apply_async(func[, args[, kwds[, callback]]]) A variant of the apply() method which returns a result object. If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it (unless the call failed). callback should complete immediately since otherwise the thread which handles the results will get blocked.
There is indeed a hint in the docs: callback should complete immediately since otherwise the thread which handles the results will get blocked. The callbacks are handled in the main process, but they're run in their own separate thread. When you create a Pool it actually creates a few Thread objects internally: class Pool(object): Process = Process def __init__(self, processes=None, initializer=None, initargs=(), maxtasksperchild=None): self._setup_queues() self._taskqueue = Queue.Queue() self._cache = {} ... # stuff we don't care about self._worker_handler = threading.Thread( target=Pool._handle_workers, args=(self, ) ) self._worker_handler.daemon = True self._worker_handler._state = RUN self._worker_handler.start() self._task_handler = threading.Thread( target=Pool._handle_tasks, args=(self._taskqueue, self._quick_put, self._outqueue, self._pool, self._cache) ) self._task_handler.daemon = True self._task_handler._state = RUN self._task_handler.start() self._result_handler = threading.Thread( target=Pool._handle_results, args=(self._outqueue, self._quick_get, self._cache) ) self._result_handler.daemon = True self._result_handler._state = RUN self._result_handler.start() The interesting thread for us is _result_handler; we'll get to why shortly. Switching gears for a second, when you run apply_async, it creates an ApplyResult object internally to manage getting the result from the child: def apply_async(self, func, args=(), kwds={}, callback=None): assert self._state == RUN result = ApplyResult(self._cache, callback) self._taskqueue.put(([(result._job, None, func, args, kwds)], None)) return result class ApplyResult(object): def __init__(self, cache, callback): self._cond = threading.Condition(threading.Lock()) self._job = job_counter.next() self._cache = cache self._ready = False self._callback = callback cache[self._job] = self def _set(self, i, obj): self._success, self._value = obj if self._callback and self._success: self._callback(self._value) self._cond.acquire() try: self._ready = True self._cond.notify() finally: self._cond.release() del self._cache[self._job] As you can see, the _set method is the one that ends up actually executing the callback passed in, assuming the task was successful. Also notice that it adds itself to a global cache dict at the end of __init__. Now, back to the _result_handler thread object. That object calls the _handle_results function, which looks like this: while 1: try: task = get() except (IOError, EOFError): debug('result handler got EOFError/IOError -- exiting') return if thread._state: assert thread._state == TERMINATE debug('result handler found thread._state=TERMINATE') break if task is None: debug('result handler got sentinel') break job, i, obj = task try: cache[job]._set(i, obj) # Here is _set (and therefore our callback) being called! except KeyError: pass # More stuff It's a loop that just pulls results from children out of queue, finds the entry for it in cache, and calls _set, which executes our callback. It's able to run even though you're in a loop because it isn't running in the main thread.
How to fix error “Expected version spec in …” using pip install on Windows?
On a Windows 7 machine I am using the following command to install a package from a local directory: pip install addons/pnc_tests --upgrade --extra-index-url=http://some_server/simple which results in the following error: C:\Users\alex\PNC\tas\ENV\Scripts\pip-script.py run on 07/16/14 07:50:47 Exception: Traceback (most recent call last): File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\basecommand.py", line 122, in main status = self.run(options, args) File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\commands\install.py", line 258, in run InstallRequirement.from_line(name, None)) File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\req.py", line 173, in from_line return cls(req, comes_from, url=url, prereleases=prereleases) File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\req.py", line 71, in __init__ req = pkg_resources.Requirement.parse(req) File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\_vendor\pkg_resources.py", line 2667, in parse reqs = list(parse_requirements(s)) File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\_vendor\pkg_resources.py", line 2605, in parse_requirements line, p, specs = scan_list(VERSION,LINE_END,line,p,(1,2),"version spec") File "C:\Users\alex\PNC\tas\ENV\lib\site-packages\pip\_vendor\pkg_resources.py", line 2573, in scan_list raise ValueError("Expected "+item_name+" in",line,"at",line[p:]) ValueError: ('Expected version spec in', 'addons/pnc_tests', 'at', '/pnc_tests') How to solve this problem?
I guess you are missing the parameter -r; the command must look like this if you have a requirements file to install from: pip install -r addons/pnc_tests --upgrade --extra-index-url=http://some_server/simple as described in the pip documentation.
Element-wise logical OR in Pandas
I would like the element-wise logical OR operator. I know "or" itself is not what I am looking for. For AND I want to use & as explained here. For NOT I want to use np.invert() as explained here. So what is the equivalent for OR?
The corresponding operator is |: df[(df < 3) | (df == 5)] would elementwise check if value is less than 3 or equal to 5.
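A minimal runnable illustration of the element-wise operator:

import pandas as pd

df = pd.DataFrame({'a': [1, 3, 5], 'b': [2, 4, 6]})

mask = (df < 3) | (df == 5)   # element-wise OR of two boolean frames
print(df[mask])
#      a    b
# 0  1.0  2.0
# 1  NaN  NaN
# 2  5.0  NaN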
Install pyyaml using pip/Add PyYaml as pip dependency
I want to use PyYaml in my pip project, but am having trouble using it as a dependency. Mainly the problem is that PyYaml in pip is not a cross-platform install. How do I install pyyaml using pip so that it works? Note: on a current fresh Ubuntu install I get the following error when running pip install pyyaml Installing collected packages: pyyaml Running setup.py install for pyyaml checking if libyaml is compilable gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -fPIC -I/usr/include/python3.2mu -c build/temp.linux-x86_64-3.2/check_libyaml.c -o build/temp.linux-x86_64-3.2/check_libyaml.o build/temp.linux-x86_64-3.2/check_libyaml.c:2:18: fatal error: yaml.h: No such file or directory compilation terminated. libyaml is not found or a compiler error: forcing --without-libyaml (if libyaml is installed correctly, you may need to specify the option --include-dirs or uncomment and modify the parameter include_dirs in setup.cfg) Successfully installed pyyaml Note that the output says "successfully installed" but it is not: I cannot import yaml. I am not looking for answers that say "use apt-get", due to my very first sentence; I need the install to be cross-platform and to work as a pip dependency. I am not simply asking how to install it correctly. If this is not possible, is there any library I can use as a replacement?
You will need some extra packages to build it. First of all you need to uninstall pyyaml, or it will complain later that it is already installed pip uninstall pyyaml Then install the following packages: sudo apt-get install libyaml-dev libpython2.7-dev Finally install it again pip install pyyaml
Django: AppRegistryNotReady()
Python: 2.7; Django: 1.7; Mac 10.9.4 I'm following the tutorial of Tango with Django At Chapter 5, the tutorial teaches how to create a population script, which can automatically create some data for the database for the ease of development. I created a populate_rango.py at the same level of manage.py. Here's the populate_rango.py: import os def populate(): python_cat = add_cat('Python') add_page( cat=python_cat, title="Official Python Tutorial", url="http://docs.python.org/2/tutorial/" ) add_page( cat=python_cat, title="How to Think like a Computer Scientist", url="http://www.greenteapress.com/thinkpython/" ) add_page( cat=python_cat, title="Learn Python in 10 Minutes", url="http://www.korokithakis.net/tutorials/python/" ) django_cat = add_cat("Django") add_page( cat=django_cat, title="Official Django Tutorial", url="https://docs.djangoproject.com/en/1.5/intro/tutorial01/" ) add_page( cat=django_cat, title="Django Rocks", url="http://www.djangorocks.com/" ) add_page( cat=django_cat, title="How to Tango with Django", url="http://www.tangowithdjango.com/" ) frame_cat = add_cat("Other Frameworks") add_page( cat=frame_cat, title="Bottle", url="http://bottlepy.org/docs/dev/" ) add_page( cat=frame_cat, title="Flask", url="http://flask.pocoo.org" ) for c in Category.objects.all(): for p in Page.objects.filter(category=c): print "- {0} - {1}".format(str(c), str(p)) def add_page(cat, title, url, views=0): p = Page.objects.get_or_create(category=cat, title=title, url=url, views=views)[0] return p def add_cat(name): c = Category.objects.get_or_create(name=name)[0] return c if __name__ == '__main__': print "Starting Rango population script..." os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tangle.settings') from rango.models import Category, Page populate() Then I run python populate_rango.py at the terminal at the level of manage.py, AppRegistryNotReady() is raised: django.core.exceptions.AppRegistryNotReady Then I googled it, found something like this: Standalone scripts¶ If you’re using Django in a plain Python script — rather than a management command — and you rely on the DJANGO_SETTINGS_MODULE environment variable, you must now explicitly initialize Django at the beginning of your script with: >>> import django >>> django.setup() Otherwise, you will hit an AppRegistryNotReady exception. And I still have no idea what should I do, can some one help? Thx!!!
If you are using your django project applications in standalone scripts, in other words, without using manage.py - you need to manually call django.setup() first - it would configure the logging and, what is important - populate apps registry. Quote from Initialization process docs: setup() This function is called automatically: When running an HTTP server via Django’s WSGI support. When invoking a management command. It must be called explicitly in other cases, for instance in plain Python scripts. In your case, you need to call setup() manually: if __name__ == '__main__': print "Starting Rango population script..." os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tangle.settings') import django django.setup() populate() Also, this problem is described in detail in Troubleshooting section.
Selenium Python get all children elements
In Selenium with Python is it possible to get all the children of a WebElement as a list?
Yes, you can achieve it by find_elements_by_css_selector("*") or find_elements_by_xpath(".//*"). However, this doesn't sound like a valid use case to find all children of an element. It is an expensive operation to get all direct/indirect children. Please further explain what you are trying to do. There should be a better way. from selenium import webdriver driver = webdriver.Firefox() driver.get("http://www.stackoverflow.com") header = driver.find_element_by_id("header") # start from your target element, here for example, "header" all_children_by_css = header.find_elements_by_css_selector("*") all_children_by_xpath = header.find_elements_by_xpath(".//*") print 'len(all_children_by_css): ' + str(len(all_children_by_css)) print 'len(all_children_by_xpath): ' + str(len(all_children_by_xpath))
Starting the ipython notebook
In an Ubuntu terminal window, this error is raised when the notebook is starting. Could anybody explain this error? KeyError: 3 ERROR:root:Exception in I/O handler for fd 3 Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/zmq/eventloop/ioloop.py", line 330, in start self._handlers[fd](fd, events)
I had to upgrade pyzmq to get it working: sudo pip install pyzmq --upgrade If the problem still persists try upgrading the rest of the libraries which are: sudo pip install jinja2 sudo pip install tornado If you don't wanna be messing around with these libraries you could run a virtual environment and install everything there. Here is the documentation: http://docs.python-guide.org/en/latest/dev/virtualenvs/ Hope it helps, cheers!
Understanding MySQL Cursor Types
MySQL Version: 5.5.37-0ubuntu0.14.04.1 I am currently writing a python script which makes use of numerous MySQL tables and queries to obtain results from an inverted index stored in tables. I have noticed that choosing the right type of Cursor in the MySQLdb python module when executing a query has a really big effect on performance and was wondering if someone could explain or provide a reliable resource explaining which cursor to use when. As an example, executing this query 40 times with SSCursor takes 7 seconds: SELECT Pages.PageID, Pages.PageName, Counter AS TermFreq, Pages.Length, (Counter / LOG(Length)) AS Weight FROM Pages INNER JOIN TermOccurrences ON TermOccurrences.PageID = Pages.PageID INNER JOIN Terms ON TermOccurrences.TermID = Terms.TermID WHERE TermName = %s ORDER BY Weight DESC LIMIT 20; Running the same query 40 times with the default Cursor takes 0.004 seconds. Removing the calculation of weight (Counter/LOG(Length)) makes this query execute fast again using an SSCursor however. I was using SSCursor because it was proving to have vastly superior performance on a number of other queries and then suddenly became very slow for this one. Changing back to the default Cursor surprised me when it executed so fast. EDIT: Some more examples. Running the following with the default cursor 40 times takes ~3 seconds: SELECT COUNT(*) FROM Pages INNER JOIN TermOccurrences ON TermOccurrences.PageID = Pages.PageID INNER JOIN Terms ON TermOccurrences.TermID = Terms.TermID WHERE TermName = %s AND Counter > 2 Running it with SSCursor instead takes about 0.002 seconds.
The MySQLdb docs mention that the standard Cursor class is using mysql_store_result() whereas SSCursor is using mysql_use_result() "with the latter you must ensure all the rows have been read before another query can be executed". So it's about mysql_store_result() vs mysql_use_result(). The MySQL docs mention that mysql_use_result() initiates a result set retrieval without actually reading the result set into the client, like mysql_store_result() does. So each row has to be retrieved individually with calls to mysql_fetch_row(), which of course can add up considerably when dealing with large tables. Also in the MySQLdb docs: SSCursor: A "server-side" cursor. Like Cursor but uses CursorUseResultMixIn. Use only if you are dealing with potentially large result sets. So SSCursor is mainly good if your result set is too large to move into your client all at once. See also these questions: How to efficiently use MySQLDB SScursor? MySQLdb is extremely slow with large result sets And note that a LIMIT 20 query can never really be that large. You might have to check your KEYs. To get a better idea about why that could take 7 seconds it's probably best to also include the db schema in a question, probably more something for DBA stack.
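A hedged sketch of selecting between the two cursor classes in MySQLdb (connection parameters are hypothetical; the table name is taken from the question):

import MySQLdb
import MySQLdb.cursors

conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='test')

buffered = conn.cursor()                           # default Cursor: mysql_store_result()
streaming = conn.cursor(MySQLdb.cursors.SSCursor)  # server-side: mysql_use_result()

streaming.execute("SELECT PageID FROM Pages")
for row in streaming:      # rows come back one at a time from the server
    pass
streaming.close()          # with SSCursor, close before reusing the connection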
Fit a non-linear function to data/observations with pyMCMC/pyMC
I am trying to fit some data with a Gaussian (and more complex) function(s). I have created a small example below. My first question is, am I doing it right? My second question is, how do I add an error in the x-direction, i.e. in the x-position of the observations/data? It is very hard to find nice guides on how to do this kind of regression in pyMC. Perhaps because its easier to use some least squares, or similar approach, I however have many parameters in the end and need to see how well we can constrain them and compare different models, pyMC seemed like the good choice for that. import pymc import numpy as np import matplotlib.pyplot as plt; plt.ion() x = np.arange(5,400,10)*1e3 # Parameters for gaussian amp_true = 0.2 size_true = 1.8 ps_true = 0.1 # Gaussian function gauss = lambda x,amp,size,ps: amp*np.exp(-1*(np.pi**2/(3600.*180.)*size*x)**2/(4.*np.log(2.)))+ps f_true = gauss(x=x,amp=amp_true, size=size_true, ps=ps_true ) # add noise to the data points noise = np.random.normal(size=len(x)) * .02 f = f_true + noise f_error = np.ones_like(f_true)*0.05*f.max() # define the model/function to be fitted. def model(x, f): amp = pymc.Uniform('amp', 0.05, 0.4, value= 0.15) size = pymc.Uniform('size', 0.5, 2.5, value= 1.0) ps = pymc.Normal('ps', 0.13, 40, value=0.15) @pymc.deterministic(plot=False) def gauss(x=x, amp=amp, size=size, ps=ps): e = -1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.)) return amp*np.exp(e)+ps y = pymc.Normal('y', mu=gauss, tau=1.0/f_error**2, value=f, observed=True) return locals() MDL = pymc.MCMC(model(x,f)) MDL.sample(1e4) # extract and plot results y_min = MDL.stats()['gauss']['quantiles'][2.5] y_max = MDL.stats()['gauss']['quantiles'][97.5] y_fit = MDL.stats()['gauss']['mean'] plt.plot(x,f_true,'b', marker='None', ls='-', lw=1, label='True') plt.errorbar(x,f,yerr=f_error, color='r', marker='.', ls='None', label='Observed') plt.plot(x,y_fit,'k', marker='+', ls='None', ms=5, mew=2, label='Fit') plt.fill_between(x, y_min, y_max, color='0.5', alpha=0.5) plt.legend() I realize that I might have to run more iterations, use burn in and thinning in the end. The figure plotting the data and the fit is seen here below. The pymc.Matplot.plot(MDL) figures looks like this, showing nicely peaked distributions. This is good, right?
My first question is, am I doing it right? Yes! You need to include a burn-in period, which you know. I like to throw out the first half of my samples. You don't need to do any thinning, but sometimes it will make your post-MCMC work faster to process and smaller to store. The only other thing I advise is to set a random seed, so that your results are "reproducible": np.random.seed(12345) will do the trick. Oh, and if I were really giving too much advice, I'd say import seaborn to make the matplotlib results a little more beautiful. My second question is, how do I add an error in the x-direction, i.e. in the x-position of the observations/data? One way is to include a latent variable for each error. This works in your example, but will not be feasible if you have many more observations. I'll give a little example to get you started down this road (note that it assumes import pymc as pm): # add noise to observed x values x_obs = pm.rnormal(mu=x, tau=(1e4)**-2) # define the model/function to be fitted. def model(x_obs, f): amp = pm.Uniform('amp', 0.05, 0.4, value= 0.15) size = pm.Uniform('size', 0.5, 2.5, value= 1.0) ps = pm.Normal('ps', 0.13, 40, value=0.15) x_pred = pm.Normal('x', mu=x_obs, tau=(1e4)**-2) # this allows error in x_obs @pm.deterministic(plot=False) def gauss(x=x_pred, amp=amp, size=size, ps=ps): e = -1*(np.pi**2*size*x/(3600.*180.))**2/(4.*np.log(2.)) return amp*np.exp(e)+ps y = pm.Normal('y', mu=gauss, tau=1.0/f_error**2, value=f, observed=True) return locals() MDL = pm.MCMC(model(x_obs, f)) MDL.use_step_method(pm.AdaptiveMetropolis, MDL.x_pred) # use AdaptiveMetropolis to "learn" how to step MDL.sample(200000, 100000, 10) # run chain longer since there are more dimensions It looks like it may be hard to get good answers if you have noise in x and y. Here is a notebook collecting this all up.
How can I copy a Python string?
I do this: a = 'hello' And now I just want an independent copy of a: import copy b = str(a) c = a[:] d = a + '' e = copy.copy(a) map( id, [ a,b,c,d,e ] ) Out[3]: [4365576160, 4365576160, 4365576160, 4365576160, 4365576160] Why do they all have the same memory address and how can I get a copy of a?
You don't need to copy a Python string. They are immutable, and the copy module always returns the original in such cases, as do str(), the whole string slice, and concatenating with an empty string. Moreover, your 'hello' string is interned (certain strings are). Python deliberately tries to keep just the one copy, as that makes dictionary lookups faster. One way you could work around this is to actually create a new string, then slice that string back to the original content: >>> a = 'hello' >>> b = (a + '.')[:-1] >>> id(a), id(b) (4435312528, 4435312432) But all you are doing now is wasting memory. It is not as if you can mutate these string objects in any way, after all. If all you wanted to know is how much memory a Python object requires, use sys.getsizeof(); it gives you the memory footprint of any Python object. For containers this does not include the contents; you'd have to recurse into each container to calculate a total memory size: >>> import sys >>> a = 'hello' >>> sys.getsizeof(a) 42 >>> b = {'foo': 'bar'} >>> sys.getsizeof(b) 280 >>> sys.getsizeof(b) + sum(sys.getsizeof(k) + sys.getsizeof(v) for k, v in b.items()) 360 You can then choose to use id() tracking to take an actual memory footprint or to estimate a maximum footprint if objects were not cached and reused.
Another one about pyCrypto and Paramiko
I have already spent 2 days trying to install pyCrypto for the Paramiko module. So, the first issue I faced was this: >>> import paramiko Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\Python\lib\site-packages\paramiko\__init__.py", line 31, in <module> from paramiko.transport import SecurityOptions, Transport File "C:\Program Files\Python\lib\site-packages\paramiko\transport.py", line 47, in <module> from paramiko.dsskey import DSSKey File "C:\Program Files\Python\lib\site-packages\paramiko\dsskey.py", line 26, in <module> from Crypto.PublicKey import DSA ImportError: No module named 'Crypto' This is actually quite funny, because Windows file systems don't care about case. I changed the folder name from crypto to Crypto and this particular issue disappeared. Now it wants winrandom: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Program Files\Python\lib\site-packages\paramiko\__init__.py", line 31, in <module> from paramiko.transport import SecurityOptions, Transport File "C:\Program Files\Python\lib\site-packages\paramiko\transport.py", line 47, in <module> from paramiko.dsskey import DSSKey File "C:\Program Files\Python\lib\site-packages\paramiko\dsskey.py", line 26, in <module> from Crypto.PublicKey import DSA File "C:\Program Files\Python\lib\site-packages\Crypto\PublicKey\DSA.py", line 89, in <module> from Crypto import Random File "C:\Program Files\Python\lib\site-packages\Crypto\Random\__init__.py", line 28, in <module> from Crypto.Random import OSRNG File "C:\Program Files\Python\lib\site-packages\Crypto\Random\OSRNG\__init__.py", line 34, in <module> from Crypto.Random.OSRNG.nt import new File "C:\Program Files\Python\lib\site-packages\Crypto\Random\OSRNG\nt.py", line 28, in <module> import winrandom ImportError: No module named 'winrandom' When I try to install it through PIP I fail with: Cannot export PyInit_winrandom: symbol not defined build\temp.win32-3.4\Release\src\winrandom.o:winrandom.c:(.text+0x12): undefined reference to `Py_InitModule' collect2: ld returned 1 exit status error: command 'c:\\mingw\\bin\\gcc.exe' failed with exit status 1 Seems like it doesn't support Python 3.4. Is there any way to make it all work in Win7 x86 with Python 3.4 installed? Installed modules: crypto (1.1.0) ecdsa (0.11) Fabric (1.9.0) paramiko (1.14.0) pip (1.5.6) pyasn1 (0.1.7) pycrypto (2.6.1) PyYAML (3.11) rsa (3.1.4) setuptools (2.1) Python version 3.4.1
The problem is solved by editing a line in Crypto\Random\OSRNG\nt.py: change import winrandom to from . import winrandom
ImportError: No module named scipy
I am using Python 2.7 and trying to get PyBrain to work. But I get this error even though scipy is installed - Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/__init__.py", line 1, in <module> from pybrain.structure.__init__ import * File "/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/structure/__init__.py", line 1, in <module> from pybrain.structure.connections.__init__ import * File "/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/structure/connections/__init__.py", line 1, in <module> from pybrain.structure.connections.full import FullConnection File "/usr/local/lib/python2.7/site-packages/PyBrain-0.3.1-py2.7.egg/pybrain/structure/connections/full.py", line 3, in <module> from scipy import reshape, dot, outer ImportError: No module named scipy I have installed scipy using this command - sudo apt-get install python-scipy I get - Reading package lists... Done Building dependency tree Reading state information... Done python-scipy is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. What do I do?
Try installing it as a Python package using pip. You said you already tried: sudo apt-get install python-scipy Now run: pip install scipy I ran both, and it worked on my Debian-based box.
Sending a form array to Flask
I have an HTML form with multiple inputs named like this: <input name="hello[]" type="text" /> <input name="hello[]" type="text" /> <input name="hello[]" type="text" /> In PHP you get this as an array, but is it the same way in Python, using Flask? I tried this: hello = request.form['hello'] print hello In Flask, but that did not work; I got a "400 bad request": Bad Request The browser (or proxy) sent a request that this server could not understand. How do I do it in Flask?
You are following a PHP convention of adding brackets to the field names. It's not a web standard, but because PHP supports it out of the box it is popular; Ruby on Rails also uses it. If you do use that convention, to get the POST data on the Flask side you need to include the square brackets in the field name. You can retrieve all values of the list using MultiDict.getlist(): hello = request.form.getlist('hello[]') You don't have to use the [] convention at all, of course. Not appending the [] to the hello name will work perfectly fine, at which point you'd use request.form.getlist('hello') in Flask.
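For context, a complete little route putting this together might look like the sketch below; the route path and response formatting are just for illustration:

from flask import Flask, request

app = Flask(__name__)

@app.route('/hello', methods=['POST'])
def hello():
    # all values submitted under the name hello[]
    values = request.form.getlist('hello[]')
    return ', '.join(values)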
Why are complex numbers in Python denoted with 'j' instead of 'i'?
I know this is an electrical engineering convention, but I'm still wondering why it was chosen for Python. I don't know other programming languages with complex-number literals, so I don't have anything to compare against, but does anyone know any that do use i?
It appears to be, as you guessed, because Python follows the electrical engineering convention. Here's an exchange from the Python bug tracker Issue10562: Boštjan Mejak: In Python, the letter 'j' denotes the imaginary unit. It would be great if we would follow mathematics in this regard and let the imaginary unit be denoted with an 'i'. Michael Foord: We follow engineering which uses j. (I was about to close this as wontfix but Antoine is particularly keen that Mark deals with this issue...) Mark Dickinson: Just to add my own thoughts: 'j' for a (not the) square root of -1 has, as Michael points out, a history of use in engineering (particularly electrical engineering) and physics. Personally, I would have preferred 'i' to 'j' here, but changing it now would cause (IMO) gratuitous breakage. It really doesn't seem a big enough issue to be worth making a fuss about. ... Much later: Guido van Rossum: This will not be fixed. For one thing, the letter 'i' or upper case 'I' look too much like digits. The way numbers are parsed either by the language parser (in source code) or by the built-in functions (int, float, complex) should not be localizable or configurable in any way; that's asking for huge disappointments down the road. If you want to parse complex numbers using 'i' instead of 'j', you have plenty of solutions available already.
ipython notebook clear cell output in code
In an IPython notebook, I have a while loop that listens to a serial port and prints the received data in real time. What I want to achieve is to only show the latest received data (i.e. only one line showing the most recent data, with no scrolling in the cell output area). What I need (I think) is to clear the old cell output when I receive new data, and then print the new data. I am wondering how I can clear the old output programmatically?
You can use IPython.display.clear_output to clear the output of a cell. from IPython.display import clear_output for i in range(10): clear_output() print("Hello World!") At the end of this loop you will only see one Hello World!. Without a code example it's not easy to give you working code. Probably buffering the latest n events is a good strategy. Whenever the buffer changes you can clear the cell's output and print the buffer again.
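If the cell flickers because it is cleared before the next line is printed, clear_output also takes a wait flag that postpones the clearing until new output is ready; a small sketch with simulated data:

from IPython.display import clear_output
import time

for i in range(10):
    clear_output(wait=True)  # defer clearing until new output arrives
    print("Latest data: {}".format(i))
    time.sleep(0.5)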
django: return string from view
I know this is a simple question, sorry. I just want to return a simple string, no templates. I have my view: def myview(request): return "return this string" I don't remember the command. Thanks
According to the documentation: A view function, or view for short, is simply a Python function that takes a Web request and returns a Web response. Each view function is responsible for returning an HttpResponse object. In other words, your view should return a HttpResponse instance: from django.http import HttpResponse def myview(request): return HttpResponse("return this string")
.pyw and pythonw do not run under Windows 7
Running a simple .py or .pyw python file causes python.exe to show up under Task Manager. python myApp.py python myApp.pyw However, when we try to run it without using the console, the script does not appear to run, nor does python.exe or pythonw.exe appear under Task Manager pythonw myApp.pyw pythonw myApp.py How do we troubleshoot the problem? The system is running Python 2.7.8 x64.
tl;dr To troubleshoot, use output redirection on invocation: pythonw myApp.py 1>stdout.txt 2>stderr.txt This will capture stdout output, such as from print(), in file stdout.txt, and stderr output (such as from unhandled exceptions), in file stderr.txt (from PowerShell, use: cmd /c pythonw myApp.py 1>stdout.txt 2>stderr.txt). Note that the very act of redirecting stdout may actually make your script work again, if the only reason for its failure with pythonw was the use of print (in Python 2.x - see below). Caveat: This output redirection technique seemingly does not work when invoking *.pyw scripts directly (as opposed to by passing the script file path to pythonw.exe). Do let me know if you know why and/or if it does work for you. To fix your script: Place the following at the top of any Python 2.x or 3.x script that you want to run with pythonw.exe: import sys, os if sys.executable.endswith("pythonw.exe"): sys.stdout = open(os.devnull, "w"); sys.stderr = open(os.path.join(os.getenv("TEMP"), "stderr-"+os.path.basename(sys.argv[0])), "w") This ensures the following when a script is run with pythonw.exe: print() calls and explicit calls to sys.stdout.write() are effectively ignored (are no-ops). Stderr output, including from an unhandled fatal exception, is sent to file %TEMP%\stderr-<scriptFileName>; %TEMP% is a standard Windows environment variable that points to the current user's folder for temporary files. In other words: With the above code in place, check file %TEMP%\stderr-<scriptFileName> after your script has failed silently when invoked with pythonw.exe. For an explanation, read on. On Windows, pythonw.exe is for launching GUI/no-UI-at-all scripts, which means that the standard in- and output streams - sys.stdin, sys.stdout, sys.stderr - are NOT available. This has two nasty side effects: Using print() - which targets sys.stdout by default - causes an exception in Python 2.x. This problem has been fixed in Python 3.x. Any unhandled exception - including one triggered by print() in 2.x - causes the script to abort silently. Exception error messages go to sys.stderr by default, which is the very thing not available in this scenario. The above code fixes these problems by: sending stdout output to the null device, effectively ignoring any attempt to output to sys.stdout - whether explicitly, or implicitly via print(); and sending all stderr output to a temporary file. Differences between Python 2.x and Python 3.x: When a script is run with pythonw.exe, sys.stdin, sys.stdout, and sys.stderr: in Python 2.x: have invalid file descriptors. The eventual result when trying to write to sys.stdout or sys.stderr is the following exception: IOError: [Errno 9] Bad file descriptor Pitfall: Due to output buffering, this exception may not surface until you've output, say, 4K bytes; you can provoke it instantly by invoking pythonw.exe with -u (for unbuffered output). print() blindly tries to write to sys.stdout (by default), so it provokes this exception sooner or later. in Python 3.x: are set to None. This is complemented by the 3.x print() function performing a no-op (doing nothing) when it finds that sys.stdout is None, so that print() statements can by default safely be used - they'll simply be ignored when run with pythonw.exe. However, it follows that trying to use sys.stdout.write() and sys.stderr.write() still results in an exception. See here for more background.
How to catch exception output from Python subprocess.check_output()?
I'm trying to do a Bitcoin payment from within Python. In bash I would normally do this: bitcoin sendtoaddress <bitcoin address> <amount> so for example: bitcoin sendtoaddress 1HoCUcbK9RbVnuaGQwiyaJGGAG6xrTPC9y 1.4214 If it is successful, I get a transaction id as output, but if I try to transfer an amount larger than my bitcoin balance, I get the following output: error: {"code":-4,"message":"Insufficient funds"} In my Python program I now try to do the payment as follows: import subprocess try: output = subprocess.check_output(['bitcoin', 'sendtoaddress', address, str(amount)]) except: print "Unexpected error:", sys.exc_info() If there's enough balance it works fine, but if there's not enough balance sys.exc_info() prints out this: (<class 'subprocess.CalledProcessError'>, CalledProcessError(), <traceback object at 0x7f339599ac68>) It doesn't include the error which I get on the command line, though. So my question is: how can I get the outputted error ({"code":-4,"message":"Insufficient funds"}) from within Python? All tips are welcome!
According to the subprocess.check_output() docs, the exception raised on error has an output attribute that you can use to access the error details: try: subprocess.check_output(...) except subprocess.CalledProcessError as e: print e.output You should then be able to analyse this string and parse the error details with the json module: if e.output.startswith('error: {'): error = json.loads(e.output[7:]) # Skip "error: " print error['code'] print error['message']
tornado 403 GET warning when opening websocket
I found this Python script which should allow me to open a WebSocket. However, I receive the warning [W 1402720 14:44:35 web:1811] 403 GET / (192.168.0.102) 11.02 ms in my Linux terminal when trying to open the actual WebSocket (using the Old WebSocket Terminal Chrome plugin). The messages "connection opened", "connection closed" and "message received" are never printed in the terminal window. import tornado.httpserver import tornado.ioloop import tornado.options import tornado.web import tornado.websocket class MyHandler(tornado.websocket.WebSocketHandler): def open(self): print "connection opened" self.write_message("connection opened") def on_close(self): print "connection closed" def on_message(self,message): print "Message received: {}".format(message) self.write_message("message received") if __name__ == "__main__": tornado.options.parse_command_line() app = tornado.web.Application(handlers=[(r"/",MyHandler)]) server = tornado.httpserver.HTTPServer(app) server.listen(8888) tornado.ioloop.IOLoop.instance().start()
Tornado 4.0 introduced an origin check for websocket connections: cross-origin requests are rejected with a 403 unless the handler overrides check_origin. So please add def check_origin(self, origin): return True in class MyHandler, like this: class MyHandler(tornado.websocket.WebSocketHandler): def check_origin(self, origin): return True def open(self): print "connection opened" self.write_message("connection opened") def on_close(self): print "connection closed" def on_message(self,message): print "Message received: {}".format(message) self.write_message("message received")
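Returning True disables the origin protection entirely; if you would rather allow only a known set of hosts, a hedged sketch follows (the whitelisted hostnames are placeholders, not from the original answer):

from urlparse import urlparse  # in Python 3: from urllib.parse import urlparse

class MyHandler(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        # accept only connections whose Origin host is whitelisted
        return urlparse(origin).hostname in ('localhost', '192.168.0.102')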
How to check if a column exists in Pandas
Is there a way to check if a column exists in a Pandas DataFrame? Suppose that I have the following DataFrame: >>> import pandas as pd >>> from random import randint >>> df = pd.DataFrame({'A': [randint(1, 9) for x in xrange(10)], 'B': [randint(1, 9)*10 for x in xrange(10)], 'C': [randint(1, 9)*100 for x in xrange(10)]}) >>> df A B C 0 3 40 100 1 6 30 200 2 7 70 800 3 3 50 200 4 7 50 400 5 4 10 400 6 3 70 500 7 8 30 200 8 3 40 800 9 6 60 200 and I want to calculate df['sum'] = df['A'] + df['C'] but first I want to check if df['A'] exists, and if not, I want to calculate df['sum'] = df['B'] + df['C'] instead. Thanks for the help.
This will work: if 'A' in df: But for clarity, I'd probably write it as: if 'A' in df.columns:
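For the question's fallback logic, where several columns have to exist at once, a set check against df.columns is a compact option; a sketch using the question's frame:

if {'A', 'C'}.issubset(df.columns):
    df['sum'] = df['A'] + df['C']
elif {'B', 'C'}.issubset(df.columns):
    df['sum'] = df['B'] + df['C']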
A simple example of using cmake to build a Windows DLL
A couple of years ago I was saddled with building and modifying this huge package from a partner engineering institution located in the NorthEast (perhaps, "Down East"). This package was built using something called "cmake" with Linux as the target... cmake, IMHO the most amazingly maddening, poorly documented, oddly structured build system I'd ever had the displeasure of working with given my limited abilities (28 years of professional experience with *NIX systems and building lotsa open source code). Then I had to build another project with "cmake" which targeted MSVS. Oh, the JOY! Finally SOMETHING that could reliably generate those nasty-a**ed "project" and "solution" files. And those same CMakeLists.txt files could re-target Linux! Wow, I've seen the light. It's still pretty dark where I am, though. Unless I have a CMakeLists.txt file to start with, I just can't seem to get my head wrapped around starting one from scratch and spending less than a day on that process for the simplest problem. I have a task to build a DLL with MSVS that can be accessed from a Python script using ctypes. Basically, that means a DLL which has symbols on-board. Since I have that 10-year-old bug where my installations of VS 2008 AND VS 2010 cannot create a new C++ project, I figured I'd bend my pick on generating a DLL Solution with cmake. I haven't been able to find a modern (aka post cmake 2.8.5) COMPLETE example of building a DLL with cmake, which is supposed to be much better at this task than in the past. Dived through the tutorial http://www.cmake.org/cmake/help/cmake_tutorial.html which is horrible because they expect you to write the C++ code while learning the cmake. (Hey, man! I'm having enough trouble getting cmake to work, let alone making code that will compile!) The tutorial goes through building a simple binary and then a binary using a library, but it does not generate a DLL. Reading http://www.cmake.org/Wiki/BuildingWinDLL between the lines, I naively added some code to the CMakeLists.txt file in the lib directory: Before: add_library(MathFunctions mysqrt.cxx) install (TARGETS MathFunctions DESTINATION bin) install (FILES MathFunctions.h DESTINATION include) After: add_library(MathFunctions SHARED mysqrt.cxx) GENERATE_EXPORT_HEADER( MathFunctions BASE_NAME MathFunctions EXPORT_MACRO_NAME MathFunctions_EXPORT EXPORT_FILE_NAME MathFunctions_Export.h STATIC_DEFINE MathFunctions_BUILT_AS_STATIC ) install (TARGETS MathFunctions DESTINATION bin) install (FILES MathFunctions.h DESTINATION include) cmake 3.0.0 and cmake 2.8.12.2 both whinge at this file with: CMake Error at MathFunctions/CMakeLists.txt:2 (GENERATE_EXPORT_HEADER): Unknown CMake command "GENERATE_EXPORT_HEADER". The function appears to be in the cmake installation as GenerateExportHeader.cmake, and no amount of debugging revealed the "why" on this error. And I haven't been able to find this error on the Internet. And that was the first six hours of my day. I finally decided to remove the offending command and try this: add_library(MathFunctions mysqrt.cxx) install (TARGETS MathFunctions DESTINATION bin) install (FILES MathFunctions.h DESTINATION include) Wa-la! cmake configured and generated and MSVS built it successfully and a DLL appeared in the Debug subdirectory of the library directory. Kuel. This DLL, however, did not contain the symbols that would allow python/ctypes to access the desired function. After some more rooting around in the BuildingWinDLL page, I managed to elicit the symbols.
Python was very happy, and I now have a model for future work even though it is a rude, simple-minded hack! SO, after that long-winded discussion: Is the BuildingWinDLL page, referring to cmake 2.8.5 and above, simply wrong? What was the right way to do this? Does anybody out there have a simple, "cmake 101" example of creating an MSVS DLL with exported symbols that is not a hack so that I can throw away my hack? PS: very nice, friendly article composition system here at stackoverflow. I will enjoy this, assuming I'm allowed back... UPDATE after the answer from steveire: The original question is answered, and I see that I missed a hint in the BuildingWinDLL page. I also found that I failed to change one of the fields in the example for my own code. So now we're on to the next layer. Using the referenced example, the VS2010 solution build complains: LINK : fatal error LNK1104: cannot open file 'MathFunctions\Debug\MathFunctions.lib' I gathered from the BuildingWinDLL page that GENERATE_EXPORT_HEADER() was all-singing, all-dancing with regard to building the DLL. The .lib file is not being generated, and the .dll that is generated does not contain symbols... The BuildingWinDLL page talks about the pre-cmake-2.8.5 process. The 2.8.5 process noted at the top of the page is how the files at the bottom of the page are now automatically generated using GENERATE_EXPORT_HEADER(). It is still necessary to knit the pieces together, which is not clear to me from the text. So MathFunctions_Export.h is generated by the GENERATE_EXPORT_HEADER() cmake command with the particular parameters presented here, and it is a C header with a macro for causing symbols to be exported. This file apparently has to be explicitly referred to, and symbols to export properly qualified: #include <math.h> #include <Mathfunctions/MathFunctions_Export.h> MathFunctions_EXPORT double mysqrt(double v) { return sqrt(v); } Adding the #include and the *EXPORT qualifier now causes symbols to be exported, and VS now knows to generate the .lib and populate the .dll with symbols. SUCCESS! Thanks to all who aided in this process and suffered with me in my pain.
You need to include(GenerateExportHeader) before using it: http://www.cmake.org/cmake/help/v3.0/module/GenerateExportHeader.html Also, CMake builds STATIC libraries by default, so keep the SHARED keyword if you want to build a shared library.
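For completeness, a minimal CMakeLists.txt along those lines might look like the sketch below. It is untested and uses the defaults of generate_export_header (CMake command names are case-insensitive, so this is the same command the question called in uppercase) rather than the custom arguments from the question:

cmake_minimum_required(VERSION 2.8.6)  # GenerateExportHeader appeared in 2.8.6
project(MathFunctions)

include(GenerateExportHeader)

add_library(MathFunctions SHARED mysqrt.cxx)

# writes mathfunctions_export.h to the build dir, defining MATHFUNCTIONS_EXPORT
generate_export_header(MathFunctions)

# so that the generated export header is found from the sources
include_directories(${CMAKE_CURRENT_BINARY_DIR})

install(TARGETS MathFunctions DESTINATION bin)
install(FILES MathFunctions.h DESTINATION include)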
Pycharm does not show plot
Pycharm does not show plot from the following code: import pandas as pd import numpy as np import matplotlib.pyplot as plt ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts = ts.cumsum() ts.plot() What happens is that a window appears for less than a second, and then disappears again. Using the Pyzo IEP IDE (using the same interpreter) on the same code, the plot shows as expected. ...So the problem must be with some setting in Pycharm. I've tried using both python.exe and pythonw.exe as the interpreter, both with the same results. This is my sys_info: C:\pyzo2014a\pythonw.exe -u C:\Program Files (x86)\JetBrains\PyCharm Community Edition 3.4.1\helpers\pydev\pydevconsole.py 57315 57316 PyDev console: using IPython 2.1.0import sys; print('Python %s on %s' % (sys.version, sys.platform)) Python 3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, 13:02:30) [MSC v.1600 64 bit (AMD64)] on win32 sys.path.extend(['C:\\Users\\Rasmus\\PycharmProjects\\untitled2']) In[3]: import IPython print(IPython.sys_info()) {'commit_hash': '681fd77', 'commit_source': 'installation', 'default_encoding': 'UTF-8', 'ipython_path': 'C:\\pyzo2014a\\lib\\site-packages\\IPython', 'ipython_version': '2.1.0', 'os_name': 'nt', 'platform': 'Windows-8-6.2.9200', 'sys_executable': 'C:\\pyzo2014a\\pythonw.exe', 'sys_platform': 'win32', 'sys_version': '3.4.1 |Continuum Analytics, Inc.| (default, May 19 2014, ' '13:02:30) [MSC v.1600 64 bit (AMD64)]'}
I had the same problem. Check whether plt.isinteractive() is True. Setting it to False helped for me: plt.interactive(False) You may also need to call plt.show() at the end of the script so the figure window stays open instead of closing immediately.
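Put together with the question's script (assuming the standard pyplot import), the whole thing might look like this:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

plt.interactive(False)  # disable interactive mode so the figure blocks

ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot()
plt.show()  # keep the window open until it is closed manually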
How to get Google Analytics credentials without gflags - using run_flow() instead?
This may take a second to explain, so please bear with me: I'm working on a project for work that requires me to pull in Google Analytics data. I originally did this following this link, so after installing the API client pip install --upgrade google-api-python-client and setting things up like the client_secrets.json, it wanted gflags to be installed in order to execute the run() statement (i.e. credentials = run(FLOW, storage)). Now, I was getting the error message to install gflags or better to use run_flow() (exact error message was this): NotImplementedError: The gflags library must be installed to use tools.run(). Please install gflags or preferably switch to using tools.run_flow(). I originally used gflags (a few months ago), but it wasn't compatible with our framework (pyramid), so we removed it until we could figure out what the issue was. And the reason why it's preferable to switch from gflags to run_flow() is because gflags has been deprecated, so I don't want to use it like I had. What I'm trying to do now is switch over to using run_flow(). The issue with this is that run_flow() expects a command line argument to be sent to it, and this is not a command line application. I found some documentation that was helpful, but I'm stuck on building the flags for the run_flow() function. Before showing code, one more thing to explain: run_flow() takes three arguments (documentation here). It takes the flow and storage just like run() does, but it also takes a flags object. The gflags library built a flags ArgumentParser object that was used in the oauth2client execution method. A few other links that were helpful in building the ArgumentParser object: Link 1: https://google-api-python-client.googlecode.com/hg/docs/epy/oauth2client.tools-module.html Link 2: https://developers.google.com/compute/docs/api/python-guide The second link is very helpful to see how it would be executed, so now when I try to do something similar, sys.argv pulls in the location of my virtual environment that is running aka pserve and also pulls in my .ini file (which stores credentials for my machine to run the virtual environment). But that throws an error because it's expecting something else, and this is where I'm stuck.
I don't know what flags object I need to build to send through run_flow(). I don't know what argv arguments need to be passed in order for the statement flags = parser.parse_args(argv[1:]) to retrieve the correct information (I don't know what the correct information is supposed to be). Code: CLIENT_SECRETS = client_file.uri MISSING_CLIENT_SECRETS_MESSAGE = '%s is missing' % CLIENT_SECRETS FLOW = flow_from_clientsecrets( CLIENT_SECRETS, scope='https://www.googleapis.com/auth/analytics.readonly', message=MISSING_CLIENT_SECRETS_MESSAGE ) TOKEN_FILE_NAME = 'analytics.dat' def prepare_credentials(self, argv): storage = Storage(self.TOKEN_FILE_NAME) credentials = storage.get() if credentials is None or credentials.invalid: parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter, parents=[tools.argparser]) flags = parser.parse_args(argv[1:]) # i could also do just argv, both error credentials = run_flow(self.FLOW, storage, flags) return credentials def initialize_service(self, argv): http = httplib2.Http() credentials = self.prepare_credentials(self, argv) http = credentials.authorize(http) return build('analytics', 'v3', http=http) I call a main function passing sys.argv that calls the initialize_service def main(self, argv): service = self.initialize_service(self, argv) try: #do a query and stuff here I knew this wouldn't work because my application is not a command line application but rather a fully integrated service, but I figured it was worth a shot. Any thoughts on how to build the flags object correctly?
from oauth2client import tools flags = tools.argparser.parse_args(args=[]) credentials = tools.run_flow(flow, storage, flags) Took a bit of mucking about but climbed my way out the two traps it dropped me into: have to use the argparser provided in tools I had to feed args an empty list to prevent it from reading args off the command line, which was a problem because I'm running it from inside a unittest (so different cmdline args).
Python Flask how to get parameters from a URL?
In Flask, how do I extract parameters from a URL? How can I extract named parameters from a URL using Flask and Python? When the user accesses this URL running on my Flask app, I want the web service to be able to handle the parameters specified after the question mark: http://10.1.1.1:5000/login?username=alex&password=pw1 #I just want to be able to manipulate the parameters @app.route('/login', methods=['GET', 'POST']) def login(): username = request.form['username'] print(username) password = request.form['password'] print(password)
Use request.args to get the parsed contents of the query string: from flask import request @app.route(...) def login(): username = request.args.get('username') password = request.args.get('password') Note that request.form only contains POSTed form-body data; for parameters in the query string of a GET request it is empty, and indexing a missing key (request.form['username']) is what produced the 400 Bad Request you saw.
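get() also takes default and type arguments, which is handy when a parameter may be absent; the id parameter below is just for illustration:

# default avoids a KeyError when the parameter is missing;
# type coerces the raw string, falling back to default on failure
username = request.args.get('username', default='', type=str)
user_id = request.args.get('id', default=0, type=int)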
Python 3.4 and 2.7: Cannot install numpy package for python 3.4
I am using Ubuntu 12.04 and want to use python 3.4 side by side with python 2.7. The installation of python 3.4 worked properly. However, I cannot install the numpy package for python 3 (and as a consequence I can't install scipy, pandas etc.). Using sudo pip3 install numpy spits out the following error: File "numpy/core/setup.py", line 289, in check_types "Cannot compile 'Python.h'. Perhaps you need to "\ SystemError: Cannot compile 'Python.h'. Perhaps you need to install python-dev|python-devel. Btw, I already have python-dev installed. Moreover, installing numpy via sudo apt-get install python-numpy does not work either since I already installed numpy for python 2.7 and the installer responds that numpy is already up to date. What can I do? Thanks!
You have not installed the Python 3 development package. Install python3.4-dev: apt-get install python3.4-dev The main package never includes the development headers; Debian (and by extension Ubuntu) package policy is to put those into a separate -dev package. To install numpy however, you need these files to be able to compile the extension.
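Assuming the same setup as in the question, the two commands together should then get numpy built:

sudo apt-get install python3.4-dev
sudo pip3 install numpy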
Is it possible to create grouping of input cells in IPython Notebook?
When I do data analysis on IPython Notebook, I often feel the need to move up or down several adjacent input cells, for better flow of the analysis story. I'd expected that once I'd created a heading, all cells under that heading would move together if I move the heading. But this is not the case. Any way I can do this? Edit: To clarify, I can of course move cells individually, and the keyboard shortcuts are handy; but what I'm looking for is a way to group cells so that I can move (or even delete) them all together.
I use a little-known extension, which does exactly what you want (i.e. "once I'd created a heading, all cells under that heading would move together if I move the heading"). It is part of the Calico suite, but can be installed separately. More specifically, you need to install a Calico Notebook Extension named Document Tools. From the description: The Calico Document Tools extension adds section moving, heading numbering, table of contents, and bibliography support. Demonstration of use: https://www.youtube.com/watch?v=YbM8rrj-Bms I don't know whether the installation instructions given on the wiki page are updated for IPython 3.0 (February 2015), but the source code on BitBucket actually is. I cannot install IPython 3.0 for the moment, but I have gladly used this extension with IPython 2.x since last summer. It's great, perhaps less versatile than asif.m's suggestion Collective Cut-Copy-Paste for IPython Notebooks (which, by the way, has not been updated for IPython 3.0), but IMHO faster and more logical.
Python mock multiple return values
I am using Python's mock.patch and would like to change the return value for each call. Here is the caveat: the function being patched has no inputs, so I cannot change the return value based on the input. Here is my code for reference. def get_boolean_response(): response = io.prompt('y/n').lower() while response not in ('y', 'n', 'yes', 'no'): io.echo('Not a valid input. Try again') response = io.prompt('y/n').lower() return response in ('y', 'yes') My test code: @mock.patch('io') def test_get_boolean_response(self, mock_io): #setup mock_io.prompt.return_value = ['x','y'] result = operations.get_boolean_response() #test self.assertTrue(result) self.assertEqual(mock_io.prompt.call_count, 2) io.prompt is just a platform-independent (Python 2 and 3) version of "input". So ultimately I am trying to mock out the user's input. I have tried using a list for the return value, but that doesn't seem to work. You can see that if the return value is something invalid, I will just get an infinite loop here. So I need a way to eventually change the return value, so that my test actually finishes. (Another possible way to answer this question could be to explain how I could mimic user input in a unit test.) Not a dup of this question mainly because I do not have the ability to vary the inputs. One of the comments on the answer to that question is along the same lines, but no answer/comment has been provided.
You can assign an iterable to side_effect, and the mock will return the next value in the sequence each time it is called: >>> from unittest.mock import Mock >>> m = Mock() >>> m.side_effect = ['foo', 'bar', 'baz'] >>> m() 'foo' >>> m() 'bar' >>> m() 'baz' Quoting the Mock() documentation: If side_effect is an iterable then each call to the mock will return the next value from the iterable. As an aside, the test response is not 'y' or 'n' or 'yes' or 'no' will not work; you are asking if the expression (response is not 'y') is true, or 'n' is true (always the case, a non-empty string is always true), etc. The various expressions on either side of or operators are independent. See How do I test one variable against multiple values? You should also not use is to test against a string. The CPython interpreter may reuse string objects under certain circumstances, but this is not behaviour you should count on. As such, use: response not in ('y', 'n', 'yes', 'no') instead; this will use equality tests (==) to determine if response references a string with the same contents (value). The same applies to response == 'y' or 'yes'; use response in ('y', 'yes') instead.
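Applied to the test from the question, the only change needed is swapping return_value for side_effect; a sketch that reuses the question's names (operations, mock_io):

@mock.patch('io')
def test_get_boolean_response(self, mock_io):
    # first prompt answers 'x' (invalid), the retry answers 'y' (valid)
    mock_io.prompt.side_effect = ['x', 'y']

    result = operations.get_boolean_response()

    self.assertTrue(result)
    self.assertEqual(mock_io.prompt.call_count, 2)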
Check if key exists and iterate the JSON array using Python
I have a bunch of JSON data from Facebook posts like the one below: {"from": {"id": "8", "name": "Mary Pinter"}, "message": "How ARE you?", "comments": {"count": 0}, "updated_time": "2012-05-01", "created_time": "2012-05-01", "to": {"data": [{"id": "1543", "name": "Honey Pinter"}]}, "type": "status", "id": "id_7"} The JSON data is semi-structured, and not all records have the same shape. Below is my code: import json str = '{"from": {"id": "8", "name": "Mary Pinter"}, "message": "How ARE you?", "comments": {"count": 0}, "updated_time": "2012-05-01", "created_time": "2012-05-01", "to": {"data": [{"id": "1543", "name": "Honey Pinter"}]}, "type": "status", "id": "id_7"}' data = json.loads(str) post_id = data['id'] post_type = data['type'] print(post_id) print(post_type) created_time = data['created_time'] updated_time = data['updated_time'] print(created_time) print(updated_time) if data.get('application'): app_id = data['application'].get('id', 0) print(app_id) else: print('null') #if data.get('to'): #... This is the part I am not sure how to do # Since it is in the form "to": {"data":[{"id":...}]} I want the code to print the to_id as 1543 else print 'null' I am not sure how to do this. Thanks!
import json jsonData = """{"from": {"id": "8", "name": "Mary Pinter"}, "message": "How ARE you?", "comments": {"count": 0}, "updated_time": "2012-05-01", "created_time": "2012-05-01", "to": {"data": [{"id": "1543", "name": "Honey Pinter"}]}, "type": "status", "id": "id_7"}""" def getTargetIds(jsonData): data = json.loads(jsonData) if 'to' not in data: raise ValueError("No target in given data") if 'data' not in data['to']: raise ValueError("No data for target") for dest in data['to']['data']: if 'id' not in dest: continue targetId = dest['id'] print("to_id:", targetId) Output: In [9]: getTargetIds(jsonData) to_id: 1543
Is there a __repr__ equivalent for JavaScript?
The closest I've gotten to Python's repr is this: function User(name, password){ this.name = name; this.password = password; } User.prototype.toString = function(){ return this.name; }; var user = new User('example', 'password'); console.log(user.toString()) // but user.name would be even shorter Is there a way to represent an object as a string by default? Or am I going to have to just use object.variable to get the results I want?
JSON.stringify is probably the closest you are going to get from the native libraries. It doesn't handle objects containing functions or circular references well, but you could define your own code (for example, a toString or toJSON method) to work around that. I searched for libraries that provide this functionality but didn't find anything.
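For instance, with the User constructor from the question, JSON.stringify already gives a usable default representation; functions on the prototype (like toString) are simply skipped:

var user = new User('example', 'password');

JSON.stringify(user);          // '{"name":"example","password":"password"}'
JSON.stringify(user, null, 2); // same data, pretty-printed with two-space indent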
How to unpack a pkl file
I have a pkl file from the MNIST dataset, which consists of handwritten digit images. I'd like to take a look at each of those digit images, so I need to unpack the pkl file, except I can't work out how. Is there a way to unpack/unzip a pkl file?
Generally, your pkl file is, in fact, a serialized pickle file, which means it has been dumped using Python's pickle module. To un-pickle the data you can: import pickle with open('serialized.pkl', 'rb') as f: data = pickle.load(f) For the MNIST data set Note that gzip is only needed if the file is compressed: import gzip import pickle with gzip.open('mnist.pkl.gz', 'rb') as f: train_set, valid_set, test_set = pickle.load(f) Where each set can be further divided (i.e. for the training set): train_x, train_y = train_set Those would be the inputs (digits) and outputs (labels) of your sets. If you want to display the digits: import matplotlib.cm as cm import matplotlib.pyplot as plt plt.imshow(train_x[0].reshape((28, 28)), cmap=cm.Greys_r) plt.show() The other alternative would be to look at the original data: http://yann.lecun.com/exdb/mnist/ But that will be harder, as you'll need to create a program to read the binary data in those files. So I recommend you use Python, and load the data with pickle. As you've seen, it's very easy. ;-)
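One caveat worth adding: the MNIST pickle was written by Python 2, so if you are running Python 3 you will likely need to pass an explicit encoding to pickle.load, e.g.:

import gzip
import pickle

with gzip.open('mnist.pkl.gz', 'rb') as f:
    # latin1 lets Python 3 decode the Python 2 byte strings in the pickle
    train_set, valid_set, test_set = pickle.load(f, encoding='latin1')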
Django 1.7 - makemigrations not detecting changes
As the title says, I can't seem to get migrations working. The app was originally under 1.6, so I understand that migrations won't be there initially, and indeed if I run python manage.py migrate I get: Operations to perform: Synchronize unmigrated apps: myapp Apply all migrations: admin, contenttypes, auth, sessions Synchronizing apps without migrations: Creating tables... Installing custom SQL... Installing indexes... Running migrations: No migrations to apply. If I make a change to any models in myapp, it still says unmigrated, as expected. But if I run python manage.py makemigrations myapp I get: No changes detected in app 'myapp' Doesn't seem to matter what or how I run the command, it's never detecting the app as having changes, nor is it adding any migration files to the app. Is there any way to force an app onto migrations and essentially say "This is my base to work with" or anything? Or am I missing something? My database is a PostgreSQL one if that helps at all.
If you're changing over from an existing app you made in django 1.6, then you need to do one pre-step (as I found out) listed in the documentation: python manage.py makemigrations your_app_label The documentation does not make it obvious that you need to add the app label to the command, as the first thing it tells you to do is python manage.py makemigrations which will fail. The initial migration is done when you create your app in version 1.7, but if you came from 1.6 it wouldn't have been carried out. See the 'Adding migration to apps' in the documentation for more details.
matplotlib: Change grid interval and specify tick labels
I am trying to plot counts in gridded plots, but I have not been able to figure out how to go about it. I want to: (1) have dotted grids at an interval of 5 (2) have major tick labels only every 20 (3) have the ticks outside the plot (4) have "counts" inside those grids I have checked for potential duplicates, such as here and here, but I have not been able to figure it out. This is my code. import matplotlib.pyplot as plt from matplotlib.ticker import MultipleLocator, FormatStrFormatter for key, value in sorted(data.items()): x = value[0][2] y = value[0][3] count = value[0][4] fig = plt.figure() ax = fig.add_subplot(111) ax.annotate(count, xy = (x, y), size = 5) # Overwrites and I only get the last data point plt.close() # Without this, I get "fail to allocate bitmap" error plt.suptitle('Number of counts', fontsize = 12) ax.set_xlabel('x') ax.set_ylabel('y') plt.axes().set_aspect('equal') plt.axis([0, 1000, 0, 1000]) # This gives an interval of 200 majorLocator = MultipleLocator(20) majorFormatter = FormatStrFormatter('%d') minorLocator = MultipleLocator(5) # I want minor grid to be 5 and major grid to be 20 plt.grid() filename = 'C:\Users\Owl\Desktop\Plot.png' plt.savefig(filename, dpi = 150) plt.close() This is what I get. I also have the problem of the data points being overwritten (see the comment in my code)... Could anybody PLEASE help me with this problem?
There are several problems in your code. First the big ones: you are creating a new figure and a new axes in every iteration of your loop -> put fig = plt.figure() and ax = fig.add_subplot(1,1,1) outside of the loop. Don't use the Locators; call the functions ax.set_xticks() and ax.grid() with the correct keywords. With plt.axes() you are creating a new axes again; use ax.set_aspect('equal') instead. The minor things: You should not mix the MATLAB-like syntax such as plt.axis() with the object-oriented syntax; use ax.set_xlim(a,b) and ax.set_ylim(a,b). This should be a working minimal example: import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(1,1,1) # major ticks every 20, minor ticks every 5 major_ticks = np.arange(0, 101, 20) minor_ticks = np.arange(0, 101, 5) ax.set_xticks(major_ticks) ax.set_xticks(minor_ticks, minor=True) ax.set_yticks(major_ticks) ax.set_yticks(minor_ticks, minor=True) # and a corresponding grid ax.grid(which='both') # or if you want different settings for the grids: ax.grid(which='minor', alpha=0.2) ax.grid(which='major', alpha=0.5) plt.show() Output is this:
Bulk package updates using Conda [Anaconda]
I've been playing around with both Anaconda and Canopy. While I love the Spyder, object inspector, etc. integration in Anaconda, I admire the graphical package manager and the ability to perform bulk updates in Canopy. So is there a way (using conda update) that I can list outdated packages and select or bulk-update (compatible) packages in Anaconda? It doesn't make much sense updating the packages individually, as there are hundreds of them.
You want conda update --all. conda search --outdated will show outdated packages, and conda update --all will update them (note that the latter will not update you from Python 2 to Python 3, but the former will show Python as being outdated if you do use Python 2).
Sublime text3 and virtualenvs
I'm totally new to Sublime Text 3, but I couldn't find anything helpful for my problem... I have different virtualenvs (made with virtualenvwrapper) and I'd like to be able to specify which venv to use with each project. Since I'm using the SublimeREPL plugin to have custom builds, how can I specify which Python installation to build my project with? For example, when I work on project A I want to run scripts with venvA's python, and when I work on project B I want to run things with venvB (using a different build script). Sorry for my terrible English...
Hopefully this is along the lines you are imagining. I attempted to simplify my solution and remove some things you likely do not need. The advantages of this method are: Single button press to launch a SublimeREPL with correct interpreter and run a file in it if desired. After setting the interpreter, no changes or extra steps are necessary when switching between projects. Can be easily extended to automatically pick up project specific environment variables, desired working directories, run tests, open a Django shell, etc. Let me know if you have any questions, or if I totally missed the mark on what you're looking to do. Set Project's Python Interpreter Open our project file for editing: Project -> Edit Project Add a new key to the project's settings that points to the desired virtualenv: "settings": { "python_interpreter": "/home/user/.virtualenvs/example/bin/python" } A "python_interpreter" project settings key is also used by plugins like Anaconda. Create plugin to grab this setting and launch a SublimeREPL Browse to Sublime Text's Packages directory: Preferences -> Browse Packages... Create a new python file for our plugin, something like: project_venv_repls.py Copy the following python code into this new file: import sublime_plugin class ProjectVenvReplCommand(sublime_plugin.TextCommand): """ Starts a SublimeREPL, attempting to use project's specified python interpreter. """ def run(self, edit, open_file='$file'): """Called on project_venv_repl command""" cmd_list = [self.get_project_interpreter(), '-i', '-u'] if open_file: cmd_list.append(open_file) self.repl_open(cmd_list=cmd_list) def get_project_interpreter(self): """Return the project's specified python interpreter, if any""" settings = self.view.settings() return settings.get('python_interpreter', '/usr/bin/python') def repl_open(self, cmd_list): """Open a SublimeREPL using provided commands""" self.view.window().run_command( 'repl_open', { 'encoding': 'utf8', 'type': 'subprocess', 'cmd': cmd_list, 'cwd': '$file_path', 'syntax': 'Packages/Python/Python.tmLanguage' } ) Set Hotkeys Open user keybind file: Preferences -> Key Bindings - User Add a few keybinds to make use of the plugin. Some examples: // Runs currently open file in repl { "keys": ["f5"], "command": "project_venv_repl" }, // Runs repl without any file { "keys": ["f6"], "command": "project_venv_repl", "args": { "open_file": null } }, // Runs a specific file in repl, change main.py to desired file { "keys": ["f7"], "command": "project_venv_repl", "args": { "open_file": "/home/user/example/main.py" } }
Interpolation over regular grid in Python
I have been struggling to interpolate the data for "empty" pixels in my 2D matrix. Basically, I understand (but not deeply) interpolation techniques such as Inverse Distance Weighting, Kriging, Bicubic etc. I don't know the starting point exactly (either in the statement of the problem or the Python case). The problem definition: I have an MxN matrix (regular grid) in which each pixel represents a certain measurement value (figure below; the data used in this figure is here). I wanted to interpolate the data for the "question mark space" (white space which also consists of the same sized but empty pixels) areas using the existing data I have as blue pixels. My questions: 1) How can I interpolate this data? Could anyone give me a simple example (e.g. a 3x3 matrix) to understand that clearly? 2) Could anyone guide me on how to perform the steps towards a solution in a Python environment? 3) How can I compare the interpolation techniques in terms of accuracy using Python? 4) Do you think it is a good idea to use different interpolation depending on the density of the data? I will appreciate your answers and suggestions.
What is a sensible solution largely depends on what questions you're trying to answer with the interpolated pixels -- caveat emptor: extrapolating over missing data can lead to very misleading answers! Radial Basis Function Interpolation / Kernel Smoothing In terms of practical solutions available in Python, one way to fill those pixels in would be to use Scipy's implementation of Radial Basis Function interpolation (see here) which is intended for the smoothing/interpolation of scattered data. Given your matrix M and underlying 1D coordinate arrays r and c (such that M.shape == (r.size, c.size)), where missing entries of M are set to nan, this seems to work fairly well with a linear RBF kernel as follows: import numpy as np import scipy.interpolate as interpolate with open('measurement.txt') as fh: M = np.vstack(map(float, r.split(' ')) for r in fh.read().splitlines()) r = np.linspace(0, 1, M.shape[0]) c = np.linspace(0, 1, M.shape[1]) rr, cc = np.meshgrid(r, c) vals = ~np.isnan(M) f = interpolate.Rbf(rr[vals], cc[vals], M[vals], function='linear') interpolated = f(rr, cc) This yields the following interpolation of the data you've linked to above, which although reasonable looking, does highlight how unfavourable the ratio of missing samples to real data is: Gaussian Process Regression / Kriging Kriging interpolation is available via the Gaussian Process Regression implementation (which is itself based on the DACE Kriging toolbox for Matlab) in the scikit-learn library. This could be invoked as follows: from sklearn.gaussian_process import GaussianProcess gp = GaussianProcess(theta0=0.1, thetaL=.001, thetaU=1., nugget=0.01) gp.fit(X=np.column_stack([rr[vals],cc[vals]]), y=M[vals]) rr_cc_as_cols = np.column_stack([rr.flatten(), cc.flatten()]) interpolated = gp.predict(rr_cc_as_cols).reshape(M.shape) This yields a very similar interpolation to the Radial Basis Function example above. In both cases there are a lot of parameters to explore - the choice of these largely hinges on the assumptions that you can make about the data. (One advantage of the linear kernel used in the RBF example above is that it has no free parameters) Inpainting As a final aside, an entirely visually motivated solution would be use OpenCV's inpainting functionality, although this assumes 8bit arrays (0 - 255), and does not have a straightforward mathematical interpretation.
pyzmq missing when running ipython notebook
I can run iPython, but when I try to initiate a notebook I get the following error: ~ ipython notebook Traceback (most recent call last): File "/usr/local/bin/ipython", line 8, in <module> load_entry_point('ipython==2.1.0', 'console_scripts', 'ipython')() File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/__init__.py", line 120, in start_ipython return launch_new_instance(argv=argv, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 563, in launch_instance app.initialize(argv) File "<string>", line 2, in initialize File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/terminal/ipapp.py", line 321, in initialize super(TerminalIPythonApp, self).initialize(argv) File "<string>", line 2, in initialize File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/core/application.py", line 381, in initialize self.parse_command_line(argv) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/terminal/ipapp.py", line 316, in parse_command_line return super(TerminalIPythonApp, self).parse_command_line(argv) File "<string>", line 2, in parse_command_line File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 475, in parse_command_line return self.initialize_subcommand(subc, subargv) File "<string>", line 2, in initialize_subcommand File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 92, in catch_config_error return method(app, *args, **kwargs) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/config/application.py", line 406, in initialize_subcommand subapp = import_item(subapp) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/utils/importstring.py", line 42, in import_item module = __import__(package, fromlist=[obj]) File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/html/notebookapp.py", line 39, in <module> check_for_zmq('2.1.11', 'IPython.html') File "/Library/Python/2.7/site-packages/ipython-2.1.0-py2.7.egg/IPython/utils/zmqrelated.py", line 37, in check_for_zmq raise ImportError("%s requires pyzmq >= %s"%(required_by, minimum_version)) ImportError: IPython.html requires pyzmq >= 2.1.11 But as far as I can see, I already have the pyzmq package installed. ~ pip install pyzmq Requirement already satisfied (use --upgrade to upgrade): pyzmq in /Library/Python/2.7/site-packages/pyzmq-14.3.1-py2.7-macosx-10.6-intel.egg Cleaning up...
Arg. The ipython install is a little idiosyncratic. Here's what I had to do to resolve this: $ pip uninstall ipython $ pip install "ipython[all]" The issue is that notebooks have their own set of dependencies, which aren't installed with pip install ipython. However, having installed ipython, pip doesn't see the need to add anything if you then try the [all] form. As mentioned in comments for some shells (e.g. zsh) it's necessary to escape or quote the square brackets (pip install ipython\[all\] would also work).