Cython package with __init__.pyx: Possible?
Is it possible to create a Python 2.7 package using __init__.pyx (compiled to __init__.so)? If so, how? I haven't had any luck getting it to work. Here is what I have tried:

setup.py:

```python
#!/usr/bin/env python
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

foo = Extension(name='foo.__init__', sources=['foo/__init__.pyx'])
bar = Extension(name='foo.bar', sources=['foo/bar.pyx'])
setup(name='foo',
      packages=['foo'],
      cmdclass={'build_ext': build_ext},
      ext_modules=[foo, bar])
```

foo/__init__.pyx:

```cython
import foo.bar

cpdef hello_world():
    print "hello world"
    foo.bar.blah()
```

foo/bar.pyx:

```cython
cpdef blah():
    print "blah"
```

The above has the following behavior:

```
$ python -c 'import foo; foo.hello_world()'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named foo
```

I saw Python issue #15576, which was fixed by this Hg commit. Looking at the equivalent Git commit in the Git mirror of the Python Hg repository, I see that the commit is reachable from the Python v2.7.5 tag (as well as all subsequent v2.7.x versions). Was there a regression?
According to this really old mailing list post, it works if you also have an __init__.py file (the __init__.py file is not used, but seems to be necessary for the directory to be treated as a package, and hence for the __init__.so file to be loaded). If I add __init__.py:

```python
# an exception just to confirm that the .so file is loaded instead of the .py file
raise ImportError("__init__.py loaded when __init__.so should have been loaded")
```

then your example works on Linux Python 2.7.3:

```
$ python -c 'import foo; foo.hello_world()'
hello world
blah
```

This has all the signs of a buggy corner case, so it probably isn't recommended. Note that on Windows this doesn't seem to work for me, giving:

```
ImportError: DLL load failed: %1 is not a valid Win32 application.
```

Addendum (for a little extra context): this behaviour doesn't seem to be explicitly documented. The original description of packages from around the Python 1.5 era says:

> without the __init__.py, a directory is not recognized as a package

and:

> Tip: the search order is determined by the list of suffixes returned by the function imp.get_suffixes(). Usually the suffixes are searched in the following order: ".so", "module.so", ".py", ".pyc". Directories don't explicitly occur in this list, but precede all entries in it.

The observed behaviour is certainly consistent with this (__init__.py is needed to treat a directory as a package, but the .so file is loaded in preference to the .py file), but it's hardly unambiguous.

From a Cython point of view, this behaviour seems to have been used to compile the standard library (in which case __init__.py would always have been present), or in the test cases given at https://github.com/cython/cython/blob/master/tests/build/package_compilation.srctree (and a few other examples too). In these, the "srctree" file looks to be expanded into a variety of folders containing __init__.py (and other files), then compiled. It's possible that only having __init__.so was simply never tested.
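If you want to confirm the suffix search order on your own interpreter, you can inspect it directly; a small sketch (Python 2 only, since the imp module is deprecated in Python 3):

```python
import imp

# Each entry is (suffix, mode, type); C extensions (.so) precede .py sources
for suffix, mode, mtype in imp.get_suffixes():
    print("%s -> type %d" % (suffix, mtype))
```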
Plotting in a non-blocking way with Matplotlib
I have been playing with NumPy and matplotlib for the last few days. I am having problems trying to make matplotlib plot a function without blocking execution. I know there are already many threads here on SO asking similar questions, and I've googled quite a lot, but haven't managed to make this work.

I have tried using show(block=False) as some people suggest, but all I get is a frozen window. If I simply call show(), the result is plotted properly but execution is blocked until the window is closed. From other threads I've read, I suspect that whether show(block=False) works or not depends on the backend. Is this correct? My backend is Qt4Agg. Could you have a look at my code and tell me if you see something wrong? Here is my code. Thanks for any help.

```python
from math import *
from matplotlib import pyplot as plt
print plt.get_backend()

def main():
    x = range(-50, 51, 1)
    for pow in range(1, 5):   # plot x^1, x^2, ..., x^4
        y = [Xi**pow for Xi in x]
        print y
        plt.plot(x, y)
        plt.draw()
        #plt.show()             # this plots correctly, but blocks execution
        plt.show(block=False)   # this creates an empty frozen window
        _ = raw_input("Press [enter] to continue.")

if __name__ == '__main__':
    main()
```

PS: I forgot to say that I would like to update the existing window every time I plot something, instead of creating a new one.
I spent a long time looking for solutions, and found this answer. It looks like, in order to get what you (and I) want, you need the combination of plt.ion(), plt.show() (not with block=False; that's deprecated) and, most importantly, plt.pause(0.001) (or whatever time you want). I think this is because it tries to mimic optimizations in MATLAB that delay drawing any changes to the graph unless there is a set time (the pause) during which whatever is displayed will actually be viewed. It's possible that this is implemented by picking up time from a sleeping thread, so maybe IDEs mess with that; I don't know.

Here's an implementation that works for me on Python 3.5:

```python
import numpy as np
from matplotlib import pyplot as plt

def main():
    plt.axis([-50, 50, 0, 10000])
    plt.ion()
    plt.show()

    x = np.arange(-50, 51)
    for pow in range(1, 5):   # plot x^1, x^2, ..., x^4
        y = [Xi**pow for Xi in x]
        plt.plot(x, y)
        plt.draw()
        plt.pause(0.001)
        input("Press [enter] to continue.")

if __name__ == '__main__':
    main()
```
How do I debug efficiently with spyder in Python?
I like Python and I like Spyder, but I find debugging with Spyder terrible! Every time I put a breakpoint, I need to press two buttons: first the debug button and then the continue button (it pauses at the first line automatically), which is annoying. Moreover, rather than having the standard IPython console with auto-completion etc., I have a lousy ipdb>> console which is just garbage. The worst thing is that this console freezes very frequently, even if I write prints or simple evaluations to try to figure out what the bug is. This is much worse than MATLAB. Last but not least, if I call a function from within the ipdb>> console and put a breakpoint in it, it will not stop there. It seems like I have to put the breakpoint there before I start debugging (Ctrl+F5).

Do you have a solution, or can you tell me how you debug Python scripts and functions? I am using a fresh install of Anaconda on Windows 8.1 64-bit.
(Spyder dev here) We're aware the debugging experience in Spyder is far from ideal. What we offer right now is very similar to the standard Python debugger, but we're working to improve things in our next major version to provide something closer to what any scientist would expect of a debugger (in short, a regular IPython console that lets you inspect and plot variables at the current breakpoint).

Now about your points:

1. It's true. We're thinking of improving that, so that if the user presses the Run button and there is a breakpoint present in the current file, then Spyder enters debug mode and executes the program until the first breakpoint is met.
2. ipdb is the IPython debugger console. Unfortunately, due to limitations in the IPython architecture, it's very limited (no code completion and no history browsing with arrows). Furthermore, it's not possible to run arbitrary Python code in either ipdb or a regular pdb console. The commands you can run in ipdb are the ones you can read when you evaluate the help command inside it.
3. That's because, as I said, you can't evaluate arbitrary Python code.
4. You need to put new breakpoints in our Editor so that they are synced with our Python/IPython consoles.
Connection Timeout with Elasticsearch
```python
from datetime import datetime
from elasticsearch import Elasticsearch
es = Elasticsearch()

doc = {
    'author': 'kimchy',
    'text': 'Elasticsearch: cool. bonsai cool.',
    'timestamp': datetime(2010, 10, 10, 10, 10, 10)
}
res = es.index(index="test-index", doc_type='tweet', id=1, body=doc)
print(res['created'])
```

This simple code is returning the following error:

```
elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host='localhost', port=9200): Read timed out. (read timeout=10))
```

Very strange, because the server is ready and set up (http://localhost:9200/ is returning some JSON). Thanks in advance!
By default, the timeout value is set to 10 seconds. If you want to change the global timeout value, this can be achieved by setting the flag timeout=your-time while creating the object.

If you have already created the object without specifying the timeout value, then you can set the timeout value for a particular request by using the request_timeout=your-time flag in the query:

```python
es.search(index="my_index",
          doc_type="document",
          body=get_req_body(),
          request_timeout=30)
```
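For the global setting, a minimal sketch (host and port here are the defaults from the question; the timeout keyword is forwarded to the client's transport):

```python
from elasticsearch import Elasticsearch

# Every request made through this client now gets a 30-second timeout
es = Elasticsearch([{'host': 'localhost', 'port': 9200}], timeout=30)
```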
next() doesn't play nice with any/all in python
I ran down a bug today that came about because I was using next() to extract a value, and 'not found' raises StopIteration. Normally that would halt the program, but the function using next() was being called inside an all() iteration, so the all() just terminated early and returned True. Is this expected behavior? Are there style guides that help avoid this kind of thing?

Simplified example:

```python
def error():
    return next(i for i in range(3) if i == 10)

error()                         # fails with StopIteration
all(error() for i in range(2))  # returns True
```
While this is expected behaviour in existing versions of Python at the time of writing, it is scheduled to be changed over the course of the next few point releases of Python 3.x. To quote PEP 479:

> The interaction of generators and StopIteration is currently somewhat surprising, and can conceal obscure bugs. An unexpected exception should not result in subtly altered behaviour, but should cause a noisy and easily-debugged traceback. Currently, StopIteration raised accidentally inside a generator function will be interpreted as the end of the iteration by the loop construct driving the generator.

Testing your code with this patch from Issue 22906 applied to the latest Python trunk, we get the following:

```
Python 3.5.0a0 (default:651aa21433ba+, Feb  2 2015, 21:01:26)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import generator_stop
>>> def error(): return next(i for i in range(3) if i==10)
...
>>> all(error() for i in range(2))
Traceback (most recent call last):
  File "<stdin>", line 1, in <genexpr>
  File "<stdin>", line 1, in error
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: generator raised StopIteration
```

As you can see, in future versions of Python, StopIteration will no longer "bubble up" in this situation, but instead be converted to a RuntimeError. This new behaviour is scheduled to be optional with the addition of a from __future__ import generator_stop import in Python 3.5, and to become the default behaviour in Python 3.7.
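In current versions, one way to sidestep the problem is to pass a default to next() and raise a different, loud exception in the 'not found' case, so StopIteration can never escape into an enclosing all()/any(); a minimal sketch:

```python
_SENTINEL = object()

def find_first(iterable, predicate):
    """Return the first matching item, raising ValueError (not StopIteration)."""
    result = next((x for x in iterable if predicate(x)), _SENTINEL)
    if result is _SENTINEL:
        raise ValueError("no matching item found")
    return result

# Raises a visible ValueError instead of silently ending an enclosing all()
find_first(range(3), lambda i: i == 10)
```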
RemovedInDjango18Warning: Creating a ModelForm without either the 'fields' attribute or the 'exclude' attribute is deprecated
I am doing a Django project, and when I tried to access 127.0.0.1:8000/articles/create, I got the following error in my Ubuntu terminal:

```
/home/(my name)/django_test/article/forms.py:4: RemovedInDjango18Warning: Creating a ModelForm without either the 'fields' attribute or the 'exclude' attribute is deprecated - form ArticleForm needs updating
  class ArticleForm(forms.ModelForm):
```

In addition, I also got the following error when visiting my actual localhost site:

```
ValueError at /articles/create/
The view article.views.create didn't return an HttpResponse object. It returned None instead.
```

Here is my forms.py file:

```python
from django import forms
from models import Article

class ArticleForm(forms.ModelForm):
    class Meta:
        model = Article
```

And here is my views.py file:

```python
from django.shortcuts import render_to_response
from article.models import Article
from django.http import HttpResponse
from forms import ArticleForm
from django.http import HttpResponseRedirect
from django.core.context_processors import csrf
#import pdb; pdb.set_trace()

# Create your views here.
def articles(request):
    language = 'en-us'
    session_language = 'en-us'
    if 'lang' in request.COOKIES:
        language = request.COOKIES['lang']
    if 'lang' in request.session:
        session_language = request.session['lang']
    return render_to_response('articles.html',
                              {'articles': Article.objects.all(),
                               'language': language,
                               'session_language': session_language})

def article(request, article_id=1):
    return render_to_response('article.html',
                              {'article': Article.objects.get(id=article_id)})

def language(request, language='en-us'):
    response = HttpResponse("setting language to %s" % language)
    response.set_cookie('lang', language)
    response.session['lang'] = language
    return response

def create(request):
    if request.POST:
        form = ArticleForm(request.POST)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect('/articles/all')

        else:
            form = ArticleForm()

        args = {}
        args.update(csrf(request))
        args['form'] = form

        return render_to_response('create_article.html', args)
```

I'm not sure how to fix this problem. I looked at the Django documentation but I couldn't find a solution to my problem, so any help would be appreciated.
For your form, it's a warning, not an error, telling you that in Django 1.8 you will need to change your form to:

```python
from django import forms
from models import Article

class ArticleForm(forms.ModelForm):
    class Meta:
        model = Article
        fields = '__all__'  # Or a list of the fields that you want to include in your form
```

Or add an exclude to list fields to exclude instead. This wasn't required up until 1.8: https://docs.djangoproject.com/en/1.8/topics/forms/modelforms/#selecting-the-fields-to-use

As for the error with your view, your return is inside of an if statement (if request.POST:), so when the view receives a GET request, nothing is returned:

```python
def create(request):
    if request.POST:
        form = ArticleForm(request.POST)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect('/articles/all')

        else:
            form = ArticleForm()

        args = {}
        args.update(csrf(request))
        args['form'] = form

        return render_to_response('create_article.html', args)
```

Just dedent the else block (and what follows it) so that it applies to the correct if statement, as shown below.
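For reference, a sketch of the dedented view (same names as in the question):

```python
def create(request):
    if request.POST:
        form = ArticleForm(request.POST)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect('/articles/all')
    else:
        form = ArticleForm()

    # Reached on GET requests and on invalid POSTs, so a response
    # is always returned
    args = {}
    args.update(csrf(request))
    args['form'] = form

    return render_to_response('create_article.html', args)
```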
Retrieve the command line arguments of the Python interpreter
Inspired by another question here, I would like to retrieve the Python interpreter's full command line in a portable way. That is, I want to get the original argv of the interpreter, not the sys.argv, which excludes options to the interpreter itself (like -m, -O, etc.).

sys.flags tells us which boolean options were set, but it doesn't tell us about -m arguments, and the set of flags is bound to change over time, creating a maintenance burden.

On Linux you can use procfs to retrieve the original command line, but this is not portable (and it's sort of gross):

```python
open('/proc/{}/cmdline'.format(os.getpid())).read().split('\0')
```
You can use ctypes:

```
~$ python2 -B -R -u
Python 2.7.9 (default, Dec 11 2014, 04:42:00)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Persistent session history and tab completion are enabled.
>>> import ctypes
>>> argv = ctypes.POINTER(ctypes.c_char_p)()
>>> argc = ctypes.c_int()
>>> ctypes.pythonapi.Py_GetArgcArgv(ctypes.byref(argc), ctypes.byref(argv))
1227013240
>>> argc.value
4
>>> argv[0]
'python2'
>>> argv[1]
'-B'
>>> argv[2]
'-R'
>>> argv[3]
'-u'
```
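Wrapped into a helper, this might look like the sketch below. Note it assumes CPython 2; on Python 3 Py_GetArgcArgv hands back wchar_t* strings, so the pointer type would need to be c_wchar_p instead:

```python
import ctypes

def get_interpreter_argv():
    """Return the full argv that the CPython interpreter was started with."""
    argc = ctypes.c_int()
    argv = ctypes.POINTER(ctypes.c_char_p)()
    ctypes.pythonapi.Py_GetArgcArgv(ctypes.byref(argc), ctypes.byref(argv))
    return [argv[i] for i in range(argc.value)]
```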
Python curve_fit with multiple independent variables
Python's curve_fit calculates the best-fit parameters for a function with a single independent variable, but is there a way, using curve_fit or something else, to fit a function with multiple independent variables? For example:

```python
def func(x, y, a, b, c):
    return log(a) + b*log(x) + c*log(y)
```

where x and y are the independent variables and we would like to fit for a, b, and c.
You can pass curve_fit a multi-dimensional array for the independent variables, but then your func must accept the same thing. For example, calling this array X and unpacking it to x, y for clarity:

```python
import numpy as np
from scipy.optimize import curve_fit

def func(X, a, b, c):
    x, y = X
    return np.log(a) + b*np.log(x) + c*np.log(y)

# some artificially noisy data to fit
x = np.linspace(0.1, 1.1, 101)
y = np.linspace(1., 2., 101)
a, b, c = 10., 4., 6.
z = func((x, y), a, b, c) * 1 + np.random.random(101) / 100

# initial guesses for a, b, c:
p0 = 8., 2., 7.
print curve_fit(func, (x, y), z, p0)
```

Gives the fit:

```
(array([ 9.99933937,  3.99710083,  6.00875164]),
 array([[  1.75295644e-03,   9.34724308e-05,  -2.90150983e-04],
        [  9.34724308e-05,   5.09079478e-06,  -1.53939905e-05],
        [ -2.90150983e-04,  -1.53939905e-05,   4.84935731e-05]]))
```
Using (Ana)conda within PyCharm
I've got PyCharm 4 running on my Linux (Ubuntu 14.04) machine. In addition to the system Python, I've also got Anaconda installed. Getting the two to play nicely together seems to be a bit of a problem... PyCharm provides some interesting integration for virtualenvs and pip, but the Anaconda Python distribution seems to prefer using its own conda tool for both activities.

Is there a relatively simple/painless way to use conda in conjunction with PyCharm? Not just as an alternative interpreter, i.e. pointing PyCharm at the Anaconda Python binary for a project interpreter, but being able to create, source/activate and deactivate virtual envs, add/remove packages in those virtual envs, etc.

Or am I going to have to choose between using Anaconda (and having a more recent and up-to-date Python than may come with the system) and being able to use PyCharm's features to their fullest extent?
I know it's late, but I thought it would be nice to clarify things: PyCharm and conda and pip work well together.

The short answer

Just manage conda from the command line. PyCharm will automatically notice changes once they happen, just like it does with pip.

The long answer

Create a new conda environment:

```
conda create --name foo pandas bokeh
```

This environment lives under conda_root/envs/foo. Your Python interpreter is conda_root/envs/foo/bin/pythonX.X, and all your site-packages are in conda_root/envs/foo/lib/pythonX.X/site-packages. This is the same directory structure as in a pip virtual environment. PyCharm sees no difference.

Now, to activate your new environment from PyCharm, go to File > Settings > Project > Interpreter, select "Add local" in the project interpreter field (the little gear wheel) and hunt down your Python interpreter. Congratulations! You now have a conda environment with pandas and bokeh!

Now install more packages:

```
conda install scikit-learn
```

OK... go back to your interpreter in settings. Magically, PyCharm now sees scikit-learn!

And the reverse is also true: when you pip install another package in PyCharm, conda will automatically notice. Say you've installed requests. Now list the conda packages in your current environment:

```
conda list
```

The list now includes requests, and conda has correctly detected (3rd column) that it was installed with pip.

Conclusion

This is definitely good news for people like myself who are trying to get away from the pip/virtualenv installation problems when packages are not pure Python.

NB: I run PyCharm pro edition 4.5.3 on Linux. For Windows users, the command-line steps have equivalents in the GUI (and forward slashes become backslashes). There's no reason it shouldn't work for you too.

EDIT: PyCharm 5 is out with conda support! In the community edition too.
How to change default Anaconda python environment
I've installed Anaconda and created two extra environments: py3k (which holds Python 3.3) and py34 (which holds Python 3.4). Besides those, I have a default environment named 'root' which the Anaconda installer created by default and which holds Python 2.7. This last one is the default; whenever I launch 'ipython' from the terminal, it gives me version 2.7. In order to work with Python 3.4, I need to issue the commands (in the shell):

```
source activate py34
ipython
```

which change the default environment to Python 3.4. This works fine, but it's annoying, since most of the time I work in Python 3.4 instead of Python 2.7 (which I keep for teaching purposes; it's a rather long story). Anyway, I'd like to know how to change the default environment to Python 3.4, bearing in mind that I don't want to reinstall everything from scratch.
First, make sure you have the latest version of conda by running:

```
conda update conda
```

Then run:

```
conda update --all python=3.5
```

This will attempt to update all the packages in your root environment to Python 3 versions. If it is not possible (e.g., because some package is not built for Python 3), it will give you an error message indicating which package(s) caused the issue. If you installed packages with pip, you'll have to reinstall them.
pylint 1.4 reports E1101(no-member) on all C extensions
We've been long-time fans of pylint. Its static analysis has become a critical part of all our Python projects and has saved tons of time chasing obscure bugs. But after upgrading from 1.3 to 1.4, almost all compiled C extensions result in E1101 (no-member) errors.

Projects that previously ran perfectly clean through pylint 1.3 now complain about almost every C extension member with E1101. We've been forced to disable E1101 errors, but this materially detracts from the usefulness of pylint.

For example, take this perfectly valid use of the lxml package:

```python
r"""valid.py: demonstrate pylint 1.4 error"""
from lxml import etree
print etree.Element('mydoc')
```

Run this through pylint, and it reports:

```
$ pylint -rn valid.py
No config file found, using default configuration
************* Module valid
E:  3, 6: Module 'lxml.etree' has no 'Element' member (no-member)
```

But it is perfectly valid:

```
$ python valid.py
<Element mydoc at 7fddf67b1ba8>
```

Here's where it gets really weird. A very small handful of C extensions seem to work just fine through pylint, e.g.:

```python
r"""valid2.py: this one works fine"""
import sqlite3
print sqlite3.version
```

```
$ pylint -rn valid2.py
No config file found, using default configuration
```

My question is: has anyone else witnessed this? And if so, would you be willing to share your workaround/solution? We've experimented with trying to create plugins to suppress these warnings (http://docs.pylint.org/plugins.html#enter-plugin), but we're having difficulty making heads or tails of the docs, and the astroid base class is uber-complex and has defied our attempts to grok it.

For real bonus points (and our eternal gratitude) we'd love to understand what changed in pylint. We'd be happy to fix the code (or at least publish a best-practice document for C extension authors) that would satisfy pylint.

Platform details:

```
$ pylint --version
No config file found, using default configuration
pylint 1.4.0,
astroid 1.3.2, common 0.63.2
Python 2.7.5 (default, Jul  1 2013, 18:09:11)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
```
Shortly after posting my question, I found the answer. The change was in fact made on purpose as a security measure. Pylint imports modules to effectively identify valid methods and attributes. It was decided that importing C extensions that are not part of the Python stdlib is a security risk and could introduce malicious code.

This was done in the release of Astroid 1.3.1 (https://mail.python.org/pipermail/code-quality/2014-November/000394.html):

> Only C extensions from trusted sources (the standard library) are loaded into the examining Python process to build an AST from the live module.

There are four solutions if you want to use pylint on projects that import non-stdlib C extensions:

1. Disable safety using the --unsafe-load-any-extension=y command line option. This feature is undocumented and classified as a hidden option (https://mail.python.org/pipermail/code-quality/2014-November/000439.html).
2. Disable safety using the pylint.rc setting unsafe-load-any-extensions=yes. This is recommended over option 1 and includes full documentation in the default pylint.rc file (created with --generate-rcfile).
3. Specifically list the packages or module names that you trust to be loaded by pylint in the pylint.rc file, using the extension-pkg-whitelist= option.
4. Create a plugin to manipulate the AST (I have no idea how to effect this, but it's regularly discussed on the pylint mailing list).

We opted for option 3. We added the following line to our project pylint.rc file:

```
extension-pkg-whitelist=lxml
```
Django manage.py Unknown command: 'syncdb'
I'm trying to follow this tutorial, but I'm stuck on the 5th step. When I execute

```
[~/Django Projects/netmag$] python manage.py syncdb
```

I get the following error message:

```
Unknown command: 'syncdb'
Type 'manage.py help' for usage.
```

and the output of ./manage.py help does not contain a syncdb command. How do I add it? Thanks for any help!

Edit: When I run migrate, I get this error:

```
"Error creating new content types. Please make sure contenttypes "
RuntimeError: Error creating new content types. Please make sure contenttypes is migrated before trying to migrate apps individually.
```

In settings.py:

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admindocs',
    'blog',
]
```

Edit 2: If I remove 'blog' from settings.py:

```
:~/Django Projects/netmag$ python manage.py migrate blog
CommandError: App 'blog' does not have migrations.
:~/Django Projects/netmag$ python manage.py makemigrations blog
App 'blog' could not be found. Is it in INSTALLED_APPS?
```
The syncdb command is deprecated in Django 1.7. Use python manage.py migrate instead.
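A typical sequence on Django 1.7+ would be to generate migrations from your model changes and then apply them (using the app name from the question):

```
# Create migration files for the app, then apply everything
python manage.py makemigrations blog
python manage.py migrate
```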
Basics of setting up a Spyder workspace and projects
I have searched for a basic tutorial regarding workspaces and projects in the Spyder IDE. What I want to understand is the basic concepts of how to use the workspace and projects to organize my code. It seems that this is perhaps basic programming skill, and that is the reason why I have issues finding any kind of overview. This page seems to be related, but is actually about Eclipse and rather sparse. The Python(x,y) tutorial and the documentation for Spyder do not go into any detail. Neither does the Anaconda documentation.

The questions I have are:

- When should I set up a new workspace (if ever)?
- When do I create a new project?
- How does the PYTHONPATH depend on my workspace and project settings? Is it the same in all cases, or can I customize it per workspace/project?
- Are there other settings apart from the PYTHONPATH that I should configure?
- How specific are the answers above to Spyder? Would it be the same for other IDEs, like Eclipse?

I am running Spyder on 64-bit Windows 7, as part of the Anaconda package.
In my experience, setting up a workspace in Spyder is not always necessary.

A workspace is a space on your computer where you create and save all the files you work in. Workspaces usually help in managing your project files. Once you create a workspace in Spyder, a pane called "Project Explorer" opens up inside Spyder. There you see the files of your project in real time. For instance, if you generate a file with Python, it will show in that pane. The pane lets you keep the files organized, filter them, etc.

This can be useful for web development, for example, because it helps you keep your content organized. I use Python to handle files (e.g. csv) and work with data (data analysis), and I find no use for the workspace feature. Moreover, if you delete a file in the Project Explorer pane, the file cannot be found in the Windows recycle bin.
How to combine python asyncio with threads?
I have successfully built a RESTful microservice with Python asyncio and aiohttp that listens to a POST event to collect realtime events from various feeders. It then builds an in-memory structure to cache the last 24h of events in a nested defaultdict/deque structure.

Now I would like to periodically checkpoint that structure to disk, preferably using pickle. Since the memory structure can be >100MB, I would like to avoid holding up my incoming event processing for the time it takes to checkpoint the structure. I'd rather create a snapshot copy (e.g. deepcopy) of the structure, and then take my time to write it to disk, repeating on a preset time interval.

I have been searching for examples of how to combine threads (and is a thread even the best solution for this?) and asyncio for that purpose, but could not find something that would help me. Any pointers to get started are much appreciated!
It's pretty simple to delegate a method to a thread or sub-process using BaseEventLoop.run_in_executor:

```python
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_operation(x):
    time.sleep(x)  # This is some operation that is CPU-bound

@asyncio.coroutine
def main():
    # Run cpu_bound_operation in the ProcessPoolExecutor.
    # This will make your coroutine block, but won't block
    # the event loop; other coroutines can run in the meantime.
    yield from loop.run_in_executor(p, cpu_bound_operation, 5)

loop = asyncio.get_event_loop()
p = ProcessPoolExecutor(2)  # Create a ProcessPool with 2 processes
loop.run_until_complete(main())
```

As for whether to use a ProcessPoolExecutor or ThreadPoolExecutor, that's kind of hard to say; pickling a large object will definitely eat some CPU cycles, which initially would make you think ProcessPoolExecutor is the way to go. However, passing your 100MB object to a Process in the pool would require pickling the instance in your main process, sending the bytes to the child process via IPC, unpickling it in the child, and then pickling it again so you can write it to disk. Given that, my guess is the pickling/unpickling overhead will be large enough that you're better off using a ThreadPoolExecutor, even though you're going to take a performance hit because of the GIL. That said, it's very simple to test both ways and find out for sure, so you might as well do that.
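Applied to the checkpointing use case from the question, a sketch with a ThreadPoolExecutor might look like this. The cache object, interval and filename are placeholders; note the deepcopy happens on the event-loop thread, so the snapshot is internally consistent, and only the pickling and file I/O run in the worker thread:

```python
import asyncio
import copy
import pickle
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(1)

def write_snapshot(snapshot, path):
    # Runs in the worker thread; pickling + disk I/O happen off the loop
    with open(path, 'wb') as f:
        pickle.dump(snapshot, f, pickle.HIGHEST_PROTOCOL)

@asyncio.coroutine
def checkpoint_periodically(cache, interval=60):
    loop = asyncio.get_event_loop()
    while True:
        yield from asyncio.sleep(interval)
        snapshot = copy.deepcopy(cache)  # taken on the loop thread
        yield from loop.run_in_executor(executor, write_snapshot,
                                        snapshot, 'cache.pickle')
```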
How to check which versions of nltk and scikit-learn are installed?
In a shell script I am checking whether these packages are installed or not; if not installed, then install them. So within the shell script:

```
import nltk
echo nltk.__version__
```

but it stops the shell script at the import line in the Linux terminal.

I tried to check it in this manner:

```
which nltk
```

which gives nothing, though it is installed.

Is there any other way to verify this package installation in a shell script and, if a package is not installed, also install it?
import nltk is Python syntax, and as such won't work in a shell script.

To test the versions of nltk and scikit-learn, you can write a Python script and run it. Such a script may look like:

```python
import nltk
import sklearn

print('The nltk version is {}.'.format(nltk.__version__))
print('The scikit-learn version is {}.'.format(sklearn.__version__))

# The nltk version is 3.0.0.
# The scikit-learn version is 0.15.2.
```

Note that not all Python packages are guaranteed to have a __version__ attribute, so for some others it may fail, but for nltk and scikit-learn at least it will work.
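If you'd rather stay inside the shell script, one common pattern is to call Python inline; a sketch (the python -c invocation exits non-zero when the import fails, so the script can install the package in response):

```
python -c "import nltk; print(nltk.__version__)" || pip install nltk
```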
Problems using psycopg2 on Mac OS (Yosemite)
Currently I am installing psycopg2 for work within Eclipse with Python. I am finding a lot of problems:

The first problem: sudo pip3.4 install psycopg2 is not working; it shows the following message:

```
Error: pg_config executable not found.
```

FIXED WITH:

```
export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH"
```

When I import psycopg2 in my project, I obtain:

```
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so
Library libssl.1.0.0.dylib
Library libcrypto.1.0.0.dylib
```

FIXED WITH:

```
sudo ln -s /Library/PostgreSQL/9.4/lib/libssl.1.0.0.dylib /usr/lib
sudo ln -s /Library/PostgreSQL/9.4/lib/libcrypto.1.0.0.dylib /usr/lib
```

Now I am obtaining:

```
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _lo_lseek64
  Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so
  Expected in: /usr/lib/libpq.5.dylib
 in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so
```

After 3 hours I still haven't found the solution. Can you help me? Thank you so much! Regards, Benja.
You need to replace the /usr/lib/libpq.5.dylib library because its version is too old. Here's my solution to this problem:

```
$ sudo mv /usr/lib/libpq.5.dylib /usr/lib/libpq.5.dylib.old
$ sudo ln -s /Library/PostgreSQL/9.4/lib/libpq.5.dylib /usr/lib
```
Django MySQL error when creating tables
I am building a Django app with a MySQL DB. When I run 'python manage.py migrate' for the first time, some tables are created fine, then some errors appear. The error brought up is:

```
django.db.utils.IntegrityError: (1215, 'Cannot add foreign key constraint')
```

When I run the MySQL command SHOW ENGINE INNODB STATUS\G, I get this:

```
2015-02-17 14:33:17 7f10891cf700 Error in foreign key constraint of table movie_store/#sql-4f1_66:
FOREIGN KEY (`group_id`) REFERENCES `auth_group` (`id`):
Cannot resolve table name close to:
(`id`)
```

The complete traceback is:

```
Creating tables...
    Creating table users
    Creating table merchant
    Creating table celery_taskmeta
    Creating table celery_tasksetmeta
    Creating table djcelery_intervalschedule
    Creating table djcelery_crontabschedule
    Creating table djcelery_periodictasks
    Creating table djcelery_periodictask
    Creating table djcelery_workerstate
    Creating table djcelery_taskstate
    Creating table post_office_email
    Creating table post_office_log
    Creating table post_office_emailtemplate
    Creating table post_office_attachment
    Running deferred SQL...
Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 330, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 390, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 441, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/migrate.py", line 173, in handle
    created_models = self.sync_apps(connection, executor.loader.unmigrated_apps)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/migrate.py", line 309, in sync_apps
    cursor.execute(statement)
  File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 80, in execute
    return super(CursorDebugWrapper, self).execute(sql, params)
  File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 65, in execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 95, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 63, in execute
    return self.cursor.execute(sql)
  File "/usr/local/lib/python2.7/dist-packages/django/db/backends/mysql/base.py", line 124, in execute
    return self.cursor.execute(query, args)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
django.db.utils.IntegrityError: (1215, 'Cannot add foreign key constraint')
```
This works:

```
python manage.py migrate auth
python manage.py migrate
```

The issue arises because other migrations run before auth's, so this makes sure auth's migrations run first.
Make a numpy array monotonic without a Python loop
I have a 1D array of values which is supposed to be monotonic (let's say decreasing), but there are random regions where the value increases with index. I need an array where each such region is replaced with the value directly preceding it, so that the resulting array is sorted.

So if the given array is:

```python
a = np.array([10.0, 9.5, 8.0, 7.2, 7.8, 8.0, 7.0, 5.0, 3.0, 2.5, 3.0, 2.0])
```

I want the result to be:

```python
b = np.array([10.0, 9.5, 8.0, 7.2, 7.2, 7.2, 7.0, 5.0, 3.0, 2.5, 2.5, 2.0])
```

I know how to achieve it with a Python loop, but is there a way to do this with NumPy machinery? Python code for clarity:

```python
b = np.array(a)
for i in range(1, b.size):
    if b[i] > b[i-1]:
        b[i] = b[i-1]
```
You can use np.minimum.accumulate to collect the minimum values as you move through the array:

```python
>>> np.minimum.accumulate(a)
array([ 10. ,   9.5,   8. ,   7.2,   7.2,   7.2,   7. ,   5. ,   3. ,
         2.5,   2.5,   2. ])
```

At each element in the array, this function returns the minimum value seen so far. If you wanted an array to be monotonic increasing, you could use np.maximum.accumulate.

Many other universal functions in NumPy have an accumulate method to simulate looping through an array, applying the function to each element and collecting the returned values into an array of the same size.
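For the increasing case, the same idea applies; a quick sketch with made-up data:

```python
import numpy as np

a = np.array([1.0, 2.0, 1.5, 3.0, 2.0])
np.maximum.accumulate(a)  # array([1., 2., 2., 3., 3.])
```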
filename.whl is not supported wheel on this platform
I would like to install scipy-0.15.1-cp33-none-win_amd64.whl, which I have saved to my local drive. I am using:

```
pip 6.0.8 from C:\Python27\Lib\site-packages
python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit (AMD64)]
```

When I run:

```
pip install scipy-0.15.1-cp33-none-win_amd64.whl
```

I get the following error:

```
scipy-0.15.1-cp33-none-win_amd64.whl is not a supported wheel on this platform
```

I would like to know where the problem is.
cp33 means CPython 3.3. Since your pip runs under CPython 2.7, you need scipy-0.15.1-cp27-none-win_amd64.whl instead.
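If you want to see which wheel tags your interpreter accepts, pip versions of that era (the 6.x in the question) exposed them through an internal module; a sketch, with the caveat that this is an internal API that later pip releases moved or removed:

```python
from pip import pep425tags

# Each tuple is (python tag, abi tag, platform tag),
# e.g. ('cp27', 'none', 'win_amd64')
print(pep425tags.get_supported())
```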
Why use Python's os module methods instead of executing shell commands directly?
I am trying to understand the motivation behind using Python's library functions for executing OS-specific tasks such as creating files/directories, changing file attributes, etc., instead of just executing those commands via os.system() or subprocess.call().

For example, why would I want to use os.chmod instead of doing os.system("chmod...")?

I understand that it is more "pythonic" to use Python's available library methods as much as possible instead of just executing shell commands directly. But is there any other motivation behind doing this, from a functionality point of view?

I am only talking about executing simple one-line shell commands here. When we need more control over the execution of a task, I understand that using the subprocess module makes more sense, for example.
1. It's faster. os.system and subprocess.call create new processes, which is unnecessary for something this simple. In fact, os.system and subprocess.call with the shell argument usually create at least two new processes: the first one being the shell, and the second one being the command that you're running (if it's not a shell built-in like test).

2. Some commands are useless in a separate process. For example, if you run os.spawn("cd dir/"), it will change the current working directory of the child process, but not of the Python process. You need to use os.chdir for that.

3. You don't have to worry about special characters interpreted by the shell. os.chmod(path, mode) will work no matter what the filename is, whereas os.spawn("chmod 777 " + path) will fail horribly if the filename is something like ; rm -rf ~. (Note that you can work around this if you use subprocess.call without the shell argument.)

4. You don't have to worry about filenames that begin with a dash. os.chmod("--quiet", mode) will change the permissions of the file named --quiet, but os.spawn("chmod 777 --quiet") will fail, as --quiet is interpreted as an argument. This is true even for subprocess.call(["chmod", "777", "--quiet"]).

5. You have fewer cross-platform and cross-shell concerns, as Python's standard library is supposed to deal with that for you. Does your system have a chmod command? Is it installed? Does it support the parameters that you expect it to support? The os module will try to be as cross-platform as possible and documents when that's not possible.

6. If the command you're running has output that you care about, you need to parse it, which is trickier than it sounds, as you may forget about corner cases (filenames with spaces, tabs and newlines in them), even when you don't care about portability.
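A small sketch contrasting the approaches (the filename is a deliberately hostile but legal example):

```python
import os
import stat
import subprocess

path = "my file; rm -rf ~"  # hostile-looking, but a perfectly legal filename

# Safe and in-process: the filename is never interpreted by a shell
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600

# Unsafe: the shell parses the string, so the embedded command would run
# subprocess.call("chmod 600 " + path, shell=True)   # don't do this

# Safer subprocess form: argument list, no shell; still spawns a process.
# The "--" guards against dash-prefixed filenames on GNU chmod.
subprocess.call(["chmod", "600", "--", path])
```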
AttributeError: 'Context' object has no attribute 'wrap_socket'
I am trying to set up a Flask server that uses an OpenSSL context. However, since I moved the script to a different server, it keeps throwing the following error, no matter whether I am using Python 2.7 or 3.4 and no matter which SSL method I choose (SSLv23/TLSv1/...):

```
File "/usr/lib/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.4/dist-packages/werkzeug/serving.py", line 602, in inner
    passthrough_errors, ssl_context).serve_forever()
File "/usr/local/lib/python3.4/dist-packages/werkzeug/serving.py", line 506, in make_server
    passthrough_errors, ssl_context)
File "/usr/local/lib/python3.4/dist-packages/werkzeug/serving.py", line 450, in __init__
    self.socket = ssl_context.wrap_socket(self.socket,
AttributeError: 'Context' object has no attribute 'wrap_socket'
```

The corresponding code is below:

```python
if __name__ == "__main__":
    context = SSL.Context(SSL.SSLv23_METHOD)
    context.use_privatekey_file('key.key')
    context.use_certificate_file('cert.crt')
    app.run(host='0.0.0.0', port=80, ssl_context=context, threaded=True, debug=True)
```

Thank you very much in advance! I am happy for any help.
As of 0.10, Werkzeug doesn't support OpenSSL contexts anymore. This decision was made because it is easier to support ssl.SSLContext across Python versions. Your option to rewrite this code is this one:

```python
if __name__ == "__main__":
    context = ('cert.crt', 'key.key')
    app.run(host='0.0.0.0', port=80, ssl_context=context, threaded=True, debug=True)
```

See http://werkzeug.pocoo.org/docs/0.10/serving/ for all possibilities.
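If you need more control than the (cert, key) tuple offers, Werkzeug 0.10 also accepts a standard-library ssl.SSLContext; a sketch (the protocol constant is an assumption, pick whatever your deployment requires):

```python
import ssl

if __name__ == "__main__":
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    context.load_cert_chain('cert.crt', 'key.key')
    app.run(host='0.0.0.0', port=80, ssl_context=context, threaded=True, debug=True)
```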
'str' object has no attribute 'decode'. Python 3 error?
Here is my code:

```python
import imaplib
from email.parser import HeaderParser

conn = imaplib.IMAP4_SSL('imap.gmail.com')
conn.login('example@gmail.com', 'password')
conn.select()
conn.search(None, 'ALL')
data = conn.fetch('1', '(BODY[HEADER])')
header_data = data[1][0][1].decode('utf-8')
```

At this point I get the error message:

```
AttributeError: 'str' object has no attribute 'decode'
```

Python 3 doesn't have decode anymore, am I right? How can I fix this?

Also, in:

```python
data = conn.fetch('1', '(BODY[HEADER])')
```

I am selecting only the 1st email. How do I select all?

If I delete the .decode('utf-8') like some of you are suggesting, I get the error message:

```
TypeError: initial_value must be str or none, not bytes
```
You are trying to decode an object that is already decoded. You have a str; there is no need to decode from UTF-8 anymore.

Simply drop the .decode('utf-8') part:

```python
header_data = data[1][0][1]
```

As for your fetch() call, you are explicitly asking for just the first message. Use a range if you want to retrieve more messages. See the documentation:

> The message_set options to commands below is a string specifying one or more messages to be acted upon. It may be a simple message number ('1'), a range of message numbers ('2:4'), or a group of non-contiguous ranges separated by commas ('1:3,6:9'). A range can contain an asterisk to indicate an infinite upper bound ('3:*').
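Putting that together, a sketch that walks every message returned by the search instead of a hard-coded '1' (following the pattern in the imaplib docs):

```python
typ, data = conn.search(None, 'ALL')
for num in data[0].split():
    typ, msg_data = conn.fetch(num, '(BODY[HEADER])')
    header_bytes = msg_data[0][1]  # bytes in Python 3; only bytes need decoding
    print(header_bytes.decode('utf-8', errors='replace'))
```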
How to pythonically have partially-mutually exclusive optional arguments?
As a simple example, take a class Ellipse that can return its properties such as area A, circumference C, major/minor axis a/b, eccentricity e, etc. In order to get those, one obviously has to provide precisely two of its parameters to obtain all the other ones, though as a special case providing only one parameter should assume a circle. Three or more parameters that are consistent should yield a warning but work; otherwise, obviously, raise an exception.

So some examples of valid Ellipses are:

```python
Ellipse(a=5, b=2)
Ellipse(A=3)
Ellipse(a=3, e=.1)
Ellipse(a=3, b=3, A=9*math.pi)  # note the consistency
```

while invalid ones would be:

```python
Ellipse()
Ellipse(a=3, b=3, A=7)
```

The constructor would therefore either contain many =None arguments,

```python
class Ellipse(object):
    def __init__(self, a=None, b=None, A=None, C=None, ...):
```

or, probably more sensibly, a simple **kwargs, maybe adding the option to provide a, b as positional arguments:

```python
class Ellipse(object):
    def __init__(self, a=None, b=None, **kwargs):
        kwargs.update({key: value
                       for key, value in (('a', a), ('b', b))
                       if value is not None})
```

So far, so good. But now comes the actual implementation, i.e. figuring out which parameters were provided and which were not, and determining all the others depending on them, or checking for consistency if required.

My first approach would be a simple yet tedious combination of many ifs:

```python
if 'a' in kwargs:
    a = kwargs['a']
    if 'b' in kwargs:
        b = kwargs['b']
        A = kwargs['A'] = math.pi * a * b
        f = kwargs['f'] = math.sqrt(a**2 - b**2)
        ...
    elif 'f' in kwargs:
        f = kwargs['f']
        b = kwargs['b'] = math.sqrt(a**2 + f**2)
        A = kwargs['A'] = math.pi * a * b
        ...
elif ...
```

and so on*. But is there no better way? Or is this class design totally bollocks, and I should create constructors such as Ellipse.create_from_a_b(a, b), despite that basically making the "provide three or more consistent parameters" option impossible?

Bonus question: since the ellipse's circumference involves elliptic integrals (or elliptic functions if the circumference is provided and the other parameters are to be obtained), which are not exactly computationally trivial, should those calculations actually be in the constructor, or rather be put into the @property Ellipse.C?

* I guess at least one readability improvement would be always extracting a and b and calculating the rest from them, but that means recalculating the values already provided, wasting both time and precision...
My proposal is focused on data encapsulation and code readability.

a) Pick a pair of unambiguous measurements to represent the ellipse internally:

```python
class Ellipse(object):
    def __init__(self, a, b):
        self.a = a
        self.b = b
```

b) Create a family of properties to get the desired metrics about the ellipse:

```python
class Ellipse(object):
    @property
    def area(self):
        return math.pi * self.a * self.b
```

c) Create a factory class / factory methods with unambiguous names:

```python
class Ellipse(object):
    @classmethod
    def fromAreaAndCircumference(cls, area, circumference):
        # convert area and circumference to a and b
        return cls(a, b)
```

Sample usage:

```python
ellipse = Ellipse.fromLongAxisAndEccentricity(axis, eccentricity)
assert ellipse.a == axis
assert ellipse.eccentricity == eccentricity
```
What is the -H flag for pip?
When using sudo pip install ... with pip version 6.0.4 or greater, I get warnings like:

```
The directory '/home/drevicko/.cache/pip/log' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want the -H flag.
```

This warning appears to have been added in 6.0.4, but the -H flag doesn't appear in the pip install docs nor in the docs on pip's general options. So, what is the -H flag, and why do I need it when using sudo pip install ...?
The -H flag is actually for the sudo command, not for pip. As taken from the docs:

> The -H (HOME) option requests that the security policy set the HOME environment variable to the home directory of the target user (root by default) as specified by the password database. Depending on the policy, this may be the default behavior.

A look at this question might provide more insight into what could be happening.
MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe"
I have been trying to install Node.js for a long time now. I have searched on Google, but seriously have not found any working solutions.

My first question is: why does Node.js require a Microsoft Visual C++ component? Secondly, as per suggestions found on Google, I tried the following things:

1. Installed Visual C++ 2010 (and updated the path variable), but after installing I got many more errors, including "MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe"".
2. Went through https://github.com/TooTallNate/node-gyp to get the errors removed, but it is still not working.
3. Uninstalled and installed Node.js again, but with no success.

I have the following versions:

- Node.js 0.12
- Python 2.7
- Ruby 1.9.3
- Windows 7 64-bit

When I run npm install, the error appears as below:

```
MSBUILD : error MSB3428: Could not load the Visual C++ component "VCBuild.exe". To fix this, 1) install the .NET Framework 2.0 SDK, 2) install Microsoft Visual Studio 2005 or 3) add the location of the component to the system path if it is installed elsewhere.
```

My package.json is as below:

```json
{
  "name": "TRest",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.2",
    "grunt-contrib-watch": "~0.5.3",
    "grunt-sass": "~0.11.0",
    "grunt-pixrem": "^0.1.2",
    "grunt-legacssy": "^0.2.0",
    "grunt-contrib-concat": "~0.3.0",
    "grunt-contrib-uglify": "~0.3.2",
    "node-bourbon": "^1.0.0"
  }
}
```
You can tell npm to use Visual Studio 2010 by doing this:

```
npm install socket.io --msvs_version=2010
```

Replace socket.io with the package that is giving the issue. It is also possible to set this globally for npm:

```
npm config set msvs_version 2010 --global
```
Will a UNICODE string just containing ASCII characters always be equal to the ASCII string?
I noticed that the following holds:

```python
>>> u'abc' == 'abc'
True
>>> 'abc' == u'abc'
True
```

Will this always be true, or could it possibly depend on the system locale? (It seems strings are Unicode in Python 3, e.g. this question, but bytes in 2.x.)
Python 2 coerces between unicode and str using the ASCII codec when comparing the two types. So yes, this is always true.

That is to say, unless you mess up your Python installation and use sys.setdefaultencoding() to change that default. You cannot do that normally, because the sys.setdefaultencoding() function is deleted from the module at start-up time, but there is a cargo cult going around where people use reload(sys) to reinstate that function and change the default encoding to something else, to try and fix implicit encoding and decoding problems. This is a dumb thing to do for precisely this reason.
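Note that the coercion only succeeds for ASCII data; once non-ASCII bytes are involved, Python 2 cannot coerce, warns, and treats the values as unequal. A quick interactive sketch:

```python
# Python 2 session
>>> u'abc' == 'abc'        # pure ASCII: coerced via the ASCII codec
True
>>> u'\xe9' == '\xc3\xa9'  # u'é' compared to its UTF-8 byte encoding
__main__:1: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
False
```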
Dangers of sys.setdefaultencoding('utf-8')
There is a trend of discouraging setting sys.setdefaultencoding('utf-8') in Python 2. Can anybody list real examples of problems with that? Arguments like "it is harmful" or "it hides bugs" don't sound very convincing.

UPDATE: Please note that this question is only about utf-8; it is not about changing the default encoding "in the general case". Please give some examples with code if you can.
Because you don't always want to have your strings automatically decoded to Unicode, or for that matter your Unicode objects automatically encoded to bytes. Since you are asking for a concrete example, here is one:

Take a WSGI web application; you are building a response by adding the product of an external process to a list, in a loop, and that external process gives you UTF-8 encoded bytes:

```python
results = []
content_length = 0

for somevar in some_iterable:
    output = some_process_that_produces_utf8(somevar)
    content_length += len(output)
    results.append(output)

headers = {
    'Content-Length': str(content_length),
    'Content-Type': 'text/html; charset=utf8',
}
start_response(200, headers)
return results
```

That's great and fine and works. But then your co-worker comes along and adds a new feature; you are now providing labels too, and these are localised:

```python
results = []
content_length = 0

for somevar in some_iterable:
    label = translations.get_label(somevar)
    output = some_process_that_produces_utf8(somevar)
    content_length += len(label) + len(output) + 1
    results.append(label + '\n')
    results.append(output)

headers = {
    'Content-Length': str(content_length),
    'Content-Type': 'text/html; charset=utf8',
}
start_response(200, headers)
return results
```

You tested this in English and everything still works, great!

However, the translations.get_label() library actually returns Unicode values, and when you switch locale, the labels contain non-ASCII characters.

The WSGI library writes out those results to the socket, and all the Unicode values get auto-encoded for you, since you set setdefaultencoding() to UTF-8, but the length you calculated is entirely wrong. It'll be too short, as UTF-8 encodes everything outside of the ASCII range with more than one byte.

All this is ignoring the possibility that you are actually working with data in a different codec; you could be writing out Latin-1 + Unicode, and now you have an incorrect length header and a mix of data encodings.

Had you not used sys.setdefaultencoding(), an exception would have been raised and you'd have known you had a bug; but now your clients are complaining about incomplete responses; there are bytes missing at the end of the page and you don't quite know how that happened.

Note that this scenario doesn't even involve 3rd party libraries that may or may not depend on the default still being ASCII. The sys.setdefaultencoding() setting is global, applying to all code running in the interpreter. How sure are you there are no issues in those libraries involving implicit encoding or decoding?

That Python 2 encodes and decodes between str and unicode types implicitly can be helpful and safe when you are dealing with ASCII data only. But you really need to know when you are mixing Unicode and byte string data accidentally, rather than plastering over it with a global brush and hoping for the best.
How to count the occurrence of certain item in an ndarray in Python?
In Python, I have an ndarray y that is printed as:

```python
array([0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1])
```

I'm trying to count how many 0s and how many 1s there are in this array. But when I type y.count(0) or y.count(1), it says:

```
'numpy.ndarray' object has no attribute 'count'
```

What should I do?
Use collections.Counter:

```python
>>> import collections, numpy
>>> a = numpy.array([0, 3, 0, 1, 0, 1, 2, 1, 0, 0, 0, 0, 1, 3, 4])
>>> collections.Counter(a)
Counter({0: 7, 1: 4, 3: 2, 2: 1, 4: 1})
```

Thanks to @ali_m and @shredding, here is how you can do it using numpy:

```python
>>> unique, counts = numpy.unique(a, return_counts=True)
>>> dict(zip(unique, counts))
{0: 7, 1: 4, 2: 1, 3: 2, 4: 1}
```
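If you only need the count for a single value, a boolean comparison also works; a quick sketch using the same array:

```python
>>> numpy.count_nonzero(a == 0)  # how many zeros
7
>>> (a == 1).sum()               # how many ones
4
```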
Can't install pip packages inside a docker container with Ubuntu
I'm following the fig guide to using Docker with a Python application, but when Docker gets to the command

```
RUN pip install -r requirements.txt
```

I get the following error message:

```
Step 3 : RUN pip install -r requirements.txt
 ---> Running in fe0b84217ad1
Collecting blinker==1.3 (from -r requirements.txt (line 1))
  Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', gaierror(-2, 'Name or service not known'))': /simple/blinker/
```

This repeats several times, and then I get another message:

```
Could not find any downloads that satisfy the requirement blinker==1.3 (from -r requirements.txt (line 1))
No distributions at all found for blinker==1.3 (from -r requirements.txt (line 1))
```

So for some reason pip can't access any packages from inside a Docker container. Is there anything I need to do to allow it internet access?

However, pip works fine to install things outside of the Docker container, and it worked fine even with that exact package (blinker==1.3), so that's not the problem. Also, this problem isn't specific to that package; I get the same issue with any pip install command for any package.

Does anyone have any idea what's going on here?
Your problem comes from the fact that Docker is not using the proper DNS server. You can fix it in three different ways:

1. Adding Google DNS to your local config

Modify /etc/resolv.conf and add the following lines at the end:

```
# Google IPv4 nameservers
nameserver 8.8.8.8
nameserver 8.8.4.4
```

If you want to add other DNS servers, have a look here. However, this change won't be permanent (see this thread). To make it permanent:

```
$ sudo nano /etc/dhcp/dhclient.conf
```

Uncomment and edit the line with prepend domain-name-servers:

```
prepend domain-name-servers 8.8.8.8, 8.8.4.4;
```

Restart dhclient:

```
$ sudo dhclient
```

2. Modifying the Docker config

As explained in the docs:

> Systems that run Ubuntu or an Ubuntu derivative on the desktop typically use 127.0.0.1 as the default nameserver in the /etc/resolv.conf file.

To specify a DNS server for use by Docker:

1. Log into Ubuntu as a user with sudo privileges.
2. Open the /etc/default/docker file for editing:

```
$ sudo nano /etc/default/docker
```

3. Add the following setting for Docker:

```
DOCKER_OPTS="--dns 8.8.8.8"
```

4. Save and close the file.
5. Restart the Docker daemon:

```
$ sudo restart docker
```

3. Using a parameter when you run Docker

When you run docker, simply add the following parameter:

```
--dns 8.8.8.8
```
Calculate new value based on decreasing value
Problem: What I'd like to do is reduce, step by step, each value in a Series by a continuously decreasing base figure. I'm not sure of the terminology for this - I did think I could do something with cumsum and diff but I think I'm leading myself on a wild goose chase there... Starting code: import pandas as pd ALLOWANCE = 100 values = pd.Series([85, 10, 25, 30]) Desired output: desired = pd.Series([0, 0, 20, 30]) Rationale: Starting with a base of ALLOWANCE - each value in the Series is reduced by the amount remaining, as is the allowance itself, so the following steps occur: Start with 100, we can completely remove 85 so it becomes 0, we now have 15 left as ALLOWANCE The next value is 10 and we still have 15 available, so this becomes 0 again and we have 5 left. The next value is 25 - we only have 5 left, so this becomes 20 and now we have no further allowance. The next value is 30, and since there's no allowance, the value remains as 30.
Your idea with cumsum and diff works. It doesn't look too complicated; I'm not sure if there's an even shorter solution. First, we compute the cumulative sum, operate on that, and then go back (diff is more or less the inverse function of cumsum). c = values.cumsum() - ALLOWANCE # now we've got [-15, -5, 20, 50] c[c < 0] = 0 # negative values don't make sense here # it is important that we don't fill with 0, in case the first # value is greater than ALLOWANCE c.diff().fillna(max(0, values[0] - ALLOWANCE)) Note that max here is the built-in max; the original version mistakenly called math.max, which doesn't exist.
PyCharm error: 'No Module' when trying to import own module (python script)
I have written a module (a file my_mod.py residing in the folder my_module). Currently, I am working in the file cool_script.py that resides in the folder cur_proj. I have opened the folder in PyCharm using File -- open (and I assume, hence, it is a PyCharm project). In ProjectView (CMD-7), I can see my project cur_proj (in red) and under "External Libraries" I do see my_module. In cool_script.py, I can write from my_module import my_mod as mm and PyCharm even makes suggestions for my_mod. So far so good. However, when I try to run cool_script.py, PyCharm tells me "No module named my_module" This seems strange to me, because A) in the terminal (OS 10.10.2), in python, I can import the module no problem -- there is a corresponding entry in the PYTHONPATH in .bashrc B) in PyCharm -- Settings -- Project cur_proj -- Project Interpreter -- CogWheel next to python interpreter -- more -- show paths for selected interpreter icon, the paths from PYTHONPATH do appear (as I think they should) Hence, why do I get the error when I try to run cool_script.py? -- What am I missing? Notes: I am not declaring a different / special python version at the top of cool_script.py I made sure that the path to my_module is correct I put __init__.py files (empty files) both in my_module and in cur_proj I am not using virtualenv Addendum 2015-Feb-25 When I go in PyCharm to Run -- Edit Configurations, for my current project, there are two options that are selected with a check mark: "Add content roots to PYTHONPATH" and "Add source roots to PYTHONPATH". When I have both unchecked, I can load my module. So it works now -- but why? Further questions emerged: What are "content roots" and what are "source roots"? And why does adding something to the PYTHONPATH make it somehow break? Should I uncheck both of those options all the time (so also in the defaults, not only the project specific configurations (left panel of the Run/Debug Configurations dialog)?
If your own module is in the same project, you need to mark its path as a Sources Root. Right-click the directory that you want to import from, then select Mark Directory As and choose Sources Root. I hope it helps.
Composite primary key in django
I have a legacy db table which has a composite primary key. I don't think I will be able to change the structure to include a surrogate key, as there is some code written that uses that table. And in Django, I cannot use that table, as it doesn't have a (non-composite) primary key. Do Django models support composite primary keys? If not, is there any workaround without changing the structure of the table? P.S. I am using postgresql.
Try something like the code below: class MyTable(models.Model): class Meta: unique_together = (('key1', 'key2'),) key1 = models.IntegerField(primary_key=True) key2 = models.IntegerField() or, if you only want the combined fields to be unique: class MyTable(models.Model): class Meta: unique_together = (('key1', 'key2'),) key1 = models.IntegerField() key2 = models.IntegerField() EDIT: I would like to note that there is a problem with this approach if there are 3 columns. Update queries don't work, because Django tries to update the fields that are unique together (it puts the pk fields right after "SET"), and that obviously fails.
Adding a APIView to Django REST Framework Browsable API
I've been developing a REST backend with the Django REST Framework. However, I'm having trouble adding an APIView instance to the web browsable API. The documentation and the previous answer suggest that all I have to do is add a docstring. It did not work for me. I'm under the assumption that the browsable API only displays ViewSet endpoints that are registered with the router. If this is so, how can I register APIView classes to the router? Below is my current router code: router = DefaultRouter(trailing_slash=False) router.register(r'tokens', TokenViewSet, base_name='token') urlpatterns = patterns('', url(r'^admin/', include(admin.site.urls)), url(r'^api/', include(router.urls)), url(r'^api/register$', RegisterUser.as_view(), name='register_user'), url(r'^api/auth$', ObtainAuthToken.as_view(), name='obtain_token'), url(r'^api/me$', ObtainProfile.as_view(), name='obtain_profile'), url(r'^api/recover$', FindUsername.as_view(), name='recover_username'), ) Currently, only the Token endpoint shows up. Thank you.
Routers aren't designed for normal views. You need to use a ViewSet if you want to register your URL with the router. There is a similar question here that may help: How can I register a single view (not a viewset) on my router?
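If you want the endpoint to appear in the router anyway, a minimal sketch is to rewrite the APIView as a ViewSet; the class name and response body here are assumptions based on the question's code:

from rest_framework import viewsets
from rest_framework.response import Response

class RegisterUserViewSet(viewsets.ViewSet):
    # 'create' is mapped to POST when the ViewSet is registered with a router
    def create(self, request):
        # ... reuse the registration logic from the RegisterUser APIView ...
        return Response({'status': 'registered'})

router.register(r'register', RegisterUserViewSet, base_name='register_user')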
Loc vs. iloc vs. ix vs. at vs. iat?
I recently began branching out from my safe place (R) into Python and am a bit confused by the cell localization/selection in Pandas. I've read the documentation but I'm struggling to understand the practical implications of the various localization/selection options. Is there a reason why I should ever use .loc or .iloc over the most general option .ix? I understand that .loc, iloc, at, and iat may provide some guaranteed correctness that .ix can't offer, but I've also read that .ix tends to be the fastest solution across the board. Can someone please explain the real-world, best-practices reasoning behind utilizing anything other than .ix?
loc: works only on labels in the index.
iloc: works only on integer positions.
ix: tries label-based lookup first and can fall back to positions, so you can get data from a DataFrame even when the key is not in the index.
at: gets scalar values; it's a very fast loc.
iat: gets scalar values; it's a very fast iloc.
http://pyciencia.blogspot.com/2015/05/obtener-y-filtrar-datos-de-un-dataframe.html
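A small sketch makes the label-versus-position distinction concrete (the integer index values are deliberately chosen to be confusing):

import pandas as pd

s = pd.Series(['a', 'b', 'c'], index=[49, 48, 47])
s.loc[49]    # 'a' -- looks up the label 49
s.iloc[0]    # 'a' -- looks up position 0
s.iloc[49]   # IndexError -- there is no position 49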
Pyinstaller error ImportError: No module named 'requests.packages.chardet.sys
I can't seem to find the root cause of this. I don't know if it's pyinstaller, a pip problem, the requests module, or something else as nothing can be eliminated conclusively. I wrote a script in python that properly configures a new hardware sonicwall for our enterprise network when we have to deploy a new unit. It configures a proper .exp file in memory, logs into the sonicwall device with default credentials, imports the file via a multi-part data form, restarts the sonicwall, then logs in again and changes the shared secret properly. For security reasons, I can't post the code here, but I can explain the problem with a much simpler example. Previously, the code was using urllib and urllib2 to process http requests, but then I discovered the requests module when I had to re-write the script to include csrfTokens. Long story short, the script works amazing when called by the python interpreter. However, when trying to compile it with pyinstaller, I get a series of errors now that I've switched to requests instead of the urllibs. Some more background: Windows 7 - Python2.7.9 pip 6.0.8 from C:\Python27\lib\site-packages\pip-6.0.8-py2.7.egg (python 2.7) pip freeze output: pyinstaller==2.1.1.dev0 pywin32==219 requests==2.5.3 As an example, I'll give you some code that bombs out, a simple request to google. #!/usr/bin/python import requests r = requests.get('https://google.com') print(r.text) The above code works for a simple request when I call the file from python, but when I compile it, I get this: (EDIT Having problems pasting output, here is the pastebin) pastebin It creates a windows executable, but this is the following error when I try to run it: NOTE I did install the ms c++ 2008 redistributable to help clear up the msvcr90.dll, but I still get the above requests.packages.chardet.sys error. I've tried everything I can think of, installing chardet, installing chardet2, installing cchardet, forcing earlier versions of pyinstaller and requests incrementally. Scrapping pip and manually installing pyinstaler and requests. I'm at my wits end with things to try, I don't quite get the error on hand here. requests.packages.chardet exists on the system. I also have C:\Python27 in my windows PATH as I can call python from any directory. If you need more information, please let me know. I tried to be as thorough with the errors as possible and what I have installed, but I can provide more if needed. ALSO This issue ONLY appears to be happening when I try to compile when I import requests. creating test scripts, beautifulsoup, urllib/2, etc all compile a valid windows exe that runs properly.
I don't have a solution for this yet, but this is caused by recent changes in the requests module (versions 2.5.2 & 2.5.3). For now you can use version 2.5.1 until PyInstaller has a suitable hook for solving this issue. I cannot really explain the issue, but it looks like there's some kind of collision between PyInstaller's import hooks and some of the latest additions to requests (VendorAlias).
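Pinning the last working version is straightforward; run this in whatever environment PyInstaller uses and then rebuild the executable:

pip uninstall requests
pip install requests==2.5.1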
Load CSV file with Spark
I'm new to Spark and I'm trying to read CSV data from a file with Spark. Here's what I am doing: sc.textFile('file.csv') .map(lambda line: (line.split(',')[0], line.split(',')[1])) .collect() I would expect this call to give me a list of the two first columns of my file but I'm getting this error: File "<ipython-input-60-73ea98550983>", line 1, in <lambda> IndexError: list index out of range although my CSV file has more than one column.
Spark 2.0.0+ You can use the csv data source directly: spark.read.csv( "some_input_file.csv", header=True, mode="DROPMALFORMED", schema=schema ) Spark < 2.0.0: Instead of manual parsing, which is far from trivial in a general case, I would recommend spark-csv: Make sure that Spark CSV is included in the path (--packages, --jars, --driver-class-path) And load your data as follows: df = (sqlContext .read.format("com.databricks.spark.csv") .option("header", "true") .option("inferschema", "true") .option("mode", "DROPMALFORMED") .load("some_input_file.csv")) It can handle loading, schema inference, dropping malformed lines and doesn't require passing data from Python to the JVM. If you know the schema, it is better to avoid schema inference and pass it to DataFrameReader. Assuming you have three columns - integer, double and string: from pyspark.sql.types import StructType, StructField from pyspark.sql.types import DoubleType, IntegerType, StringType schema = StructType([ StructField("A", IntegerType()), StructField("B", DoubleType()), StructField("C", StringType()) ]) (sqlContext .read .schema(schema) .option("header", "true") .option("mode", "DROPMALFORMED") .load("some_input_file.csv"))
Support multiple API versions in flask
I started to design a RESTful webservice with Flask and Python and I'm wondering how one would support multiple API versions in the same project. I'm thinking of putting the requested API version in the URL like this: /myapp/v1/Users After some time I want to add another endpoint in Version 1.1 of the API and keep everything from v1 which did not change: /myapp/v1.1/Users <= Same as in v1 /myapp/v1.1/Books In v2 the "Users"-endpoint is changed: /myapp/v2/Users <= Changed in v2 /myapp/v2/Books <= Same as in v1.1 and so on... Looking at this question the easiest way probably would be something like this: @app.route('/<version>/users') def users(version): # do something return jsonify(response) But I can imagine that this will get harder to maintain with each new API version. Therefore I was wondering if there's any better (= easier to maintain and better structured) way to achieve this with Flask?
I am the author of the accepted answer on the question you referenced. I think the /<version>/users approach is not very effective, as you say. If you have to manage three or four different versions you'll end up with spaghetti code. The nginx idea I proposed there is better, but has the drawback that you have to host two separate applications. Back then I failed to mention a third alternative, which is to use a blueprint for each API version. For example, consider the following app structure (greatly simplified for clarity): my_project +-- api/ +-- v1/ +-- __init__.py +-- routes.py +-- v1_1/ +-- __init__.py +-- routes.py +-- v2/ +-- __init__.py +-- routes.py +-- __init__.py +-- common.py Here you have an api/common.py that implements common functions that all versions of the API need. For example, you can have an auxiliary function (not decorated as a route) that responds to your /users route and is identical in v1 and v1.1. The routes.py for each API version defines the routes, and when necessary calls into common.py functions to avoid duplicating logic. For example, your v1 and v1.1 routes.py can have: from api import common @api.route('/users') def get_users(): return common.get_users() Note the api.route. Here api is a blueprint. Having each API version implemented as a blueprint helps to combine everything with the proper versioned URLs. Here is an example app setup code that imports the API blueprints into the application instance: from api.v1 import api as api_v1 from api.v1_1 import api as api_v1_1 from api.v2 import api as api_v2 app.register_blueprint(api_v1, url_prefix='/v1') app.register_blueprint(api_v1_1, url_prefix='/v1.1') app.register_blueprint(api_v2, url_prefix='/v2') This structure is very nice because it keeps all API versions separate, yet they are served by the same application. As an added benefit, when the time comes to stop supporting v1, you just remove the register_blueprint call for that version, delete the v1 package from your sources and you are done. Now, with all of this said, you should really make an effort to design your API in a way that minimizes the risk of having to rev the version. Consider that adding new routes does not require a new API version; it is perfectly fine to extend an API with new routes. And changes in existing routes can sometimes be designed in a way that does not affect old clients. Sometimes it is less painful to rev the API and have more freedom to change things, but ideally that doesn't happen too often.
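For completeness, the api object referenced in each routes.py would be created in that version's __init__.py; a minimal sketch (the blueprint name is an arbitrary choice):

# api/v1/__init__.py
from flask import Blueprint

api = Blueprint('api_v1', __name__)

from api.v1 import routes  # imported last, for its side effect of registering the routes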
Game of Life patterns carried out incorrectly
My Conway's game of life implementation in Python doesn't seem to follow the rules correctly, and I can't figure out what could be wrong. When I put a final configuration into Golly, it continues past what mine did. I first identified the problem by putting a configuration at which my program stopped into Golly, and then noticing that it could be carried further. I also put an entire small board from my game into Golly, and it progressed much differently from my configuration. Golly is a game of life simulator that's widely used. I've tried several different things to fix my problem: I broke up the logic statements in my code so that they use no and/or statements. I tested my neighbors() function by inserting it into its own program, and setting up some grid configurations. Then I looked at the printed out grid, and I called neighbors() on a certain position. It worked perfectly. Looking at my code, I can't see why it's not working. I don't get errors, it works, it just works wrong. The patterns progress much differently than how they should. This is also the first > 100 line program that I have written without following a tutorial even loosely, so forgive me if the answer is obvious. The relevant code is as follows: #Function to find number of live neighbors def neighbors(row, column): adjacents = 0 #Horizontally adjacent if row > 0: if board[row-1][column]: adjacents += 1 if column > 0: if board[row][column-1]: adjacents += 1 if row < thesize-1: if board[row+1][column]: adjacents += 1 if column < thesize-1: if board[row][column+1]: adjacents += 1 #Diagonally adjacent if row > 0 and column > 0: if board[row-1][column-1]: adjacents += 1 if row < thesize-1 and column < thesize-1: if board[row+1][column+1]: adjacents += 1 if row > 0 and column < thesize-1: if board[row-1][column+1]: adjacents += 1 if row < thesize-1 and column > 0: if board[row+1][column-1]: adjacents += 1 #Return the final count (0-8) return adjacents This seems to work perfectly to return how many of the 8 neighbors of any given cell are alive. This next bit is the logic part, where I think the problem is. It changes the array according to the rules of the game. #Main loop while 1: #Manage the rules of the game for r in range(len(board)): for c in range(len(board)): neighborcount = neighbors(r, c) if board[r][c]: giveLife(r, c) if neighborcount < 2 or neighborcount > 3: board[r][c] = False elif not board[r][c]: killRuthlessly(r, c) if neighborcount == 3: board[r][c] = True Finally, the part that turns squares on and off visually, on the pygame screen. This is tested, and appears to work well, I just thought I'd include it in case there's an issue. for r in range(len(board)): for c in range(len(board)): if board[r][c]: giveLife(r, c) if not board[r][c]: killRuthlessly(r, c) giveLife is a function that draws a black rectangle at the given position, killRuthlessly draws a white one. These both seem to work properly.
For the logic that loops through the board and checks adjacent cells, it is turning cells on/off while continuing to check others. It is likely you are reading adjacent cells as live or dead not because they were in the previous time step (which matters), but because you've changed their state as they have been already looped over. Try creating a tmp_board which copies the current board and to which edits are done. Then copy it back to board after you've looped over everything.
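A minimal sketch of that idea, assuming neighbors() keeps reading the old board while all writes go to the copy:

# inside the main loop, once per generation
new_board = [row[:] for row in board]   # shallow-copy every row
for r in range(len(board)):
    for c in range(len(board[r])):
        n = neighbors(r, c)             # counts come from the unmodified board
        if board[r][c]:
            new_board[r][c] = n in (2, 3)   # survival rule
        else:
            new_board[r][c] = (n == 3)      # birth rule
board = new_board                        # swap in the next generation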
Is it possible to ignore one single specific line with pylint?
I have the following line in my header: import config.logging_settings This actually changes my python logging settings, but pylint thinks it is an unused import. I do not want to remove unused-import warnings in general so is it possible to just ignore this one specific line? I wouldn't mind having a .pylintrc for this project so answers changing a config file will be accepted. Otherwise, something like this will also be appreciated: import config.logging_settings # pylint: disable-this-line-in-some-way
Pylint message control is documented in the Pylint manual: Is it possible to locally disable a particular message? Yes, this feature has been added in Pylint 0.11. This may be done by adding #pylint: disable=some-message,another-one at the desired block level or at the end of the desired line of code You can use the message code or the symbolic names. The manual also has an example. There is a wiki that documents all pylint messages and their codes.
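Applied to the line from the question, that looks like this:

import config.logging_settings  # pylint: disable=unused-import

which silences only the unused-import check, and only on that one line.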
How to send requests with JSONs in unit tests
I have code within a Flask application that uses JSONs in the request, and I can get the JSON object like so: Request = request.get_json() This has been working fine, however I am trying to create unit tests using Python's unittest module and I'm having difficulty finding a way to send a JSON with the request. response=self.app.post('/test_function', data=json.dumps(dict(foo = 'bar'))) This gives me: >>> request.get_data() '{"foo": "bar"}' >>> request.get_json() None Flask seems to have a JSON argument where you can set json=dict(foo='bar') within the post request, but I don't know how to do that with the unittest module.
Changing the post to response=self.app.post('/test_function', data=json.dumps(dict(foo='bar')), content_type='application/json') fixed it. Thanks to user3012759.
Representing rings of algebraic integers
I'm trying to represent the ring Z[theta], where theta is the root of a monic irreducible polynomial f with integer coefficients of degree d. This ring is a subring of the algebraic integers, which itself is a subring of the field Q(theta); I can represent this field with sympy's AlgebraicField class Q_theta = sympy.polys.domains.AlgebraicField(QQ,theta) Is there a way to represent the above integer subring in a similar way?
I suspect that this may not be a feature in sympy, for these reasons: First, if theta is not algebraic over the integers, then the ring obtained by adjoining theta to the integers is isomorphic to a polynomial ring over the integers. For example, pi is not algebraic over the integers, because there are no integer coefficients that, combined with pi and powers of pi, will equal zero. To prove that these are, in fact, isomorphic, just take the evaluation ring homomorphism that evaluates each polynomial at pi. This may not be a ready feature, because computing whether a number is non-algebraic over any ring is non-trivial. For example, determining whether or not e + pi is algebraic is still an open question. This can be achieved in sympy by from sympy.polys.domains import ZZ, QQ, RR, FF, EX x, y, z, t = symbols('x y z t') ZZ['theta'] or ZZ[t] One can easily test that this does, in fact, give you the ring of polynomials over the integers. Second, numbers that are algebraic (numbers like the imaginary number i, which are roots of integer-valued polynomials) can be obtained by taking the polynomial ring modulo the ideal generated by their unique monic polynomial. So if theta is the imaginary number i, which has the unique monic polynomial x^2+1 >>> QQ.old_poly_ring(x).ideal(x**2+1) <x**2 + 1> >>> ZZ.old_poly_ring(x).ideal(x**2+1) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/sympy/polys/domains/ring.py", line 91, in ideal return ModuleImplementedIdeal(self, self.free_module(1).submodule( File "/usr/local/lib/python2.7/dist-packages/sympy/polys/domains/old_polynomialring.py", line 192, in free_module return FreeModulePolyRing(self, rank) File "/usr/local/lib/python2.7/dist-packages/sympy/polys/agca/modules.py", line 455, in __init__ + 'got %s' % ring.dom) NotImplementedError: Ground domain must be a field, got ZZ Additionally, trying this: >>> QQ.old_poly_ring(x).quotient_ring([x**2]) QQ[x]/<x**2> >>> ZZ.old_poly_ring(x).quotient_ring([x**2]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/sympy/polys/domains/ring.py", line 115, in quotient_ring e = self.ideal(*e) File "/usr/local/lib/python2.7/dist-packages/sympy/polys/domains/ring.py", line 91, in ideal return ModuleImplementedIdeal(self, self.free_module(1).submodule( File "/usr/local/lib/python2.7/dist-packages/sympy/polys/domains/old_polynomialring.py", line 192, in free_module return FreeModulePolyRing(self, rank) File "/usr/local/lib/python2.7/dist-packages/sympy/polys/agca/modules.py", line 455, in __init__ + 'got %s' % ring.dom) NotImplementedError: Ground domain must be a field, got ZZ Looking at the docs: However, useful functionality is only implemented for polynomial rings over fields, and various localizations and quotients thereof. In short, unless theta is non-algebraic over the integers, this might be impossible within sympy's framework. However, representing rings in this manner can be achieved by making classes and using Python's magic methods to override the regular behavior of + and *, which is essentially what we need to study rings. Here is an example sketch for the Gaussian integers mentioned above; this code could easily be re-purposed to give you, say, the square root of 2, or any other algebraic number over the integers.
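The following is a from-scratch illustration (the class name and repr format are my own choices; this is not sympy code):

class GaussianInteger(object):
    """Elements a + b*i of Z[i], with + and * overridden."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __add__(self, other):
        return GaussianInteger(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*i)(c + d*i) = (a*c - b*d) + (a*d + b*c)*i, since i*i = -1
        return GaussianInteger(self.a * other.a - self.b * other.b,
                               self.a * other.b + self.b * other.a)
    def __repr__(self):
        return '(%d + %d*i)' % (self.a, self.b)

print GaussianInteger(1, 2) * GaussianInteger(3, 4)   # (-5 + 10*i)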
Is there an official or common knowledge standard minimal interface for a "list-like" object?
I keep seeing functions and documentation like this and this (to name a few) which operate on or refer to list-like objects. I'm quite aware of what exactly an actual list is (dir(list)), and can deduce what (often varying) methods from a list are necessary in most references to a "list-like object", however the number of times I see it referenced has left me with the following question: Is there an official or common-knowledge standard minimal interface for a "list-like" object? Is it as simple as implementing __getitem__, or is it agreed that additional things like __len__ and __setitem__ are required as well? This may seem like semantics, but I can't help but think that if there does not exist a standard minimal interface requirement, various ideas of "list-likeness" could cause some issues/improper handling. Perhaps this is just a slight downside to Python's duck typing?
See the collections.abc module. Of the abstract base classes listed there, list in Python implements Iterable, Container, Sized, Sequence and MutableSequence. Now, of these, Iterable, Sequence and MutableSequence could be casually called list-like. However, I would understand the term list-like to mean that it is a MutableSequence - has at least the methods __getitem__, __setitem__, __delitem__ and __len__, expecting also it to have the mixin methods mentioned in the documentation, such as append. If there is no need for __setitem__ and __delitem__ it should be called a sequence instead - the assumption is that if something accepts a sequence, it does not need to be mutable, thus str, bytes, tuple etc also work there. Your two links highlight the vagueness of the term: The plotly API requires that the list-like objects will be serialized to a JSON array by the internal PlotlyJSONEncoder that delegates most of the encoding to the Python JSONEncoder. However, the latter encodes only tuple and list (and subclasses) to a JSON array; thus the list-like here means a list, a tuple or subclasses thereof. A custom sequence object that is not a subclass of either will result in TypeError: [...] is not JSON serializable. The unzip recipe you linked to requires an object that behaves like a Sequence, (mutability is not required), thus a tuple or str, or any custom object implementing Sequence will do there. TL;DR list-like is a vague term. It is preferable to use the terms iterable, sequence and mutable sequence instead, now that these are defined in collections.abc.
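To make the contract explicit, here is a minimal sketch of a custom list-like class; subclassing MutableSequence forces you to supply exactly these five abstract methods, and in return the mixins (append, extend, pop, ...) come for free:

from collections.abc import MutableSequence   # plain `collections` on Python 2

class ListLike(MutableSequence):
    def __init__(self, iterable=()):
        self._data = list(iterable)
    def __getitem__(self, index):
        return self._data[index]
    def __setitem__(self, index, value):
        self._data[index] = value
    def __delitem__(self, index):
        del self._data[index]
    def __len__(self):
        return len(self._data)
    def insert(self, index, value):
        self._data.insert(index, value)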
add vs update in set operations in python
What is the difference between the add and update operations in Python if I just want to add a single value to the set? a = set() a.update([1]) # works a.add(1) # works a.update([1,2]) # works a.add([1,2]) # fails Can someone explain why this is so?
set.add set.add adds an individual element to the set. So, >>> a = set() >>> a.add(1) >>> a set([1]) works, but it cannot work with an iterable, unless it is hashable. That is the reason why a.add([1, 2]) fails. >>> a.add([1, 2]) Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: unhashable type: 'list' Here, [1, 2] is treated as the element being added to the set and as the error message says, a list cannot be hashed but all the elements of a set are expected to be hashables. Quoting the documentation, Return a new set or frozenset object whose elements are taken from iterable. The elements of a set must be hashable. set.update In case of set.update, you can pass multiple iterables to it and it will iterate all iterables and will include the individual elements in the set. Remember: It can accept only iterables. That is why you are getting an error when you try to update it with 1 >>> a.update(1) Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: 'int' object is not iterable But, the following would work because the list [1] is iterated and the elements of the list are added to the set. >>> a.update([1]) >>> a set([1]) set.update is basically an equivalent of in-place set union operation. Consider the following cases >>> set([1, 2]) | set([3, 4]) | set([1, 3]) set([1, 2, 3, 4]) >>> set([1, 2]) | set(range(3, 5)) | set(i for i in range(1, 5) if i % 2 == 1) set([1, 2, 3, 4]) Here, we explicitly convert all the iterables to sets and then we find the union. There are multiple intermediate sets and unions. In this case, set.update serves as a good helper function. Since it accepts any iterable, you can simply do >>> a.update([1, 2], range(3, 5), (i for i in range(1, 5) if i % 2 == 1)) >>> a set([1, 2, 3, 4])
Traversing a list of lists by index within a loop, to reformat strings
I have a list of lists that looks like this, that was pulled in from a poorly formatted csv file: DF = [['Customer Number: 001 '], ['Notes: Bought a ton of stuff and was easy to deal with'], ['Customer Number: 666 '], ['Notes: acted and looked like Chris Farley on that hidden decaf skit from SNL'], ['Customer Number: 103 '], ['Notes: bought a ton of stuff got a free keychain'], ['Notes: gave us a referral to his uncles cousins hairdresser'], ['Notes: name address birthday social security number on file'], ['Customer Number: 007 '], ['Notes: looked a lot like James Bond'], ['Notes: came in with a martini']] I would like to end up with a new structure like this: ['Customer Number: 001 Notes: Bought a ton of stuff and was easy to deal with', 'Customer Number: 666 Notes: acted and looked like Chris Farley on that hidden decaf skit from SNL', 'Customer Number: 103 Notes: bought a ton of stuff got a free keychain', 'Customer Number: 103 Notes: gave us a referral to his uncles cousins hairdresser', 'Customer Number: 103 Notes: name address birthday social security number on file', 'Customer Number: 007 Notes: looked a lot like James Bond', 'Customer Number: 007 Notes: came in with a martini'] after which I can further split, strip, etc. So, I used the facts that: the customer number always starts with Customer Number the Notes are always longer the number of Notes never exceeds 5 to code up what is clearly an absurd solution, even though it works. DF = [item for sublist in DF for item in sublist] DF = DF + ['stophere'] DF2 = [] for record in DF: if (record[0:17]=="Customer Number: ") & (record !="stophere"): DF2.append(record + DF[DF.index(record)+1]) if len(DF[DF.index(record)+2]) >21: DF2.append(record + DF[DF.index(record)+2]) if len(DF[DF.index(record)+3]) >21: DF2.append(record + DF[DF.index(record)+3]) if len(DF[DF.index(record)+4]) >21: DF2.append(record + DF[DF.index(record)+4]) if len(DF[DF.index(record)+5]) >21: DF2.append(record + DF[DF.index(record)+5]) Would anyone mind recommending a more stable and intelligent solution to this kind of problem?
Just keep track of when we find a new customer: from pprint import pprint as pp out = [] for sub in DF: if sub[0].startswith("Customer Number"): cust = sub[0] else: out.append(cust + sub[0]) pp(out) Output: ['Customer Number: 001 Notes: Bought a ton of stuff and was easy to deal with', 'Customer Number: 666 Notes: acted and looked like Chris Farley on that ' 'hidden decaf skit from SNL', 'Customer Number: 103 Notes: bought a ton of stuff got a free keychain', 'Customer Number: 103 Notes: gave us a referral to his uncles cousins ' 'hairdresser', 'Customer Number: 103 Notes: name address birthday social security number ' 'on file', 'Customer Number: 007 Notes: looked a lot like James Bond', 'Customer Number: 007 Notes: came in with a martini'] If the customer can repeat again later and you want them grouped together, use a dict: from collections import defaultdict d = defaultdict(list) for sub in DF: if sub[0].startswith("Customer Number"): cust = sub[0] else: d[cust].append(cust + sub[0]) pp(d) Output: {'Customer Number: 001 ': ['Customer Number: 001 Notes: Bought a ton of ' 'stuff and was easy to deal with'], 'Customer Number: 007 ': ['Customer Number: 007 Notes: looked a lot like ' 'James Bond', 'Customer Number: 007 Notes: came in with a ' 'martini'], 'Customer Number: 103 ': ['Customer Number: 103 Notes: bought a ton of ' 'stuff got a free keychain', 'Customer Number: 103 Notes: gave us a referral ' 'to his uncles cousins hairdresser', 'Customer Number: 103 Notes: name address ' 'birthday social security number on file'], 'Customer Number: 666 ': ['Customer Number: 666 Notes: acted and looked ' 'like Chris Farley on that hidden decaf skit ' 'from SNL']} Based on your comment and error you seem to have lines coming before an actual customer, so we can add them to the first customer in the list: # added ["foo"] before we see any customer DF = [["foo"],['Customer Number: 001 '], ['Notes: Bought a ton of stuff and was easy to deal with'], ['Customer Number: 666 '], ['Notes: acted and looked like Chris Farley on that hidden decaf skit from SNL'], ['Customer Number: 103 '], ['Notes: bought a ton of stuff got a free keychain'], ['Notes: gave us a referral to his uncles cousins hairdresser'], ['Notes: name address birthday social security number on file'], ['Customer Number: 007 '], ['Notes: looked a lot like James Bond'], ['Notes: came in with a martini']] from pprint import pprint as pp from itertools import takewhile, islice # find lines up to first customer start = list(takewhile(lambda x: "Customer Number:" not in x[0], DF)) out = [] ln = len(start) # if we had data before we actually found a customer this will be True if start: # so set cust to first customer in list and start adding to out cust = DF[ln][0] for sub in start: out.append(cust + sub[0]) # ln will either be 0 if start is empty else we start at first customer for sub in islice(DF, ln, None): if sub[0].startswith("Customer Number"): cust = sub[0] else: out.append(cust + sub[0]) Which outputs: ['Customer Number: 001 foo', 'Customer Number: 001 Notes: Bought a ton of stuff and was easy to deal with', 'Customer Number: 666 Notes: acted and looked like Chris Farley on that ' 'hidden decaf skit from SNL', 'Customer Number: 103 Notes: bought a ton of stuff got a free keychain', 'Customer Number: 103 Notes: gave us a referral to his uncles cousins ' 'hairdresser', 'Customer Number: 103 Notes: name address birthday social security number ' 'on file', 'Customer Number: 007 Notes: looked a lot like James Bond', 'Customer Number: 007 Notes: came in with a martini'] I presumed you would consider lines that come before any customer to actually belong to that first customer.
import matplotlib.pyplot gives ImportError: dlopen(…) Library not loaded libpng15.15.dylib
I am aware that this exact same question has been asked before. I did follow the instructions given in the answer there, and it didn't solve my problem (and I don't have enough reputation to just comment on the Q or A in that thread). Anyway, here's what's going on: I try to do: import matplotlib.pyplot And in return I get: Traceback (most recent call last): File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3032, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-3-eff513f636fd>", line 1, in <module> import matplotlib.pyplot as plt File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/pyplot.py", line 27, in <module> import matplotlib.colorbar File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/colorbar.py", line 34, in <module> import matplotlib.collections as collections File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/collections.py", line 27, in <module> import matplotlib.backend_bases as backend_bases File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 56, in <module> import matplotlib.textpath as textpath File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/textpath.py", line 22, in <module> from matplotlib.mathtext import MathTextParser File "/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/mathtext.py", line 63, in <module> import matplotlib._png as _png ImportError: dlopen(/Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/_png.so, 2): Library not loaded: libpng15.15.dylib Referenced from: /Users/russellrichie/anaconda/lib/python2.7/site-packages/matplotlib/_png.so Reason: image not found My Python version: 2.7.7 |Anaconda 2.0.1 (x86_64)| (default, Jun 2 2014, 12:48:16) [GCC 4.0.1 (Apple Inc. build 5493)] EDIT: cel's suggestion worked! I just tried "conda remove matplotlib", "pip install matplotlib", and then "conda install matplotlib", and presto! Man, you have no idea how long this problem has vexed me. Bless you all.
Some python packages link dynamically against native c libraries. After an update of one of those libraries, links can break and give you weird error messages about missing dynamic libraries, as seen in the error message in the question. Basically, after an update of a native library sometimes you also have to rebuild python packages (here matplotlib). The above statement is true in general. If you are using conda as your python distribution things are usually less complicated: For extension packages conda also maintains required c libraries. As long as you use only conda install and conda update for installing those packages you should not run into these issues. For numpy, scipy, matplotlib and many more I would suggest to try conda search <library name> first to see if there's a conda recipe that matches your needs. For most users conda install <library name> will be a better option than pip install. To make sure that only conda's version is installed you can do conda remove matplotlib pip uninstall matplotlib conda install matplotlib Afterwards this issue should not appear anymore.
Why do I constantly see "Resetting dropped connection" when uploading data to my database?
I'm uploading hundreds of millions of items to my database via a REST API from a cloud server on Heroku to a database in AWS EC2. I'm using Python and I am constantly seeing the following INFO log message in the logs. [requests.packages.urllib3.connectionpool] [INFO] Resetting dropped connection: <hostname> This "resetting of the dropped connection" seems to take many seconds (sometimes 30+ sec) before my code continues to execute again. Firstly what exactly is happening here and why? Secondly is there a way to stop the connection from dropping so that I am able to upload data faster? Thanks for your help. Andrew.
Requests uses Keep-Alive by default. "Resetting dropped connection", from my understanding, means a connection that should have been kept alive was dropped somehow. Possible reasons are: the server doesn't support Keep-Alive, or there has been no data transfer on the established connection for a while, so the server drops it. See http://stackoverflow.com/a/25239947/2142577 for more details.
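If the drops themselves can't be prevented, you can at least make the client more resilient. A sketch using requests' transport adapters (the retry count is arbitrary):

import requests

session = requests.Session()
adapter = requests.adapters.HTTPAdapter(max_retries=3)
session.mount('http://', adapter)
session.mount('https://', adapter)

Reusing one Session across all uploads also keeps connections warm, which should reduce how often they are dropped and reset.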
Unable to start appengine application after updating it via Google Cloud SDK
Recently I updated Google App Engine from 1.9.17 to 1.9.18 via the Google Cloud SDK by using the command gcloud components update on Windows 7 64-bit. After that I couldn't start any of the projects in App Engine Launcher. I'm getting this error: Traceback (most recent call last): File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\dev_appserver.py", line 83, in <module> _run_file(__file__, globals()) File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\dev_appserver.py", line 79, in _run_file execfile(_PATHS.script_file(script_name), globals_) File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\devappserver2.py", line 36, in <module> from google.appengine.tools.devappserver2 import dispatcher File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\dispatcher.py", line 29, in <module> from google.appengine.tools.devappserver2 import module File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\module.py", line 71, in <module> from google.appengine.tools.devappserver2 import vm_runtime_factory File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\vm_runtime_factory.py", line 25, in <module> from google.appengine.tools.devappserver2 import vm_runtime_proxy File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\vm_runtime_proxy.py", line 29, in <module> from google.appengine.tools.devappserver2 import log_manager File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\log_manager.py", line 34, in <module> from google.appengine.tools.docker import containers File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\docker\containers.py", line 47, in <module> import docker ImportError: No module named docker 2015-03-05 19:11:27 (Process exited with code 1) Even though I have installed the latest Google Cloud SDK, I'm still getting the same error. I can install the App Engine SDK 1.9.18 (without using the Google Cloud SDK) and run the project successfully. This error happens only for the App Engine Launcher installed via the Google Cloud SDK on Windows 7. This issue is raised in App Engine Issue Tracker: Issue 125. I recommend starring this issue.
This happened to me today after reinstalling the App Engine SDK, and I could not run my code in the launcher. I remembered reading that App Engine does not use pip, but I have now solved the problem. In short, what I did was: Install pip following the steps at https://pip.pypa.io/en/latest/installing.html (this also correctly installs setuptools) Install docker-py via pip: pip install docker-py And with that, I can now run my code in the launcher. P.S. Previously I tried installing the docker-py package downloaded from https://github.com/docker/docker-py, but setuptools was missing, so downloading and installing the package by hand did not work. So install it with pip.
Why is 'x' in ('x',) faster than 'x' == 'x'?
>>> timeit.timeit("'x' in ('x',)") 0.04869917374131205 >>> timeit.timeit("'x' == 'x'") 0.06144205736110564 Also works for tuples with multiple elements, both versions seem to grow linearly: >>> timeit.timeit("'x' in ('x', 'y')") 0.04866674801541748 >>> timeit.timeit("'x' == 'x' or 'x' == 'y'") 0.06565782838087131 >>> timeit.timeit("'x' in ('y', 'x')") 0.08975995576448526 >>> timeit.timeit("'x' == 'y' or 'x' == 'y'") 0.12992391047427532 Based on this, I think I should totally start using in everywhere instead of ==!
As I mentioned to David Wolever, there's more to this than meets the eye; both methods dispatch to is; you can prove this by doing min(Timer("x == x", setup="x = 'a' * 1000000").repeat(10, 10000)) #>>> 0.00045456900261342525 min(Timer("x == y", setup="x = 'a' * 1000000; y = 'a' * 1000000").repeat(10, 10000)) #>>> 0.5256857610074803 The first can only be so fast because it checks by identity. To find out why one would take longer than the other, let's trace through execution. They both start in ceval.c, from COMPARE_OP since that is the bytecode involved TARGET(COMPARE_OP) { PyObject *right = POP(); PyObject *left = TOP(); PyObject *res = cmp_outcome(oparg, left, right); Py_DECREF(left); Py_DECREF(right); SET_TOP(res); if (res == NULL) goto error; PREDICT(POP_JUMP_IF_FALSE); PREDICT(POP_JUMP_IF_TRUE); DISPATCH(); } This pops the values from the stack (technically it only pops one) PyObject *right = POP(); PyObject *left = TOP(); and runs the compare: PyObject *res = cmp_outcome(oparg, left, right); cmp_outcome is this: static PyObject * cmp_outcome(int op, PyObject *v, PyObject *w) { int res = 0; switch (op) { case PyCmp_IS: ... case PyCmp_IS_NOT: ... case PyCmp_IN: res = PySequence_Contains(w, v); if (res < 0) return NULL; break; case PyCmp_NOT_IN: ... case PyCmp_EXC_MATCH: ... default: return PyObject_RichCompare(v, w, op); } v = res ? Py_True : Py_False; Py_INCREF(v); return v; } This is where the paths split. The PyCmp_IN branch does int PySequence_Contains(PyObject *seq, PyObject *ob) { Py_ssize_t result; PySequenceMethods *sqm = seq->ob_type->tp_as_sequence; if (sqm != NULL && sqm->sq_contains != NULL) return (*sqm->sq_contains)(seq, ob); result = _PySequence_IterSearch(seq, ob, PY_ITERSEARCH_CONTAINS); return Py_SAFE_DOWNCAST(result, Py_ssize_t, int); } Note that a tuple is defined as static PySequenceMethods tuple_as_sequence = { ... (objobjproc)tuplecontains, /* sq_contains */ }; PyTypeObject PyTuple_Type = { ... &tuple_as_sequence, /* tp_as_sequence */ ... }; So the branch if (sqm != NULL && sqm->sq_contains != NULL) will be taken and *sqm->sq_contains, which is the function (objobjproc)tuplecontains, will be taken. This does static int tuplecontains(PyTupleObject *a, PyObject *el) { Py_ssize_t i; int cmp; for (i = 0, cmp = 0 ; cmp == 0 && i < Py_SIZE(a); ++i) cmp = PyObject_RichCompareBool(el, PyTuple_GET_ITEM(a, i), Py_EQ); return cmp; } ...Wait, wasn't that PyObject_RichCompareBool what the other branch took? Nope, that was PyObject_RichCompare. That code path was short so it likely just comes down to the speed of these two. Let's compare. int PyObject_RichCompareBool(PyObject *v, PyObject *w, int op) { PyObject *res; int ok; /* Quick result when objects are the same. Guarantees that identity implies equality. */ if (v == w) { if (op == Py_EQ) return 1; else if (op == Py_NE) return 0; } ... } The code path in PyObject_RichCompareBool pretty much immediately terminates. For PyObject_RichCompare, it does PyObject * PyObject_RichCompare(PyObject *v, PyObject *w, int op) { PyObject *res; assert(Py_LT <= op && op <= Py_GE); if (v == NULL || w == NULL) { ... } if (Py_EnterRecursiveCall(" in comparison")) return NULL; res = do_richcompare(v, w, op); Py_LeaveRecursiveCall(); return res; } The Py_EnterRecursiveCall/Py_LeaveRecursiveCall combo are not taken in the previous path, but these are relatively quick macros that'll short-circuit after incrementing and decrementing some globals. 
do_richcompare does: static PyObject * do_richcompare(PyObject *v, PyObject *w, int op) { richcmpfunc f; PyObject *res; int checked_reverse_op = 0; if (v->ob_type != w->ob_type && ...) { ... } if ((f = v->ob_type->tp_richcompare) != NULL) { res = (*f)(v, w, op); if (res != Py_NotImplemented) return res; ... } ... } This does some quick checks to call v->ob_type->tp_richcompare which is PyTypeObject PyUnicode_Type = { ... PyUnicode_RichCompare, /* tp_richcompare */ ... }; which does PyObject * PyUnicode_RichCompare(PyObject *left, PyObject *right, int op) { int result; PyObject *v; if (!PyUnicode_Check(left) || !PyUnicode_Check(right)) Py_RETURN_NOTIMPLEMENTED; if (PyUnicode_READY(left) == -1 || PyUnicode_READY(right) == -1) return NULL; if (left == right) { switch (op) { case Py_EQ: case Py_LE: case Py_GE: /* a string is equal to itself */ v = Py_True; break; case Py_NE: case Py_LT: case Py_GT: v = Py_False; break; default: ... } } else if (...) { ... } else { ...} Py_INCREF(v); return v; } Namely, this shortcuts on left == right... but only after doing if (!PyUnicode_Check(left) || !PyUnicode_Check(right)) if (PyUnicode_READY(left) == -1 || PyUnicode_READY(right) == -1) All in all the paths then look something like this (manually recursively inlining, unrolling and pruning known branches) POP() # Stack stuff TOP() # # case PyCmp_IN: # Dispatch on operation # sqm != NULL # Dispatch to builtin op sqm->sq_contains != NULL # *sqm->sq_contains # # cmp == 0 # Do comparison in loop i < Py_SIZE(a) # v == w # op == Py_EQ # ++i # cmp == 0 # # res < 0 # Convert to Python-space res ? Py_True : Py_False # Py_INCREF(v) # # Py_DECREF(left) # Stack stuff Py_DECREF(right) # SET_TOP(res) # res == NULL # DISPATCH() # vs POP() # Stack stuff TOP() # # default: # Dispatch on operation # Py_LT <= op # Checking operation op <= Py_GE # v == NULL # w == NULL # Py_EnterRecursiveCall(...) # Recursive check # v->ob_type != w->ob_type # More operation checks f = v->ob_type->tp_richcompare # Dispatch to builtin op f != NULL # # !PyUnicode_Check(left) # ...More checks !PyUnicode_Check(right)) # PyUnicode_READY(left) == -1 # PyUnicode_READY(right) == -1 # left == right # Finally, doing comparison case Py_EQ: # Immediately short circuit Py_INCREF(v); # # res != Py_NotImplemented # # Py_LeaveRecursiveCall() # Recursive check # Py_DECREF(left) # Stack stuff Py_DECREF(right) # SET_TOP(res) # res == NULL # DISPATCH() # Now, PyUnicode_Check and PyUnicode_READY are pretty cheap since they only check a couple of fields, but it should be obvious that the top one is a smaller code path, it has fewer function calls, only one switch statement and is just a bit thinner. TL;DR: Both dispatch to if (left_pointer == right_pointer); the difference is just how much work they do to get there. in just does less.
Why are exceptions within a Python generator not caught?
I have the following experimental code whose function is similar to the zip built-in. What it tries to do should have been simple and clear, trying to return the zipped tuples one at a time until an IndexError occurs when we stop the generator. def my_zip(*args): i = 0 while True: try: yield (arg[i] for arg in args) except IndexError: raise StopIteration i += 1 However, when I tried to execute the following code, the IndexError was not caught but instead thrown by the generator: gen = my_zip([1,2], ['a','b']) print(list(next(gen))) print(list(next(gen))) print(list(next(gen))) IndexError Traceback (most recent call last) I:\Software\WinPython-32bit-3.4.2.4\python-3.4.2\my\temp2.py in <module>() 12 print(list(next(gen))) 13 print(list(next(gen))) ---> 14 print(list(next(gen))) I:\Software\WinPython-32bit-3.4.2.4\python-3.4.2\my\temp2.py in <genexpr>(.0) 3 while True: 4 try: ----> 5 yield (arg[i] for arg in args) 6 except IndexError: 7 raise StopIteration IndexError: list index out of range Why is this happening? Edit: Thanks @thefourtheye for providing a nice explanation for what's happening above. Now another problem occurs when I execute: list(my_zip([1,2], ['a','b'])) This line never returns and seems to hang the machine. What's happening now?
The yield yields a generator object every time, and when the generators were created there was no problem at all. That is why the try...except in my_zip is not catching anything. The third time you executed it, the call reduced to list(arg[2] for arg in args) (oversimplified for our understanding) and now, observe carefully, list is iterating the inner generator, not the actual my_zip generator. Now, list calls next on that generator object and arg[2] is evaluated, only to find that 2 is not a valid index for arg (which is [1, 2] in this case), so IndexError is raised, and list does not handle it (it has no reason to handle that anyway), so it fails. As per the edit, list(my_zip([1,2], ['a','b'])) will be evaluated like this. First, my_zip will be called and that will give you a generator object. Then list iterates it. It calls next on it and gets another generator object, (arg[0] for arg in args). Since no exception or return is encountered, it calls next again to get another generator object, (arg[1] for arg in args), and it keeps on iterating. Remember, the yielded generators are never iterated, so we'll never get the IndexError. That is why the code runs infinitely. You can confirm this like this: from itertools import islice from pprint import pprint pprint(list(islice(my_zip([1, 2], ["a", 'b']), 10))) and you will get [<generator object <genexpr> at 0x7f4d0a709678>, <generator object <genexpr> at 0x7f4d0a7096c0>, <generator object <genexpr> at 0x7f4d0a7099d8>, <generator object <genexpr> at 0x7f4d0a709990>, <generator object <genexpr> at 0x7f4d0a7095a0>, <generator object <genexpr> at 0x7f4d0a709510>, <generator object <genexpr> at 0x7f4d0a7095e8>, <generator object <genexpr> at 0x7f4d0a71c708>, <generator object <genexpr> at 0x7f4d0a71c750>, <generator object <genexpr> at 0x7f4d0a71c798>] So the code tries to build an infinite list of generator objects.
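One way to repair the original my_zip, then, is to force evaluation inside the try block so that the IndexError surfaces there; this is one possible fix, not the only one:

def my_zip(*args):
    i = 0
    while True:
        try:
            result = tuple(arg[i] for arg in args)  # tuple() consumes the genexp here
        except IndexError:
            return  # ends the generator cleanly
        yield result
        i += 1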
'module' has no attribute 'urlencode'
When I try to follow the Python wiki page example related to URL encoding: >>> import urllib >>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0}) >>> f = urllib.urlopen("http://www.musi-cal.com/cgi-bin/query", params) >>> print f.read() An error is raised on the second line: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'urlencode' What am I missing? https://docs.python.org/2/library/urllib.html#urllib.urlencode
urllib has been split up in Python 3. The urllib.urlencode() function is now urllib.parse.urlencode(), and the urllib.urlopen() function is now urllib.request.urlopen().
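So the example from the question becomes, on Python 3 (note that urlopen wants the POST body as bytes):

from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
f = urlopen("http://www.musi-cal.com/cgi-bin/query", params.encode('ascii'))
print(f.read())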
Improving line-wise I/O operations in D
I need to process lots of medium to large files (a few hundred MB to GBs) in a linewise manner, so I'm interested in standard D approaches for iterating over lines. The foreach(line; file.byLine()) idiom seems to fit the bill and is pleasantly terse and readable, however performance seems to be less than ideal. For example, below are two trivial programs in Python and D for iterating over the lines of a file and counting the lines. For a ~470 MB file (~3.6M lines) I get the following timings (best out of 10): D times: real 0m19.146s user 0m18.932s sys 0m0.190s Python times (after EDIT 2, see below): real 0m0.924s user 0m0.792s sys 0m0.129s Here's the D version, compiled with dmd -O -release -inline -m64: import std.stdio; import std.string; int main(string[] args) { if (args.length < 2) { return 1; } auto infile = File(args[1]); uint linect = 0; foreach (line; infile.byLine()) linect += 1; writeln("There are: ", linect, " lines."); return 0; } And now the corresponding Python version: import sys if __name__ == "__main__": if (len(sys.argv) < 2): sys.exit() infile = open(sys.argv[1]) linect = 0 for line in infile: linect += 1 print "There are %d lines" % linect EDIT 2: I changed the Python code to use the more idiomatic for line in infile as suggested in the comments below, leading to an even greater speed-up for the Python version, which is now approaching the speed of the standard wc -l call to the Unix wc tool. Any advice or pointers to what I might be doing wrong in D that is giving such poor performance? EDIT: And for comparison, here's a D version that throws the byLine() idiom out the window and sucks all the data into memory at once, and then splits the data into lines post-hoc. This gives better performance but is still about 2x slower than the Python version. import std.stdio; import std.string; import std.file; int main(string[] args) { if (args.length < 2) { return 1; } auto c = cast(string) read(args[1]); auto l = splitLines(c); writeln("There are ", l.length, " lines."); return 0; } The timings for this last version are as follows: real 0m3.201s user 0m2.820s sys 0m0.376s
EDIT AND TL;DR: This problem has been solved in https://github.com/D-Programming-Language/phobos/pull/3089. The improved File.byLine performance will be available starting with D 2.068. I tried your code on a text file with 575247 lines. The Python baseline takes about 0.125 seconds. Here's my codebase with timings embedded in the comments for each method. Explanations follow. import std.algorithm, std.file, std.stdio, std.string; int main(string[] args) { if (args.length < 2) { return 1; } size_t linect = 0; // 0.62 s foreach (line; File(args[1]).byLine()) linect += 1; // 0.2 s //linect = args[1].readText.count!(c => c == '\n'); // 0.095 s //linect = args[1].readText.representation.count!(c => c == '\n'); // 0.11 s //linect = File(args[1]).byChunk(4096).joiner.count!(c => c == '\n'); writeln("There are: ", linect, " lines."); return 0; } I used dmd -O -release -inline for each variant. The first version (slowest) reads one line at a time. We could and should improve the performance of byLine; currently it's hamstrung by things like mixed use of byLine with other C stdio operations, which is probably overly conservative. If we do away with that, we can easily do prefetching etc. The second version reads the file in one fell swoop and then uses a standard algorithm to count the lines with a predicate. The third version acknowledges the fact that there's no need to mind any UTF subtleties; counting bytes is just as fine, so it converts the string to its byte-wise representation (at no cost) and then counts the bytes. The last version (my fave) reads 4KB of data from the file at a time and flattens them lazily using joiner. Then again it counts the bytes.
What is the difference between flatten and ravel functions in numpy?
import numpy as np

y = np.array(((1,2,3),(4,5,6),(7,8,9)))

print(y.flatten())
# [1 2 3 4 5 6 7 8 9]

print(y.ravel())
# [1 2 3 4 5 6 7 8 9]

Both functions return the same flattened array, so why are there two different functions performing the same job?
The difference is that flatten always returns a copy and ravel returns a view of the original array whenever possible. This isn't visible in the printed output, but if you modify the array returned by ravel, it may modify the entries in the original array. If you modify the entries in an array returned from flatten this will never happen. ravel will often be faster since no memory is copied, but you have to be more careful about modifying the array it returns.
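To make the view-vs-copy behaviour concrete, here is a quick sketch (ravel can return a view here because y is C-contiguous; for non-contiguous input it falls back to returning a copy, which is why the docs say "whenever possible"):

import numpy as np

y = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

r = y.ravel()    # a view onto y's data here, since y is contiguous
f = y.flatten()  # always a fresh copy

r[0] = 99        # writes through to y
f[1] = 42        # leaves y untouched

print(y[0])            # [99  2  3] -- changed via the ravel view
print(r.base is y)     # True; r is a view of y
print(f.base is None)  # True; f owns its own copy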
Adding value labels on a matplotlib bar chart
I got stuck on something that feels like it should be relatively easy. The code I bring below is a sample based on a larger project I'm working on. I saw no reason to post all the details, so please accept the data structures I bring as is.

Basically, I'm creating a bar chart, and I just can't figure out how to add value labels on the bars (in the center of the bar, or just above it). I've been looking at samples around the web but with no success implementing them in my own code. I believe the solution is either with 'text' or 'annotate', but I:

a) don't know which one to use (and generally speaking, haven't figured out when to use which).
b) can't seem to get either to present the value labels.

Would appreciate your help, my code below. Thanks in advance!

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.set_option('display.mpl_style', 'default')
%matplotlib inline

# bring some raw data
frequencies = [6, 16, 75, 160, 244, 260, 145, 73, 16, 4, 1]

# in my original code I create a series and run on that,
# so for consistency I create a series from the list.
freq_series = pd.Series.from_array(frequencies)

x_labels = [108300.0, 110540.0, 112780.0, 115020.0, 117260.0, 119500.0,
            121740.0, 123980.0, 126220.0, 128460.0, 130700.0]

# now to plot the figure...
plt.figure(figsize=(12, 8))
fig = freq_series.plot(kind='bar')
fig.set_title("Amount Frequency")
fig.set_xlabel("Amount ($)")
fig.set_ylabel("Frequency")
fig.set_xticklabels(x_labels)
Firstly, freq_series.plot returns an axis, not a figure, so to make my answer a little more clear I've changed your given code to refer to it as ax rather than fig, to be more consistent with other code examples.

You can get the list of the bars produced in the plot from the ax.patches member. Then you can use the technique demonstrated in this matplotlib gallery example to add the labels using the ax.text method.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# bring some raw data
frequencies = [6, 16, 75, 160, 244, 260, 145, 73, 16, 4, 1]

# in my original code I create a series and run on that,
# so for consistency I create a series from the list.
freq_series = pd.Series.from_array(frequencies)

x_labels = [108300.0, 110540.0, 112780.0, 115020.0, 117260.0, 119500.0,
            121740.0, 123980.0, 126220.0, 128460.0, 130700.0]

# now to plot the figure...
plt.figure(figsize=(12, 8))
ax = freq_series.plot(kind='bar')
ax.set_title("Amount Frequency")
ax.set_xlabel("Amount ($)")
ax.set_ylabel("Frequency")
ax.set_xticklabels(x_labels)

rects = ax.patches

# Now make some labels
labels = ["label%d" % i for i in xrange(len(rects))]

for rect, label in zip(rects, labels):
    height = rect.get_height()
    ax.text(rect.get_x() + rect.get_width() / 2, height + 5, label,
            ha='center', va='bottom')

plt.savefig("image.png")

This produces a labelled plot with one text label centred just above each bar (output figure not reproduced here).
Collection object is not callable error with PyMongo
I'm following along with the PyMongo tutorial and am getting an error when calling the insert_one method on a collection.

In [1]: import pymongo

In [2]: from pymongo import MongoClient

In [3]: client = MongoClient()

In [4]: db = client.new_db

In [5]: db
Out[5]: Database(MongoClient('localhost', 27017), u'new_db')

In [6]: posts = db.posts

In [7]: posts.insert_one({'a':1})
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-2271c01f9a85> in <module>()
----> 1 posts.insert_one({'a':1})

C:\Anaconda\lib\site-packages\pymongo-2.8-py2.7-win32.egg\pymongo\collection.py in __call__(self, *args, **kwargs)
   1771                     "call the '%s' method on a 'Collection' object it is "
   1772                     "failing because no such method exists." %
-> 1773                     self.__name.split(".")[-1])

TypeError: 'Collection' object is not callable. If you meant to call the 'insert_one' method on a 'Collection' object it is failing because no such method exists.

There are a few posts online that discuss this error, but all seem to be cases where the user calls a deprecated name. Any guidance on what I am doing wrong here?
It is a clear question, but the problem here seems to be that you are reading from the "beta" release documentation while, in all likelihood, you actually have at most "pymongo" 2.8 installed rather than the "3.0b" referred to in the link you quote.

The 2.8 release tutorial points to the .insert() method instead:

posts.insert({'a': 1})

Since .insert_one() is only available in the 3.0b driver, either force the installation of the "beta" driver or live with a stable driver and the available methods.

This seems to be the fault of the current "search engine response" matching the "beta release" as "current".
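If you are unsure which situation you are in, it only takes a moment to check the installed driver version before choosing a method. A small sketch, assuming a local mongod is running as in the question:

import pymongo

print(pymongo.version)   # e.g. '2.8' -> use insert(); '3.0' or later -> insert_one() exists

posts = pymongo.MongoClient().new_db.posts
if pymongo.version.startswith('2.'):
    posts.insert({'a': 1})
else:
    posts.insert_one({'a': 1})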
Python3 project remove __pycache__ folders and .pyc files
What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a Python 3 project? I have seen multiple users suggest the pyclean script bundled with Debian, but it does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVCS.
You can do it manually with the following command:

find . | grep -E "(__pycache__|\.pyc$|\.pyo$)" | xargs rm -rf

This will remove all *.pyc and *.pyo files and __pycache__ directories recursively in the current directory.
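If you would rather avoid shelling out, a portable pure-Python equivalent is easy to write. A sketch, assuming you run it from the project root:

import os
import shutil

def clean(root='.'):
    for dirpath, dirnames, filenames in os.walk(root):
        for d in list(dirnames):
            if d == '__pycache__':
                shutil.rmtree(os.path.join(dirpath, d))
                dirnames.remove(d)  # don't descend into what we just deleted
        for f in filenames:
            if f.endswith(('.pyc', '.pyo')):
                os.remove(os.path.join(dirpath, f))

clean()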
How to convert a DataFrame back to normal RDD in pyspark?
I need to use the (rdd.)partitionBy(npartitions, custom_partitioner) method that is not available on the DataFrame. All of the DataFrame methods refer only to DataFrame results. So then how to create an RDD from the DataFrame data?

Note: this is a change (in 1.3.0) from 1.2.0.

Update from the answer from @dpangmao: the method is .rdd. I was interested to understand if (a) it were public and (b) what are the performance implications.

Well (a) is yes and (b) - well you can see here that there are significant perf implications: a new RDD must be created by invoking mapPartitions:

In dataframe.py (note the file name changed as well (was sql.py):

@property
def rdd(self):
    """
    Return the content of the :class:`DataFrame` as an :class:`RDD`
    of :class:`Row` s.
    """
    if not hasattr(self, '_lazy_rdd'):
        jrdd = self._jdf.javaToPython()
        rdd = RDD(jrdd, self.sql_ctx._sc, BatchedSerializer(PickleSerializer()))
        schema = self.schema

        def applySchema(it):
            cls = _create_cls(schema)
            return itertools.imap(cls, it)

        self._lazy_rdd = rdd.mapPartitions(applySchema)

    return self._lazy_rdd
Use the method .rdd like this:

rdd = df.rdd
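From there, the partitionBy use case in the question would look roughly like the sketch below; note that partitionBy operates on key-value pairs, and key_func, npartitions and custom_partitioner are placeholders for your own choices:

rdd = df.rdd  # back to an RDD of Row objects

# partitionBy needs (key, value) pairs, so key each Row first
pair_rdd = rdd.map(lambda row: (key_func(row), row))
partitioned = pair_rdd.partitionBy(npartitions, custom_partitioner)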
How do I add a kernel on a remote machine in IPython (Jupyter) Notebook?
Dropdown menu in the top-right of the UI on a local machine (PC):

Kernel -> Change kernel ->
    Python 2 (on a local PC)
    Python 3 (on a local PC)
    My new kernel (on a remote PC)
The IPython notebook talks to the kernels over predefined ports. To talk to a remote kernel, you just need to forward the ports to the remote machine as part of the kernel initialisation; the notebook doesn't care where the kernel is as long as it can talk to it.

You could either set up a wrapper script that gets called in the kernel spec file (https://ipython.org/ipython-doc/dev/development/kernels.html#kernel-specs) or use a module that can help you set up and manage different kinds of remote kernels: pip install remote_ikernel (https://bitbucket.org/tdaff/remote_ikernel).

If you are using remote_ikernel, and have ssh access to the machine, the following command will set up the entry in the drop-down list:

remote_ikernel manage --add \
    --kernel_cmd="ipython kernel -f {connection_file}" \
    --name="Remote Python" --interface=ssh \
    --host=my_remote_machine
pip fails with AttributeError: 'module' object has no attribute 'wraps'
I'm on Fedora. I recently upgraded my system from F20 to F21. Pip was working fine on F20, but after the upgrade to F21 something must have gone wrong. Pip stopped working: every time I enter the command pip <anything>, the error below occurs:

Traceback (most recent call last):
  File "/usr/bin/pip", line 7, in <module>
    from pip import main
  File "/usr/lib/python2.7/site-packages/pip/__init__.py", line 12, in <module>
    from pip.commands import commands, get_summaries, get_similar_commands
  File "/usr/lib/python2.7/site-packages/pip/commands/__init__.py", line 6, in <module>
    from pip.commands.bundle import BundleCommand
  File "/usr/lib/python2.7/site-packages/pip/commands/bundle.py", line 6, in <module>
    from pip.commands.install import InstallCommand
  File "/usr/lib/python2.7/site-packages/pip/commands/install.py", line 5, in <module>
    from pip.req import InstallRequirement, RequirementSet, parse_requirements
  File "/usr/lib/python2.7/site-packages/pip/req/__init__.py", line 3, in <module>
    from .req_install import InstallRequirement
  File "/usr/lib/python2.7/site-packages/pip/req/req_install.py", line 31, in <module>
    from pip.utils import (
  File "/usr/lib/python2.7/site-packages/pip/utils/__init__.py", line 59, in <module>
    def rmtree(dir, ignore_errors=False):
  File "/usr/lib/python2.7/site-packages/pip/_vendor/retrying.py", line 47, in wrap
    @six.wraps(f)
AttributeError: 'module' object has no attribute 'wraps'
Okay, after trying out all the solutions I could google, with no result in sight, I decided to take a small risk and experiment a little. This might not be the safest solution, but it worked fine for me.

Seeing that python get-pip.py resulted in:

Requirement already up-to-date: pip in /usr/lib/python2.7/site-packages

even when I had pip uninstalled, I went over to /usr/lib/python2.7/site-packages/ and found two pip directories: pip and pip-6.0.8.dist-info. I removed both immediately, then tried python get-pip.py again. Voila! It worked.
Python time.sleep() vs event.wait()
I want to perform an action at a regular interval in my multi-threaded Python application. I have seen two different ways of doing it:

exit = False
def thread_func():
    while not exit:
        action()
        time.sleep(DELAY)

or

exit_flag = threading.Event()
def thread_func():
    while not exit_flag.wait(timeout=DELAY):
        action()

Is there an advantage to one way over the other? Does one use less resources, or play nicer with other threads and the GIL? Which one makes the remaining threads in my app more responsive?

(Assume some external event sets exit or exit_flag, and I am willing to wait the full delay while shutting down.)
Using exit_flag.wait(timeout=DELAY) will be more responsive, because you'll break out of the while loop instantly when exit_flag is set. With time.sleep, even after the event is set, you're going to wait around in the time.sleep call until you've slept for DELAY seconds.

In terms of implementation, Python 2.x and Python 3.x have very different behavior. In Python 2.x Event.wait is implemented in pure Python using a bunch of small time.sleep calls:

from time import time as _time, sleep as _sleep

....
# This is inside the Condition class (Event.wait calls Condition.wait).
def wait(self, timeout=None):
    if not self._is_owned():
        raise RuntimeError("cannot wait on un-acquired lock")
    waiter = _allocate_lock()
    waiter.acquire()
    self.__waiters.append(waiter)
    saved_state = self._release_save()
    try:    # restore state no matter what (e.g., KeyboardInterrupt)
        if timeout is None:
            waiter.acquire()
            if __debug__:
                self._note("%s.wait(): got it", self)
        else:
            # Balancing act:  We can't afford a pure busy loop, so we
            # have to sleep; but if we sleep the whole timeout time,
            # we'll be unresponsive.  The scheme here sleeps very
            # little at first, longer as time goes on, but never longer
            # than 20 times per second (or the timeout time remaining).
            endtime = _time() + timeout
            delay = 0.0005 # 500 us -> initial delay of 1 ms
            while True:
                gotit = waiter.acquire(0)
                if gotit:
                    break
                remaining = endtime - _time()
                if remaining <= 0:
                    break
                delay = min(delay * 2, remaining, .05)
                _sleep(delay)
            if not gotit:
                if __debug__:
                    self._note("%s.wait(%s): timed out", self, timeout)
                try:
                    self.__waiters.remove(waiter)
                except ValueError:
                    pass
            else:
                if __debug__:
                    self._note("%s.wait(%s): got it", self, timeout)
    finally:
        self._acquire_restore(saved_state)

This actually means using wait is probably a bit more CPU-hungry than just sleeping the full DELAY unconditionally, but has the benefit of being (potentially a lot, depending on how long DELAY is) more responsive. It also means that the GIL needs to be frequently re-acquired, so that the next sleep can be scheduled, while time.sleep can release the GIL for the full DELAY. Now, will acquiring the GIL more frequently have a noticeable effect on other threads in your application? Maybe or maybe not. It depends on how many other threads are running and what kind of work loads they have. My guess is it won't be particularly noticeable unless you have a high number of threads, or perhaps another thread doing lots of CPU-bound work, but it's easy enough to try it both ways and see.

In Python 3.x, much of the implementation is moved to pure C code:

import _thread # C-module
_allocate_lock = _thread.allocate_lock

class Condition:
    ...
    def wait(self, timeout=None):
        if not self._is_owned():
            raise RuntimeError("cannot wait on un-acquired lock")
        waiter = _allocate_lock()
        waiter.acquire()
        self._waiters.append(waiter)
        saved_state = self._release_save()
        gotit = False
        try:    # restore state no matter what (e.g., KeyboardInterrupt)
            if timeout is None:
                waiter.acquire()
                gotit = True
            else:
                if timeout > 0:
                    gotit = waiter.acquire(True, timeout) # This calls C code
                else:
                    gotit = waiter.acquire(False)
            return gotit
        finally:
            self._acquire_restore(saved_state)
            if not gotit:
                try:
                    self._waiters.remove(waiter)
                except ValueError:
                    pass

class Event:
    def __init__(self):
        self._cond = Condition(Lock())
        self._flag = False

    def wait(self, timeout=None):
        self._cond.acquire()
        try:
            signaled = self._flag
            if not signaled:
                signaled = self._cond.wait(timeout)
            return signaled
        finally:
            self._cond.release()

And the C code that acquires the lock:

/* Helper to acquire an interruptible lock with a timeout.  If the lock acquire
 * is interrupted, signal handlers are run, and if they raise an exception,
 * PY_LOCK_INTR is returned.  Otherwise, PY_LOCK_ACQUIRED or PY_LOCK_FAILURE
 * are returned, depending on whether the lock can be acquired within the
 * timeout.
 */
static PyLockStatus
acquire_timed(PyThread_type_lock lock, PY_TIMEOUT_T microseconds)
{
    PyLockStatus r;
    _PyTime_timeval curtime;
    _PyTime_timeval endtime;

    if (microseconds > 0) {
        _PyTime_gettimeofday(&endtime);
        endtime.tv_sec += microseconds / (1000 * 1000);
        endtime.tv_usec += microseconds % (1000 * 1000);
    }

    do {
        /* first a simple non-blocking try without releasing the GIL */
        r = PyThread_acquire_lock_timed(lock, 0, 0);
        if (r == PY_LOCK_FAILURE && microseconds != 0) {
            Py_BEGIN_ALLOW_THREADS  // GIL is released here
            r = PyThread_acquire_lock_timed(lock, microseconds, 1);
            Py_END_ALLOW_THREADS
        }

        if (r == PY_LOCK_INTR) {
            /* Run signal handlers if we were interrupted.  Propagate
             * exceptions from signal handlers, such as KeyboardInterrupt, by
             * passing up PY_LOCK_INTR.  */
            if (Py_MakePendingCalls() < 0) {
                return PY_LOCK_INTR;
            }

            /* If we're using a timeout, recompute the timeout after processing
             * signals, since those can take time.  */
            if (microseconds > 0) {
                _PyTime_gettimeofday(&curtime);
                microseconds = ((endtime.tv_sec - curtime.tv_sec) * 1000000 +
                                (endtime.tv_usec - curtime.tv_usec));

                /* Check for negative values, since those mean block forever. */
                if (microseconds <= 0) {
                    r = PY_LOCK_FAILURE;
                }
            }
        }
    } while (r == PY_LOCK_INTR);  /* Retry if we were interrupted. */

    return r;
}

This implementation is responsive, and doesn't require frequent wakeups that re-acquire the GIL, so you get the best of both worlds.
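To see the responsiveness difference from user code, here is a small self-contained experiment (DELAY is deliberately long so the contrast with a time.sleep-based loop is obvious):

import threading
import time

DELAY = 10.0
exit_flag = threading.Event()

def worker():
    while not exit_flag.wait(timeout=DELAY):
        pass  # action() would go here

t = threading.Thread(target=worker)
start = time.time()
t.start()

exit_flag.set()   # ask the worker to stop almost immediately
t.join()
print('wait() version stopped after %.2f s' % (time.time() - start))
# prints ~0.00 s; a time.sleep(DELAY) loop could take up to 10 s to notice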
Heroku Sporadic High Response Time
This is very specific, but I will try to be brief:

We are running a Django app on Heroku. Three servers:

test (1 web, 1 celery dyno)
training (1 web, 1 celery dyno)
prod (2 web, 1 celery dyno)

We are using Gunicorn with gevents and 4 workers on each dyno.

We are experiencing sporadic high service times. Here is an example from Logentries:

High Response Time:
heroku router - - at=info method=GET path="/accounts/login/" dyno=web.1 connect=1ms service=6880ms status=200 bytes=3562

I have been Googling this for weeks now. We are unable to reproduce at will but experience these alerts 0 to 5 times a day. Notable points:

Occurs on all three apps (all running similar code)
Occurs on different pages, including simple pages such as 404 and /admin
Occurs at random times
Occurs with varying throughput. One of our instances only drives 3 users/day. It is not related to sleeping dynos because we ping with New Relic and the issue can occur mid-session
Unable to reproduce at will. I have experienced this issue personally once. Clicking a page that normally executes in 500ms resulted in a 30 second delay and eventually an app error screen from Heroku's 30s timeout
High response times vary from 5000ms - 30000ms
New Relic does not point to a specific issue. Here are the past few transactions and times:

RegexURLResolver.resolve             4,270ms
SessionMiddleware.process_request    2,750ms
Render login.html                    1,230ms
WSGIHandler                          1,390ms

The above are simple calls and do not normally take near that amount of time.

What I have narrowed it down to:

This article on Gunicorn and slow clients
- I have seen this issue happen with slow clients but also at our office where we have a fiber connection.

Gevent and async workers not playing nicely
- We've switched to gunicorn sync workers and the problem still persists.

Gunicorn worker timeout
- It's possible that workers are somehow being kept alive in a null state.

Insufficient workers / dynos
- No indication of CPU/memory/db overutilization, and New Relic doesn't display any indication of DB latency.

Noisy neighbors
- Among my multiple emails with Heroku, the support rep has mentioned at least one of my long requests was due to a noisy neighbor, but was not convinced that was the issue.

Subdomain 301
- The requests are coming through fine, but getting stuck randomly in the application.

Dynos restarting
- If this were the case, many users would be affected. Also, I can see that our dynos have not restarted recently.

Heroku routing / service issue
- It is possible that the Heroku service is less than advertised and this is simply a downside of using their service.

We have been having this issue for the past few months, but now that we are scaling it needs to be fixed. Any ideas would be much appreciated, as I have exhausted nearly every SO or Google link.
I have been in contact with the Heroku support team over the past 6 months. It has been a long period of narrowing down through trial/error, but we have identified the problem.

I eventually noticed these high response times corresponded with a sudden memory swap, and even though I was paying for a Standard dyno (which was not idling), these memory swaps were taking place when my app had not received traffic recently. It was also clear by looking at the metrics charts that this was not a memory leak, because the memory would plateau off rather than grow without bound (metrics chart omitted here).

After many discussions with their support team, I was provided this explanation:

Essentially, what happens is some backend runtimes end up with a combination of applications that end up using enough memory that the runtime has to swap. When that happens, a random set of dyno containers on the runtime are forced to swap arbitrarily by small amounts (note that "random" here is likely containers with memory that hasn't been accessed recently but is still resident in memory). At the same time, the apps that are using large amounts of memory also end up swapping heavily, which causes more iowait on the runtime than normal.

We haven't changed how tightly we pack runtimes at all since this issue started becoming more apparent, so our current hypothesis is that the issue may be coming from customers moving from versions of Ruby prior to 2.1 to 2.1+. Ruby makes up for a huge percentage of the applications that run on our platform and Ruby 2.1 made changes to its GC that trades memory usage for speed (essentially, it GCs less frequently to get speed gains). This results in a notable increase in memory usage for any application moving from older versions of Ruby. As such, the same number of Ruby apps that maintained a certain memory usage level before would now start requiring more memory usage. That phenomenon combined with misbehaving applications that have resource abuse on the platform hit a tipping point that got us to the situation we see now where dynos that shouldn't be swapping are. We have a few avenues of attack we're looking into, but for now a lot of the above is still a little bit speculative. We do know for sure that some of this is being caused by resource abusive applications though and that's why moving to Performance-M or Performance-L dynos (which have dedicated backend runtimes) shouldn't exhibit the problem. The only memory usage on those dynos will be your application's. So, if there's swap it'll be because your application is causing it.

I am confident this is the issue I and others have been experiencing, as it is related to the architecture itself and not to any combination of language/framework/configs.

There doesn't seem to be a good solution other than A) toughing it out and waiting, or B) switching to one of their dedicated instances.

I am aware of the crowd that says "This is why you should use AWS", but I find that the benefits Heroku offers outweigh some occasional high response times, and their pricing has gotten better over the years. If you are suffering from the same issue, the "best solution" will be your choice. I will update this answer when I hear anything more.

Good luck!
Permission denied on dl.open() with ipython but not with python
My initial goal is to open a dll file on Cygwin using ctypes. However I found some issues with it. I dug down to sys.dl, which returns an unknown Permission denied only on IPython.

With python everything looks fine:

$ ls
my.dll
$ python
Python 2.7.8 (default, Jul 28 2014, 01:34:03)
[GCC 4.8.3] on cygwin
>>> import dl
>>> dl.open('my.dll')
<dl.dl object at 0xfffaa0c0>

With ipython I get the error:

$ ipython
Python 2.7.8 (default, Jul 28 2014, 01:34:03)

In [1]: import dl

In [2]: dl.open('my.dll')
---------------------------------------------------------------------------
error                                     Traceback (most recent call last)
<ipython-input-2-c681630fa713> in <module>()
----> 1 dl.open('my.dll')

error: Permission denied

I investigated this using strace. The output log for IPython is huge, more than 4MB. Fortunately, I identified some weird things:

symlink.check(C:\Users\user\Home\projects\foo\my.dll, 0x28AB88) (0x4022)
35 2705178 [main] python2.7 16924 path_conv::check: this->path(C:\Users\user\Home\projects\foo\my.dll), has_acls(1)
37 2705215 [main] python2.7 16924 cwdstuff::get: posix /cygdrive/c/Users/user/Home/projects/foo
32 2705247 [main] python2.7 16924 cwdstuff::get: (C:\Users\user\Home\projects\foo) = cwdstuff::get (0x8006ECF0, 32768, 0, 0), errno 11
--- Process 14376, exception c0000138 at 7726163E
3286 2708533 [main] python2.7 16924 seterrno_from_win_error: /home/corinna/src/cygwin/cygwin-1.7.35/cygwin-1.7.35-1.i686/src/src/winsup/cygwin/dlfcn.cc:174 windows error 182
42 2708575 [main] python2.7 16924 geterrno_from_win_error: unknown windows error 182, setting errno to 13
36 2708611 [main] python2.7 16924 dlopen: ret 0x0

Who is /home/corinna? I have no corinna user in my installation, nor on my Windows machine. Corinna does not come from my installation. Is it some hard-coded stuff?

Now, here is what I get from strace for python:

symlink.check(C:\Users\user\Home\projects\foo\my.dll, 0x28B728) (0x4022)
26 10440048 [main] python 12604 path_conv::check: this->path(C:\Users\user\Home\projects\foo\my.dll), has_acls(1)
23 10440071 [main] python 12604 cwdstuff::get: posix /cygdrive/c/Users/user/Home/projects/foo
25 10440096 [main] python 12604 cwdstuff::get: (C:\Users\user\Home\projects\foo) = cwdstuff::get (0x8006ECF0, 32768, 0, 0), errno 0

3405 10443501 [main] python 12604 dlopen: ret 0x5B9C0000

dlopen is returning 0x0 in IPython while it is returning 0x5B9C0000 for python. I notice that cwdstuff::get is raising an error before dlopen is called.

EDIT

I sent a message to Cygwin's mailing list, and Corinna's answer regarding this issue is:

This is not Cygwin's fault, AFAICS. Cygwin never loads functions by ordinal. This is also a bit on the lean side as far as information is concerned. One can't see how the process calls dlopen, for instance.

Corinna

How do I solve this issue?

My earlier tests using ctypes

Initially when I asked my question I was just playing with ctypes. I am working on Cygwin 32-bit and Windows 7. With IPython I got an OSError when I tried to load a dll using cdll.LoadLibrary.
Two ideas:

1) In the next cell, type %pdb, and then interactively "print self._name" to see what it is.

2) Use a full path in cdll.LoadLibrary("foo.dll") to see if that works.

Once you know what the issue is, you can decide whose bug it is and report it (it could be a ctypes issue, but it's probably ipython).
SSL InsecurePlatform error when using Requests package
I'm using Python 2.7.3 and Requests. I installed Requests via pip. I believe it's the latest version. I'm running on Debian Wheezy.

I've used Requests lots of times in the past and never faced this issue, but it seems that when making https requests with Requests I get an InsecurePlatform exception. The error mentions urllib3, but I don't have that installed. I did install it to check if it resolved the error, but it didn't.

/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:79:
InsecurePlatformWarning: A true SSLContext object is not
available. This prevents urllib3 from configuring SSL appropriately and
may cause certain SSL connections to fail. For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.

Any ideas as to why I'm getting this? I've checked the docs, as specified in the error message, but the docs say to import urllib3 and either disable the warning, or provide a certificate.
Use the somewhat hidden security feature:

pip install 'requests[security]'

or

pip install pyOpenSSL ndg-httpsclient pyasn1

Both commands install the following extra packages:

pyOpenSSL
ndg-httpsclient
pyasn1

Please note that this is not required for python-2.7.9+.

If pip install fails with errors, check whether you have the required development packages for libffi, libssl and python installed in your system, using your distribution's package manager:

Debian/Ubuntu - python-dev libffi-dev libssl-dev packages.
Fedora - openssl-devel python-devel libffi-devel packages.

The distro list above is incomplete.

Workaround (see the original answer by @TomDotTom):

In case you cannot install some of the required development packages, there's also an option to disable that warning:

import requests.packages.urllib3
requests.packages.urllib3.disable_warnings()
Updating a dataframe column in spark
Looking at the new Spark DataFrame API, it is unclear whether it is possible to modify dataframe columns.

How would I go about changing a value in row x, column y of a dataframe?

In pandas this would be:

df.ix[x, y] = new_value
While you cannot modify a column as such, you may operate on a column and return a new DataFrame reflecting that change. For that you'd first create a UserDefinedFunction implementing the operation to apply and then selectively apply that function to the targeted column only. In Python:

from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import StringType

name = 'target_column'
udf = UserDefinedFunction(lambda x: 'new_value', StringType())
new_df = old_df.select(*[udf(column).alias(name) if column == name else column
                         for column in old_df.columns])

new_df now has the same schema as old_df (assuming that old_df.target_column was of type StringType as well) but all values in column target_column will be new_value.
Django Rest Framework 3.1 breaks pagination.PaginationSerializer
I just updated to Django Rest Framework 3.1 and it seems that all hell broke loose. In my serializers.py I was having the following code:

class TaskSerializer(serializers.ModelSerializer):
    class Meta:
        model = task
        exclude = ('key', ...)

class PaginatedTaskSerializer(pagination.PaginationSerializer):
    class Meta:
        object_serializer_class = TaskSerializer

which was working just fine. Now with the release of 3.1 I can't find examples of how to do the same thing, since PaginationSerializer is no longer there.

I have tried to subclass PageNumberPagination and use its default paginate_queryset and get_paginated_response methods, but I can no longer get their results serialized. In other words, my problem is that I can no longer do this:

class Meta:
    object_serializer_class = TaskSerializer

Any ideas? Thanks in advance
I think I figured it out (for the most part at least): what we should have used from the very beginning is the built-in paginator. Change your views.py to this:

from rest_framework.pagination import PageNumberPagination

class CourseListView(AuthView):
    def get(self, request, format=None):
        """
        Returns a JSON response with a listing of course objects
        """
        courses = Course.objects.order_by('name').all()

        paginator = PageNumberPagination()

        # From the docs:
        # The paginate_queryset method is passed the initial queryset
        # and should return an iterable object that contains only the
        # data in the requested page.
        result_page = paginator.paginate_queryset(courses, request)

        # Now we just have to serialize the data just like you suggested.
        serializer = CourseSerializer(result_page, many=True)

        # From the docs:
        # The get_paginated_response method is passed the serialized page
        # data and should return a Response instance.
        return paginator.get_paginated_response(serializer.data)

For the desired page size just set PAGE_SIZE in settings.py:

REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
    'PAGE_SIZE': 15
}

You should be all set now, with all the options present in the body of the response (count, next and back links) ordered just like before the update.

However, there is one more thing that still troubles me: we should also be able to get the new HTML pagination controls, which for some reason are missing for now... I could definitely use a couple more suggestions on this...
How to fix pylint logging-not-lazy?
I am using prospector and there are many errors:

Line: 31
  pylint: logging-not-lazy / Specify string format arguments as logging function parameters (col 16)
Line: 42
  pylint: logging-not-lazy / Specify string format arguments as logging function parameters (col 12)

My code is:

logging.debug("detect mimetypes faild because %s" % e )
This means that you should rewrite your code as:

logging.debug("detect mimetypes faild because %s", e)

According to https://docs.python.org/2/library/logging.html:

Logger.debug(msg, *args, **kwargs)

Logs a message with level DEBUG on this logger. The msg is the message format string, and the args are the arguments which are merged into msg using the string formatting operator. (Note that this means that you can use keywords in the format string, together with a single dictionary argument.)
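The same pattern extends to any number of arguments, and besides silencing pylint it defers the formatting work until logging decides the record will actually be emitted:

import logging

logging.basicConfig(level=logging.DEBUG)

path, mimetype = '/tmp/example', 'text/plain'
# the formatting happens inside logging, and only if DEBUG is enabled
logging.debug("detect mimetypes failed for %s (guessed type %s)", path, mimetype)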
Flask-RESTful - don't return object property instead of returning null
Let's say I've got a clients table with id, name and email fields. The email field is optional. The code looks like this:

client_fields = {
    'id': fields.String,
    'name': fields.String,
    'email': fields.String
}

And for displaying:

class ClientList(Resource):
    @marshal_with(client_fields)
    def get(self):
        return model.Client.query.all()

When email is not provided, the API returns JSON like this:

{
    "id": "1",
    "name": "John Doe",
    "email": null
}

But instead I want it to return this object:

{
    "id": "1",
    "name": "John Doe"
}

Which basically means that instead of a property with a null value I want it to return no such property at all. Is there a way to achieve that?
I would use the marshal function instead of the marshal_with decorator (note that marshal takes the object first and the field definition second; client_fields_no_email is client_fields without the 'email' key):

class ClientList(Resource):
    def get(self):
        clients = []
        for client in model.Client.query.all():
            if client.email:
                clients.append(marshal(client, client_fields))
            else:
                clients.append(marshal(client, client_fields_no_email))
        return clients

Or even better:

class ClientList(Resource):
    def get(self):
        return [client_marshal(client) for client in model.Client.query.all()]

with

def client_marshal(client):
    if client.email:
        return marshal(client, {'id': fields.String,
                                'name': fields.String,
                                'email': fields.String})
    else:
        return marshal(client, {'id': fields.String,
                                'name': fields.String})
Super init vs. parent.__init__
We're following a Python class in a book I'm involved with that does not use super for initialization from an inherited class. I'm having trouble giving a clear, straightforward description of the differences between these two cases:

class Parent(object):
    def __init__(self):
        ....

class Child(Parent):
    def __init__(self):
        super(Child, self).__init__()

class Child(Parent):
    def __init__(self):
        Parent.__init__(self)

But I am embarrassed to admit I can't explain why one case is superior to the other. Can anyone offer a clear explanation?
The purpose of super is to handle inheritance diamonds. If the class inheritance structure uses only single inheritance, then using super() will result in the same calls as explicit calls to the "parent" class.

Consider this inheritance diamond:

class A(object):
    def __init__(self):
        print('Running A.__init__')
        super(A, self).__init__()

class B(A):
    def __init__(self):
        print('Running B.__init__')
        super(B, self).__init__()

class C(A):
    def __init__(self):
        print('Running C.__init__')
        super(C, self).__init__()

class D(B, C):
    def __init__(self):
        print('Running D.__init__')
        super(D, self).__init__()

foo = D()

which prints

Running D.__init__
Running B.__init__
Running C.__init__
Running A.__init__

while if we change B to B2 and use explicit calls to the parent __init__:

class B2(A):
    def __init__(self):
        print('Running B.__init__')
        A.__init__(self)

class D2(B2, C):
    def __init__(self):
        print('Running D.__init__')
        super(D2, self).__init__()

bar = D2()

then the chain of init calls becomes

Running D.__init__
Running B.__init__
Running A.__init__

So the call to C.__init__ is skipped entirely.

There is no one preferred option. If you can guarantee that you do not want to support multiple inheritance, then explicit parent calls are simpler and clearer. If you wish to support multiple inheritance now or in the future, then you need to use super(). But understand that there are some pitfalls involved with using super; with proper use these pitfalls can be avoided.
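A quick way to predict the call chain in diamonds like the one above is to inspect the class's method resolution order directly, which is exactly the order super() walks:

print(D.__mro__)
# roughly: (<class 'D'>, <class 'B'>, <class 'C'>, <class 'A'>, <class 'object'>)
# super(D, self).__init__() hands control to the next class in this tuple,
# which is why C.__init__ is not skipped in the first example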
How do you kill Futures once they have started?
I am using the new concurrent.futures module (which also has a Python 2 backport) to do some simple multithreaded I/O. I am having trouble understanding how to cleanly kill tasks started using this module.

Check out the following Python 2/3 script, which reproduces the behavior I'm seeing:

#!/usr/bin/env python
from __future__ import print_function

import concurrent.futures
import time

def control_c_this():
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        future1 = executor.submit(wait_a_bit, name="Jack")
        future2 = executor.submit(wait_a_bit, name="Jill")

        for future in concurrent.futures.as_completed([future1, future2]):
            future.result()

    print("All done!")

def wait_a_bit(name):
    print("{n} is waiting...".format(n=name))
    time.sleep(100)

if __name__ == "__main__":
    control_c_this()

While this script is running it appears impossible to kill cleanly using the regular Control-C keyboard interrupt. I am running on OS X.

On Python 2.7 I have to resort to kill from the command line to kill the script. Control-C is just ignored.
On Python 3.4, Control-C works if you hit it twice, but then a lot of strange stack traces are dumped.

Most documentation I've found online talks about how to cleanly kill threads with the old threading module. None of it seems to apply here.

And all the methods provided within the concurrent.futures module to stop stuff (like Executor.shutdown() and Future.cancel()) only work when the Futures haven't started yet or are complete, which is pointless in this case. I want to interrupt the Future immediately.

My use case is simple: When the user hits Control-C, the script should exit immediately like any well-behaved script does. That's all I want.

So what's the proper way to get this behavior when using concurrent.futures?
It's kind of painful. Essentially, your worker threads have to be finished before your main thread can exit. You cannot exit unless they do. The typical workaround is to have some global state that each thread can check to determine if it should do more work or not. Here's the quote explaining why. In essence, if threads exited when the interpreter does, bad things could happen.

Here's a working example. Note that C-c takes at most 1 sec to propagate because of the sleep duration of the child threads.

#!/usr/bin/env python
from __future__ import print_function

import concurrent.futures
import time
import sys

quit = False

def wait_a_bit(name):
    while not quit:
        print("{n} is doing work...".format(n=name))
        time.sleep(1)

def setup():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)
    future1 = executor.submit(wait_a_bit, "Jack")
    future2 = executor.submit(wait_a_bit, "Jill")

    # main thread must be doing "work" to be able to catch a Ctrl+C
    # http://www.luke.maurits.id.au/blog/post/threads-and-signals-in-python.html
    while (not (future1.done() and future2.done())):
        time.sleep(1)

if __name__ == "__main__":
    try:
        setup()
    except KeyboardInterrupt:
        quit = True
Why is this Haskell program so much slower than an equivalent Python one?
As part of a programming challenge, I need to read, from stdin, a sequence of space-separated integers (on a single line), and print the sum of those integers to stdout. The sequence in question can contain as many as 10,000,000 integers.

I have two solutions for this: one written in Haskell (foo.hs), and another, equivalent one, written in Python 2 (foo.py). Unfortunately, the (compiled) Haskell program is consistently slower than the Python program, and I'm at a loss for explaining the discrepancy in performance between the two programs; see the Benchmark section below. If anything, I would have expected Haskell to have the upper hand...

What am I doing wrong? How can I account for this discrepancy? Is there an easy way of speeding up my Haskell code?

(For information, I'm using a mid-2010 Macbook Pro with 8Gb RAM, GHC 7.8.4, and Python 2.7.9.)

foo.hs

main = print . sum =<< getIntList

getIntList :: IO [Int]
getIntList = fmap (map read . words) getLine

(compiled with ghc -O2 foo.hs)

foo.py

ns = map(int, raw_input().split())
print sum(ns)

Benchmark

In the following, test.txt consists of a single line of 10 million space-separated integers.

# Haskell
$ time ./foo < test.txt
1679257

real    0m36.704s
user    0m35.932s
sys     0m0.632s

# Python
$ time python foo.py < test.txt
1679257

real    0m7.916s
user    0m7.756s
sys     0m0.151s
read is slow. For bulk parsing, use bytestring or text primitives, or attoparsec.

I did some benchmarking. Your original version ran in 23.9 seconds on my computer. The version below ran in 0.35 seconds:

import qualified Data.ByteString.Char8 as B
import Control.Applicative
import Data.Maybe
import Data.List
import Data.Char

main = print . sum =<< getIntList

getIntList :: IO [Int]
getIntList = map (fst . fromJust . B.readInt) . B.words <$> B.readFile "test.txt"

By specializing the parser to your test.txt file, I could get the runtime down to 0.26 seconds:

getIntList :: IO [Int]
getIntList = unfoldr (B.readInt . B.dropWhile (== ' ')) <$> B.readFile "test.txt"
Attribute Error trying to run Gmail API quickstart in Python
It looks like there might be a version mismatch problem here. How should I go about fixing it? I've tried updating six with pip, but that doesn't do anything.

Here's the error I see:

Traceback (most recent call last):
  File "./quickstart.py", line 27, in <module>
    credentials = run(flow, STORAGE, http=http)
  File "/Library/Python/2.7/site-packages/oauth2client/util.py", line 137, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/Library/Python/2.7/site-packages/oauth2client/old_run.py", line 120, in run
    authorize_url = flow.step1_get_authorize_url()
  File "/Library/Python/2.7/site-packages/oauth2client/util.py", line 137, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/Library/Python/2.7/site-packages/oauth2client/client.py", line 1827, in step1_get_authorize_url
    return _update_query_params(self.auth_uri, query_params)
  File "/Library/Python/2.7/site-packages/oauth2client/client.py", line 435, in _update_query_params
    parts = urllib.parse.urlparse(uri)
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'urlparse'
I ran into a very similar problem, albeit using a completely different API (compute engine). I ended up rolling back the google api client to the previous version - 1.3.2 - as opposed to the latest - 1.4.0. To do this, I ran: sudo pip install -I google-api-python-client==1.3.2 And was then able to run my code. I'm not sure if this is the same problem, but it seems to have done the trick for me, hope this helps.
Django 1.8 RC1: ProgrammingError when creating database tables
I'm using AbstractBaseUser for my user models in various projects. Updating to Django 1.8 RC1 works smoothly and I can run the migrate management command. However, when trying to create a fresh database table layout from scratch, I get the following error:

python manage.py migrate
>>> ...
>>> ...
>>> django.db.utils.ProgrammingError: relation "auth_group" does not exist

All works perfectly with Django 1.7.x and I cannot find anything about this issue elsewhere. So, is it a bug in the RC1 version, or did something change that I'm not aware of in Django 1.8? Unfortunately, the error message doesn't really help... but I'm pretty sure it has to do with the automatic migrations that come with the new Django version.
To copy the answer I got from the Django ticket mentioned above: before calling "python manage.py migrate" to create the database layout, one needs to create a migration for the app that contains the custom user class:

python manage.py makemigrations appname

This creates a migration file within the app directory, et voilà: migrate does work and creates the other tables.
What is the Spark DataFrame method `toPandas` actually doing?
I'm a beginner with the Spark DataFrame API. I use this code to load a tab-separated csv into a Spark DataFrame:

lines = sc.textFile('tail5.csv')
parts = lines.map(lambda l: l.strip().split('\t'))
fnames = *some name list*
schemaData = StructType([StructField(fname, StringType(), True) for fname in fnames])
ddf = sqlContext.createDataFrame(parts, schemaData)

Suppose I create a DataFrame with Spark from new files, and convert it to pandas using the built-in method toPandas():

Does it store the Pandas object in local memory?
Is Pandas low-level computation all handled by Spark?
Does it expose all pandas dataframe functionality? (I guess yes.)
Can I convert it toPandas and just be done with it, without so much touching the DataFrame API?
Using Spark to read in a CSV file to pandas is quite a roundabout method for achieving the end goal of reading a CSV file into memory.

It seems like you might be misunderstanding the use cases of the technologies in play here. Spark is for distributed computing (though it can be used locally). It's generally far too heavyweight to be used for simply reading in a CSV file.

In your example, the sc.textFile method will simply give you a Spark RDD that is effectively a list of text lines. This likely isn't what you want. No type inference will be performed, so if you want to sum a column of numbers in your CSV file, you won't be able to, because they are still strings as far as Spark is concerned.

Just use pandas.read_csv and read the whole CSV into memory. Pandas will automatically infer the type of each column. Spark doesn't do this.

Now to answer your questions:

Does it store the Pandas object in local memory?
Yes. toPandas() will convert the Spark DataFrame into a Pandas DataFrame, which is of course in memory.

Is Pandas low-level computation all handled by Spark?
No. Pandas runs its own computations; there's no interplay between Spark and pandas, there's simply some API compatibility.

Does it expose all pandas dataframe functionality?
No. For example, Series objects have an interpolate method which isn't available in PySpark Column objects. There are many methods and functions that are in the pandas API that are not in the PySpark API.

Can I convert it toPandas and just be done with it, without so much touching the DataFrame API?
Absolutely. In fact, you probably shouldn't even use Spark at all in this case. pandas.read_csv will likely handle your use case unless you're working with a huge amount of data.

Try to solve your problem with simple, low-tech, easy-to-understand libraries, and only go to something more complicated as you need it. Many times, you won't need the more complex technology.
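For comparison, the direct pandas route for the file in the question would be something like this sketch (assuming fnames is your list of column names and the file has no header row):

import pandas as pd

df = pd.read_csv('tail5.csv', sep='\t', header=None, names=fnames)
print(df.dtypes)   # numeric columns come back as int64/float64, not strings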
cycle through multiple list using itertools.cycle()
I have a list of servers. Every server has a list of names on it. Example:

server1 = ['a','b','c']
server2 = ['d','e','f']
server3 = ['g','h','i']

I want to iterate per name across servers, not per server. For example, after picking 'a' in server1, move to 'd' (not 'b'), and so on. If I'm going to use itertools.cycle(), do I have to create a list of servers to cycle through?

My expected result is ['a','d','g','b','e','h','c','f','i']. Can you give me a simple example of how to cycle through multiple lists?
We can also use itertools.chain.from_iterable(), which is faster in comparison:

import itertools

server1 = ['a','b','c']
server2 = ['d','e','f']
server3 = ['g','h','i']

print list(itertools.chain.from_iterable(zip(server1, server2, server3)))

Results:

['a', 'd', 'g', 'b', 'e', 'h', 'c', 'f', 'i']
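One caveat: zip stops at the shortest list, so names are silently dropped if the servers hold different numbers of them. A sketch using izip_longest with a sentinel keeps the leftovers (Python 2 shown to match the question; on Python 3 use itertools.zip_longest):

import itertools

server1 = ['a', 'b', 'c', 'x']   # one extra name
server2 = ['d', 'e', 'f']
server3 = ['g', 'h', 'i']

_pad = object()  # unique sentinel, so real None values in the data stay safe
merged = [name for name in
          itertools.chain.from_iterable(
              itertools.izip_longest(server1, server2, server3, fillvalue=_pad))
          if name is not _pad]
print merged  # ['a', 'd', 'g', 'b', 'e', 'h', 'c', 'f', 'i', 'x']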
Edit distance such as Levenshtein taking into account proximity on keyboard
Is there an edit distance such as Levenshtein which takes into account distance for substitutions? For example, if we would consider if words are equal, typo and tylo are really close (p and l are physically close on the keyboard), while typo and tyqo are far apart. I'd like to allocate a smaller distance to more likely typos. There must be a metric that takes this kind of promixity into account?
The kind of distance you ask about is not included in Levenshtein, but you can build it with a helper like euclidean or manhattan distance. My simple assumption is that q (in the English QWERTY layout) sits at cartesian coordinates (x=0, y=0), so w will be at (x=1, y=0), and so on. A (partial) table:

keyboard_cartesian = {
    'q': {'x': 0, 'y': 0},
    'w': {'x': 1, 'y': 0},
    'e': {'x': 2, 'y': 0},
    'r': {'x': 3, 'y': 0},
    # ...
    'a': {'x': 0, 'y': 1},
    # ...
    'z': {'x': 0, 'y': 2},
    'x': {'x': 1, 'y': 2},
    # ...
}

Assume the word qaz has a meaning. The Levenshtein distance between qaz and both waz and eaz is 1. To check which misspelling is more likely, take the differing pairs (here (q,w) and (q,e)) and calculate the euclidean distance:

>>> from math import *
>>> def euclidean_distance(a, b):
...     X = (keyboard_cartesian[a]['x'] - keyboard_cartesian[b]['x'])**2
...     Y = (keyboard_cartesian[a]['y'] - keyboard_cartesian[b]['y'])**2
...     return sqrt(X + Y)
...
>>> euclidean_distance('q', 'w')
1.0
>>> euclidean_distance('q', 'e')
2.0

This means a misspelling of qaz as waz is more likely than qaz as eaz.
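Plugging this into an edit distance is then mostly a matter of making the substitution cost proportional to key distance. Below is a minimal sketch of such a weighted Levenshtein, building on euclidean_distance and assuming keyboard_cartesian has been filled in for every key; the 1.0 insert/delete cost and the scale divisor are arbitrary knobs to tune:

def substitution_cost(a, b, scale=2.0):
    # cheap substitutions for physically close keys, capped at 1.0 for distant ones
    if a == b:
        return 0.0
    return min(1.0, euclidean_distance(a, b) / scale)

def weighted_levenshtein(s, t):
    # classic dynamic-programming table, but with float substitution costs
    rows, cols = len(s) + 1, len(t) + 1
    d = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows):
        d[i][0] = float(i)
    for j in range(1, cols):
        d[0][j] = float(j)
    for i in range(1, rows):
        for j in range(1, cols):
            d[i][j] = min(d[i - 1][j] + 1.0,   # deletion
                          d[i][j - 1] + 1.0,   # insertion
                          d[i - 1][j - 1] + substitution_cost(s[i - 1], t[j - 1]))
    return d[-1][-1]

print(weighted_levenshtein('typo', 'tylo'))  # ~0.71: p and l are close on the keyboard
print(weighted_levenshtein('typo', 'tyqo'))  # 1.0 (capped): p and q are far apart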
Why can't I call hash() on an apparently hashable method of an unhashable instance?
Let's say I have a dictionary:

>>> d = {}

It has a method clear():

>>> d.clear
<built-in method clear of dict object at 0x7f209051c988>

... which has a __hash__ attribute:

>>> d.clear.__hash__
<method-wrapper '__hash__' of builtin_function_or_method object at 0x7f2090456288>

... which is callable:

>>> callable(d.clear.__hash__)
True

So why can't I hash it?

>>> hash(d.clear)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'

Note: I know that dict objects are unhashable – I'm curious as to why this restriction extends to their methods, even though, as noted above, they appear to claim otherwise.
It is a bound method, and bound methods hold a reference to self, i.e. the dictionary. This makes the method unhashable.

You can hash the unbound dict.clear method:

>>> d = {}
>>> d.clear.__self__
{}
>>> d.clear.__self__ is d
True
>>> hash(d.clear)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>> hash(dict.clear)
-9223372036586189204

Methods on instances that are hashable will themselves be hashable, so the object type for built-in bound methods implements a __hash__ method but raises TypeError when the __self__ attribute is not hashable. This is consistent with the object.__hash__ method documentation; if you can set it to None or not implement it at all then that is preferable, but for these cases where the hashability is only known at runtime, raising a TypeError is the only option available.
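Conversely, a bound method of a hashable instance hashes fine; roughly speaking, the method's hash is derived from the hashes of __func__ and __self__:

class Foo(object):           # instances are hashable by default (identity-based)
    def bar(self):
        pass

f = Foo()
print(hash(f.bar))            # fine: f is hashable
print(hash(().count))         # fine too: tuples are hashable
# hash({}.clear)              # would raise TypeError, exactly as above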
Python collections.Counter: most_common complexity
I'm wondering what the complexity of the most_common method provided by the collections.Counter object in Python 2.7 is.

More specifically, is the Counter keeping some kind of sorted list while it is being updated, allowing it to perform the most_common operation faster than O(n) when n is the number of (unique) items added to the counter?

For your information, I am processing a large amount of text data trying to find the n-th most frequent tokens.

I checked the official documentation (https://docs.python.org/2/library/collections.html#collections.Counter) and the CPython wiki (https://wiki.python.org/moin/TimeComplexity) but I could not find the answer. Thank you in advance!

Romain.
From the source code of collections.py, we see that if we don't specify a number of returned elements, most_common returns a sorted list of the counts. This is an O(n log n) algorithm.

If we use most_common to return k > 1 elements, then it uses heapq's nlargest method. This is an O(k) + O((n - k) log k) + O(k log k) algorithm, which is very good for a small constant k, since it's essentially linear. The O(k) part comes from heapifying the initial k counts, the second part from n - k calls to the heappushpop method, and the third part from sorting the final heap of k elements. Since k <= n we can conclude that the complexity is:

O(n log k)

If k = 1 then it's easy to show that the complexity is:

O(n)
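You can drive the same machinery from user code; what most_common does in 2.7 boils down to the following (mirroring the collections.py source discussed above):

import heapq
from collections import Counter
from operator import itemgetter

counts = Counter('abracadabra')     # a:5, r:2, b:2, c:1, d:1

# most_common(k) for k > 1 is essentially this O(n log k) call:
print(heapq.nlargest(2, counts.iteritems(), key=itemgetter(1)))
# [('a', 5), ('r', 2)]   (tie order between r and b is arbitrary)

# most_common() with no argument is essentially this O(n log n) sort:
print(sorted(counts.iteritems(), key=itemgetter(1), reverse=True))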
How to properly create and run concurrent tasks using python's asyncio module?
I am trying to properly understand and implement two concurrently running Task objects using Python 3's relatively new asyncio module.

In a nutshell, asyncio seems designed to handle asynchronous processes and concurrent Task execution over an event loop. It promotes the use of await (applied in async functions) as a callback-free way to wait for and use a result, without blocking the event loop. (Futures and callbacks are still a viable alternative.)

It also provides the asyncio.Task() class, a specialized subclass of Future designed to wrap coroutines, preferably invoked by using the asyncio.ensure_future() method. The intended use of asyncio tasks is to allow independently running tasks to run 'concurrently' with other tasks within the same event loop. My understanding is that Tasks are connected to the event loop which then automatically keeps driving the coroutine between await statements.

I like the idea of being able to use concurrent Tasks without needing to use one of the Executor classes, but I haven't found much elaboration on implementation.

This is how I'm currently doing it:

import asyncio

print('running async test')

async def say_boo():
    i = 0
    while True:
        await asyncio.sleep(0)
        print('...boo {0}'.format(i))
        i += 1

async def say_baa():
    i = 0
    while True:
        await asyncio.sleep(0)
        print('...baa {0}'.format(i))
        i += 1

# OPTION 1: wrap in Task object
# -> automatically attaches to event loop and executes
boo = asyncio.ensure_future(say_boo())
baa = asyncio.ensure_future(say_baa())

loop = asyncio.get_event_loop()
loop.run_forever()

In the case of trying to concurrently run two looping Tasks, I've noticed that unless the Task has an internal await expression, it will get stuck in the while loop, effectively blocking other tasks from running (much like a normal while loop). However, as soon as the Tasks have to wait, even for just a fraction of a second, they seem to run concurrently without an issue.

Thus, the await statements seem to provide the event loop with a foothold for switching back and forth between the tasks, giving the effect of concurrency.

Example output with internal await:

running async test
...boo 0
...baa 0
...boo 1
...baa 1
...boo 2
...baa 2

Example output without internal await:

...boo 0
...boo 1
...boo 2
...boo 3
...boo 4

Questions

Does this implementation pass for a 'proper' example of concurrent looping Tasks in asyncio?
Is it correct that the only way this works is for a Task to provide a blocking point (an await expression) in order for the event loop to juggle multiple tasks?
Yes, any coroutine that's running inside your event loop will block other coroutines and tasks from running, unless it:

- Calls another coroutine using yield from or await (if using Python 3.5+).
- Returns.

This is because asyncio is single-threaded; the only way for the event loop to run is for no other coroutine to be actively executing. Using yield from/await suspends the coroutine temporarily, giving the event loop a chance to work.

Your example code is fine, but in many cases, you probably wouldn't want long-running code that isn't doing asynchronous I/O running inside the event loop to begin with. In those cases, it often makes more sense to use BaseEventLoop.run_in_executor to run the code in a background thread or process. ProcessPoolExecutor would be the better choice if your task is CPU-bound; ThreadPoolExecutor would be used if you need to do some I/O that isn't asyncio-friendly.

Your two loops, for example, are completely CPU-bound and don't share any state, so the best performance would come from using ProcessPoolExecutor to run each loop in parallel across CPUs:

import asyncio
from concurrent.futures import ProcessPoolExecutor

print('running async test')

def say_boo():
    i = 0
    while True:
        print('...boo {0}'.format(i))
        i += 1

def say_baa():
    i = 0
    while True:
        print('...baa {0}'.format(i))
        i += 1

if __name__ == "__main__":
    executor = ProcessPoolExecutor(2)
    loop = asyncio.get_event_loop()
    boo = asyncio.ensure_future(loop.run_in_executor(executor, say_boo))
    baa = asyncio.ensure_future(loop.run_in_executor(executor, say_baa))

    loop.run_forever()
Why is a Python I/O bound task not blocked by the GIL?
The Python threading documentation states that "...threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously", apparently because I/O-bound processes can avoid the GIL that prevents threads from concurrent execution in CPU-bound tasks.

But what I don't understand is that an I/O task still uses the CPU. So how could it not encounter the same issues? Is it because the I/O-bound task will not require memory management?
All of Python's blocking I/O primitives release the GIL while waiting for the I/O block to resolve -- it's as simple as that! They will of course need to acquire the GIL again before going on to execute further Python code, but for the long-in-terms-of-machine-cycles intervals in which they're just waiting for some I/O syscall, they don't need the GIL, so they don't hold on to it!
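A quick experiment makes this visible; time.sleep stands in here for any blocking call that releases the GIL, just as socket or file reads do:

import threading
import time

def io_task():
    time.sleep(1)  # GIL released for the whole wait

start = time.time()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("4 blocking waits took %.1f s" % (time.time() - start))  # ~1.0, not ~4.0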
Numpy individual element access slower than for lists
I just started using NumPy and noticed that iterating through each element in a NumPy array is ~4x slower than doing the same with a list of lists. I know now that this defeats the purpose of NumPy and I should vectorize the function if possible. My question, though, is why it is 4x slower. That seems like quite a large amount.

I ran the tests below using %timeit:

import numpy as np
b = np.eye(1000)
a = b.tolist()

%timeit b[100][100]
# 1000000 loops, best of 3: 692 ns per loop

%timeit a[100][100]
# 10000000 loops, best of 3: 70.7 ns per loop

%timeit b[100,100]
# 1000000 loops, best of 3: 343 ns per loop

%timeit b.item(100,100)
# 1000000 loops, best of 3: 297 ns per loop

I tried to use dis.dis to see what was going on under the hood but got:

TypeError: don't know how to disassemble method-wrapper objects

Then I tried to look at the NumPy source code but couldn't figure out which file corresponded to array element access.

I'm curious what accounts for the extra overhead, and more importantly how to figure this out for myself in the future. It seems like Python can't be easily compiled to C code so that I can see the difference. But is there a way to see what bytecode is generated for each line, to get a sense of the differences?
To recap, the NumPy operations you have listed do the following: b[100][100] returns row 100 of b, and then gets the value at index 100 of this row. b[100,100] returns the value at row 100 and column 100 directly (no row is returned first). b.item(100,100) does the same as above b[100,100] except that the NumPy type is copied to a Python type and returned. Now, (1) is slowest because it requires two sequential NumPy indexing operations (I'll explain why this is slower than list indexing below). (2) is quickest because only a single indexing operation is performed and the NumPy type does not need to be copied to a Python type (unlike (3)). Why is list access still faster than b[100,100]? Two of the reasons why a[100][100] (getting from the list) is quicker than b[100,100] (getting from the array) are that: The bytecode opcode BINARY_SUBSCR is executed when indexing both lists and arrays, but it is optimised for the case of Python lists. The internal C function handling integer indexing for Python lists is very short and simple. On the other hand, NumPy indexing is much more complicated and a significant amount of code is executed to determine the type of indexing being used so that the correct value can be returned. Below, the steps for accessing elements in a list and array with a[100][100] and b[100,100] are described in more detail. Bytecode The same four bytecode opcodes are triggered for both lists and arrays: 0 LOAD_NAME 0 (a) # the list or array 3 LOAD_CONST 0 (100) # index number (tuple for b[100,100]) 6 BINARY_SUBSCR # find correct "getitem" function 7 RETURN_VALUE # value returned from list or array Note: if you start chain indexing for multi-dimensional lists, e.g. a[100][100][100], you start to repeat these bytecode instructions. This does not happen for NumPy arrays using the tuple indexing: b[100,100,100] uses just the four instructions. This is why the gap in the timings begins to close as the number of dimensions increases. Finding the correct "getitem" function The functions for accessing lists and arrays are different and the correct one needs to be found in each case. This task is handled by the BINARY_SUBSCR opcode: w = POP(); // our index v = TOP(); // our list or NumPy array if (PyList_CheckExact(v) && PyInt_CheckExact(w)) { // do we have a list and an int? /* INLINE: list[int] */ Py_ssize_t i = PyInt_AsSsize_t(w); if (i < 0) i += PyList_GET_SIZE(v); if (i >= 0 && i < PyList_GET_SIZE(v)) { x = PyList_GET_ITEM(v, i); // call "getitem" for lists Py_INCREF(x); } else goto slow_get; } else slow_get: x = PyObject_GetItem(v, w); // else, call another function // to work out what is needed Py_DECREF(v); Py_DECREF(w); SET_TOP(x); if (x != NULL) continue; break; This code is optimised for Python lists. If the function sees a list, it will quickly call the function PyList_GET_ITEM. This list can now be accessed at the required index (see next section below). However, if it doesn't see a list (e.g. we have a NumPy array), it takes the "slow_get" path. This in turn calls another function PyObject_GetItem to check which "getitem" function the object is mapped to: PyObject_GetItem(PyObject *o, PyObject *key) { PyMappingMethods *m; if (o == NULL || key == NULL) return null_error(); m = o->ob_type->tp_as_mapping; if (m && m->mp_subscript) return m->mp_subscript(o, key); ... In the case of NumPy arrays, the correct function is located in mp_subscript in the PyMappingMethods structure. Notice the additional function calls before this correct "get" function can be called. 
These calls add to the overhead for b[100], although how much will depend on how Python/NumPy was compiled, the system architecture, and so on.

Getting from a Python list

Above it was seen that the function PyList_GET_ITEM is called. This is a short function that essentially looks like this*:

PyList_GetItem(PyObject *op, Py_ssize_t i)
{
    if (!PyList_Check(op)) {                   // check if list
        PyErr_BadInternalCall();
        return NULL;
    }
    if (i < 0 || i >= Py_SIZE(op)) {           // check i is in range
        if (indexerr == NULL) {
            indexerr = PyUnicode_FromString(
                "list index out of range");
            if (indexerr == NULL)
                return NULL;
        }
        PyErr_SetObject(PyExc_IndexError, indexerr);
        return NULL;
    }
    return ((PyListObject *)op) -> ob_item[i]; // return reference to object
}

* PyList_GET_ITEM is actually the macro form of this function which does the same thing, minus error checking.

This means that getting the item at index i of a Python list is relatively simple. Internally, Python checks whether the object being indexed is a list, whether i is in the correct range for the list, and then returns the reference to the object in the list.

Getting from a NumPy array

In contrast, NumPy has to do much more work before the value at the requested index can be returned. Arrays can be indexed in a variety of different ways and NumPy has to decide which index routine is needed. The various indexing routines are handled largely by code in mapping.c.

Anything used to index NumPy arrays passes through the function prepare_index which begins the parsing of the index and stores the information about broadcasting, number of dimensions, and so on. Here is the call signature for the function:

NPY_NO_EXPORT int
prepare_index(PyArrayObject *self, PyObject *index,
              npy_index_info *indices,
              int *num, int *ndim, int *out_fancy_ndim,
              int allow_boolean)
/* @param the array being indexed
 * @param the index object
 * @param index info struct being filled (size of NPY_MAXDIMS * 2 + 1)
 * @param number of indices found
 * @param dimension of the indexing result
 * @param dimension of the fancy/advanced indices part
 * @param whether to allow the boolean special case
 */

The function has to do a lot of checks. Even for a relatively simple index such as b[100,100], a lot of information has to be inferred so that NumPy can return a reference (view) to the correct value.

In conclusion, it takes longer for the "getitem" function for NumPy to be found, and the functions handling the indexing of arrays are necessarily more complex than the single function for Python lists.
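To answer the last part of the question — inspecting this yourself — you can't disassemble a method-wrapper directly, but you can compile the indexing expression from a string and disassemble that (nothing is executed, so b doesn't need to exist):

import dis

dis.dis(compile('b[100][100]', '<string>', 'eval'))  # two BINARY_SUBSCR ops
dis.dis(compile('b[100,100]', '<string>', 'eval'))   # one BINARY_SUBSCR on a tuple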
Existence of mutable named tuple in Python?
Can anyone amend namedtuple or provide an alternative class so that it works for mutable objects? Primarily for readability, I would like something similar to namedtuple that does this: from Camelot import namedgroup Point = namedgroup('Point', ['x', 'y']) p = Point(0, 0) p.x = 10 >>> p Point(x=10, y=0) >>> p.x *= 10 Point(x=100, y=0) It must be possible to pickle the resulting object. And per the characteristics of named tuple, the ordering of the output when represented must match the order of the parameter list when constructing the object. RESPONSE: Thanks to everyone who submitted suggestions. I believe that the recordclass referred by @intellimath is the best solution (also see here). recordclass 0.4 Mutable variant of collections.namedtuple, which supports assignments recordclass is MIT Licensed python library. It implements the type memoryslots and factory function recordclass in order to create record-like classes. memoryslots is tuple-like type, which supports assignment operations. recordclass is a factory function that create a “mutable” analog of collection.namedtuple. This library actually is a “proof of concept” for the problem of “mutable” alternative of namedtuple. I also ran some tests against all of the suggestions. Not all of these features were requested, so the comparison isn't really fair. The tests are here just to point out the usability of each class. # Option 1 (p1): @kennes913 # Option 2 (p2): @MadMan2064 # Option 3 (p3): @intellimath # Option 4 (p4): @Roland Smith # Option 5 (p5): @agomcas # Option 6 (p6): @Antti Haapala # TEST: p1 p2 p3 p4 p5 p6 # 1. Mutation of field values | x | x | x | x | x | x | # 2. String | | x | x | x | | x | # 3. Representation | | x | x | x | | x | # 4. Sizeof | x | x | x | ? | ?? | x | # 5. Access by name of field | x | x | x | x | x | x | # 6. Access by index. | | | x | | | | # 7. Iterative unpacking. | | x | x | | | x | # 8. Iteration | | x | x | | | x | # 9. Ordered Dict | | | x | | | | # 10. Inplace replacement | | | x | | | | # 11. Pickle and Unpickle | | | x | | | | # 12. Fields* | | | yes | | yes | | # 13. Slots* | yes | | | | yes | | # *Note that I'm not very familiar with slots and fields, so please excuse # my ignorance in reporting their results. I have included them for completeness. # Class/Object creation. p1 = Point1(x=1, y=2) Point2 = namedgroup("Point2", ["x", "y"]) p2 = Point2(x=1, y=2) Point3 = recordclass('Point3', 'x y') # *** p3 = Point3(x=1, y=2) p4 = AttrDict() p4.x = 1 p4.y = 2 p5 = namedlist('Point5', 'x y') Point6 = namedgroup('Point6', ['x', 'y']) p6 = Point6(x=1, y=2) point_objects = [p1, p2, p3, p4, p5, p6] # 1. Mutation of field values. for n, p in enumerate(point_objects): try: p.x *= 10 p.y += 10 print('p{0}: {1}, {2}'.format(n + 1, p.x, p.y)) except Exception as e: print('p{0}: Mutation not supported. {1}'.format(n + 1, e)) p1: 10, 12 p2: 10, 12 p3: 10, 12 p4: 10, 12 p5: 10, 12 p6: 10, 12 # 2. String. for n, p in enumerate(point_objects): print('p{0}: {1}'.format(n + 1, p)) p1: <__main__.Point1 instance at 0x10c72dc68> p2: Point2(x=10, y=12) p3: Point3(x=10, y=12) p4: {'y': 12, 'x': 10} p5: <class '__main__.Point5'> p6: Point6(x=10, y=12) # 3. Representation. [('p{0}'.format(n + 1), p) for n, p in enumerate(point_objects)] [('p1', <__main__.Point1 instance at 0x10c72dc68>), ('p2', Point2(x=10, y=12)), ('p3', Point3(x=10, y=12)), ('p4', {'x': 10, 'y': 12}), ('p5', __main__.Point5), ('p6', Point6(x=10, y=12))] # 4. Sizeof. 
for n, p in enumerate(point_objects): print("size of p{0}:".format(n + 1), sys.getsizeof(p)) size of p1: 72 size of p2: 64 size of p3: 72 size of p4: 280 size of p5: 904 size of p6: 64 # 5. Access by name of field. for n, p in enumerate(point_objects): print('p{0}: {1}, {2}'.format(n + 1, p.x, p.y)) p1: 10, 12 p2: 10, 12 p3: 10, 12 p4: 10, 12 p5: 10, 12 p6: 10, 12 # 6. Access by index. for n, p in enumerate(point_objects): try: print('p{0}: {1}, {2}'.format(n + 1, p[0], p[1])) except: print('p{0}: Unable to access by index.'.format(n+1)) p1: Unable to access by index. p2: Unable to access by index. p3: 10, 12 p4: Unable to access by index. p5: Unable to access by index. p6: Unable to access by index. # 7. Iterative unpacking. for n, p in enumerate(point_objects): try: x, y = p print('p{0}: {1}, {2}'.format(n + 1, x, y)) except: print('p{0}: Unable to unpack.'.format(n + 1)) p1: Unable to unpack. p2: 10, 12 p3: 10, 12 p4: y, x p5: Unable to unpack. p6: 10, 12 # 8. Iteration for n, p in enumerate(point_objects): try: print('p{0}: {1}'.format(n + 1, [v for v in p])) except: print('p{0}: Unable to iterate.'.format(n + 1)) p1: Unable to iterate. p2: [10, 12] p3: [10, 12] p4: ['y', 'x'] p5: Unable to iterate. p6: [10, 12] In [95]: # 9. Ordered Dict for n, p in enumerate(point_objects): try: print('p{0}: {1}'.format(n + 1, p._asdict())) except: print('p{0}: Unable to create Ordered Dict.'.format(n + 1)) p1: Unable to create Ordered Dict. p2: Unable to create Ordered Dict. p3: OrderedDict([('x', 10), ('y', 12)]) p4: Unable to create Ordered Dict. p5: Unable to create Ordered Dict. p6: Unable to create Ordered Dict. # 10. Inplace replacement for n, p in enumerate(point_objects): try: p_ = p._replace(x=100, y=200) print('p{0}: {1} - {2}'.format(n + 1, 'Success' if p is p_ else 'Failure', p)) except: print('p{0}: Unable to replace inplace.'.format(n + 1)) p1: Unable to replace inplace. p2: Unable to replace inplace. p3: Success - Point3(x=100, y=200) p4: Unable to replace inplace. p5: Unable to replace inplace. p6: Unable to replace inplace. # 11. Pickle and Unpickle. for n, p in enumerate(point_objects): try: pickled = pickle.dumps(p) unpickled = pickle.loads(pickled) if p != unpickled: raise ValueError((p, unpickled)) print('p{0}: {1}'.format(n + 1, 'Pickled successfully', )) except Exception as e: print('p{0}: {1}; {2}'.format(n + 1, 'Pickle failure', e)) p1: Pickle failure; (<__main__.Point1 instance at 0x10c72dc68>, <__main__.Point1 instance at 0x10ca631b8>) p2: Pickle failure; (Point2(x=10, y=12), Point2(x=10, y=12)) p3: Pickled successfully p4: Pickle failure; '__getstate__' p5: Pickle failure; Can't pickle <class '__main__.Point5'>: it's not found as __main__.Point5 p6: Pickle failure; (Point6(x=10, y=12), Point6(x=10, y=12)) # 12. Fields. for n, p in enumerate(point_objects): try: print('p{0}: {1}'.format(n + 1, p._fields)) except Exception as e: print('p{0}: {1}; {2}'.format(n + 1, 'Unable to access fields.', e)) p1: Unable to access fields.; Point1 instance has no attribute '_fields' p2: Unable to access fields.; 'Point2' object has no attribute '_fields' p3: ('x', 'y') p4: Unable to access fields.; '_fields' p5: ('x', 'y') p6: Unable to access fields.; 'Point6' object has no attribute '_fields' # 13. Slots. 
for n, p in enumerate(point_objects): try: print('p{0}: {1}'.format(n + 1, p.__slots__)) except Exception as e: print('p{0}: {1}; {2}'.format(n + 1, 'Unable to access slots', e)) p1: ['x', 'y'] p2: Unable to access slots; 'Point2' object has no attribute '__slots__' p3: () p4: Unable to access slots; '__slots__' p5: ('x', 'y') p6: Unable to access slots; 'Point6' object has no attribute '__slots__'
There is a mutable alternative to collections.namedtuple - recordclass. It has the same API and memory footprint as namedtuple (it is actually also faster). It also supports assignments. For example:

from recordclass import recordclass

Point = recordclass('Point', 'x y')

>>> p = Point(1, 2)
>>> p
Point(x=1, y=2)
>>> print(p.x, p.y)
1 2
>>> p.x += 2; p.y += 3; print(p)
Point(x=3, y=5)

There is a more complete example (it also includes performance comparisons).
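If you'd rather not add a third-party dependency, a plain class with __slots__ gives you the mutable, fixed-field part of the request; a minimal sketch (repr and field order handled by hand; no iteration or _asdict extras):

class Point(object):
    __slots__ = ('x', 'y')  # fixed attribute set, compact like a namedtuple

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return 'Point(x={0!r}, y={1!r})'.format(self.x, self.y)

p = Point(0, 0)
p.x = 10
print(p)  # Point(x=10, y=0)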
How to stop celery worker process
I have a Django project on an Ubuntu EC2 node, which I have been using to set up asynchronous tasks using Celery. I am following this along with the docs. I've been able to get a basic task working at the command line, using:

(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO

to start a worker. I have since made some changes to the Python, but realized that I need to restart a worker. From the command line, I've tried:

ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9

But I can see that the worker is still running. How can I kill it?

edit:

(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ sudo ps auxww | grep celeryd | grep -v "grep" | awk '{print $2}' | sudo xargs kill -HUP
kill: invalid argument H

Usage:
 kill [options] <pid> [...]

Options:
 <pid> [...]            send signal to every <pid> listed
 -<signal>, -s, --signal <signal>
                        specify the <signal> to be sent
 -l, --list=[<signal>]  list all signal names, or convert one to a name
 -L, --table            list all signal names in a nice table
 -h, --help     display this help and exit
 -V, --version  output version information and exit

For more details see kill(1).

edit 2:

(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ ps aux|grep celery
ubuntu    9756  0.0  3.4 100868 35508 pts/6   S+   15:49   0:07 /home/ubuntu/.virtualenvs/env1/bin/python3.4 /home/ubuntu/.virtualenvs/env1/bin/celery --app=tp.celery:app worker --loglevel=INFO
ubuntu    9760  0.0  3.9 255840 39852 pts/6   S+   15:49   0:05 /home/ubuntu/.virtualenvs/env1/bin/python3.4 /home/ubuntu/.virtualenvs/env1/bin/celery --app=tp.celery:app worker --loglevel=INFO
ubuntu   12760  0.0  0.0  10464   932 pts/7   S+   19:04   0:00 grep --color=auto celery
Try this in the terminal:

ps aux|grep 'celery worker'

You will see something like this:

username 29042 0.0 0.6 23216 14356 pts/1 S+ 00:18 0:01 /bin/celery worker ...

Then kill the process id:

sudo kill -9 process_id # here 29042

If you have multiple processes, then you have to kill all the process ids using the above kill command:

sudo kill -9 id1 id2 id3 ...

From the celery doc:

ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9

OR if you are running celeryd:

ps auxww | grep celeryd | awk '{print $2}' | xargs kill -9

Note: If you are running celery under supervisor, then even if you kill the process, it restarts automatically (if autorestart=True in the supervisor script).

Hope this helps
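As a shorthand for the grep/awk/xargs pipeline, pkill (available on most Linux systems) matches against the full command line in one step; prefer a plain TERM first so workers can shut down cleanly, and reach for -9 only if they refuse to die:

pkill -f 'celery worker'       # graceful shutdown (SIGTERM is the default)
pkill -9 -f 'celery worker'    # force kill, last resort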
How to set the timezone in Django?
In my django project's settings.py file, I have this line : TIME_ZONE = 'UTC' But I want my app to run in UTC+2 timezone, so I changed it to TIME_ZONE = 'UTC+2' It gives the error ValueError: Incorrect timezone setting: UTC+2. What is the correct way of doing this? Thanks!
Here is the list of valid timezones: http://en.wikipedia.org/wiki/List_of_tz_database_time_zones You can use TIME_ZONE = 'Europe/Istanbul' for UTC+02:00
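If you'd rather enumerate the valid values from Python instead of the wiki page, pytz (which Django can use for time zone support, if it is installed) exposes the same database; a quick sketch:

import pytz

print(len(pytz.all_timezones))  # several hundred valid TIME_ZONE strings
print([tz for tz in pytz.all_timezones if tz.startswith('Europe/')][:5])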
Reverse a Python string without omitting start and end slice
How do you reverse a Python string without omitting the start and end slice arguments? word = "hello" reversed_word = word[::-1] I understand that this works, but how would I get the result by specifying the start and end indexes? word = "hello" reversed_word = word[?:?:-1]
Some other ways to reverse a string:

word = "hello"
reversed_word1 = word[-1: :-1]
reversed_word2 = word[len(word): :-1]
reversed_word3 = word[:-len(word)-1 :-1]

One thing you should note about the slicing notation a[i:j:k] is that omitting i and j doesn't always mean that i will become 0 and j will become len(s). It depends upon the sign of k. By default k is +1.

If k is +ve then the default value of i is 0 (start from the beginning). If it is -ve then the default value of i is -1 (start from the end).

If k is +ve then the default value of j is len(s) (stop at the end). If it is -ve then the default value of j is -(len(s)+1) (stop at the beginning).

Now you can explain to your students how "Hello"[::-1] prints olleH.
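If the goal is readability rather than exercising slice defaults, the built-in reversed() sidesteps the index question entirely:

word = "hello"
reversed_word = "".join(reversed(word))  # reversed() yields the characters from the end
print(reversed_word)  # olleh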
Python class scoping rules
EDIT: Looks like this is a very old "bug" or, actually, feature. See, e.g., this mail.

I am trying to understand the Python scoping rules. More precisely, I thought that I understood them, but then I found this code here:

x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    class C:
        print(x)
        print(y)
        y = 1
func()

In Python 3.4 the output is:

xlocal
ytop

If I replace the inner class by a function then it reasonably gives UnboundLocalError. Could you explain to me why it behaves this strange way with classes, and what is the reason for such a choice of scoping rules?
TL;DR: This behaviour has existed since Python 2.1 PEP 227: Nested Scopes, and was known back then. If a name is assigned to within a class body (like y), then it is assumed to be a local/global variable; if it is not assigned to (x), then it also can potentially point to a closure cell. The lexical variables do not show up as local/global names to the class body. On Python 3.4, dis.dis(func) shows the following: >>> dis.dis(func) 4 0 LOAD_CONST 1 ('xlocal') 3 STORE_DEREF 0 (x) 5 6 LOAD_CONST 2 ('ylocal') 9 STORE_FAST 0 (y) 6 12 LOAD_BUILD_CLASS 13 LOAD_CLOSURE 0 (x) 16 BUILD_TUPLE 1 19 LOAD_CONST 3 (<code object C at 0x7f083c9bbf60, file "test.py", line 6>) 22 LOAD_CONST 4 ('C') 25 MAKE_CLOSURE 0 28 LOAD_CONST 4 ('C') 31 CALL_FUNCTION 2 (2 positional, 0 keyword pair) 34 STORE_FAST 1 (C) 37 LOAD_CONST 0 (None) 40 RETURN_VALUE The LOAD_BUILD_CLASS loads the builtins.__build_class__ on the stack; this is called with arguments __build_class__(func, name); where func is the class body, and name is 'C'. The class body is the constant #3 for the function func: >>> dis.dis(func.__code__.co_consts[3]) 6 0 LOAD_NAME 0 (__name__) 3 STORE_NAME 1 (__module__) 6 LOAD_CONST 0 ('func.<locals>.C') 9 STORE_NAME 2 (__qualname__) 7 12 LOAD_NAME 3 (print) 15 LOAD_CLASSDEREF 0 (x) 18 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 21 POP_TOP 8 22 LOAD_NAME 3 (print) 25 LOAD_NAME 4 (y) 28 CALL_FUNCTION 1 (1 positional, 0 keyword pair) 31 POP_TOP 9 32 LOAD_CONST 1 (1) 35 STORE_NAME 4 (y) 38 LOAD_CONST 2 (None) 41 RETURN_VALUE Within the class body, x is accessed with LOAD_CLASSDEREF (15) while y is load with LOAD_NAME (25). The LOAD_CLASSDEREF is a Python 3.4+ opcode for loading values from closure cells specifically within class bodies (in previous versions, the generic LOAD_DEREF was used); the LOAD_NAME is for loading values from locals and then globals. However closure cells show up neither as locals nor globals. Now, because the name y is stored to within the class body (35), it is consistently being used as not a closure cell but a local/global name. The closure cells do not show up as local variables to the class body. This behaviour has been true ever since implementing PEP 227 - nested scopes. And back then BDFL stated that this should not be fixed - and thus it has been for these 13+ years. The only change since PEP 227 is the addition of nonlocal in Python 3; if one uses it within the class body, the class body can set the values of the cells within the containing scope: x = "xtop" y = "ytop" def func(): x = "xlocal" y = "ylocal" class C: nonlocal y # y here now refers to the outer variable print(x) print(y) y = 1 print(y) print(C.y) func() The output now is xlocal ylocal 1 Traceback (most recent call last): File "test.py", line 15, in <module> func() File "test.py", line 13, in func print(C.y) AttributeError: type object 'C' has no attribute 'y' That is, print(y) read the value of the cell y of the containing scope, and y = 1 set the value in that cell; in this case, no attribute was created for the class C.
Why am I getting different results when using a list comprehension with coroutines with asyncio?
I initially had some code that aggregated results into a list. When I refactored this code to use a list comprehension, I got unexpected results:

import asyncio

@asyncio.coroutine
def coro():
    return "foo"

# Writing the code without a list comp works,
# even with an asyncio.sleep(0.1).
@asyncio.coroutine
def good():
    yield from asyncio.sleep(0.1)
    result = []
    for i in range(3):
        current = yield from coro()
        result.append(current)
    return result

# Using a list comp without an async.sleep(0.1)
# works.
@asyncio.coroutine
def still_good():
    return [(yield from coro()) for i in range(3)]

# Using a list comp along with an asyncio.sleep(0.1)
# does _not_ work.
@asyncio.coroutine
def huh():
    yield from asyncio.sleep(0.1)
    return [(yield from coro()) for i in range(3)]

loop = asyncio.get_event_loop()
print(loop.run_until_complete(good()))
print(loop.run_until_complete(still_good()))
print(loop.run_until_complete(huh()))

If I run this code I get this output:

$ python3.4 /tmp/test.py
['foo', 'foo', 'foo']
['foo', 'foo', 'foo']
<generator object <listcomp> at 0x104eb1360>

Why do I get different results for the third huh() function?
A fix to your problem would be to put next(...) instead of ... in the return of the third function, or better write return list((yield from coro()) for i in range(3)) (credits to @zch for this idea), or even better stay with the first function.

The point is that the second function is not a generator. It is just an ordinary function that returns a comprehension generator. For example, this code is valid outside a generator:

values = [(yield x) for x in range(3)]

Then you can do this:

next(values)
0
next(values)
1
next(values)
2
next(values)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration: [None, None, None]

The decorator @coroutine then makes the second function a generator by iterating over the result, see here, line 143. In contrast, the first and the third functions are actually generators, and the @coroutine decorator just returns them unchanged, see here, lines 136-137. In the first case the generator returns the list (actually raises StopIteration(['foo', 'foo', 'foo'])). In the third case it returns the comprehension generator.
Eval scope in Python 2 vs. 3
I came across bizarre eval behavior in Python 3 - local variables aren't picked up when eval is called in a list comprehension. def apply_op(): x, y, z = [0.5, 0.25, 0.75] op = "x,y,z" return [eval(o) for o in op.split(",")] print(apply_op()) It errors in Python 3: ▶ python --version Python 3.4.3 ▶ python eval.py Traceback (most recent call last): File "eval.py", line 7, in <module> print(apply_op()) File "eval.py", line 5, in apply_op return [eval(o) % 1 for o in op.split(",")] File "eval.py", line 5, in <listcomp> return [eval(o) % 1 for o in op.split(",")] File "<string>", line 1, in <module> NameError: name 'x' is not defined And it works fine in Python 2: ▶ python --version Python 2.7.8 ▶ python eval.py [0.5, 0.25, 0.75] Moving it outside of the list comprehension removes the problem. def apply_op(): x, y, z = [0.5, 0.25, 0.75] return [eval("x"), eval("y"), eval("z")] Is this intended behavior, or is it a bug?
There is a closed issue in the bug tracker for this: Issue 5242. The resolution for this bug is won't fix. Some comments from the Issue read:

This is expected, and won't easily fix. The reason is that list comprehensions in 3.x use a function namespace "under the hood" (in 2.x, they were implemented like a simple for loop). Because inner functions need to know what names to get from what enclosing namespace, the names referenced in eval() can't come from enclosing functions. They must either be locals or globals.

eval() is probably already a hack, there's no need to add another hack to make it work. It's better to just get rid of eval() and find a better way to do what you want to do.
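A common workaround, if you must keep eval(), is to pass the enclosing locals in explicitly so the comprehension's hidden inner scope no longer matters:

def apply_op():
    x, y, z = [0.5, 0.25, 0.75]
    op = "x,y,z"
    env = {'x': x, 'y': y, 'z': z}  # or: env = dict(locals()) before the comprehension
    return [eval(o, env) for o in op.split(",")]

print(apply_op())  # [0.5, 0.25, 0.75]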
Python list() vs list comprehension building speed
This is interesting; list() to force an iterator to get the actual list is so much faster than [x for x in someList] (comprehension). Is this for real or is my test just too simple? Below is the code: import time timer = time.clock() for i in xrange(90): #localList = [x for x in xrange(1000000)] #Very slow, took me 6.8s localList = list(xrange(1000000)) #Very fast, took me 0.9s print localList[999999] #make sure list is really evaluated. print "Total time: ", time.clock() - timer
The list comprehension executes the loop in Python bytecode, just like a regular for loop. The list() call iterates entirely in C code, which is far faster. The bytecode for the list comprehension looks like this: >>> import dis >>> dis.dis(compile("[x for x in xrange(1000000)]", '<stdin>', 'exec')) 1 0 BUILD_LIST 0 3 LOAD_NAME 0 (xrange) 6 LOAD_CONST 0 (1000000) 9 CALL_FUNCTION 1 12 GET_ITER >> 13 FOR_ITER 12 (to 28) 16 STORE_NAME 1 (x) 19 LOAD_NAME 1 (x) 22 LIST_APPEND 2 25 JUMP_ABSOLUTE 13 >> 28 POP_TOP 29 LOAD_CONST 1 (None) 32 RETURN_VALUE The >> pointers roughly give you the boundaries of the loop being executed, so you have 1 million STORE_NAME, LOAD_NAME and LIST_APPEND steps to execute in the Python bytecode evaluation loop. list() on the other hand just grabs the values from the xrange() iterable directly by using the C API for object iteration, and it can use the length of the xrange() object to pre-allocate the list object rather than grow it dynamically.
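To double-check with a standard harness, the timeit module gives cleaner numbers than a hand-rolled clock (Python 2 shown, matching the question's xrange):

import timeit

print timeit.timeit('list(xrange(1000000))', number=90)
print timeit.timeit('[x for x in xrange(1000000)]', number=90)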
How does this function to remove duplicate characters from a string in python work?
I was looking up how to create a function that removes duplicate characters from a string in python and found this on stack overflow: from collections import OrderedDict def remove_duplicates (foo) : print " ".join(OrderedDict.fromkeys(foo)) It works, but how? I've searched what OrderedDict and fromkeys mean but I can't find anything that explains how it works in this context.
I will give it a shot:

OrderedDicts are dictionaries that store keys in the order they are added. Normal dictionaries don't. If you look at the doc of fromkeys, you find:

OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.

So the fromkeys class method creates an OrderedDict using items in the input iterable S (in my example characters from a string) as keys. In a dictionary, keys are unique, so duplicate items in S are ignored. For example:

s = "abbcdece"  # example string with duplicate characters
print(OrderedDict.fromkeys(s))

This results in an OrderedDict:

OrderedDict([('a', None), ('b', None), ('c', None), ('d', None), ('e', None)])

Then " ".join(some_iterable) takes an iterable and joins its elements using, in this case, a space. It uses only keys, as iterating through a dictionary is done by its keys. For example:

for k in OrderedDict.fromkeys(s):  # k is a key of the OrderedDict
    print(k)

Results in:

a
b
c
d
e

Subsequently, the call to join:

print(" ".join(OrderedDict.fromkeys(s)))

will print out:

a b c d e

Using set

Sometimes, people use a set for this:

print(" ".join(set(s)))  # c a b d e

But unlike sets in C++, sets in Python do not guarantee order. So using a set will give you unique values easily, but they might be in a different order than they are in the original list or string (as in the above example).

Hope this helps a bit.
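As an aside, on Python 3.7+ plain dicts are also guaranteed to preserve insertion order, so the same trick works without the import (join on "" instead of " " if you don't want the spaces):

s = "abbcdece"
print("".join(dict.fromkeys(s)))  # abcde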
Select dataframe rows between two dates
I am creating a dataframe from a csv as follows: stock = pd.read_csv('data_in/' + filename + '.csv', skipinitialspace=True) The dataframe has a date column. Is there a way to create a new dataframe (or just overwrite the existing one) which only containes rows that fall between a specific date range?
There are two possible solutions: Use a boolean mask, then use df.loc[mask] Set the date column as a DatetimeIndex, then use df[start_date : end_date] Using a boolean mask: Ensure df['date'] is a Series with dtype datetime64[ns]: df['date'] = pd.to_datetime(df['date']) Make a boolean mask. start_date and end_date can be datetime.datetimes, np.datetime64s, pd.Timestamps, or even datetime strings: mask = (df['date'] > start_date) & (df['date'] <= end_date) Select the sub-DataFrame: df.loc[mask] or re-assign to df df = df.loc[mask] For example, import numpy as np import pandas as pd df = pd.DataFrame(np.random.random((200,3))) df['date'] = pd.date_range('2000-1-1', periods=200, freq='D') mask = (df['date'] > '2000-6-1') & (df['date'] <= '2000-6-10') print(df.loc[mask]) yields 0 1 2 date 153 0.208875 0.727656 0.037787 2000-06-02 154 0.750800 0.776498 0.237716 2000-06-03 155 0.812008 0.127338 0.397240 2000-06-04 156 0.639937 0.207359 0.533527 2000-06-05 157 0.416998 0.845658 0.872826 2000-06-06 158 0.440069 0.338690 0.847545 2000-06-07 159 0.202354 0.624833 0.740254 2000-06-08 160 0.465746 0.080888 0.155452 2000-06-09 161 0.858232 0.190321 0.432574 2000-06-10 Using a DatetimeIndex: If you are going to do a lot of selections by date, it may be quicker to set the date column as the index first. Then you can select rows by date using df[start_date : end_date]. import numpy as np import pandas as pd df = pd.DataFrame(np.random.random((200,3))) df['date'] = pd.date_range('2000-1-1', periods=200, freq='D') df = df.set_index(['date']) print(df.loc['2000-6-1':'2000-6-10']) yields 0 1 2 date 2000-06-01 0.040457 0.326594 0.492136 # <- includes start_date 2000-06-02 0.279323 0.877446 0.464523 2000-06-03 0.328068 0.837669 0.608559 2000-06-04 0.107959 0.678297 0.517435 2000-06-05 0.131555 0.418380 0.025725 2000-06-06 0.999961 0.619517 0.206108 2000-06-07 0.129270 0.024533 0.154769 2000-06-08 0.441010 0.741781 0.470402 2000-06-09 0.682101 0.375660 0.009916 2000-06-10 0.754488 0.352293 0.339337 Some caveats: When using df[start_date : end_date] both end-points are included in result. Unlike the boolean mask solution, the start_date and end_date must be dates in the DatetimeIndex. Also note that pd.read_csv has a parse_dates parameter which you could use to parse the date column as datetime64s. Thus, if you use parse_dates, you would not need to use df['date'] = pd.to_datetime(df['date']).
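A third option, a little terser than building the mask by hand, is Series.between; note it is inclusive of both endpoints by default, unlike the > / <= mask above:

# equivalent to (df['date'] >= start) & (df['date'] <= end)
mask = df['date'].between('2000-6-1', '2000-6-10')
print(df.loc[mask])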
What is the difference between numpy.linalg.lstsq and scipy.linalg.lstsq?
lstsq tries to solve Ax=b minimizing |b - Ax|. Both scipy and numpy provide a linalg.lstsq function with a very similar interface. The documentation does not mention which kind of algorithm is used, neither for scipy.linalg.lstsq nor for numpy.linalg.lstsq, but it seems to do pretty much the same. The implementation seems to be different for scipy.linalg.lstsq and numpy.linalg.lstsq. Both seem to use LAPACK, both algorithms seem to use a SVD. Where is the difference? Which one should I use? Note: do not confuse linalg.lstsq with scipy.optimize.leastsq which can solve also non-linear optimization problems.
If I read the source code right (Numpy 1.8.2, Scipy 0.14.1), numpy.linalg.lstsq() uses the LAPACK routine xGELSD and scipy.linalg.lstsq() uses xGELSS. The LAPACK Manual Sec. 2.4 states:

The subroutine xGELSD is significantly faster than its older counterpart xGELSS, especially for large problems, but may require somewhat more workspace depending on the matrix dimensions.

That means that Numpy is faster but uses more memory.
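Interface-wise they are near drop-in replacements for each other; a quick sketch fitting a line y = m*x + c to four points (both return a 4-tuple of solution, residues, rank, and singular values):

import numpy as np
import scipy.linalg

A = np.array([[0., 1.], [1., 1.], [2., 1.], [3., 1.]])
b = np.array([-1., 0.2, 0.9, 2.1])

x_np = np.linalg.lstsq(A, b)[0]     # xGELSD under the hood
x_sp = scipy.linalg.lstsq(A, b)[0]  # xGELSS under the hood (in these versions)
print(x_np, x_sp)  # same solution, ~[1., -0.95], via different LAPACK drivers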
First month of quarter given month in Python
Given a month in numeric form (e.g., 2 for February), how do you find the first month of its respective quarter (e.g., 1 for January)? I read through the datetime module documentation and the Pandas documentation of their datetime functions, which ought to be relevant, but I could not find a function that solves this problem. Essentially, what I am trying to understand is how I could produce a function like the one below that, given month x, outputs the number corresponding to the first month of x's quarter. >> first_month_quarter(5) 4
It's not so pretty, but if speed is important a simple list lookup slaughters math: def quarter(month, quarters=[None, 1, 1, 1, 4, 4, 4, 7, 7, 7, 10, 10, 10]): """Return the first month of the quarter for a given month.""" return quarters[month] A timeit comparison suggests this is about twice as fast as TigerhawkT3's mathematical approach. Test script: import math def quarter(month, quarters=[None, 1, 1, 1, 4, 4, 4, 7, 7, 7, 10, 10, 10]): """Return the first month of the quarter for a given month.""" return quarters[month] def firstMonthInQuarter1(month): return (month - 1) // 3 * 3 + 1 def firstMonthInQuarter2(month): return month - (month - 1) % 3 def first_month_quarter(month): return int(math.ceil(month / 3.)) * 3 - 2 if __name__ == '__main__': from timeit import timeit methods = ['quarter', 'firstMonthInQuarter1', 'firstMonthInQuarter2', 'first_month_quarter'] setup = 'from __main__ import {}'.format(','.join(methods)) results = {method: timeit('[{}(x) for x in range(1, 13)]'.format(method), setup=setup) for method in methods} for method in methods: print '{}:\t{}'.format(method, results[method]) Results: quarter: 3.01457574242 firstMonthInQuarter1: 4.51578357209 firstMonthInQuarter2: 4.01768559763 first_month_quarter: 8.08281871176
How to save S3 object to a file using boto3
I'm trying to do a "hello world" with the new boto3 client for AWS. The use-case I have is fairly simple: get an object from S3 and save it to a file. In boto 2.X I would do it like this:

import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')

In boto 3, I can't find a clean way to do the same thing, so I'm manually iterating over the "Streaming" object:

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)

or

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)

And it works fine. I was wondering if there is any "native" boto3 function that will do the same task?
There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this: s3_client = boto3.client('s3') open('hello.txt').write('Hello, world!') # Upload the file to S3 s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt') # Download the file from S3 s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt') print(open('hello2.txt').read()) These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.
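In newer boto3 releases the same helpers are also injected into the resource API, so you can stay at the object level; a sketch (exact method availability depends on your boto3 version):

import boto3

s3 = boto3.resource('s3')
# bucket-level helper
s3.Bucket('MyBucket').download_file('hello-remote.txt', 'hello2.txt')
# or the equivalent object-level helper
s3.Object('MyBucket', 'hello-remote.txt').download_file('hello2.txt')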
How to get http headers in flask?
I am a newbie to Python, using Python Flask and generating a REST API service. I want to check the authorization header which is sent by angularjs, but I can't find a way to get the HTTP header in Flask. Any help with getting the HTTP Authorization header is appreciated.
from flask import request request.headers.get('your-header-name') request.headers is a dictionary, so you can also get your header like you would any dictionary: request.headers['your-header-name']
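For the Authorization use case specifically, .get() avoids a KeyError when the header is missing; a small sketch of a bearer-token check (the route and token handling are illustrative only):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/secret')
def secret():
    auth = request.headers.get('Authorization', '')
    if not auth.startswith('Bearer '):
        return jsonify(error='missing or malformed Authorization header'), 401
    token = auth[len('Bearer '):]
    # validate the token here before returning any data
    return jsonify(ok=True)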
Manually trigger Django email error report
Django error reporting handles uncaught exceptions by sending an email, and (optionally) shows user a nice 500 error page. This works very well, but in a few instances I'd like to allow users to continue with their business uninterrupted, but still have Django send me the email error report about the exception. So basically: can I manually send email error report even if I catch the exception? Of course, I'd like to avoid manually generating the error report email.
You can use the following code to send manually an email about a request and an exception e: import sys import traceback from django.core import mail from django.views.debug import ExceptionReporter def send_manually_exception_email(request, e): exc_info = sys.exc_info() reporter = ExceptionReporter(request, is_email=True, *exc_info) subject = e.message.replace('\n', '\\n').replace('\r', '\\r')[:989] message = "%s\n\n%s" % ( '\n'.join(traceback.format_exception(*exc_info)), reporter.filter.get_request_repr(request) ) mail.mail_admins( subject, message, fail_silently=True, html_message=reporter.get_traceback_html() ) You can test it in a view like this: def test_view(request): try: raise Exception except Exception as e: send_manually_exception_email(request, e)
Python for loop and iterator behavior
I wanted to understand a bit more about iterators, so please correct me if I'm wrong. An iterator is an object which has a pointer to the next object and is read as a buffer or stream (i.e. a linked list). They're particularly efficient because all they do is tell you what is next by references instead of using indexing. However I still don't understand why the following behavior is happening:

In [1]: iter = (i for i in range(5))

In [2]: for _ in iter:
   ....:     print _
   ....:
0
1
2
3
4

In [3]: for _ in iter:
   ....:     print _
   ....:

In [4]:

After a first loop through the iterator (In [2]) it's as if it was consumed and left empty, so the second loop (In [3]) prints nothing. However I never assigned a new value to the iter variable. What is really happening under the hood of the for loop?
Your suspicion is correct: the iterator has been consumed. In actuality, your iterator is a generator, which is an object which has the ability to be iterated through only once.

type((i for i in range(5)))  # says it's type generator

def another_generator():
    yield 1  # the yield expression makes it a generator, not a function

type(another_generator())  # also a generator

The reason they are efficient has nothing to do with telling you what is next "by reference." They are efficient because they only generate the next item upon request; all of the items are not generated at once. In fact, you can have an infinite generator:

def my_gen():
    while True:
        yield 1  # again: yield means it is a generator, not a function

for _ in my_gen():
    print(_)  # hit ctl+c to stop this infinite loop!

Some other corrections to help improve your understanding:

The generator is not a pointer, and does not behave like a pointer as you might be familiar with in other languages.

One of the differences from other languages: as said above, each result of the generator is generated on the fly. The next result is not produced until it is requested.

The keyword combination for in accepts an iterable object as its second argument. The iterable object can be a generator, as in your example case, but it can also be any other iterable object, such as a list, or dict, or a str object (string), or a user-defined type that provides the required functionality.

The iter function is applied to the object to get an iterator (by the way: don't use iter as a variable name in Python, as you have done - it shadows the built-in function). Actually, to be more precise, the object's __iter__ method is called (which is, for the most part, all the iter function does anyway; __iter__ is one of Python's so-called "magic methods").

If the call to __iter__ is successful, the function next() is applied to the iterable object over and over again, in a loop, and the first variable supplied to for in is assigned to the result of the next() function. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's __next__ method, which is another "magic method".

The for loop ends when next() raises the StopIteration exception (which usually happens when the iterable does not have another object to yield when next() is called).

You can "manually" implement a for loop in python this way (probably not perfect, but close enough):

try:
    temp = iterable.__iter__()
except AttributeError:
    raise TypeError("'{}' object is not iterable".format(type(iterable).__name__))
else:
    while True:
        try:
            _ = temp.__next__()
        except StopIteration:
            break
        except AttributeError:
            raise TypeError("iter() returned non-iterator of type '{}'".format(type(temp).__name__))
        # this is the "body" of the for loop
        continue

There is pretty much no difference between the above and your example code.

Actually, the more interesting part of a for loop is not the for, but the in. Using in by itself produces a different effect than for in, but it is very useful to understand what in does with its arguments, since for in implements very similar behavior.

When used by itself, the in keyword first calls the object's __contains__ method, which is yet another "magic method" (note that this step is skipped when using for in).
Using in by itself on a container, you can do things like this:

1 in [1, 2, 3]         # True
'He' in 'Hello'        # True
3 in range(10)         # True
'eH' in 'Hello'[::-1]  # True

If the iterable object is NOT a container (i.e. it doesn't have a __contains__ method), in then tries to call the object's __iter__ method. As was said previously: the __iter__ method returns what is known in Python as an iterator. Basically, an iterator is an object that you can use the built-in generic function next() on1. A generator is just one type of iterator.

If the call to __iter__ is successful, the in keyword applies the function next() to the iterable object over and over again. (Remember: the iterable object could be a generator, or a container object's iterator, or any other iterable object.) Actually, to be more precise: it calls the iterator object's __next__ method.

If the object doesn't have a __iter__ method to return an iterator, in then falls back on the old-style iteration protocol using the object's __getitem__ method2.

If all of the above attempts fail, you'll get a TypeError exception.

If you wish to create your own object type to iterate over (i.e, you can use for in, or just in, on it), it's useful to know about the yield keyword, which is used in generators (as mentioned above).

class MyIterable():
    def __iter__(self):
        yield 1

m = MyIterable()

for _ in m:
    print(_)  # 1

1 in m  # True

The presence of yield turns a function or method into a generator instead of a regular function/method. You don't need the __next__ method if you use a generator (it brings __next__ along with it automatically).

If you wish to create your own container object type (i.e, you can use in on it by itself, but NOT for in), you just need the __contains__ method.

class MyUselessContainer():
    def __contains__(self, obj):
        return True

m = MyUselessContainer()

1 in m          # True
'Foo' in m      # True
TypeError in m  # True
None in m       # True

1 Note that, to be an iterator, an object must implement the iterator protocol. This only means that both the __next__ and __iter__ methods must be correctly implemented (generators come with this functionality "for free", so you don't need to worry about it when using them). Also note that the __next__ method is actually next (no underscores) in Python 2.

2 See this answer for the different ways to create iterable classes.
Correlation matrix using pandas
I have a data set with a huge number of features, so analysing the correlation matrix has become very difficult. I want to plot the correlation matrix that we get using the dataframe.corr() function from the pandas library. Is there any built-in function provided by pandas to plot this matrix?
You can use matshow() from matplotlib: plt.matshow(dataframe.corr())
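matshow on its own gives unlabeled axes; reusing the question's dataframe, a slightly fuller sketch adds the column names and a colorbar:

import matplotlib.pyplot as plt

corr = dataframe.corr()
plt.matshow(corr)
plt.xticks(range(len(corr.columns)), corr.columns, rotation=90)
plt.yticks(range(len(corr.columns)), corr.columns)
plt.colorbar()
plt.show()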
How to map a series of conditions as keys in a dictionary?
I know you can use a dictionary as an alternative to a switch statement such as the following:

def printMessage(mystring):
    # Switch statement without a dictionary
    if mystring == "helloworld":
        print "say hello"
    elif mystring == "byeworld":
        print "say bye"
    elif mystring == "goodafternoonworld":
        print "good afternoon"

def printMessage(mystring):
    # Dictionary equivalent of a switch statement
    myDictionary = {"helloworld": "say hello",
                    "byeworld": "say bye",
                    "goodafternoonworld": "good afternoon"}
    print myDictionary[mystring]

However if conditions other than equality (==) are used, returning true or false, these can't be mapped as easily, i.e.:

if i > 0.5:
    print "greater than 0.5"
elif i == 5:
    print "it is equal to 5"
elif i > 5 and i < 6:
    print "somewhere between 5 and 6"

The above cannot be directly converted to a dictionary key-value pair as is:

# this does not work
mydictionary = { i > 0.5: "greater than 0.5" }

A lambda can be used since it is hashable, but the only way to get the resulting string out of the map is by passing the same lambda object into the dictionary, not when the evaluation of the lambda is true:

x = lambda i: i > 0.5
mydictionary[x] = "greater than 0.5"

# you can get the string by doing this:
mydictionary[x]
# which doesn't result in the evaluation of x

# however a lambda is a hashable item in a dictionary
mydictionary = {lambda i: i > 0.5: "greater than 0.5"}

Does anyone know of a technique or method to create a mapping between a lambda evaluation and a return value? (This may be similar to pattern matching in functional languages.)
Your conditions are sequential in nature; you want to test one after the other, not map a small number of keys to a value here. Changing the order of the conditions could alter the outcome; a value of 5 results in "greater than 0.5" in your sample, not "it is equal to 5". Use a list of tuples: myconditions = [ (lambda i: i > 0.5, "greater than 0.5"), (lambda i: i == 5, "it is equal to 5"), (lambda i: i > 5 and i < 6, "somewhere between 5 and 6"), ] after which you can access each one in turn until one matches: for test, message in myconditions: if test(i): return message Re-ordering the tests will change the outcome. A dictionary works for your first example because there is a simple equality test against multiple static values that is optimised by a dictionary, but there are no such simple equalities available here.
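Wrapped up as a function with a default, the pattern looks like this (building on the myconditions list above):

def describe(i, conditions=myconditions, default="no match"):
    # the first matching test wins, so order the conditions carefully
    for test, message in conditions:
        if test(i):
            return message
    return default

print(describe(0.7))  # greater than 0.5
print(describe(5))    # also "greater than 0.5" -- ordering matters!
print(describe(0.1))  # no match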
Unable to "import matplotlib.pyplot as plt" in virtualenv
I am working with flask in a virtual environment. I was able to install matplotlib with pip, and I can import matplotlib in a Python session. However, when I run import matplotlib.pyplot as plt I get the following error:

>>> import matplotlib.pyplot as plt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/pyplot.py", line 109, in <module>
    _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
  File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup
    globals(),locals(),[backend_name],0)
  File "//anaconda/envs/myenv/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 24, in <module>
    from matplotlib.backends import _macosx
RuntimeError: Python is not installed as a framework. The Mac OS X backend will not be able to function correctly if Python is not installed as a framework. See the Python documentation for more information on installing Python as a framework on Mac OS X. Please either reinstall Python as a framework, or try one of the other backends.

I am confused about why it asks me to install Python as a framework. Doesn't it already exist? What does it mean to "install Python as a framework", and how do I install it?
This solution worked for me. If you already installed matplotlib using pip on your virtual environment, you can just type the following: $ cd ~/.matplotlib $ nano matplotlibrc And then, write backend: TkAgg in there. If you need more information, just go to the solution link.
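If you'd rather not edit a config file, you can also select a backend in code, as long as it happens before the first pyplot import; Agg is a safe choice when you only save figures to files and don't need a GUI window:

import matplotlib
matplotlib.use('TkAgg')  # or 'Agg' for file-only output; must run before importing pyplot
import matplotlib.pyplot as plt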
Memory error while using pip install Matplotlib
I am using Python 2.7. If I try to install Matplotlib I get this error when I use "pip install matplotlib":

Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 232, in main
    status = self.run(options, args)
  File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 339, in run
    requirement_set.prepare_files(finder)
  File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 355, in prepare_files
    do_download, session=self.session,
  File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 782, in unpack_url
    session,
  File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 667, in unpack_http_url
    from_path, content_type = _download_http_url(link, session, temp_dir)
  File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 843, in _download_http_url
    _download_url(resp, link, content_file)
  File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 615, in _download_url
    for chunk in progress_indicator(resp_read(4096), 4096):
  File "/usr/local/lib/python2.7/dist-packages/pip/utils/ui.py", line 46, in iter
    for x in it:
  File "/usr/local/lib/python2.7/dist-packages/pip/download.py", line 580, in resp_read
    decode_content=False):
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 256, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/response.py", line 186, in read
    data = self._fp.read(amt)
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 54, in read
    self.__callback(self.__buf.getvalue())
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/controller.py", line 205, in cache_response
    self.serializer.dumps(request, response, body=body),
  File "/usr/local/lib/python2.7/dist-packages/pip/_vendor/cachecontrol/serialize.py", line 81, in dumps
    ).encode("utf8"),
MemoryError

What might the problem be? I am using a Raspberry Pi 2 with a 16gb SD card. I still have 8gb free but am still getting this error. Kindly help
This error is coming up because, it seems, pip's caching mechanism is trying to read the entire file into memory before caching it… which poses a problem in a limited-memory environment, as matplotlib is ~50mb.

A simpler solution, until pip is patched to use a constant-space caching algorithm, is to run pip with --no-cache-dir to avoid the cache:

$ pip --no-cache-dir install matplotlib
There is no South database module 'south.db.postgresql_psycopg2' for your database
I'm new to Django and I'm getting this error from South, but I don't know what I'm missing. I searched for answers but couldn't find anything.

There is no South database module 'south.db.postgresql_psycopg2' for your database. Please either choose a supported database, check for SOUTH_DATABASE_ADAPTER[S] settings, or remove South from INSTALLED_APPS.

This is my base_settings:

from unipath import Path

BASE_DIR = Path(__file__).ancestor(3)

SECRET_KEY = 'pp@iz7%bc7%+*11%usf7o@_e&)r2o&^3%zjse)n=6b&w^hem96'

DJANGO_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
)

THIRD_PARTY_APPS = (
    'south',
)

LOCAL_APPS = (
)

INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'django.middleware.security.SecurityMiddleware',
)

ROOT_URLCONF = 'misite.urls'

WSGI_APPLICATION = 'misite.wsgi.application'

LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True

local_settings:

from .base import *

TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
DEBUG = True
DEFAULT_FROM_EMAIL = 'webmaster@example.com'

# ----------------------------
# POSTGRESQL
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'misite',
        'USER:': 'fernandoperez',
        'PASSWORD': 'admin',
        'HOST':'localhost',
        'PORT':'5432',
    }
}

SOUTH_DATABASE_ADAPTERS = { 'default': "south.db.postgresql_psycopg2" }

STATIC_URL = '/static/'

Can someone help me? Thanks a lot.
Try pip uninstall South, then you will see if your application is 1.8 ready.
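For context: South is the predecessor of the migrations framework that was merged into Django 1.7, so on Django 1.7/1.8 it has to be removed entirely rather than configured. Assuming the settings files from the question, that means roughly:

# settings/base.py: remove 'south' so it no longer ends up in INSTALLED_APPS
THIRD_PARTY_APPS = (
)

# settings/local.py: delete the SOUTH_DATABASE_ADAPTERS setting entirely

and then using python manage.py makemigrations / python manage.py migrate in place of South's schemamigration/migrate commands. (Unrelated to South: note the stray colon in the 'USER:' key of DATABASES — the key should be 'USER'.)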