Python assignment destructuring
These three expressions seem to be equivalent: a,b,c = line.split() (a,b,c) = line.split() [a,b,c] = line.split() Do they compile to the same code? Which one is more pythonic?
According to dis, they all get compiled to the same bytecode: >>> def f1(line): ... a,b,c = line.split() ... >>> def f2(line): ... (a,b,c) = line.split() ... >>> def f3(line): ... [a,b,c] = line.split() ... >>> import dis >>> dis.dis(f1) 2 0 LOAD_FAST 0 (line) 3 LOAD_ATTR 0 (split) 6 CALL_FUNCTION 0 9 UNPACK_SEQUENCE 3 12 STORE_FAST 1 (a) 15 STORE_FAST 2 (b) 18 STORE_FAST 3 (c) 21 LOAD_CONST 0 (None) 24 RETURN_VALUE >>> dis.dis(f2) 2 0 LOAD_FAST 0 (line) 3 LOAD_ATTR 0 (split) 6 CALL_FUNCTION 0 9 UNPACK_SEQUENCE 3 12 STORE_FAST 1 (a) 15 STORE_FAST 2 (b) 18 STORE_FAST 3 (c) 21 LOAD_CONST 0 (None) 24 RETURN_VALUE >>> dis.dis(f3) 2 0 LOAD_FAST 0 (line) 3 LOAD_ATTR 0 (split) 6 CALL_FUNCTION 0 9 UNPACK_SEQUENCE 3 12 STORE_FAST 1 (a) 15 STORE_FAST 2 (b) 18 STORE_FAST 3 (c) 21 LOAD_CONST 0 (None) 24 RETURN_VALUE So they should all have the same efficiency. As far as which is most Pythonic, it's somewhat down to opinion, but I would favor either the first or (to a lesser degree) the second option. Using the square brackets is confusing because it looks like you're creating a list (though it turns out you're not).
E731 do not assign a lambda expression, use a def
I get this PEP 8 warning whenever I use a lambda expression. Are lambda expressions discouraged? If so, why?
The recommendation in PEP-8 you are running into is: Always use a def statement instead of an assignment statement that binds a lambda expression directly to a name. Yes: def f(x): return 2*x No: f = lambda x: 2*x The first form means that the name of the resulting function object is specifically 'f' instead of the generic '<lambda>'. This is more useful for tracebacks and string representations in general. The use of the assignment statement eliminates the sole benefit a lambda expression can offer over an explicit def statement (i.e. that it can be embedded inside a larger expression). Assigning lambdas to names basically just duplicates the functionality of def - and in general, it's best to do something a single way to avoid confusion and increase clarity. The legitimate use case for lambda is where you want to use a function without assigning it, e.g.: sorted(players, key=lambda player: player.rank)
install scipy module on centos
I'm using centos, and I installed python2.7 alternately to the default python. I'm able to install all the module I want with pip, but I can't install scipy. When I'm doing sudo /usr/local/bin/pip2.7 install scipy It's downloading it, but then I have this error message: blas_mkl_info: libraries mkl,vml,guide not found in ['/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib'] NOT AVAILABLE openblas_info: libraries not found in ['/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in ['/usr/local/lib64', '/usr/local/lib', '/usr/lib64/atlas', '/usr/lib64/sse2', '/usr/lib64', '/usr/lib'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['/usr/local/lib64', '/usr/local/lib', '/usr/lib64/atlas', '/usr/lib64/sse2', '/usr/lib64', '/usr/lib'] NOT AVAILABLE /usr/local/lib/python2.7/site-packages/numpy/distutils/system_info.py:1521: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in ['/usr/local/lib64', '/usr/local/lib', '/usr/lib64', '/usr/lib'] NOT AVAILABLE /usr/local/lib/python2.7/site-packages/numpy/distutils/system_info.py:1530: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE /usr/local/lib/python2.7/site-packages/numpy/distutils/system_info.py:1533: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. 
warnings.warn(BlasSrcNotFoundError.__doc__) Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip_build_root/scipy/setup.py", line 237, in <module> setup_package() File "/tmp/pip_build_root/scipy/setup.py", line 234, in setup_package setup(**metadata) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/core.py", line 135, in setup config = configuration() File "/tmp/pip_build_root/scipy/setup.py", line 173, in configuration config.add_subpackage('scipy') File "/usr/local/lib/python2.7/site-packages/numpy/distutils/misc_util.py", line 966, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/misc_util.py", line 935, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/misc_util.py", line 872, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/setup.py", line 12, in configuration config.add_subpackage('integrate') File "/usr/local/lib/python2.7/site-packages/numpy/distutils/misc_util.py", line 966, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/misc_util.py", line 935, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/misc_util.py", line 872, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "scipy/integrate/setup.py", line 12, in configuration blas_opt = get_info('blas_opt',notfound_action=2) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/system_info.py", line 350, in get_info return cl().get_info(notfound_action) File "/usr/local/lib/python2.7/site-packages/numpy/distutils/system_info.py", line 509, in get_info raise self.notfounderror(self.notfounderror.__doc__) numpy.distutils.system_info.BlasNotFoundError: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. ---------------------------------------- Cleaning up... Command /usr/local/bin/python2.7 -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-z_rbuN-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/scipy Storing debug log for failure in /root/.pip/pip.log any idea how to install it?
SciPy requires the BLAS and LAPACK linear algebra libraries to be installed on your system. Make sure the following (or variants thereof) are installed: sudo yum install blas-devel lapack-devel
Django 1.7: where to put the code to add Groups programmatically?
I have been trying to find the answer in the Django Auth docs, but can not seem to find what i am looking for. The problem i am having is, when i define the code for adding Groups (same as Groups in the admin page): #read_only group, created = Group.objects.get_or_create(name='read_only') if created: group.permissions.add(can_read_campaign) logger.info('read_only_user Group created') #standard group, created = Group.objects.get_or_create(name='standard_user') if created: group.permissions.add(can_edit_users) logger.info('standard_user Group created') #admin group, created = Group.objects.get_or_create(name='admin_user') if created: group.permissions.add(can_edit_campaign, can_edit_users) logger.info('admin_user Group created') When I have run this code in models.py and init.py and they both give me this error: django.core.exceptions.AppRegistryNotReady I presume this is due to the Model/init trying to insert things into the django app/admin too early? Would someone please be able to help me with a solution so i can add these Groups programmatically. Thank you. EDIT: This is not a duplicate question, this was actually adding permission and groups within the models during setup of the project, rather than through the shell. I have solved this issues, by using signals and receivers (django modules). I added the code to create the permissions/groups into it's own function and decorated this with a receiver (post_migrate), which will run this function after migrations are complete, removing this error. @receiver(post_migrate) def init_groups(sender, **kwargs): #permission and group code goes here I hope this can help somebody with this problem!
I was recommended this way to do it: Create a fake migration in the appropriate module: python manage.py makemigrations --empty yourappname Open up the file that was created, which should look like this: # -*- coding: utf-8 -*- from django.db import models, migrations class Migration(migrations.Migration): dependencies = [ ('yourappname', '0001_initial'), ] operations = [ ] And add your code: # -*- coding: utf-8 -*- from django.db import models, migrations def add_group_permissions(): #read_only group, created = Group.objects.get_or_create(name='read_only') if created: group.permissions.add(can_read_campaign) logger.info('read_only_user Group created') #standard group, created = Group.objects.get_or_create(name='standard_user') if created: group.permissions.add(can_edit_users) logger.info('standard_user Group created') #admin group, created = Group.objects.get_or_create(name='admin_user') if created: group.permissions.add(can_edit_campaign, can_edit_users) logger.info('admin_user Group created') class Migration(migrations.Migration): dependencies = [ ('yourappname', '0001_initial'), ] operations = [ migrations.RunPython(add_group_permissions), ] Finally, run the migration: python manage.py migrate This is nice because you can deploy to Heroku or wherever and be sure it'll be applied, as it's just another migration.
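One detail worth noting: migrations.RunPython calls the given function with two arguments (apps, schema_editor), and inside a data migration the usual practice is to fetch the historical model via apps.get_model rather than importing it directly. A minimal sketch of the same idea, assuming the permissions are looked up the same way as in the snippet above: from django.db import migrations
def add_group_permissions(apps, schema_editor):
    # Fetch the historical Group model; a dependency on the auth app may be
    # needed in this migration for the lookup to work.
    Group = apps.get_model('auth', 'Group')
    group, created = Group.objects.get_or_create(name='read_only')
    # ... add permissions to the group here, as in the snippet above ...
class Migration(migrations.Migration):
    dependencies = [('yourappname', '0001_initial')]
    operations = [migrations.RunPython(add_group_permissions)]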
find numeric columns in pandas (python)
Say df is a pandas DataFrame. I would like to find all columns of numeric type. Something like: isNumeric = is_numeric(df)
You could use select_dtypes method of DataFrame. It includes two parameters include and exclude. So isNumeric would look like: numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] newdf = df.select_dtypes(include=numerics)
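If you would rather not list every numeric dtype by hand, passing np.number to select_dtypes should pick up all numeric columns at once; a small sketch:
import numpy as np
# Select every column whose dtype is a subtype of np.number (ints and floats).
newdf = df.select_dtypes(include=[np.number])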
Extract first item of each sublist in python
I am wondering what the best way is to extract the first item of each sublist in a list of lists and append it to a new list. So if I have: lst = [[a,b,c], [1,2,3], [x,y,z]] and I want to pull out a, 1 and x and create a separate list from those. I tried: lst2.append(x[0] for x in lst)
Using list comprehension: >>> lst = [['a','b','c'], [1,2,3], ['x','y','z']] >>> lst2 = [item[0] for item in lst] >>> lst2 ['a', 1, 'x']
OrderedDict vs Dict in python
In Tim Peter's answer to "Are there any reasons not to use an ordered dictionary", he says OrderedDict is a subclass of dict. It's not a lot slower, but at least doubles the memory over using a plain dict. Now, while going through a particular question, I tried some sample checks using ipython and both of them contradict the earlier reasoning: both dict and OrderedDict are of same size operating on an OrderedDict takes easily around 7-8 times more time than operating on a dict (Hence a lot slower) Can someone explain to me where I'm going wrong in my reasoning? Create a large Dict and OrderedDict and compare sizes: import sys import random from collections import OrderedDict test_dict = {} test_ordered_dict = OrderedDict() for key in range(10000): test_dict[key] = random.random() test_ordered_dict[key] = random.random() sys.getsizeof(test_dict) 786712 sys.getsizeof(test_ordered_dict) 786712 Check time taken for the insertions using %timeit: import sys import random from collections import OrderedDict def operate_on_dict(r): test_dict = {} for key in range(r): test_dict[key] = random.random() def operate_on_ordered_dict(r): test_ordered_dict = OrderedDict() for key in range(r): test_ordered_dict[key] = random.random() %timeit for x in range(100): operate_on_ordered_dict(100) 100 loops, best of 3: 9.24 ms per loop %timeit for x in range(100): operate_on_dict(100) 1000 loops, best of 3: 1.23 ms per loop
I think the problem with size is due to the fact that there's no __sizeof__ method defined in Python 2.X implementation of OrderedDict, so it simply falls back to dict's __sizeof__ method. To prove this here I've created a class A here which extends list and also added an additional method foo to check if that affects the size. class A(list): def __getitem__(self, k): return list.__getitem__(self, k) def foo(self): print 'abcde' >>> a = A(range(1000)) >>> b = list(range(1000)) But still same size is returned by sys.getsizeof: >>> sys.getsizeof(a), sys.getsizeof(b) (9120, 9120) Of course A is going to be slow because its methods are running in Python while list's method will run in pure C. >>> %%timeit ... for _ in xrange(1000): ... a[_] ... 1000 loops, best of 3: 449 µs per loop >>> %%timeit for _ in xrange(1000): b[_] ... 10000 loops, best of 3: 52 µs per loop And this seems to be fixed in Python 3 where there's a well defined __sizeof__ method now: def __sizeof__(self): sizeof = _sys.getsizeof n = len(self) + 1 # number of links including root size = sizeof(self.__dict__) # instance dictionary size += sizeof(self.__map) * 2 # internal dict and inherited dict size += sizeof(self.__hardroot) * n # link objects size += sizeof(self.__root) * n # proxy objects return size
Python - Running Autobahn|Python asyncio websocket server in a separate subprocess or thread
I have a tkinter based GUI program running in Python 3.4.1. I have several threads running in the program to get JSON data from various urls. I am wanting to add some WebSocket functionality to be able to allow program to act as a server and allow several clients to connect to it over a WebSocket and exchange other JSON data. I am attempting to use the Autobahn|Python WebSocket server for asyncio. I first tried to run the asyncio event loop in a separate thread under the GUI program. However, every attempt gives 'AssertionError: There is no current event loop in thread 'Thread-1'. I then tried spawning a process with the standard library multiprocessing package that ran the asyncio event loop in another Process. When I try this I don't get any exception but the WebSocket server doesn't start either. Is it even possible to run an asyncio event loop in a subprocess from another Python program? Is there even a way to integrate an asyncio event loop into a currently multithreaded/tkinter program? UPDATE Below is the actual code I am trying to run for an initial test. from autobahn.asyncio.websocket import WebSocketServerProtocol from autobahn.asyncio.websocket import WebSocketServerFactory import asyncio from multiprocessing import Process class MyServerProtocol(WebSocketServerProtocol): def onConnect(self, request): print("Client connecting: {0}".format(request.peer)) def onOpen(self): print("WebSocket connection open.") def onMessage(self, payload, isBinary): if isBinary: print("Binary message received: {0} bytes".format(len(payload))) else: print("Text message received: {0}".format(payload.decode('utf8'))) ## echo back message verbatim self.sendMessage(payload, isBinary) def onClose(self, wasClean, code, reason): print("WebSocket connection closed: {0}".format(reason)) def start_server(): factory = WebSocketServerFactory("ws://10.241.142.27:6900", debug = False) factory.protocol = MyServerProtocol loop = asyncio.get_event_loop() coro = loop.create_server(factory, '10.241.142.27', 6900) server = loop.run_until_complete(coro) loop.run_forever() server.close() loop.close() websocket_server_process = Process(target = start_server) websocket_server_process.start() Most of it is straight from the Autobahn|Python example code for asyncio. If I try to run it as a Process it doesn't do anything, no client can connect to it, if I run netstat -a there is no port 6900 being used. If just use start_server() in the main program it creates the WebSocket Server.
First, you're getting AssertionError: There is no current event loop in thread 'Thread-1'. because asyncio requires each thread in your program to have its own event loop, but it will only automatically create an event loop for you in the main thread. So if you call asyncio.get_event_loop once in the main thread it will automatically create a loop object and set it as the default for you, but if you call it again in a child thread, you'll get that error. Instead, you need to explicitly create/set the event loop when the thread starts: loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) Once you've done that, you should be able to use get_event_loop() in that specific thread. It is possible to start an asyncio event loop in a subprocess started via multiprocessing: import asyncio from multiprocessing import Process @asyncio.coroutine def coro(): print("hi") def worker(): loop = asyncio.get_event_loop() loop.run_until_complete(coro()) if __name__ == "__main__": p = Process(target=worker) p.start() p.join() Output: hi The only caveat is that if you start an event loop in the parent process as well as the child, you need to explicitly create/set a new event loop in the child if you're on a Unix platform (due to a bug in Python). It should work fine on Windows, or if you use the 'spawn' multiprocessing context. I think it should be possible to start an asyncio event loop in a background thread (or process) of your Tkinter application and have both the tkinter and asyncio event loop run side-by-side. You'll only run into issues if you try to update the GUI from the background thread/process.
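For the thread-based variant, a minimal sketch of running an asyncio event loop in a background thread next to the tkinter mainloop might look like this (the way work is scheduled onto the loop is just a placeholder):
import asyncio
import threading

def run_loop(loop):
    # Each thread needs its own event loop; set it before running it.
    asyncio.set_event_loop(loop)
    loop.run_forever()

loop = asyncio.new_event_loop()
t = threading.Thread(target=run_loop, args=(loop,))
t.daemon = True
t.start()

# From the main (tkinter) thread, hand work to the asyncio loop thread-safely:
loop.call_soon_threadsafe(print, "scheduled from the GUI thread")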
pypi see older versions of package
This is the package I'm interested in: https://pypi.python.org/pypi/django-filebrowser-no-grappelli/ However, the latest version no longer supports Django 1.3. I need to find a version that does. How do I see a list of older versions?
It's perhaps a little inelegant, but it appears that you can go to the URL https://pypi.python.org/simple/<package> And you will get a bunch of links to tarballs for the package. Ex: https://pypi.python.org/simple/django-filebrowser-no-grappelli/
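Another option, assuming the JSON API is available for the package, is to ask PyPI for the release list programmatically; a rough sketch (the hostname shown is the one current at the time of writing):
import requests

# The /json endpoint lists all releases of a package.
resp = requests.get('https://pypi.python.org/pypi/django-filebrowser-no-grappelli/json')
print(sorted(resp.json()['releases'].keys()))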
Compiling Cx-Freeze under Ubuntu
For the entire day I have been attempting to compile cx-Freeze under Ubuntu 14.04 and had no luck. So I gave up and decided to ask experts here. What I have Ubuntu 14.04 Python 3.4 python-dev, python3-dev, python3.4-dev installed (I know this common issue) Sources of cx-Freeze 4.3.3 I tried two ways: install from the sources install by pip Install from the sources sudo python3 setup.py install What I got a lot of MyPath/cx_Freeze-4.3.3/source/bases/Console.c:24: undefined reference to `PyErr_Print' MyPath/cx_Freeze-4.3.3/source/bases/Console.c:24: undefined reference to `Py_FatalError' and then collect2: error: ld returned 1 exit status error: command 'i686-linux-gnu-gcc' failed with exit status 1 Install by pip sudo pip3 install cx-Freeze What I got collect2: error: ld returned 1 exit status error: command 'i686-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Cleaning up... Command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/cx-Freeze/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-c954v7x6-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/cx-Freeze Storing debug log for failure in /home/grimel/.pip/pip.log and in pip.log Exception information: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 283, in run requirement_set.install(install_options, global_options, root=options.root_path) File "/usr/lib/python3/dist-packages/pip/req.py", line 1435, in install requirement.install(install_options, global_options, *args, **kwargs) File "/usr/lib/python3/dist-packages/pip/req.py", line 706, in install cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False) File "/usr/lib/python3/dist-packages/pip/util.py", line 697, in call_subprocess % (command_desc, proc.returncode, cwd)) pip.exceptions.InstallationError: Command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/cx-Freeze/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-c954v7x6-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/cx-Freeze So, I expect you to help me with this issue and gonna be very thankful:)
In setup.py, replace the line if not vars.get("Py_ENABLE_SHARED", 0): with if True: Thanks to Thomas K
How to properly stop phantomjs execution
I start and close phantomjs in Python with the following: from selenium import webdriver driver = webdriver.PhantomJS() driver.get(url) html_doc = driver.page_source driver.close() Yet after the script finishes executing I still find an instance of phantomjs in my Mac Activity Monitor, and in fact every time I run the script a new phantomjs process is created. How should I close the driver?
I've seen several people struggle with the same issue, but for me, the simplest workaround/hack was to execute the following from the command line through Python AFTER you have invoked driver.close() or driver.quit(): pgrep phantomjs | xargs kill Please note that this will obviously cause trouble if you have several threads/processes starting PhantomJS on your machine.
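If you want to trigger that from inside the script rather than typing it in a shell, a small sketch using only the standard library (again, this kills every phantomjs process on the machine, not just the one the script started):
import subprocess

def kill_phantomjs():
    # Equivalent to running the shell one-liner above.
    subprocess.call('pgrep phantomjs | xargs kill', shell=True)

driver.quit()
kill_phantomjs()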
After OS python upgrade, virtualenv python failing with "undefined symbol: _PyLong_AsInt" error on simple tasks
I had a long-working virtualenv based on python-2.7.3. After accepting recommended platform OS (Ubuntu) updates which (among many other changes) brought python up to 2.7.6, the python inside the virtualenv has started erroring on essentially all non-trivial tasks, with stacks ending like: ImportError: /home/myusername/ENVS/myvenv/lib/python2.7/lib-dynload/_io.so: undefined symbol: _PyLong_AsInt Even pip freeze is failing with such an error -- making it impossible to even get an accurate inventory of installed packages in the broken virtualenv (for potentially reinstalling into a fresh working virtualenv)! Shouldn't the virtualenv be protected from such outside upgrades? Or at least within the 2.7.x series?
You can simply do cp /usr/bin/python2 /path/to/my-virtualenv/bin/python2 or cp /usr/bin/python3 /path/to/my-virtualenv/bin/python3
Ansible: override a single dictionary key
I am using ansible to manage configuration for production as well as for a vagrant box. I have a file with default values: group_vars/all. --- env: prod wwwuser: www-data db: root_pwd: root_pwd pdo_driver: pdo_mysql host: localhost name: test user: test pwd: test charset: utf8 domain: somedomain projectdir: /var/www/application webrootdir: "{{ projectdir }}/web" In host_vars/vagrantbox I want to have something like: db: root_pwd: super_easy_password But this one completely overrides the db dictionary, while I want to override a single key. How do I achieve that? UPDATE 1 Just checked with ansible.cfg: [defaults] host_key_checking=false hash_behaviour=merge group_vars/all db: root_pwd: some_strong_pwd pdo_driver: pdo_mysql host: localhost name: dbname user: dbuser pwd: some password charset: utf8 host_vars/vagrantbox db: root_pwd: root I am getting the following error: One or more undefined variables: 'dict object' has no attribute 'name' What am I doing wrong?
By default, Ansible overrides variables at the first level. If you want to be able to merge dictionaries, you have to change your ansible.cfg file and set: hash_behaviour=merge (the default value being replace). Note that the Ansible team does not recommend this (but does not explain why). I guess this is a setting that really divides users; a kind of decision that is made once and for all: when you start using this feature, you cannot go back, and you probably cannot share your playbook with replace-type people. However, you can still benefit from the playbooks out there (I don't think playbooks use the replace behaviour as a "feature"). It's like having an AB blood type, being a universal receiver... but since the magic usually happens at variable resolution, not inside tasks or templates, I think it is often possible to share your roles without any changes. If you need to override a single key from, let's say, role parameters, you'll have to pass parameters in a somewhat convoluted way. For instance, to override the post_max_size and upload_max_filesize keys in a php5 dictionary for a specific role, you'd do it this way: - { role: php5-fpm, php5: { post_max_size: 40M, upload_max_filesize: 20M }} That being said, I have used the merge behaviour since the beginning, and I'm pretty happy with it. It is very handy for keeping variables organised.
How do I make pyCharm stop hiding (unfold) my python imports?
Every time I open a python module file pyCharm will hide all imports and show import ... within the editor. I have to manually unfold it to see the imports. Where do I find the setting to undo auto-hiding of import statements? Thanks! EDIT: Added code-folding to the tags.
As this question may be useful for people who also are not looking for the term "code folding", I'll make my comment an answer. As extracted from IntelliJ IDEA 13.1.0 Web Help, but also worked on PyCharm CE 3.4.1: Open the IDE Settings (File > Settings, or Ctrl+Alt+S). Under the "Editor" node, click "Code Folding". The "Code Folding" page is displayed. In the "Collapse by default list", select the check boxes to the left of the code constructs you want to be displayed collapsed. So here you can uncheck "Imports". Apply changes. The image below shows what it looks like:
Extracting just Month and Year from Pandas Datetime column (Python)
I have a Dataframe, df, with the following column: df['ArrivalDate'] = ... 936 2012-12-31 938 2012-12-29 965 2012-12-31 966 2012-12-31 967 2012-12-31 968 2012-12-31 969 2012-12-31 970 2012-12-29 971 2012-12-31 972 2012-12-29 973 2012-12-29 ... The elements of the column are pandas.tslib.Timestamp. I want to just include the year and month. I thought there would be simple way to do it, but I can't figure it out. Here's what I've tried: df['ArrivalDate'].resample('M', how = 'mean') I got the following error: Only valid with DatetimeIndex or PeriodIndex Then I tried: df['ArrivalDate'].apply(lambda(x):x[:-2]) I got the following error: 'Timestamp' object has no attribute '__getitem__' Any suggestions? Edit: I sort of figured it out. df.index = df['ArrivalDate'] Then, I can resample another column using the index. But I'd still like a method for reconfiguring the entire column. Any ideas?
If you want new columns showing year and month separately you can do this: df['year'] = pd.DatetimeIndex(df['ArrivalDate']).year df['month'] = pd.DatetimeIndex(df['ArrivalDate']).month or... df['year'] = df['ArrivalDate'].dt.year df['month'] = df['ArrivalDate'].dt.month Then you can combine them or work with them just as they are.
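If you would rather keep year and month together in a single column, recent pandas versions also let you convert the timestamps to a monthly period; a short sketch:
# Each value becomes a Period like 2012-12, combining year and month.
df['year_month'] = df['ArrivalDate'].dt.to_period('M')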
Export csv file from scrapy (not via command line)
I successfully exported my items into a csv file from the command line like this: scrapy crawl spiderName -o filename.csv My question is: What is the easiest solution to do the same in code? I need this because I extract the filename from another file. The end scenario should be that I call scrapy crawl spiderName and it writes the items into filename.csv
Why not use an item pipeline? WriteToCsv.py import csv from YOUR_PROJECT_NAME_HERE import settings def write_to_csv(item): writer = csv.writer(open(settings.csv_file_path, 'a'), lineterminator='\n') writer.writerow([item[key] for key in item.keys()]) class WriteToCsv(object): def process_item(self, item, spider): write_to_csv(item) return item settings.py ITEM_PIPELINES = { 'project.pipelines_path.WriteToCsv.WriteToCsv' : A_NUMBER_HIGHER_THAN_ALL_OTHER_PIPELINES} csv_file_path = PATH_TO_CSV If you wanted items to be written to separate csv for separate spiders you could give your spider a CSV_PATH field. Then in your pipeline use your spiders field instead of path from setttigs. This works I tested it in my project. HTH http://doc.scrapy.org/en/latest/topics/item-pipeline.html
Pylint invalid constant name
I'm receiving a Pylint error regarding my constant: MIN_SOIL_PARTICLE_DENS (invalid name). Any ideas why this constant is wrong? Here's my full function: def bulk_density(clay, sand, organic_matter): MIN_SOIL_PARTICLE_DENS = 2.65 x1 = (0.078 + 0.278 * sand + 0.034 * clay + 0.022 * organic_matter - 0.018 * sand * organic_matter - 0.027 * clay * organic_matter - 0.584 * sand * clay) x2 = -0.107 + 1.636 * x1 field_capacity = vol_water_content_33_j_kg(clay, sand, organic_matter)#m3/m3 sat_water_content = 0.043 + field_capacity + x2 - 0.097 * sand return (1 - sat_water_content) * MIN_SOIL_PARTICLE_DENS
When checking names, Pylint differentiates between constants, variables, classes etc. Any name that is not inside a function/class will be considered a constant, anything else is a variable. See http://docs.pylint.org/features.html#basic-checker variable-rgx: [a-z_][a-z0-9_]{2,30}$ const-rgx: (([A-Z_][A-Z0-9_]*)|(__.*__))$ Because you're inside a function, pylint expects MIN_SOIL_PARTICLE_DENS to be a variable (matching the variable regex), but you've named it like a constant, so it complains. This means you can't have any uppercase names inside functions without pylint complaining. If you ask me, using uppercase inside functions is fine; not all constants are necessarily defined globally.
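If you want to keep the uppercase name inside the function anyway, you can silence the check for just that line with an inline comment, for example:
MIN_SOIL_PARTICLE_DENS = 2.65  # pylint: disable=invalid-name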
Creating lowpass filter in SciPy - understanding methods and units
I am trying to filter a noisy heart rate signal with python. Because heart rates should never be about 220 beats per minute i want to filter out all noise above 220bpm. I converted 220/minute into 3.66666666 Hertz and then converted that Hertz to rad/s to get 23.0383461 rad/sec. The sampling frequency of the chip that takes data is 30Hz so i converted that to rad/s to get 188.495559 rad/s. After looking up some stuff online i found some unctions for a bandpass filter that i wanted to make into a lowpass. Here is the link the bandpass code, so i converted it to be this: from scipy.signal import butter, lfilter from scipy.signal import freqs def butter_lowpass(cutOff, fs, order=5): nyq = 0.5 * fs normalCutoff = cutOff / nyq b, a = butter(order, normalCutoff, btype='low', analog = True) return b, a def butter_lowpass_filter(data, cutOff, fs, order=4): b, a = butter_lowpass(cutOff, fs, order=order) y = lfilter(b, a, data) return y cutOff = 23.1 #cutoff frequency in rad/s fs = 188.495559 #sampling frequency in rad/s order = 20 #order of filter #print sticker_data.ps1_dxdt2 y = butter_lowpass_filter(data, cutOff, fs, order) plt.plot(y) I am very confused by this though because i am pretty sure the butter function takes in the cutoff and sampling frequency in rad/s but i seem to be getting a weird output. Is it actually in Hz? Secondly what is the purpose of these two lines: nyq = 0.5 * fs normalCutoff = cutOff / nyq I know its something about normalization but i thought the nyquist was 2 times the sampling requency, not one half. And why are you using the nyquist as a normalizer? Can one explain more about how to create filters with these functions? I plotted the filter using w, h = signal.freqs(b, a) plt.plot(w, 20 * np.log10(abs(h))) plt.xscale('log') plt.title('Butterworth filter frequency response') plt.xlabel('Frequency [radians / second]') plt.ylabel('Amplitude [dB]') plt.margins(0, 0.1) plt.grid(which='both', axis='both') plt.axvline(100, color='green') # cutoff frequency plt.show() and got this that clearly does not cut off at 23 rad/s
A few comments: The Nyquist frequency is half the sampling rate. You are working with regularly sampled data, so you want a digital filter, not an analog filter. This means you should not use analog=True in the call to butter, and you should use scipy.signal.freqz (not freqs) to generate the frequency response. One goal of those short utility functions is to allow you to leave all your frequencies expressed in Hz. You shouldn't have to convert to rad/sec. As long as you express your frequencies with consistent units, the scaling in the utility functions takes care of the normalization for you. Here's my modified version of your script, followed by the plot that it generates. import numpy as np from scipy.signal import butter, lfilter, freqz import matplotlib.pyplot as plt def butter_lowpass(cutoff, fs, order=5): nyq = 0.5 * fs normal_cutoff = cutoff / nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def butter_lowpass_filter(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = lfilter(b, a, data) return y # Filter requirements. order = 6 fs = 30.0 # sample rate, Hz cutoff = 3.667 # desired cutoff frequency of the filter, Hz # Get the filter coefficients so we can check its frequency response. b, a = butter_lowpass(cutoff, fs, order) # Plot the frequency response. w, h = freqz(b, a, worN=8000) plt.subplot(2, 1, 1) plt.plot(0.5*fs*w/np.pi, np.abs(h), 'b') plt.plot(cutoff, 0.5*np.sqrt(2), 'ko') plt.axvline(cutoff, color='k') plt.xlim(0, 0.5*fs) plt.title("Lowpass Filter Frequency Response") plt.xlabel('Frequency [Hz]') plt.grid() # Demonstrate the use of the filter. # First make some data to be filtered. T = 5.0 # seconds n = int(T * fs) # total number of samples t = np.linspace(0, T, n, endpoint=False) # "Noisy" data. We want to recover the 1.2 Hz signal from this. data = np.sin(1.2*2*np.pi*t) + 1.5*np.cos(9*2*np.pi*t) + 0.5*np.sin(12.0*2*np.pi*t) # Filter the data, and plot both the original and filtered signals. y = butter_lowpass_filter(data, cutoff, fs, order) plt.subplot(2, 1, 2) plt.plot(t, data, 'b-', label='data') plt.plot(t, y, 'g-', linewidth=2, label='filtered data') plt.xlabel('Time [sec]') plt.grid() plt.legend() plt.subplots_adjust(hspace=0.35) plt.show()
How to turn off INFO logging in PySpark?
I installed Spark using the AWS EC2 guide and I can launch the program fine using the bin/pyspark script to get to the spark prompt and can also do the Quick Start quide successfully. However, I cannot for the life of me figure out how to stop all of the verbose INFO logging after each command. I have tried nearly every possible scenario in the below code (commenting out, setting to OFF) within my log4j.properties file in the conf folder in where I launch the application from as well as on each node and nothing is doing anything. I still get the logging INFO statements printing after executing each statement. I am very confused with how this is supposed to work. #Set everything to be logged to the console log4j.rootCategory=INFO, console log4j.appender.console=org.apache.log4j.ConsoleAppender log4j.appender.console.target=System.err log4j.appender.console.layout=org.apache.log4j.PatternLayout log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n # Settings to quiet third party logs that are too verbose log4j.logger.org.eclipse.jetty=WARN log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO Here is my full classpath when I use SPARK_PRINT_LAUNCH_COMMAND: Spark Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java -cp :/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main contents of spark-env.sh: #!/usr/bin/env bash # This file is sourced when running various Spark programs. # Copy it as spark-env.sh and edit that to configure Spark for your site. # Options read when launching programs locally with # ./bin/run-example or ./bin/spark-submit # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files # - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node # - SPARK_PUBLIC_DNS, to set the public dns name of the driver program # - SPARK_CLASSPATH=/root/spark-1.0.1-bin-hadoop2/conf/ # Options read by executors and drivers running inside the cluster # - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node # - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program # - SPARK_CLASSPATH, default classpath entries to append # - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data # - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos # Options read in YARN client mode # - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files # - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2) # - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1). # - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G) # - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb) # - SPARK_YARN_APP_NAME, The name of your application (Default: Spark) # - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’) # - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job. 
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job. # Options for the daemons used in the standalone deploy mode: # - SPARK_MASTER_IP, to bind the master to a different IP address or hostname # - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master # - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y") # - SPARK_WORKER_CORES, to set the number of cores to use on this machine # - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g) # - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker # - SPARK_WORKER_INSTANCES, to set the number of worker processes per node # - SPARK_WORKER_DIR, to set the working directory of worker processes # - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y") # - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y") # - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y") # - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf"
Just execute this command in the spark directory: cp conf/log4j.properties.template conf/log4j.properties Edit log4j.properties: # Set everything to be logged to the console log4j.rootCategory=INFO, console log4j.appender.console=org.apache.log4j.ConsoleAppender log4j.appender.console.target=System.err log4j.appender.console.layout=org.apache.log4j.PatternLayout log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n # Settings to quiet third party logs that are too verbose log4j.logger.org.eclipse.jetty=WARN log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO Replace at the first line: log4j.rootCategory=INFO, console by: log4j.rootCategory=WARN, console Save and restart your shell. It works for me for Spark 1.1.0 and Spark 1.5.1 on OS X.
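As a side note, newer Spark releases (1.4 and later, if I recall correctly) also let you change the level at runtime from PySpark without touching log4j.properties:
# sc is the SparkContext created by the pyspark shell.
sc.setLogLevel("WARN")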
One chart with two different y axis ranges in Bokeh?
I would like a Bar chart with Quantity information on the left y-axis, and then overlay a Scatter/Line plot with Yield % on the right. I can create each of these charts separately, but do not know how to combine them into a single plot. In matplotlib, we would create a second figure using twinx(), and then use yaxis.tick_left() and yaxis.tick_right() on the respective figures. Is there a method for doing something similar with Bokeh?
Yes, now it is possible to have two y axes in Bokeh plots. The code below shows script parts significant in setting up the second y axis to the usual figure plotting script. # Modules needed from Bokeh. from bokeh.io import output_file, show from bokeh.plotting import figure from bokeh.models import LinearAxis, Range1d # Seting the params for the first figure. s1 = figure(x_axis_type="datetime", tools=TOOLS, plot_width=1000, plot_height=600) # Setting the second y axis range name and range s1.extra_y_ranges = {"foo": Range1d(start=-100, end=200)} # Adding the second axis to the plot. s1.add_layout(LinearAxis(y_range_name="foo"), 'right') # Setting the rect glyph params for the first graph. # Using the default y range and y axis here. s1.rect(df_j.timestamp, mids, w, spans, fill_color="#D5E1DD", line_color="black") # Setting the rect glyph params for the second graph. # Using the aditional y range named "foo" and "right" y axis here. s1.rect(df_j.timestamp, ad_bar_coord, w, bar_span, fill_color="#D5E1DD", color="green", y_range_name="foo") # Show the combined graphs with twin y axes. show(s1) And the plot we get looks like this:
What happens when a function returns its own name in python?
def traceit(frame, event, trace_arg): global stepping if event == 'line': if stepping or frame.f_lineno in breakpoints: resume = False while not resume: print(event, frame.f_lineno, frame.f_code.co_name, frame.f_locals) command = input_command() resume = debug(command, frame.f_locals) return traceit What is the meaning of the last line in the code? EDIT: def remove_html_markup(s): tag = False quote = False out = "" for c in s: if c == '<' and not quote: tag = True elif c == '>' and not quote: tag = False elif c == '"' or c == "'" and tag: quote = not quote elif not tag: out = out + c return out def main(): print (remove_html_markup('xyz')) print (remove_html_markup('"<b>foo</b>"')) print (remove_html_markup("'<b>foo</b>'")) # globals breakpoints = {9: True} stepping = False def debug(command, my_locals): global stepping global breakpoints if command.find(' ') > 0: arg = command.split(' ')[1] else: arg = None if command.startswith('s'): # step stepping = True return True elif command.startswith('c'): # continue stepping = False return True elif command.startswith('q'): # quit sys.exit(0) else: print ("No such command", repr(command)) return False commands = ['s', 's', 's', 'q'] def input_command(): #command = raw_input("(my-spyder) ") global commands command = commands.pop(0) return command def traceit(frame, event, trace_arg): global stepping if event == 'line': if stepping or frame.f_lineno in breakpoints: resume = False while not resume: print(event, frame.f_lineno, frame.f_code.co_name, frame.f_locals) command = input_command() resume = debug(command, frame.f_locals) return traceit # Using the tracer sys.settrace(traceit) main() sys.settrace(None)
A function is an object like any other, so there's no problem with it returning itself. For example, it allows repeated calls on the same line: traceit("abc", "def", None)("ghi", "jkl", 3)("mno", "pqr", 4.3) Edit: sys.settrace sets the global tracing function, which is invoked every time a local scope is entered to ask for a local tracing function. Here it returns itself, so all the tracing is handled by the same function. See https://docs.python.org/2/library/sys.html#sys.settrace for details.
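A tiny self-contained sketch of that pattern with sys.settrace, where the trace function returns itself so it keeps receiving events for the scope it just entered:
import sys

def tracer(frame, event, arg):
    # Whatever this returns becomes the local trace function for the new scope.
    print(event, frame.f_lineno, frame.f_code.co_name)
    return tracer

def demo():
    x = 1
    y = x + 1
    return y

sys.settrace(tracer)
demo()
sys.settrace(None)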
How do I install pyspark for use in standalone scripts?
I'm am trying to use Spark with Python. I installed the Spark 1.0.2 for Hadoop 2 binary distribution from the downloads page. I can run through the quickstart examples in Python interactive mode, but now I'd like to write a standalone Python script that uses Spark. The quick start documentation says to just import pyspark, but this doesn't work because it's not on my PYTHONPATH. I can run bin/pyspark and see that the module is installed beneath SPARK_DIR/python/pyspark. I can manually add this to my PYTHONPATH environment variable, but I'd like to know the preferred automated method. What is the best way to add pyspark support for standalone scripts? I don't see a setup.py anywhere under the Spark install directory. How would I create a pip package for a Python script that depended on Spark?
Add the PySpark lib to the Python path in your bashrc: export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH Also don't forget to set up SPARK_HOME. PySpark depends on the py4j Python package, so install that as follows: pip install py4j For more details about standalone PySpark applications, refer to this post
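If you prefer to keep the script self-contained instead of editing your bashrc, a sketch that does the same thing at runtime (the py4j zip name varies between Spark releases, so treat it as a placeholder):
import os
import sys

spark_home = os.environ.get('SPARK_HOME', '/path/to/spark')
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'python', 'lib', 'py4j-0.8.2.1-src.zip'))

from pyspark import SparkContext
sc = SparkContext('local', 'standalone-app')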
How to set some xlim and ylim in Seaborn lmplot facetgrid
I'm using Seaborn's lmplot to plot a linear regression, dividing my dataset into two groups with a categorical variable. For both x and y, I'd like to manually set the lower bound on both plots, but leave the upper bound at the Seaborn default. Here's a simple example: import pandas as pd import seaborn as sns import random n = 200 random.seed(2014) base_x = [random.random() for i in range(n)] base_y = [2*i for i in base_x] errors = [random.uniform(0,1) for i in range(n)] y = [i+j for i,j in zip(base_y,errors)] df = pd.DataFrame({'X': base_x, 'Y': y, 'Z': ['A','B']*(n/2)}) mask_for_b = df.Z == 'B' df.loc[mask_for_b,['X','Y']] = df.loc[mask_for_b,] *2 sns.lmplot('X','Y',df,col='Z',sharex=False,sharey=False) This outputs the following: But in this example, I'd like the xlim and the ylim to be (0,*) . I tried using sns.plt.ylim and sns.plt.xlim but those only affect the right-hand plot. Example: sns.plt.ylim(0,) sns.plt.xlim(0,) How can I access the xlim and ylim for each plot in the FacetGrid?
The lmplot function returns a FacetGrid instance. This object has a method called set, to which you can pass key=value pairs and they will be set on each Axes object in the grid. Secondly, you can set only one side of an Axes limit in matplotlib by passing None for the value you want to remain as the default. Putting these together, we have: g = sns.lmplot('X', 'Y', df, col='Z', sharex=False, sharey=False) g.set(ylim=(0, None))
Installing OpenCV for Python on Ubuntu, getting ImportError: No module named cv2.cv
I have an Ubuntu 14.04 system, on which I want to install OpenCV and use it with Python 2.x. I installed OpenCV using the instructions here: https://help.ubuntu.com/community/OpenCV The install seemed to run properly, no errors, the script ended with output OpenCV 2.4.9 ready to be used When I try to run the sample Python script, I get the following: $ python opencv.py Traceback (most recent call last): File "opencv.py", line 1, in <module> from cv2.cv import * ImportError: No module named cv2.cv I suspect I know why, I just don't know how to fix it. OpenCV installed to the current directory I was in when I ran the install script, it's a subdirectory of my home folder. Others who get this import error after install seem to be having a path issue, and have luck adding this to their code: import sys sys.path.append('/usr/local/lib/python2.7/site-packages') or updating their PYTHONPATH with that same directory. I tried adding that code, it doesn't make a difference. I don't see any files in the "site-packages" directory. Should I have done the install in that directory? I imagine the installation instructions would have spelled that out. I suspect that my problem has to do with Python not finding the OpenCV install, but I'm not sure how to proceed. Please help me get a usable install of OpenCV as simply as possible.
I think you don't have the python-opencv package. I had the exact same problem and sudo apt-get install python-opencv solved the issue for me.
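Once the package is installed, a quick way to confirm that Python can see it:
import cv2
print(cv2.__version__)  # should print something like 2.4.9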
Python requests speed up using keep-alive
In the HTTP protocol you can send many requests over one socket using keep-alive and then receive the responses from the server, which significantly speeds up the whole process. Is there any way to do this with the Python requests lib? Or are there any other ways to achieve a comparable speed-up with the requests lib?
Yes, there is. Use requests.Session and it will do keep-alive by default. I guess I should include a quick example: import logging import requests logging.basicConfig(level=logging.DEBUG) s = requests.Session() s.get('http://httpbin.org/cookies/set/sessioncookie/123456789') s.get('http://httpbin.org/cookies/set/anothercookie/123456789') r = s.get("http://httpbin.org/cookies") print(r.text) You will note that these log messages occur: INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies/set/sessioncookie/123456789 HTTP/1.1" 302 223 DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 55 DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies/set/anothercookie/123456789 HTTP/1.1" 302 223 DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 90 DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 90 If you wait a little while and repeat the last get call: INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: httpbin.org DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 90 Note that it resets the dropped connection, i.e. it reestablishes the connection to the server to make the new request.
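As a rough illustration of the speed-up, you can time repeated requests to the same host with and without a session (httpbin.org is just a placeholder target, and the absolute numbers will vary):
import time
import requests

url = 'http://httpbin.org/get'

start = time.time()
for _ in range(10):
    requests.get(url)          # new connection every time
print('without session:', time.time() - start)

s = requests.Session()
start = time.time()
for _ in range(10):
    s.get(url)                 # pooled, kept-alive connection
print('with session:', time.time() - start)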
How to add title to subplots in Matplotlib?
I have one figure which contains many subplots. fig = plt.figure(num=None, figsize=(26, 12), dpi=80, facecolor='w', edgecolor='k') fig.canvas.set_window_title('Window Title') # Returns the Axes instance ax = fig.add_subplot(311) ax2 = fig.add_subplot(312) ax3 = fig.add_subplot(313) How do I add titles to the subplots? fig.suptitle adds a title to all graphs and although ax.set_title() exists, the latter does not add any title to my subplots. Thank you for your help. Edit: Corrected typo about set_title(). Thanks Rutger Kassies
ax.set_title() should set the titles for separate subplots: import matplotlib.pyplot as plt if __name__ == "__main__": data = [1, 2, 3, 4, 5] fig = plt.figure() fig.suptitle("Title for whole figure", fontsize=16) ax = plt.subplot("211") ax.set_title("Title for first plot") ax.plot(data) ax = plt.subplot("212") ax.set_title("Title for second plot") ax.plot(data) plt.show() Can you check if this code works for you? Maybe something overwrites them later?
"Models aren't loaded yet" error while populating in Django 1.8 and Python 2.7.8
I am using this code to populate my database: import os def populate(): python_cat = add_cat('Python') add_page(cat=python_cat, title="Official Python Tutorial", url="http://docs.python.org/2/tutorial/") add_page(cat=python_cat, title="How to Think like a Computer Scientist", url="http://www.greenteapress.com/thinkpython/") add_page(cat=python_cat, title="Learn Python in 10 minutes", url="http://www.korokithakis.net/tutorials/python/") django_cat = add_cat(name="Django") add_page(cat=django_cat, title="Official Django Tutorial", url="http://djangoproject.com/en/1.5/intro/tutorial01/") add_page(cat=django_cat, title="Django Rocks", url="http://www.djangorocks.com/") add_page(cat=django_cat, title="How to Tango with Django", url="htttp://www.tangowithdjango.com/") frame_cat = add_cat(name="Other Frameworks") add_page(cat=frame_cat, title="Bottle", url="http://bottlepy.org/docs/dev/") add_page(cat=frame_cat, title="Flask", url="http://flask.pocoo.org") # Print out what we have added to the user. for c in Category.objects.all(): for p in Page.objects.filter(category=c): print "- {0} - {1}".format(str(c), str(p)) def add_page(cat, title, url, views=0): p = Page.objects.get_or_create(category=cat, title=title, url=url, views=views)[0] return p def add_cat(name): c = Category.objects.get_or_create(name=name) return c if __name__ == '__main__': print "Starting Rango population script..." os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'p.settings') from rango.models import Category, Page populate() On running python c:\python27\p\populate_rango.py It gives the error: Staring Rango population script... Traceback (most recent call last): File "c:\python27\p\populate_rango.py", line 59, in <module> populate() File "c:\python27\p\populate_rango.py", line 4, in populate python_cat = add_cat('Python') File "c:\python27\p\populate_rango.py", line 52, in add_cat c = Category.objects.get_or_create(name=name) File "C:\Python27\Lib\site-packages\django\db\models\manager.py", li manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "C:\Python27\Lib\site-packages\django\db\models\query.py", line et_or_create return self.get(**lookup), False File "C:\Python27\Lib\site-packages\django\db\models\query.py", line clone = self.filter(*args, **kwargs) File "C:\Python27\Lib\site-packages\django\db\models\query.py", line ilter return self._filter_or_exclude(False, *args, **kwargs) File "C:\Python27\Lib\site-packages\django\db\models\query.py", line filter_or_exclude clone.query.add_q(Q(*args, **kwargs)) File "C:\Python27\Lib\site-packages\django\db\models\sql\query.py", in add_q clause, require_inner = self._add_q(where_part, self.used_aliases) File "C:\Python27\Lib\site-packages\django\db\models\sql\query.py", in _add_q current_negated=current_negated, connector=connector) File "C:\Python27\Lib\site-packages\django\db\models\sql\query.py", in build_filter lookups, parts, reffed_aggregate = self.solve_lookup_type(arg) File "C:\Python27\Lib\site-packages\django\db\models\sql\query.py", in solve_lookup_type _, field, _, lookup_parts = self.names_to_path(lookup_splitted, se a()) File "C:\Python27\Lib\site-packages\django\db\models\sql\query.py", in names_to_path field, model, direct, m2m = opts.get_field_by_name(name) File "C:\Python27\Lib\site-packages\django\db\models\options.py", li get_field_by_name cache = self.init_name_map() File "C:\Python27\Lib\site-packages\django\db\models\options.py", li init_name_map for f, model in self.get_all_related_m2m_objects_with_model(): File 
"C:\Python27\Lib\site-packages\django\db\models\options.py", li get_all_related_m2m_objects_with_model cache = self._fill_related_many_to_many_cache() File "C:\Python27\Lib\site-packages\django\db\models\options.py", li _fill_related_many_to_many_cache for klass in self.apps.get_models(): File "C:\Python27\Lib\site-packages\django\utils\lru_cache.py", line rapper result = user_function(*args, **kwds) File "C:\Python27\Lib\site-packages\django\apps\registry.py", line 1 _models *self.check_models_ready() File "C:\Python27\Lib\site-packages\django\apps\registry.py", line 1 ck_models_ready raise AppRegistryNotReady("Models aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet.* Rest of my files are ok but getting this error. I am following the tutorial from Tango with Django book but as the book refers to Django 1.5.4 and i am using Django 1.8, so can anyone help me here?
I had the same exception with Django 1.7rc2. The solution was to add these lines at the beginning of my program: import django django.setup() Update: This is now documented for Django 1.8.
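Applied to the population script in the question, the order matters: point Django at the settings module, call setup(), and only then import the models. A sketch, assuming the same module names as above:
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'p.settings')

import django
django.setup()  # loads the app registry so models can be imported

from rango.models import Category, Page
populate()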
numpy.void type - how to use it?
I loaded a MATLAB .mat file via scipy.io.loadmat and it gave me a list of numpy.void objects. Can someone tell me what they are, how they can be used, and where I can get some reference documentation on them?
According to the numpy documentation: http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html, numpy.void types are defined as flexible data types. Basically, these are data types where there is no pre-defined type associated to the variable(s) you're looking at. If you look at numpy, you have data types such as float, uint8, bool, string, etc. void is to accommodate for more generic and flexible types and are for those data types that don't necessary fall into any one of these pre-defined data types. This situation is mostly encountered when you're loading in a struct where each element has multiple data types associated with multiple fields. Each structure element could have a combination of different data types, and the amalgamation of all of these data types to represent an instance of this structure element thus leads us to numpy.void. With the documentation, you can certainly do the same operations like you would with any other data type. Take a look at the generic data type methods here: http://docs.scipy.org/doc/numpy/reference/generated/numpy.generic.html#numpy.generic . In fact, all numpy data types are derived from this generic class, including numpy.void. In the first link I provided at the beginning of this post, it shows a good example of how to create a custom record type, where a record is a combination of a tuple of numbers and a string. When creating a list of these records, each type in the list is of type numpy.void and it demonstrates that a record is of this data type. However, bear in mind that this record list has a data type that is of this record, but each element of this list will be of type numpy.void. However, as a matter of self-containment, let's re-create the example here: Let's create a custom record type where it has two fields associated for each variable you create: A 16-bit string with a field named name A 2-element tuple of floating point numbers that are 64-bits each, with a field named grades As such, you'd do something like: import numpy as np dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) As such, let's create an example list of two elements and instantiate their fields: x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt) Because we made this list into a numpy.array, we expect its data type to be so: type(x) We get: <type 'numpy.ndarray'> Remember, the list itself is a numpy.array, but not the individual elements. To access the second element of this list, which is the second record, we do: x[1] We get: ('John', [6.0, 7.0]) To check the type of the second record, we do: type(x[1]) We get: <type 'numpy.void'> # As expected Some additional bonuses for you To access the name of the second record, we do: x[1]['name'] We get: 'John' To access the grades of the second record, we do: x[1]['grades'] We get: array([ 6., 7.]) To check the type of the name inside the second record, we do: type(x[1]['name']) We get: <type 'numpy.string_'> To check the type of the grades inside the second record, we do: type(x[1]['grades']) We get: <type 'numpy.ndarray'> Take note that each element in this list is of type numpy.void. However, the individual fields for each element in our list is either a tuple of numbers, or a string. The collection of these elements together is of type numpy.void.
How do I run two separate instances of Spyder
I want to be able to have two instances which are completely independent in the sense that I can be working on two separate unrelated projects in different folders without any interference.
(Spyder dev here) This is very easy. You need to go to: Tools > Preferences > General and deactivate the option called [ ] Use a single instance Then, every time you start Spyder, a new window will be opened. If you want the old behavior back, just activate that option again.
Pandas - Get first row value of a given column
This seems like a ridiculously easy question... but I'm not seeing the easy answer I was expecting. So, how do I get the value at the nth row of a given column in Pandas? (I am particularly interested in the first row, but would be interested in a more general practice as well). For example, let's say I want to pull the 1.2 value in Btime as a variable. What's the right way to do this? df_test = ATime X Y Z Btime C D E 0 1.2 2 15 2 1.2 12 25 12 1 1.4 3 12 1 1.3 13 22 11 2 1.5 1 10 6 1.4 11 20 16 3 1.6 2 9 10 1.7 12 29 12 4 1.9 1 1 9 1.9 11 21 19 5 2.0 0 0 0 2.0 8 10 11 6 2.4 0 0 0 2.4 10 12 15
To select the ith row, use iloc: In [31]: df_test.iloc[0] Out[31]: ATime 1.2 X 2.0 Y 15.0 Z 2.0 Btime 1.2 C 12.0 D 25.0 E 12.0 Name: 0, dtype: float64 To select the ith value in the Btime column you could use: In [30]: df_test['Btime'].iloc[0] Out[30]: 1.2 Warning: I had previously suggested df_test.ix[i, 'Btime']. But this is not guaranteed to give you the ith value since ix tries to index by label before trying to index by position. So if the DataFrame has an integer index which is not in sorted order starting at 0, then using ix[i] will return the row labeled i rather than the ith row. For example, In [1]: df = pd.DataFrame({'foo':list('ABC')}, index=[0,2,1]) In [2]: df Out[2]: foo 0 A 2 B 1 C In [4]: df.ix[1, 'foo'] Out[4]: 'C'
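A small, purely positional variant of the above (using the df_test frame from the question) avoids the label/position ambiguity entirely by looking up the column's integer position first:

# df_test as defined in the question
col = df_test.columns.get_loc('Btime')   # integer position of the 'Btime' column
value = df_test.iloc[0, col]             # 1.2 -- positional lookup only, no label involved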
Why is numpy.power 60x slower than in-lining?
Maybe I'm doing something odd, but I seem to have found a surprising performance loss when using numpy; it appears consistent regardless of the power used. For instance, when x is a random 100x100 array, x = numpy.power(x,3) is about 60x slower than x = x*x*x. A plot of the speed-up for various array sizes reveals a sweet spot with arrays around size 10k and a consistent 5-10x speed-up for other sizes. Code to test below on your own machine (a little messy): import numpy as np from matplotlib import pyplot as plt from time import time ratios = [] sizes = [] for n in np.logspace(1,3,20).astype(int): a = np.random.randn(n,n) inline_times = [] for i in range(100): t = time() b = a*a*a inline_times.append(time()-t) inline_time = np.mean(inline_times) pow_times = [] for i in range(100): t = time() b = np.power(a,3) pow_times.append(time()-t) pow_time = np.mean(pow_times) sizes.append(a.size) ratios.append(pow_time/inline_time) plt.plot(sizes,ratios) plt.title('Performance of inline vs numpy.power') plt.ylabel('Nx speed-up using inline') plt.xlabel('Array size') plt.xscale('log') plt.show() Anyone have an explanation?
It's well known that multiplication of doubles, which your processor can do in a very fancy way, is very, very fast. pow is decidedly slower. Some performance guides out there even advise people to plan for this, perhaps even in some way that might be a bit overzealous at times. numpy special-cases squaring to make sure it's not too, too slow, but it sends cubing right off to your libc's pow, which isn't nearly as fast as a couple multiplications.
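As a rough illustration of that point (observable behaviour only, not a claim about numpy's internals):

import numpy as np

x = np.random.randn(100, 100)

y2 = np.power(x, 2)   # squaring is special-cased, so this is fast
y2b = np.square(x)    # equivalent, also fast

y3 = x * x * x        # fast: two elementwise multiplications
y3b = np.power(x, 3)  # slow: effectively one pow() call per element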
Can Python be configured to cache sys.path directory lookups?
We've been doing a lot of benchmarking of Python running over a remote connection. The program is running offsite but accessing disks on-site. We are running under RHEL6. We watched a simple program with strace. It appears it's spending a lot of time performing stat and open on files to see if they are there. Over a remote connection that is costly. Is there a way to configure Python to read a directory's contents once and cache its listing so it doesn't have to check it again? Sample Program test_import.py: import random import itertools I ran the following commands: $ strace -Tf python test_import.py >& strace.out $ grep '/usr/lib64/python2.6/' strace.out | wc 331 3160 35350 So it's looking in that directory roughly 331 times. A lot of them with results like: stat ( "/usr/lib64/python2.6/posixpath", 0x7fff1b447340 ) = -1 ENOENT ( No such file or directory ) < 0.000009 > If it cached the directory it wouldn't have to stat the file to see if it's there.
You can avoid this by either moving to Python 3.3, or replacing the standard import system with an alternative. In the strace talk that I gave two weeks ago at PyOhio, I discuss the unfortunate O(nm) performance (for n directories and m possible suffixes) of the old import mechanism; start at this slide. I demonstrate how easy_install plus a Zope-powered web framework generates 73,477 system calls simply to do enough imports to get up and running. After a quick install of bottle in a virtualenv on my laptop, for example, I find that exactly 1,000 calls are necessary for Python to import that module and be up and running: $ strace -c -e stat64,open python -c 'import bottle' % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 100.00 0.000179 0 1519 1355 open 0.00 0.000000 0 475 363 stat64 ------ ----------- ----------- --------- --------- ---------------- 100.00 0.000179 1994 1718 total If I hop into os.py, however, I can add a caching importer and even with a very naive implementation can cut the number of misses down by nearly a thousand: $ strace -c -e stat64,open python -c 'import bottle' % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 100.00 0.000041 0 699 581 open 0.00 0.000000 0 301 189 stat64 ------ ----------- ----------- --------- --------- ---------------- 100.00 0.000041 1000 770 total I chose os.py for the experiment because strace shows it to be the very first module that Python imports, and the sooner we can get our importer installed, the fewer Standard Library modules Python will have to import under its old terrible slow regime! # Put this right below "del _names" in os.py class CachingImporter(object): def __init__(self): self.directory_listings = {} def find_module(self, fullname, other_path=None): filename = fullname + '.py' for syspath in sys.path: listing = self.directory_listings.get(syspath, None) if listing is None: try: listing = listdir(syspath) except OSError: listing = [] self.directory_listings[syspath] = listing if filename in listing: modpath = path.join(syspath, filename) return CachingLoader(modpath) class CachingLoader(object): def __init__(self, modpath): self.modpath = modpath def load_module(self, fullname): if fullname in sys.modules: return sys.modules[fullname] import imp mod = imp.new_module(fullname) mod.__loader__ = self sys.modules[fullname] = mod mod.__file__ = self.modpath with file(self.modpath) as f: code = f.read() exec code in mod.__dict__ return mod sys.meta_path.append(CachingImporter()) This has rough edges, of course — it does not try to detect .pyc files or .so files or any of the other extensions that Python might go looking for. Nor does it know about __init__.py files or about modules inside of packages (which would require running lsdir() in sub-directories of the sys.path entries). But it at least illustrates that thousands of extra calls can be eliminated through something like this, and demonstrates a direction you might try out. When it cannot find a module, the normal import mechanism simply kicks in instead. I wonder if there is a good caching importer already available on PyPI or somewhere? It seems like the sort of thing that would have been written a hundreds times already in various shops. I thought that Noah Gift had written one and put it in a blog post or something, but I cannot find a link that confirms that memory of mine. 
Edit: as @ncoglan mentions in the comments, there is an alpha-release backport of the new Python 3.3+ import system to Python 2.7 available on PyPI: http://pypi.python.org/pypi/importlib2 — unfortunately it looks like the questioner is still stuck on 2.6.
Convert variable-sized byte array to an integer/long
How can I convert a (big endian) variable-sized binary byte array to an (unsigned) integer/long? As an example, '\x11\x34', which represents 4404 Right now, I'm using def bytes_to_int(bytes): return int(bytes.encode('hex'), 16) Which is small and somewhat readable, but probably not very efficient. Is there a better (more obvious) way?
Python doesn't traditionally have much use for "numbers in big-endian C layout" that are too big for C. (If you're dealing with 2-byte, 4-byte, or 8-byte numbers, then struct.unpack is the answer.) But enough people got sick of there not being one obvious way to do this that Python 3.2 added a method int.from_bytes that does exactly what you want: int.from_bytes(b, byteorder='big', signed=False) Unfortunately, if you're using an older version of Python, you don't have this. So, what options do you have? (Besides the obvious one: update to 3.2, or, better, 3.4…) First, there's your code. I think binascii.hexlify is a better way to spell it than .encode('hex'), because "encode" has always seemed a little weird for a method on byte strings (as opposed to Unicode strings), and it's in fact been banished in Python 3. But otherwise, it seems pretty readable and obvious to me. And it should be pretty fast—yes, it has to create an intermediate string, but it's doing all the looping and arithmetic in C (at least in CPython), which is generally an order of magnitude or two faster than in Python. Unless your bytearray is so big that allocating the string will itself be costly, I wouldn't worry about performance here. Alternatively, you could do it in a loop. But that's going to be more verbose and, at least in CPython, a lot slower. You could try to eliminate the explicit loop for an implicit one, but the obvious function to do that is reduce, which is considered un-Pythonic by part of the community—and of course it's going to require calling a function for each byte. You could unroll the loop or reduce by breaking it into chunks of 8 bytes and looping over struct.unpack_from, or by just doing a big struct.unpack('Q'*len(b)//8 + 'B' * len(b)%8) and looping over that, but that makes it a lot less readable and probably not that much faster. You could use NumPy… but if you're going bigger than either 64 or maybe 128 bits, it's going to end up converting everything to Python objects anyway. So, I think your answer is the best option. Here are some timings comparing it to the most obvious manual conversion: import binascii import functools import numpy as np def hexint(b): return int(binascii.hexlify(b), 16) def loop1(b): def f(x, y): return (x<<8)|y return functools.reduce(f, b, 0) def loop2(b): x = 0 for c in b: x <<= 8 x |= c return x def numpily(b): n = np.array(list(b)) p = 1 << np.arange(len(b)-1, -1, -1, dtype=object) return np.sum(n * p) In [226]: b = bytearray(range(256)) In [227]: %timeit hexint(b) 1000000 loops, best of 3: 1.8 µs per loop In [228]: %timeit loop1(b) 10000 loops, best of 3: 57.7 µs per loop In [229]: %timeit loop2(b) 10000 loops, best of 3: 46.4 µs per loop In [283]: %timeit numpily(b) 10000 loops, best of 3: 88.5 µs per loop For comparison in Python 3.4: In [17]: %timeit hexint(b) 1000000 loops, best of 3: 1.69 µs per loop In [17]: %timeit int.from_bytes(b, byteorder='big', signed=False) 1000000 loops, best of 3: 1.42 µs per loop So, your method is still pretty fast…
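As a side note, for the fixed-size cases mentioned above, struct.unpack does the job directly; for instance, with the 2-byte example from the question:

import struct

struct.unpack('>H', '\x11\x34')[0]          # 4404, 2-byte big-endian unsigned
struct.unpack('>I', '\x00\x00\x11\x34')[0]  # 4404 again, padded out to 4 bytes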
Can't load Python modules installed via pip from site-packages directory
I am trying to install and use the Evernote module (https://github.com/evernote/evernote-sdk-python) . I ran pip install evernote and it says that the installation worked. I can confirm that the evernote module exists in /usr/local/lib/python2.7/site-packages. However, when I try to run python -c "import evernote" I get the following error: Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named evernote This is the contents of my .bash-profile: [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function* # Setting PATH for Python 3.3 # The orginal version is saved in .bash_profile.pysave PATH="/Library/Frameworks/Python.framework/Versions/3.3/bin:${PATH}" export PATH export PATH=$PATH:/usr/local/bin/ I am having this same problem with other modules installed with pip. Help? EDIT: I am a super newbie and have not edited that .bash-profile file. EDIT: python -c 'import sys; print "\n".join(sys.path)' Outputs the following: /Library/Python/2.7/site-packages/setuptools-1.3.2-py2.7.egg /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7 /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC /Library/Python/2.7/site-packages EDIT: I seemed to have made progress towards a solution by adding export PYTHONPATH=“/usr/local/lib/python2.7/site-packages” to my .bash_profile file. However, now when I run python -c 'from evernote.api.client import EvernoteClient' it tries to import oauth2, which fails with the same error. The ouath2 module is present in the module directory.
/usr/bin/python is the executable for the python that comes with OS X. /usr/local/lib is a location for user-installed programs only, possibly from Python.org or Homebrew. So you're mixing different Python installs, and changing the python path is only a partial workaround for different packages being installed for different installations. In order to make sure you use the pip associated with a particular python, you can run python -m pip install <pkg>, or go look at what the pip on your path is, or is symlinked to.
Parsing non-zero padded timestamps in Python
I want to get datetimes from timestamps like the following: 3/1/2014 9:55 with datetime.strptime, or something equivalent. The month, day of month, and hour are not zero-padded, but there doesn't seem to be a formatting directive listed here that is able to parse this automatically. What's the best approach to do so? Thanks!
strptime is able to parse non-padded values. The fact that they are noted as being padded in the formatting codes table applies to strftime's output. So you can just use datetime.strptime(datestr, "%m/%d/%Y %H:%M")
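For example, with the timestamp from the question:

from datetime import datetime

datetime.strptime("3/1/2014 9:55", "%m/%d/%Y %H:%M")
# datetime.datetime(2014, 3, 1, 9, 55)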
How to plot a 3D density map in python with matplotlib
I have a large dataset of (x,y,z) protein positions and would like to plot areas of high occupancy as a heatmap. Ideally the output should look similar to the volumetric visualisation below, but I'm not sure how to achieve this with matplotlib. My initial idea was to display my positions as a 3D scatter plot and color their density via a KDE. I coded this up as follows with test data: import numpy as np from scipy import stats import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D mu, sigma = 0, 0.1 x = np.random.normal(mu, sigma, 1000) y = np.random.normal(mu, sigma, 1000) z = np.random.normal(mu, sigma, 1000) xyz = np.vstack([x,y,z]) density = stats.gaussian_kde(xyz)(xyz) idx = density.argsort() x, y, z, density = x[idx], y[idx], z[idx], density[idx] fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(x, y, z, c=density) plt.show() This works well! However, my real data contains many thousands of data points and calculating the kde and the scatter plot becomes extremely slow. A small sample of my real data: My research would suggest that a better option is to evaluate the gaussian kde on a grid. I'm just not sure how to do this in 3D: import numpy as np from scipy import stats import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D mu, sigma = 0, 0.1 x = np.random.normal(mu, sigma, 1000) y = np.random.normal(mu, sigma, 1000) nbins = 50 xy = np.vstack([x,y]) density = stats.gaussian_kde(xy) xi, yi = np.mgrid[x.min():x.max():nbins*1j, y.min():y.max():nbins*1j] di = density(np.vstack([xi.flatten(), yi.flatten()])) fig = plt.figure() ax = fig.add_subplot(111) ax.pcolormesh(xi, yi, di.reshape(xi.shape)) plt.show()
Thanks to mwaskon - for suggesting the mayavi library. I recreated the density scatter plot in mayavi as follows: import numpy as np from scipy import stats from mayavi import mlab mu, sigma = 0, 0.1 x = 10*np.random.normal(mu, sigma, 5000) y = 10*np.random.normal(mu, sigma, 5000) z = 10*np.random.normal(mu, sigma, 5000) xyz = np.vstack([x,y,z]) kde = stats.gaussian_kde(xyz) density = kde(xyz) # Plot scatter with mayavi figure = mlab.figure('DensityPlot') pts = mlab.points3d(x, y, z, density, scale_mode='none', scale_factor=0.07) mlab.axes() mlab.show() Setting the scale_mode to 'none' prevents glyphs from being scaled in proportion to the density vector. In addition for large datasets, I disabled scene rendering and used a mask to reduce the number of points. # Plot scatter with mayavi figure = mlab.figure('DensityPlot') figure.scene.disable_render = True pts = mlab.points3d(x, y, z, density, scale_mode='none', scale_factor=0.07) mask = pts.glyph.mask_points mask.maximum_number_of_points = x.size mask.on_ratio = 1 pts.glyph.mask_input_points = True figure.scene.disable_render = False mlab.axes() mlab.show() Next, to evaluate the gaussian kde on a grid: import numpy as np from scipy import stats from mayavi import mlab mu, sigma = 0, 0.1 x = 10*np.random.normal(mu, sigma, 5000) y = 10*np.random.normal(mu, sigma, 5000) z = 10*np.random.normal(mu, sigma, 5000) xyz = np.vstack([x,y,z]) kde = stats.gaussian_kde(xyz) # Evaluate kde on a grid xmin, ymin, zmin = x.min(), y.min(), z.min() xmax, ymax, zmax = x.max(), y.max(), z.max() xi, yi, zi = np.mgrid[xmin:xmax:30j, ymin:ymax:30j, zmin:zmax:30j] coords = np.vstack([item.ravel() for item in [xi, yi, zi]]) density = kde(coords).reshape(xi.shape) # Plot scatter with mayavi figure = mlab.figure('DensityPlot') grid = mlab.pipeline.scalar_field(xi, yi, zi, density) min = density.min() max=density.max() mlab.pipeline.volume(grid, vmin=min, vmax=min + .5*(max-min)) mlab.axes() mlab.show() As a final improvement I sped up the evaluation of kensity density function by calling the kde function in parallel. import numpy as np from scipy import stats from mayavi import mlab import multiprocessing def calc_kde(data): return kde(data.T) mu, sigma = 0, 0.1 x = 10*np.random.normal(mu, sigma, 5000) y = 10*np.random.normal(mu, sigma, 5000) z = 10*np.random.normal(mu, sigma, 5000) xyz = np.vstack([x,y,z]) kde = stats.gaussian_kde(xyz) # Evaluate kde on a grid xmin, ymin, zmin = x.min(), y.min(), z.min() xmax, ymax, zmax = x.max(), y.max(), z.max() xi, yi, zi = np.mgrid[xmin:xmax:30j, ymin:ymax:30j, zmin:zmax:30j] coords = np.vstack([item.ravel() for item in [xi, yi, zi]]) # Multiprocessing cores = multiprocessing.cpu_count() pool = multiprocessing.Pool(processes=cores) results = pool.map(calc_kde, np.array_split(coords.T, 2)) density = np.concatenate(results).reshape(xi.shape) # Plot scatter with mayavi figure = mlab.figure('DensityPlot') grid = mlab.pipeline.scalar_field(xi, yi, zi, density) min = density.min() max=density.max() mlab.pipeline.volume(grid, vmin=min, vmax=min + .5*(max-min)) mlab.axes() mlab.show()
View RDD contents in Python Spark?
Running a simple app in pyspark. f = sc.textFile("README.md") wc = f.flatMap(lambda x: x.split(' ')).map(lambda x: (x, 1)).reduceByKey(add) I want to view RDD contents using foreach action: wc.foreach(print) This throws a syntax error: SyntaxError: invalid syntax What am I missing?
This error is because print isn't a function in Python 2.6. You can either define a helper UDF that performs the print, or use the __future__ library to treat print as a function: >>> from operator import add >>> f = sc.textFile("README.md") >>> def g(x): ... print x ... >>> wc.foreach(g) or >>> from __future__ import print_function >>> wc.foreach(print) However, I think it would be better to use collect() to bring the RDD contents back to the driver, because foreach executes on the worker nodes and the outputs may not necessarily appear in your driver / shell (it probably will in local mode, but not when running on a cluster). >>> for x in wc.collect(): ... print x
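If the RDD is large, collect() pulls everything back to the driver; a common compromise is to print just a sample of the contents with take():

>>> for x in wc.take(10):   # first 10 elements only
...     print x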
How to avoid reinstalling packages when building Docker image for Python projects?
My Dockerfile is something like FROM my/base ADD . /srv RUN pip install -r requirements.txt RUN python setup.py install ENTRYPOINT ["run_server"] Every time I build a new image, dependencies have to be reinstalled, which could be very slow in my region. One way I think of to cache packages that have been installed is to override the my/base image with newer images like this: docker build -t new_image_1 . docker tag new_image_1 my/base So next time I build with this Dockerfile, my/base already has some packages installed. But this solution has two problems: It is not always possible to override a base image The base image grow bigger and bigger as newer images are layered on it So what better solution could I use to solve this problem? EDIT##: Some information about the docker on my machine: ☁ test docker version Client version: 1.1.2 Client API version: 1.13 Go version (client): go1.2.1 Git commit (client): d84a070 Server version: 1.1.2 Server API version: 1.13 Go version (server): go1.2.1 Git commit (server): d84a070 ☁ test docker info Containers: 0 Images: 56 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Dirs: 56 Execution Driver: native-0.2 Kernel Version: 3.13.0-29-generic WARNING: No swap limit support
Try to build with below Dockerfile. FROM my/base WORKDIR /srv ADD ./requirements.txt /srv/requirements.txt RUN pip install -r requirements.txt ADD . /srv RUN python setup.py install ENTRYPOINT ["run_server"] If there are some changes on .(your project), docker skip pip install line by using cache. Docker only run pip install on build when you edit requirements.txt file. I write simple Hello, World! program. $ tree . ├── Dockerfile ├── requirements.txt └── run.py 0 directories, 3 file # Dockerfile FROM dockerfile/python WORKDIR /srv ADD ./requirements.txt /srv/requirements.txt RUN pip install -r requirements.txt ADD . /srv CMD python /srv/run.py # requirements.txt pytest==2.3.4 # run.py print("Hello, World") Below is output. Step 1 : WORKDIR /srv ---> Running in 22d725d22e10 ---> 55768a00fd94 Removing intermediate container 22d725d22e10 Step 2 : ADD ./requirements.txt /srv/requirements.txt ---> 968a7c3a4483 Removing intermediate container 5f4e01f290fd Step 3 : RUN pip install -r requirements.txt ---> Running in 08188205e92b Downloading/unpacking pytest==2.3.4 (from -r requirements.txt (line 1)) Running setup.py (path:/tmp/pip_build_root/pytest/setup.py) egg_info for package pytest .... Cleaning up... ---> bf5c154b87c9 Removing intermediate container 08188205e92b Step 4 : ADD . /srv ---> 3002a3a67e72 Removing intermediate container 83defd1851d0 Step 5 : CMD python /srv/run.py ---> Running in 11e69b887341 ---> 5c0e7e3726d6 Removing intermediate container 11e69b887341 Successfully built 5c0e7e3726d6 I update only run.py and try to build again. # run.py print("Hello, Python") Below is output. Sending build context to Docker daemon 5.12 kB Sending build context to Docker daemon Step 0 : FROM dockerfile/python ---> f86d6993fc7b Step 1 : WORKDIR /srv ---> Using cache ---> 55768a00fd94 Step 2 : ADD ./requirements.txt /srv/requirements.txt ---> Using cache ---> 968a7c3a4483 Step 3 : RUN pip install -r requirements.txt ---> Using cache ---> bf5c154b87c9 Step 4 : ADD . /srv ---> 9cc7508034d6 Removing intermediate container 0d7cf71eb05e Step 5 : CMD python /srv/run.py ---> Running in f25c21135010 ---> 4ffab7bc66c7 Removing intermediate container f25c21135010 Successfully built 4ffab7bc66c7 As you can see above, docker use build cache. And I update requirements.txt this time. # requirements.txt pytest==2.3.4 ipython Below is output. Sending build context to Docker daemon 5.12 kB Sending build context to Docker daemon Step 0 : FROM dockerfile/python ---> f86d6993fc7b Step 1 : WORKDIR /srv ---> Using cache ---> 55768a00fd94 Step 2 : ADD ./requirements.txt /srv/requirements.txt ---> b6c19f0643b5 Removing intermediate container a4d9cb37dff0 Step 3 : RUN pip install -r requirements.txt ---> Running in 4b7a85a64c33 Downloading/unpacking pytest==2.3.4 (from -r requirements.txt (line 1)) Running setup.py (path:/tmp/pip_build_root/pytest/setup.py) egg_info for package pytest Downloading/unpacking ipython (from -r requirements.txt (line 2)) Downloading/unpacking py>=1.4.12 (from pytest==2.3.4->-r requirements.txt (line 1)) Running setup.py (path:/tmp/pip_build_root/py/setup.py) egg_info for package py Installing collected packages: pytest, ipython, py Running setup.py install for pytest Installing py.test script to /usr/local/bin Installing py.test-2.7 script to /usr/local/bin Running setup.py install for py Successfully installed pytest ipython py Cleaning up... ---> 23a1af3df8ed Removing intermediate container 4b7a85a64c33 Step 4 : ADD . 
/srv ---> d8ae270eca35 Removing intermediate container 7f003ebc3179 Step 5 : CMD python /srv/run.py ---> Running in 510359cf9e12 ---> e42fc9121a77 Removing intermediate container 510359cf9e12 Successfully built e42fc9121a77 And docker doesn't use build cache. If it doesn't work, check your docker version. Client version: 1.1.2 Client API version: 1.13 Go version (client): go1.2.1 Git commit (client): d84a070 Server version: 1.1.2 Server API version: 1.13 Go version (server): go1.2.1 Git commit (server): d84a070
Cell-var-from-loop warning from Pylint
For the following code: for sort_key, order in query_data['sort']: results.sort(key=lambda k: get_from_dot_path(k, sort_key), reverse=(order == -1)) Pylint reported an error: Cell variable sort_key defined in loop (cell-var-from-loop) Could anyone give a hint what is happening here? From pylint source code the description is: A variable used in a closure is defined in a loop. This will result in all closures using the same value for the closed-over variable. But I do not have a clue what it means. Could anyone give an example of the problem?
The name sort_key in the body of the lambda will be looked up when the function is actually called, so it will see whatever value sort_key had most recently. Since you are calling sort immediately, the value of sort_key will not change before the resulting function object is used, so you can safely ignore the warning. To silence it, you can make sort_key the default value of a parameter to the lambda: results.sort(key=lambda k, sk=sort_key: get_from_dot_path(k, sk), reverse=(order == -1))
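To see why Pylint warns about this in general, here is a minimal example where the late binding actually bites, and where the default-argument trick fixes it:

funcs = [lambda: i for i in range(3)]
print([f() for f in funcs])          # [2, 2, 2] -- every closure sees the final i

funcs = [lambda i=i: i for i in range(3)]
print([f() for f in funcs])          # [0, 1, 2] -- i captured at definition time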
Difference between .string and .text BeautifulSoup
I noticed something odd when working with BeautifulSoup and couldn't find any documentation to support this, so I wanted to ask over here. Say we have tags like these that we have parsed with BS: <td>Some Table Data</td> <td></td> The official documented way to extract the data is soup.string. However this extracted a NoneType for the second <td> tag. So I tried soup.text (because why not?) and it extracted an empty string exactly as I wanted. However I couldn't find any reference to this in the documentation and am worried that something is amiss. Can anyone let me know if this is acceptable to use or will it cause problems later? BTW I am scraping table data from a web page and mean to create CSVs from the data so I do actually need empty strings rather than NoneTypes.
.string on a Tag type object returns a NavigableString type object. On the other hand, .text gets all the child strings and returns them concatenated using the given separator. The return type of .text is a unicode object. From the documentation, A NavigableString is just like a Python Unicode string, except that it also supports some of the features described in Navigating the tree and Searching the tree. From the documentation on .string, we can see that, If the html is like this, <td>Some Table Data</td> <td></td> Then, .string on the second td will return None. But .text will return an empty string, which is a unicode type object. For more convenience, string Convenience property of a tag to get the single string within this tag. If the tag has a single string child then the return value is that string. If the tag has no children or more than one child, the return value is None. If this tag has one child tag, the return value is the 'string' attribute of the child tag, recursively. And text Get all the child strings and return concatenated using the given separator. If the html is like this: <td>some text</td> <td></td> <td><p>more text</p></td> <td>even <p>more text</p></td> .string on the four td will return: some text None more text None .text will give a result like this: some text more text even more text
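A short, self-contained check of the behaviour described above (assuming the bs4 package is installed):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<td>Some Table Data</td><td></td>", "html.parser")
cells = soup.find_all("td")
print(repr(cells[0].string))   # 'Some Table Data'
print(repr(cells[1].string))   # None
print(repr(cells[1].text))     # '' -- the empty string you want for the CSV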
python 2.7: cannot pip on windows "bash: pip: command not found"
I am trying to install the SciPy stack located at scipy(dot)org/stackspec(dot)html [I am only allowed 2 links; trying to use them wisely]. I realize that there are much easier ways to do this, but I think there is a lot to be learned by doing it manually. I am relatively new to a lot of this stuff, so I apologize if I sound ignorant at any point. I am running Windows 7 Enterprise - 64 bit. Here is what I have done so far: Installed "python-2.7.8.msi" (32-bit) from https(colon)//www(DOT)python(DOT)org/download/releases/2.7.8/ Installed "numpy-1.8.1-win32-superpack-python2.7" from http(colon)//sourceforge(dot)net/projects/numpy/files/ Test: import numpy as np ---> no errors Installed scipy library, "scipy-0.14.0-win32-superpack-python2.7.exe" from (SCIPY DOT ORG LINK REMOVED) Test: import scipy as sp ---> no errors Installed matplotlib: "matplotlib-1.3.1.win32-py2.7.exe" from (MATPLOTLIB DOT ORG LINK REMOVED) Installed PIP by running script here: https://raw.githubusercontent.com/pypa/pip/master/contrib/get-pip.py I just copied-pasted script to a new file in IDLE, saved as C:\Python27\Scripts\pip_install.py and clicked Run>module. No errors reported. Does the path on which I saved "pip_install.py" matter? 6. HERE IS WHERE I FAIL Attempted to install matlibplot dependency dateutil: Opened a Cygwin Shell, and typed cd C:\Python27 ! is it necessary to cd to python directtory? pip install python-dateutil This results in the error: bash: pip: command not found I get the same error attempting from CMD. Any help is appreciated; the closest I found was stackoverflow_com/questions/9780717/bash-pip-command-not-found. But the OSX nature of it is just enough to confise me further. **********************UPDATE***************** I added the pip-path per Paul H's suggestion below. It made the error go away, but strangely, nothing I pip actually installs. For example, in Cygwin, I type: cbennett2> pip install python-dateutil cbennett2> You can see that there is no output or feedback from the shell (which I think there should be). Then when I go to a new python shell: >>> from dateutil.parser import parse Traceback (most recent call last): File "<pyshell#12>", line 1, in <module> from dateutil.parser import parse ImportError: No module named dateutil.parser >>>> This happens with all of the modules that I thought I had pip'd ... pandas, tornado, etc.
On Windows, pip lives in C:\[pythondir]\scripts. So you'll need to add that to your system path in order to run it from the command prompt. You could alternatively cd into that directory each time, but that's a hassle. See the top answer here for info on how to do that: Adding Python Path on Windows 7 Also, that is a terrifying way to install pip. Grab it from Christophe Gohlke. Grab everything else from there for that matter. http://www.lfd.uci.edu/~gohlke/pythonlibs/
Performance of subprocess.check_output vs subprocess.call
I've been using subprocess.check_output() for some time to capture output from subprocesses, but ran into some performance problems under certain circumstances. I'm running this on a RHEL6 machine. The calling Python environment is linux-compiled and 64-bit. The subprocess I'm executing is a shell script which eventually fires off a Windows python.exe process via Wine (why this foolishness is required is another story). As input to the shell script, I'm piping in a small bit of Python code that gets passed off to python.exe. While the system is under moderate/heavy load (40 to 70% CPU utilization), I've noticed that using subprocess.check_output(cmd, shell=True) can result in a significant delay (up to ~45 seconds) after the subprocess has finished execution before the check_output command returns. Looking at output from ps -efH during this time shows the called subprocess as sh <defunct>, until it finally returns with a normal zero exit status. Conversely, using subprocess.call(cmd, shell=True) to run the same command under the same moderate/heavy load will cause the subprocess to return immediately with no delay, all output printed to STDOUT/STDERR (rather than returned from the function call). Why is there such a significant delay only when check_output() is redirecting the STDOUT/STDERR output into its return value, and not when the call() simply prints it back to the parent's STDOUT/STDERR?
Reading the docs, both subprocess.call and subprocess.check_output are use-cases of subprocess.Popen. One minor difference is that check_output will raise a Python error if the subprocess returns a non-zero exit status. The greater difference is emphasized in the bit about check_output (my emphasis): The full function signature is largely the same as that of the Popen constructor, except that stdout is not permitted as it is used internally. All other supplied arguments are passed directly through to the Popen constructor. So how is stdout "used internally"? Let's compare call and check_output: call def call(*popenargs, **kwargs): return Popen(*popenargs, **kwargs).wait() check_output def check_output(*popenargs, **kwargs): if 'stdout' in kwargs: raise ValueError('stdout argument not allowed, it will be overridden.') process = Popen(stdout=PIPE, *popenargs, **kwargs) output, unused_err = process.communicate() retcode = process.poll() if retcode: cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] raise CalledProcessError(retcode, cmd, output=output) return output communicate Now we have to look at Popen.communicate as well. Doing this, we notice that for one pipe, communicate does several things which simply take more time than simply returning Popen().wait(), as call does. For one thing, communicate processes stdout=PIPE whether you set shell=True or not. Clearly, call does not. It just lets your shell spout whatever... making it a security risk, as Python describes here. Secondly, in the case of check_output(cmd, shell=True) (just one pipe)... whatever your subprocess sends to stdout is processed by a thread in the _communicate method. And Popen must join the thread (wait on it) before additionally waiting on the subprocess itself to terminate! Plus, more trivially, it processes stdout as a list which must then be joined into a string. In short, even with minimal arguments, check_output spends a lot more time in Python processes than call does.
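If the goal is just to capture output without paying for communicate()'s pipe-and-thread machinery, one possible workaround (a sketch, not a drop-in for every situation) is to let the child write straight to a temporary file and read it back afterwards:

import subprocess
import tempfile

with tempfile.TemporaryFile() as f:
    # cmd is the same shell command used with check_output above
    retcode = subprocess.call(cmd, shell=True, stdout=f, stderr=subprocess.STDOUT)
    f.seek(0)
    output = f.read()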
Why can't I iterate twice over the same data?
Honestly I am a little confused here, why can't I iterate twice over the same data? def _view(self,dbName): db = self.dictDatabases[dbName] data = db[3] for row in data: print("doing this one time") for row in data: print("doing this two times") This will print out "doing this one time" a few times (as data has a few rows), however it will NOT print out "doing this two times" at all ... The first time I iterate over data works fine, but the second time when I run the last list "for row in data" this returns nothing ... so executing it one time works but not twice ... ? FYI - data is a csv.reader object (in case that is the reason)...
It's because data is an iterator, and you can consume an iterator only once. For example: lst = [1, 2, 3] it = iter(lst) next(it) => 1 next(it) => 2 next(it) => 3 next(it) => StopIteration If we are traversing some data using a for loop, that last StopIteration will cause it to exit the first time. If we try to iterate over it again, we'll keep getting the StopIteration exception, because the iterator has already been consumed. Now for the second question: What if we do need to traverse the iterator more than once? A simple solution would be to create a list with the elements, and we can traverse it as many times as needed. This is all right as long as there are few elements in the list: data = list(db[3]) But if there are many elements, it's a better idea to create independent iterators using tee() (note that tee() takes the number of iterators as a positional argument): import itertools it1, it2 = itertools.tee(db[3], 2) # create as many as needed Now we can loop over each one in turn: for e in it1: print("doing this one time") for e in it2: print("doing this two times")
setuptools vs. distutils: Why is distutils still a thing?
Python has a confusing history of tools that can be used to package and describe projects: These include distutils in the Standard Library, distribute, distutils2, and setuptools and maybe more. It appears that distribute and distutils2 were discontinued in favor of setuptools, which leaves two competing standards. To my understanding setuptools offers far more options (e.g. declaring dependencies, tests, etc.) than distutils, however it is not included in the python standard library (yet?). The Python Packaging User Guide[1] recommends now: Use setuptools to define projects and create Source Distributions. And explains: Although you can use pure distutils for many projects, it does not support defining dependencies on other projects and is missing several convenience utilities for automatically populating package metadata correctly that are provided by setuptools. Being outside the standard library, setuptools also offers a more consistent feature set across different versions of Python, and (unlike > distutils), setuptools will be updated to produce the upcoming “Metadata 2.0” standard formats on all supported versions. Even for projects that do choose to use distutils, when pip installs such projects directly from source (rather than installing from a prebuilt wheel file), it will actually build your project using setuptools instead. However, looking into various project's setup.py files reveals that this does not seem to be an actual standard. Many packages still use distutils and those that support setuptools often mix setuptools with distutils e.g. by doing a fallback import: try: from setuptools import setup except ImportError: from distutils.core import setup Followed by an attempt to find a way to write a setup that can be installed by both setuptools and distutils. This often includes various ways of error prone dependency checking, since distutils does not support dependencies in the setup function. Why are people still making the extra effort to support distutils - is the fact that setuptools is not in the standard library the only reason? What are the advantages of distutils and are there any drawbacks of writing setup.py files that only support setuptools.
Have a look at this. It explains all the packaging methods very well, and might help answer your question to some extent: webpage Distutils is still the standard tool for packaging in Python. It is included in the standard library (Python 2 and Python 3.0 to 3.3). It is useful for simple Python distributions, but lacks features. It introduces the distutils Python package that can be imported in your setup.py script. Setuptools was developed to overcome Distutils' limitations, and is not included in the standard library. It introduced a command-line utility called easy_install. It also introduced the setuptools Python package that can be imported in your setup.py script, and the pkg_resources Python package that can be imported in your code to locate data files installed with a distribution. One of its gotchas is that it monkey-patches the distutils Python package. It should work well with pip. The latest version was released in July 2013. So, as you can see, setuptools should be preferred to distutils, and I see where your question comes from; however, I don't see distutils losing support anytime soon, as, simply put, it is in many cases (especially legacy programs) the de facto standard. And as you probably know, changing these sorts of things in legacy programs can be quite a pain and come with quite a few problems, for example incompatibilities, which would then lead to the developer having to rewrite the source code. So there is that, and also the fact that distutils is a part of the standard Python library whereas setuptools is not. So, if you are creating a Python program in this day and age, use setuptools, but keep in mind that without distutils, setuptools would never have existed.
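For what it's worth, the dependency declaration that plain distutils lacks looks like this in a setuptools-based setup.py (an illustrative sketch; the project and package names here are made up):

from setuptools import setup

setup(
    name="example",
    version="0.1",
    packages=["example"],
    install_requires=["requests>=2.0"],  # no plain-distutils equivalent for this
)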
Tuple or list when using 'in' in an 'if' clause?
Which approach is better? Using a tuple, like: if number in (1, 2): or a list, like: if number in [1, 2]: Which one is recommended for such uses and why (both logical and performance wise)?
The CPython interpreter replaces the second form with the first. That's because loading the tuple from a constant is one operation, but the list would be 3 operations; load the two integer contents and build a new list object. Because you are using a list literal that isn't otherwise reachable, it is substituted for a tuple: >>> import dis >>> dis.dis(compile('number in [1, 2]', '<stdin>', 'eval')) 1 0 LOAD_NAME 0 (number) 3 LOAD_CONST 2 ((1, 2)) 6 COMPARE_OP 6 (in) 9 RETURN_VALUE Here the second bytecode loads a (1, 2) tuple as a constant, in one step. Compare this to creating a list object not used in a membership test: >>> dis.dis(compile('[1, 2]', '<stdin>', 'eval')) 1 0 LOAD_CONST 0 (1) 3 LOAD_CONST 1 (2) 6 BUILD_LIST 2 9 RETURN_VALUE Here N+1 steps are required for a list object of length N. This substitution is a CPython-specific peephole optimisation; see the Python/peephole.c source. For other Python implementations then, you want to stick with immutable objects instead. That said, the best option when using Python 3.2 and up, is to use a set literal: if number in {1, 2}: as the peephole optimiser will replace that with a frozenset() object and membership tests against sets are O(1) constant operations: >>> dis.dis(compile('number in {1, 2}', '<stdin>', 'eval')) 1 0 LOAD_NAME 0 (number) 3 LOAD_CONST 2 (frozenset({1, 2})) 6 COMPARE_OP 6 (in) 9 RETURN_VALUE This optimization was added in Python 3.2 but wasn't backported to Python 2. As such, the Python 2 optimiser doesn't recognize this option and the cost of building either a set or frozenset from the contents is almost guaranteed to be more costly than using a tuple for the test.
Caught exception is None
I have a script written in python2.7 where I, for debugging purposes, use a catch-all statement to catch and print all exceptions. For some reason, the exception caught is sometimes None. What could cause this to happen? the code is something like this: from __future__ import print_function try: run_arbitrary_code() except Exception as e: print(e) The output is then: None None None None I have never experienced an exception being None, and wonder what could cause this. To answer some of the comments, the function does quite a lot. It includes things like graph searches and sending and receiving JSON data over a socket, so there are quite a few things that could go wrong. But the issue here is that the raised exception is None, which does not help my debugging at all.
Look at the type of the exception, I bet it's a KeyError: try: dict()[None] except Exception as e: print(type(e), e) Output: <class 'KeyError'> None
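More generally, printing repr(e) (or the full traceback) in a catch-all handler makes this kind of surprise obvious, since it shows the exception class as well as its argument:

import traceback

try:
    run_arbitrary_code()
except Exception as e:
    print(repr(e))          # e.g. KeyError(None,) instead of a bare None
    traceback.print_exc()   # full traceback, still without re-raising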
python/pip error on osx
I've recently purchased a new hard drive and installed a clean copy of OS X Mavericks. I installed python using homebrew and i need to create a python virtual environment. But when ever i try to run any command using pip, I get this error. I haven't been able to find a solution online for this problem. Any reference would be appreciated. Here is the error I'm getting. ERROR:root:code for hash md5 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md5 ERROR:root:code for hash sha1 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha1 ERROR:root:code for hash sha224 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha224 ERROR:root:code for hash sha256 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha256 ERROR:root:code for hash sha384 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha384 ERROR:root:code for hash sha512 was not found. 
Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha512 Traceback (most recent call last): File "/usr/local/bin/pip", line 9, in <module> load_entry_point('pip==1.5.6', 'console_scripts', 'pip')() File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 356, in load_entry_point File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 2439, in load_entry_point File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 2155, in load File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/__init__.py", line 10, in <module> from pip.util import get_installed_distributions, get_prog File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 18, in <module> from pip._vendor.distlib import version File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/version.py", line 14, in <module> from .compat import string_types File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/compat.py", line 31, in <module> from urllib2 import (Request, urlopen, URLError, HTTPError, ImportError: cannot import name HTTPSHandler If you need any extra information from me let me know, this is my first time posting a question here. Thanks.
OK, I found out online that these errors are related to openssl, but I already had openssl installed. After a little more research, I tried the following and it solved the issue for me. Here's my solution in case you get the same error. brew install openssl brew link openssl --force brew uninstall python brew install python --with-brewed-openssl Hope that helps.
Pandas Plotting with Multi-Index
After performing a groupby.sum() on a dataframe I'm having some trouble trying to create my intended plot. How can I create a subplot (kind='bar') for each 'Code', where the x-axis is the 'Month' and the bars are ColA and ColB?
I found the unstack(level) method to work perfectly, which has the added benefit of not needing a priori knowledge about how many Codes there are. summed_group.unstack(level=0).plot(kind='bar', subplots=True)
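As a self-contained sketch (with made-up data standing in for the question's frame):

import pandas as pd

df = pd.DataFrame({
    'Code': ['A', 'A', 'B', 'B'],
    'Month': [1, 2, 1, 2],
    'ColA': [10, 20, 15, 25],
    'ColB': [5, 10, 8, 12],
})
summed_group = df.groupby(['Code', 'Month']).sum()   # MultiIndex: (Code, Month)
summed_group.unstack(level=0).plot(kind='bar', subplots=True)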
What exactly is Python multiprocessing Module's .join() Method Doing?
I'm learning about Python multiprocessing (from a PMOTW article) and would love some clarification on what exactly the join() method is doing. In an old tutorial from 2008 it states that without the p.join() call in the code below, "the child process will sit idle and not terminate, becoming a zombie you must manually kill". from multiprocessing import Process def say_hello(name='world'): print "Hello, %s" % name p = Process(target=say_hello) p.start() p.join() I added a printout of the PID as well as a time.sleep to test, and as far as I can tell, the process terminates on its own: from multiprocessing import Process import sys import time def say_hello(name='world'): print "Hello, %s" % name print 'Starting:', p.name, p.pid sys.stdout.flush() print 'Exiting :', p.name, p.pid sys.stdout.flush() time.sleep(20) p = Process(target=say_hello) p.start() # no p.join() within 20 seconds: 936 ttys000 0:00.05 /Library/Frameworks/Python.framework/Versions/2.7/Reso 938 ttys000 0:00.00 /Library/Frameworks/Python.framework/Versions/2.7/Reso 947 ttys001 0:00.13 -bash after 20 seconds: 947 ttys001 0:00.13 -bash Behavior is the same with p.join() added back at the end of the file. Python Module of the Week offers a very readable explanation of the module; "To wait until a process has completed its work and exited, use the join() method.", but it seems like at least OS X was doing that anyway. I'm also wondering about the name of the method. Is it concatenating a process with its end? Or does it just share a name with Python's native .join() method?
The join() method, when used with threading or multiprocessing, is not related to str.join() - it's not actually concatenating anything together. Rather, it just means "wait for this [thread/process] to complete". The name join is used because the multiprocessing module's API is meant to look as similar as possible to the threading module's API, and the threading module uses join for its Thread object. Using the term join to mean "wait for a thread to complete" is common across many programming languages, so Python just adopted it as well. Now, the reason you see the 20 second delay both with and without the call to join() is because by default, when the main process is ready to exit, it will implicitly call join() on all running multiprocessing.Process instances. This isn't as clearly stated in the multiprocessing docs as it should be, but it is mentioned in the Programming Guidelines section: Remember also that non-daemonic processes will be joined automatically. You can override this behavior by setting the daemon flag on the Process to True prior to starting the process: p = Process(target=say_hello) p.daemon = True p.start() # Both parent and child will exit here, since the main process has completed. If you do that, the child process will be terminated as soon as the main process completes: daemon The process’s daemon flag, a Boolean value. This must be set before start() is called. The initial value is inherited from the creating process. When a process exits, it attempts to terminate all of its daemonic child processes.
How can I use seaborn without changing the matplotlib defaults?
I am trying to use seaborn, because of its distplot function. But I prefer the default matplotlib settings. When I import seaborn, it changes automatically the appearance of my figure. How can I use seaborn functions without changing the look of the plots?
Import seaborn like this: import seaborn.apionly as sns and then you should be able to use sns.distplot but maintain the default matplotlib styling + your personal rc configuration.
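For example, a minimal sketch:

import numpy as np
import matplotlib.pyplot as plt
import seaborn.apionly as sns   # no changes to matplotlib's rcParams

data = np.random.randn(1000)
sns.distplot(data)              # seaborn's distplot, default matplotlib look
plt.show()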
py2app: modulegraph missing scan_code
For some reason I can't explain or google, py2app crashes on me even with the simplest examples. Im using a python 3.4.1 virtual environment created as Projects/Test/virtenv which has py2app installed via pip. Here is the output of $pip list: altgraph (0.12) macholib (1.7) modulegraph (0.12) pip (1.5.6) py2app (0.9) setuptools (3.6) foo.py is a hello world example file saved in Projects/Test/ and contains a single line: print('hello world') setup.py is saved in Projects/Test as generated by $py2applet --make-setup foo.py: """ This is a setup.py script generated by py2applet Usage: python setup.py py2app """ from setuptools import setup APP = ['foo.py'] DATA_FILES = [] OPTIONS = {'argv_emulation': True} setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) Here is the full output of running $python setup.py py2app (all pip and python commands were done with the virtual enviroment activated) : running py2app creating /Users/mik/Desktop/Projects/Test/build creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64 creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone/app creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/collect creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/temp creating /Users/mik/Desktop/Projects/Test/dist creating build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/lib-dynload creating build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/Frameworks *** using recipe: lxml *** *** using recipe: ftplib *** *** using recipe: sip *** *** using recipe: ctypes *** *** using recipe: xml *** *** using recipe: pydoc *** Traceback (most recent call last): File "setup.py", line 18, in <module> setup_requires=['py2app'], File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 659, in run self._run() File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 865, in _run self.run_normal() File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 943, in run_normal self.process_recipes(mf, filters, flatpackages, loader_files) File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 824, in process_recipes rval = check(self, mf) File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/recipes/virtualenv.py", line 80, in check mf.scan_code(co, m) AttributeError: 'ModuleGraph' object has no attribute 'scan_code' Can someone please explain whats going on and how to fix it? EDIT: here is the documentation for scan_code in modulegraph.py, however the file found in Projects/Test/virtenv/lib/python3.4/site-packages/modulegraph/modulegraph.py contains a function called _scan_code with a leading underscore. 
Is this some type of change that broke py2app? EDIT: posted this EDIT: Manually removing the leading underscores from a couple of function definitions in the file mentioned allowed py2app to run without error. I'm still confused about what happened.
I had the same problem as you and have now solved it. I referred to this post. First, find the file /yourenv/lib/python2.7/site-packages/py2app/recipes/virtualenv.py. Next, open virtualenv.py and look for scan_code or load_module; change them to _scan_code or _load_module respectively. Finally, try to run your app again.
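To make the edit concrete, taking the call shown in the traceback above as the example, the change inside recipes/virtualenv.py is roughly:
# before -- fails because newer modulegraph renamed the method to _scan_code
mf.scan_code(co, m)
# after
mf._scan_code(co, m)
Any load_module call in the same recipe gets the same treatment (rename it to _load_module).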
Getting individual colors from a color map in matplotlib
If you have a Colormap from cmap = matplotlib.cm.get_cmap('Spectral') how can you get a particular colour out of it between 0 and 1, where 0.0 is the first colour in the map and 1.0 is the last colour in the map? Ideally I would be able to get the middle colour in the map by doing: >>> do_some_magic(cmap, 0.5) # Return an RGBA tuplet (0.1, 0.2, 0.3, 1.0)
You can do this with the code below; the code in your question was actually very close to what you needed, all you have to do is call the cmap object you have. import matplotlib cmap = matplotlib.cm.get_cmap('Spectral') rgba = cmap(0.5) print(rgba) # (0.99807766255210428, 0.99923106502084169, 0.74602077638401709, 1.0) For values outside of the range [0.0, 1.0] it will return the under and over colour (respectively). This, by default, is the minimum and maximum colour within the range (so 0.0 and 1.0). This default can be changed with cmap.set_under() and cmap.set_over(). For "special" numbers such as np.nan and np.inf the default is to use the 0.0 value; this can be changed using cmap.set_bad() similarly to under and over as above. Finally it may be necessary for you to normalize your data such that it conforms to the range [0.0, 1.0]. This can be done using matplotlib.colors.Normalize simply as shown in the small example below where the arguments vmin and vmax describe what numbers should be mapped to 0.0 and 1.0 respectively. import matplotlib norm = matplotlib.colors.Normalize(vmin=10.0, vmax=20.0) print(norm(15.0)) # 0.5 A logarithmic normaliser (matplotlib.colors.LogNorm) is also available for data ranges with a large range of values. (Thanks to both Joe Kington and tcaswell for suggestions on how to improve the answer.)
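Putting the two pieces together, a small example of mapping an arbitrary data value to a colour (the vmin/vmax values are just the ones from the snippet above):
import matplotlib

cmap = matplotlib.cm.get_cmap('Spectral')
norm = matplotlib.colors.Normalize(vmin=10.0, vmax=20.0)

value = 15.0               # a data value inside [vmin, vmax]
rgba = cmap(norm(value))   # normalise to [0.0, 1.0] first, then look up the colour
print(rgba)                # an (r, g, b, a) tuple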
merging dictionaries in ansible
I'm currently building a role for installing PHP using ansible, and I'm having some difficulty merging dictionaries. I've tried several ways to do so, but I can't get it to work like I want it to: # A vars file: my_default_values: key = value my_values: my_key = my_value # In a playbook, I create a task to attempt merging the # two dictionaries (which doesn't work): - debug: msg="{{ item.key }} = {{ item.value }}" with_dict: my_default_values + my_values # I have also tried: - debug: msg="{{ item.key }} = {{ item.value }}" with_dict: my_default_values|union(my_values) # I have /some/ success with using j2's update, # but you can't use j2 syntax in "with_dict", it appears. # This works: - debug: msg="{{ my_default_values.update(my_values) }}" # But this doesn't: - debug: msg="{{ item.key }} = {{ item.value }}" with_dict: my_default_values.update(my_values) Is there a way to merge two dictionaries, so I can use it with "with_dict"?
In Ansible 2.0, there is a Jinja filter, combine, for this: - debug: msg="{{ item.key }} = {{ item.value }}" with_dict: "{{ my_default_values | combine(my_values) }}"
What's the purpose of dictproxy?
The __dict__ of a type is a dictproxy object that is read only. I want to know what's the purpose of it. Is it only for "don't allow modify builtin types"? I found a method that can walk around this. I know it's not a good idea to modify builtin types. But I am trying to modify cdef class of Cython on fly. I want to know are there any dangerous to modify the __dict__ of cdef class this way? Here is the code: import gc gc.get_referents(float.__dict__)[0]["square"] = lambda self: self*self (3.14).square()
__dict__ is a namespace object that holds the class's attributes. There shouldn't be a problem modifying a class's attributes as long as you don't modify the __dict__ itself: Example: # good MyClass.x = 1 # bad MyClass.__dict__['x'] = 1 # good m = MyClass() m.x = 1 # also good m.__dict__['x'] = 1 Python documentation [ LINK ]: Attribute assignment updates the module’s namespace dictionary, e.g., m.x = 1 is equivalent to m.__dict__["x"] = 1. So there should be no problem modifying objects through the dict. Furthermore: Since we can modify the attributes of a class, the dict of a class is documented as writable: __dict__ | The namespace supporting arbitrary function attributes. | Writable Note: it could be that CPython has a slightly different implementation than the documented behaviour, but as far as I can tell from the documentation it's not mentioned.
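As a concrete illustration of the read-only behaviour the question is asking about (a new-style class on CPython 2.7): item assignment on a class's __dict__ is rejected, while attribute assignment and instance __dict__ assignment work:
>>> class MyClass(object):
...     pass
...
>>> MyClass.x = 1               # attribute assignment on the class: fine
>>> MyClass.__dict__['y'] = 2   # item assignment on the dictproxy: rejected
Traceback (most recent call last):
  ...
TypeError: 'dictproxy' object does not support item assignment
>>> m = MyClass()
>>> m.__dict__['y'] = 2         # instance __dict__ is a plain dict: fine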
What does "SyntaxError: Missing parentheses in call to 'print'" mean in Python?
When I try to use a print statement in Python, it gives me this error: >>> print "Hello world!" File "<stdin>", line 1 print "Hello world!" ^ SyntaxError: Missing parentheses in call to 'print' What does that mean?
This error message means that you are attempting to use Python 3 to follow an example or run a program that uses the Python 2 print statement: print "Hello world" The statement above does not work in Python 3. In Python 3 you need to add parentheses around the value to be printed: print("Hello world") “SyntaxError: Missing parentheses in call to 'print'” is a new error message that was added in Python 3.4.2 primarily to help users that are trying to follow a Python 2 tutorial while running Python 3. In Python 3, printing values changed from being a distinct statement to being an ordinary function call, so it now needs parentheses: >>> print("Hello world!") Hello world! In earlier versions of Python 3, the interpreter just reports a generic syntax error, without providing any useful hints as to what might be going wrong: >>> print "Hello world!" File "<stdin>", line 1 print "Hello world!" ^ SyntaxError: invalid syntax As for why print became an ordinary function in Python 3, that didn't relate to the basic form of the statement, but rather to how you did more complicated things like printing multiple items to stderr with a trailing space rather than ending the line. In Python 2: >>> import sys >>> print >> sys.stderr, 1, 2, 3,; print >> sys.stderr, 4, 5, 6 1 2 3 4 5 6 In Python 3: >>> import sys >>> print(1, 2, 3, file=sys.stderr, end=" "); print(4, 5, 6, file=sys.stderr) 1 2 3 4 5 6
Mac OS X - EnvironmentError: mysql_config not found
First off, yeah, I've already seen this: pip install mysql-python fails with EnvironmentError: mysql_config not found The problem: I am trying to use Django on a Google App Engine project. However, I haven't been able to get started as the server fails to start properly due to: ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb I did some research and it all pointed to having to install Mysql-python, as apparently it isn't on my system. I actually tried uninstalling it and got this: Cannot uninstall requirement mysql-python, not installed Whenever I actually do try to install via: sudo pip install MySQL-python I get an error stating: raise EnvironmentError("%s not found" % (mysql_config.path,)) EnvironmentError: mysql_config not found I've already tried running: export PATH=$PATH:/usr/local/mysql/bin but that didn't seem to help, as I ran the installation command again and it still failed. Any ideas? Please note I'm not on a virtualenv.
Ok, well, first of all, let me check if I am on the same page as you: You installed python You did brew install mysql You did export PATH=$PATH:/usr/local/mysql/bin And finally, you did pip install MySQL-Python If you did all those steps in the same order, and you still got an error, read on to the end, if, however, you did not follow these exact steps try, following them from the very beginning. So, you followed the steps, and you're still geting an error, well, there are a few things you could try: Try running which mysql_config from bash. It probably won't be found. That's why the build isn't finding it either. Try running locate mysql_config and see if anything comes back. The path to this binary needs to be either in your shell's $PATH environment variable, or it needs to be explicitly in the setup.py file for the module assuming it's looking in some specific place for that file. Instead of using MySQL-Python, try using 'mysql-connector-python', it can be installed using pip install mysql-connector-python. More information on this can be found here and here. Manually find the location of 'mysql/bin', 'mysql_config', and 'MySQL-Python', and add all these to the $PATH environment variable. If all above steps fail, then you could try installing 'mysql' using MacPorts, in which case the file 'mysql_config' would actually be called 'mysql_config5', and in this case, you would have to do this after installing: export PATH=$PATH:/opt/local/lib/mysql5/bin. You can find more details here. Note1: I've seen some people saying that installing python-dev and libmysqlclient-dev also helped, however I do not know if these packages are available on Mac OS. Note2: Also, make sure to try running the commands as root. I got my answers from (besides my brain) these places (maybe you could have a look at them, to see if it would help): 1, 2, 3, 4. I hoped I helped, and would be happy to know if any of this worked, or not. Good luck.
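If you end up taking the mysql-connector-python route mentioned above, a minimal connection sketch (the credentials and database name here are placeholders) looks like:
import mysql.connector

# replace these hypothetical values with your own settings
cnx = mysql.connector.connect(user='myuser', password='mypassword',
                              host='127.0.0.1', database='mydb')
cursor = cnx.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
cnx.close()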
Print raw HTTP request in Flask or WSGI
I am debugging a microcontroller I've built which is writing raw HTTP requests line by line. I am using Flask for my backend and I would like to see the entire request as it appears in this format: GET / HTTP/1.1 Content-length: 123 User-agent: blah ... I know Flask is based on WSGI. Is there anyway to get this to work with Flask?
Yes, Flask is a WSGI application, so it is trivial to wrap your app in an extra layer that logs the request: import pprint class LoggingMiddleware(object): def __init__(self, app): self._app = app def __call__(self, environ, resp): errorlog = environ['wsgi.errors'] pprint.pprint(('REQUEST', environ), stream=errorlog) def log_response(status, headers, *args): pprint.pprint(('RESPONSE', status, headers), stream=errorlog) return resp(status, headers, *args) return self._app(environ, log_response) This defines a piece of middleware to wrap your Flask application in. The advantage is that it operates entirely independent of Flask, giving you unfiltered insight into what goes in and what comes out. How you apply the middleware depends on the exact WSGI server you are using; see your WSGI server documentation. When running Flask with the built-in server (app.run()), do: if __name__ == '__main__': app.wsgi_app = LoggingMiddleware(app.wsgi_app) app.run() The little app.wsgi_app wrapping dance places the LoggingMiddleware around the Flask WSGI application. The output goes to the wsgi.error stream; where that ends up again depends on your WSGI server; mod_wsgi puts this in the Apache error log for your site, the bundled Flask server prints this to stderr.
Specify format for input arguments argparse python
I have a python script that requires some command line inputs and I am using argparse for parsing them. I found the documentation a bit confusing and couldn't find a way to check for a format in the input parameters. What I mean by checking format is explained with this example script: parser.add_argument('-s', "--startdate", help="The Start Date - format YYYY-MM-DD ", required=True) parser.add_argument('-e', "--enddate", help="The End Date format YYYY-MM-DD (Inclusive)", required=True) parser.add_argument('-a', "--accountid", type=int, help='Account ID for the account for which data is required (Default: 570)') parser.add_argument('-o', "--outputpath", help='Directory where output needs to be stored (Default: ' + os.path.dirname(os.path.abspath(__file__))) I need to check for option -s and -e that the input by the user is in the format YYYY-MM-DD. Is there an option in argparse that I do not know of which accomplishes this.
Per the documentation: The type keyword argument of add_argument() allows any necessary type-checking and type conversions to be performed ... type= can take any callable that takes a single string argument and returns the converted value You could do something like: from datetime import datetime def valid_date(s): try: return datetime.strptime(s, "%Y-%m-%d") except ValueError: msg = "Not a valid date: '{0}'.".format(s) raise argparse.ArgumentTypeError(msg) Then use that as type: parser.add_argument('-s', "--startdate", help="The Start Date - format YYYY-MM-DD ", required=True, type=valid_date)
Initializing an OrderedDict using its constructor
What's the correct way to initialize an ordered dictionary (OD) so that it retains the order of initial data? from collections import OrderedDict # Obviously wrong because regular dict loses order d = OrderedDict({'b':2, 'a':1}) # An OD is represented by a list of tuples, so would this work? d = OrderedDict([('b',2), ('a', 1)]) # What about using a list comprehension, will 'd' preserve the order of 'l' l = ['b', 'a', 'c', 'aa'] d = OrderedDict([(i,i) for i in l]) Question: Will an OrderedDict preserve the order of a list of tuples, or tuple of tuples or tuple of lists or list of lists etc. passed at the time of initialization (2nd & 3rd example above)? How does one go about verifying if OrderedDict actually maintains an order? Since a dict has an unpredictable order, what if my test vectors luckily have the same initial order as the unpredictable order of a dict? For example, if instead of d = OrderedDict({'b':2, 'a':1}) I write d = OrderedDict({'a':1, 'b':2}), I can wrongly conclude that the order is preserved. In this case, I found out that a dict is ordered alphabetically, but that may not be always true. What's a reliable way to use a counterexample to verify whether a data structure preserves order or not, short of trying test vectors repeatedly until one breaks? P.S. I'll just leave this here for reference: "The OrderedDict constructor and update() method both accept keyword arguments, but their order is lost because Python’s function call semantics pass-in keyword arguments using a regular unordered dictionary" P.P.S : Hopefully, in future, OrderedDict will preserve the order of kwargs also (example 1): http://bugs.python.org/issue16991
# An OD is represented by a list of tuples, so would this work? d = OrderedDict([('b', 2), ('a', 1)]) Yes, that will work. By definition, a list is always ordered the way it is represented. This goes for list comprehensions too; the list generated is ordered the same way the data was provided (i.e. sourced from a list it will be deterministic, sourced from a set or dict not so much). How does one go about verifying if OrderedDict actually maintains an order? Since a dict has an unpredictable order, what if my test vectors luckily have the same initial order as the unpredictable order of a dict? For example, if instead of d = OrderedDict({'b':2, 'a':1}) I write d = OrderedDict({'a':1, 'b':2}), I can wrongly conclude that the order is preserved. In this case, I found out that a dict is ordered alphabetically, but that may not always be true. i.e. what's a reliable way to use a counterexample to verify if a data structure preserves order or not, short of trying test vectors repeatedly until one breaks? You keep your source list of 2-tuples around for reference, and use that as your test data for your test cases when you do unit tests. Iterate through them and ensure the order is maintained.
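A small self-contained check along those lines (nothing beyond the standard library is assumed):
from collections import OrderedDict

pairs = [('b', 2), ('a', 1), ('c', 3), ('aa', 4)]   # the reference order
d = OrderedDict(pairs)
print(list(d.items()) == pairs)                      # True: insertion order is kept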
how to use python to execute a curl command
I'm new to python and I want to execute a curl command in python. Usually, I just need enter the command in terminal and press return key. However, I don't know how it works in python. The command shows below: curl -d @request.json --header "Content-Type: application/json" https://www.googleapis.com/qpxExpress/v1/trips/search?key=mykeyhere There is a request.json file to be sent to get response. I searched a lot and got confused. I tried to write a piece of code, although I could not fully understand. It didn't work. import pycurl import StringIO response = StringIO.StringIO() c = pycurl.Curl() c.setopt(c.URL, 'https://www.googleapis.com/qpxExpress/v1/trips/search?key=mykeyhere') c.setopt(c.WRITEFUNCTION, response.write) c.setopt(c.HTTPHEADER, ['Content-Type: application/json','Accept-Charset: UTF-8']) c.setopt(c.POSTFIELDS, '@request.json') c.perform() c.close() print response.getvalue() response.close() The error message is 'Parse Error'.Can anyone tell me how to fix it? or how to get response from the sever correctly? Thanks a lot!
For sake of simplicity, maybe you should consider using the standard library Requests. An example with json response content would be something like: import requests r = requests.get('https://github.com/timeline.json') r.json() If you look for further information, in the Quickstart section, they have lots of working examples. EDIT: For your specific curl translation: import requests import json url = 'https://www.googleapis.com/qpxExpress/v1/trips/search?key=mykeyhere' payload = json.load(open("request.json")) headers = {'content-type': 'application/json', 'Accept-Charset': 'UTF-8'} r = requests.post(url, data=json.dumps(payload), headers=headers)
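If you then want to inspect what came back (assuming the API responds with JSON):
print(r.status_code)   # HTTP status of the response
print(r.json())        # parsed JSON body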
vlookup in Pandas using join
I have the following 2 dataframes Example1 sku loc flag 122 61 True 123 61 True 113 62 True 122 62 True 123 62 False 122 63 False 301 63 True Example2 sku dept 113 a 122 b 123 b 301 c I want to perform a merge, or join opertation using Pandas (or whichever Python operator is best) to produce the below data frame. Example3 sku loc flag dept 122 61 True b 123 61 True b 113 62 True a 122 62 True b 123 62 False b 122 63 False b 301 63 True c Both df_Example1.join(df_Example2,lsuffix='_ProdHier') df_Example1.join(df_Example2,how='outer',lsuffix='_ProdHier') Aren't working. What am I doing wrong?
Perform a left merge, this will use sku column as the column to join on: In [26]: df.merge(df1, on='sku', how='left') Out[26]: sku loc flag dept 0 122 61 True b 1 122 62 True b 2 122 63 False b 3 123 61 True b 4 123 62 False b 5 113 62 True a 6 301 63 True c If sku is in fact your index then do this: In [28]: df.merge(df1, left_index=True, right_index=True, how='left') Out[28]: loc flag dept sku 113 62 True a 122 61 True b 122 62 True b 122 63 False b 123 61 True b 123 62 False b 301 63 True c Another method is to use map, if you set sku as the index on your second df, so in effect it becomes a Series then the code simplifies to this: In [19]: df['dept']=df.sku.map(df1.dept) df Out[19]: sku loc flag dept 0 122 61 True b 1 123 61 True b 2 113 62 True a 3 122 62 True b 4 123 62 False b 5 122 63 False b 6 301 63 True c
Print not showing in ipython notebook - python
I am using ipython notebook (http://ipython.org/notebook.html) to do a demo and it seems like the print function is not working: Sorry the print screen might not be that clear but in short it's showing: In [1]: 'hello world' Out [1]: 'hello world' In [2]: print 'hello world' And there's no print output from the [2] Does anyone know whether it's a known bug? And does anyone know how to fix it?
I had a similar printing problem when my first code cell was: import sys reload(sys) sys.setdefaultencoding("utf-8") Then I've commented the second and third lines like this: import sys #reload(sys) #sys.setdefaultencoding("utf-8") Reset the kernel and re-ran the program and now my print statements are working properly. [edit] Later on I've found that when I was first having the printing problem, all print outputs were actually being sent to the ipython console terminal (on my Linux box), instead of being embed on the notebook.
Why is variable1 += variable2 much faster than variable1 = variable1 + variable2?
I have inherited some Python code which is used to create huge tables (of up to 19 columns wide by 5000 rows). It took nine seconds for the table to be drawn on the screen. I noticed that each row was added using this code: sTable = sTable + '\n' + GetRow() where sTable is a string. I changed that to: sTable += '\n' + GetRow() and I noticed that the table now appeared in six seconds. And then I changed it to: sTable += '\n%s' % GetRow() based on these Python performance tips (still six seconds). Since this was called about 5000 times, it highlighted the performance issue. But why was there such a large difference? And why didn't the compiler spot the problem in the first version and optimise it?
This isn't about using inplace += versus + binary add. You didn't tell us the whole story. Your original version concatenated 3 strings, not just two: sTable = sTable + '\n' + sRow # simplified, sRow is a function call Python tries to help out and optimises string concatenation; both when using strobj += otherstrobj and strobj = strobj + otherstringobj, but it cannot apply this optimisation when more than 2 strings are involved. Python strings are immutable normally, but if there are no other references to the left-hand string object and it is being rebound anyway, then Python cheats and mutates the string. This avoids having to create a new string each time you concatenate, and that can lead to a big speed improvement. This is implemented in the bytecode evaluation loop. Both when using BINARY_ADD on two strings and when using INPLACE_ADD on two strings, Python delegates concatenation to a special helper function string_concatenate(). To be able to optimize the concatenation by mutating the string, it first needs to make sure that the string has no other references to it; if only the stack and the original variable reference it then this can be done, and the next operation is going to replace the original variable reference. So if there are just 2 references to the string, and the next operator is one of STORE_FAST (set a local variable), STORE_DEREF (set a variable referenced by closed over functions) or STORE_NAME (set a global variable), and the affected variable currently references the same string, then that target variable is cleared to reduce the number of references to just 1, the stack. And this is why your original code could not use this optimization fully. The first part of your expression is sTable + '\n' and the next operation is another BINARY_ADD: >>> import dis >>> dis.dis(compile(r"sTable = sTable + '\n' + sRow", '<stdin>', 'exec')) 1 0 LOAD_NAME 0 (sTable) 3 LOAD_CONST 0 ('\n') 6 BINARY_ADD 7 LOAD_NAME 1 (sRow) 10 BINARY_ADD 11 STORE_NAME 0 (sTable) 14 LOAD_CONST 1 (None) 17 RETURN_VALUE The first BINARY_ADD is followed by a LOAD_NAME to access the sRow variable, not a store operation. This first BINARY_ADD must always result in a new string object, ever larger as sTable grows and it takes more and more time to create this new string object. You changed this code to: sTable += '\n%s' % sRow which removed the second concatenation. Now the bytecode is: >>> dis.dis(compile(r"sTable += '\n%s' % sRow", '<stdin>', 'exec')) 1 0 LOAD_NAME 0 (sTable) 3 LOAD_CONST 0 ('\n%s') 6 LOAD_NAME 1 (sRow) 9 BINARY_MODULO 10 INPLACE_ADD 11 STORE_NAME 0 (sTable) 14 LOAD_CONST 1 (None) 17 RETURN_VALUE and all we have left is an INPLACE_ADD followed by a store. Now sTable can be altered in-place, not resulting in a ever larger new string object. You'd have gotten the same speed difference with: sTable = sTable + ('\n%s' % sRow) here. A time trial shows the difference: >>> import random >>> from timeit import timeit >>> testlist = [''.join([chr(random.randint(48, 127)) for _ in range(random.randrange(10, 30))]) for _ in range(1000)] >>> def str_threevalue_concat(lst): ... res = '' ... for elem in lst: ... res = res + '\n' + elem ... >>> def str_twovalue_concat(lst): ... res = '' ... for elem in lst: ... res = res + ('\n%s' % elem) ... 
>>> timeit('f(l)', 'from __main__ import testlist as l, str_threevalue_concat as f', number=10000) 6.196403980255127 >>> timeit('f(l)', 'from __main__ import testlist as l, str_twovalue_concat as f', number=10000) 2.3599119186401367 The moral of this story is that you should not be using string concatenation in the first place. The proper way to build a new string from loads of other strings is to use a list, then use str.join(): table_rows = [] for something in something_else: table_rows += ['\n', GetRow()] sTable = ''.join(table_rows) This is faster still: >>> def str_join_concat(lst): ... res = ''.join(['\n%s' % elem for elem in lst]) ... >>> timeit('f(l)', 'from __main__ import testlist as l, str_join_concat as f', number=10000) 1.7978830337524414 but you cannot beat using just '\n'.join(lst): >>> def nl_join_concat(lst): ... res = '\n'.join(lst) ... >>> timeit('f(l)', 'from __main__ import testlist as l, nl_join_concat as f', number=10000) 0.23735499382019043
Why does running the Flask dev server run itself twice?
I'm using Flask for developing a website and while in development I run flask using the following file: #!/usr/bin/env python from datetime import datetime from app import app import config if __name__ == '__main__': print '################### Restarting @', datetime.utcnow(), '###################' app.run(port=4004, debug=config.DEBUG, host='0.0.0.0') When I start the server, or when it auto-restarts because files have been updated, it always shows the print line twice: ################### Restarting @ 2014-08-26 10:51:49.167062 ################### ################### Restarting @ 2014-08-26 10:51:49.607096 ################### Although it is not really a problem (everything for the rest works as expected), I simply wonder why it behaves like this? Any ideas?
The Werkzeug reloader spawns a child process so that it can restart that process each time your code changes. Werkzeug is the library that supplies Flask with the development server when you call app.run(). See the restart_with_reloader() function code; your script is run again with subprocess.call(). If you set use_reloader to False you'll see the behaviour go away, but then you also lose the reloading functionality: app.run(port=4004, debug=config.DEBUG, host='0.0.0.0', use_reloader=False) You can look for the WERKZEUG_RUN_MAIN environment variable if you wanted to detect when you are in the reloading child process: if __name__ == '__main__': import os if os.environ.get('WERKZEUG_RUN_MAIN') == 'true': print '################### Restarting @ {} ###################'.format( datetime.utcnow()) app.run(port=4004, debug=config.DEBUG, host='0.0.0.0') However, if you need to set up module globals, then you should instead use the @app.before_first_request decorator on a function and have that function set up such globals. It'll be called just once after every reload when the first request comes in: @app.before_first_request def before_first_request(): print '########### Restarted, first request @ {} ############'.format( datetime.utcnow()) Do take into account that if you run this in a full-scale WSGI server that uses forking or new subprocesses to handle requests, that before_first_request handlers may be invoked for each new subprocess.
How to provide Python syntax coloring inside Webstorm?
I have a Python project, and I use WebStorm as my Editor. The problem is that Python's syntax doesn't get colored. How can I display Python pages with a nice syntax? I am not searching more than than. I'm not going to develop pages in Python, but I do want them to get displayed nicely in Webstorm.
Your ONLY option is to use TextMate bundles support plugin with Python bundle. This official article (with pictures) is for PhpStorm, but it should work the same for WebStorm as well: http://confluence.jetbrains.com/display/PhpStorm/TextMate+Bundles+in+PhpStorm
PyCharm resolving - flask.ext.sqlalchemy vs flask_sqlalchemy
If I use the following format in my application, everything works, except PyCharms resolving / autocomplete feature: from flask.ext.sqlalchemy import SQLAlchemy If I use the following format in my application, everything works. But, alas, it is not the correct way to import the libraries: from flask_sqlalchemy import SQLAlchemy Is there any way to make PyCharm resolve the first syntax correctly?
The flask.ext namespace is a transition namespace, see the Extension Import Transition section of the Flask Extension Development docs: For a while we recommended using namespace packages for Flask extensions. This turned out to be problematic in practice because many different competing namespace package systems exist and pip would automatically switch between different systems and this caused a lot of problems for users. and Flask extensions should urge users to import from flask.ext.foo instead of flask_foo or flaskext_foo so that extensions can transition to the new package name without affecting users. So to transition between versions, the flask.ext alias was added, which will automatically try to import flask_[name] packages when importing flask.ext.[name]. But that transition is now moot; you will no longer find packages that still rely solely on flask.ext. As such, it is perfectly fine to use the actual module name and have PyCharm autocomplete the module contents. You only really have to use flask.ext if you are still using an older version of the extension and need to be future compatible. That future is already here.
ANOVA in python using pandas dataframe with statsmodels or scipy?
I want to use the Pandas dataframe to breakdown the variance in one variable. For example, if I have a column called 'Degrees', and I have this indexed for various dates, cities, and night vs. day, I want to find out what fraction of the variation in this series is coming from cross-sectional city variation, how much is coming from time series variation, and how much is coming from night vs. day. In Stata I would use Fixed effects and look at the R^2. Hopefully my question makes sense. Basically, what I want to do, is find the ANOVA breakdown of "Degrees" by three other columns.
I set up a direct comparison to test them, found that their assumptions can differ slightly , got a hint from a statistician, and here is an example of ANOVA on a pandas dataframe matching R's results: import pandas as pd import statsmodels.api as sm from statsmodels.formula.api import ols # R code on R sample dataset #> anova(with(ChickWeight, lm(weight ~ Time + Diet))) #Analysis of Variance Table # #Response: weight # Df Sum Sq Mean Sq F value Pr(>F) #Time 1 2042344 2042344 1576.460 < 2.2e-16 *** #Diet 3 129876 43292 33.417 < 2.2e-16 *** #Residuals 573 742336 1296 #write.csv(file='ChickWeight.csv', x=ChickWeight, row.names=F) cw = pd.read_csv('ChickWeight.csv') cw_lm=ols('weight ~ Time + C(Diet)', data=cw).fit() #Specify C for Categorical print(sm.stats.anova_lm(cw_lm, typ=2)) # sum_sq df F PR(>F) #C(Diet) 129876.056995 3 33.416570 6.473189e-20 #Time 2016357.148493 1 1556.400956 1.803038e-165 #Residual 742336.119560 573 NaN NaN
Django 1.7 throws django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet
This is the traceback on my windows system. Traceback (most recent call last): File "D:\AMD\workspace\steelrumors\manage.py", line 9, in <module> django.setup() File "D:\AMD\Django\django-django-4c85a0d\django\__init__.py", line 21, in setup apps.populate(settings.INSTALLED_APPS) File "D:\AMD\Django\django-django-4c85a0d\django\apps\registry.py", line 108, in populate app_config.import_models(all_models) File "D:\AMD\Django\django-django-4c85a0d\django\apps\config.py", line 197, in import_models self.models_module = import_module(models_module_name) File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module __import__(name) File "C:\Python27\lib\site-packages\registration\models.py", line 15, in <module> User = get_user_model() File "D:\AMD\Django\django-django-4c85a0d\django\contrib\auth\__init__.py", line 135, in get_user_model return django_apps.get_model(settings.AUTH_USER_MODEL) File "D:\AMD\Django\django-django-4c85a0d\django\apps\registry.py", line 199, in get_model self.check_models_ready() File "D:\AMD\Django\django-django-4c85a0d\django\apps\registry.py", line 131, in check_models_ready raise AppRegistryNotReady("Models aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Models aren't loaded yet. And my manage.py looks like this: import os import sys import django if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "steelrumors.settings") django.setup() from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) I get this error when i am trying to use registration app in Django 1.7
Running these commands solved my problem (credit to this answer): import django django.setup() However I'm not sure why I need this. Comments would be appreciated.
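For what it's worth, the usual pattern when this bites a standalone script (rather than manage.py) is to call django.setup() before anything that imports models; a rough sketch using the settings module name from the question, with a hypothetical app import:
import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "steelrumors.settings")
django.setup()   # populate the app registry first

# only import models (or apps that import models, like registration) after setup()
# from myapp.models import MyModel   # hypothetical example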
Remove or adapt border of frame of legend using matplotlib
When plotting a plot using matplotlib: How to remove the box of the legend? How to change the color of the border of the legend box? How to remove only the border of the box of the legend?
When plotting a plot using matplotlib: How to remove the box of the legend? plt.legend(frameon=False) How to change the color of the border of the legend box? leg = plt.legend() leg.get_frame().set_edgecolor('b') How to remove only the border of the box of the legend? leg = plt.legend() leg.get_frame().set_linewidth(0.0)
Python sharing a lock between processes
I am attempting to use a partial function so that pool.map() can target a function that has more than one parameter (in this case a Lock() object). Here is example code (taken from an answer to a previous question of mine): from functools import partial def target(lock, iterable_item): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release() def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() l = multiprocessing.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() However when I run this code, I get the error: Runtime Error: Lock objects should only be shared between processes through inheritance. What am I missing here? How can I share the lock between my subprocesses?
Sorry, I should have caught this in my answer to your other question. You can't pass normal multiprocessing.Lock objects to Pool methods, because they can't be pickled. There are two ways to get around this. One is to create Manager() and pass a Manager.Lock(): def main(): iterable = [1, 2, 3, 4, 5] pool = multiprocessing.Pool() m = multiprocessing.Manager() l = m.Lock() func = partial(target, l) pool.map(func, iterable) pool.close() pool.join() This is a little bit heavyweight, though; using a Manager requires spawning another process to host the Manager server. And all calls to acquire/release the lock have to be sent to that server via IPC. The other option is to pass the regular multiprocessing.Lock() at Pool creation time, using the initializer kwarg. This will make your lock instance global in all the child workers: def target(iterable_item): for item in items: # Do cool stuff if (... some condition here ...): lock.acquire() # Write to stdout or logfile, etc. lock.release() def init(l): global lock lock = l def main(): iterable = [1, 2, 3, 4, 5] l = multiprocessing.Lock() pool = multiprocessing.Pool(initializer=init, initargs=(l,)) pool.map(target, iterable) pool.close() pool.join() The second solution has the side-effect of no longer requiring partial.
Scrapy Installation Fails with error 'cannot open include: 'openssl/aes.h '
I am trying to install Scrapy with easy_install -U Scrapy but it ends up in a strange error "Can not open include file " while trying to install it. Does any one know what is going on? Here is my complete traceback: C:\Users\Mubashar Kamran>easy_install -U Scrapy Searching for Scrapy Reading https://pypi.python.org/simple/Scrapy/ Best match: scrapy 0.24.4 Processing scrapy-0.24.4-py2.7.egg scrapy 0.24.4 is already the active version in easy-install.pth Installing scrapy-script.py script to C:\Python27\Scripts Installing scrapy.exe script to C:\Python27\Scripts Installing scrapy.exe.manifest script to C:\Python27\Scripts Using c:\python27\lib\site-packages\scrapy-0.24.4-py2.7.egg Processing dependencies for Scrapy Searching for cryptography>=0.2.1 Reading https://pypi.python.org/simple/cryptography/ Best match: cryptography 0.5.4 Downloading https://pypi.python.org/packages/source/c/cryptography/cryptography- 0.5.4.tar.gz#md5=4fd1f10e9f99009a44667fabe7980aec Processing cryptography-0.5.4.tar.gz Writing c:\users\mubash~1\appdata\local\temp\easy_install-jjms3i\cryptography-0. 5.4\setup.cfg Running cryptography-0.5.4\setup.py -q bdist_egg --dist-dir c:\users\mubash~1\ap pdata\local\temp\easy_install-jjms3i\cryptography-0.5.4\egg-dist-tmp-ry6bwd C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution option: 'setup_requires' warnings.warn(msg) _Cryptography_cffi_684bb40axf342507b.c Creating library c:\users\mubash~1\appdata\local\temp\easy_install-jjms3i\cry ptography-0.5.4\cryptography\hazmat\primitives\__pycache__\Release\cryptography\ hazmat\primitives\__pycache__\_Cryptography_cffi_684bb40axf342507b.lib and objec t c:\users\mubash~1\appdata\local\temp\easy_install-jjms3i\cryptography-0.5.4\cr yptography\hazmat\primitives\__pycache__\Release\cryptography\hazmat\primitives\ __pycache__\_Cryptography_cffi_684bb40axf342507b.exp _Cryptography_cffi_8f86901cxc1767c5a.c Creating library c:\users\mubash~1\appdata\local\temp\easy_install-jjms3i\cry ptography-0.5.4\cryptography\hazmat\primitives\__pycache__\Release\cryptography\ hazmat\primitives\__pycache__\_Cryptography_cffi_8f86901cxc1767c5a.lib and objec t c:\users\mubash~1\appdata\local\temp\easy_install-jjms3i\cryptography-0.5.4\cr yptography\hazmat\primitives\__pycache__\Release\cryptography\hazmat\primitives\ __pycache__\_Cryptography_cffi_8f86901cxc1767c5a.exp _Cryptography_cffi_4ed9e37dx4000d087.c cryptography\hazmat\bindings\__pycache__\_Cryptography_cffi_4ed9e37dx4000d087.c( 194) : fatal error C1083: Cannot open include file: 'openssl/aes.h': No such fil e or directory error: c:\users\mubash~1\appdata\local\temp\easy_install-jjms3i\cryptography-0.5 .4\cryptography\hazmat\primitives\__pycache__\_Cryptography_cffi_684bb40axf34250 7b.pyd: Access is denied
I got same error installing different python app. I was missing OpenSSL dev package, solved by: sudo apt-get install libssl-dev
AttributeError: 'module' object has no attribute 'tests'
I'm running this command: python manage.py test project.apps.app1.tests and it causes this error: AttributeError: 'module' object has no attribute 'tests' Below is my directory structure. I've also added app1 to my installed apps config. Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line utility.execute() File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 50, in run_from_argv super(Command, self).run_from_argv(argv) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/base.py", line 288, in run_from_argv self.execute(*args, **options.__dict__) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 71, in execute super(Command, self).execute(*args, **options) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/base.py", line 338, in execute output = self.handle(*args, **options) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 88, in handle failures = test_runner.run_tests(test_labels) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/test/runner.py", line 146, in run_tests suite = self.build_suite(test_labels, extra_tests) File "/home/username/local/dev/local/lib/python2.7/site-packages/django/test/runner.py", line 66, in build_suite tests = self.test_loader.loadTestsFromName(label) File "/usr/lib/python2.7/unittest/loader.py", line 100, in loadTestsFromName parent, obj = obj, getattr(obj, part) AttributeError: 'module' object has no attribute 'tests' Directory structure:
I finally figured it out while working on another problem. The problem was that my test couldn't find an import. It looks like you get the above error if your test fails to import. This makes sense because the test suite can't import a broken test. At least I think this is what is going on, because I fixed the import within my test file and sure enough it started working. To validate your test case, just try importing the test case file in a Python console. Example: from project.apps.app1.tests import *
Get value of an input box using Selenium (Python)
I am trying to extract the text in an input box, <input type="text" name="inputbox" value="name" class="box"> I started with input = driver.find_element_by_name("inputbox") I tried input.getText() but I got AttributeError: 'WebElement' object has no attribute 'getText'
Use this to get the value of the input element: input.get_attribute('value')
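A slightly fuller sketch, reusing the locator from the question (the URL is a placeholder):
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/form")            # hypothetical page containing the input
box = driver.find_element_by_name("inputbox")
print(box.get_attribute("value"))                # prints the current value, e.g. "name"
driver.quit()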
ipython: how to set terminal width
When I use ipython terminal and want to print a numpy.ndarray which has many columns, the lines are automatically broken somewhere around 80 characters (i.e. the width of the lines is cca 80 chars): z = zeros((2,20)) print z Presumably, ipython expects that my terminal has 80 columns. In fact however, my terminal has width of 176 characters and I would like to use the full width. I have tried changing the following parameter, but this has no effect: c.PlainTextFormatter.max_width = 160 How can I tell ipython to use full width of my terminal ? I am using ipython 1.2.1 on Debian Wheezy
After some digging through the code, it appears that the variable you're looking for is numpy.core.arrayprint._line_width, which is 75 by default. Setting it to 160 worked for me: >>> numpy.zeros((2, 20)) array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) The function used by default for array formatting is numpy.core.numeric.array_repr, although you can change this with numpy.core.numeric.set_string_function.
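An alternative that avoids poking the private module variable, assuming a reasonably recent NumPy, is the public set_printoptions call:
import numpy as np

np.set_printoptions(linewidth=160)   # let array reprs use the full terminal width
print(np.zeros((2, 20)))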
Running Python from Atom
In Sublime, we have an easy and convent way to run Python or almost any language for that matter using ⌘ + b (or ctrl + b) Where the code will run in a small window below the source code and can easily be closed with the escape key when no longer needed. Is there a way to replicate this functionally with Github's atom editor?
The script package does exactly what you're looking for: https://atom.io/packages/script The package's documentation also contains the key mappings, which you can easily customize.
How to migrate back from initial migration in Django 1.7?
I created a new app with some models and now I noticed that some of the models are poorly thought out. As I haven't committed the code the sensible thing would be to migrate the database to last good state and redo the migration with better models. In this case the last good state is database where the new app doesn't exist. How can I migrate back from initial migration in Django 1.7? In South one could do: python manage.py migrate <app> zero Which would clear <app> from migration history and drop all tables of <app>. How to do this with Django 1.7 migrations?
You can do the same with Django 1.7 also. python manage.py migrate <app> zero This clears <app> from migration history and drops all tables of <app> See django docs for more info.
Pip Install not installing into correct directory?
I can't seem to use sudo pip install correctly so that it installs into the following directory: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ so that I can then import the module using python I've run sudo pip install scikit-learn --upgrade Result Requirement already up-to-date: scikit-learn in /usr/local/lib/python2.7/site-packages Cleaning up... However, it's not in the correct directory How do I get sudo pip install to install into correct directory? In addition, I've tried sudo pip install Scrappy I get the following message new-host-2:site-packages Chris$ sudo pip install Scrapy Password: Requirement already satisfied (use --upgrade to upgrade): Scrapy in /usr/local/lib/python2.7/site-packages Requirement already satisfied (use --upgrade to upgrade): Twisted>=10.0.0 in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): w3lib>=1.8.0 in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): queuelib in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): lxml in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): six>=1.5.2 in /usr/local/lib/python2.7/site-packages (from Scrapy) Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /usr/local/lib/python2.7/site-packages (from Twisted>=10.0.0->Scrapy) Requirement already satisfied (use --upgrade to upgrade): cryptography>=0.2.1 in /usr/local/lib/python2.7/site-packages (from pyOpenSSL->Scrapy) Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/local/lib/python2.7/site-packages (from zope.interface>=3.6.0->Twisted>=10.0.0->Scrapy) Requirement already satisfied (use --upgrade to upgrade): cffi>=0.8 in /usr/local/lib/python2.7/site-packages (from cryptography>=0.2.1->pyOpenSSL->Scrapy) Requirement already satisfied (use --upgrade to upgrade): pycparser in /usr/local/lib/python2.7/site-packages (from cffi>=0.8->cryptography>=0.2.1->pyOpenSSL->Scrapy) Both these instances demonstrate that it's been installed but not correctly. For example, when I run the following import in python: import scrapy --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-6-51c73a18167b> in <module>() ----> 1 import scrapy ImportError: No module named scrapy I've tried the following: sudo ln -s /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
From the comments to the original question, it seems that you have multiple versions of Python installed, and that pip just goes to the wrong version. First, to know which version of python you're using, just type which python. You should either see: which python /Library/Frameworks/Python.framework/Versions/2.7/bin/python if you're going to the right version of python, or: which python /usr/bin/python If you're going to the 'wrong' version. To make pip go to the right version, you first have to change the path: export PATH=/Library/Frameworks/Python.framework/Versions/2.7/bin/python:${PATH} typing 'which python' would now get you to the right result. Next, install pip (if it's not already installed for this installation of python). Finally, use it. you should be fine now.
Paramiko : Error reading SSH protocol banner
Recently, I made a code that connect to work station with different usernames (thanks to a private key) based on paramiko. I never had any issues with it, but today, I have that : SSHException: Error reading SSH protocol banner This is strange because it happens randomly on any connections. Is there any way to fix it ?
It depends on what you mean by "fix". The underlying cause, as pointed out in the comments, is congestion/lack of resources. In that way, it's similar to some HTTP codes. That's the normal cause, although it could also be that the ssh server is returning the wrong header data. 429 Too Many Requests tells the client to use rate limiting, and sometimes APIs will return 503 in a similar way if you exceed your quota; the idea being to try again later, with a delay. You can attempt to handle this exception in your code, wait a little while, and try again. You can also edit your transport.py file to set the banner timeout to something higher. If you have an application where it doesn't matter how quickly the server responds, you could set this to 60 seconds.
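A rough retry sketch around the connection attempt (hostname, user and key path are placeholders):
import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

for attempt in range(3):                     # a few attempts with a delay between them
    try:
        client.connect('hostname', username='user', key_filename='/path/to/key')
        break
    except paramiko.SSHException as exc:     # covers the banner error
        print('attempt %d failed: %s' % (attempt + 1, exc))
        time.sleep(5)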
Syntax error installing gunicorn
I am following this Heroku tutorial: https://devcenter.heroku.com/articles/getting-started-with-python-o and when I am trying to install gunicorn in a virtualenv I am getting this error: (venv)jabuntu14@ubuntu:~/Desktop/helloflask$ pip install gunicorn Downloading/unpacking gunicorn Downloading gunicorn-19.1.1-py2.py3-none-any.whl (104kB): 104kB downloaded Installing collected packages: gunicorn Compiling /home/jabuntu14/Desktop/helloflask/venv/build/gunicorn/gunicorn/workers /_gaiohttp.py ... File "/home/jabuntu14/Desktop/helloflask/venv/build/gunicorn/gunicorn/workers /_gaiohttp.py", line 64 yield from self.wsgi.close() ^ SyntaxError: invalid syntax Successfully installed gunicorn Cleaning up... However, when I run $foreman start it appears to work properly. How important is this error? Any idea how to solve it?
The error can be ignored, your gunicorn package installed successfully. The error is thrown by a bit of code that'd only work on Python 3.3 or newer, but isn't used by older Python versions that Gunicorn supports. See https://github.com/benoitc/gunicorn/issues/788: The error is a syntax error happening during install. It is harmless. During installation the setup.py script tries to collect all files to be installed, and compiles them to .pyc bytecache files. One file that is used only on Python 3.3 or up is included in this and the compilation for that one file fails. The file in question adds support for the aiohttp http client/server package, which only works on Python 3.3 and up anyway. As such you can ignore this error entirely.
Python Argparse conditionally required arguments
I have done as much research as possible but I haven't found the best way to make certain cmdline arguments necessary only under certain conditions, in this case only if other arguments have been given. Here's what I want to do at a very basic level: p = argparse.ArgumentParser(description='...') p.add_argument('--argument', required=False) p.add_argument('-a', required=False) # only required if --argument is given p.add_argument('-b', required=False) # only required if --argument is given From what I have seen, other people seem to just add their own check at the end: if args.argument and (args.a is None or args.b is None): # raise argparse error here Is there a way to do this natively within the argparse package?
You can implement a check by providing a custom action for --argument, which will take an additional keyword argument to specify which other action(s) should become required if --argument is used. import argparse class CondAction(argparse.Action): def __init__(self, option_strings, dest, nargs=None, **kwargs): x = kwargs.pop('to_be_required', []) super(CondAction, self).__init__(option_strings, dest, **kwargs) self.make_required = x def __call__(self, parser, namespace, values, option_string=None): for x in self.make_required: x.required = True try: return super(CondAction, self).__call__(parser, namespace, values, option_string) except NotImplementedError: pass p = argparse.ArgumentParser() x = p.add_argument("--a") p.add_argument("--argument", action=CondAction, to_be_required=[x]) The exact definition of CondAction will depend on what, exactly, --argument should do. But, for example, if --argument is a regular, take-one-argument-and-save-it type of action, then just inheriting from argparse._StoreAction should be sufficient. In the example parser, we save a reference to the --a option inside the --argument option, and when --argument is seen on the command line, it sets the required flag on --a to True. Once all the options are processed, argparse verifies that any option marked as required has been set.
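A quick note on how the sketch behaves when you run it: with --argument present, --a becomes mandatory; without it, --a stays optional. Keep in mind that, as written, the base argparse.Action raises NotImplementedError, so --argument's own value isn't stored; inherit from argparse._StoreAction (as mentioned above) if you want store behaviour.
p.parse_args(['--argument', 'foo', '--a', 'bar'])   # parses fine
p.parse_args(['--a', 'bar'])                        # also fine, --argument not used
p.parse_args(['--argument', 'foo'])                 # exits with an error: --a is required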
How to run recurring task in the Python Flask framework?
I'm building a website which provides some information to the visitors. This information is aggregated in the background by polling a couple external APIs every 5 seconds. The way I have it working now is that I use APScheduler jobs. I initially preferred APScheduler because it makes the whole system more easy to port (since I don't need to set cron jobs on the new machine). I start the polling functions as follows: from apscheduler.scheduler import Scheduler @app.before_first_request def initialize(): apsched = Scheduler() apsched.start() apsched.add_interval_job(checkFirstAPI, seconds=5) apsched.add_interval_job(checkSecondAPI, seconds=5) apsched.add_interval_job(checkThirdAPI, seconds=5) This kinda works, but there's some trouble with it: For starters, this means that the interval-jobs are running outside of the Flask context. So far this hasn't been much of a problem, but when calling an endpoint fails I want the system to send me an email (saying "hey calling API X failed"). Because it doesn't run within the Flask context however, it complaints that flask-mail cannot be executed (RuntimeError('working outside of application context')). Secondly, I wonder how this is going to behave when I don't use the Flask built-in debug server anymore, but a production server with lets say 4 workers. Will it start every job four times then? All in all I feel that there should be a better way of running these recurring tasks, but I'm unsure how. Does anybody out there have an interesting solution to this problem? All tips are welcome! [EDIT] I've just been reading about Celery with its schedules. Although I don't really see how Celery is different from APScheduler and whether it could thus solve my two points, I wonder if anyone reading this thinks that I should investigate more in Celery? [CONCLUSION] About two years later I'm reading this, and I thought I could let you guys know what I ended up with. I figured that @BluePeppers was right in saying that I shouldn't be tied so closely to the Flask ecosystem. So I opted for regular cron-jobs running every minute which are set using Ansible. Although this makes it a bit more complex (I needed to learn Ansible and convert some code so that running it every minute would be enough) I think this is more robust. I'm currently using the awesome pythonr-rq for queueing a-sync jobs (checking APIs and sending emails). I just found out about rq-scheduler. I haven't tested it yet, but it seems to do precisely what I needed in the first place. So maybe this is a tip for future readers of this question. For the rest, I just wish all of you a beautiful day!
(1) You can use the app.app_context() context manager to set the application context. I imagine usage would go something like this: from apscheduler.scheduler import Scheduler def checkSecondApi(): with app.app_context(): # Do whatever you were doing to check the second API @app.before_first_request def initialize(): apsched = Scheduler() apsched.start() apsched.add_interval_job(checkFirstAPI, seconds=5) apsched.add_interval_job(checkSecondAPI, seconds=5) apsched.add_interval_job(checkThirdAPI, seconds=5) Alternatively, you could use a decorator def with_application_context(app): def inner(func): @functools.wraps(func) def wrapper(*args, **kwargs): with app.app_context(): return func(*args, **kwargs) return wrapper return inner @with_application_context(app) def checkFirstAPI(): # Check the first API as before (2) Yes it will still work. The sole (significant) difference is that your application will not be communicating directly with the world; it will be going through a reverse proxy or something via fastcgi/uwsgi/whatever. The only concern is that if you have multiple instances of the app starting, then multiple schedulers will be created. To manage this, I would suggest you move your backend tasks out of the Flask application, and use a tool designed for running tasks regularly (i.e. Celery). The downside to this is that you won't be able to use things like Flask-Mail, but imo, it's not too good to be so closely tied to the Flask ecosystem; what are you gaining by using Flask-Mail over a standard, non Flask, mail library? Also, breaking up your application makes it much easier to scale up individual components as the capacity is required, compared to having one monolithic web application.
Python: Convert timedelta to int in a dataframe
I would like to create a column in a pandas data frame that is an integer representation of the number of days in a timedelta column. Is it possible to use 'datetime.days' or do I need to do something more manual? timedelta column 7 days, 23:29:00 day integer column 7
You could do this, where td is your series of timedeltas. The division converts the nanosecond deltas into day deltas, and the conversion to int drops to whole days. import numpy as np (td / np.timedelta64(1, 'D')).astype(int)
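As a small worked example (made-up data, just to show the shapes involved):

import numpy as np
import pandas as pd

df = pd.DataFrame({'delta': pd.to_timedelta(['7 days 23:29:00', '1 days 02:00:00'])})
# dividing by a one-day timedelta gives fractional days; astype(int) keeps whole days
df['day_int'] = (df['delta'] / np.timedelta64(1, 'D')).astype(int)
print(df)
#              delta  day_int
# 0  7 days 23:29:00        7
# 1  1 days 02:00:00        1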
Upgrading to Django 1.7. Getting error: Cannot serialize: <storages.backends.s3boto.S3BotoStorage object
I am trying to upgrade a django app from django 1.6.6 to 1.7 and am using python 2.7.8. When I run python manage.py makemigrations, I get the following error: ValueError: Cannot serialize: <storages.backends.s3boto.S3BotoStorage object at 0x11116eed0> There are some values Django cannot serialize into migration files. And here is the relevant code: protected_storage = storages.backends.s3boto.S3BotoStorage( acl='private', querystring_auth=True, querystring_expire=3600, ) class Document(models.Model): ... file = models.FileField(upload_to='media/docs/', max_length=10000, storage=protected_storage) def __unicode__(self): return "%s" % self.candidate def get_absolute_url(self): return reverse('documents', args=[str(self.pk)]) I've read the migration docs and read about a similar issue here, but I've been unable to resolve this. My app uses django-storages and boto to save files onto Amazon S3. Any help is appreciated.
Just make a deconstructible subclass and use it instead. from django.utils.deconstruct import deconstructible from storages.backends.s3boto import S3BotoStorage @deconstructible class MyS3BotoStorage(S3BotoStorage): pass
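Wired back into the code from the question, that could look like the following sketch (only the storage class changes; the field arguments stay as they were):

from django.db import models

protected_storage = MyS3BotoStorage(
    acl='private',
    querystring_auth=True,
    querystring_expire=3600,
)

class Document(models.Model):
    file = models.FileField(upload_to='media/docs/', max_length=10000,
                            storage=protected_storage)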
How to move a model between two Django apps (Django 1.7)
So about a year ago I started a project and, like all new developers, I didn't really focus too much on the structure. However, now that I am further along with Django, it has started to become apparent that my project layout, mainly my models, is horrible in structure. I have models mainly held in a single app, and really most of these models should be in their own individual apps. I did try to resolve this and move them with South, but I found it tricky and really difficult due to foreign keys etc. However, with Django 1.7 and built-in support for migrations, is there a better way to do this now? Thank you in advance for any help.
This can be done fairly easily using migrations.SeparateDatabaseAndState. Basically, we use a database operation to rename the table concurrently with two state operations to remove the model from one app's history and create it in another's. Remove from old app python manage.py makemigrations old_app --empty In the migration: class Migration(migrations.Migration): dependencies = [] database_operations = [ migrations.AlterModelTable('TheModel', 'newapp_themodel') ] state_operations = [ migrations.DeleteModel('TheModel') ] operations = [ migrations.SeparateDatabaseAndState( database_operations=database_operations, state_operations=state_operations) ] Add to new app First, copy the model to the new app's models.py, then: python manage.py makemigrations new_app This will generate a migration with a naive CreateModel operation as the sole operation. Wrap that in a SeparateDatabaseAndState operation such that we don't try to recreate the table. Also include the prior migration as a dependency: class Migration(migrations.Migration): dependencies = [ ('old_app', 'above_migration') ] state_operations = [ migrations.CreateModel( name='TheModel', fields=[ ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), ], options={ 'db_table': 'newapp_themodel', }, bases=(models.Model,), ) ] operations = [ migrations.SeparateDatabaseAndState(state_operations=state_operations) ]
Where is Python's shutdown procedure documented?
CPython has a strange behaviour where it sets modules to None during shutdown. This screws up error logging during shutdown of some multithreading code I've written. I can't find any documentation of this behaviour. It's mentioned in passing in PEP 432: [...] significantly reducing the number of modules that will experience the "module globals set to None" behaviour that is used to deliberate break cycles and attempt to releases more external resources cleanly. There are SO questions about this behaviour and the C API documentation mentions shutdown behaviour for embedded interpreters. I've also found a related thread on python-dev and a related CPython bug: This patch does not change the behavior of module objects clearing their globals dictionary as soon as they are deallocated. Where is this behaviour documented? Is it Python 2 specific?
The behaviour is not well documented, and is present in all versions of Python from about 1.5-ish until Python 3.4: As part of this change, module globals are no longer forcibly set to None during interpreter shutdown in most cases, instead relying on the normal operation of the cyclic garbage collector. The only documentation for the behaviour is the moduleobject.c source code: /* To make the execution order of destructors for global objects a bit more predictable, we first zap all objects whose name starts with a single underscore, before we clear the entire dictionary. We zap them by replacing them with None, rather than deleting them from the dictionary, to avoid rehashing the dictionary (to some extent). */ Note that setting the values to None is an optimisation; the alternative would be to delete names from the mapping, which would lead to different errors (NameError exceptions rather than AttributeErrors when trying to use globals from a __del__ handler). As you found out on the mailinglist, the behaviour predates the cyclic garbage collector; it was added in 1998, while the cyclic garbage collector was added in 2000. Since function objects always reference the module __dict__ all function objects in a module involve circular references, which is why the __dict__ needed clearing before GC came into play. It was kept in place even when cyclic GC was added, because there might be objects with __del__ methods involved in cycles. These aren't otherwise garbage-collectable, and cleaning out the module dictionary would at least remove the module __dict__ from such cycles. Not doing that would keep all referenced globals of that module alive. The changes made for PEP 442 now make it possible for the garbage collector to clear cyclic references with objects that provide a __del__ finalizer, removing the need to clear the module __dict__ for most cases. The code is still there but this is only triggered if the __dict__ attribute is still alive even after moving the contents of sys.modules to weak references and starting a GC collection run when the interpreter is shutting down; the module finalizer simply decrements their reference count.
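A small way to observe the behaviour on an affected interpreter (Python 2.x, or Python 3 before 3.4; this is a sketch and the exact output varies by version) is a module-level object whose __del__ touches a module global:

# save as shutdown_demo.py and run it directly
import sys

class Noisy(object):
    def __del__(self):
        # Called during interpreter shutdown; by then the module global
        # "sys" may already have been set to None, so this typically
        # prints "sys is None" (or the exception is swallowed/ignored).
        print("sys is %r" % (sys,))

keep_alive = Noisy()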
from django.db import models, migrations ImportError: cannot import name migrations
So I've started to experience some issues with South on my Django web server. The migrate command is failing with this output every time: from django.db import models, migrations ImportError: cannot import name migrations (Above this, the error displays the path to the file that failed to be migrated.) My Django version is 1.5.1, while my South version is 0.8.4. The thing that troubles me the most is that the module django.db.migrations is nowhere to be found. Any ideas?
Migrations were introduced in Django 1.7; you are using 1.5. Here is a link to the docs explaining this. If you're using an older version of Django, South is the most popular option for data migrations. EDIT So the Django Rest Framework is causing the error. From their documentation: The rest_framework.authtoken app includes both Django native migrations (for Django versions >1.7) and South migrations (for Django versions <1.7) that will create the authtoken table. Note: From REST Framework v2.4.0 using South with Django <1.7 requires upgrading South v1.0+ You must upgrade South beyond your version of 0.8.4 to 1.0+.
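Concretely, something along the lines of pip install --upgrade "South>=1.0" (adjust to however you manage dependencies) should get you past the ImportError, since South 1.0+ knows to pick up the separate south_migrations packages that apps such as Django REST Framework ship for pre-1.7 projects.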
PySide / Qt Import Error
I'm trying to import PySide / Qt into Python like so and get the follow error: from PySide import QtCore ImportError: dlopen(/usr/local/lib/python2.7/site-packages/PySide/QtCore.so, 2): Library not loaded: libpyside-python2.7.1.2.dylib Referenced from: /usr/local/lib/python2.7/site-packages/PySide/QtCore.so Reason: image not found I'm running/installed via: Mac OSX 10.9.4 Mavericks Homebrew Python 2.7 Homebrew installed Qt Pip installed PySide The file libpyside-python2.7.1.2.dylib is located in the same path as the QtCore.so file listed in the error message. All my searches for this particular problem have yielded people trying to package these libraries as part of an app, which I am not doing. I am just trying to run it on my system and yet have this problem. For troubleshooting an app, people suggested oTool; not sure if it is helpful here, but this is the output when I run oTool: otool -L QtCore.so QtCore.so: libpyside-python2.7.1.2.dylib (compatibility version 1.2.0, current version 1.2.2) libshiboken-python2.7.1.2.dylib (compatibility version 1.2.0, current version 1.2.2) /usr/local/lib/QtCore.framework/Versions/4/QtCore (compatibility version 4.8.0, current version 4.8.6) /usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1) Any ideas? Thanks in advance :)
If you look at this script, your problem should be fixed: https://github.com/PySide/pyside-setup/blob/master/pyside_postinstall.py Run the post-install step it provides: pyside_postinstall.py -install
Flask sqlalchemy many-to-many insert data
Greetings, I am trying to make a many to many relation here in Flask-SQLAlchemy, but it seems that I don't know how to fill the "many to many identifier database". Could you please help me understand what I am doing wrong and how it is supposed to look? class User(db.Model): __tablename__ = 'users' user_id = db.Column(db.Integer, primary_key=True) user_fistName = db.Column(db.String(64)) user_lastName = db.Column(db.String(64)) user_email = db.Column(db.String(128), unique=True) class Class(db.Model): __tablename__ = 'classes' class_id = db.Column(db.Integer, primary_key=True) class_name = db.Column(db.String(128), unique=True) and then my identifier database: student_identifier = db.Table('student_identifier', db.Column('class_id', db.Integer, db.ForeignKey('classes.class_id')), db.Column('user_id', db.Integer, db.ForeignKey('users.user_id')) ) so far it looks like this when I try to insert the data into the database. # User user1 = User( user_fistName='John', user_lastName='Doe', user_email='john@doe.es') user2 = User( user_fistName='Jack', user_lastName='Doe', user_email='jack@doe.es') user3 = User( user_fistName='Jane', user_lastName='Doe', user_email='jane@doe.es') db.session.add_all([user1, user2, user3]) db.session.commit() # Class cl1 = Class(class_name='0A') cl2 = Class(class_name='0B') cl3 = Class(class_name='0C') cl4 = Class(class_name='Math') cl5 = Class(class_name='Spanish') db.session.add_all([cl1, cl2, cl3, cl4, cl5]) db.session.commit() Now my problem is, how do I add to the many to many database, since I really can't create a 'student_identifier' object? If I could it could perhaps have looked like this: # Student Identifier sti1 = StiClass(class_id=cl1.class_id, class_name=user1.user_id) sti2 = StiClass(class_id=cl3.class_id, class_name=user1.user_id) sti3 = StiClass(class_id=cl4.class_id, class_name=user1.user_id) sti4 = StiClass(class_id=cl2.class_id, class_name=user2.user_id) db.session.add_all([sti1, sti2, sti3, sti4]) db.session.commit() How I am supposed to insert into a many to many table with ORM?
You don't need to add anything directly to your association table; SQLAlchemy will do that. This is more or less from the SQLAlchemy documentation: association_table = db.Table('association', db.Model.metadata, db.Column('left_id', db.Integer, db.ForeignKey('left.id')), db.Column('right_id', db.Integer, db.ForeignKey('right.id')) ) class Parent(db.Model): __tablename__ = 'left' id = db.Column(db.Integer, primary_key=True) children = db.relationship("Child", secondary=association_table) class Child(db.Model): __tablename__ = 'right' id = db.Column(db.Integer, primary_key=True) p = Parent() c = Child() p.children.append(c) db.session.add(p) db.session.commit() Therefore your sample would be like this: student_identifier = db.Table('student_identifier', db.Column('class_id', db.Integer, db.ForeignKey('classes.class_id')), db.Column('user_id', db.Integer, db.ForeignKey('students.user_id')) ) class Student(db.Model): __tablename__ = 'students' user_id = db.Column(db.Integer, primary_key=True) user_fistName = db.Column(db.String(64)) user_lastName = db.Column(db.String(64)) user_email = db.Column(db.String(128), unique=True) class Class(db.Model): __tablename__ = 'classes' class_id = db.Column(db.Integer, primary_key=True) class_name = db.Column(db.String(128), unique=True) children = db.relationship("Student", secondary=student_identifier) s = Student() c = Class() c.children.append(s) db.session.add(c) db.session.commit()
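Continuing with the names from the question, once the relationship is declared you can attach several students to a class and read the association back without ever touching student_identifier yourself (SQLAlchemy maintains the rows in that table for you):

s1 = Student(user_fistName='John', user_lastName='Doe', user_email='john@doe.es')
s2 = Student(user_fistName='Jane', user_lastName='Doe', user_email='jane@doe.es')
math = Class(class_name='Math')

# appending to the relationship is what populates student_identifier
math.children.append(s1)
math.children.append(s2)
db.session.add(math)
db.session.commit()

print([s.user_email for s in math.children])  # students enrolled in 'Math'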
Models inside tests - Django 1.7 issue
I'm trying to port my project to use Django 1.7. Everything is fine except 1 thing. Models inside tests folders. Django 1.7 new migrations run migrate command internally. Before syncdb was ran. That means if a model is not included in migrations - it won't be populated to DB (and also to test DB). That's exactly what I'm experiencing right now. What I do is: In my /app/tests/models.py I have dummy model: class TestBaseImage(BaseImage): pass All it does is to inherit from an abstract BaseImage model. Then in tests I create instances of that dummy model to test it. The problem is that it doesn't work any more. It's not included in migrations (that's obvious as I don't want to keep my test models in a production DB). Running my tests causes DB error saying that table does not exist. That makes sense as it's not included in migrations. Is there any way to make it work with new migrations system? I can't find a way to "fix" that. Code I use: app/tests/models.py from ..models import BaseImage class TestBaseImage(BaseImage): """Dummy model just to test BaseImage abstract class""" pass app/models.py class BaseImage(models.Model): # ... fields ... class Meta: abstract = True factories: class BaseImageFactory(factory.django.DjangoModelFactory): """Factory class for Vessel model""" FACTORY_FOR = BaseImage ABSTRACT_FACTORY = True class PortImageFactory(BaseImageFactory): FACTORY_FOR = PortImage example test: def get_model_field(model, field_name): """Returns field instance""" return model._meta.get_field_by_name(field_name)[0] def test_owner_field(self): """Tests owner field""" field = get_model_field(BaseImage, "owner") self.assertIsInstance(field, models.ForeignKey) self.assertEqual(field.rel.to, get_user_model())
There is a ticket requesting a way to do test-only models here As a workaround, you can decouple your tests.py and make it an app. tests |--migrations |--__init__.py |--models.py |--tests.py You will end up with something like this: myapp |-migrations |-tests |--migrations |--__init__.py |--models.py |--tests.py |-__init__.py |-models.py |-views.py Then you should add it to your INSTALLED_APPS INSTALLED_APPS = ( # ... 'myapp', 'myapp.tests', ) You probably don't want to install myapp.tests in production, so you can keep separate settings files. Something like this: INSTALLED_APPS = ( # ... 'myapp', ) try: from local_settings import * except ImportError: pass Or better yet, create a test runner and install your tests there. Last but not least, remember to run python manage.py makemigrations
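If you take the separate-settings route, a minimal test settings module (the file name and project name below are assumptions, not from the original answer) can simply extend the normal settings:

# myproject/settings_test.py
# run with: python manage.py test --settings=myproject.settings_test
from myproject.settings import *  # noqa

INSTALLED_APPS = INSTALLED_APPS + ('myapp.tests',)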
ubuntu 14.04, pip cannot upgrade matplotlib
When I try to upgrade my matplotlib using pip, it outputs: Downloading/unpacking matplotlib from https://pypi.python.org/packages/source/m/matplotlib/matplotlib-1.4.0.tar.gz#md5=1daf7f2123d94745feac1a30b210940c Downloading matplotlib-1.4.0.tar.gz (51.2MB): 51.2MB downloaded Running setup.py (path:/tmp/pip_build_root/matplotlib/setup.py) egg_info for package matplotlib ============================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [1.4.0] python: yes [2.7.6 (default, Mar 22 2014, 22:59:38) [GCC 4.8.2]] platform: yes [linux2] REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.8.2] six: yes [using six version 1.7.3] dateutil: yes [using dateutil version 2.2] tornado: yes [using tornado version 4.0.1] pyparsing: yes [using pyparsing version 2.0.2] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] Traceback (most recent call last): File "<string>", line 17, in <module> File "/tmp/pip_build_root/matplotlib/setup.py", line 154, in <module> result = package.check() File "setupext.py", line 940, in check if 'No such file or directory\ngrep:' in version: TypeError: argument of type 'NoneType' is not iterable Complete output from command python setup.py egg_info: ============================================================================ Edit setup.cfg to change the build options BUILDING MATPLOTLIB matplotlib: yes [1.4.0] python: yes [2.7.6 (default, Mar 22 2014, 22:59:38) [GCC 4.8.2]] platform: yes [linux2] REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [version 1.8.2] six: yes [using six version 1.7.3] dateutil: yes [using dateutil version 2.2] tornado: yes [using tornado version 4.0.1] pyparsing: yes [using pyparsing version 2.0.2] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] Traceback (most recent call last): File "<string>", line 17, in <module> File "/tmp/pip_build_root/matplotlib/setup.py", line 154, in <module> result = package.check() File "setupext.py", line 940, in check if 'No such file or directory\ngrep:' in version: TypeError: argument of type 'NoneType' is not iterable ---------------------------------------- Cleaning up... Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/matplotlib Storing debug log for failure in /home/username/.pip/pip.log In the tail of the log it says: Exception information: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/pip-1.5.6-py2.7.egg/pip/basecommand.py", line 122, in main status = self.run(options, args) File "/usr/local/lib/python2.7/dist-packages/pip-1.5.6-py2.7.egg/pip/commands/install.py", line 278, in run requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle) File "/usr/local/lib/python2.7/dist-packages/pip-1.5.6-py2.7.egg/pip/req.py", line 1229, in prepare_files req_to_install.run_egg_info() File "/usr/local/lib/python2.7/dist-packages/pip-1.5.6-py2.7.egg/pip/req.py", line 325, in run_egg_info command_desc='python setup.py egg_info') File "/usr/local/lib/python2.7/dist-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 697, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/matplotlib Why did it fail? Many thanks!
On Ubuntu 14 server, you also need to install libxft-dev sudo apt-get install libfreetype6-dev libxft-dev
How can I concatenate str and int objects?
If I try to do the following: things = 5 print("You have " + things + " things.") I get the following error in Python 3.x: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Can't convert 'int' object to str implicitly ... and a similar error in Python 2.x: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: cannot concatenate 'str' and 'int' objects How can I get around this problem?
The problem here is that the + operator has (at least) two different meanings in Python: for numeric types, it means "add the numbers together": >>> 1 + 2 3 >>> 3.4 + 5.6 9.0 ... and for sequence types, it means "concatenate the sequences": >>> [1, 2, 3] + [4, 5, 6] [1, 2, 3, 4, 5, 6] >>> 'abc' + 'def' 'abcdef' As a rule, Python doesn't implicitly convert objects from one type to another1 in order to make operations "make sense", because that would be confusing: for instance, you might think that '3' + 5 should mean '35', but someone else might think it should mean 8 or even '8'. Similarly, Python won't let you concatenate two different types of sequence: >>> [7, 8, 9] + 'ghi' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: can only concatenate list (not "str") to list Because of this, you need to do the conversion explicitly, whether what you want is concatenation or addition: >>> 'Total: ' + str(123) 'Total: 123' >>> int('456') + 789 1245 However, there is a better way. Python comes with two different kinds of string formatting, which not only allow you to avoid multiple + operations: >>> things = 5 >>> 'You have %d things.' % things # % interpolation 'You have 5 things.' >>> 'You have {} things.'.format(things) # str.format() 'You have 5 things.' ... but also allow you to control how values are displayed: >>> value = 5 >>> sq_root = value ** 0.5 >>> sq_root 2.23606797749979 >>> 'The square root of %d is %.2f (roughly).' % (value, sq_root) 'The square root of 5 is 2.24 (roughly).' >>> 'The square root of {v} is {sr:.2f} (roughly).'.format(v=value, sr=sq_root) 'The square root of 5 is 2.24 (roughly).' Whether you use % interpolation or str.format() is up to you: % interpolation can be quicker in simple cases, str.format() is often more powerful, and neither are going away in the foreseeable future. A third alternative is to use the fact that if you give print multiple positional arguments, it will join their string representations together using the sep keyword argument (which defaults to ' '): >>> things = 5 >>> print('you have', things, 'things.') you have 5 things. >>> print('you have', things, 'things.', sep=' ... ') you have ... 5 ... things. ... but that's usually not as flexible as using Python's built-in string formatting abilities. 1 Although it makes an exception for numeric types, where most people would agree on the 'right' thing to do: >>> 1 + 2.3 3.3 >>> 4.5 + (5.6+7j) (10.1+7j)
How to embed HTML into iPython output?
Is it possible to embed rendered HTML output into iPython output? One way is to use from IPython.core.display import HTML HTML('<a href="http://example.com">link</a>') or (the IPython multiline cell alias) %%html <a href="http://example.com">link</a> These return a formatted link, but: This link doesn't open a browser with the webpage itself from the console. IPython notebooks support proper rendering, though. I'm not aware of how to render an HTML() object within, say, a list or a printed pandas table. You can do df.to_html(), but without making links inside cells. This output isn't interactive in the PyCharm Python console (because it's not Qt). How can I overcome these shortcomings and make iPython output a bit more interactive?
This seems to work for me: from IPython.core.display import display, HTML display(HTML('<h1>Hello, world!</h1>')) The trick is to wrap it in "display" as well. Source: http://python.6.x6.nabble.com/Printing-HTML-within-IPython-Notebook-IPython-specific-prettyprint-tp5016624p5016631.html
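For the pandas part of the question, one sketch (assuming a DataFrame whose cells already contain anchor tags) is to render the frame to HTML with escaping turned off so the links survive, and pass that through the same display(HTML(...)) call:

import pandas as pd
from IPython.core.display import display, HTML

df = pd.DataFrame({
    'name': ['example', 'python'],
    'link': ['<a href="http://example.com">example.com</a>',
             '<a href="https://www.python.org">python.org</a>'],
})

# escape=False keeps the <a> tags intact instead of HTML-escaping them
display(HTML(df.to_html(escape=False)))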
Replace all occurrences of a string in a pandas dataframe (Python)
I have a pandas dataframe with about 20 columns. It is possible to replace all occurrences of a string (here a newline) by manually writing all column names: df['columnname1'] = df['columnname1'].str.replace("\n","<br>") df['columnname2'] = df['columnname2'].str.replace("\n","<br>") df['columnname3'] = df['columnname3'].str.replace("\n","<br>") ... df['columnname20'] = df['columnname20'].str.replace("\n","<br>") This unfortunately does not work: df = df.replace("\n","<br>") Is there any other, more elegant solution?
You can use replace and pass the strings to find/replace as dictionary keys/items: df.replace({'\n': '<br>'}, regex=True) For example: >>> df = pd.DataFrame({'a': ['1\n', '2\n', '3'], 'b': ['4\n', '5', '6\n']}) >>> df a b 0 1\n 4\n 1 2\n 5 2 3 6\n >>> df.replace({'\n': '<br>'}, regex=True) a b 0 1<br> 4<br> 1 2<br> 5 2 3 6<br>
JSON ValueError: Expecting property name: line 1 column 2 (char 1)
I am having trouble using json.loads to convert to a dict object and I can't figure out what I'm doing wrong. The exact error I get running this is ValueError: Expecting property name: line 1 column 2 (char 1) Here is my code: __author__ = 'xxdpavelxx' from kafka.client import KafkaClient from kafka.consumer import SimpleConsumer from kafka.producer import SimpleProducer, KeyedProducer import pymongo from pymongo import MongoClient import json c = MongoClient("54.210.157.57") db = c.test_database3 collection = db.tweet_col kafka = KafkaClient("54.210.157.57:9092") consumer = SimpleConsumer(kafka,"myconsumer","test") for tweet in consumer: print tweet.message.value jsonTweet=json.loads({u'favorited': False, u'contributors': None}) collection.insert(jsonTweet) I'm pretty sure that the error is occurring at the second-to-last line jsonTweet=json.loads({u'favorited': False, u'contributors': None}) but I do not know what to do to fix it. Any advice would be appreciated.
json.loads will load a JSON string into a Python dict, json.dumps will dump a Python dict to a JSON string, for example: >>> import json >>> json_string = '{"favorited": false, "contributors": null}' >>> value = json.loads(json_string) >>> value {u'favorited': False, u'contributors': None} >>> json_dump = json.dumps(value) >>> json_dump '{"favorited": false, "contributors": null}' So that line is incorrect since you are trying to load a Python dict, while json.loads is expecting a valid JSON document as a string (<type 'str'>). So if you are trying to load the JSON, you should change what you are loading to look like the json_string above, or you should be dumping it. This is just my best guess from the given information. What is it that you are trying to accomplish? Also you don't need to specify the u before your strings, as @Cld mentioned in the comments.
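Applied to the code in the question: the value is already a Python dict, so the json.loads call can simply be dropped (the collection object is as defined there):

# the literal is already a dict; nothing to parse
jsonTweet = {u'favorited': False, u'contributors': None}
collection.insert(jsonTweet)

# json.loads only comes into play when you start from a JSON string,
# e.g. the raw Kafka message value:
# jsonTweet = json.loads(tweet.message.value)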
How to avoid "RuntimeWarning: invalid value encountered in divide" in NumPy?
I am trying to avoid the warning RuntimeWarning: invalid value encountered in divide in NumPy. I thought I could do: import numpy as np A=np.array([0.0]) print A.dtype with np.errstate(divide='ignore'): B=A/A print B but this gives: float64 ./t.py:9: RuntimeWarning: invalid value encountered in divide B=A/A [ nan] If I replace B=A/A with np.float64(1.0) / 0.0 it gives no warning.
You need to set invalid rather than divide: with np.errstate(invalid='ignore'): ^^^^^^^
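Put together with the original snippet, silencing both the 0/0 ('invalid') and the x/0 ('divide') cases:

import numpy as np

A = np.array([0.0])
with np.errstate(divide='ignore', invalid='ignore'):
    B = A / A  # 0.0/0.0 -> nan, no RuntimeWarning
print(B)  # [ nan]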
Django 1.7 - updating base_site.html not working
I'm following the tutorial for django 1.7 (again). I cannot get the admin site to update. I've followed this: Django: Overrideing base_site.html this: Custom base_site.html not working in Django and a couple of offsite things links. My settings file looks like this: """ Django settings for website project. For more information on this file, see https://docs.djangoproject.com/en/1.7/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.7/ref/settings/ """ # Build paths inside the project like this: os.path.join(BASE_DIR, ...) import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True TEMPLATE_DEBUG = True TEMPLATE_DIRS = [os.path.join(BASE_DIR, 'templates')] ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'blog', ) MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) ROOT_URLCONF = 'website.urls' WSGI_APPLICATION = 'website.wsgi.application' # Database # https://docs.djangoproject.com/en/1.7/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': '', 'USER': 'root', 'PASSWORD': '', 'HOST': '127.0.0.1', 'PORT': '3306', } } # Internationalization # https://docs.djangoproject.com/en/1.7/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'GMT' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.7/howto/static-files/ STATIC_URL = '/static/' And I know my file structure is working because if I cut everything out of the base_site.html and replace it with 'wtf' that's exactly what displays when I visit the admin site. I've gone as far as to delete the admin/base_site.html from the django install but still I get the 'Django administration'. When it doesn't say 'wtf' my base_site.html looks like this: {% extends "admin/base.html" %} {% block title %}{{ title }} | {{ site_title|default:_('whatever site admin') }}{% endblock %} {% block branding %} <h1 id="site-name"><a href="{% url 'admin:index' %}">{{ site_header|default:_('whatever site administration') }}</a></h1> {% endblock %} {% block nav-global %}{% endblock %} I guess this must be something to do with 1.7 as I got it working in 1.6 but I've checked the docs for 1.6, 1.7 and dev and can't find what's wrong. I'm developing on windows in a virtual env running a local MySQL db.
To start off, I am not sure if it was a copy/paste issue or if you actually have TEMPLATE_DIRS commented out. It will need to be a non-commented line: TEMPLATE_DIRS = [os.path.join(BASE_DIR, 'templates')] As for the real problem, you have to replace more of your template to make it work because site_title is defined here: https://github.com/django/django/blob/1.7/django/contrib/admin/sites.py#L36 and site_header is defined here: https://github.com/django/django/blob/1.7/django/contrib/admin/sites.py#L39 Default will only work if these do not exist, so your template should look like this: {% extends "admin/base.html" %} {% block title %}{{ title }} | whatever site admin{% endblock %} {% block branding %} <h1 id="site-name"><a href="{% url 'admin:index' %}">whatever site administration</a></h1> {% endblock %} {% block nav-global %}{% endblock %} You can learn more about the default tag here: https://docs.djangoproject.com/en/1.7/ref/templates/builtins/#default
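As an aside (not part of the original answer): in Django 1.7 the AdminSite object also exposes these values directly, so if all you want is different text you can skip the template override and set them once, e.g. in urls.py or an always-imported admin.py:

from django.contrib import admin

admin.site.site_header = 'whatever site administration'
admin.site.site_title = 'whatever site admin'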
Installing theano on Windows 8 with GPU enabled
I understand that the Theano support for Windows 8.1 is at experimental stage only but I wonder if anyone had any luck with resolving my issues. Depending on my config, I get three distinct types of errors. I assume that the resolution of any of my errors would solve my problem. I have installed Python using WinPython 32-bit system, using MinGW as described here. The contents of my .theanorc file are as follows: [global] openmp=False device = gpu [nvcc] flags=-LC:\TheanoPython\python-2.7.6\libs compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\ [blas] ldflags = When I run import theano the error is as follows: nvcc fatal : nvcc cannot find a supported version of Microsoft Visual Studio. Only the versions 2010, 2012, and 2013 are supported ['nvcc', '-shared', '-g', '-O3', '--compiler-bindir', 'C:\\Program Files (x86)\\ Microsoft Visual Studio 10.0\\VC\\bin# flags=-m32 # we have this hard coded for now', '-Xlinker', '/DEBUG', '-m32', '-Xcompiler', '-DCUDA_NDARRAY_CUH=d67f7c8a21 306c67152a70a88a837011,/Zi,/MD', '-IC:\\TheanoPython\\python-2.7.6\\lib\\site-pa ckages\\theano\\sandbox\\cuda', '-IC:\\TheanoPython\\python-2.7.6\\lib\\site-pac kages\\numpy\\core\\include', '-IC:\\TheanoPython\\python-2.7.6\\include', '-o', 'C:\\Users\\Matej\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel6 4_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray .pyd', 'mod.cu', '-LC:\\TheanoPython\\python-2.7.6\\libs', '-LNone\\lib', '-LNon e\\lib64', '-LC:\\TheanoPython\\python-2.7.6', '-lpython27', '-lcublas', '-lcuda rt'] ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return st atus', 1, 'for cmd', 'nvcc -shared -g -O3 --compiler-bindir C:\\Program Files (x 86)\\Microsoft Visual Studio 10.0\\VC\\bin# flags=-m32 # we have this hard coded for now -Xlinker /DEBUG -m32 -Xcompiler -DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a 70a88a837011,/Zi,/MD -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\thean o\\sandbox\\cuda -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\numpy\\co re\\include -IC:\\TheanoPython\\python-2.7.6\\include -o C:\\Users\\Matej\\AppDa ta\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepp ing_3_GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray.pyd mod.cu -LC:\\TheanoP ython\\python-2.7.6\\libs -LNone\\lib -LNone\\lib64 -LC:\\TheanoPython\\python-2 .7.6 -lpython27 -lcublas -lcudart') WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not availabl e I have also tested it using Visual Studio 12.0 which is installed on my system with the following error: mod.cu nvlink fatal : Could not open input file 'C:/Users/Matej/AppData/Local/Temp/tm pxft_00001b70_00000000-28_mod.obj' ['nvcc', '-shared', '-g', '-O3', '--compiler-bindir', 'C:\\Program Files (x86)\\ Microsoft Visual Studio 12.0\\VC\\bin\\', '-Xlinker', '/DEBUG', '-m32', '-Xcompi ler', '-LC:\\TheanoPython\\python-2.7.6\\libs,-DCUDA_NDARRAY_CUH=d67f7c8a21306c6 7152a70a88a837011,/Zi,/MD', '-IC:\\TheanoPython\\python-2.7.6\\lib\\site-package s\\theano\\sandbox\\cuda', '-IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages \\numpy\\core\\include', '-IC:\\TheanoPython\\python-2.7.6\\include', '-o', 'C:\ \Users\\Matej\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Fam ily_6_Model_60_Stepping_3_GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray.pyd' , 'mod.cu', '-LC:\\TheanoPython\\python-2.7.6\\libs', '-LNone\\lib', '-LNone\\li b64', '-LC:\\TheanoPython\\python-2.7.6', '-lpython27', '-lcublas', '-lcudart'] 
ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return st atus', 1, 'for cmd', 'nvcc -shared -g -O3 --compiler-bindir C:\\Program Files (x 86)\\Microsoft Visual Studio 12.0\\VC\\bin\\ -Xlinker /DEBUG -m32 -Xcompiler -LC :\\TheanoPython\\python-2.7.6\\libs,-DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a70a88 a837011,/Zi,/MD -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\theano\\sa ndbox\\cuda -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\numpy\\core\\i nclude -IC:\\TheanoPython\\python-2.7.6\\include -o C:\\Users\\Matej\\AppData\\L ocal\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepping_3 _GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray.pyd mod.cu -LC:\\TheanoPython \\python-2.7.6\\libs -LNone\\lib -LNone\\lib64 -LC:\\TheanoPython\\python-2.7.6 -lpython27 -lcublas -lcudart') WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not availabl e In the latter error, several pop-up windows ask me how would I like to open (.res) file before error is thrown. cl.exe is present in both folders (i.e. VS 2010 and VS 2013). Finally, if I set VS 2013 in the environment path and set .theanorc contents as follows: [global] base_compiledir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin openmp=False floatX = float32 device = gpu [nvcc] flags=-LC:\TheanoPython\python-2.7.6\libs compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\ [blas] ldflags = I get the following error: c:\theanopython\python-2.7.6\include\pymath.h(22): warning: dllexport/dllimport conflict with "round" c:\program files\nvidia gpu computing toolkit\cuda\v6.5\include\math_functions.h(2455): here; dllimport/dllexport dropped mod.cu(954): warning: statement is unreachable mod.cu(1114): error: namespace "std" has no member "min" mod.cu(1145): error: namespace "std" has no member "min" mod.cu(1173): error: namespace "std" has no member "min" mod.cu(1174): error: namespace "std" has no member "min" mod.cu(1317): error: namespace "std" has no member "min" mod.cu(1318): error: namespace "std" has no member "min" mod.cu(1442): error: namespace "std" has no member "min" mod.cu(1443): error: namespace "std" has no member "min" mod.cu(1742): error: namespace "std" has no member "min" mod.cu(1777): error: namespace "std" has no member "min" mod.cu(1781): error: namespace "std" has no member "min" mod.cu(1814): error: namespace "std" has no member "min" mod.cu(1821): error: namespace "std" has no member "min" mod.cu(1853): error: namespace "std" has no member "min" mod.cu(1861): error: namespace "std" has no member "min" mod.cu(1898): error: namespace "std" has no member "min" mod.cu(1905): error: namespace "std" has no member "min" mod.cu(1946): error: namespace "std" has no member "min" mod.cu(1960): error: namespace "std" has no member "min" mod.cu(3750): error: namespace "std" has no member "min" mod.cu(3752): error: namespace "std" has no member "min" mod.cu(3784): error: namespace "std" has no member "min" mod.cu(3786): error: namespace "std" has no member "min" mod.cu(3789): error: namespace "std" has no member "min" mod.cu(3791): error: namespace "std" has no member "min" mod.cu(3794): error: namespace "std" has no member "min" mod.cu(3795): error: namespace "std" has no member "min" mod.cu(3836): error: namespace "std" has no member "min" mod.cu(3838): error: namespace "std" has no member "min" mod.cu(4602): error: namespace "std" has no member "min" mod.cu(4604): error: namespace "std" has no member "min" 31 errors detected in the 
compilation of "C:/Users/Matej/AppData/Local/Temp/tmpxft_00001d84_00000000-10_mod.cpp1.ii". ERROR (theano.sandbox.cuda): Failed to compile cuda_ndarray.cu: ('nvcc return status', 2, 'for cmd', 'nvcc -shared -g -O3 -Xlinker /DEBUG -m32 -Xcompiler -DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a70a88a837011,/Zi,/MD -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\theano\\sandbox\\cuda -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\numpy\\core\\include -IC:\\TheanoPython\\python-2.7.6\\include -o C:\\Users\\Matej\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray.pyd mod.cu -LC:\\TheanoPython\\python-2.7.6\\libs -LNone\\lib -LNone\\lib64 -LC:\\TheanoPython\\python-2.7.6 -lpython27 -lcublas -lcudart') ERROR:theano.sandbox.cuda:Failed to compile cuda_ndarray.cu: ('nvcc return status', 2, 'for cmd', 'nvcc -shared -g -O3 -Xlinker /DEBUG -m32 -Xcompiler -DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a70a88a837011,/Zi,/MD -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\theano\\sandbox\\cuda -IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\numpy\\core\\include -IC:\\TheanoPython\\python-2.7.6\\include -o C:\\Users\\Matej\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray.pyd mod.cu -LC:\\TheanoPython\\python-2.7.6\\libs -LNone\\lib -LNone\\lib64 -LC:\\TheanoPython\\python-2.7.6 -lpython27 -lcublas -lcudart') mod.cu ['nvcc', '-shared', '-g', '-O3', '-Xlinker', '/DEBUG', '-m32', '-Xcompiler', '-DCUDA_NDARRAY_CUH=d67f7c8a21306c67152a70a88a837011,/Zi,/MD', '-IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\theano\\sandbox\\cuda', '-IC:\\TheanoPython\\python-2.7.6\\lib\\site-packages\\numpy\\core\\include', '-IC:\\TheanoPython\\python-2.7.6\\include', '-o', 'C:\\Users\\Matej\\AppData\\Local\\Theano\\compiledir_Windows-8-6.2.9200-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.6-32\\cuda_ndarray\\cuda_ndarray.pyd', 'mod.cu', '-LC:\\TheanoPython\\python-2.7.6\\libs', '-LNone\\lib', '-LNone\\lib64', '-LC:\\TheanoPython\\python-2.7.6', '-lpython27', '-lcublas', '-lcudart'] If I run import theano without the GPU option on, it runs without a problem. Also CUDA samples run without a problem.
Theano is a great tool for machine learning applications, yet I found that its installation on Windows is not trivial especially for beginners (like myself) in programming. In my case, I see 5-6x speedups of my scripts when run on a GPU so it was definitely worth the hassle. I wrote this guide based on my installation procedure and is meant to be verbose and hopefully complete even for people with no prior understanding of building programs under Windows environment. Most of this guide is based on these instructions but I had to change some of the steps in order for it to work on my system. If there is anything that I do that may not be optimal or that doesn't work on your machine, please, let me know and I will try to modify this guide accordingly. These are the steps (in order) I followed when installing Theano with GPU enabled on my Windows 8.1 machine: CUDA Installation CUDA can be downloaded from here. In my case, I chose 64-bit Notebook version for my NVIDIA Optimus laptop with Geforce 750m. Verify that your installation was successful by launching deviceQuery from command line. In my case this was located in the following folder: C:\ProgramData\NVIDIA Corporation\CUDA Samples\v6.5\bin\win64\Release . If successful, you should see PASS at the end of the test. Visual Studio 2010 Installation I installed this via dreamspark. If you are a student you are entitled for a free version. If not, you can still install the Express version which should work just as well. After install is complete you should be able to call Visual Studio Command Prompt 2010 from the start menu. Python Installation At the time of writing, Theano on GPU only allows working with 32-bit floats and is primarily built for 2.7 version of Python. Theano requires most of the basic scientific Python libraries such as scipy and numpy. I found that the easiest way to install these was via WinPython. It installs all the dependencies in a self-contained folder which allows easy reinstall if something goes wrong in the installation process and you get some useful IDE tools such as ipython notebook and Spyder installed for free as well. For ease of use you might want to add the path to your python.exe and path to your Scripts folder in the environment variables. Git installation Found here. MinGW Installation Setup file is here. I checked all the base installation files during the installation process. This is required if you run into g++ error described below. Cygwin installation You can find it here. I basically used this utility only to extract PyCUDA tar file which is already provided in the base install (so the install should be straightforward). Python distutils fix Open msvc9compiler.py located in your /lib/distutils/ directory of your Python installation. Line 641 in my case reads: ld_args.append ('/IMPLIB:' + implib_file). Add the following after this line (same indentation): ld_args.append('/MANIFEST') PyCUDA installation Source for PyCUDA is here. Steps: Open cygwin and navigate to the PyCUDA folder (i.e. /cygdrive/c/etc/etc) and execute tar -xzf pycuda-2012.1.tar.gz. 
Open Visual Studio Command Prompt 2010 and navigate to the directory where tarball was extracted and execute python configure.py Open the ./siteconf.py and change the values so that it reads (for CUDA 6.5 for instance): BOOST_INC_DIR = [] BOOST_LIB_DIR = [] BOOST_COMPILER = 'gcc43' USE_SHIPPED_BOOST = True BOOST_PYTHON_LIBNAME = ['boost_python'] BOOST_THREAD_LIBNAME = ['boost_thread'] CUDA_TRACE = False CUDA_ROOT = 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v6.5' CUDA_ENABLE_GL = False CUDA_ENABLE_CURAND = True CUDADRV_LIB_DIR = ['${CUDA_ROOT}/lib/Win32'] CUDADRV_LIBNAME = ['cuda'] CUDART_LIB_DIR = ['${CUDA_ROOT}/lib/Win32'] CUDART_LIBNAME = ['cudart'] CURAND_LIB_DIR = ['${CUDA_ROOT}/lib/Win32'] CURAND_LIBNAME = ['curand'] CXXFLAGS = ['/EHsc'] LDFLAGS = ['/FORCE'] Execute the following commands at the VS2010 command prompt: set VS90COMNTOOLS=%VS100COMNTOOLS% python setup.py build python setup.py install Create this python file and verify that you get a result: # from: http://documen.tician.de/pycuda/tutorial.html import pycuda.gpuarray as gpuarray import pycuda.driver as cuda import pycuda.autoinit import numpy a_gpu = gpuarray.to_gpu(numpy.random.randn(4,4).astype(numpy.float32)) a_doubled = (2*a_gpu).get() print a_doubled print a_gpu Install Theano Open git bash shell and choose a folder in which you want to place Theano installation files and execute: git clone git://github.com/Theano/Theano.git python setup.py install Try opening python in VS2010 command prompt and run import theano If you get a g++ related error, open MinGW msys.bat in my case installed here: C:\MinGW\msys\1.0 and try importing theano in MinGW shell. Then retry importing theano from VS2010 Command Prompt and it should be working now. Create a file in WordPad (NOT Notepad!), name it .theanorc.txt and put it in C:\Users\Your_Name\ or wherever your users folder is located: #!sh [global] device = gpu floatX = float32 [nvcc] compiler_bindir=C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin # flags=-m32 # we have this hard coded for now [blas] ldflags = # ldflags = -lopenblas # placeholder for openblas support Create a test python script and run it: from theano import function, config, shared, sandbox import theano.tensor as T import numpy import time vlen = 10 * 30 * 768 # 10 x #cores x # threads per core iters = 1000 rng = numpy.random.RandomState(22) x = shared(numpy.asarray(rng.rand(vlen), config.floatX)) f = function([], T.exp(x)) print f.maker.fgraph.toposort() t0 = time.time() for i in xrange(iters): r = f() t1 = time.time() print 'Looping %d times took' % iters, t1 - t0, 'seconds' print 'Result is', r if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]): print 'Used the cpu' else: print 'Used the gpu' Verify you got Used the gpu at the end and you're done!
Python - Pandas sum dataframe rows for given columns
I'm new to pandas as of today. I have the following dataframe: import pandas as pd df = pd.DataFrame({'a': [1,2,3], 'b': [2,3,4], 'c':['dd','ee','ff'], 'd':[5,9,1]}) It is a really basic question, but I simply would like to add a column 'e' which is the sum of columns 'a', 'b' and 'd'. Going across forums, I thought something like this would work: df['e'] = df[['a','b','d']].map(sum) But no! I would like to perform the operation given the list of columns ['a','b','d'] and df as inputs.
You can just sum and set param axis=1 to sum the rows, this will ignore none numeric columns: In [91]: df = pd.DataFrame({'a': [1,2,3], 'b': [2,3,4], 'c':['dd','ee','ff'], 'd':[5,9,1]}) df['e'] = df.sum(axis=1) df Out[91]: a b c d e 0 1 2 dd 5 8 1 2 3 ee 9 14 2 3 4 ff 1 8 If you want to just sum specific columns then you can create a list of the columns and remove the ones you are not interested in: In [98]: col_list= list(df) col_list.remove('d') col_list Out[98]: ['a', 'b', 'c'] In [99]: df['e'] = df[col_list].sum(axis=1) df Out[99]: a b c d e 0 1 2 dd 5 3 1 2 3 ee 9 5 2 3 4 ff 1 7
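Since the question already names the columns to add ('a', 'b' and 'd'), you can also select them directly instead of building col_list by removal:

import pandas as pd

df = pd.DataFrame({'a': [1,2,3], 'b': [2,3,4], 'c':['dd','ee','ff'], 'd':[5,9,1]})
df['e'] = df[['a','b','d']].sum(axis=1)  # 1+2+5, 2+3+9, 3+4+1
print(df)
#    a  b   c  d   e
# 0  1  2  dd  5   8
# 1  2  3  ee  9  14
# 2  3  4  ff  1   8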
Using Selenium with Python and PhantomJS to download file to filesystem
I've been grappling with using PhantomJS/Selenium/python-selenium to download a file to the filesystem. I'm able to easily navigate through the DOM and click, hover etc. Downloading a file is, however, proving to be quite troublesome. I've tried a headless approach with Firefox and pyvirtualdisplay but that wasn't working well either and was unbelievably slow. I know That CasperJS allows for file downloads. Does anyone know how to integrate CasperJS with Python or how to utilize PhantomJS to download files. Much appreciated.
PhantomJS doesn't currently support file downloads. Relevant issues with workarounds: File download How to handle file save dialog box using Selenium webdriver and PhantomJS? As far as I understand, you have at least 3 options: switch to casperjs (and you should leave python here) try with headless on xvfb switch to normal non-headless browsers Here are some links that might also help: Selenium Headless Automated Testing in Ubuntu XWindows for Headless Selenium (with further links inside) How to run browsers (chrome, IE and firefox) in headless mode? Tutorial: How to use Headless Firefox for Scraping in Linux
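One workaround that keeps things headless (a sketch, not a PhantomJS feature; the URLs and filename below are placeholders) is to let Selenium/PhantomJS do the navigation and authentication, then hand the session cookies to requests, which performs the actual download to disk:

import requests
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get('http://example.com/login')  # navigate / log in with Selenium as usual

# copy the browser session's cookies into a requests session
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'])

# download the file with requests instead of the browser
response = session.get('http://example.com/report.csv', stream=True)
with open('report.csv', 'wb') as fh:
    for chunk in response.iter_content(chunk_size=8192):
        fh.write(chunk)

driver.quit()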
Django - Cannot create migrations for ImageField with dynamic upload_to value
I just upgraded my app to 1.7 (actually still trying). This is what I had in models.py: def path_and_rename(path): def wrapper(instance, filename): ext = filename.split('.')[-1] # set filename as random string filename = '{}.{}'.format(uuid4().hex, ext) # return the whole path to the file return os.path.join(path, filename) return wrapper class UserProfile(AbstractUser): #... avatar = models.ImageField(upload_to=path_and_rename("avatars/"), null=True, blank=True, default="avatars/none/default.png", height_field="image_height", width_field="image_width") When I try to run makemigrations, it throws: ValueError: Could not find function wrapper in webapp.models. Please note that due to Python 2 limitations, you cannot serialize unbound method functions (e.g. a method declared and used in the same class body). Please move the function into the main module body to use migrations.
I am not sure if it is OK to answer my own question, but I just figured it out (I think). According to this bug report, I edited my code: from django.utils.deconstruct import deconstructible @deconstructible class PathAndRename(object): def __init__(self, sub_path): self.path = sub_path def __call__(self, instance, filename): ext = filename.split('.')[-1] # set filename as random string filename = '{}.{}'.format(uuid4().hex, ext) # return the whole path to the file return os.path.join(self.path, filename) path_and_rename = PathAndRename("/avatars") And then, in the field definition: avatar = models.ImageField(upload_to=path_and_rename, null=True, blank=True, default="avatars/none/default.png", height_field="image_height", width_field="image_width") This worked for me.