Python conversion between coordinates
|
Are there functions for conversion between different coordinate systems?
For example, Matlab has [rho,phi] = cart2pol(x,y) for conversion from cartesian to polar coordinates. Seems like it should be in numpy or scipy.
|
Using numpy, you can define the following:
import numpy as np
def cart2pol(x, y):
    rho = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)
    return rho, phi

def pol2cart(rho, phi):
    x = rho * np.cos(phi)
    y = rho * np.sin(phi)
    return x, y
|
Python check if function exists without try
|
In python, how do you check if a function exists without actually running the function (i.e. without using try)? I would be testing if it exists in a module.
|
You can use dir to check if a name is in a module:
>>> import os
>>> "walk" in dir(os)
True
>>>
In the sample code above, we test for the os.walk function.
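If you'd rather not scan the whole directory listing, a minimal alternative sketch uses hasattr/getattr (same os.walk example; nothing gets called):
>>> import os
>>> hasattr(os, "walk")
True
>>> callable(getattr(os, "walk", None))  # also verify the attribute is function-like
True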
|
Input and output numpy arrays to h5py
|
I have a Python code whose output is a sized matrix, whose entries are all of the type float. If I save it with the extension .dat the file size is of the order of 500 MB. I read that using h5py reduces the file size considerably. So, let's say I have the 2D numpy array named A. How do I save it to an h5py file?
Also, how do I read the same file and put it as a numpy array in a different code, as I need to do manipulations with the array?
|
h5py provides a model of datasets and groups. Datasets are essentially arrays, and groups can be thought of as directories; each is named. You should look at the documentation for the API and examples:
http://docs.h5py.org/en/latest/quick.html
A simple example where you are creating all of the data upfront and just want to save it to an hdf5 file would look something like:
In [1]: import numpy as np
In [2]: import h5py
In [3]: a = np.random.random(size=(100,20))
In [4]: h5f = h5py.File('data.h5', 'w')
In [5]: h5f.create_dataset('dataset_1', data=a)
Out[5]: <HDF5 dataset "dataset_1": shape (100, 20), type "<f8">
In [6]: h5f.close()
You can then load that data back in using:
In [10]: h5f = h5py.File('data.h5','r')
In [11]: b = h5f['dataset_1'][:]
In [12]: h5f.close()
In [13]: np.allclose(a,b)
Out[13]: True
Definitely check out the docs:
http://docs.h5py.org
Writing to an hdf5 file depends on either h5py or pytables (each has a different python API that sits on top of the hdf5 file specification). You should also take a look at other simple binary formats provided natively by numpy, such as np.save, np.savez, etc.:
http://docs.scipy.org/doc/numpy/reference/routines.io.html
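For instance, a minimal sketch of an np.save/np.load round-trip (no hdf5 dependency; the file name is arbitrary) looks like:
import numpy as np
a = np.random.random(size=(100, 20))
np.save('data.npy', a)   # writes a binary .npy file to disk
b = np.load('data.npy')  # reads it back as an ndarray
assert np.allclose(a, b)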
|
How to repeat a block in a jinja2 template?
|
I'm using Jinja2 as the template engine to a static HTML site generated through a Python script.
I want to repeat the content of a block in the layout template, which goes something like this:
<html>
<head>
<title>{% block title %}{% endblock %} - {{ sitename }}</title>
</head>
<body>
<h1>{% block title %}{% endblock %}</h1>
<div id="content">
{% block content %}{% endblock %}
</div>
</body>
</html>
This template will be extended in a page template, that looks like this:
{% extends "layout.html" %}
{% block title %}Page title{% endblock %}
{% block content %}
Here goes the content
{% endblock %}
However, this doesn't work as I expected, resulting in an error:
jinja2.exceptions.TemplateAssertionError: block 'title' defined twice
Jinja interprets the second {% block title %} in layout.html as a block redefinition.
How can I repeat the content of a block in the same template using jinja2?
|
Use the special self variable to access the block by name:
<title>{% block title %}{% endblock %} - {{ sitename }}</title>
<!-- ... snip ... -->
<h1>{{ self.title() }}</h1>
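As a rough self-contained check (the template strings and sitename value are placeholders for the demo), this renders the repeated block as expected:
from jinja2 import Environment, DictLoader
env = Environment(loader=DictLoader({
    'layout.html': '<title>{% block title %}{% endblock %} - {{ sitename }}</title>'
                   '<h1>{{ self.title() }}</h1>',
    'page.html': '{% extends "layout.html" %}{% block title %}Page title{% endblock %}',
}))
# prints: <title>Page title - example.com</title><h1>Page title</h1>
print(env.get_template('page.html').render(sitename='example.com'))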
|
Resizing numpy.memmap arrays
|
I'm working with a bunch of large numpy arrays, and as these started to chew up too much memory lately, I wanted to replace them with numpy.memmap instances. The problem is, now and then I have to resize the arrays, and I'd preferably do that in place. This worked quite well with ordinary arrays, but trying that on memmaps raises a complaint that the data might be shared, and even disabling the refcheck does not help.
a = np.arange(10)
a.resize(20)
a
>>> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
a = np.memmap('bla.bin', dtype=int)
a
>>> memmap([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
a.resize(20, refcheck=False)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-41-f1546111a7a1> in <module>()
----> 1 a.resize(20, refcheck=False)
ValueError: cannot resize this array: it does not own its data
Resizing the underlying mmap buffer works perfectly fine. The problem is how to reflect these changes in the array object. I've seen this workaround, but unfortunately it doesn't resize the array in place. There is also some numpy documentation about resizing mmaps, but it's clearly not working, at least with version 1.8.0. Any other ideas on how to override the built-in resizing checks?
|
The issue is that the flag OWNDATA is False when you create your array. You can change that by requiring the flag to be True when you create the array:
>>> a = np.require(np.memmap('bla.bin', dtype=int), requirements=['O'])
>>> a.shape
(10,)
>>> a.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
>>> a.resize(20, refcheck=False)
>>> a.shape
(20,)
The only caveat is that it may create the array and make a copy to be sure the requirements are met.
Edit to address saving:
If you want to save the re-sized array to disk, you can save the memmap as a .npy formatted file and re-open it as a numpy.memmap when you need to use it as a memmap again:
>>> a[9] = 1
>>> np.save('bla.npy',a)
>>> b = np.lib.format.open_memmap('bla.npy', dtype=int, mode='r+')
>>> b
memmap([0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Edit to offer another method:
You may get close to what you're looking for by re-sizing the base mmap (a.base or a._mmap, stored in uint8 format) and "reloading" the memmap:
>>> a = np.memmap('bla.bin', dtype=int)
>>> a
memmap([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
>>> a[3] = 7
>>> a
memmap([0, 0, 0, 7, 0, 0, 0, 0, 0, 0])
>>> a.flush()
>>> a = np.memmap('bla.bin', dtype=int)
>>> a
memmap([0, 0, 0, 7, 0, 0, 0, 0, 0, 0])
>>> a.base.resize(20*8)
>>> a.flush()
>>> a = np.memmap('bla.bin', dtype=int)
>>> a
memmap([0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
|
How can I create a random number that is cryptographically secure in python?
|
I'm making a project in python and I would like to create a random number that is cryptographically secure. How can I do that? I have read online that the numbers generated by the regular randomizer are not cryptographically secure, and that the function os.urandom(n) returns a string, and not a number.
|
Since you want to generate integers in some specific range, it's a lot easier to use the random.SystemRandom class instead. Creating an instance of that class gives you an object that supports all the methods of the random module, but using os.urandom() under the covers. Examples:
>>> from random import SystemRandom
>>> cryptogen = SystemRandom()
>>> [cryptogen.randrange(3) for i in range(20)] # random ints in range(3)
[2, 2, 2, 2, 1, 2, 1, 2, 1, 0, 0, 1, 1, 0, 0, 2, 0, 0, 0, 0]
>>> [cryptogen.random() for i in range(3)] # random floats in [0., 1.)
[0.2710009745425236, 0.016722063038868695, 0.8207742461236148]
Etc. Using urandom() directly, you have to invent your own algorithms for converting the random bytes it produces to the results you want. Don't do that ;-) SystemRandom does it for you.
Note this part of the docs:
class random.SystemRandom([seed])
Class that uses the os.urandom() function for generating random numbers from sources provided by the operating system. Not available on all systems. Does not rely on software state and sequences are not reproducible. Accordingly, the seed() and jumpahead() methods have no effect and are ignored. The getstate() and setstate() methods raise NotImplementedError if called.
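As an aside, on Python 3.6+ the standard library also ships a secrets module built on the same OS entropy source; a quick sketch (outputs will of course vary):
>>> import secrets
>>> secrets.randbelow(3)       # random int in range(3)
2
>>> secrets.token_hex(8)       # 8 random bytes as 16 hex characters
'3c4f9a1b2d8e7f60'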
|
how to display pandas DataFrame using a format string for columns?
|
I would like to display a pandas dataframe with a given format using print() and the IPython display(). For example:
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
print df
cost
foo 123.4567
bar 234.5678
baz 345.6789
quux 456.7890
I would like to somehow coerce this into printing
cost
foo $123.46
bar $234.57
baz $345.68
quux $456.79
without having to modify the data itself or create a copy, just change the way it is displayed.
How can I do this?
|
import pandas as pd
pd.options.display.float_format = '${:,.2f}'.format
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
print(df)
yields
cost
foo $123.46
bar $234.57
baz $345.68
quux $456.79
but this only works if you want every float to be formatted with a dollar sign.
Otherwise, if you want dollar formatting for some floats only, then I think you'll have to pre-modify the dataframe (converting those floats to strings):
import pandas as pd
df = pd.DataFrame([123.4567, 234.5678, 345.6789, 456.7890],
                  index=['foo','bar','baz','quux'],
                  columns=['cost'])
df['foo'] = df['cost']
df['cost'] = df['cost'].map('${:,.2f}'.format)
print(df)
yields
cost foo
foo $123.46 123.4567
bar $234.57 234.5678
baz $345.68 345.6789
quux $456.79 456.7890
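Alternatively, if all you need is the printed output, DataFrame.to_string accepts per-column formatters; a sketch applied to the original, unmodified dataframe:
# formats only the cost column for display, without touching the data
print(df.to_string(formatters={'cost': '${:,.2f}'.format}))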
|
Does Python support multithreading? Can it speed up execution time?
|
I'm slightly confused about whether multithreading works in Python or not.
I know there have been a lot of questions about this and I've read many of them, but I'm still confused. I know from my own experience and have seen others post their own answers and examples here on StackOverflow that multithreading is indeed possible in Python. So why is it that everyone keeps saying that Python is locked by the GIL and that only one thread can run at a time? It clearly does work. Or is there some distinction I'm not getting here?
Many posters/respondents also keep mentioning that threading is limited because it does not make use of multiple cores. But I would say they are still useful because they do work simultaneously and thus get the combined workload done faster. I mean why would there even be a Python thread module otherwise?
Update:
Thanks for all the answers so far. The way I understand it is that multithreading will only run in parallel for some IO tasks, but can only run one at a time for CPU-bound multiple core tasks.
I'm not entirely sure what this means for me in practical terms, so I'll just give an example of the kind of task I'd like to multithread. For instance, let's say I want to loop through a very long list of strings and I want to do some basic string operations on each list item. If I split up the list, send each sublist to be processed by my loop/string code in a new thread, and send the results back in a queue, will these workloads run roughly at the same time? Most importantly will this theoretically speed up the time it takes to run the script?
Another example might be if I can render and save four different pictures using PIL in four different threads, and have this be faster than processing the pictures one by one after each other? I guess this speed-component is what I'm really wondering about rather than what the correct terminology is.
I also know about the multiprocessing module but my main interest right now is for small-to-medium task loads (10-30 secs) and so I think multithreading will be more appropriate because subprocesses can be slow to initiate.
|
The GIL does not prevent threading. All the GIL does is make sure only one thread is executing Python code at a time; control still switches between threads.
What the GIL prevents then, is making use of more than one CPU core or separate CPUs to run threads in parallel.
This only applies to Python code. C extensions can and do release the GIL to allow multiple threads of C code and one Python thread to run across multiple cores. This extends to I/O controlled by the kernel, such as select() calls for socket reads and writes, making Python handle network events reasonably efficiently in a multi-threaded multi-core setup.
What many server deployments then do, is run more than one Python process, to let the OS handle the scheduling between processes to utilize your CPU cores to the max. You can also use the multiprocessing library to handle parallel processing across multiple processes from one codebase and parent process, if that suits your use cases.
Note that the GIL is only applicable to the CPython implementation; Jython and IronPython use a different threading implementation (the native Java VM and .NET common runtime threads respectively).
To address your update directly: any task that tries to get a speed boost from parallel execution using pure Python code will not see a speed-up, as threaded Python code is locked to one thread executing at a time. If you mix in C extensions or I/O, however (such as PIL or numpy operations), any C code can run in parallel with one active Python thread.
Python threading is great for creating a responsive GUI, or for handling multiple short web requests where I/O is the bottleneck more than the Python code. It is not suitable for parallelizing computationally intensive Python code; stick to the multiprocessing module for such tasks, or delegate to a dedicated external library.
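To make that concrete, here is a rough sketch contrasting a thread pool with a process pool on the same pure-Python workload (the names and sizes are arbitrary); expect the process pool, not the thread pool, to use multiple cores:
from multiprocessing import Pool
from multiprocessing.dummy import Pool as ThreadPool  # same API, but thread-based
def work(s):
    # pure-Python string processing: holds the GIL the whole time
    return s.strip().upper()
if __name__ == '__main__':
    strings = ['some text '] * 10**6
    print(len(ThreadPool(4).map(work, strings)))  # roughly serial speed
    print(len(Pool(4).map(work, strings)))        # can use all four cores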
|
Python - Decorators
|
I'm trying to learn decorators.
I understood the concept and am now trying to implement it.
Here is the code that I've written.
The code is self-explanatory: it just checks whether the arguments passed are ints or not.
def wrapper(func):
    def inner():
        if issubclass(x,int): pass
        else: return 'invalid values'
    return inner()

@wrapper
def add(x,y):
    return x+y

print add('a',2)
It's throwing an error saying global name 'x' is not defined. I understand that it is not defined under inner, but I didn't know how to rectify this code. Where am I going wrong?
|
Your decorator should look like:
def wrapper(func):
    def inner(x, y):  # inner function needs parameters
        if issubclass(type(x), int):  # maybe you looked for isinstance?
            return func(x, y)  # call the wrapped function
        else:
            return 'invalid values'
    return inner  # return the inner function (don't call it)
Some points:
issubclass expects a class as first argument (you could replace it with a simple try/except TypeError).
the wrapper should return a function, not the result of a called function
you should actually call the wrapped function in the inner function
your inner function didn't have parameters
You can find a good explanation of decorators here.
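As an aside, a more general sketch of the same decorator, which accepts any signature and preserves the wrapped function's metadata via functools.wraps:
import functools
def wrapper(func):
    @functools.wraps(func)  # keep func's name and docstring
    def inner(*args, **kwargs):
        if all(isinstance(a, int) for a in args):
            return func(*args, **kwargs)
        return 'invalid values'
    return inner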
|
How to sort Counter by value? - python
|
Other than doing list comprehensions of reversed list comprehensions, is there a pythonic way to sort Counter by value? If so, is it faster than this:
>>> from collections import Counter
>>> x = Counter({'a':5, 'b':3, 'c':7})
>>> sorted(x)
['a', 'b', 'c']
>>> sorted(x.items())
[('a', 5), ('b', 3), ('c', 7)]
>>> [(l,k) for k,l in sorted([(j,i) for i,j in x.items()])]
[('b', 3), ('a', 5), ('c', 7)]
>>> [(l,k) for k,l in sorted([(j,i) for i,j in x.items()], reverse=True)]
[('c', 7), ('a', 5), ('b', 3)]
|
Use the Counter.most_common() method, it'll sort the items for you:
>>> from collections import Counter
>>> x = Counter({'a':5, 'b':3, 'c':7})
>>> x.most_common()
[('c', 7), ('a', 5), ('b', 3)]
It'll do so in the most efficient manner possible; if you ask for a Top N instead of all values, a heapq is used instead of a straight sort:
>>> x.most_common(1)
[('c', 7)]
Outside of counters, sorting can always be adjusted based on a key function; .sort() and sorted() both take a callable that lets you specify a value on which to sort the input sequence; sorted(x, key=x.get, reverse=True) would give you the same sorting as x.most_common(), but only return the keys, for example:
>>> sorted(x, key=x.get, reverse=True)
['c', 'a', 'b']
or you can sort on only the value given (key, value) pairs:
>>> sorted(x.items(), key=lambda pair: pair[1], reverse=True)
[('c', 7), ('a', 5), ('b', 3)]
See the Python sorting howto for more information.
|
pip installing in global site-packages instead of virtualenv
|
Using pip to install a package in a virtualenv causes the package to be installed in the global site-packages folder instead of the one in the virtualenv folder. Here's how I set up Python3 and virtualenv on OS X Mavericks (10.9.1):
I installed python3 using Homebrew:
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
brew install python3 --with-brewed-openssl
Changed the $PATH variable in .bash_profile; added the following line:
export PATH=/usr/local/bin:$PATH
Running which python3 returns /usr/local/bin/python3 (after restarting the shell).
Note: which python still returns /usr/bin/python though.
Installed virtualenv using pip3:
pip3 install virtualenv
Next, create a new virtualenv and activate it:
virtualenv testpy3 -p python3
cd testpy3
source bin/activate
Note: if I don't specify -p python3, pip will be missing from the bin folder in the virtualenv.
Running which pip and which pip3 both return the virtualenv folder:
/Users/kristof/VirtualEnvs/testpy3/bin/pip3
Now, when I try to install e.g. Markdown using pip in the activated virtualenv, pip will install in the global site-packages folder instead of the site-packages folder of the virtualenv.
pip install markdown
Running pip list returns:
Markdown (2.3.1)
pip (1.4.1)
setuptools (2.0.1)
virtualenv (1.11)
Contents of /Users/kristof/VirtualEnvs/testpy3/lib/python3.3/site-packages:
__pycache__/
_markerlib/
easy_install.py
pip/
pip-1.5.dist-info/
pkg_resources.py
setuptools/
setuptools-2.0.2.dist-info/
Contents of /usr/local/lib/python3.3/site-packages:
Markdown-2.3.1-py3.3.egg-info/
__pycache__/
easy-install.pth
markdown/
pip-1.4.1-py3.3.egg/
setuptools-2.0.1-py3.3.egg
setuptools.pth
virtualenv-1.11-py3.3.egg-info/
virtualenv.py
virtualenv_support/
As you can see, the global site-packages folder contains Markdown, the virtualenv folder doesn't.
Note: I had Python2 and Python3 installed before on a different VM (followed these instructions) and had the same issue with Python3; installing packages in a Python2 based virtualenv worked flawlessly though.
Any tips, hints, … would be very much appreciated.
|
Funny you brought this up; I just had the exact same problem. I solved it eventually, but I'm still unsure as to what caused it.
Try checking your bin/pip and bin/activate scripts. In bin/pip, look at the shebang. Is it correct? If not, correct it. Then on line ~42 in your bin/activate, check to see if your virtualenv path is right. It'll look something like this
VIRTUAL_ENV="/Users/me/path/to/virtual/environment"
If it's wrong, correct it, deactivate, then . bin/activate, and if our mutual problem had the same cause, it should work. If it still doesn't, you're on the right track, anyway. I went through the same problem solving routine as you did, running which pip over and over, following the stack trace, etc.
Make absolutely sure that
/Users/kristof/VirtualEnvs/testpy3/bin/pip3
is what you want, and not referring to another similarly-named test project (I had that problem, and have no idea how it started. My suspicion is running multiple virtualenvs at the same time).
If none of this works, a temporary solution may be to, as Joe Holloway said,
Just run the virtualenv's pip with its full path (i.e. don't rely on searching the executable path) and you don't even need to activate the environment. It will do the right thing.
Perhaps not ideal, but it ought to work in a pinch.
Link to my original question:
VirtualEnv/Pip trying to install packages globally
|
Why built-in functions like abs works on numpy array?
|
I feel surprised that abs works on numpy arrays but not on lists. Why is that?
import numpy as np
abs(np.array((1,-2)))
array([1, 2])
abs([1,-1])
TypeError: bad operand type for abs(): 'list'
Also, built-in functions like sum also work on numpy arrays. I guess it is because numpy arrays support __getitem__? But in the case of abs, if it depended on __getitem__ it should work for lists as well, but it doesn't.
|
That's because numpy.ndarray implements the __abs__(self) method. Just provide it for your own class, and abs() will magically work. For non-builtin types you can also provide this facility after-the-fact. E.g.
class A:
"A class without __abs__ defined"
def __init__(self, v):
self.v = v
def A_abs(a):
"An 'extension' method that will be added to `A`"
return abs(a.v)
# Make abs() work with an instance of A
A.__abs__ = A_abs
However, this will not work for built-in types, such as list or dict.
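Continuing the sketch above, abs() then works on instances of A:
>>> abs(A(-5))
5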
|
Install opencv for Python 3.3
|
Is OpenCV still not available for Python 3.3 and do I really have to downgrade to Python 2.7 to use it? I didn't find much about it on the internet, only some posts from 2012 that OpenCV wasn't yet ported to be used in Python 3.x. But now it's 2014 and after trying to install the latest OpenCV 2.4.x and copying the cv2.pyd file to C:\Program Files (x86)\Python333\Lib\site-packages this still yields the error in Python IDLE:
>>> import cv2
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import cv2
ImportError: DLL load failed: %1 ist keine zulässige Win32-Anwendung.
(The German message translates to "%1 is not a valid Win32 application".)
|
Following Pawelmhm's pointer, I installed the latest OpenCV from Git master on Ubuntu 12.10 with Python 3.2 and 3.3 bindings. I was able to get feature detectors, matchers, homography, and perspective stuff working!
Steps:
Ensure you have the *-dev package installed for your version of Python
Eg: sudo apt-get install python3.3-dev
Source: Setup OpenCV 2.3 w/ python bindings in ubuntu
For Python 3.3 (didn't have to do this for 3.2):
For some reason the Python 3.3 pyconfig.h header file is not in the typical place; it's moved to a platform-specific path (at least for Ubuntu). My hack was to just copy the file to the expected place. I suspect the real fix is incorporating proper include statements for the build process.
Run python3.3-config --includes
This will output two paths like -I/usr/include/python3.3m -I/usr/include/x86_64-linux-gnu/python3.3m
Copy the pyconfig.h file in the second path to the first:
sudo cp /usr/include/x86_64-linux-gnu/python3.3m/pyconfig.h /usr/include/python3.3m/
Source: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=692429
Then follow the steps at this link but with the differences/exceptions noted below:
http://docs.opencv.org/trunk/doc/tutorials/introduction/linux_install/linux_install.html
Python 3.3 only (didn't happen with 3.2):
For some reason the Java tests bork up near the end. If you don't need the Java bindings, use this cmake command to disable them. If you do, you'll have to figure out the issue. As for me, I choose Python! /wink
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_opencv_java=OFF ..
Source: http://stackoverflow.com/a/15761033/3075814
Sometimes the cmake command can't find the Python version you want to install for. In this case, override the path with this command arg: -D PYTHON_EXECUTABLE=/path/to/my/python
This is useful if you want to install OpenCV bindings for a virtual env, instead of into your system's Python dirs.
Source: https://groups.google.com/forum/#!topic/hoomd-users/zTlikJkq_DI
Sometimes the cmake command can't find PythonLibs. In this case, override some extra defs: -D PYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.3m.so -D PYTHON_INCLUDE_DIR=/usr/include/python3.3m -DPYTHON_INCLUDE_DIR2=/usr/include/x86_64-linux-gnu/python3.3m -D PYTHON_NUMPY_INCLUDE_DIRS=/usr/lib/python3/dist-packages/numpy/core/include/
Note there is a warning near the end of the build for both Python 3.2 and 3.3 which may indicate degraded capabilities:
Note: Class Feature2D has more than 1 base class (not supported by Python C extensions)
Bases: cv::FeatureDetector, cv::DescriptorExtractor
Only the first base class will be used
Trying it out:
~> ipython3
Python 3.3.0 (default, Sep 25 2013, 19:28:08)
Type "copyright", "credits" or "license" for more information.
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import cv2
In [2]: img = cv2.imread('derp.png')
In [3]: img[0]
Out[3]:
array([[240, 226, 66],
[240, 226, 66],
[240, 226, 66],
...,
[ 69, 157, 236],
[ 44, 44, 52],
[ 44, 44, 44]], dtype=uint8)
|
Python: Selenium Firefox Webdriver failing with error: 'Can't load the profile...WARN addons.xpi..."
|
I am trying to run the following Python code to create a Firefox Webdriver window via Selenium:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.google.com")
While this code worked fine a few weeks ago, it now produces the following foreboding message:
Traceback (most recent call last):
File "test.py", line 2, in <module>
driver = webdriver.Firefox()
File "c:\python27\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 60, in __init__
self.binary, timeout),
File "c:\python27\lib\site-packages\selenium\webdriver\firefox\extension_connection.py", line 47, in __init__
self.binary.launch_browser(self.profile)
File "c:\python27\lib\site-packages\selenium\webdriver\firefox\firefox_binary.py", line 61, in launch_browser
self._wait_until_connectable()
File "c:\python27\lib\site-packages\selenium\webdriver\firefox\firefox_binary.py", line 105, in _wait_until_connectable
self.profile.path, self._get_firefox_output()))
selenium.common.exceptions.WebDriverException: Message: 'Can\'t load the profile. Profile Dir: c:\\users\\douglas\\appdata\\local\\temp\\tmpuf4ipq Firefox output: *** LOG addons.xpi: startup\r\n*** WARN addons.xpi: Ignoring missing add-on in C:\\Program Files\\CheckPoint\\ZAForceField\\WOW64\\TrustChecker\r\n*** WARN addons.xpi: Ignoring missing add-on in C:\\ProgramData\\Norton\\{78CA3BF0-9C3B-40e1-B46D-38C877EF059A}\\NSM_2.9.5.20\\coFFFw\r\n*** LOG addons.xpi: Skipping unavailable install location app-system-local\r\n*** LOG addons.xpi: Skipping unavailable install location app-system-share\r\n*** LOG addons.xpi: checkForChanges\r\n*** LOG addons.xpi: No changes found\r\n*** Blocklist::_loadBlocklistFromFile: blocklist is disabled\r\n************************************************************\r\n* Call to xpconnect wrapped JSObject produced this error: *\r\n[Exception... "\'[JavaScript Error: "this._defaultEngine is null" {file: "resource://gre/components/nsSearchService.js" line: 3527}]\' when calling method: [nsIBrowserSearchService::currentEngine]" nsresult: "0x80570021 (NS_ERROR_XPC_JAVASCRIPT_ERROR_WITH_DETAILS)" location: "JS frame :: chrome://browser/content/search/search.xml :: get_currentEngine :: line 130" data: yes]\r\n************************************************************\r\n************************************************************\r\n* Call to xpconnect wrapped JSObject produced this error: *\r\n[Exception... "\'[JavaScript Error: "this._defaultEngine is null" {file: "resource://gre/components/nsSearchService.js" line: 3527}]\' when calling method: [nsIBrowserSearchService::currentEngine]" nsresult: "0x80570021 (NS_ERROR_XPC_JAVASCRIPT_ERROR_WITH_DETAILS)" location: "JS frame :: chrome://browser/content/search/search.xml :: get_currentEngine :: line 130" data: yes]\r\n************************************************************\r\n************************************************************\r\n* Call to xpconnect wrapped JSObject produced this error: *\r\n[Exception... "\'[JavaScript Error: "this._defaultEngine is null" {file: "resource://gre/components/nsSearchService.js" line: 3527}]\' when calling method: [nsIBrowserSearchService::currentEngine]" nsresult: "0x80570021 (NS_ERROR_XPC_JAVASCRIPT_ERROR_WITH_DETAILS)" location: "JS frame :: resource://app/components/nsBrowserGlue.js :: <TOP_LEVEL> :: line 354" data: yes]\r\n************************************************************\r\n************************************************************\r\n* Call to xpconnect wrapped JSObject produced this error: *\r\n[Exception... "\'[JavaScript Error: "this._defaultEngine is null" {file: "resource://gre/components/nsSearchService.js" line: 3527}]\' when calling method: [nsIBrowserSearchService::currentEngine]" nsresult: "0x80570021 (NS_ERROR_XPC_JAVASCRIPT_ERROR_WITH_DETAILS)" location: "JS frame :: resource://app/components/nsBrowserGlue.js :: <TOP_LEVEL> :: line 354" data: yes]\r\n************************************************************\r\n'
Does anyone know what this means, or what I can do to remedy the error and get the code to run as expected? I've found related error messages through Google searches, but nothing that has allowed me to resolve the issue.
For what it's worth, I can open a Chrome Webdriver without issue by changing the second line of the above to driver = webdriver.Chrome().
I'm using Python 2.7, Selenium 2.35.0 (I just ran "pip install selenium --upgrade") and Firefox 26.0 on a Windows 8 machine. Any tips or advice others can offer are most appreciated.
|
Selenium 2.35 is not compatible with Firefox 26. As the release notes say, FF 26 support was added in Selenium 2.39. You need to update to 2.39. Try pip install -U selenium instead.
|
How do I get warnings.warn to issue a warning and not ignore the line?
|
I'm trying to raise a DeprecationWarning, with a code snippet based on the example shown in the docs. http://docs.python.org/2/library/warnings.html#warnings.warn
Official
def deprecation(message):
    warnings.warn(message, DeprecationWarning, stacklevel=2)
Mine
import warnings
warnings.warn("This is a warnings.", DeprecationWarning, stacklevel=2) is None # returns True
I've tried removing the stacklevel argument, setting it to negative, 0, 2 and 20000. The warning is always silently swallowed: it doesn't issue a warning or raise an exception, it just ignores the line and returns None. The docs don't mention the criteria for ignoring. Calling warnings.warn with just a message correctly issues a UserWarning.
What can be causing this and how do I get warn to actually warn?
|
From the docs:
By default, Python installs several warning filters, which can be overridden by the command-line options passed to -W and calls to filterwarnings().
DeprecationWarning, PendingDeprecationWarning, and ImportWarning are ignored.
BytesWarning is ignored unless the -b option is given once or twice; in this case this warning is either printed (-b) or turned into an exception (-bb).
By default, DeprecationWarning is ignored. You can change the filters using the following:
warnings.simplefilter('always', DeprecationWarning)
Now your warnings should be printed:
>>> import warnings
>>> warnings.simplefilter('always', DeprecationWarning)
>>> warnings.warn('test', DeprecationWarning)
/home/guest/.env/bin/ipython:1: DeprecationWarning: test
#!/home/guest/.env/bin/python
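If you only want the filter changed temporarily, a sketch using the catch_warnings context manager, which restores the previous filter state on exit:
import warnings
with warnings.catch_warnings():
    warnings.simplefilter('always', DeprecationWarning)
    warnings.warn('test', DeprecationWarning)  # printed
warnings.warn('test', DeprecationWarning)      # ignored again under the default filters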
|
What is %pylab?
|
I keep seeing people use %pylab in various code snippets, particularly with IPython. However, I cannot see where %pylab is mentioned anywhere in Learning Python (and the few other Python books I have) and am not really sure what it means.
I'm sure the answer is simple, but can anyone enlighten me?
|
%pylab is a magic function in ipython.
Magic functions in ipython always begin with the percent sign (%) followed, without any spaces, by a small text string; in essence, ipython magic functions define shortcuts particularly useful for interactive work.
To give you an idea of how magic functions work in ipython, here are a few of my favorites:
to view cwd directory contents:
%ls
to run a script in ipython using an empty namespace, type %run followed by a space and the script name:
%run
to execute a code snippet (particularly for multi-line snippets which would usually cause an IndentationError to be thrown):
%paste
When the %pylab magic function is entered at the IPython prompt, it triggers
the import of various modules within Matplotlib.
Which modules? well, the ones subsumed under the pylab interface.
The awesome Matplotlib plotting library has two distinct interfaces: a pythonic one, and the original MATLAB-like one intended for plotting at the interactive prompt.
The former is usually imported like so:
from matplotlib import pyplot as PLT
Indeed, there is also a dedicated magic function for interactive use of the pythonic interface, %matplotlib, which sets up interactive plotting without performing the bulk namespace import.
Why two different interfaces? Matplotlib's original interface was pylab; only later was the pythonic interface added. Scripting and app development were not the primary use cases for Matplotlib when the project began; plotting in the python shell was.
Apparently John Hunter (Matplotlib's creator) wanted to include interactive plotting in python, so he submitted a patch to Fernando Perez's (FP) IPython project. FP was a Ph.D. student at the time and informed JH that he would not be able to review the patch for some time. As a result, JH created Matplotlib. The significance is that Matplotlib began as a shell-based plotting scheme.
the pylab interface is indeed more suitable for interactive work:
from pylab import *
x = arange(10)
y = cos(x/2)
plot(x, y)
show()
and using the pyplot interface:
from matplotlib import pyplot as PLT
import numpy as NP
x = NP.arange(10)
y = NP.cos(x/2)
fig = PLT.figure()
ax1 = fig.add_subplot(111)
ax1.plot(x, y)
PLT.show()
|
Cumulative sum and percentage on column?
|
I have a DataFrame like this:
df:
fruit val1 val2
0 orange 15 3
1 apple 10 13
2 mango 5 5
How do I get Pandas to give me a cumulative sum and percentage column on only val1?
Desired output:
df_with_cumsum:
fruit val1 val2 cum_sum cum_perc
0 orange 15 3 15 50.00
1 apple 10 13 25 83.33
2 mango 5 5 30 100.00
I tried df.cumsum(), but it's giving me this error:
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
|
df['cum_sum'] = df.val1.cumsum()
df['cum_perc'] = 100*df.cum_sum/df.val1.sum()
This will add the columns to df. If you want a copy, copy df first and then do these operations on the copy. (The TypeError from df.cumsum() comes from trying to take a cumulative sum over the non-numeric fruit column as well; operating on val1 alone avoids that.)
|
Exponentials in python: x**y vs math.pow(x, y)
|
Which one is more efficient: math.pow or the ** operator? When should I use one over the other?
So far I know that x**y can return an int, or a float if you use a decimal;
the function math.pow will always return a float:
import math
print math.pow(10, 2)
print 10. ** 2
|
Using the power operator ** will be faster as it won't have the overhead of a function call. You can see this if you disassemble the Python code:
>>> dis.dis('7. ** i')
1 0 LOAD_CONST 0 (7.0)
3 LOAD_NAME 0 (i)
6 BINARY_POWER
7 RETURN_VALUE
>>> dis.dis('pow(7., i)')
1 0 LOAD_NAME 0 (pow)
3 LOAD_CONST 0 (7.0)
6 LOAD_NAME 1 (i)
9 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
12 RETURN_VALUE
>>> dis.dis('math.pow(7, i)')
1 0 LOAD_NAME 0 (math)
3 LOAD_ATTR 1 (pow)
6 LOAD_CONST 0 (7)
9 LOAD_NAME 2 (i)
12 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
15 RETURN_VALUE
Note that I'm using a variable i as the exponent here because constant expressions like 7. ** 5 are actually evaluated at compile time.
Now, in practice, this difference does not matter that much, as you can see when timing it:
>>> from timeit import timeit
>>> timeit('7. ** i', setup='i = 5')
0.2894785532627111
>>> timeit('pow(7., i)', setup='i = 5')
0.41218495570683444
>>> timeit('math.pow(7, i)', setup='import math; i = 5')
0.5655053168791255
So, while pow and math.pow are about twice as slow, they are still fast enough to not care much. Unless you can actually identify the exponentiation as a bottleneck, there won't be a reason to choose one method over the other if clarity decreases. This especially applies since pow offers an integrated modulo operation, for example.
Alfe asked a good question in the comments above:
timeit shows that math.pow is slower than ** in all cases. What is math.pow() good for anyway? Has anybody an idea where it can be of any advantage then?
The big difference of math.pow to both the builtin pow and the power operator ** is that it always uses float semantics. So if you, for some reason, want to make sure you get a float as a result back, then math.pow will ensure this property.
Let's think of an example: We have two numbers, i and j, and have no idea if they are floats or integers. But we want to have a float result of i^j. So what options do we have?
We can convert at least one of the arguments to a float and then do i ** j.
We can do i ** j and convert the result to a float (float exponentiation is automatically used when either i or j is a float, so the result is the same).
We can use math.pow.
So, let's test this:
>>> timeit('float(i) ** j', setup='i, j = 7, 5')
0.7610865891750791
>>> timeit('i ** float(j)', setup='i, j = 7, 5')
0.7930400942188385
>>> timeit('float(i ** j)', setup='i, j = 7, 5')
0.8946636625872202
>>> timeit('math.pow(i, j)', setup='import math; i, j = 7, 5')
0.5699394063529439
As you can see, math.pow is actually faster! And if you think about it, the overhead from the function call is also gone now, because in all the other alternatives we have to call float().
In addition, it might be worth noting that the behavior of ** and pow can be overridden by implementing the special __pow__ (and __rpow__) methods for custom types. So if you don't want that (for whatever reason), using math.pow won't do that.
|
how to do a left,right and mid of a string in a pandas dataframe
|
In a pandas dataframe, how can I apply something like Excel's LEFT('state', 2) to take only the first two letters? Ideally I want to learn how to use LEFT, RIGHT, and MID on a dataframe too, so I need an equivalent and not a "trick" for this specific example.
data = {'state': ['Auckland', 'Otago', 'Wellington', 'Dunedin', 'Hamilton'],
        'year': [2000, 2001, 2002, 2001, 2002],
        'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = pd.DataFrame(data)
print df
pop state year
0 1.5 Auckland 2000
1 1.7 Otago 2001
2 3.6 Wellington 2002
3 2.4 Dunedin 2001
4 2.9 Hamilton 2002
I want to get this:
pop state year StateInitial
0 1.5 Auckland 2000 Au
1 1.7 Otago 2001 Ot
2 3.6 Wellington 2002 We
3 2.4 Dunedin 2001 Du
4 2.9 Hamilton 2002 Ha
|
First two letters for each value in a column:
>>> df['StateInitial'] = df['state'].str[:2]
>>> df
pop state year StateInitial
0 1.5 Auckland 2000 Au
1 1.7 Otago 2001 Ot
2 3.6 Wellington 2002 We
3 2.4 Dunedin 2001 Du
4 2.9 Hamilton 2002 Ha
For the last two characters, that would be df['state'].str[-2:]. I don't know exactly what you want for the middle, but you can apply an arbitrary function to a column with the apply method:
>>> df['state'].apply(lambda x: x[len(x)/2-1:len(x)/2+1])
0 kl
1 ta
2 in
3 ne
4 il
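For a closer analogue of Excel's MID, the .str accessor also provides slice; for example, taking two characters starting at the second one (the positions are just illustrative):
>>> df['state'].str.slice(1, 3)  # like MID(state, 2, 2)
0    uc
1    ta
2    el
3    un
4    am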
|
Ensuring py.test includes the application directory in sys.path
|
I have a project directory structure as follows (which I think is pretty standard):
my_project
setup.py
mypkg
__init__.py
foo.py
tests
functional
test_f1.py
unit
test_u1.py
I'm using py.test for my testing framework, and I'd expect to be able to run py.test tests when in the my_project directory to run my tests. This does indeed work, until I try to import my application code using (for example) import mypkg in a test. At that point, I get the error "No module named mypkg". On doing a bit of investigation, it appears that py.test runs the tests with the directory of the test file in sys.path, but not the directory that py.test was run from.
In order to work around this, I have added a conftest.py file to my tests directory, containing the following code:
import sys, os
# Make sure that the application source directory (this directory's parent) is
# on sys.path.
here = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, here)
This seems to work, but is it a good way of making sure that the tests see the application code? Is there a better way of achieving this, or am I doing something wrong in how I have my project structured?
I've looked at some other projects that use py.test (for example, pip) but I can't see code that does anything like this, and yet running py.test tests seems to work there. I don't know quite why, but I'm worried that they may have achieved the same result in a simpler way.
I've looked in the py.test documentation, but I can't see an explanation of this problem or what the recommended approach is to deal with it.
|
As you say yourself, py.test basically assumes you have the PYTHONPATH set up correctly. There are several ways of achieving this:
Give your project a setup.py and use pip install -e . in a virtualenv for this project. This is probably the standard method.
As a variation on this, if you have a virtualenv but no setup.py, use your venv's facility to add the project's directory to sys.path, e.g. pew add . if you use pew, or add2virtualenv . if you use virtualenv and the extensions of virtualenvwrapper.
If you always want the current working directory on sys.path, you can simply always export PYTHONPATH='' in your shell. That is, ensure the empty string is on sys.path, which python will interpret as the current working directory. This is potentially a security hazard though.
My own favourite hack, abuse how py.test loads conftest files: put an empty conftest.py in the project's top-level directory.
The reason for py.test to behave this way is to make it easy to run the tests in a tests/ directory of a checkout against an installed package. If it would unconditionally add the project directory to the PYTHONPATH then this would not be possible anymore.
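For the first option, a minimal setup.py sketch (the name and version are placeholders) could be:
from setuptools import setup, find_packages
setup(
    name='my_project',
    version='0.1',
    packages=find_packages(exclude=['tests', 'tests.*']),
)
After which pip install -e . in the project root makes mypkg importable from anywhere in the virtualenv.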
|
How to invoke external scripts/programs from node.js
|
I have a C++ program and a Python script that I want to incorporate into my node.js web app.
I want to use them to parse the files that are uploaded to my site; it may take a few seconds to process, so I would like to avoid blocking the app as well.
How can I just accept the file then just run the C++ program and script in a sub-process from a node.js controller?
|
See child_process. Here is an example using spawn, which allows you to write to stdin and read from stderr/stdout as data is output. If you have no need to write to stdin and you can handle all output when the process completes, child_process.exec offers a slightly shorter syntax to execute a command.
// with express 3.x
var express = require('express');
var app = express();
app.use(express.logger('dev'));
app.use(express.bodyParser());
app.use(app.router);

app.post('/upload', function(req, res){
  if(req.files.myUpload){
    var python = require('child_process').spawn(
      'python',
      // second argument is array of parameters, e.g.:
      ["/home/me/pythonScript.py"
      , req.files.myUpload.path
      , req.files.myUpload.type]
    );
    var output = "";
    python.stdout.on('data', function(data){ output += data });

    python.on('close', function(code){
      if (code !== 0) {
        return res.send(500, code);
      }
      return res.send(200, output);
    });
  } else { res.send(500, 'No file found') }
});

require('http').createServer(app).listen(3000, function(){
  console.log('Listening on 3000');
});
|
Cannot complete Flask-Migration
|
I've setup a local Postgres DB with SQLAlchemy and cannot commit my first entry. I keep on getting this error...
ProgrammingError: (ProgrammingError) relation "user" does not exist
LINE 1: INSERT INTO "user" (name, email, facebook_id, facebook_token...
It seems like the fields aren't matching to those in the database. I'm trying to migrate using flask-migrate but, when I run $ python app.py db migrate I get this error...
raise util.CommandError("No such revision '%s'" % id_)
alembic.util.CommandError: No such revision '39408d6b248d'
It may be best to delete everything and start from scratch as it seems I have botched my database setup and / or migration but I'm not sure how to.
UPDATE: The database has started working now (I dropped and created it again). However, I'm still getting the same error trying to run migrations and it turns out the "no such revision '39408d6b248d' is referring to a migration from an unrelated project. I re-installed flask-migrate but same error.
|
flask-migrate will create a table named "alembic_version" in your database,
so you should drop this table and delete the migrations folder in your project,
and then use $ python app.py db init again.
I think $ python app.py db migrate will then work fine.
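A sketch of the full reset, assuming a Postgres database and the default migrations folder (the database name is a placeholder):
$ psql your_database -c 'DROP TABLE alembic_version;'
$ rm -rf migrations/
$ python app.py db init
$ python app.py db migrate
$ python app.py db upgrade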
|
(list|set|dict) comprehension containing a yield expression does not return a (list|set|dict)
|
Python 3.3
I've constructed this slightly cryptic piece of python 3.3:
>>> [(yield from (i, i + 1, i)) for i in range(5)]
<generator object <listcomp> at 0x0000008666D96900>
>>> list(_)
[0, 1, 0, 1, 2, 1, 2, 3, 2, 3, 4, 3, 4, 5, 4]
If I use a generator comprehension inside a list constructor, I get a different result:
>>> list((yield from (i, i + 1, i)) for i in range(5))
[0, 1, 0, None, 1, 2, 1, None, 2, 3, 2, None, 3, 4, 3, None, 4, 5, 4, None]
Why isn't the list comprehension returning a list?
Python 2.7
I can get a similarly odd effect in python 2 (using a set comprehension, because list comprehensions have odd scope):
>>> {(yield i) for i in range(5)}
<generator object <setcomp> at 0x0000000004A06120>
>>> list(_)
[0, 1, 2, 3, 4, {None}]
And when using a generator comprehension:
>>> list((yield i) for i in range(5))
[0, None, 1, None, 2, None, 3, None, 4, None]
Where'd that {None} come from?
|
Using this as a reference:
Python 3 explanation
This:
values = [(yield from (i, i + 1, i)) for i in range(5)]
Translates to the following in Python 3.x:
def _tmpfunc():
    _tmp = []
    for i in range(5):
        _tmp.append((yield from (i, i + 1, i)))
    return _tmp
values = _tmpfunc()
This results in values containing a generator.
That generator will then yield from each (i, i + 1, i), until finally reaching the return statement. In python 3, this will throw StopIteration(_tmp) - however, this exception is ignored by the list constructor.
On the other hand, this:
list((yield from (i, i + 1, i)) for i in range(5))
Translates to the following in Python 3.x:
def _tmpfunc():
    for i in range(5):
        yield (yield from (i, i + 1, i))
values = list(_tmpfunc())
This time, every time the yield from completes, it evaluates to None, which is then yielded amidst the other values.
|
How to pretty print in ipython notebook via sympy? pprint only prints Unicode version
|
from sympy import symbols, Function, pi
import sympy.functions as sym
from sympy import init_printing
init_printing(use_latex=True)
from sympy import pprint
from sympy import Symbol
x = Symbol('x')
# If a cell contains only the following, it will render perfectly.
(pi + x)**2
# However I would like to control what to print in a function,
# so that multiple expressions can be printed from a single notebook cell.
pprint((pi + x)**2)
I tried pprint and print; the former only prints the Unicode version, and the latter doesn't do pretty printing.
|
You need to use display:
from IPython.display import display
display(yourobject)
It will choose the appropriate representation (text/LaTex/png...)
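Continuing the question's setup, a sketch that emits several pretty-printed expressions from a single cell:
from sympy import Symbol, pi, init_printing
from IPython.display import display
init_printing(use_latex=True)
x = Symbol('x')
def show_both(expr):
    display(expr**2)        # each display call renders its own LaTeX output
    display((expr + 1)**2)
show_both(pi + x)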
|
Efficient dot products of large memory-mapped arrays
|
I'm working with some rather large, dense numpy float arrays that currently reside on disk in PyTables CArrays. I need to be able to perform efficient dot products using these arrays, for example C = A.dot(B), where A is a huge (~1E4 x 3E5 float32) memory-mapped array, and B and C are smaller numpy arrays that are resident in core memory.
What I'm doing at the moment is copying the data into memory-mapped numpy arrays using np.memmap, then calling np.dot directly on the memory-mapped arrays. This works, but I suspect that the standard np.dot (or rather the underlying BLAS functions it calls) is probably not very efficient in terms of the number of I/O operations required in order to compute the result.
I came across an interesting example in this review article. A naive dot product computed using 3x nested loops, like this:
def naive_dot(A, B, C):
    for ii in xrange(n):
        for jj in xrange(n):
            C[ii,jj] = 0
            for kk in xrange(n):
                C[ii,jj] += A[ii,kk]*B[kk,jj]
    return C
requires O(n^3) I/O operations to compute.
However, by processing the arrays in appropriately-sized blocks:
def block_dot(A, B, C, M):
    b = sqrt(M / 3)
    for ii in xrange(0, n, b):
        for jj in xrange(0, n, b):
            C[ii:ii+b,jj:jj+b] = 0
            for kk in xrange(0, n, b):
                C[ii:ii+b,jj:jj+b] += naive_dot(A[ii:ii+b,kk:kk+b],
                                                B[kk:kk+b,jj:jj+b],
                                                C[ii:ii+b,jj:jj+b])
    return C
where M is the maximum number of elements that will fit into core memory, the number of I/O operations is reduced to O(n^3 / sqrt(M)).
How smart is np.dot and/or np.memmap? Does calling np.dot perform an I/O-efficient blockwise dot product? Does np.memmap do any fancy caching that would improve the efficiency of this type of operation?
If not, is there some pre-existing library function that performs I/O efficient dot products, or should I try and implement it myself?
Update
I've done some benchmarking with a hand-rolled implementation of np.dot that operates on blocks of the input array, which are explicitly read into core memory. This data at least partially addresses my original question, so I'm posting it as an answer.
|
I've implemented a function for applying np.dot to blocks that are explicitly read into core memory from the memory-mapped array:
import numpy as np

def _block_slices(dim_size, block_size):
    """Generator that yields slice objects for indexing into
    sequential blocks of an array along a particular axis
    """
    count = 0
    while True:
        yield slice(count, count + block_size, 1)
        count += block_size
        if count > dim_size:
            raise StopIteration

def blockwise_dot(A, B, max_elements=int(2**27), out=None):
    """
    Computes the dot product of two matrices in a block-wise fashion.
    Only blocks of `A` with a maximum size of `max_elements` will be
    processed simultaneously.
    """
    m, n = A.shape
    n1, o = B.shape
    if n1 != n:
        raise ValueError('matrices are not aligned')
    if A.flags.f_contiguous:
        # prioritize processing as many columns of A as possible
        max_cols = max(1, max_elements / m)
        max_rows = max_elements / max_cols
    else:
        # prioritize processing as many rows of A as possible
        max_rows = max(1, max_elements / n)
        max_cols = max_elements / max_rows
    if out is None:
        out = np.empty((m, o), dtype=np.result_type(A, B))
    elif out.shape != (m, o):
        raise ValueError('output array has incorrect dimensions')
    for mm in _block_slices(m, max_rows):
        out[mm, :] = 0
        for nn in _block_slices(n, max_cols):
            A_block = A[mm, nn].copy()  # copy to force a read
            out[mm, :] += np.dot(A_block, B[nn, :])
            del A_block
    return out
I then did some benchmarking to compare my blockwise_dot function to the normal np.dot function applied directly to a memory-mapped array (see below for the benchmarking script). I'm using numpy 1.9.0.dev-205598b linked against OpenBLAS v0.2.9.rc1 (compiled from source). The machine is a quad-core laptop running Ubuntu 13.10, with 8GB RAM and an SSD, and I've disabled the swap file.
Results
As @Bi Rico predicted, the time taken to compute the dot product is beautifully O(n) with respect to the dimensions of A. Operating on cached blocks of A gives a huge performance improvement over just calling the normal np.dot function on the whole memory-mapped array:
It's surprisingly insensitive to the size of the blocks being processed - there's very little difference between the time taken to process the array in blocks of 1GB, 2GB or 4GB. I conclude that whatever caching np.memmap arrays natively implement, it seems to be very suboptimal for computing dot products.
Further questions
It's still a bit of a pain to have to manually implement this caching strategy, since my code will probably have to run on machines with different amounts of physical memory, and potentially different operating systems. For that reason I'm still interested in whether there are ways to control the caching behaviour of memory-mapped arrays in order to improve the performance of np.dot.
I noticed some odd memory handling behaviour as I was running the benchmarks - when I called np.dot on the whole of A I never saw the resident set size of my Python process exceed about 3.8GB, even though I have about 7.5GB of RAM free. This leads me to suspect that there is some limit imposed on the amount of physical memory an np.memmap array is allowed to occupy - I had previously assumed that it would use whatever RAM the OS allows it to grab. In my case it might be very beneficial to be able to increase this limit.
Does anyone have any further insight into the caching behaviour of np.memmap arrays that would help to explain this?
Benchmarking script
# os and time are needed below; np and _block_slices come from the code above
import os
import time

def generate_random_mmarray(shape, fp, max_elements):
    A = np.memmap(fp, dtype=np.float32, mode='w+', shape=shape)
    max_rows = max(1, max_elements / shape[1])
    max_cols = max_elements / max_rows
    for rr in _block_slices(shape[0], max_rows):
        for cc in _block_slices(shape[1], max_cols):
            A[rr, cc] = np.random.randn(*A[rr, cc].shape)
    return A

def run_bench(n_gigabytes=np.array([16]), max_block_gigabytes=6, reps=3,
              fpath='temp_array'):
    """
    time C = A * B, where A is a big (n, n) memory-mapped array, and B and C are
    (n, o) arrays resident in core memory
    """
    standard_times = []
    blockwise_times = []
    differences = []
    nbytes = n_gigabytes * 2 ** 30
    o = 64
    # float32 elements
    max_elements = int((max_block_gigabytes * 2 ** 30) / 4)
    for nb in nbytes:
        # float32 elements
        n = int(np.sqrt(nb / 4))
        with open(fpath, 'w+') as f:
            A = generate_random_mmarray((n, n), f, (max_elements / 2))
            B = np.random.randn(n, o).astype(np.float32)
            print "\n" + "-"*60
            print "A: %s\t(%i bytes)" %(A.shape, A.nbytes)
            print "B: %s\t\t(%i bytes)" %(B.shape, B.nbytes)
            best = np.inf
            for _ in xrange(reps):
                tic = time.time()
                res1 = np.dot(A, B)
                t = time.time() - tic
                best = min(best, t)
            print "Normal dot:\t%imin %.2fsec" %divmod(best, 60)
            standard_times.append(best)
            best = np.inf
            for _ in xrange(reps):
                tic = time.time()
                res2 = blockwise_dot(A, B, max_elements=max_elements)
                t = time.time() - tic
                best = min(best, t)
            print "Block-wise dot:\t%imin %.2fsec" %divmod(best, 60)
            blockwise_times.append(best)
            diff = np.linalg.norm(res1 - res2)
            print "L2 norm of difference:\t%g" %diff
            differences.append(diff)
        del A, B
        del res1, res2
        os.remove(fpath)
    return (np.array(standard_times), np.array(blockwise_times),
            np.array(differences))

if __name__ == '__main__':
    n = np.logspace(2,5,4,base=2)
    standard_times, blockwise_times, differences = run_bench(
        n_gigabytes=n,
        max_block_gigabytes=4)
    np.savez('bench_results', standard_times=standard_times,
             blockwise_times=blockwise_times, differences=differences)
|
Topic distribution: How do we see which document belong to which topic after doing LDA in python
|
I am able to run the LDA code from gensim and got the top 10 topics with their respective keywords.
Now I would like to go a step further to see how accurate the LDA algorithm is by seeing which documents it clusters into each topic. Is this possible in gensim LDA?
Basically I would like to do something like this, but in python and using gensim.
LDA with topicmodels, how can I see which topics different documents belong to?
|
Using the probabilities of the topics, you can try to set some threshold and use it as a clustering baseline, but I am sure there are better ways to do clustering than this 'hacky' method.
from gensim import corpora, models, similarities
from itertools import chain

""" DEMO """
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# remove words that appear only once
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once] for text in texts]

# Create Dictionary.
id2word = corpora.Dictionary(texts)
# Creates the Bag of Word corpus.
mm = [id2word.doc2bow(text) for text in texts]

# Trains the LDA models.
lda = models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=3,
                               update_every=1, chunksize=10000, passes=1)

# Prints the topics.
for top in lda.print_topics():
    print top
print

# Assigns the topics to the documents in corpus
lda_corpus = lda[mm]

# Find the threshold, let's set the threshold to be 1/#clusters,
# To prove that the threshold is sane, we average the sum of all probabilities:
scores = list(chain(*[[score for topic_id,score in topic]
                      for topic in [doc for doc in lda_corpus]]))
threshold = sum(scores)/len(scores)
print threshold
print

cluster1 = [j for i,j in zip(lda_corpus,documents) if i[0][1] > threshold]
cluster2 = [j for i,j in zip(lda_corpus,documents) if i[1][1] > threshold]
cluster3 = [j for i,j in zip(lda_corpus,documents) if i[2][1] > threshold]

print cluster1
print cluster2
print cluster3
[out]:
0.131*trees + 0.121*graph + 0.119*system + 0.115*user + 0.098*survey + 0.082*interface + 0.080*eps + 0.064*minors + 0.056*response + 0.056*computer
0.171*time + 0.171*user + 0.170*response + 0.082*survey + 0.080*computer + 0.079*system + 0.050*trees + 0.042*graph + 0.040*minors + 0.040*human
0.155*system + 0.150*human + 0.110*graph + 0.107*minors + 0.094*trees + 0.090*eps + 0.088*computer + 0.087*interface + 0.040*survey + 0.028*user
0.333333333333
['The EPS user interface management system', 'The generation of random binary unordered trees', 'The intersection graph of paths in trees', 'Graph minors A survey']
['A survey of user opinion of computer system response time', 'Relation of user perceived response time to error measurement']
['Human machine interface for lab abc computer applications', 'System and human system engineering testing of EPS', 'Graph minors IV Widths of trees and well quasi ordering']
Just to make it clearer:
# Find the threshold, let's set the threshold to be 1/#clusters,
# To prove that the threshold is sane, we average the sum of all probabilities:
scores = []
for doc in lda_corpus:
for topic in doc:
for topic_id, score in topic:
scores.append(score)
threshold = sum(scores)/len(scores)
The above code sums the scores of all words across all topics for all documents, then normalizes the sum by the number of scores.
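For a simpler hard assignment than the threshold trick, you could also take the most probable topic per document; a minimal sketch using only the lda_corpus and documents defined above:
for i, doc_topics in enumerate(lda_corpus):
    # doc_topics is a list of (topic_id, probability) pairs for document i
    best_topic = max(doc_topics, key=lambda item: item[1])[0]
    print documents[i], "->", best_topic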
|
How can I scroll a web page using selenium webdriver in python?
|
I am currently using selenium webdriver to parse through a Facebook user's friends page and extract all ids from the AJAX script. But I need to scroll down to get all the friends. How can I scroll down in Selenium? I am using Python.
|
You can use driver.execute_script to run JavaScript in the browser, for example:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
to scroll to the bottom of the page.
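For pages like a Facebook friends list that load more content as you scroll, a common pattern is to keep scrolling until the page height stops growing; a minimal sketch (the 2-second wait is an assumption you may need to tune):
import time
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)  # give the AJAX content time to load
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # no new content loaded, so we reached the bottom
    last_height = new_height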
|
Cython: (Why / When) Is it preferable to use Py_ssize_t for indexing?
|
This is a follow-up to this question.
(Why / When) Is it preferable to use Py_ssize_t for indexing? In the docs I just found
# Purists could use "Py_ssize_t" which is the proper Python type for
# array indices.
-> Does that mean that whenever one indexes NumPy/Cython arrays/views, one should use Py_ssize_t?
-> Is Py_ssize_t e.g. an unsigned int, so that I can't use @cython.boundscheck(False)?
|
Py_ssize_t is signed. See PEP 353, where it says "A new type Py_ssize_t is introduced, which has the same size as the compiler's size_t type, but is signed. It will be a typedef for ssize_t where available."
You should use Py_ssize_t for indexing. I didn't find a definitive statement of this in the Cython docs, but Stefan Behnel, a Cython developer, said as much in an email (https://groups.google.com/forum/#!topic/cython-users/brENF_M9zxM):
As a general remark, you are using ints as indices. You should
use the Py_ssize_t type instead (or size_t, if you prefer an
unsigned type) to properly accommodate for 64 bit architectures.
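For illustration, a minimal sketch of indexing a typed memoryview with Py_ssize_t (the function and argument names are made up); since Py_ssize_t is signed, it is safe to combine with @cython.boundscheck(False) as long as you know your indices are in range:
cimport cython

@cython.boundscheck(False)
def arr_sum(double[:] arr):
    cdef Py_ssize_t i
    cdef double total = 0.0
    for i in range(arr.shape[0]):
        total += arr[i]
    return total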
|
Python Dictionaries vs Javascript Objects
|
I'm new to Python and I was reading about dictionaries. From my previous experience with languages like JavaScript, they seemed like objects to me. Dictionaries can store lists and share many similarities with objects in JavaScript.
ex python code:
menu = {}
menu['Chicken Alfredo'] = 14.50
menu['Italian Pasta'] = 15.89
menu['Shrimp Soup'] = 12.43
menu['Persian Rice'] = 21.99
ex javascript code:
var menu = new Object();
menu['Chicken Alfredo'] = 14.50;
menu['Italian Pasta'] = 15.89;
menu['Shrimp Soup'] = 12.43;
menu['Persian Rice'] = 21.99;
What's the difference here? They both do the same job, but are they different concepts?
|
From :
In Python, dictionaries are a form of mapping type. They can be
initialized using a sequence of comma separated name:value pairs,
enclosed in curly braces. The are accessed using array notation
involving square braces. The key can be any hashable, including
numbers and strings.
In Javascript, a dictionary is the same as an object. It can be
initialized using the same syntax as Python. The key can be a number,
a string, or an identifier. Because the dictionary is also an object,
the elements can be accessed either using array notation, e.g. b[i],
or using property notation, e.g. b.i.
Consider an identifier used in an initializer, such as
b = {i:j}
In Python both i and j are evaluated, but in Javascript, only j is evaluated. In Javascript you also have the
privilege of writing in the dot notation, using the identifier i.
Hence in Python,
i='k'
j=1
b = {i:j}
b['k'] -> 1
In Javascript,
i='k'
j=1
b = {i:j}
b['i'] // -> 1
b.i // -> 1
// b[i], b['k'] and b.k are not defined
In Javascript, using the identifier in dot notation is completely identical in all cases to
using a string that "looks like" the identifier in array notation.
Hence,
b = { 'i':1 } ;
b['i'] // -> 1
b.i // -> 1
When a number or boolean is used in a dictionary, Javascript will access the element using a string representation of the number or boolean. Not so in Python: a string and a number (or boolean) are different hashables.
If you are interested in the differences between the two languages, then look at this answer.
|
Why does Python need rich comparison?
|
There is something that has confused me for some time: is there a scenario where we do need to use rich comparison in Python?
I read the official doc here, but it only explains how it works, not why we need it.
A snippet of the doc:
The truth of x==y does not imply that x!=y is false.
This may describe a scenario where we need rich comparison. In that scenario, we can make __eq__ and __ne__ both return False, for disabling the comparison or for any other purpose. (We can't implement this by using __cmp__.)
But this is just a guess; I have never encountered such a requirement in a real project yet.
Does anyone need to use rich comparison indeed or is there any other scenario where we need to use rich comparison in theory?
Maybe my example of x==y and x!=y caused some confusion, sorry for that.
Let me make it a bit clearer:
Is there any scenario where rich comparison can help but __cmp__ cannot?
|
NumPy uses rich comparisons to vectorize ==, !=, <, etc, just like it does with most other operators. For example,
>>> x = numpy.array([1, 2, 3, 4, 5])
>>> y = numpy.array([2, 2, 1, 4, 4])
>>> x == y
array([False, True, False, True, False], dtype=bool)
When arrays x and y are compared with any comparison operator, NumPy applies the operator (roughly) elementwise and returns an array of results. This is useful, for example, to apply an operation to the cells of x that fit the condition:
>>> x[x==y] = 6
>>> x
array([1, 6, 3, 6, 5])
Here, I've selected all elements of x that equal the corresponding elements of y, and set them equal to 6. Note that __cmp__ could not support this: it has to collapse each comparison into a single integer, whereas the rich comparison methods are free to return any object, such as the boolean array here.
|
Reverse for 'index' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
|
I'm trying to get django-registration to work on my website, but I keep getting this error which I do not understand.
I'm using django 1.6 on Python 3.3
NoReverseMatch at /accounts/register/
Reverse for 'index' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
Request Method: GET
Request URL: http://127.0.0.1:8000/accounts/register/
Django Version: 1.6.1
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'index' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
Exception Location: D:\Programming\Py33\lib\site-packages\django\core\urlresolvers.py in _reverse_with_prefix, line 429
Python Executable: D:\Programming\Py33\python.exe
Python Version: 3.3.3
Python Path:
['D:\\Programming\\GItHub\\photobyte\\PhotoByte',
'D:\\Programming\\Py33\\lib\\site-packages\\setuptools-2.0.3dev-py3.3.egg',
'C:\\WINDOWS\\SYSTEM32\\python33.zip',
'D:\\Programming\\Py33\\DLLs',
'D:\\Programming\\Py33\\lib',
'D:\\Programming\\Py33',
'D:\\Programming\\Py33\\lib\\site-packages']
Server time: Wed, 8 Jan 2014 02:49:17 -0800
Error during template rendering
This is the HTML template that is erroring. It's complaining about line 14:
In template D:\Programming\GItHub\photobyte\PhotoByte\templates\base.html, error at line 14
Reverse for 'index' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
4 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
5
6 <head>
7 <link rel="stylesheet" href="/style.css" />
8 <title>{% block title %}User test{% endblock %}</title>
9 </head>
10
11 <body>
12 <div id="header">
13 {% block header %}
14 <a href="{% url 'index' %}">{% trans "Home" %}</a> |
15
16 {% if user.is_authenticated %}
17 {% trans "Logged in" %}: {{ user.username }}
18 (<a href="{% url 'auth_logout' %}">{% trans "Log out" %}</a> |
19 <a href="{% url 'auth_password_change' %}">{% trans "Change password" %}</a>)
20 {% else %}
21 <a href="{% url 'auth_login' %}">{% trans "Log in" %}</a>
22 {% endif %}
23 <hr />
24 {% endblock %}
This is the urls.py for my project:
urlpatterns = patterns('',
(r'^ImageUpload/', include('ImageUpload.urls')),
(r'^accounts/', include('registration.backends.default.urls')),
(r'^$', RedirectView.as_view(url='/ImageUpload/list/')), # Just for ease of use.
) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
Can someone explain what the error means please?
|
It's complaining about line 14 because Django is unable to find a URL named "index" in your urls.py files. I don't see a URL named "index" above. Where/what is the URL pattern for your home page?
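For example, a sketch assuming the redirect in your urlpatterns is meant to be the home page; giving that pattern name='index' is what lets {% url 'index' %} resolve:
from django.conf.urls import patterns, include, url

urlpatterns = patterns('',
    (r'^ImageUpload/', include('ImageUpload.urls')),
    (r'^accounts/', include('registration.backends.default.urls')),
    # name='index' is the name {% url 'index' %} looks up
    url(r'^$', RedirectView.as_view(url='/ImageUpload/list/'), name='index'),
) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)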
|
What is the difference between pip and conda?
|
I know pip is a package manager for Python packages. However, I saw the installation instructions on IPython's website use conda to install IPython.
Can I use pip to install iPython? Why should I use conda as another python package manager when I already have pip?
What is the difference between pip and conda?
|
Quoting from the Conda blog:
Having been involved in the python world for so long, we are all aware of pip, easy_install, and virtualenv, but these tools did not meet all of our specific requirements. The main problem is that they are focused around Python, neglecting non-Python library dependencies, such as HDF5, MKL, LLVM, etc., which do not have a setup.py in their source code and also do not install files into Python's site-packages directory.
So Conda is a packaging tool and installer that aims to do more than what pip does; handle library dependencies outside of the Python packages as well as the Python packages themselves. Conda also creates a virtual environment, like virtualenv does.
As such, Conda should be compared to Buildout perhaps, another tool that lets you handle both Python and non-Python installation tasks.
Because Conda introduces a new packaging format, you cannot use pip and Conda interchangeably; pip cannot install the Conda package format. You can use the two tools side by side but they do not interoperate either.
|
Python Pandas counting and summing specific conditions
|
Are there single functions in pandas to perform the equivalents of Excel's SUMIF, which sums over a specific condition, and COUNTIF, which counts values meeting specific conditions?
I know that there are multiple-step ways to do this; for example, for sumif I can use df.map(lambda x: condition) or df.size() and then .sum(), and for countif I can use groupby functions and look for my answer, or use a filter and .count().
Is there a simple one-step process for these functions, where you enter the condition and the data frame and you get the sum or counted results?
|
You can first make a conditional selection, and sum up the results of the selection using the sum function.
>> df = pd.DataFrame({'a': [1, 2, 3]})
>> df[df.a > 1].sum()
a 5
dtype: int64
Having more than one condition:
>> df[(df.a > 1) & (df.a < 3)].sum()
a 2
dtype: int64
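For the COUNTIF analogue, summing a boolean selection counts the True values, so no separate function is needed:
>> (df.a > 1).sum()       # COUNTIF: number of rows where a > 1
2
>> df.a[df.a > 1].sum()   # SUMIF restricted to a single column
5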
|
What does the Brown clustering algorithm output mean?
|
I've run the Brown clustering algorithm from https://github.com/percyliang/brown-cluster and also a Python implementation https://github.com/mheilman/tan-clustering. Both give some sort of binary string and an integer for each unique token. For example:
0 the 6
10 chased 3
110 dog 2
1110 mouse 2
1111 cat 2
What do the binary string and the integer mean?
From the first link, the binary is known as a bit-string, see http://saffron.deri.ie/acl_acl/document/ACL_ANTHOLOGY_ACL_P11-1053/
But how do I tell from the output that dog, mouse and cat form one cluster, while the and chased are not in the same cluster?
|
If I understand correctly, the algorithm gives you a tree and you need to truncate it at some level to get clusters. In the case of those bit strings, you should just take the first L characters.
For example, cutting at the second character gives you two clusters
10 chased
11 dog
11 mouse
11 cat
At the third character you get
110 dog
111 mouse
111 cat
The cutting strategy is a different subject though.
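A small sketch of that truncation in code, assuming you have read the output into (bitstring, word) pairs:
def clusters_at(pairs, L):
    # group words by the first L characters of their bit strings
    groups = {}
    for bits, word in pairs:
        groups.setdefault(bits[:L], []).append(word)
    return groups

pairs = [('0', 'the'), ('10', 'chased'), ('110', 'dog'),
         ('1110', 'mouse'), ('1111', 'cat')]
print clusters_at(pairs, 2)
# -> {'0': ['the'], '10': ['chased'], '11': ['dog', 'mouse', 'cat']} (dict order may vary)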
|
Formatting floats in a numpy array
|
If I have a numpy array like this:
[2.15295647e+01, 8.12531501e+00, 3.97113829e+00, 1.00777250e+01]
how can I move the decimal point and format the numbers so I end up with a numpy array like this:
[21.53, 8.13, 3.97, 10.08]
np.around(a, decimals=2) only gives me [2.15300000e+01, 8.13000000e+00, 3.97000000e+00, 1.00800000e+01] Which I don't want and I haven't found another way to do it.
|
In order to make numpy display float arrays in an arbitrary format, you can define a custom function that takes a float value as its input and returns a formatted string:
In [1]: float_formatter = lambda x: "%.2f" % x
The f here means fixed-point format (not 'scientific'), and the .2 means two decimal places (you can read more about string formatting here).
Let's test it out with a float value:
In [2]: float_formatter(1.234567E3)
Out[2]: '1234.57'
To make numpy print all float arrays this way, you can pass the formatter= argument to np.set_printoptions:
In [3]: np.set_printoptions(formatter={'float_kind':float_formatter})
Now numpy will print all float arrays this way:
In [4]: np.random.randn(5) * 10
Out[4]: array([5.25, 3.91, 0.04, -1.53, 6.68])
Note that this only affects numpy arrays, not scalars:
In [5]: np.pi
Out[5]: 3.141592653589793
It also won't affect non-floats, complex floats etc - you will need to define separate formatters for other scalar types.
You should also be aware that this only affects how numpy displays float values - the actual values that will be used in computations will retain their original precision.
For example:
In [6]: a = np.array([1E-9])
In [7]: a
Out[7]: array([0.00])
In [8]: a == 0
Out[8]: array([False], dtype=bool)
numpy prints a as if it were equal to 0, but it is not - it still equals 1E-9.
If you actually want to round the values in your array in a way that affects how they will be used in calculations, you should use np.round, as others have already pointed out.
|
How can I remove NaN from a list in Python/NumPy
|
I have a list that contains values; one of the values I got is 'nan':
countries= [nan, 'USA', 'UK', 'France']
I tried to remove it, but every time I get an error:
cleanedList = [x for x in countries if (math.isnan(x) == True)]
TypeError: a float is required
When I tried these:
cleanedList = cities[np.logical_not(np.isnan(countries))]
cleanedList = cities[~np.isnan(countries)]
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
|
The question has changed, so too has the answer:
Strings can't be tested using math.isnan as this expects a float argument. In your countries list, you have floats and strings.
In your case the following should suffice:
cleanedList = [x for x in countries if str(x) != 'nan']
Old answer
In your countries list, the literal 'nan' is a string not the Python float nan which is equivalent to:
float('NaN')
In your case the following should suffice:
cleanedList = [x for x in countries if x != 'nan']
|
Cython: how to make a Python object a property of a Cython class
|
I have an existing Python class X and I want to do the following:
from my_python_module import X
cdef class Y:
cdef X test
But this does not work out of the box; cdef only accepts C types, not a Python class. Any work-around?
|
I don't think you can (http://docs.cython.org/src/userguide/sharing_declarations.html#sharing-extension-types) but you can work-around it using __cinit__ to assert that the attribute has the correct type:
In your Cython file (named "p.pyx" for example):
import my_python_module as q
cdef class Y:
cdef int i
cdef public object x # public so it can be accessed from Python
def __cinit__(self, x_):
assert isinstance(x_, q.X)
self.x = x_
and my_python_module.py is where you have defined your class X:
class X(object):
def __init__(self):
self.i = 1
Then, you use it like this:
import my_python_module as q
import p
y = p.Y(q.X())
print y.x
print y.x.i
|
Converting int to bytes in Python 3
|
I was trying to build this bytes object in Python 3:
b'3\r\n'
so I tried the obvious (for me), and found a weird behaviour:
>>> bytes(3) + b'\r\n'
b'\x00\x00\x00\r\n'
Apparently:
>>> bytes(10)
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
Reading the documentation, I've been unable to find any pointers on why the bytes conversion works this way. However, I did find some surprised messages in this Python issue about adding format to bytes (see also Python 3 bytes formatting):
http://bugs.python.org/issue3982
This interacts even more poorly with oddities like bytes(int) returning zeroes now
and:
It would be much more convenient for me if bytes(int) returned the ASCIIfication of that int; but honestly, even an error would be better than this behavior. (If I wanted this behavior - which I never have - I'd rather it be a classmethod, invoked like "bytes.zeroes(n)".)
Can someone explain to me where the heck this behaviour comes from?
|
That's the way it was designed - and it makes sense because usually, you would call bytes on an iterable instead of a single integer:
>>> bytes([3])
b'\x03'
The docs state this, as well as the docstring for bytes:
>>> help(bytes)
...
bytes(int) -> bytes object of size given by the parameter initialized with null bytes
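If what you want is the ASCII representation from the question, one straightforward approach (a sketch, not the only option) is to format the int as a string first and encode it:
>>> str(3).encode() + b'\r\n'
b'3\r\n'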
|
Strings in a DataFrame, but dtype is object
|
Why does Pandas tell me that I have objects, although every item in the selected column is a string, even after explicit conversion?
This is my DataFrame:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 56992 entries, 0 to 56991
Data columns (total 7 columns):
id 56992 non-null values
attr1 56992 non-null values
attr2 56992 non-null values
attr3 56992 non-null values
attr4 56992 non-null values
attr5 56992 non-null values
attr6 56992 non-null values
dtypes: int64(2), object(5)
Five of them are dtype object. I explicitly convert those objects to strings:
for c in df.columns:
if df[c].dtype == object:
print "convert ", df[c].name, " to string"
df[c] = df[c].astype(str)
Then, df["attr2"] still has dtype object, although type(df["attr2"].ix[0]) reveals str, which is correct.
Pandas distinguishes between int64 and float64 and object. What is the logic behind it when there is no dtype str? Why is a str covered by object?
|
The dtype object comes from NumPy; it describes the type of the elements in an ndarray. Every element in an ndarray must have the same size in bytes. For int64 and float64, that is 8 bytes. But for strings, the length is not fixed. So instead of saving the bytes of strings in the ndarray directly, Pandas uses an object ndarray, which saves pointers to the objects; because of this, the dtype of this kind of ndarray is object.
Here is an example (originally illustrated with diagrams): the int64 array stores 4 int64 values directly, while the object array stores 4 pointers to 3 string objects.
|
Security considerations of `pip --allow-external`
|
What are the security considerations of using --allow-external or --allow-all-externals options of pip?
The documentation sections where these options are described (pip install, pip wheel) are very terse and do not explain the dangers of using them. I couldn't also find any resource on the Internet that would do so either.
|
I have asked this question on the FreeNode #pip channel. The following is my interpretation of the replies I've got there. Thanks go to agronholm and dstufft from #pip for answering my question.
Packages can be maintained on PyPI in three different ways:
Directly on PyPI. If a package is hosted on PyPI, no additional switch is required to install it. Connection to PyPI is secured by HTTPS, therefore the downloads are considered as trusted.
On an external site, with PyPI storing a secure checksum of the relevant files. In this case pip requires the --allow-external switch to proceed. While the download might potentially come from an unsecured server, downloaded files are checked against the secure checksum stored on PyPI. Because of that, this case is also considered secure.
On an external site, without PyPI storing any checksum. In this case there is no way to ensure that the download is safe. --allow-external is not enough to enable installation in this case, pip requires --allow-unverified.
Therefore, --allow-external alone is considered a safe switch, and only using --allow-unverified is a potential security issue. This is also why pip has an --allow-all-external option, but no --allow-all-unverified.
As a side note, --allow-external was introduced not as a security feature, but due to the potential speed, uptime and convenience issues while dealing with third party websites.
|
Difference between "yield" of Tornado and "yield from" of asyncio in mechanism?
|
In Tornado, we usually write the following code to call a function asynchronously:
class MainHandler(tornado.web.RequestHandler):
@tornado.gen.coroutine
def post(self):
...
yield self.handleRequest(foo)
...
@tornado.gen.coroutine
def handleRequest(self, foo):
...
But in asyncio (will be shipped with Python 3.4, can be installed from pip for Python 3.3), we use yield from to achieve the same thing:
@asyncio.coroutine
def myPostHandler():
...
yield from handleRequest(foo)
...
@asyncio.coroutine
def handleRequest(foo)
...
Seeing from the code, the difference is yield versus yield from. However, in the former, handleRequest(foo) returns a tornado.concurrent.Future object, while in the latter it returns a generator object.
My question is, what is the difference between the two things in mechanism? How is the control flow? And who calls the actual handleRequest and retrieves its returning value?
Append: I have basic knowledge of Python generators and iterators. I wanted to understand what Tornado and asyncio achieved by using these, and what is the difference between those two mechanisms.
|
There is a huge difference between the two. yield from takes another generator and continues yielding from that generator instead (delegating responsibility, as it were). yield just yields one value.
In other words, yield from, in the simplest case, could be replaced by:
for value in self.handleRequest(foo):
yield value
If you replaced a yield from <expression> line with yield <expression> you'd return the whole generator to the caller, not the values that generator produces.
The yield from syntax was only introduced in Python 3.3, see PEP 380: Syntax for Delegating to a Subgenerator. Tornado supports Python versions 2.6, 2.7 and 3.2 in addition to Python 3.3, so it cannot rely on the yield from syntax being available. asyncio, on the other hand, being a core Python library added in 3.4, can fully rely on the yield from generator delegation syntax being available.
As a result, Tornado will have to post-process values yielded from a @tornado.gen.coroutine generator to detect that a tornado.concurrent.Future object was yielded; the @asyncio.coroutine code handling can be much simpler. And indeed the Tornado Runner.run() method does explicit type checks to handle delegated tasks.
|
Pandas: Elementwise multiplication of two dataframes
|
I know how to do element by element multiplication between two Pandas dataframes. However, things get more complicated when the dimensions of the two dataframes are not compatible. For instance below df * df2 is straightforward, but df * df3 is a problem:
df = pd.DataFrame({'col1' : [1.0] * 5,
'col2' : [2.0] * 5,
'col3' : [3.0] * 5 }, index = range(1,6),)
df2 = pd.DataFrame({'col1' : [10.0] * 5,
'col2' : [100.0] * 5,
'col3' : [1000.0] * 5 }, index = range(1,6),)
df3 = pd.DataFrame({'col1' : [0.1] * 5}, index = range(1,6),)
df.mul(df2, 1) # element by element multiplication no problems
df.mul(df3, 1) # df(row*col) is not equal to df3(row*col)
col1 col2 col3
1 0.1 NaN NaN
2 0.1 NaN NaN
3 0.1 NaN NaN
4 0.1 NaN NaN
5 0.1 NaN NaN
In the above situation, how can I multiply every column of df with df3.col1?
My attempt: I tried to replicate df3.col1 len(df.columns.values) times to get a dataframe that is of the same dimension as df:
df3 = pd.DataFrame([df3.col1 for n in range(len(df.columns.values)) ])
df3
1 2 3 4 5
col1 0.1 0.1 0.1 0.1 0.1
col1 0.1 0.1 0.1 0.1 0.1
col1 0.1 0.1 0.1 0.1 0.1
But this creates a dataframe of dimensions 3 * 5, whereas I am after 5 * 3. I know I can take the transpose with df3.T to get what I need, but I think this is not the fastest way.
|
In [161]: pd.DataFrame(df.values*df2.values, columns=df.columns, index=df.index)
Out[161]:
col1 col2 col3
1 10 200 3000
2 10 200 3000
3 10 200 3000
4 10 200 3000
5 10 200 3000
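For the original df and df3, an alternative is to pass the single column as a Series with axis=0, which broadcasts it down every column:
In [162]: df.mul(df3['col1'], axis=0)
Out[162]:
   col1  col2  col3
1   0.1   0.2   0.3
2   0.1   0.2   0.3
3   0.1   0.2   0.3
4   0.1   0.2   0.3
5   0.1   0.2   0.3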
|
Joblib Parallel multiple cpu's slower than single
|
I've just started using the Joblib module and I'm trying to understand how the Parallel function works. Below is an example of where parallelizing leads to longer runtimes but I don't understand why. My runtime on 1 cpu was 51 sec vs. 217 secs on 2 cpu.
My assumption was that running the loop in parallel would copy lists a and b to each processor. Then dispatch item_n to one cpu and item_n+1 to the other cpu, execute the function and then write the results back to a list (in order). Then grab the next 2 items and so on. I'm obviously missing something.
Is this a poor example or use of joblib? Did I simply structure the code wrong?
Here is the example:
import numpy as np
from matplotlib.path import Path
from joblib import Parallel, delayed
## Create pairs of points for line segments
a = zip(np.random.rand(5000,2),np.random.rand(5000,2))
b = zip(np.random.rand(300,2),np.random.rand(300,2))
## Check if one line segment contains another.
def check_paths(path, paths):
for other_path in paths:
res='no cross'
chck = Path(other_path)
if chck.contains_path(path)==1:
res= 'cross'
break
return res
res = Parallel(n_jobs=2) (delayed(check_paths) (Path(points), a) for points in b)
|
In short: I cannot reproduce your problem. If you are on Windows you should use a guard for your main loop (see the documentation of joblib.Parallel). The only problem I see is a lot of data-copying overhead, but your numbers seem too extreme to be caused by that.
In long, here are my timings with your code:
On my i7 3770k (4 cores, 8 threads) I get the following results for different n_jobs:
For-loop: Finished in 33.8521318436 sec
n_jobs=1: Finished in 33.5527760983 sec
n_jobs=2: Finished in 18.9543449879 sec
n_jobs=3: Finished in 13.4856410027 sec
n_jobs=4: Finished in 15.0832719803 sec
n_jobs=5: Finished in 14.7227740288 sec
n_jobs=6: Finished in 15.6106669903 sec
So there is a gain in using multiple processes. However, although I have four cores, the gain already saturates at three processes. So I guess the execution time is actually limited by memory access rather than processor time.
You should notice that the arguments for each single loop entry are copied to the process executing it. This means you copy a for each element in b. That is inefficient. So instead access the global a. (Parallel will fork the process, copying all global variables to the newly spawned processes, so a is accessible.) This gives me the following code (with timing and main loop guard, as the documentation of joblib recommends):
import numpy as np
from matplotlib.path import Path
from joblib import Parallel, delayed
import time
import sys
## Check if one line segment contains another.
def check_paths(path):
for other_path in a:
res='no cross'
chck = Path(other_path)
if chck.contains_path(path)==1:
res= 'cross'
break
return res
if __name__ == '__main__':
## Create pairs of points for line segments
a = zip(np.random.rand(5000,2),np.random.rand(5000,2))
b = zip(np.random.rand(300,2),np.random.rand(300,2))
now = time.time()
if len(sys.argv) >= 2:
res = Parallel(n_jobs=int(sys.argv[1])) (delayed(check_paths) (Path(points)) for points in b)
else:
res = [check_paths(Path(points)) for points in b]
print "Finished in", time.time()-now , "sec"
Timing results:
n_jobs=1: Finished in 34.2845709324 sec
n_jobs=2: Finished in 16.6254048347 sec
n_jobs=3: Finished in 11.219119072 sec
n_jobs=4: Finished in 8.61683392525 sec
n_jobs=5: Finished in 8.51907801628 sec
n_jobs=6: Finished in 8.21842098236 sec
n_jobs=7: Finished in 8.21816396713 sec
n_jobs=8: Finished in 7.81841087341 sec
The saturation now slightly moved to n_jobs=4 which is the value to be expected.
check_paths does several redundant calculations that can easily be eliminated. Firstly, for all elements in other_paths=a, the line Path(...) is executed in every call. Precalculate that. Secondly, the string res='no cross' is written in each loop iteration, although it may only change once (followed by a break and return). Move the line in front of the loop. Then the code looks like this:
import numpy as np
from matplotlib.path import Path
from joblib import Parallel, delayed
import time
import sys
## Check if one line segment contains another.
def check_paths(path):
#global a
#print(path, a[:10])
res='no cross'
for other_path in a:
if other_path.contains_path(path)==1:
res= 'cross'
break
return res
if __name__ == '__main__':
## Create pairs of points for line segments
a = zip(np.random.rand(5000,2),np.random.rand(5000,2))
a = [Path(x) for x in a]
b = zip(np.random.rand(300,2),np.random.rand(300,2))
now = time.time()
if len(sys.argv) >= 2:
res = Parallel(n_jobs=int(sys.argv[1])) (delayed(check_paths) (Path(points)) for points in b)
else:
res = [check_paths(Path(points)) for points in b]
print "Finished in", time.time()-now , "sec"
with timings:
n_jobs=1: Finished in 5.33742594719 sec
n_jobs=2: Finished in 2.70858597755 sec
n_jobs=3: Finished in 1.80810618401 sec
n_jobs=4: Finished in 1.40814709663 sec
n_jobs=5: Finished in 1.50854086876 sec
n_jobs=6: Finished in 1.50901818275 sec
n_jobs=7: Finished in 1.51030707359 sec
n_jobs=8: Finished in 1.51062297821 sec
A side note on your code (although I haven't really followed its purpose, as it was unrelated to your question): contains_path will only return True if this path completely contains the given path (see the documentation). Therefore your function will basically always return no cross given the random input.
|
How to normalize an array in numpy?
|
I would like to have a norm 1 numpy array.
I am looking for an equivalent version of this function
def normalize(v):
norm=np.linalg.norm(v)
if norm==0:
return v
return v/norm
Is there something like that in sklearn or numpy?
This function works in situations where v is the 0 vector.
|
If you're using scikit-learn you can use sklearn.preprocessing.normalize:
import numpy as np
from sklearn.preprocessing import normalize
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print np.all(norm1 == norm2)
# True
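Regarding the zero-vector case mentioned in the question: sklearn's normalize leaves an all-zero input unchanged rather than dividing by zero, so it should match your function's behaviour (worth verifying on your version):
z = np.zeros(5)
print normalize(z[:,np.newaxis], axis=0).ravel()
# [ 0.  0.  0.  0.  0.]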
|
Python Pandas Histogram Log Scale
|
I'm making a fairly simple histogram in pandas using
results.val1.hist(bins=120)
which works fine, but I really want to have a log scale on the y axis, which I normally (probably incorrectly) do like this:
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
plt.plot(np.random.rand(100))
ax.set_yscale('log')
plt.show()
If I replace the plt command with the pandas command, so I have:
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
results.val1.hist(bins=120)
ax.set_yscale('log')
plt.show()
I get many copies of the same error:
Jan 9 15:53:07 BLARG.local python[6917] <Error>: CGContextClosePath: no current point.
I do get a log-scale histogram, but it only has the top lines of the bars, with no vertical bars or colors. Am I doing something horribly wrong, or is this just not supported by pandas?
EDIT:
Following Paul H's code, adding bottom=0.1 to the hist call fixes the problem; I guess there is some kind of divide-by-zero issue.
Thanks
|
Hard to diagnose without any data. The following works for me:
import numpy as np
import matplotlib.pyplot as plt
import pandas
series = pandas.Series(np.random.normal(size=2000))
fig, ax = plt.subplots()
series.hist(ax=ax, bins=100, bottom=0.1)
ax.set_yscale('log')
The key here is that you pass ax to the histogram function and you specify the bottom since there is no zero value on a log scale.
|
How to load/edit/run/save text files (.py) into an IPython notebook cell?
|
I've recently moved over to using IPython notebooks as part of my workflow. However, I've not been successful in finding a way to import .py files into the individual cells of an open IPython notebook so that they can be edited, run and then saved. Can this be done?
I've found this in the documentation which tells me how to import .py files as new notebooks but this falls short of what I want to achieve.
Any suggestions would be much appreciated,
thanks in advance.
|
A text file can be loaded in a notebook cell with the magic command %load.
If you execute a cell containing:
%load filename.py
the content of filename.py will be loaded in the next cell. You can edit and execute it as usual.
To save the cell content back into a file add the cell-magic %%writefile filename.py at the beginning of the cell and run it. Beware that if a file with the same name already exists it will be silently overwritten.
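For example, running a cell like this writes its body back to the file:
%%writefile filename.py
def greet():
    print "hello from filename.py"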
To see the help for any magic command add a ?: like %load? or %%writefile?.
For general help on magic functions type "%magic"
For a list of the available magic functions, use %lsmagic. For a description
of any of them, type %magic_name?, e.g. '%cd?'.
See also: Magic functions from the official IPython docs.
EDIT: Starting from IPython 3 (now Jupyter project), the notebook has a text editor that can be used as a more convenient alternative to load/edit/save text files.
|
Python read JSON file and modify
|
Hi, I am trying to take the data from a JSON file, insert an id, and then perform a REST POST.
my file data.json has:
{
'name':'myname'
}
and I would like to add an id so that the json data looks like:
{
'id': 134,
'name': 'myname'
}
So I tried:
import json
f = open("data.json","r")
data = f.read()
jsonObj = json.loads(data)
I can't get the JSON file to load.
What should I do so that I can convert the JSON file into a JSON object and add another id value?
|
Set the item using data['id'] = .... (Note that the file as shown uses single quotes, which is not valid JSON; json.load requires double-quoted strings, e.g. {"name": "myname"}.)
import json
with open('data.json', 'r+') as f:
data = json.load(f)
data['id'] = 134 # <--- add `id` value.
f.seek(0) # <--- should reset file position to the beginning.
json.dump(data, f, indent=4)
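One caveat: json.dump overwrites from the start of the file, so if the updated JSON could ever be shorter than the original, truncate after writing (not strictly needed here, since the data only grows):
    f.seek(0)
    json.dump(data, f, indent=4)
    f.truncate()  # drop any leftover bytes from the old, longer content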
|
DateTimeField received a naive datetime
|
I have a model with a DateTimeField column.
I'm trying to insert a row with the database current_time value directly into the table with an SQL query.
My SQL query for the MySQL database looks like:
INSERT INTO MyTable (..., my_datetime, ...) VALUES (..., current_time, ...)
And get:
RuntimeWarning: DateTimeField ModelName.field_name received a naive
datetime (2014-01-09 22:16:23) while time zone support is active.
How can I insert the current time directly into the table with an SQL query, without the warning?
|
Further to falsetru's answer, if the datetime has already been created you can convert it to timezone aware:
from django.utils import timezone
my_datetime = timezone.make_aware(my_datetime, timezone.get_current_timezone())
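And if you just need the current time as an aware datetime (rather than converting an existing naive one), timezone.now() returns one directly when USE_TZ is enabled:
from django.utils import timezone
my_datetime = timezone.now()  # timezone-aware current time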
|
What does "table" in the string.translate function mean?
|
Going through the docs for the string.translate function, which say:
Delete all characters from s that are in deletechars (if present), and then translate the characters using table, which must be a 256-character string giving the translation for each character value, indexed by its ordinal. If table is None, then only the character deletion step is performed.
What does table mean here? Can it be a dict containing the mapping?
What does "must be a 256-character string" mean?
Can the table be made manually or through a custom function instead of string.maketrans?
I tried using the function (attempts below) just to see how it worked, but wasn't able to use it successfully.
>>> "abcabc".translate("abcabc",{ord("a"): "d", ord("c"): "x"})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: translation table must be 256 characters long
>>> "abcabc".translate({ord("a"): ord("d"), ord("c"): ord("x")}, "b")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: expected a character buffer object
>>> "abc".translate({"a": "d", "c": "x"}, ["b"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: expected a character buffer object
What am I missing here?
|
It depends on the Python version you are using.
In Python 2.x. The table is 256-characters string. It can be created using string.maketrans:
>>> import string
>>> tbl = string.maketrans('ac', 'dx')
>>> "abcabc".translate(tbl)
'dbxdbx'
In Python 3.x, the table is a mapping of unicode ordinals to unicode characters.
>>> "abcabc".translate({ord('a'): 'd', ord('c'): 'x'})
'dbxdbx'
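In Python 3 you can also build that mapping with str.maketrans, which accepts the same two-string form as the old string.maketrans:
>>> "abcabc".translate(str.maketrans('ac', 'dx'))
'dbxdbx'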
|
Flask 301 Response
|
My Flask app is doing a 301 redirect for one of the URLs.
The traceback in New Relic is:
Traceback (most recent call last):
File "/var/www/app/env/local/lib/python2.7/site-packages/flask/app.py", line 1358, in full_dispatch_request
rv = self.dispatch_request()
File "/var/www/app/env/local/lib/python2.7/site-packages/flask/app.py", line 1336, in dispatch_request
self.raise_routing_exception(req)
File "/var/www/app/env/local/lib/python2.7/site-packages/flask/app.py", line 1319, in raise_routing_exception
raise request.routing_exception
RequestRedirect: 301: Moved Permanently
It doesn't look like it is even hitting my code, or rather the traceback isn't showing any of my files in it. At one point I did have Nginx redirect all non-SSL requests to HTTPS, but had to disable that as Varnish was not able to make the request to port 443 without an error... probably some configuration that I did or didn't make.
It doesn't always return a 301 though, I can request the URL and get it without any trouble. But someone out in the world requesting the URL is getting a 301 response.
It is a GET request with some custom headers to link it to the account.
At no point in my code is there a 301 redirect.
|
The traceback shows that it was the route matching that raised a redirect; usually (e.g. unless you added explicit redirect routes), that means the client tried to access a branch URL (one that ends with a trailing slash), but the requested URL did not include the last slash. The client is simply being redirected to the canonical branch URL with the slash.
From the Werkzeug Rule documentation:
URL rules that end with a slash are branch URLs, others are leaves. If you have strict_slashes enabled (which is the default), all branch URLs that are matched without a trailing slash will trigger a redirect to the same URL with the missing slash appended.
From the routing documentation:
Flaskâs URL rules are based on Werkzeugâs routing module. The idea behind that module is to ensure beautiful and unique URLs based on precedents laid down by Apache and earlier HTTP servers.
Take these two rules:
@app.route('/projects/')
def projects():
return 'The project page'
@app.route('/about')
def about():
return 'The about page'
Though they look rather similar, they differ in their use of the trailing slash in the URL definition. In the first case, the canonical URL for the projects endpoint has a trailing slash. In that sense, it is similar to a folder on a file system. Accessing it without a trailing slash will cause Flask to redirect to the canonical URL with the trailing slash.
In the second case, however, the URL is defined without a trailing slash, rather like the pathname of a file on UNIX-like systems. Accessing the URL with a trailing slash will produce a 404 âNot Foundâ error.
This behavior allows relative URLs to continue working even if the trailing slash is omitted, consistent with how Apache and other servers work. Also, the URLs will stay unique, which helps search engines avoid indexing the same page twice.
As documented, if you do not want this behaviour (and have the url without the trailing slash be a 404 Not Found instead), you must set the strict_slashes=False option on your route.
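For example, a sketch of opting out on a single rule:
@app.route('/projects/', strict_slashes=False)
def projects():
    # /projects and /projects/ now both match; no 301 redirect is issued
    return 'The project page'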
|
What does "del" do exactly?
|
Here is my code:
from memory_profiler import profile
@profile
def mess_with_memory():
huge_list = range(20000000)
del huge_list
print "why this kolaveri di?"
This is what the output is, when I ran it from interpreter:
Line # Mem usage Increment Line Contents
3 7.0 MiB 0.0 MiB @profile
4 def mess_with_memory():
5
6 628.5 MiB 621.5 MiB huge_list = range(20000000)
7 476.0 MiB -152.6 MiB del huge_list
8 476.0 MiB 0.0 MiB print "why this kolaveri di"
If you notice the output, creating the huge list consumed 621.5 MB while deleting it just freed up 152.6 MB. When I checked the docs, I found the below statement:
the statement del x removes the binding of x from the namespace referenced by the local scope
So I guess it didn't delete the object itself, but just unbound it. But what did unbinding do that freed up so much space (152.6 MB)? Can somebody please take the pains to explain to me what is going on here?
|
Python is a garbage-collected language. If a value isn't "reachable" from your code anymore, it will eventually get deleted.
The del statement, as you saw, removes the binding of your variable. Variables aren't values, they're just names for values.
If that variable was the only reference to the value anywhere, the value will eventually get deleted. In CPython in particular, the garbage collector is built on top of reference counting. So, that "eventually" means "immediately".* In other implementations, it's usually "pretty soon".
If there were other references to the same value, however, just removing one of those references (whether by del x, x = None, exiting the scope where x existed, etc.) doesn't clean anything up.**
There's another issue here. I don't know what the memory_profiler module (presumably this one) actually measures, but the description (talking about use of psutil) sounds like it's measuring your memory usage from "outside".
When Python frees up storage, it doesn't always (or even usually) return it to the operating system. It keeps "free lists" around at multiple levels so it can re-use the memory more quickly than if it had to go all the way back to the OS to ask for more. On modern systems, this is rarely a problem: if you need the storage again, it's good that you had it; if you don't, it'll get paged out as soon as someone else needs it and never get paged back in, so there's little harm.
(On top of that, which I referred to as "the OS" above is really an abstraction made up of multiple levels, from the malloc library through the core C library to the kernel/pager, and at least one of those levels usually has its own free lists.)
If you want to trace memory use from the inside perspective... well, that's pretty hard. It gets a lot easier in Python 3.4 thanks to the new tracemalloc module. There are various third-party modules (e.g., heapy/guppy, Pympler, meliae) that try to get the same kind of information with earlier versions, but it's difficult, because getting information from the various allocators, and tying that information to the garbage collector, was very hard before PEP 445.
* In some cases, there are references to the value... but only from other references that are themselves unreachable, possibly in a cycle. That still counts as "unreachable" as far as the garbage collector is concerned, but not as far as reference counts are concerned. So, CPython also has a "cycle detector" that runs every so often and finds cycles of mutually-reachable but not-reachable-from-anyone-else values and cleans them up.
** If you're testing in the interactive console, there may be hidden references to your values that are hard to track, so you might think you've gotten rid of the last reference when you haven't. In a script, it should always be possible, if not easy, to figure things out. The gc module can help, as can the debugger. But of course both of them also give you new ways to add additional hidden references.
|
Python pip unable to locate pyodbc
|
Using virtualenv and just trying to install pyodbc. All resources I can find claim this should be extremely straightforward. After all the basic installs of MySQL, etc., just do:
pip install pyodbc
However, I am seeing a very strange error. It has nothing (as far as I can tell) to do with missing libraries, and after Googling for this sort of error for a long time, I can't find anything constructive on it at all.
(local-dev)espears@espears-w ~ $ pip install pyodbc
Downloading/unpacking pyodbc
Could not find any downloads that satisfy the requirement pyodbc
Some externally hosted files were ignored (use --allow-external pyodbc to allow).
Cleaning up...
No distributions at all found for pyodbc
Storing debug log for failure in /home/espears/.pip/pip.log
So I tried with the "allow-external" option and it does not help:
(local-dev)espears@espears-w ~ $ pip install --allow-external pyodbc
You must give at least one requirement to install (see "pip help install")
But the help documentation makes it appear that I am using this option correctly, e.g. from the output of running pip help install:
Package Index Options:
...
--allow-external <package> Allow the installation of externally hosted files
Here's the result in the PIP log file:
(local-dev)espears@espears-w ~ $ cat /home/espears/.pip/pip.log
Downloading/unpacking pyodbc
Getting page https://pypi.python.org/simple/pyodbc/
URLs to search for versions for pyodbc:
* https://pypi.python.org/simple/pyodbc/
Analyzing links from page https://pypi.python.org/simple/pyodbc/
Skipping link http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/); not a file
Skipping link http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/); not a file
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Not searching http://code.google.com/p/pyodbc/downloads/list (from https://pypi.python.org/simple/pyodbc/) for files because external urls are disallowed.
Could not find any downloads that satisfy the requirement pyodbc
Some externally hosted files were ignored (use --allow-external pyodbc to allow).
Cleaning up...
Removing temporary dir /home/espears/.virtualenvs/local-dev/build...
No distributions at all found for pyodbc
Exception information:
Traceback (most recent call last):
File "/home/espears/.virtualenvs/local-dev/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/home/espears/.virtualenvs/local-dev/local/lib/python2.7/site-packages/pip/commands/install.py", line 270, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/home/espears/.virtualenvs/local-dev/local/lib/python2.7/site-packages/pip/req.py", line 1157, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "/home/espears/.virtualenvs/local-dev/local/lib/python2.7/site-packages/pip/index.py", line 285, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for pyodbc
It makes it seem that if I can just get the --allow-external option to work, it will work because it clearly sees the common location for getting pyodbc.
But I am puzzled why this is needed at all, when virtually all references to installing pyodbc do not need to go out of the way to enable external links. I've also been installing many packages all day today, with few issues and certainly not encountering this allow-external need in any other case.
I am using pip version 1.5 with Python 2.7.3 on Ubuntu 12.04.
|
Running pip install --allow-external pyodbc --allow-unverified pyodbc pyodbc will work if the PyPI directory structure is correct.
Based on the pip.log output when I originally posted this answer, I think that package's website directory structure was broken. You can always work around this type of problem by specifying the URL of the package like pip install https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/pyodbc/pyodbc-3.0.7.zip, links for different versions are described here.
|
Reversal of string.contains in Python, pandas
|
I have something like this in my code:
df2 = df[df['A'].str.contains("Hello|World")]
However, I want all the rows that don't contain either of Hello or World. How do I most efficiently reverse this?
|
You can use the tilde ~ to flip the bool values:
>>> df = pd.DataFrame({"A": ["Hello", "this", "World", "apple"]})
>>> df.A.str.contains("Hello|World")
0 True
1 False
2 True
3 False
Name: A, dtype: bool
>>> ~df.A.str.contains("Hello|World")
0 False
1 True
2 False
3 True
Name: A, dtype: bool
>>> df[~df.A.str.contains("Hello|World")]
A
1 this
3 apple
[2 rows x 1 columns]
Whether this is the most efficient way, I don't know; you'd have to time it against your other options. Sometimes using a regular expression is slower than things like df[~(df.A.str.contains("Hello") | (df.A.str.contains("World")))], but I'm bad at guessing where the crossovers are.
|
What are the risks of running 'sudo pip'?
|
Occasionally I run into comments or responses that state emphatically that running pip under sudo is "wrong" or "bad", but there are cases (including the way I have a bunch of tools set up) where it is either much simpler, or even necessary to run it that way.
What are the risks associated with running pip under sudo?
Note that this in not the same question as this one, which, despite the title, provides no information about risks. This also isn't a question about how to avoid using sudo, but about specifically why one would want to.
|
When you run pip with sudo, you run setup.py with sudo. In other words, you run arbitrary Python code from the Internet as root. If someone puts up a malicious project on PyPI and you install it, you give an attacker root access to your machine. Prior to some recent fixes to pip and PyPI, an attacker could also run a man in the middle attack to inject their code when you download a trustworthy project.
|
sklearn.LabelEncoder with never seen before values
|
If a sklearn.LabelEncoder has been fitted on a training set, it might break if it encounters new values when used on a test set.
The only solution I could come up with for this is to map everything new in the test set (i.e. not belonging to any existing class) to "<unknown>", and then explicitly add a corresponding class to the LabelEncoder afterward:
# train and test are pandas.DataFrame's and c is whatever column
le = LabelEncoder()
train[c] = le.fit_transform(train[c])
test[c] = test[c].map(lambda s: '<unknown>' if s not in le.classes_ else s)
le.classes_ = np.append(le.classes_, '<unknown>')
test[c] = le.transform(test[c])
This works, but is there a better solution?
Update
As @sapo_cosmico points out in a comment, it seems that the above doesn't work anymore, given what I assume is an implementation change in LabelEncoder.transform, which now seems to use np.searchsorted (I don't know if it was the case before). So instead of appending the <unknown> class to the LabelEncoder's list of already extracted classes, it needs to be inserted in sorted order:
import bisect
le_classes = le.classes_.tolist()
bisect.insort_left(le_classes, '<unknown>')
le.classes_ = le_classes
However, as this feels pretty clunky all in all, I'm certain there is a better approach for this.
|
I ended up switching to Pandas' get_dummies due to this problem of unseen data.
Create the dummies on the training data:
dummy_train = pd.get_dummies(train)
Create the dummies on the new (unseen) data:
dummy_new = pd.get_dummies(new_data)
Re-index the new data to the columns of the training data, filling the missing values with 0:
dummy_new.reindex(columns = dummy_train.columns, fill_value=0)
Effectively any new features which are categorical will not go into the classifier, but I think that should not cause problems as it would not know what to do with them.
|
Pip install not functioning on Windows 7 Cygwin install
|
I'm having a terrible time getting pip up and running on Cygwin, which I just recently installed on my Windows 7 computer. I am writing in the hope that anyone out there can tell me what I am doing incorrectly in terms of getting these packages installed correctly.
To start, I followed the instructions on this site:
http://www.pip-installer.org/en/latest/installing.html
with setuptools installed prior to pip installation. I followed the steps, ran this command:
Ryan@Albert ~
$ python get-pip.py
got this output:
Downloading/unpacking pip
Downloading pip-1.5.tar.gz (898kB): 898kB downloaded
Running setup.py egg_info for package pip
warning: no files found matching 'pip/cacert.pem'
warning: no files found matching '*.html' under directory 'docs'
warning: no previously-included files matching '*.rst' found under direct
no previously-included directories found matching 'docs/_build/_sources'
Installing collected packages: pip
Running setup.py install for pip
warning: no files found matching 'pip/cacert.pem'
warning: no files found matching '*.html' under directory 'docs'
warning: no previously-included files matching '*.rst' found under direct
no previously-included directories found matching 'docs/_build/_sources'
Installing pip script to /usr/bin
Installing pip2.7 script to /usr/bin
Installing pip2 script to /usr/bin
Successfully installed pip
Cleaning up...
and lo and behold, ran pip with this command:
Ryan@Albert ~
$ pip install --upgrade setuptools
which led to absolutely no output. A blank line appeared underneath for 3-4 seconds and then the input prompt came up again without pip actually doing anything. I did a bunch more testing to confirm that there was something called pip on my machine but anytime it ran, it essentially did nothing. It did not download or install any programs.
I went about trying to install pip another way after uninstalling the first version. This time I tried:
$ easy_install pip
And got the following output:
Searching for pip
Best match: pip 1.5
Adding pip 1.5 to easy-install.pth file
Installing pip script to /usr/bin
Installing pip2.7 script to /usr/bin
Installing pip2 script to /usr/bin
Using /usr/lib/python2.7/site-packages
Processing dependencies for pip
Finished processing dependencies for pip
Again, tried using pip to install virtualenv using this command:
$ pip install virtualenv
and it paused for 3-4 seconds, then made the command prompt available again. Exactly like the previous time. When I checked to see whether virtualenv was installed, it was not.
Essentially I have tried and tried to get pip up and running on my Windows 7 Cygwin install, but to no avail. I am aware that I can use other packages to install plugins and so forth, but I would really appreciate it if someone had any knowledge of why this is happening, so it doesn't plague me when I try to install things further down the line.
Any help would be greatly appreciated!
|
There's a bug(?) in 64-bit Cygwin which causes ctypes.util to segfault when trying to find libuuid (/usr/bin/cyguuid-1.dll). The fix is to install libuuid-devel from Cygwin setup. I found this from an issue filed against requests.py, but it's noted (and worked around in different ways) in a few other places, too.
|
How to assert a dict contains another dict without assertDictContainsSubset in python?
|
I know assertDictContainsSubset can do this in python 2.7, but for some reason it's deprecated in python 3.2. So is there any way to assert a dict contains another one without assertDictContainsSubset?
This seems not good:
for item in dic2:
self.assertIn(item, dic)
any other good way? Thanks
|
>>> d1 = dict(a=1, b=2, c=3, d=4)
>>> d2 = dict(a=1, b=2)
>>> set(d2.items()).issubset( set(d1.items()) )
True
And the other way around:
>>> set(d1.items()).issubset( set(d2.items()) )
False
Limitation: the dictionary values have to be hashable.
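In Python 3 you can also compare the dictionary views directly, which should sidestep the hashability limitation, since membership in an items view is checked by key lookup rather than by hashing the value:
>>> d1 = dict(a=[1], b=2, c=3)   # note: [1] is not hashable
>>> d2 = dict(a=[1], b=2)
>>> d2.items() <= d1.items()
True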
|
Compute *rolling* maximum drawdown of pandas Series
|
It's pretty easy to write a function that computes the maximum drawdown of a time series. It takes a small bit of thinking to write it in O(n) time instead of O(n^2) time. But it's not that bad. This will work:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def max_dd(ser):
max2here = pd.expanding_max(ser)
dd2here = ser - max2here
return dd2here.min()
Let's set up a brief series to play with to try it out:
np.random.seed(0)
n = 100
s = pd.Series(np.random.randn(n).cumsum())
s.plot()
plt.show()
As expected, max_dd(s) winds up showing something right around -17.6. Good, great, grand. Now say I'm interested in computing the rolling drawdown of this Series. I.e. for each step, I want to compute the maximum drawdown from the preceding sub series of a specified length. This is easy to do using pd.rolling_apply. It works like so:
rolling_dd = pd.rolling_apply(s, 10, max_dd, min_periods=0)
df = pd.concat([s, rolling_dd], axis=1)
df.columns = ['s', 'rol_dd_10']
df.plot()
This works perfectly. But it feels very slow. Is there a particularly slick algorithm in pandas or another toolkit to do this fast? I took a shot at writing something bespoke: it keeps track of all sorts of intermediate data (locations of observed maxima, locations of previously found drawdowns) to cut down on lots of redundant calculations. It does save some time, but not a whole lot, and not nearly as much as should be possible.
I think it's because of all the looping overhead in Python/Numpy/Pandas. But I'm not currently fluent enough in Cython to really know how to begin attacking this from that angle. I was hoping someone had tried this before. Or, perhaps, that someone might want to have a look at my "handmade" code and be willing to help me convert it to Cython.
Edit:
For anyone who wants a review of all the functions mentioned here (and some others!) have a look at the iPython notebook at: http://nbviewer.ipython.org/gist/8one6/8506455
It shows how some of the approaches to this problem relate, checks that they give the same results, and shows their runtimes on data of various sizes.
If anyone is interested, the "bespoke" algorithm I alluded to in my post is rolling_dd_custom. I think that could be a very fast solution if implemented in Cython.
|
Here's a numpy version of the rolling maximum drawdown function. windowed_view is a wrapper of a one-line function that uses numpy.lib.stride_tricks.as_strided to make a memory efficient 2d windowed view of the 1d array (full code below). Once we have this windowed view, the calculation is basically the same as your max_dd, but written for a numpy array, and applied along the second axis (i.e. axis=1).
def rolling_max_dd(x, window_size, min_periods=1):
"""Compute the rolling maximum drawdown of `x`.
`x` must be a 1d numpy array.
`min_periods` should satisfy `1 <= min_periods <= window_size`.
Returns an 1d array with length `len(x) - min_periods + 1`.
"""
if min_periods < window_size:
pad = np.empty(window_size - min_periods)
pad.fill(x[0])
x = np.concatenate((pad, x))
y = windowed_view(x, window_size)
running_max_y = np.maximum.accumulate(y, axis=1)
dd = y - running_max_y
return dd.min(axis=1)
Here's a complete script that demonstrates the function:
import numpy as np
from numpy.lib.stride_tricks import as_strided
import pandas as pd
import matplotlib.pyplot as plt
def windowed_view(x, window_size):
"""Creat a 2d windowed view of a 1d array.
`x` must be a 1d numpy array.
`numpy.lib.stride_tricks.as_strided` is used to create the view.
The data is not copied.
Example:
>>> x = np.array([1, 2, 3, 4, 5, 6])
>>> windowed_view(x, 3)
array([[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6]])
"""
y = as_strided(x, shape=(x.size - window_size + 1, window_size),
strides=(x.strides[0], x.strides[0]))
return y
def rolling_max_dd(x, window_size, min_periods=1):
"""Compute the rolling maximum drawdown of `x`.
`x` must be a 1d numpy array.
`min_periods` should satisfy `1 <= min_periods <= window_size`.
Returns an 1d array with length `len(x) - min_periods + 1`.
"""
if min_periods < window_size:
pad = np.empty(window_size - min_periods)
pad.fill(x[0])
x = np.concatenate((pad, x))
y = windowed_view(x, window_size)
running_max_y = np.maximum.accumulate(y, axis=1)
dd = y - running_max_y
return dd.min(axis=1)
def max_dd(ser):
max2here = pd.expanding_max(ser)
dd2here = ser - max2here
return dd2here.min()
if __name__ == "__main__":
np.random.seed(0)
n = 100
s = pd.Series(np.random.randn(n).cumsum())
window_length = 10
rolling_dd = pd.rolling_apply(s, window_length, max_dd, min_periods=0)
df = pd.concat([s, rolling_dd], axis=1)
df.columns = ['s', 'rol_dd_%d' % window_length]
df.plot(linewidth=3, alpha=0.4)
my_rmdd = rolling_max_dd(s.values, window_length, min_periods=1)
plt.plot(my_rmdd, 'g.')
plt.show()
The plot shows the curves generated by your code. The green dots are computed by rolling_max_dd.
Timing comparison, with n = 10000 and window_length = 500:
In [2]: %timeit rolling_dd = pd.rolling_apply(s, window_length, max_dd, min_periods=0)
1 loops, best of 3: 247 ms per loop
In [3]: %timeit my_rmdd = rolling_max_dd(s.values, window_length, min_periods=1)
10 loops, best of 3: 38.2 ms per loop
rolling_max_dd is about 6.5 times faster. The speedup is better for smaller window lengths. For example, with window_length = 200, it is almost 13 times faster.
To handle NA's, you could preprocess the Series using the fillna method before passing the array to rolling_max_dd.
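For example, a minimal sketch of that preprocessing step, assuming forward-filling is an acceptable way to treat the gaps in your data:
clean = s.fillna(method='ffill')  # fill NA's from the previous observation
my_rmdd = rolling_max_dd(clean.values, window_length, min_periods=1)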
|
Python json.loads shows ValueError: Extra data
|
I am getting some data from a JSON file "new.json", and I want to filter some data and store it into a new JSON file. Here is my code:
import json
with open('new.json') as infile:
data = json.load(infile)
for item in data:
iden = item.get["id"]
a = item.get["a"]
b = item.get["b"]
c = item.get["c"]
if c == 'XYZ' or "XYZ" in data["text"]:
filename = 'abc.json'
try:
outfile = open(filename,'ab')
except:
outfile = open(filename,'wb')
obj_json={}
obj_json["ID"] = iden
obj_json["VAL_A"] = a
obj_json["VAL_B"] = b
and I am getting an error, the traceback is:
File "rtfav.py", line 3, in <module>
data = json.load(infile)
File "/usr/lib64/python2.7/json/__init__.py", line 278, in load
**kw)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 369, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 88 column 2 - line 50607 column 2 (char 3077 - 1868399)
Can someone help me?
Here is a sample of the data in new.json, there are about 1500 more such dictionaries in the file
{
"contributors": null,
"truncated": false,
"text": "@HomeShop18 #DreamJob to professional rafter",
"in_reply_to_status_id": null,
"id": 421584490452893696,
"favorite_count": 0,
"source": "<a href=\"https://mobile.twitter.com\" rel=\"nofollow\">Mobile Web (M2)</a>",
"retweeted": false,
"coordinates": null,
"entities": {
"symbols": [],
"user_mentions": [
{
"id": 183093247,
"indices": [
0,
11
],
"id_str": "183093247",
"screen_name": "HomeShop18",
"name": "HomeShop18"
}
],
"hashtags": [
{
"indices": [
12,
21
],
"text": "DreamJob"
}
],
"urls": []
},
"in_reply_to_screen_name": "HomeShop18",
"id_str": "421584490452893696",
"retweet_count": 0,
"in_reply_to_user_id": 183093247,
"favorited": false,
"user": {
"follow_request_sent": null,
"profile_use_background_image": true,
"default_profile_image": false,
"id": 2254546045,
"verified": false,
"profile_image_url_https": "https://pbs.twimg.com/profile_images/413952088880594944/rcdr59OY_normal.jpeg",
"profile_sidebar_fill_color": "171106",
"profile_text_color": "8A7302",
"followers_count": 87,
"profile_sidebar_border_color": "BCB302",
"id_str": "2254546045",
"profile_background_color": "0F0A02",
"listed_count": 1,
"profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png",
"utc_offset": null,
"statuses_count": 9793,
"description": "Rafter. Rafting is what I do. Me aur mera Tablet. Technocrat of Future",
"friends_count": 231,
"location": "",
"profile_link_color": "473623",
"profile_image_url": "http://pbs.twimg.com/profile_images/413952088880594944/rcdr59OY_normal.jpeg",
"following": null,
"geo_enabled": false,
"profile_banner_url": "https://pbs.twimg.com/profile_banners/2254546045/1388065343",
"profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png",
"name": "Jayy",
"lang": "en",
"profile_background_tile": false,
"favourites_count": 41,
"screen_name": "JzayyPsingh",
"notifications": null,
"url": null,
"created_at": "Fri Dec 20 05:46:00 +0000 2013",
"contributors_enabled": false,
"time_zone": null,
"protected": false,
"default_profile": false,
"is_translator": false
},
"geo": null,
"in_reply_to_user_id_str": "183093247",
"lang": "en",
"created_at": "Fri Jan 10 10:09:09 +0000 2014",
"filter_level": "medium",
"in_reply_to_status_id_str": null,
"place": null
}
|
As you can see in the following example, json.loads (and json.load) does not decode multiple JSON objects.
>>> json.loads('{}')
{}
>>> json.loads('{}{}') # == json.loads(json.dumps({}) + json.dumps({}))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\json\__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "C:\Python27\lib\json\decoder.py", line 368, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 3 - line 1 column 5 (char 2 - 4)
If you want to dump multiple dictionaries, wrap them in a list and dump the list (instead of dumping each dictionary separately):
>>> dict1 = {}
>>> dict2 = {}
>>> json.dumps([dict1, dict2])
'[{}, {}]'
>>> json.loads(json.dumps([dict1, dict2]))
[{}, {}]
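For a file like the one in the question, which already contains many concatenated objects, one way to recover them is to decode one object at a time with the standard library's JSONDecoder.raw_decode; the helper name below is mine, and this is just a sketch:
import json
def iter_json_objects(text):
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        while idx < len(text) and text[idx].isspace():
            idx += 1  # skip whitespace between documents
        if idx >= len(text):
            break
        obj, idx = decoder.raw_decode(text, idx)  # returns (object, end index)
        yield obj
with open('new.json') as infile:
    data = list(iter_json_objects(infile.read()))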
|
Shortest way to get first item of `OrderedDict` in Python 3
|
What's the shortest way to get first item of OrderedDict in Python 3?
My best:
list(ordered_dict.items())[0]
Quite long and ugly.
I can think of:
next(iter(ordered_dict.items())) # Fixed, thanks Ashwini
But it's not very self-describing.
Any better suggestions?
|
The usual ways to get an element from a set, dict, OrderedDict, consumeable iterator, or any other non-indexable collection are:
for value in some_collection:
break
and:
value = next(iter(some_collection))
The latter is nice because the next() function lets you specify a default value to use if the collection is empty, or you can choose to let it raise an exception. The next() function is also explicit that it is asking for the next item.
BTW, if you actually need indexing and slicing for more than just the first element, it is a simple matter to convert to a list with list(some_collection) or to use itertools.islice().
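For example, a one-liner that won't raise on an empty dict:
first_item = next(iter(ordered_dict.items()), None)  # None when the dict is empty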
|
Aptana Error-pydev: Port not bound (found port -1)?
|
I just updated my Aptana Studio 3. When I open my Python file, it says that it cannot find map, range, filter and some other methods, but when I run my code, it runs without any problem. My code completion doesn't work any more. The error for code completion when I use CTRL+SPACE is:
Port not bound (found port -1). Is there an enabled firewall?
I don't know where the problem is. I searched but couldn't find a proper solution. I'm using Windows 7.
|
It seems that this is solved on PyDev and the problem is you can't upgrade PyDev on Aptana 3.6.0. Version 3.6.1 takes out the integration with PyDev and lets you upgrade PyDev.
So I installed Aptana 3.6.1 through Beta repository and then installed the latest PyDev.
Aptana Beta link to add to "Available Software Sites" on Aptana:
http://preview.appcelerator.com/aptana/studio3/standalone/update/beta/
Upgrade to Aptana 3.6.1. This will uninstall PyDev.
PyDev link to add to "Available Software Sites" on Aptana:
http://pydev.org/updates
Install PyDev.
And then, "Port not bound" will be solved.
|
Alternative to dict comprehension prior to Python 2.7
|
How can I make the following functionality compatible with versions of Python earlier than Python 2.7?
gwfuncs = [reboot, flush_macs, flush_cache, new_gw, revert_gw, send_log]
gw_func_dict = {chr(2**i): func for i, func in enumerate(gwfuncs[:8])}
|
Use:
gw_func_dict = dict((chr(2**i), func) for i, func in enumerate(gwfuncs[:8]))
That's the dict() function with a generator expression producing (key, value) pairs.
Or, to put it generically, a dict comprehension of the form:
{key_expr: value_expr for targets in iterable <additional loops or if expressions>}
can always be made compatible with Python < 2.7 by using:
dict((key_expr, value_expr) for targets in iterable <additional loops or if expressions>)
|
Running django tutorial tests fail - No module named polls.tests
|
I'm playing with the Django 1.6 tutorial but I can't run tests.
My project (named mydjango) and app (named polls) are structured as shown below, in a virtualenv. (The .nja files are just created by Ninja-IDE, the IDE I'm using.)
.
├── __init__.py
├── manage.py
├── mydjango
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── mydjango.nja
│   ├── settings.py
│   ├── settings.pyc
│   ├── templates
│   │   └── admin
│   │       └── base_site.html
│   ├── urls.py
│   ├── urls.pyc
│   ├── wsgi.py
│   └── wsgi.pyc
├── polls
│   ├── admin.py
│   ├── admin.pyc
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── models.py
│   ├── models.pyc
│   ├── templates
│   │   ├── __init__.py
│   │   └── polls
│   │       ├── detail.html
│   │       ├── index.html
│   │       ├── __init__.py
│   │       └── results.html
│   ├── tests.py
│   ├── tests.pyc
│   ├── urls.py
│   ├── urls.pyc
│   ├── views.py
│   └── views.pyc
└── polls.nja
I followed the tutorial to understand how django works but i'm stuck in the test part.
As the tutorial suggests, I created a file named tests.py in the app folder; the pretty straightforward file is:
# -*- coding: utf-8 -*-
from django.test import TestCase
import datetime
from django.utils import timezone
from polls.models import Question
# Create your tests here.
class QuestionMethodTests(TestCase):
def test_was_published_recently_with_future_poll(self):
"""
was_published_recently should return False if pub_date is in the future
"""
future_question = Question(pub_date=timezone.now() + datetime.timedelta(hours=50))
self.assertEqual(future_question.was_published_recently(), False)
Then I installed unittest2 into the virtualenv with
$pip install unittest2
and ran
$python manage.py test polls
Creating test database for alias 'default'...
E
======================================================================
ERROR: mydjango.polls.tests (unittest2.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: mydjango.polls.tests
Traceback (most recent call last):
File "/home/sergio/.virtualenvs/django4/local/lib/python2.7/site-packages/unittest2/loader.py", line 260, in _find_tests
module = self._get_module_from_name(name)
File "/home/sergio/.virtualenvs/django4/local/lib/python2.7/site-packages/unittest2/loader.py", line 238, in _get_module_from_name
__import__(name)
ImportError: No module named polls.tests
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Destroying test database for alias 'default'...
There's no way to get the test working; even if I don't pass the app name, it returns the same error:
$ python manage.py test
Creating test database for alias 'default'...
E
======================================================================
ERROR: mydjango.polls.tests (unittest2.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: mydjango.polls.tests
Traceback (most recent call last):
File "/home/sergio/.virtualenvs/django4/local/lib/python2.7/site-packages/unittest2/loader.py", line 260, in _find_tests
module = self._get_module_from_name(name)
File "/home/sergio/.virtualenvs/django4/local/lib/python2.7/site-packages/unittest2/loader.py", line 238, in _get_module_from_name
__import__(name)
ImportError: No module named polls.tests
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Destroying test database for alias 'default'...
My INSTALLED_APPS are:
INSTALLED_APPS = (
'south',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls',
)
I can't understand what I'm doing wrong - any help would be really appreciated.
Thank you
|
I had exactly the same issue with my Django project:
$ python manage.py test polls.tests
worked fine whereas the following failed with an import error:
$ python manage.py test polls
$ python manage.py test
(...)
ImportError: Failed to import test module: mydjango.polls.tests
Traceback (most recent call last):
(...)
ImportError: No module named polls.tests
Check the error message carefully: Django's test runner tries to import the tests from mydjango.polls.tests, where mydjango is the name of the root directory (the container for your project).
I fixed this issue by deleting the __init__.py file in the mydjango directory (at the same level as the manage.py file). This directory is not supposed to be a Python package, and it seems to confuse Django's test runner when it is one.
So just deleting the __init__.py file should fix the problem:
$ rm mydjango/__init__.py
|
Redirecting stdout and stderr to a PyQt4 QTextEdit from a secondary thread
|
Stack Overflow. Once again, I come to you in a time of dire need, teetering precariously on the brink of insanity. This question - as may be evident from the title - is an amalgamation of several other questions I have seen answered here.
I have a PyQt application, and I want to re-route the stdout and stderr streams to a QTextEdit that is in my GUI without delay.
Initially, I found the following stack overflow answer: http://stackoverflow.com/a/17145093/629404
This works perfectly, but with one caveat: If stdout or stderr are updated multiple times while the CPU is processing a relatively longer method, all of the updates show up simultaneously when the main thread returns to the application loop. Unfortunately, I have a few methods which take up to 20 seconds to complete (networking related), and so the application becomes unresponsive - and the QTextEdit does not update - until they are finished.
In order to fix this problem, I delegated all of the GUI processing to the main thread, and I have been spawning off a second thread to handle the longer networking operations, using pyqtSignals to notify the main thread of when the work is finished and pass back results. Immediately when I began testing the code written this way, the python interpreter began crashing without any warning.
This is where it gets very frustrating: Python is crashing because - using the class from the included link above - I have assigned the sys.stdout/err streams to the QTextEdit widget; PyQt widgets cannot be modified from any thread other than the application thread, and since the updates to stdout and stderr are coming from the secondary worker thread that I created, they are violating this rule. I have commented out the section of code where I redirect the output streams, and sure enough, the program runs without error.
This brings me back to square one, and leaves me in a confusing situation; Assuming I continue to handle GUI related operations in the main thread and deal with computation and longer operations in a secondary thread (which I have come to understand is the best way to keep the application from blocking when the user triggers events), how can I redirect Stdout and Stderr from both threads to the QTextEdit widget? The class in the link above works just fine for the main thread, but kills python - for the reason described above - when updates come from the second thread.
|
Firstly, +1 for realising how thread-unsafe many of the examples on stack overflow are!
The solution is to use a thread-safe object (like a Python Queue.Queue) to mediate the transfer of information. I've attached some sample code below which redirects stdout to a Python Queue. This Queue is read by a QThread, which emits the contents to the main thread through Qt's signal/slot mechanism (emitting signals is thread-safe). The main thread then writes the text to a text edit.
Hope that is clear, feel free to ask questions if it is not!
EDIT: Note that the code example provided doesn't clean up QThreads nicely, so you'll get warnings printed when you quit. I'll leave it to you to extend to your use case and clean up the thread(s)
import sys
from Queue import Queue
from PyQt4.QtCore import *
from PyQt4.QtGui import *
# The new Stream Object which replaces the default stream associated with sys.stdout
# This object just puts data in a queue!
class WriteStream(object):
def __init__(self,queue):
self.queue = queue
def write(self, text):
self.queue.put(text)
def flush(self):
pass # no-op: some code calls sys.stdout.flush() on the replaced stream
# A QObject (to be run in a QThread) which sits waiting for data to come through a Queue.Queue().
# It blocks until data is available, and once it has got something from the queue, it sends
# it to the "MainThread" by emitting a Qt Signal
class MyReceiver(QObject):
mysignal = pyqtSignal(str)
def __init__(self,queue,*args,**kwargs):
QObject.__init__(self,*args,**kwargs)
self.queue = queue
@pyqtSlot()
def run(self):
while True:
text = self.queue.get()
self.mysignal.emit(text)
# An example QObject (to be run in a QThread) which outputs information with print
class LongRunningThing(QObject):
@pyqtSlot()
def run(self):
for i in range(1000):
print i
# An Example application QWidget containing the textedit to redirect stdout to
class MyApp(QWidget):
def __init__(self,*args,**kwargs):
QWidget.__init__(self,*args,**kwargs)
self.layout = QVBoxLayout(self)
self.textedit = QTextEdit()
self.button = QPushButton('start long running thread')
self.button.clicked.connect(self.start_thread)
self.layout.addWidget(self.textedit)
self.layout.addWidget(self.button)
@pyqtSlot(str)
def append_text(self,text):
self.textedit.moveCursor(QTextCursor.End)
self.textedit.insertPlainText( text )
@pyqtSlot()
def start_thread(self):
self.thread = QThread()
self.long_running_thing = LongRunningThing()
self.long_running_thing.moveToThread(self.thread)
self.thread.started.connect(self.long_running_thing.run)
self.thread.start()
# Create Queue and redirect sys.stdout to this queue
queue = Queue()
sys.stdout = WriteStream(queue)
# Create QApplication and QWidget
qapp = QApplication(sys.argv)
app = MyApp()
app.show()
# Create thread that will listen on the other end of the queue, and send the text to the textedit in our application
thread = QThread()
my_receiver = MyReceiver(queue)
my_receiver.mysignal.connect(app.append_text)
my_receiver.moveToThread(thread)
thread.started.connect(my_receiver.run)
thread.start()
qapp.exec_()
|
How to find pg_config path
|
Complete newbie here, trying to set up Django to work with PostgreSQL.
I'm using Mac OS X 10.6.8. I have also installed PostgreSQL 9.3.
When I run pip install psycopg2 in terminal I get the following error
Downloading/unpacking psycopg2
Downloading psycopg2-2.5.2.tar.gz (685kB): 685kB downloaded
Running setup.py (path:/private/var/folders/A9/A99cs6x0FNusPejCVkYNTE+++TI/-Tmp-/pip_build_bengorman/psycopg2/setup.py) egg_info for package psycopg2
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
I've seen a number of posts on this
how-to-install-psycopg2-with-pip-on-python
pg-config-executable-not-found
but I have no clue how to find the bin folder location containing pg_config. Any tips on finding this path?
|
I recommend that you try to use Postgres.app. (http://postgresapp.com)
This way you can easily turn Postgres on and off on your Mac.
Once you do, add the path to Postgres to your .profile file by appending the following:
PATH="/Applications/Postgres.app/Contents/Versions/latest/bin:$PATH"
Only after you have added Postgres to your path can you try to install psycopg2, either within a virtual environment (using pip) or into your global site-packages.
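If you'd rather locate pg_config for an installation you already have, searching the usual install locations from the terminal should turn it up (the exact path varies with how Postgres was installed):
$ find /Applications /Library /usr/local -name pg_config 2>/dev/null
Then add the directory it prints to your PATH as shown above.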
|
Python service discovery: Advertise a service across a local network
|
I have a "server" python script running on one of the local network machines, which waits for clients to connect, and passes them some work to do. The server and client code have both been written, and are working as expected...
The problem is, this server might be running from any machine in the local network, so I can't hard-code the address in the script... I immediately wondered if I could make a machine advertise its existence so that clients can respond to it. Is that doable in Python with the standard library? I really don't have time to download twisted or tornado and learn about them, unfortunately, so I need something simple.
I tried to think more about it, and realized I can have a single static IP machine where servers register/unregister from and clients can look for servers from there. Kind of like a torrent tracker, I think. This'll have to do if I can't do the service advertising approach easily.
|
An easy way to do service announcement/discovery on the local network is by broadcasting UDP packets.
Constants:
PORT = 50000
MAGIC = "fna349fn" #to make sure we don't confuse or get confused by other programs
Announcement:
from time import sleep
from socket import socket, AF_INET, SOCK_DGRAM, SOL_SOCKET, SO_BROADCAST, gethostbyname, gethostname
s = socket(AF_INET, SOCK_DGRAM) #create UDP socket
s.bind(('', 0))
s.setsockopt(SOL_SOCKET, SO_BROADCAST, 1) #this is a broadcast socket
my_ip= gethostbyname(gethostname()) #get our IP. Be careful if you have multiple network interfaces or IPs
while 1:
data = MAGIC+my_ip
s.sendto(data, ('<broadcast>', PORT))
print "sent service announcement"
sleep(5)
Discovery:
from socket import socket, AF_INET, SOCK_DGRAM
s = socket(AF_INET, SOCK_DGRAM) #create UDP socket
s.bind(('', PORT))
while 1:
data, addr = s.recvfrom(1024) #wait for a packet
if data.startswith(MAGIC):
print "got service announcement from", data[len(MAGIC):]
This code was adapted from the demo on python.org
|
How does a print statement create a local variable?
|
Questions are at the end of this post.
First snippet: empty local variable dictionary.
def outer():
x = 1
def inner():
print "Local variables: %s" % locals()
return inner()
print outer()
Output:
Local variables: {}
Second snippet: printing x inside the inner() function creates a local variable entry.
def outer():
x = 1
def inner():
print x
print "Local variables: %s" % locals()
return inner()
print outer()
Output:
1
Local variables: {'x': 1}
Third Snippet: del x from inside the inner function:
def outer():
x = 1
def inner():
print x
print "Local variables: %s" % locals()
del x
return inner()
print outer()
Output:
>>> outer()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 7, in outer
File "<stdin>", line 4, in inner
UnboundLocalError: local variable 'x' referenced before assignment
>>>
Questions :
In the second snippet, how does the print statement create a local variable?
If it creates a local variable inside the inner function, why am I not able to delete it?
Could someone please help me understand this?
|
In Python, unless you specify otherwise (with a global statement, or a nonlocal statement in 3.0+), a variable is in locals if you modify it (assign to it, del it, etc.) anywhere in the function.*
In the first snippet, you never modify x, or even access it, so it's not local. In fact, it doesn't even exist. That's easy.
The second version is the tricky one. x is not local to inner, because you don't modify it in inner. So, Python goes looking for it, moving outward scope by scope, until it finds a scope that has that variable. And it finds it as a local variable in outer. Which means it's a closure variable or free variable in inner. Since the locals function includes closure variables as well as local variables, you see it.
The third version, by doing del x, makes x local to inner.** So, it appears in locals. However, you try to print it without having ever assigned anything to it, so there is no value yet. So you get an UnboundLocalError.
Generally, once you understand the basic idea Python is trying to accomplish here, it's usually obvious what kind of variable you have. But if it's ever unclear, the detailed rules are defined in Naming and Binding.
If you want to understand how closures work under the covers, you can start by inspecting the function objects. Try this:
def outer():
x = 1
def inner():
print x
print "Local variables: %s" % locals()
return inner
inner = outer()
print inner.func_closure
print inner.func_code.co_freevars
print outer.func_code.co_cellvars
The inspect module docs list all of the important members of function, code, and other "under the covers" objects.
Using the dis module to look at the bytecode for outer and inner may also be helpful.*** For example, if you run this code, you'll see a LOAD_FAST for the local, LOAD_DEREF for the cell, and LOAD_GLOBAL for the global.
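For instance, a quick sketch of that inspection, run in the same session as the snippet above:
>>> import dis
>>> dis.dis(inner)   # 'print x' compiles to LOAD_DEREF, reading x from the closure cell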
But if you really want to understand how all of this really works, the series of articles on symbol tables at Eli Bendersky's "Python internals" blog covers just about everything very nicely. (Thanks to Ashwini Chaudhary for locating it and pointing it out in a comment.)
* This is checked at compile time, not execution time, so trying to confuse it with, e.g., exec can successfully confuse both Python and yourself.
** Note that del counts as both a modification and an access. This can be surprising, but you can see that def foo(): del x will raise an UnboundLocalError because the del makes x local, and the very same del fails to find a value.
*** … assuming you're using a Python implementation that uses CPython-style bytecode, like CPython itself (of course) or PyPy.
|
error: could not create '/usr/local/lib/python2.7/dist-packages/virtualenv_support': Permission denied
|
I am using Ubuntu 12.04 and I am trying to pip install virtualenv, but suddenly I got this error:
samuel@sampc:~$ pip install virtualenv
Downloading/unpacking virtualenv
Running setup.py egg_info for package virtualenv
warning: no previously-included files matching '*' found under directory 'docs/_templates'
warning: no previously-included files matching '*' found under directory 'docs/_build'
Installing collected packages: virtualenv
Running setup.py install for virtualenv
error: could not create '/usr/local/lib/python2.7/dist-packages/virtualenv_support': Permission denied
Complete output from command /usr/bin/python -c "import setuptools;__file__='/home/samuel/build/virtualenv/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-Z2v_fR-record/install-record.txt:
running install
running build
running build_py
running install_lib
creating /usr/local/lib/python2.7/dist-packages/virtualenv_support
error: could not create '/usr/local/lib/python2.7/dist-packages/virtualenv_support': Permission denied
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/home/samuel/build/virtualenv/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-Z2v_fR-record/install-record.txt failed with error code 1
Storing complete log in /home/samuel/.pip/pip.log
Does anyone have an idea about my case?
|
Use
sudo pip install virtualenv
Adding "sudo" in front runs the command with administrator privileges; just don't forget your password.
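If you'd rather not write to the system-wide site-packages as root, installing into your per-user site-packages usually avoids the permission error as well:
pip install --user virtualenv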
|
What is a 'NoneType' object?
|
I'm getting this error when I run my python script:
TypeError: cannot concatenate 'str' and 'NoneType' objects
I'm pretty sure the 'str' means string, but I don't know what a 'NoneType' object is. My script craps out on the second line; I know the first one works because the commands from that line are in my ASA as I would expect. At first I thought it may be because I'm using variables and user input inside send_command.
Everything in 'CAPS' is a variable; everything in 'lower case' is input from 'parser.add_option' options.
I'm using pexpect, and optparse
send_command(child, SNMPGROUPCMD + group + V3PRIVCMD)
send_command(child, SNMPSRVUSRCMD + snmpuser + group + V3AUTHCMD + snmphmac + snmpauth + PRIVCMD + snmpencrypt + snmppriv)
|
NoneType is the type for the None object, which is an object that indicates no value. You cannot add it to strings or other objects.
What is most likely happening is that one of your names (the proper term for a variable in Python) is not set, hence it has the value of None and when you try to + it to a string, you get that exception:
send_command(child, SNMPGROUPCMD + group + V3PRIVCMD)
One of group or SNMPGROUPCMD or V3PRIVCMD has None as its value.
The best way to get around this problem is to assign a default value to your names.
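A minimal sketch of both the failure and the default-value fix (the values here are made up):
>>> group = None
>>> 'snmp-server group ' + group
TypeError: cannot concatenate 'str' and 'NoneType' objects
>>> group = group or 'default-group'   # fall back to a default
>>> 'snmp-server group ' + group
'snmp-server group default-group'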
|
SSL backend error when using OpenSSL
|
I was trying to install pycurl in a virtualenv using pip and I got this error
ImportError: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (none/other)
I read some documentation saying that "To fix this, you need to tell setup.py what SSL backend is used" (source) although I am not sure how to do this since I installed pycurl using pip.
How can I specify the SSL backend when installing pycurl with pip?
Thanks
|
After reading their INSTALLATION file, I was able to solve my problem by setting an environment variable and did a reinstall
remove existing pycurl installation
pip uninstall pycurl
export variable with your link-time ssl backend (which is openssl above)
export PYCURL_SSL_LIBRARY=openssl
install pycurl
pip install pycurl
There could be other solutions out there, but this works perfectly for me with a virtualenv and pip installation.
|
OrderedDict comprehensions
|
Can I extend syntax in python for dict comprehensions for other dicts, like the OrderedDict in collections module or my own types which inherit from dict?
Just rebinding the dict name obviously doesn't work, the {key: value} comprehension syntax still gives you a plain old dict for comprehensions and literals.
>>> from collections import OrderedDict
>>> olddict, dict = dict, OrderedDict
>>> {i: i*i for i in range(3)}.__class__
<type 'dict'>
So, if it's possible how would I go about doing that? It's OK if it only works in CPython. For syntax I guess I would try it with a O{k: v} prefix like we have on the r'various' u'string' b'objects'.
note: Of course we can use a generator expression instead, but I'm more interested in seeing how hackable Python is in terms of the grammar.
|
Sorry, not possible. Dict literals and dict comprehensions map to the built-in dict type, in a way that's hardcoded at the C level. That can't be overridden.
You can use this as an alternative, though:
OrderedDict((i, i * i) for i in range(3))
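For example:
>>> from collections import OrderedDict
>>> OrderedDict((i, i * i) for i in range(3))
OrderedDict([(0, 0), (1, 1), (2, 4)])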
|
JSON to pandas DataFrame
|
What I am trying to do is extract elevation data from the Google Maps API along a path specified by latitude and longitude coordinates, as follows:
from urllib2 import Request, urlopen
import json
path1 = '42.974049,-81.205203|42.974298,-81.195755'
request=Request('http://maps.googleapis.com/maps/api/elevation/json?locations='+path1+'&sensor=false')
response = urlopen(request)
elevations = response.read()
This gives me a data that looks like this:
elevations.splitlines()
['{',
' "results" : [',
' {',
' "elevation" : 243.3462677001953,',
' "location" : {',
' "lat" : 42.974049,',
' "lng" : -81.205203',
' },',
' "resolution" : 19.08790397644043',
' },',
' {',
' "elevation" : 244.1318664550781,',
' "location" : {',
' "lat" : 42.974298,',
' "lng" : -81.19575500000001',
' },',
' "resolution" : 19.08790397644043',
' }',
' ],',
' "status" : "OK"',
'}']
When putting it into a DataFrame, here is what I get:
pd.read_json(elevations)
and here is what I want:
I'm not sure if this is possible, but mainly what I am looking for is a way to be able to put the elevation, latitude and longitude data together in a pandas dataframe (doesn't have to have fancy mutiline headers).
If anyone can help or give some advice on working with this data, that would be great! If you can't tell, I haven't worked much with JSON data before...
EDIT:
This method isn't all that attractive but seems to work:
data = json.loads(elevations)
lat,lng,el = [],[],[]
for result in data['results']:
lat.append(result[u'location'][u'lat'])
lng.append(result[u'location'][u'lng'])
el.append(result[u'elevation'])
df = pd.DataFrame([lat,lng,el]).T
This ends up as a dataframe with columns for latitude, longitude and elevation.
|
I found a quick and easy solution to what I wanted using the json_normalize function included in the latest release of pandas, 0.13.
from urllib2 import Request, urlopen
import json
from pandas.io.json import json_normalize
path1 = '42.974049,-81.205203|42.974298,-81.195755'
request=Request('http://maps.googleapis.com/maps/api/elevation/json?locations='+path1+'&sensor=false')
response = urlopen(request)
elevations = response.read()
data = json.loads(elevations)
json_normalize(data['results'])
This gives a nice flattened dataframe with the json data that I got from the google maps API.
|
Catching KeyboardInterrupt in Python during program shutdown
|
I'm writing a command line utility in Python which, since it is production code, ought to be able to shut down cleanly without dumping a bunch of stuff (error codes, stack traces, etc.) to the screen. This means I need to catch keyboard interrupts.
I've tried using both a try/except block like:
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print 'Interrupted'
sys.exit(0)
and catching the signal itself (as in this post):
import signal
import sys
def sigint_handler(signal, frame):
print 'Interrupted'
sys.exit(0)
signal.signal(signal.SIGINT, sigint_handler)
Both methods seem to work quite well during normal operation. However, if the interrupt comes during cleanup code at the end of the application, Python seems to always print something to the screen. Catching the interrupt gives
^CInterrupted
Exception KeyboardInterrupt in <bound method MyClass.__del__ of <path.to.MyClass object at 0x802852b90>> ignored
whereas handling the signal gives either
^CInterrupted
Exception SystemExit: 0 in <Finalize object, dead> ignored
or
^CInterrupted
Exception SystemExit: 0 in <bound method MyClass.__del__ of <path.to.MyClass object at 0x802854a90>> ignored
Not only are these errors ugly, they're not very helpful (especially to an end user with no source code)!
The cleanup code for this application is fairly big, so there's a decent chance that this issue will be hit by real users. Is there any way to catch or block this output, or is it just something I'll have to deal with?
|
Check out this thread; it has some useful information about exiting and tracebacks.
If you are more interested in just killing the program, try something like this (this will take the legs out from under the cleanup code as well):
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print 'Interrupted'
try:
sys.exit(0)
except SystemExit:
os._exit(0)
|
input() error - NameError: name '...' is not defined
|
I am getting an error when I try to run this simple python script:
input_variable = input ("Enter your name: ")
print ("your name is" + input_variable)
Let's say I type in "dude"; the error I am getting is:
line 1, in <module>
input_variable = input ("Enter your name: ")
File "<string>", line 1, in <module>
NameError: name 'dude' is not defined
I am running Mac OS X 10.9.1 and I am using the Python Launcher app that came with the install of python 3.3 to run the script.
Edit: I realized I am somehow running these scripts with 2.7. I guess the real question is: how do I run my scripts with version 3.3? I thought if I dragged and dropped my scripts on top of the Python Launcher app that is inside the Python 3.3 folder in my Applications folder, it would launch them using 3.3. I guess this method still launches scripts with 2.7. So how do I use 3.3?
|
TL;DR
The input function in Python 2.7 evaluates whatever you enter as a Python expression. If you simply want to read strings, then use the raw_input function in Python 2.7, which will not evaluate the read strings.
If you are using Python 3.x, raw_input has been renamed to input. Quoting the Python 3.0 release notes,
raw_input() was renamed to input(). That is, the new input() function reads a line from sys.stdin and returns it with the trailing newline stripped. It raises EOFError if the input is terminated prematurely. To get the old behavior of input(), use eval(input())
In Python 2.7, there are two functions which can be used to accept user inputs. One is input and the other one is raw_input. You can think of the relation between them as follows
input = eval(raw_input)
Consider the following piece of code to understand this better
>>> dude = "thefourtheye"
>>> input_variable = input("Enter your name: ")
Enter your name: dude
>>> input_variable
'thefourtheye'
input accepts a string from the user and evaluates the string in the current Python context. When I type dude as input, it finds that dude is bound to the value thefourtheye and so the result of evaluation becomes thefourtheye and that gets assigned to input_variable.
If I enter something else which is not there in the current Python context, it will fail with a NameError.
>>> input("Enter your name: ")
Enter your name: dummy
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'dummy' is not defined
Security considerations with Python 2.7's input:
Since whatever user types is evaluated, it imposes security issues as well. For example, if you have already loaded os module in your program with import os, and then the user types in
os.remove("/etc/hosts")
this will be evaluated as a function call expression by Python and executed. If you are executing Python with elevated privileges, the /etc/hosts file will be deleted. See how dangerous that could be?
To demonstrate this, let's try to execute input function again.
>>> dude = "thefourtheye"
>>> input("Enter your name: ")
Enter your name: input("Enter your name again: ")
Enter your name again: dude
Now, when input("Enter your name: ") is executed, it waits for the user input and the user input is a valid Python function invocation and so that is also invoked. That is why we are seeing Enter your name again: prompt again.
So, you are better off with the raw_input function, like this
input_variable = raw_input("Enter your name: ")
If you need to convert the result to some other type, then you can use appropriate functions to convert the string returned by raw_input. For example, to read inputs as integers, use the int function, like shown in this answer.
In Python 3.x, there is only one function to get user input, and it is called input, which is equivalent to Python 2.7's raw_input.
|
How do I manipulate a variable whose name conflicts with PDB commands?
|
My code is, for better or worse, rife with single letter variables (it's physics stuff, so those letters are meaningful), as well as NumPy's, which I'm often interacting with.
When using the Python debugger, occasionally I'll want to look at the value of, say, n. However, when I hit n<enter>, that's the PDB command for (n)ext, which has a higher priority. print n works around looking at it, but how can I set it?
|
Use an exclamation mark ! before a statement to have it run:
python -m pdb test.py
> /home/user/test.py(1)<module>()
-> print('foo')
(Pdb) !n = 77
(Pdb) !n
77
(Pdb) n
foo
> /home/user/test.py(2)<module>()
-> print('bar')
(Pdb)
|
where is "from __future__ import braces" code
|
I was wondering what exactly is the code that executes for the command:
>>> from __future__ import braces
SyntaxError: not a chance
So, since Python is open source, I opened C:\Python27\Lib\__future__.py and looked.
Surprisingly, I found nothing there that handles importing the braces module.
So, my question is: where is the code that handles this? What happens when I run that command?
|
The code is in future.c:
future_check_features(PyFutureFeatures *ff, stmt_ty s, const char *filename)
...
else if (strcmp(feature, "braces") == 0) {
PyErr_SetString(PyExc_SyntaxError,
"not a chance");
PyErr_SyntaxLocation(filename, s->lineno);
return 0;
}
|
Factorial function works in Python, returns 0 for Julia
|
I define a factorial function as follows in Python:
def fact(n):
if n == 1:
return n
else:
return n * fact(n-1)
print(fact(100))
and as follows in Julia:
function fact(n)
if n == 1
n
else
n * fact(n-1)
end
end
println(fact(100))
The Python program returns a very large number for the evaluation of 100 (as expected). Julia returns 0. With a smaller number (like 10) they both work.
I have two questions:
Why does Python handle this OK while Julia does not?
Why doesn't Julia throw an error, instead of just printing 0?
|
Julia has separate fixed-size integer types, plus a BigInt type. The default type is Int64, which is of course 64 bits.
Since 100! takes about 526 bits, it obviously overflows an Int64.
You can solve this problem by just doing fact(BigInt(100)) (assuming you've required it), or of course you can do the conversion in the fact function.
Python used to be the same, once upon a time. It had separate types int, which was 16 or 32 or 64 bits depending on your machine, and long, which was arbitrary-length. If you ran your program on Python 1.5, it would either wrap around just like Julia, or raise an exception. The solution would be to call fact(100L), or to do the conversion to long inside the fact function.
However, at some point in the 2.x series, Python tied the two types together, so any int that overflows automatically becomes a long. And then, in 3.0, it merged the two types entirely, so there is no separate long anymore.
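You can watch that promotion happen in any Python 2.x interpreter (this sketch assumes a 64-bit build):
>>> import sys
>>> sys.maxint
9223372036854775807
>>> type(sys.maxint + 1)   # the result no longer fits in an int
<type 'long'>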
So, why does Julia just overflow instead of raising an error?
The FAQ actually explains "Why does Julia use native machine integer arithmetic?", which includes the wraparound behavior on overflow.
By "native machine arithmetic", people generally mean "what C does on almost all 2s-complement machines". Especially in languages like Julia and Python that were originally built on top of C, and stuck pretty close to the metal. In the case of Julia, this is not just a "default", but an intentional choice.
In C (at least as it was at the time), it's actually up to the implementation what happens if you overflow a signed integer type like int64⦠but on almost any platform that natively uses 2's complement arithmetic (which is almost any platform you'll see today), the exact same thing happens: it just truncates everything above the top 64 bits, meaning you wrap around from positive to negative. In fact, unsigned integer types are required to work this way in C. (C, meanwhile, works this way because that's how most CPUs work.)
In C (unlike most CPUs' machine languages), there is no way to detect that you've gotten an overflow after the fact. So, if you want to raise an OverflowError, you have to write some logic that detects that the multiplication will overflow before doing it. And you have to run that logic on every single multiplication. You may be able to optimize this for some platforms by writing inline assembly code. Or you can cast to a larger type, but (a) that tends to make your code slower, and (b) it doesn't work if you're already using the largest type (which int64 is on many platforms today).
In Python, making each multiplication up to 4x slower (usually less, but it can be that high) is no big deal, because Python spends more time fetching the bytecode and unboxing the integer objects than multiplying anyway. But Julia is meant to be faster than that.
As John Myles White explains in Computers are Machines:
In many ways, Julia sets itself apart from other new languages by its attempt to recover some of the power that was lost in the transition from C to languages like Python. But the transition comes with a substantial learning curve.
But there's another reason for this: overflowing signed arithmetic is actually useful in many cases. Not nearly as many as overflowing unsigned arithmetic (which is why C has defined unsigned arithmetic to work that way since before the first ANSI spec), but there are use cases.
And, even though you probably want type conversions more often than you want rollover, it is a lot easier to do the type conversions manually than the rollover. If you've ever done it in Python, picking the operand for % and getting the signs right is certainly easy to get wrong; casting to BigInt is pretty hard to screw up.
And finally, in a strongly-typed language, like both Python and Julia, type stability is important. One of the reasons Python 3 exists was that the old str type magically converting to unicode caused problems. It's far less common for your int type magically converting to long to cause problems, but it can happen (e.g., when you're grabbing a value off the wire, or via a C API, and expect to write the result out in the same format). Python's dev team argued over this when doing the int/long unification, quoting "practicality beats purity" and various other bits of the Zen, and ultimately decided that the old behavior caused more problems than the new behavior would. Julia's designers made the opposite decision.
|
How to make an auto increment integer field Django
|
The question is simple: I am making an order model for a shopping cart and I need to make a field that auto-increments when the order is made. But I don't know how to make the integer field auto-increment :S Any help?
order.py
class Order(models.Model):
cart = models.ForeignKey(Cart)
add_date = models.DateTimeField(auto_now_add=True)
order_number = models.IntegerField()
enable = models.BooleanField(default=True)
|
In Django
1: There will be a default field named "id", which auto-increments.
2: You can define any field as an auto-increment field using the AutoField field.
class Order(models.Model):
auto_increment_id = models.AutoField(primary_key=True)
# use primary_key=True if you do not want to use the default "id" field that Django gives your model
db design
+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
| core_order | CREATE TABLE `core_order` (
`auto_increment_id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`auto_increment_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
If you want to use Django's default id as the auto-increment field:
class Order(models.Model):
dd_date = models.DateTimeField(auto_now_add=True)
db design
+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
| core_order | CREATE TABLE `core_order` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`dd_date` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
How to fix: "UnicodeDecodeError: 'ascii' codec can't decode byte"
|
as3:~/ngokevin-site# nano content/blog/20140114_test-chinese.mkd
as3:~/ngokevin-site# wok
Traceback (most recent call last):
File "/usr/local/bin/wok", line 4, in
Engine()
File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 104, in init
self.load_pages()
File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 238, in load_pages
p = Page.from_file(os.path.join(root, f), self.options, self, renderer)
File "/usr/local/lib/python2.7/site-packages/wok/page.py", line 111, in from_file
page.meta['content'] = page.renderer.render(page.original)
File "/usr/local/lib/python2.7/site-packages/wok/renderers.py", line 46, in render
return markdown(plain, Markdown.plugins)
File "/usr/local/lib/python2.7/site-packages/markdown/init.py", line 419, in markdown
return md.convert(text)
File "/usr/local/lib/python2.7/site-packages/markdown/init.py", line 281, in convert
source = unicode(source)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 1: ordinal not in range(128). -- Note: Markdown only accepts unicode input!
How do I fix it?
In some other Python-based static blog apps, Chinese posts can be published successfully, such as this app: http://github.com/vrypan/bucket3. On my site http://bc3.brite.biz/, Chinese posts publish just fine.
|
Finally I got it:
as3:/usr/local/lib/python2.7/site-packages# cat sitecustomize.py
# encoding=utf8
import sys
reload(sys)
sys.setdefaultencoding('utf8')
Let me check:
as3:~/ngokevin-site# python
Python 2.7.6 (default, Dec 6 2013, 14:49:02)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> reload(sys)
<module 'sys' (built-in)>
>>> sys.getdefaultencoding()
'utf8'
>>>
The above shows the default encoding of python is utf8. Then the error is no more.
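That said, changing the interpreter-wide default encoding is a blunt instrument. A narrower sketch, assuming the markdown source file is UTF-8, is to decode the bytes to unicode explicitly before they reach markdown():
# decode explicitly instead of changing the global default encoding
with open('content/blog/20140114_test-chinese.mkd') as f:
    text = f.read().decode('utf-8')  # text is now a unicode object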
|
Format / Suppress Scientific Notation from Python Pandas Aggregation Results
|
How can one modify the format of the output from a groupby operation in pandas that produces scientific notation for very large numbers? I know how to do string formatting in python, but I'm at a loss when it comes to applying it here.
df1.groupby('dept')['data1'].sum()
dept
value1 1.192433e+08
value2 1.293066e+08
value3 1.077142e+08
This suppresses the scientific notation if I convert to string but now I'm just wondering how to string format and add decimals.
sum_sales_dept.astype(str)
|
Granted, the answer I linked in the comments is not very helpful. You can specify your own string converter like so.
In [25]: pd.set_option('display.float_format', lambda x: '%.3f' % x)
In [28]: Series(np.random.randn(3))*1000000000
Out[28]:
0 -757322420.605
1 -1436160588.997
2 -1235116117.064
dtype: float64
I'm not sure if that's the preferred way to do this, but it works.
Converting numbers to strings purely for aesthetic purposes seems like a bad idea, but if you have a good reason, this is one way:
In [6]: Series(np.random.randn(3)).apply(lambda x: '%.3f' % x)
Out[6]:
0 0.026
1 -0.482
2 -0.694
dtype: object
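Applied to the groupby result from the question, the same idea looks like this (df1 and the column names are taken from the question; the output values are illustrative):
sum_sales_dept = df1.groupby('dept')['data1'].sum()
sum_sales_dept.apply(lambda x: '{:,.2f}'.format(x))
# dept
# value1    119,243,300.00
# ...
# dtype: object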
|
False Unused Import Statement in PyCharm?
|
Given this scenario:
b.py:
import A
# A is unused here
c.py:
from b import A
# A is used here
PyCharm complains in b.py that "import A" is an unused import and Optimize imports deletes it, breaking import in c.py
I know these chained imports are not a good practice (although you may use it to implement a facade module), but is it me or is it a PyCharm fail?
|
You can actually use the PyUnresolvedReferences marker to deactivate the inspection for your import statement:
# noinspection PyUnresolvedReferences
import A
Reference: PyCharm bug PY-2240
|
Python: Wait on all of `concurrent.futures.ThreadPoolExecutor`'s futures
|
I've given concurrent.futures.ThreadPoolExecutor a bunch of tasks, and I want to wait until they're all completed before proceeding with the flow. How can I do that, without having to save all the futures and call wait on them? (I want an action on the executor.)
|
Just call Executor.shutdown:
shutdown(wait=True)
Signal the executor that it should free any resources that it is
using when the currently pending futures are done executing. Calls
to Executor.submit() and Executor.map() made after shutdown will
raise RuntimeError.
If wait is True then this method will not return until all the pending futures are
done executing and the resources associated with the executor have been freed.
However, if you keep track of your futures in a list, then you can avoid shutting down the executor (keeping it available for later submissions) by using the futures.wait() function:
concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)
Wait for the Future instances (possibly created by different
Executor instances) given by fs to complete. Returns a named 2-tuple
of sets. The first set, named done, contains the futures that
completed (finished or were cancelled) before the wait completed. The
second set, named not_done, contains uncompleted futures.
Note that if you don't provide a timeout, it waits until all futures have completed.
You can also use futures.as_completed() instead; however, you'd have to iterate over it, as in the sketch below.
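For reference, a minimal sketch of the as_completed() approach (the task function here is made up):
import concurrent.futures

def task(n):
    return n * n

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(task, n) for n in range(10)]
    # as_completed yields each future as soon as it finishes
    for future in concurrent.futures.as_completed(futures):
        print(future.result())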
|
What is the correct way to get the previous page of results given an NDB cursor?
|
I'm working on providing an API via GAE that will allow users to page forwards and backwards through a set of entities. I've reviewed the section about cursors on the NDB Queries documentation page, which includes some sample code that describes how to page backwards through query results, but it doesn't seem to be working as desired. I'm using GAE Development SDK 1.8.8.
Here's a modified version of that example that creates 5 sample entities, gets and prints the first page, steps forward into and prints the second page, and attempts to step backwards and print the first page again:
import pprint
from google.appengine.ext import ndb
class Bar(ndb.Model):
    foo = ndb.StringProperty()
#ndb.put_multi([Bar(foo="a"), Bar(foo="b"), Bar(foo="c"), Bar(foo="d"), Bar(foo="e")])
# Set up.
q = Bar.query()
q_forward = q.order(Bar.foo)
q_reverse = q.order(-Bar.foo)
# Fetch the first page.
bars1, cursor1, more1 = q_forward.fetch_page(2)
pprint.pprint(bars1)
# Fetch the next (2nd) page.
bars2, cursor2, more2 = q_forward.fetch_page(2, start_cursor=cursor1)
pprint.pprint(bars2)
# Fetch the previous page.
rev_cursor2 = cursor2.reversed()
bars3, cursor3, more3 = q_reverse.fetch_page(2, start_cursor=rev_cursor2)
pprint.pprint(bars3)
(FYI, you can run the above in the Interactive Console of your local app engine.)
The above code prints the following results; note that the third page of results is just the second page reversed, instead of going back to the first page:
[Bar(key=Key('Bar', 4996180836614144), foo=u'a'),
Bar(key=Key('Bar', 6122080743456768), foo=u'b')]
[Bar(key=Key('Bar', 5559130790035456), foo=u'c'),
Bar(key=Key('Bar', 6685030696878080), foo=u'd')]
[Bar(key=Key('Bar', 6685030696878080), foo=u'd'),
Bar(key=Key('Bar', 5559130790035456), foo=u'c')]
I was expecting to see results like this:
[Bar(key=Key('Bar', 4996180836614144), foo=u'a'),
Bar(key=Key('Bar', 6122080743456768), foo=u'b')]
[Bar(key=Key('Bar', 5559130790035456), foo=u'c'),
Bar(key=Key('Bar', 6685030696878080), foo=u'd')]
[Bar(key=Key('Bar', 4996180836614144), foo=u'a'),
 Bar(key=Key('Bar', 6122080743456768), foo=u'b')]
If I change the "Fetch the previous page" section of code to the following snippet, I get the expected output, but are there any drawbacks that I haven't foreseen to using the forward-ordered query and end_cursor instead of the mechanism described in the documentation?
# Fetch the previous page.
bars3, cursor3, more3 = q_forward.fetch_page(2, end_cursor=cursor1)
pprint.pprint(bars3)
|
To make the example from the docs a little clearer let's forget about the datastore for a moment and work with a list instead:
# some_list = [4, 6, 1, 12, 15, 0, 3, 7, 10, 11, 8, 2, 9, 14, 5, 13]
# Set up.
q = Bar.query()
q_forward = q.order(Bar.key)
# This puts the elements of our list into the following order:
# ordered_list = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
q_reverse = q.order(-Bar.key)
# Now we reversed the order for backwards paging:
# reversed_list = [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
# Fetch a page going forward.
bars, cursor, more = q_forward.fetch_page(10)
# This fetches the first 10 elements from ordered_list(!)
# and yields the following:
# bars = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# cursor = [... 9, CURSOR-> 10 ...]
# more = True
# Please notice the right-facing cursor.
# Fetch the same page going backward.
rev_cursor = cursor.reversed()
# Now the cursor is facing to the left:
# rev_cursor = [... 9, <-CURSOR 10 ...]
bars1, cursor1, more1 = q_reverse.fetch_page(10, start_cursor=rev_cursor)
# This uses reversed_list(!), starts at rev_cursor and fetches
# the first ten elements to its left:
# bars1 = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
So the example from the docs fetches the same page from two different directions, in two different orders. This is not what you want to achieve.
It seems you already found a solution that covers your use case pretty well but let me suggest another:
Simply reuse cursor1 to go back to page2.
If we're talking frontend and the current page is page3, this would mean assigning cursor3 to the 'next'-button and cursor1 to the 'previous'-button.
That way you have to reverse neither the query nor the cursor(s).
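In code, that suggestion is just a matter of holding on to each page's start cursor (reusing the query object from the question):
bars_p2, cursor2, more2 = q_forward.fetch_page(2, start_cursor=cursor1)  # page 2
bars_p3, cursor3, more3 = q_forward.fetch_page(2, start_cursor=cursor2)  # page 3
# "Previous" from page 3: simply rerun the forward query from cursor1
bars_prev, _, _ = q_forward.fetch_page(2, start_cursor=cursor1)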
|
Autogenerate documentation for Python project using setuptools
|
I have created a demo project which uses setuptools and has the following structure:
project/
|- pizza/
| |- __init__.py
| `- margherita.py
|
|- README.rst
|- setup.cfg
`- setup.py
I'm trying to autogenerate documentation for this project using Sphinx. So far I've tried:
# Generate a sphinx template
sphinx-quickstart
# Use default settings, except for project name, etc.
sphinx-apidoc -o source .
./setup.py build_sphinx
I feel there has to be an easier way to autogenerate this documentation using the README, setup.py and docstrings.
Ultimately I'd like to autogenerate apidocs for another project where I use the Python C-api as well. I couldn't find anything for this.
My main question is: Is there an easier way to autogenerate this documentation?
|
To extend setup.py so it contains an extra command for Sphinx, you could create a custom command. I've cooked up a small example that runs Sphinx apidoc and then builds the doc sources. The project name, author, version and location of the sources defined in the setup.py are used (assuming they are defined).
import os
import sphinx
import sphinx.apidoc
from setuptools import Command

class Sphinx(Command):
    user_options = []
    description = 'sphinx'

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # metadata contains information supplied in setup()
        metadata = self.distribution.metadata
        # package_dir may be None, in that case use the current directory.
        src_dir = (self.distribution.package_dir or {'': ''})['']
        src_dir = os.path.join(os.getcwd(), src_dir)
        # Run sphinx by calling the main method, '--full' also adds a conf.py
        sphinx.apidoc.main(
            ['', '--full', '-H', metadata.name, '-A', metadata.author,
             '-V', metadata.version, '-R', metadata.version,
             '-o', os.path.join('doc', 'source'), src_dir])
        # build the doc sources
        sphinx.main(['', os.path.join('doc', 'source'),
                     os.path.join('doc', 'build')])
Then the command needs to be registered to the entry point group distutils.commands. Here the command is called sphinx.
from setuptools import setup

setup(
    # ...
    setup_requires=['sphinx'],
    entry_points={
        'distutils.commands': [
            'sphinx = example_module:Sphinx'
        ]
    }
)
I don't know how C sources are handled, but this'll get you started.
|
Convert unicode to datetime proper strptime format
|
I am trying to convert a unicode object to a datetime object.
I read through the documentation: http://docs.python.org/2/library/time.html#time.strptime
and tried
datetime.strptime(date_posted, '%Y-%m-%dT%H:%M:%SZ')
but I get the error message ValueError: time data '2014-01-15T01:35:30.314Z' does not match format '%Y-%m-%dT%H:%M:%SZ'
Any feedback on what is the proper format?
I appreciate the time and expertise.
|
You can parse the microseconds:
from datetime import datetime
date_posted = '2014-01-15T01:35:30.314Z'
datetime.strptime(date_posted, '%Y-%m-%dT%H:%M:%S.%fZ')
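If the timestamp format may vary, the third-party python-dateutil package (assuming it is installed) can parse ISO 8601 strings without an explicit format string:
from dateutil import parser
parser.parse('2014-01-15T01:35:30.314Z')
# datetime.datetime(2014, 1, 15, 1, 35, 30, 314000, tzinfo=tzutc())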
|
broken easy_install and pip after upgrading to OS X Mavericks
|
Upgraded to OS X 10.9 Mavericks and installed XCode, Command Line Tools, XQuartz, etc. Trying to run a pip install now, but it says that the distribution is not found:
Traceback (most recent call last):
File "/usr/local/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in <module>
working_set.require(__requires__)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require
needed = self.resolve(parse_requirements(requirements))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: pip==1.4.1
So I tried to install pip with an easy_install. Turns out that's borked too:
Traceback (most recent call last):
File "/usr/local/bin/easy_install", line 5, in <module>
from pkg_resources import load_entry_point
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2607, in <module>
parse_requirements(__requires__), Environment()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: setuptools==1.1.6
So some of the other threads say to reinstall setuptools with a sudo python ez_setup.py. It seems to work fine:
Installed /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools-1.1.6-py2.7.egg
Processing dependencies for setuptools==1.1.6
Finished processing dependencies for setuptools==1.1.6
But when running the easy_install pip, the same pkg_resources.DistributionNotFound: setuptools==1.1.6 error occurs. Anyone else have this problem? Any ideas how to fix this?
|
Install easy_install:
Download the ez_setup.py module from https://pypi.python.org/pypi/setuptools
$ cd path/to/download/directory
$ python ez_setup.py
Install pip:
$ sudo easy_install pip
|
Set environment variable using saltstack
|
I am writing some SaltStack formulas that install the tomcat package, but after installation I have to set JAVA_HOME in the /etc/default/tomcat7 file. Is there an option to set JAVA_HOME, or to modify or add JAVA_HOME as an environment variable (i.e. in the .bashrc or .profile files)?
My pillar.example file will have these details:
tomcat_version: 7
java_home: '/usr/lib/jvm/java-7-oracle'
|
As an alternative to setting .bashrc or .profile, you could simply set the JAVA_HOME value directly in /etc/default/tomcat7:
tomcat_configuration:
  file.append:
    - name: /etc/default/tomcat7
    - text: export JAVA_HOME={{ pillar['java_home'] }}
If for some reason file.append is not suitable, salt offers file.replace and (new in 0.18.0) file.blockreplace.
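For example, a file.replace state that updates an existing JAVA_HOME line in place might look like this (the pattern is an assumption about the file's contents, so adjust as needed):
tomcat_java_home:
  file.replace:
    - name: /etc/default/tomcat7
    - pattern: '^#?JAVA_HOME=.*'
    - repl: JAVA_HOME={{ pillar['java_home'] }}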
|
What kind of problems (if any) would there be combining asyncio with multiprocessing?
|
As almost everyone is aware when they first look at threading in Python, there is the GIL that makes life miserable for people who actually want to do processing in parallel - or at least give it a chance.
I am currently looking at implementing something like the Reactor pattern. Effectively I want to listen for incoming socket connections on one thread-like, and when someone tries to connect, accept that connection and pass it along to another thread-like for processing.
I'm not (yet) sure what kind of load I might be facing. I know there is currently setup a 2MB cap on incoming messages. Theoretically we could get thousands per second (though I don't know if practically we've seen anything like that). The amount of time spent processing a message isn't terribly important, though obviously quicker would be better.
I was looking into the Reactor pattern, and developed a small example using the multiprocessing library that (at least in testing) seems to work just fine. However, now/soon we'll have the asyncio library available, which would handle the event loop for me.
Is there anything that could bite me by combining asyncio and multiprocessing?
|
You should be able to safely combine asyncio and multiprocessing without too much trouble, though you shouldn't be using multiprocessing directly. The cardinal sin of asyncio (and any other event-loop based asynchronous framework) is blocking the event loop. If you try to use multiprocessing directly, any time you block to wait for a child process, you're going to block the event loop. Obviously, this is bad.
The simplest way to avoid this is to use BaseEventLoop.run_in_executor to execute a function in a concurrent.futures.ProcessPoolExecutor. ProcessPoolExecutor is a process pool implemented using multiprocessing.Process, but asyncio has built-in support for executing a function in it without blocking the event loop. Here's a simple example:
import time
import asyncio
from concurrent.futures import ProcessPoolExecutor
def blocking_func(x):
    time.sleep(x)  # Pretend this is expensive calculations
    return x * 5

@asyncio.coroutine
def main():
    #pool = multiprocessing.Pool()
    #out = pool.apply(blocking_func, args=(10,)) # This blocks the event loop.
    executor = ProcessPoolExecutor()
    out = yield from loop.run_in_executor(executor, blocking_func, 10)  # This does not
    print(out)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
For the majority of cases, this function alone is good enough. If you find yourself needing other constructs from multiprocessing, like Queue, Event, Manager, etc., there is a third-party library called aioprocessing (full disclosure: I wrote it) that provides asyncio-compatible versions of all the multiprocessing data structures. Here's an example demoing that:
import time
import asyncio
import aioprocessing
import multiprocessing
def func(queue, event, lock, items):
    with lock:
        event.set()
        for item in items:
            time.sleep(3)
            queue.put(item+5)
    queue.close()

@asyncio.coroutine
def example(queue, event, lock):
    l = [1,2,3,4,5]
    p = aioprocessing.AioProcess(target=func, args=(queue, event, lock, l))
    p.start()
    while True:
        result = yield from queue.coro_get()
        if result is None:
            break
        print("Got result {}".format(result))
    yield from p.coro_join()

@asyncio.coroutine
def example2(queue, event, lock):
    yield from event.coro_wait()
    with (yield from lock):
        yield from queue.coro_put(78)
        yield from queue.coro_put(None)  # Shut down the worker

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    queue = aioprocessing.AioQueue()
    lock = aioprocessing.AioLock()
    event = aioprocessing.AioEvent()
    tasks = [
        asyncio.async(example(queue, event, lock)),
        asyncio.async(example2(queue, event, lock)),
    ]
    loop.run_until_complete(asyncio.wait(tasks))
    loop.close()
|
When to use imshow over pcolormesh?
|
I often find myself needing to create heatmap-style visualizations in Python with matplotlib. Matplotlib provides several functions which apparently do the same thing. pcolormesh is recommended instead of pcolor but what is the difference (from a practical point of view as a data plotter) between imshow and pcolormesh? What are the pros/cons of using one over the other? In what scenarios would one or the other be a clear winner?
|
Fundamentally, imshow assumes that all data elements in your array are to be rendered at the same size, whereas pcolormesh/pcolor associates elements of the data array with rectangular elements whose size may vary over the rectangular grid.
If your mesh elements are uniform, then imshow with interpolation set to "nearest" will look very similar to the default pcolormesh display (without the optional X and Y args). The obvious differences are that the imshow y-axis will be inverted (w.r.t. pcolormesh) and the aspect ratio is maintained, although those characteristics can be altered to look like the pcolormesh output as well.
From a practical point of view, pcolormesh is more convenient if you want to visualize the data array as cells, particularly when the rectangular mesh is non-uniform or when you want to plot the boundaries/edges of the cells. Otherwise, imshow is more convenient if you have a fixed cell size, want to maintain aspect ratio, want control over pixel interpolation, or want to specify RGB values directly.
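A minimal side-by-side sketch on the same uniform grid (random data, purely illustrative):
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(10, 10)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(data, interpolation='nearest')  # y-axis inverted, aspect ratio kept
ax2.pcolormesh(data)                       # y-axis upward, cells stretch to fill the axes
plt.show()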
|
How to create a numpy array of all True or all False?
|
In Python, how do I create a numpy array of arbitrary shape filled with all True or all False?
|
numpy already allows the creation of arrays of all ones or all zeros very easily:
e.g. np.ones((2, 2)) or np.zeros((2, 2))
Since True and False are represented in Python as 1 and 0, respectively, we have only to cast this array as boolean using the optional dtype parameter and we are done.
np.ones((2, 2), dtype=bool)
returns:
array([[ True, True],
[ True, True]], dtype=bool)
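An all-False array works the same way via np.zeros:
np.zeros((2, 2), dtype=bool)
# array([[False, False],
#        [False, False]], dtype=bool)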
|
Automatically run %matplotlib inline in IPython Notebook
|
Every time I launch IPython Notebook, the first command I run is
%matplotlib inline
Is there some way to change my config file so that when I launch IPython, it is automatically in this mode?
|
The configuration way
IPython has profiles for configuration, located at ~/.ipython/profile_*. The default profile is called profile_default. Within this folder there are two primary configuration files:
ipython_config.py
ipython_kernel_config.py
Add the inline option for matplotlib to ipython_kernel_config.py:
c = get_config()
# ... Any other configurables you want to set
c.InteractiveShellApp.matplotlib = "inline"
matplotlib vs. pylab
Usage of %pylab to get inline plotting is discouraged.
It introduces all sorts of gunk into your namespace that you just don't need.
%matplotlib, on the other hand, enables inline plotting without injecting anything into your namespace. You'll need to make explicit calls to get matplotlib and numpy imported:
import matplotlib.pyplot as plt
import numpy as np
The small price of typing out your imports explicitly should be completely overcome by the fact that you now have reproducible code.
|
Django REST framework - filtering against query param
|
So I created my "API" using REST framework, now trying to do filtering for it.
That's how my models.py look like more or less:
class Airline(models.Model):
    name = models.TextField()

class Workspace(models.Model):
    airline = models.ForeignKey(Airline)
    name = models.CharField(max_length=100)

class Passenger(models.Model):
    workspace = models.ForeignKey(Workspace)
    title = models.CharField(max_length=200)
So I would like to see in my JSON file "all passengers in particular workspace" or "all passengers in particular airline" etc.
Here is my, serializers.py
class AirlineSerializer(serializers.ModelSerializer):
    class Meta:
        model = Airline

class WorkspaceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Workspace

class PassengerSerializer(serializers.ModelSerializer):
    class Meta:
        model = Passenger
And views.py:
class AirlineList(generics.ListCreateAPIView):
    model = Airline
    serializer_class = AirlineSerializer

class AirlineDetail(generics.RetrieveUpdateDestroyAPIView):
    model = Airline
    serializer_class = AirlineSerializer

class WorkspaceList(generics.ListCreateAPIView):
    model = Workspace
    serializer_class = WorkspaceSerializer

class WorkspaceDetail(generics.RetrieveUpdateDestroyAPIView):
    model = Workspace
    serializer_class = WorkspaceSerializer

class PassengerList(generics.ListCreateAPIView):
    model = Passenger
    serializer_class = PassengerSerializer

class PassengerDetail(generics.RetrieveUpdateDestroyAPIView):
    model = Passenger
    serializer_class = PassengerSerializer
It's my first time using REST framework. I have checked the docs, and they helped me with what I've done so far. I would like to use filtering against query parameters:
http://www.django-rest-framework.org/api-guide/filtering/#filtering-against-query-parameters
but I can't really get it to work.
|
So with the help of @limelights I managed to do what I wanted; here is the code:
class PassengerList(generics.ListCreateAPIView):
    model = Passenger
    serializer_class = PassengerSerializer

    # Show all of the PASSENGERS in a particular WORKSPACE,
    # or all of the PASSENGERS in a particular AIRLINE
    def get_queryset(self):
        queryset = Passenger.objects.all()
        workspace = self.request.query_params.get('workspace', None)
        airline = self.request.query_params.get('airline', None)
        if workspace is not None:
            queryset = queryset.filter(workspace_id=workspace)
        elif airline is not None:
            queryset = queryset.filter(workspace__airline_id=airline)
        return queryset
|
Fill username and password using selenium in python
|
How can I auto-fill the username and password on the page opened by the code below?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
chromedriver = 'C:\\chromedriver.exe'
browser = webdriver.Chrome(chromedriver)
browser.get('http://www.example.com')
After that I really do not know what to do; the following attempt does not work:
username = Select(browser.find_element_by_name('Username'))
password = Select(browser.find_element_by_name('Password'))
username.select_by_visible_text("text")
password.select_by_visible_text("text")
|
# 'browser' is the webdriver instance created in your code
username = browser.find_element_by_id("username")
password = browser.find_element_by_id("password")
username.send_keys("YourUsername")
password.send_keys("Pa55worD")
browser.find_element_by_name("submit").click()
Notes to your code:
find_element_by_name('Username'): Username capitalized doesn't match anything.
Select() is used to act on a Select Element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select)
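If the page loads slowly, an explicit wait avoids a NoSuchElementException; a sketch assuming the same element ids as above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 10 seconds for the username field to appear before typing
username = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "username")))
username.send_keys("YourUsername")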
|
Django rest framework api_view vs normal view
|
I have been looking everywhere to find a decent explanation for this, and they all fall short. When do you use the @api_view decorator rather than a class-based view with the Django REST framework?
|
REST Framework aside, it's the same question of when to use class based views versus function based views in general. CBVs in Django are awesome, flexible and save loads of boilerplate code, but sometimes it's just faster, easier and clearer to use a function based view. Think about it with the same approach you'd take to writing a normal view in Django. REST Framework simply supports both methods of writing view code as it introduces in the tutorial.
Generally go with a CBV unless it's getting in your way, then keep it simple with a function based view and the decorator. In both Django and the REST Framework, the logic for typical things like lists, pagination and CRUD operations is already written and easily extendable in the form of classes and mixins. If your view logic is doing something notably different, a function based view might be appropriate. And of course you can use both approaches in your app.
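For illustration, here is the same trivial endpoint written both ways (a made-up "ping" view, not an official pattern):
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework.views import APIView

# Function based view with the decorator:
@api_view(['GET'])
def ping(request):
    return Response({'pong': True})

# The equivalent class based view:
class PingView(APIView):
    def get(self, request):
        return Response({'pong': True})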
|
Assign pandas dataframe column dtypes
|
I want to set the dtypes of multiple columns in pd.DataFrame (I have a file that I've had to manually parse into a list of lists, as the file was not amenable to pd.read_csv):
import pandas as pd
print pd.DataFrame([['a','1'],['b','2']],
dtype={'x':'object','y':'int'},
columns=['x','y'])
I get
ValueError: entry not a 2- or 3- tuple
The only way I can set them is by looping through each column variable and recasting with astype.
dtypes = {'x':'object','y':'int'}
mydata = pd.DataFrame([['a','1'],['b','2']],
                      columns=['x','y'])
for c in mydata.columns:
    mydata[c] = mydata[c].astype(dtypes[c])
print mydata['y'].dtype #=> int64
Is there a better way?
|
You can use convert_objects to infer better dtypes:
In [11]: df
Out[11]:
x y
0 a 1
1 b 2
In [12]: df.dtypes
Out[12]:
x object
y object
dtype: object
In [13]: df.convert_objects(convert_numeric=True)
Out[13]:
x y
0 a 1
1 b 2
In [14]: df.convert_objects(convert_numeric=True).dtypes
Out[14]:
x object
y int64
dtype: object
Magic!
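One caveat: convert_objects returns a new DataFrame rather than modifying df in place, so assign the result back if you want to keep it:
df = df.convert_objects(convert_numeric=True)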
|
Custom user model in django
|
I want to create a custom user model using django.contrib.auth.models.AbstractUser as stated in the djangodocs:
If you're entirely happy with Django's User model and you just want to
add some additional profile information, you can simply subclass
django.contrib.auth.models.AbstractUser and add your custom profile
fields. This class provides the full implementation of the default
User as an abstract model.
So I inherited the AbstractUser class in my Users class and added a field. But when I run the python manage.py syncdb I get the following error:
CommandError: One or more models did not validate:
admin.logentry: 'user' has a relation with model login.users, which has either
not been installed or is abstract.
I went through other questions on stackoverflow but couldn't resolve the error. Here is my code:
models.py
from django.conf import settings
from django.db import models
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import AbstractUser
from django.contrib import admin
class Users(AbstractUser):
    college = models.CharField(max_length=40)

admin.site.register(Users, UserAdmin)
admin.py
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.forms import UserChangeForm, UserCreationForm
from login.models import Users
from django import forms
class UsersChangeForm(UserChangeForm):
    class Meta(UserChangeForm.Meta):
        model = Users

class UsersAdmin(UserAdmin):
    form = UsersChangeForm
    fieldsets = UserAdmin.fieldsets + (
        (None, {'fields': ('college',)}),
    )

admin.site.register(Users, UsersAdmin)
settings.py
INSTALLED_APPS = (
    'forms',
    'login',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
)

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
)

AUTH_USER_MODEL = 'login.users'
EDIT:
I want to store the user information in the same table as auth_user and not in a new table.
|
I did this in one of my projects. I was surprised to see that you extended User, because the doc says something else :) You can extend the Django User model, but if you only want to add new fields (not change its behavior), you should use a OneToOneField.
If you wish to store information related to User, you can use a one-to-one
relationship to a model containing the fields for additional information.
So, as you can see in the link, your code should look like:
from django.contrib.auth.models import User
class MyUser(models.Model):
    user = models.OneToOneField(User)
    # Or a ForeignKey to the College table?
    college = models.CharField(max_length=40)
    other_data = ...
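A hypothetical usage sketch for this profile pattern (the user values are made up):
from django.contrib.auth.models import User

u = User.objects.create_user('jane', 'jane@example.com', 'secret')
MyUser.objects.create(user=u, college='Some College')
print(u.myuser.college)  # reverse one-to-one access via the lowercased class name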
|
Anaconda not finding my packages installed with `pip`
|
I'm new to the Anaconda version of Python, and already I'm running into unpleasant problems.
I installed Anaconda per the instructions here, and it worked like a charm, with all the included packages importing properly when needed. Then I went on to install some extra packages that Anaconda did not include, using pip:
$ sudo pip install BeautifulSoup mrjob pattern
The installations seem to go perfectly, but when I try to import the packages in ipython, things get frustrating:
Python 2.7.6 |Anaconda 1.8.0 (64-bit)| (default, Nov 11 2013, 10:47:18)
Type "copyright", "credits" or "license" for more information.
IPython 1.1.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import BeautifulSoup
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-aa1e12a76f5e> in <module>()
----> 1 import BeautifulSoup
ImportError: No module named BeautifulSoup
In [2]: import mrjob
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-6ea1b9bda48b> in <module>()
----> 1 import mrjob
ImportError: No module named mrjob
In [3]: import pattern
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-3-4b662941bac1> in <module>()
----> 1 import pattern
ImportError: No module named pattern
In [4]:
Funny thing is, these packages import perfectly well when I'm not running the Anaconda bundle of Python, after removing
# added by Anaconda 1.8.0 installer
export PATH="/home/username/anaconda/bin:$PATH"
from my .bashrc:
Python 2.7.5+ (default, Sep 19 2013, 13:48:49)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: import BeautifulSoup
In [2]: import mrjob
In [3]: import pattern
In [4]:
Did I miss anything during any of these installations that I ran? Should I manually link these packages to Anaconda so that it knows where to find them?
|
In the comments, it was determined that the pip in use was /usr/bin/pip; in other words, the system pip. The system pip will install into the system site-packages, not Anaconda's site-packages.
The solution is to make sure you're using Anaconda's pip when installing packages for use with Anaconda.
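To check which pip is being picked up and to invoke Anaconda's explicitly (the ~/anaconda path comes from the .bashrc line quoted in the question):
$ which pip
/usr/bin/pip          # the system pip, not Anaconda's
$ ~/anaconda/bin/pip install BeautifulSoup mrjob pattern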
|
Converting Float to Dollars and Cents
|
First of all, I have tried this post (among others): Currency formatting in Python. It has no effect on my variable. My best guess is that it is because I am using Python 3 and that was code for Python 2. (Unless I overlooked something, because I am new to Python.)
I want to convert a float, such as 1234.5, to a String, such as "$1,234.50". How would I go about doing this?
And just in case, here is my code, which ran without errors but did not change my variable:
import locale

money = float(1234.5)
locale.setlocale(locale.LC_ALL, '')
locale.currency(money, grouping=True)
Also unsuccessful:
money = float(1234.5)
print(money) #output is 1234.5
'${:,.2f}'.format(money)
print(money) #output is 1234.5
|
In Python 3.x and 2.7, you can simply do this:
>>> '${:,.2f}'.format(1234.5)
'$1,234.50'
The :, adds a comma as a thousands separator, and the .2f limits the string to two decimal places (or adds enough zeroes to get to 2 decimal places, as the case may be) at the end.
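This also explains why the attempt in the question "did not affect" the variable: format() builds and returns a new string, leaving the float unchanged, so the result has to be assigned:
money = 1234.5
money_text = '${:,.2f}'.format(money)  # money itself is untouched
print(money_text)  # $1,234.50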
|
What does abstraction mean in programming?
|
I'm learning Python and I'm not sure I understand the following statement: "The function (including its name) can capture our mental chunking, or abstraction, of the problem."
It's the part in bold whose meaning I don't understand in terms of programming. The quote comes from http://www.openbookproject.net/thinkcs/python/english3e/functions.html
How to Think Like a Computer Scientist, 3rd edition.
Thank you!
|
Abstraction is a core concept in all of computer science. Without abstraction, we would still be programming in machine code or worse not have computers in the first place. So IMHO that's a really good question.
What is abstraction
Abstracting something means to give names to things, so that the name captures the core of what a function or a whole program does.
One example is given in the book you reference, where it says
Suppose weâre working with turtles, and a common operation we need is
to draw squares. âDraw a squareâ is an abstraction, or a mental chunk,
of a number of smaller steps. So letâs write a function to capture the
pattern of this âbuilding blockâ:
Forget about the turtles for a moment and just think of drawing a square. If I tell you to draw a square (on paper), you immediately know what to do:
draw a square => draw a rectangle with all sides of the same length.
You can do this without further questions because you know by heart what a square is, without me telling you step by step. Here, the word square is the abstraction of "draw a rectangle with all sides of the same length".
Abstractions run deep
But wait, how do you know what a rectangle is? Well, that's another abstraction for the following:
rectangle => draw two lines parallel to each other, of the same length, and then add another two parallel lines perpendicular to the other two lines, again of the same length but possibly of different length than the first two.
Of course it goes on and on - lines, parallel, perpendicular, connecting are all abstractions of well-known concepts.
Now, imagine each time you want a rectangle or a square to be drawn you have to give the full definition of a rectangle, or explain lines, parallel lines, perpendicular lines and connecting lines -- it would take far too long to do so.
The real power of abstraction
That's the first power of abstractions: they make talking and getting things done much easier.
The second power of abstractions comes from the nice property of composability: once you have defined abstractions, you can compose two or more abstractions to form a new, larger abstraction: say you are tired of drawing squares, but you really want to draw a house. Assume we have already defined the triangle, so then we can define:
house => draw a square with a triangle on top of it
Next, you want a village:
village => draw multiple houses next to each other
Oh wait, we want a city -- and we have a new concept street:
city => draw many villages close to each other, fill empty spaces with more houses, but leave room for streets
street => (some definition of street)
and so on...
How does this all apply to programmming?
If in the course of planning your program (a process known as analysis and design), you find good abstractions to the problem you are trying to solve, your programs become shorter, hence easier to write and - maybe more importantly - easier to read. The way to do this is to try and grasp the major concepts that define your problems -- as in the (simplified) example of drawing a house, this was squares and triangles, to draw a village it was houses.
In programming, we define abstractions as functions (and some other constructs like classes and modules, but let's focus on functions for now). A function essentially names a set of single statements, so a function essentially is an abstraction -- see the examples in your book for details.
The beauty of it all
In programming, abstractions can make or break productivity. That's why often times, commonly used functions are collected into libraries which can be reused by others. This means you don't have to worry about the details, you only need to understand how to use the ready-made abstractions. Obviously that should make things easier for you, so you can work faster and thus be more productive:
Example:
Imagine there is a graphics library called "nicepic" that contains pre-defined functions for all abstractions discussed above: rectangles, squares, triangles, house, village.
Say you want to create a program based on the above abstractions that paints a nice picture of a house, all you have to write is this:
import nicepic
nicepic.draw_house()
So that's just two lines of code to get something much more elaborate. Isn't that just wonderful?
Hope this helps.
|
How do I install Python libraries?
|
I was looking for a tutorial on how to install Python libraries in the wheel format.
It does not seem straightforward, so I'd appreciate a simple step-by-step tutorial on how to install the module named "requests" for CPython.
I downloaded it from: https://pypi.python.org/pypi/requests and now I have a .whl file. I've got Python 2.7 and 3.3 on Windows, so how do I install it so all the other Python scripts I run can use it?
|
You want to install a downloaded wheel (.whl) file on Python under Windows?
Install pip on your Python(s) on Windows (on Python 3.4+ it is already included)
Upgrade pip if necessary
pip install -U pip
Install a local wheel file using pip
pip install --no-index --find-links=LocalPathToWheelFile PackageName
Option --no-index tells pip not to look on pypi.python.org (which would fail for many packages if you have no compiler installed); --find-links then tells pip where to look instead. PackageName is the name of the package (numpy, scipy, ...; the first part or the whole of the wheel file name). For more information see the install options of pip.
You can execute these commands in the command prompt when switching to your Scripts folder of your Python installation.
Example:
cd C:\Python27\Scripts
pip install -U pip
pip install --no-index --find-links=LocalPathToWheelFile PackageName
Note: It can still be that the package does not install on Windows because it may contain C/C++ source files which need to be compiled. You would need then to make sure a compiler is installed. Often searching for alternative pre-compiled distributions is the fastest way out.
For example numpy-1.9.2+mkl-cp27-none-win_amd64.whl has PackageName numpy.
|