Python pandas Filtering out nan from a data selection of a column of strings
|
Without using groupby, how would I filter out data without NaN?
Let's say I have a matrix where customers fill in 'N/A', 'n/a' or any of its variations, and others leave it blank:
import pandas as pd
import numpy as np
df = pd.DataFrame({'movie': ['thg', 'thg', 'mol', 'mol', 'lob', 'lob'],
                   'rating': [3., 4., 5., np.nan, np.nan, np.nan],
                   'name': ['John', np.nan, 'N/A', 'Graham', np.nan, np.nan]})
nbs = df['name'].str.extract('^(N/A|NA|na|n/a)')
nms=df[(df['name'] != nbs) ]
output:
>>> nms
movie name rating
0 thg John 3
1 thg NaN 4
3 mol Graham NaN
4 lob NaN NaN
5 lob NaN NaN
How would I filter out NaN values so I can get results to work with like this:
movie name rating
0 thg John 3
3 mol Graham NaN
I am guessing I need something like ~np.isnan, but the tilde does not work with strings.
|
Just drop them:
nms.dropna(thresh=2)
thresh=2 requires at least two non-NaN values to keep a row, so this drops every row that has two or more NaN values.
You could then drop the rows where name is NaN:
In [87]:
nms
Out[87]:
movie name rating
0 thg John 3
1 thg NaN 4
3 mol Graham NaN
4 lob NaN NaN
5 lob NaN NaN
[5 rows x 3 columns]
In [89]:
nms = nms.dropna(thresh=2)
In [90]:
nms[nms.name.notnull()]
Out[90]:
movie name rating
0 thg John 3
3 mol Graham NaN
[2 rows x 3 columns]
EDIT
Actually, looking at what you originally want, you can do just this without the dropna call:
nms[nms.name.notnull()]
|
error: [Errno 32] Broken pipe
|
I am working on a Django project. All went well till I created an Ajax request to send values from the html page to the backend (views.py).
When I send the data using Ajax, I am able to view the values being passed to views.py, and it even reaches the render_to_response method and displays my page, but it throws the broken pipe error in the terminal. I don't see any kind of disruption to the program, but I wanted to know if there is a way to prevent this error from occurring. I checked the other answers, but no luck so far.
When I try to hit submit again on the refreshed page, I get this message:
The page that you're looking for used information that you entered. Returning to that page might cause any action you took to be repeated. Do you want to continue? [Submit] [Cancel]
Here is the dump:
Traceback (most recent call last):
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 34812)
----------------------------------------
File "/usr/lib/python2.7/dist-packages/django/core/servers/basehttp.py", line 284, in run
self.finish_response()
File "/usr/lib/python2.7/dist-packages/django/core/servers/basehttp.py", line 324, in finish_response
self.write(data)
File "/usr/lib/python2.7/dist-packages/django/core/servers/basehttp.py", line 403, in write
self.send_headers()
File "/usr/lib/python2.7/dist-packages/django/core/servers/basehttp.py", line 467, in send_headers
self.send_preamble()
File "/usr/lib/python2.7/dist-packages/django/core/servers/basehttp.py", line 385, in send_preamble
'Date: %s\r\n' % http_date()
File "/usr/lib/python2.7/socket.py", line 324, in write
self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 310, in process_request
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python2.7/dist-packages/django/core/servers/basehttp.py", line 570, in __init__
BaseHTTPRequestHandler.__init__(self, *args, **kwargs)
File "/usr/lib/python2.7/SocketServer.py", line 640, in __init__
self.finish()
File "/usr/lib/python2.7/SocketServer.py", line 693, in finish
self.wfile.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
Update:
Here is the code that I am sending:
$(document).ready(function() {
    $.csrftoken();
    $("#submitdata").click(function() {
        //values = [tmode, fmode, t_cool, t_heat, hold];
        values = {
            "tmode": tmode,
            "fmode": fmode,
            "t_cool": t_cool,
            "t_heat": t_heat,
            "hold": hold
        };
        var jsonText = JSON.stringify(values);
        $.ajax({
            url: "/submitdata/",
            type: 'POST',
            data: jsonText,
            dataType: 'json',
            success: function(data) {
                console.log(data.success);
            },
            complete: function() {
                console.log('complete');
            },
            error: function(xhr, textStatus, thrownError) {
                console.log(thrownError);
                console.log(obj);
            }
        });
    });
});
And here is my views.py:
@login_required
def submitvalues(request):
    #context = RequestContext(request)
    if request.POST:
        jsonvalues = json.loads(request.raw_post_data)
        print jsonvalues
    return HttpResponse(json.dumps(dict(status='updated')), mimetype="application/json")
I am still facing the same issue. Can someone help me with this?
Edit on 5/28/2014:
I just figured out the reason for the broken pipe. It was because I was not sending back a response from Python and was just expecting the page to refresh automatically. I am a newbie to all of this, and it took me a while to figure out why this happened.
|
You haven't posted any code, but this is probably because you have triggered the Ajax request on a button submit but haven't prevented the default action. So the Ajax request is made, but by the time it comes to return the data, the browser has already requested the next page anyway, so there is nothing to receive it.
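For illustration, a minimal sketch of what preventing the default action looks like with jQuery (the selector and URL are taken from the question's later update; adapt as needed):
$("#submitdata").click(function(e) {
    e.preventDefault();  // stop the browser from submitting the form and loading a new page
    $.ajax({
        url: "/submitdata/",
        type: "POST",
        data: JSON.stringify(values),
        dataType: "json",
        success: function(data) {
            console.log(data.success);
        }
    });
});
With the default submit suppressed, the page stays put, the Ajax response has somewhere to arrive, and the broken pipe goes away.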
|
Circular Histogram for Python
|
I have periodic data and the distribution for it is best visualised around a circle. Now the question is how can I do this visualisation using matplotlib? If not, can it be done easily in Python?
My code here will demonstrate a crude approximation of distribution around a circle:
from matplotlib import pyplot as plt
import numpy as np
# generating random data
a=np.random.uniform(low=0,high=2*np.pi,size=50)
#real circle
b=np.linspace(0,2*np.pi,1000)
a=sorted(a)
plt.plot(np.sin(a)*0.5,np.cos(a)*0.5)
plt.plot(np.sin(b),np.cos(b))
plt.show()
There are a few examples in a similar question on the Mathematica Stack Exchange.
|
Building off of this example from the gallery, you can do
import numpy as np
import matplotlib.pyplot as plt
N = 80
bottom = 8
max_height = 4
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
radii = max_height*np.random.rand(N)
width = (2*np.pi) / N
ax = plt.subplot(111, polar=True)
bars = ax.bar(theta, radii, width=width, bottom=bottom)
# Use custom colors and opacity
for r, bar in zip(radii, bars):
    bar.set_facecolor(plt.cm.jet(r / 10.))
    bar.set_alpha(0.8)
plt.show()
Of course, there are many variations and tweaks, but this should get you started.
In general, a browse through the matplotlib gallery is usually a good place to start.
Here, I used the bottom keyword to leave the center empty, because I think I saw an earlier question by you with a graph more like what I have, so I assume that's what you want. To get the full wedges that you show above, just use bottom=0 (or leave it out since 0 is the default).
|
AttributeError: can't set attribute in python
|
Here is my code
from collections import namedtuple

N = namedtuple("N", ['ind', 'set', 'v'])

def solve():
    items = []
    stack = []
    R = set(range(0, 8))
    for i in range(0, 8):
        items.append(N(i, R, 8))
        stack.append(N(0, R - set(range(0, 1)), i))
    while len(stack) > 0:
        node = stack.pop()
        print node
        print items[node.ind]
        items[node.ind].v = node.v
In the last line I can't set items[node.ind].v to node.v as I want, and am getting the error
"AttributeError: can't set attribute"
I don't know what's wrong, but it must be something syntax-related, as statements like node.v += 1 show the same error. I'm new to Python, so please suggest a way to make the above change possible.
|
namedtuple instances are immutable, so their fields cannot be assigned to. Instead, build a new tuple with _replace() and store it back in the list:
items[node.ind] = items[node.ind]._replace(v=node.v)
|
Why can two functions with the same `id` have different attributes?
|
Why can two functions with the same id value have differing attributes like __doc__ or __name__?
Here's a toy example:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
my_type = type("my_type", (object,), some_dict)
m = my_type()
print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()
Which prints:
57386560
57386560
I am function 0
I am function 1
function_0
function_1
1 # <--- Why is it bound to the most recent value of that variable?
1
I've tried mixing in a call to copy.deepcopy (not sure if the recursive copy is needed for functions or if it is overkill), but this doesn't change anything.
|
You are comparing methods, and method objects are created anew each time you access one on an instance or class (via the descriptor protocol).
Once you tested their id() you discard the method again (there are no references to it), so Python is free to reuse the id when you create another method. You want to test the actual functions here, by using m.function_0.__func__ and m.function_1.__func__:
>>> id(m.function_0.__func__)
4321897240
>>> id(m.function_1.__func__)
4321906032
Method objects inherit the __doc__ and __name__ attributes from the function that they wrap. The actual underlying functions are really still different objects.
As for the two functions returning 1: both functions use i as a closure variable; the value of i is looked up when you call the method, not when you create the function. See Local variables in Python nested functions.
The easiest work-around is to add another scope with a factory function:
some_dict = {}
for i in range(2):
    def create_fun(i):
        def fun(self, *args):
            print i
        fun.__doc__ = "I am function {}".format(i)
        fun.__name__ = "function_{}".format(i)
        return fun
    some_dict["function_{}".format(i)] = create_fun(i)
|
Debugging the error "gcc: error: x86_64-linux-gnu-gcc: No such file or directory"
|
I'm trying to build:
https://github.com/kanzure/nanoengineer
But it looks like it errors out on:
gcc -DHAVE_CONFIG_H -I. -I../.. -I/usr/include/python2.7 -std=c99 x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -MT libstructcompare_a-structcompare.o -MD -MP -MF .deps/libstructcompare_a-structcompare.Tpo -c -o libstructcompare_a-structcompare.o `test -f 'structcompare.c' || echo './'`structcompare.c
gcc: error: x86_64-linux-gnu-gcc: No such file or directory
x86_64-linux-gnu-gcc definitely exists in /usr/bin (It's a symlink) and the target definitely exists as well. It looks to me like the Makefile wasn't generated correctly, perhaps there is a flag that should be passed before specifying x86_64-linux-gnu-gcc? I am unsure as well what specifying x86_64-linux-gnu-gcc is supposed to accomplish.
Finally, this makefile was generated by configure, so once we narrow down the cause of the error, I'll have to figure out what files to modify in order to fix this. (I'm a CMake kind of guy myself, but of course I didn't choose the build system for this project.) My OS is Debian.
I've tried building this branch as well:
https://github.com/kanzure/nanoengineer/branches/kirka-updates
If you can try getting this to build on your system, I would greatly appreciate it! Thanks!
|
After a fair amount of work, I was able to get it to build on Ubuntu 12.04 x86 and Debian 7.4 x86_64. I wrote up a guide below. Can you please try following it to see if it resolves the issue?
If not please let me know where you get stuck.
Install Common Dependencies
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev
Install NumArray 1.5.2
wget http://goo.gl/6gL0q3 -O numarray-1.5.2.tgz
tar xfvz numarray-1.5.2.tgz
cd numarray-1.5.2
sudo python setup.py install
Install Numeric 23.8
wget http://goo.gl/PxaHFW -O numeric-23.8.tgz
tar xfvz numeric-23.8.tgz
cd Numeric-23.8
sudo python setup.py install
Install HDF5 1.6.5
wget ftp://ftp.hdfgroup.org/HDF5/releases/hdf5-1.6/hdf5-1.6.5.tar.gz
tar xfvz hdf5-1.6.5.tar.gz
cd hdf5-1.6.5
./configure --prefix=/usr/local
sudo make
sudo make install
Install Nanoengineer
git clone https://github.com/kanzure/nanoengineer.git
cd nanoengineer
./bootstrap
./configure
make
sudo make install
Troubleshooting
On Debian Jessie, you will receive the error message mentioned in the question. There seems to be an issue in the automake scripts: x86_64-linux-gnu-gcc is inserted into CFLAGS, and gcc interprets it as the name of one of the source files. As a workaround, let's create an empty file with that name: empty so that it won't change the program, and with that very name so that the compiler picks it up. From the cloned nanoengineer directory, run this command to make gcc happy (yes, it is a hack, but it does work):
touch sim/src/x86_64-linux-gnu-gcc
If you receive an error message when attempting to compile HDF5 along the lines of "error: call to '__open_missing_mode' declared with attribute error: open with O_CREAT in second argument needs 3 arguments", then modify the file perform/zip_perf.c, line 548, to look like the following and then rerun make...
output = open(filename, O_RDWR | O_CREAT, S_IRUSR|S_IWUSR);
If you receive an error message about Numeric/arrayobject.h not being found when building Nanoengineer, try running
export CPPFLAGS=-I/usr/local/include/python2.7
./configure
make
sudo make install
If you receive an error message similar to "TRACE_PREFIX undeclared", modify the file sim/src/simhelp.c lines 38 to 41 to look like this and re-run make:
#ifdef DISTUTILS
static char tracePrefix[] = "";
#else
static char tracePrefix[] = "";
If you receive an error message when trying to launch NanoEngineer-1 that mentions something similar to "cannot import name GL_ARRAY_BUFFER_ARB", modify the lines in the following files
/usr/local/bin/NanoEngineer1_0.9.2.app/program/graphics/drawing/setup_draw.py
/usr/local/bin/NanoEngineer1_0.9.2.app/program/graphics/drawing/GLPrimitiveBuffer.py
/usr/local/bin/NanoEngineer1_0.9.2.app/program/prototype/test_drawing.py
that look like this:
from OpenGL.GL import GL_ARRAY_BUFFER_ARB
from OpenGL.GL import GL_ELEMENT_ARRAY_BUFFER_ARB
to look like this:
from OpenGL.GL.ARB.vertex_buffer_object import GL_ARRAY_BUFFER_ARB
from OpenGL.GL.ARB.vertex_buffer_object import GL_ELEMENT_ARRAY_BUFFER_ARB
I also found an additional troubleshooting text file that has been removed, but you can find it here
|
Python Anaconda - How to Safely Uninstall
|
I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required?
Currently when I run which python I get this path:
/Users/username/anaconda/bin/python
|
The anaconda installer adds a line in your ~/.bash_profile script that prepends the anaconda bin directory to your $PATH environment variable. Deleting the anaconda directory should be all you need to do, but it's good housekeeping to remove this line from your setup script too.
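For reference, the cleanup amounts to something like this (a sketch; the exact paths depend on where you installed Anaconda and which shell startup file you use):
# remove the Anaconda installation itself
rm -rf ~/anaconda

# then open ~/.bash_profile and delete the line the installer added,
# which looks something like:
#   export PATH="/Users/username/anaconda/bin:$PATH"
# finally, start a new terminal session and confirm the change:
which python   # should now print /usr/bin/python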
|
pandas applying regex to replace values
|
I have read some pricing data into a pandas DataFrame; the values appear as:
$40,000*
$40000 conditions attached
I want to strip it down to just the numeric values.
I know I can loop through and apply regex
[0-9]+
to each field and then join the resulting list back together, but is there a non-loopy way?
Thanks
|
You could use Series.str.replace:
import pandas as pd
df = pd.DataFrame(['$40,000*','$40000 conditions attached'], columns=['P'])
print(df)
# P
# 0 $40,000*
# 1 $40000 conditions attached
df['P'] = df['P'].str.replace(r'\D+', '').astype('int')
print(df)
yields
P
0 40000
1 40000
|
Compiling Python 3.4 is not copying pip
|
I have compiled Python 3.4 from the sources on Linux Mint, but for some reason it is not copying pip to its final compiled folder (after the make install).
Any ideas?
|
Just sorted it out. Here is how to compile Python from source.
$ ./configure --prefix=/home/user/sources/compiled/python3.4_dev --with-ensurepip=install
$ make
$ make install
If you get an "Ignoring ensurepip failure: pip 1.5.4 requires SSL/TLS" error:
$ sudo apt-get install libssl-dev openssl
$ ls
2to3 idle3 pip3.5 python3 python3.5m pyvenv
2to3-3.5 idle3.5 pydoc3 python3.5 python3.5m-config pyvenv-3.5
easy_install-3.5 pip3 pydoc3.5 python3.5-config python3-config
As you can see, pip is copied into the target folder; the --with-ensurepip=install option is important.
|
How to create a Pandas DataFrame from String
|
In order to test some functionality I would like to create a DataFrame from a string. Let's say my testdata looks like:
TESTDATA="""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
"""
What is the simplest way to read that data into a Pandas DataFrame?
|
A simple way to do this is to use StringIO and pass it to the pandas.read_csv function. E.g.:
import sys
if sys.version_info[0] < 3:
    from StringIO import StringIO
else:
    from io import StringIO
import pandas as pd
TESTDATA=StringIO("""col1;col2;col3
1;4.4;99
2;4.5;200
3;4.7;65
4;3.2;140
""")
df = pd.read_csv(TESTDATA, sep=";")
|
How to access the real value of a cell using the openpyxl module for python
|
I am having real trouble with this, since the cell.value function returns the formula used for the cell, and I need to extract the result Excel provides after operating.
Thank you.
OK, I think I have found a way around it; apparently, to access cell.internal_value you have to use iter_rows() on your worksheet first, which yields rows of "RawCell" objects.
for row in ws.iter_rows():
    for cell in row:
        print cell.internal_value
|
As Charlie Clark already suggested, you can set data_only to True when you load your workbook:
from openpyxl import load_workbook
wb = load_workbook("file.xlsx", data_only=True)
sh = wb["Sheet_name"]
print(sh["x10"].value)
Good luck :)
|
Difference between import numpy and import numpy as np
|
I understand that when possible one should use
import numpy as np
This helps avoid conflicts between namespaces. But I have noticed that while the command below works
import numpy.f2py as myf2py
the following does not
import numpy as np
np.f2py #throws no module named f2py
Can someone please explain this?
|
numpy is the top package name, and doing import numpy doesn't import submodule numpy.f2py.
When you do import numpy it creates a name that points to numpy, but numpy is not further linked to f2py. The link is established when you do import numpy.f2py.
In your above code:
import numpy as np # np is an alias pointing to numpy, but at this point numpy is not linked to numpy.f2py
import numpy.f2py as myf2py # this command makes numpy link to numpy.f2py. myf2py is another alias pointing to numpy.f2py as well
Here is the difference between import numpy.f2py and import numpy.f2py as myf2py:
import numpy.f2py
puts numpy into the local symbol table (pointing to numpy), and numpy is linked to numpy.f2py
both numpy and numpy.f2py are accessible
import numpy.f2py as myf2py
puts myf2py into the local symbol table (pointing to numpy.f2py)
Its parent numpy is not added to the local symbol table, so you cannot access numpy directly.
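To see the difference interactively, here is a small sketch (run it in a fresh interpreter; depending on your NumPy version, some submodules may already have been imported as a side effect):
import numpy as np
# np.f2py   # fails at this point: the submodule has not been imported yet

import numpy.f2py   # imports the submodule and attaches it to the numpy package
np.f2py             # now works, because numpy.f2py is an attribute of numpy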
|
How to pass command line arguments to ipython
|
Is there any way to pass arguments to my Python script on the command line while using IPython? Ideally I want to call my script as:
ipython -i script.py --argument blah
and I want to be able to have --argument and blah listed in my sys.argv.
|
You can pass an extra -- separator before the arguments:
ipython script.py -- --argument blah
From IPython's help:
ipython [subcommand] [options] [-c cmd | -m mod | file] [--] [arg] ...
If invoked with no options, it executes the file and exits, passing the
remaining arguments to the script, just as if you had specified the same
command with python. You may need to specify `--` before args to be passed
to the script, to prevent IPython from attempting to parse them. If you
specify the option `-i` before the filename, it will enter an interactive
IPython session after running the script, rather than exiting.
Demo:
$ cat script.py
import sys
print(sys.argv)
$ ipython script.py -- --argument blah
['script.py', '--argument', 'blah']
$ ipython script.py -- arg1 arg2
['script.py', 'arg1', 'arg2']
|
random.seed(): What does it do?
|
I am a bit confused about what random.seed() does in Python. For example, why do the trials below do what they do (consistently)?
>>> import random
>>> random.seed(9001)
>>> random.randint(1, 10)
1
>>> random.randint(1, 10)
3
>>> random.randint(1, 10)
6
>>> random.randint(1, 10)
6
>>> random.randint(1, 10)
7
I couldn't find good documentation on this. Thanks in advance!
|
Pseudo-random number generators work by performing some operation on a value. Generally this value is the previous number generated by the generator. However, the first time you use the generator, there is no previous value.
Seeding a pseudo-random number generator gives it its first "previous" value. Each seed value will correspond to a sequence of generated values for a given random number generator. That is, if you provide the same seed twice, you get the same sequence of numbers twice.
Generally, you want to seed your random number generator with some value that will change each execution of the program. For instance, the current time is a frequently-used seed. The reason why this doesn't happen automatically is so that if you want, you can provide a specific seed to get a known sequence of numbers.
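As a small sketch of this, re-seeding with the same value reproduces the same sequence (the exact numbers you get depend on your Python version):
import random

random.seed(9001)
first = [random.randint(1, 10) for _ in range(5)]

random.seed(9001)          # same seed again
second = [random.randint(1, 10) for _ in range(5)]

print(first == second)     # True: identical sequences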
|
How to install PyQt4 on Windows using pip?
|
I'm using Python 3.4 on Windows. When I run a script, it complains
ImportError: No Module named 'PyQt4'
So I tried to install it, but pip install PyQt4 gives
Could not find any downloads that satisfy the requirement PyQt4
although it does show up when I run pip search PyQt4. I tried to pip install python-qt, which installed successfully but didn't solve the problem.
What am I doing wrong?
|
Here are PyQt installers from the site - RiverBank Computing - PyQt Binary Downloads
Here are Windows wheel packages built by Chris Golke - Python Windows Binary packages - PyQt
Since Qt is a more complicated system with a compiled C++ codebase underlying the Python interface it provides, it is more complex to build than a pure-Python package, which means it can be hard to install from source.
For the Windows wheel files, make sure you grab the correct one (python version, 32/64 bit), and then use pip to install it - e.g:
C:\path\where\wheel\is\> pip install PyQt4-4.11.4-cp35-none-win_amd64.whl
Should properly install if you are running an x64 build of Python 3.5.
|
Difference between 'and' (boolean) vs. '&' (bitwise) in python. Why difference in behavior with lists vs numpy arrays?
|
What explains the difference in behavior of boolean and bitwise operations on lists vs numpy.arrays?
I'm getting confused about the appropriate use of the '&' vs 'and' in python, illustrated in the following simple examples.
mylist1 = [True, True, True, False, True]
mylist2 = [False, True, False, True, False]
>>> len(mylist1) == len(mylist2)
True
# ---- Example 1 ----
>>>mylist1 and mylist2
[False, True, False, True, False]
#I am confused: I would have expected [False, True, False, False, False]
# ---- Example 2 ----
>>>mylist1 & mylist2
*** TypeError: unsupported operand type(s) for &: 'list' and 'list'
#I am confused: Why not just like example 1?
# ---- Example 3 ----
>>>import numpy as np
>>> np.array(mylist1) and np.array(mylist2)
*** ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
#I am confused: Why not just like Example 4?
# ---- Example 4 ----
>>> np.array(mylist1) & np.array(mylist2)
array([False, True, False, False, False], dtype=bool)
#This is the output I was expecting!
This answer, and this answer both helped me understand that 'and' is a boolean operation but '&' is a bitwise operation.
I was reading some information to better understand the concept of bitwise operations, but I am struggling to use that information to make sense of my above 4 examples.
Note, in my particular situation, my desired output is a newlist where:
len(newlist) == len(mylist1)
newlist[i] == (mylist1[i] and mylist2[i]) #for every element of newlist
Example 4, above, led me to my desired output, so that is fine.
But I am left feeling confused about when/how/why I should use 'and' vs '&'. Why do lists and numpy arrays behave differently with these operators?
Can anyone help me understand the difference between boolean and bitwise operations to explain why they handle lists and numpy.arrays differently?
I just want to make sure I continue to use these operations correctly going forward. Thanks a lot for the help!
Numpy version 1.7.1
python 2.7
References all inline with text.
EDITS
1) Thanks @delnan for pointing out that in my original examples I had an ambiguity that was masking my deeper confusion. I have updated my examples to clarify my question.
|
and tests whether both expressions are logically True while & (when used with True/False values) tests if both are True.
In Python, empty built-in objects are typically treated as logically False while non-empty built-ins are logically True. This facilitates the common use case where you want to do something if a list is empty and something else if the list is not. Note that this means that the list [False] is logically True:
>>> if [False]:
... print 'True'
...
True
So in Example 1, the first list is non-empty and therefore logically True, so the truth value of the and is the same as that of the second list. (In our case, the second list is non-empty and therefore logically True, but identifying that would require an unnecessary step of calculation.)
For example 2, lists cannot meaningfully be combined in a bitwise fashion because they can contain arbitrary unlike elements. Things that can be combined bitwise include: Trues and Falses, integers.
NumPy objects, by contrast, support vectorized calculations. That is, they let you perform the same operations on multiple pieces of data.
Example 3 fails because NumPy arrays (of length > 1) have no single truth value; this prevents confusion in vectorized logic.
Example 4 is simply a vectorized bitwise-and operation.
Bottom Line
If you are not dealing with arrays and are not performing math manipulations of integers, you probably want and.
If you have vectors of truth values that you wish to combine, use numpy with &.
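As a compact illustration of the bottom line, using the lists from the question (a sketch, not an exhaustive comparison):
import numpy as np

mylist1 = [True, True, True, False, True]
mylist2 = [False, True, False, True, False]

# 'and' works on whole objects: a non-empty list is truthy, so the result
# is simply the second operand, not an element-wise combination
print(mylist1 and mylist2)                    # [False, True, False, True, False]

# '&' on NumPy boolean arrays combines the values element-wise
print(np.array(mylist1) & np.array(mylist2))  # [False  True False False False]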
|
Drop rows with all zeros in pandas data frame
|
I can use pandas dropna() functionality to remove rows with some or all columns set as NA's. Is there an equivalent function for dropping rows with all columns having value 0?
P kt b tt mky depth
1 0 0 0 0 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
5 1.1 3 4.5 2.3 9.0
In this example, we would like to drop the first 4 rows from the data frame.
thanks!
|
One-liner. No transpose needed:
df.loc[~(df==0).all(axis=1)]
And for those who like symmetry, this also works...
df.loc[(df!=0).any(axis=1)]
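Applied to the example frame from the question, a quick sketch of the effect (column values reconstructed from the question's table):
import pandas as pd

df = pd.DataFrame({'kt':    [0, 0, 0, 0, 1.1],
                   'b':     [0, 0, 0, 0, 3.0],
                   'tt':    [0, 0, 0, 0, 4.5],
                   'mky':   [0, 0, 0, 0, 2.3],
                   'depth': [0, 0, 0, 0, 9.0]},
                  index=[1, 2, 3, 4, 5])

print(df.loc[(df != 0).any(axis=1)])
# only row 5 remains; the four all-zero rows are dropped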
|
jinja2 - how to put a block in an if statement?
|
I am trying to use an if to determine which block should fill my {% block content %}.
I have a base.html which has a default {% block content %}, and this template extends base.html. So I tried this:
{% extends "base.html" %}
{% if condition == True %}
{% block content %}
<div>blah blah blah blah</div>
{% endblock content %}
{% endif %}
I was expecting to see "blah blah blah blah" if the condition was true, and the default block if it wasn't.
But both times I got "blah blah blah blah".
Then I tried this one:
{% extends "base.html" %}
{% if condition == True %}
{% block content %}
<div>blah blah blah blah</div>
{% endblock content %}
{% else %}
{% block content %}
<div>The Default Thing</div>
{% endblock content %}
{% endif %}
and I got this error:
TemplateAssertionError: block 'content' defined twice
How can I put a block inside an if statement?
|
You cannot make a {% block %} conditional; once you use the tag, the block is always going to be filled in.
Put your conditional inside the block instead, and use super() to instruct Jinja to use the original contents of the block as defined in the template:
{% extends "base.html" %}
{% block content %}
    {% if condition %}
        <div>blah blah blah blah</div>
    {% else %}
        {{ super() }}
    {% endif %}
{% endblock content %}
|
Securely storing environment variables in GAE with app.yaml
|
I need to store API keys and other sensitive information in app.yaml as environment variables for deployment on GAE. The issue with this is that if I push app.yaml to GitHub, this information becomes public (not good). I don't want to store the info in a datastore as it does not suit the project. Rather, I'd like to swap out the values from a file that is listed in .gitignore on each deployment of the app.
Here is my app.yaml file:
application: myapp
version: 3
runtime: python27
api_version: 1
threadsafe: true
libraries:
- name: webapp2
  version: latest
- name: jinja2
  version: latest
handlers:
- url: /static
  static_dir: static
- url: /.*
  script: main.application
  login: required
  secure: always
  # auth_fail_action: unauthorized
env_variables:
  CLIENT_ID: ${CLIENT_ID}
  CLIENT_SECRET: ${CLIENT_SECRET}
  ORG: ${ORG}
  ACCESS_TOKEN: ${ACCESS_TOKEN}
  SESSION_SECRET: ${SESSION_SECRET}
Any ideas?
|
My approach is to store client secrets only within the App Engine app itself. The client secrets are neither in source control nor on any local computers. This has the benefit that any App Engine collaborator can deploy code changes without having to worry about the client secrets.
I store client secrets directly in Datastore and use Memcache for improved latency when accessing the secrets. The Datastore entities only need to be created once and will persist across future deploys. Of course, the App Engine console can be used to update these entities at any time.
There are two options to perform the one-time entity creation:
Use the App Engine Remote API interactive shell to create the entities.
Create an Admin only handler that will initialize the entities with dummy values. Manually invoke this admin handler, then use the App Engine console to update the entities with the production client secrets.
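A minimal sketch of this pattern (using the Python ndb and memcache APIs; the Settings model and get_secret helper are placeholder names, not part of any existing codebase):
from google.appengine.api import memcache
from google.appengine.ext import ndb

class Settings(ndb.Model):
    """One entity per secret, keyed by the secret's name."""
    value = ndb.StringProperty()

def get_secret(name):
    # check Memcache first to avoid a Datastore read on every request
    cached = memcache.get('settings:' + name)
    if cached is not None:
        return cached
    entity = ndb.Key(Settings, name).get()
    if entity is None:
        return None
    memcache.set('settings:' + name, entity.value)
    return entity.value

The handler code then reads, for example, get_secret('CLIENT_SECRET') instead of an environment variable.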
|
Pandas - The difference between join and merge
|
Suppose I have two DataFrames like so:
left = pd.DataFrame({'key1': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key2': ['foo', 'bar'], 'rval': [4, 5]})
I want to merge them, so I try something like this:
pd.merge(left, right, left_on='key1', right_on='key2')
And I'm happy
key1 lval key2 rval
0 foo 1 foo 4
1 bar 2 bar 5
But I'm trying to use the join method, which I've been led to believe is pretty similar.
left.join(right, on=['key1', 'key2'])
And I get this:
//anaconda/lib/python2.7/site-packages/pandas/tools/merge.pyc in _validate_specification(self)
406 if self.right_index:
407 if not ((len(self.left_on) == self.right.index.nlevels)):
--> 408 raise AssertionError()
409 self.right_on = [None] * n
410 elif self.right_on is not None:
AssertionError:
What am I missing?
|
I always use join on indices:
import pandas as pd
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]}).set_index('key')
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]}).set_index('key')
left.join(right, lsuffix='_l', rsuffix='_r')
lval rval
key
foo 1 4
bar 2 5
First, join expects a single, common column. However, that doesn't seem to help here. The following should, IMO, work. Unfortunately, it does not:
import pandas as pd
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
left.join(right, on=('key'), lsuffix='_l', rsuffix='_r')
key_l lval key_r rval
0 foo 1 NaN NaN
1 bar 2 NaN NaN
Specifying an outer join is even more confusing:
import pandas as pd
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
left.join(right, on='key', lsuffix='_l', rsuffix='_r', how='outer')
key key_l lval key_r rval
0 foo foo 1 NaN NaN
1 bar bar 2 NaN NaN
1 0 NaN NaN foo 4
1 1 NaN NaN bar 5
Look at that crazy index 0_o
My thoughts are that merge is for columns and join is for indices.
I don't know what else to say :/
|
Checking call order across multiple mocks
|
I have three functions that I'm trying to test the call order of.
Let's say that in module module.py I have the following
# module.py
def a(*args):
    pass  # do the first thing

def b(*args):
    pass  # do a second thing

def c(*args):
    pass  # do a third thing

def main_routine():
    a_args = ('a')
    b_args = ('b')
    c_args = ('c')
    a(*a_args)
    b(*b_args)
    c(*c_args)
I want to check that b is called after a, and before c. So getting a mock for each of a, b and c is easy:
# tests.py
@mock.patch('module.a')
@mock.patch('module.b')
@mock.patch('module.c')
def test_main_routine(c_mock, b_mock, a_mock):
    # test all the things here
Checking that each of the individial mocks are called is easy, too. How do I check the order of the calls relative to one another?
call_args_list won't work as it's maintained separately for each mock.
I've tried using a side effect to log each of the calls:
calls = []
def register_call(*args):
    calls.append(mock.call(*args))
    return mock.DEFAULT
a_mock.side_effect = register_call
b_mock.side_effect = register_call
c_mock.side_effect = register_call
But this only gives me the args that the mocks were called with, but not the actual mock that the call was made against. I can add a bit more logic:
# tests.py
from functools import partial
def register_call(*args, **kwargs):
    calls.append((kwargs.pop('caller', None), mock.call(*args, **kwargs)))
    return mock.DEFAULT
a_mock.side_effect = partial(register_call, caller='a')
b_mock.side_effect = partial(register_call, caller='b')
c_mock.side_effect = partial(register_call, caller='c')
And that seems to get the job done... Is there a better way though? It feels like there should already be something in the API that can do this that I'm missing.
|
Define a Mock manager and attach mocks to it via attach_mock(). Then check for the mock_calls:
@patch('module.a')
@patch('module.b')
@patch('module.c')
def test_main_routine(c, b, a):
    manager = Mock()
    manager.attach_mock(a, 'a')
    manager.attach_mock(b, 'b')
    manager.attach_mock(c, 'c')
    module.main_routine()
    expected_calls = [call.a('a'), call.b('b'), call.c('c')]
    assert manager.mock_calls == expected_calls
Just to test that it works, change the order of the function calls in the main_routine() function and see that it throws AssertionError.
See more examples at Tracking order of calls and less verbose call assertions
Hope that helps.
|
Python accessing the list while being sorted
|
Can I access a list while it is being sorted with list.sort()?
b = ['b', 'e', 'f', 'd', 'c', 'g', 'a']
f = 'check this'
def m(i):
    print i, b, f
    return None
b.sort(key=m)
print b
this returns
b [] check this
e [] check this
f [] check this
d [] check this
c [] check this
g [] check this
a [] check this
Note that the individual items of list b are sent to function m. But at m the list b is empty, although it can still see the variable f, which has the same scope as list b. Why does function m print b as []?
|
Looking at the source code (of CPython, maybe different behaviour for other implementations) the strange output of your script becomes obvious:
/* The list is temporarily made empty, so that mutations performed
* by comparison functions can't affect the slice of memory we're
* sorting (allowing mutations during sorting is a core-dump
* factory, since ob_item may change).
*/
saved_ob_size = Py_SIZE(self);
saved_ob_item = self->ob_item;
saved_allocated = self->allocated;
Py_SIZE(self) = 0;
The comment says it all: When you begin sorting, the list is emptied. Well, it is "empty" in the eye of an external observer.
I quite like the term "core-dump factory".
Compare also:
b = ['b','e','f','d','c','g','a']
f = 'check this'
def m(i):
    print i, b, f
    return None
b = sorted(b, key= m)
print b
|
Is it pythonic to use generators to write header and body of a file?
|
If I was to write a file with this content:
#You have been defeated!
#It's merely a flesh wound!
We are the knights who say Ni!
We are the knights who say Ni!
We are the knights who say Ni!
Would it then be very non-pythonic to do it with a generator using send? I have never seen generators used like this elsewhere.
def write(file, header):
    with open(file, 'w') as f:
        f.write(header)
        line = (yield)
        while True:
            f.write(line)
            line = (yield)
    return
file='holygrail.txt'
header="#You have been defeated!\n#It's merely a flesh wound!\n"
generator = write(file,header)
generator.send(None)
for i in range(3):
    generator.send('We are the knights who say Ni!\n')
generator.close()
I am asking because the method above would be hugely beneficial to me, instead of opening multiple different file streams in a contextlib stack. I would not have to use the contextlib module at all if I write my files like this.
I have never asked a question like this before, and I don't know whether it belongs on Stack Overflow or not.
|
I like the creativity of your solution, but my subjective opinion is that using contextlib.ExitStack() will look cleaner and be more readable than using the generator, since each generator would need to be primed with generator.send(None) and explicitly closed.
By the way, (even though I think contextlib will lead to shorter, more readable code), write could be simplified a little bit:
def write(file, header):
    with open(file, 'w') as f:
        f.write(header)
        while True:
            line = (yield)
            f.write(line)
    return
Note you only need one line = (yield) instead of two.
Also, instead of priming the generator with generator.send(None) you could use the coroutine decorator:
def coroutine(func):
    """ http://www.python.org/dev/peps/pep-0342/ """
    def wrapper(*args, **kw):
        gen = func(*args, **kw)
        gen.send(None)
        return gen
    return wrapper
This is a commonly understood idiom (PEP0342, David Beazley talk) for turning a generator into a coroutine. So decorating your generator with it would also serve the purpose of advertising that write is a coroutine.
|
Why is one class variable not defined in list comprehension but another is?
|
I just read the answer to this question: Accessing class variables from a list comprehension in the class definition
It helps me to understand why the following code results in NameError: name 'x' is not defined:
class A:
    x = 1
    data = [0, 1, 2, 3]
    new_data = [i + x for i in data]
    print(new_data)
The NameError occurs because x is not defined in the special scope for list comprehension. But I am unable to understand why the following code works without any error.
class A:
    x = 1
    data = [0, 1, 2, 3]
    new_data = [i for i in data]
    print(new_data)
I get the output [0, 1, 2, 3]. But I was expecting this error: NameError: name 'data' is not defined, because I expected that, just as the name x is not defined in the list comprehension's scope in the previous example, the name data would not be defined there either.
Could you please help me to understand why x is not defined in the list comprehension's scope but data is?
|
data is the source of the list comprehension; it is the one parameter that is passed to the nested scope created.
Everything in the list comprehension is run in a separate scope (as a function, basically), except for the iterable used for the left-most for loop. You can see this in the byte code:
>>> def foo():
... return [i for i in data]
...
>>> dis.dis(foo)
2 0 LOAD_CONST 1 (<code object <listcomp> at 0x105390390, file "<stdin>", line 2>)
3 LOAD_CONST 2 ('foo.<locals>.<listcomp>')
6 MAKE_FUNCTION 0
9 LOAD_GLOBAL 0 (data)
12 GET_ITER
13 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
16 RETURN_VALUE
The <listcomp> code object is called like a function, and iter(data) is passed in as the argument (CALL_FUNCTION is executed with 1 positional argument, the GET_ITER result).
The <listcomp> code object looks for that one argument:
>>> dis.dis(foo.__code__.co_consts[1])
2 0 BUILD_LIST 0
3 LOAD_FAST 0 (.0)
>> 6 FOR_ITER 12 (to 21)
9 STORE_FAST 1 (i)
12 LOAD_FAST 1 (i)
15 LIST_APPEND 2
18 JUMP_ABSOLUTE 6
>> 21 RETURN_VALUE
The LOAD_FAST call refers to the first and only positional argument passed in; it is unnamed here because there never was a function definition to give it a name.
Any additional names used in the list comprehension (or set or dict comprehension, or generator expression, for that matter) are either locals, closures or globals, not parameters.
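As a small sketch of that last point: module-level names (globals) are visible inside the comprehension's hidden scope, while class-level names are not, which is exactly why x failed in the question:
x = 1                                # module-level: visible inside the comprehension

class A:
    y = 2                            # class-level: NOT visible inside the comprehension
    data = [0, 1, 2, 3]
    ok = [i + x for i in data]       # works, x is found as a global
    # bad = [i + y for i in data]    # would raise NameError: name 'y' is not defined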
|
"'cc' failed with exit status 1" error when install python library
|
Like many others, I'm having issues installing a python library (downloaded as a tar, then extracted).
rodolphe-mbp:python-Levenshtein-0.11.2 Rodolphe$ sudo python setup.py install
running install
running bdist_egg
running egg_info
writing requirements to python_Levenshtein.egg-info/requires.txt
writing python_Levenshtein.egg-info/PKG-INFO
writing namespace_packages to python_Levenshtein.egg-info/namespace_packages.txt
writing top-level names to python_Levenshtein.egg-info/top_level.txt
writing dependency_links to python_Levenshtein.egg-info/dependency_links.txt
writing entry points to python_Levenshtein.egg-info/entry_points.txt
reading manifest file 'python_Levenshtein.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*' under directory 'docs'
warning: no previously-included files matching '*pyc' found anywhere in distribution
warning: no previously-included files matching '.project' found anywhere in distribution
warning: no previously-included files matching '.pydevproject' found anywhere in distribution
writing manifest file 'python_Levenshtein.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.9-intel/egg
running install_lib
running build_ext
building 'Levenshtein' extension
cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c Levenshtein.c -o build/temp.macosx-10.9-intel-2.7/Levenshtein.o
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
error: command 'cc' failed with exit status 1
As suggested elsewhere, I tried entering in terminal "ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future sudo python setup.py install", but no success.
Is there a way around this issue that seems to have appeared with xcode 5.1?
|
Run these two lines in your shell before you build:
export CFLAGS=-Qunused-arguments
export CPPFLAGS=-Qunused-arguments
Those exports tell the compiler to ignore unused arguments rather than complaining about them.
The reason seems to be that Python compiles modules using the options it was built with, except one of those options no longer works in Mavericks:
the clang 3.4 Apple is shipping defaults to erroring on unknown flags, but CPython builds modules using the same set of flags it was compiled with originally.
(from: http://stackoverflow.com/a/22315129/65295)
Lots of people are running into this:
Ansible Installation -clang: error: unknown argument: '-mno-fused-madd'
Can't install python mysql library on Mac Mavericks
clang error: unknown argument: '-mno-fused-madd' (python package installation failure)
|
how to check the dtype of a column in python pandas
|
I need to use different functions to treat numeric columns and string columns. What I am doing now is really dumb:
allc = list((agg.loc[:, (agg.dtypes==np.float64)|(agg.dtypes==np.int)]).columns)
for y in allc:
    treat_numeric(agg[y])

allc = list((agg.loc[:, (agg.dtypes!=np.float64)&(agg.dtypes!=np.int)]).columns)
for y in allc:
    treat_str(agg[y])
Is there a more elegant way to do this? E.g.
for y in agg.columns:
    if (dtype(agg[y]) == 'string'):
        treat_str(agg[y])
    elif (dtype(agg[y]) != 'string'):
        treat_numeric(agg[y])
can anyone advise?
thanks a lot!
|
You can access a dtype of a column with agg[y].dtype:
for y in agg.columns:
    if (agg[y].dtype == np.float64 or agg[y].dtype == np.int64):
        treat_numeric(agg[y])
    else:
        treat_str(agg[y])
|
What is the difference between the declarative_base() and db.Model?
|
The quickstart tutorial for the Flask-SQLAlchemy plugin instructs users to create table models inheriting the db.Model class, e.g.
app = Flask(__name__)
db = SQLAlchemy(app)

class Users(db.Model):
    __tablename__ = 'users'
    ...
However, the SQLAlchemy tutorial and the bottle-SQLAlchemy README both suggest that table models inherit a Base instantiated from declarative_base().
Base = declarative_base()
class Users(Base):
    __tablename__ = 'users'
    ...
What is the difference between these two approaches?
|
Looking in the Flask-SQLAlchemy source code the db.Model class is initialized as follows:
self.Model = self.make_declarative_base()
And here is the make_declarative_base() method:
def make_declarative_base(self):
    """Creates the declarative base."""
    base = declarative_base(cls=Model, name='Model',
                            metaclass=_BoundDeclarativeMeta)
    base.query = _QueryProperty(self)
    return base
The _BoundDeclarativeMeta metaclass is a subclass of SQLAlchemy's DeclarativeMeta, it just adds support for computing a default value for __table__ (the table name) and also to handle binds.
The base.query property enables Flask-SQLAlchemy based models to access a query object as Model.query instead of SQLAlchemy's session.query(Model).
The _QueryProperty query class is also subclassed from SQLAlchemy's query. The Flask-SQLAlchemy subclass adds three additional query methods that do not exist in SQLAlchemy: get_or_404(), first_or_404() and paginate().
I believe these are the only differences.
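To make the practical difference concrete, here is a small sketch of the two query styles side by side (model and variable names are placeholders):
# plain SQLAlchemy with declarative_base(): queries go through a session you manage
user = session.query(Users).filter_by(name='alice').first()

# Flask-SQLAlchemy with db.Model: the model carries a query property bound to the
# app's session, plus the extra helpers mentioned above
user = Users.query.filter_by(name='alice').first()
user_or_404 = Users.query.get_or_404(user_id)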
|
how to multiply multiple columns by a column in Pandas
|
I would like to have:
df[['income_1', 'income_2']] * df['mtaz_proportion']
return those columns multiplied by df['mtaz_proportion']
so that I can set
df[['mtaz_income_1', 'mtaz_income_2']] =
df[['income_1', 'income_2']] * df['mtaz_proportion']
but instead I get:
income_1 income_2 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ...
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ...
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN ...
etc...
What simple thing am I missing?
Thank you!
|
Use the multiply method and set axis="index":
df[["A", "B"]].multiply(df["C"], axis="index")
|
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
|
I get the following error trying to install Scrapy in a Mavericks OS.
I have the command line tools and X11 installed. I don't really know what's going on and I haven't found the same error browsing through the Web. I think it might be related to some change in Xcode 5.1.
Thanks for the answers!
This is part of the command output:
$pip install scrapy
.
.
.
.
Downloading/unpacking cryptography>=0.2.1 (from pyOpenSSL->scrapy)
Downloading cryptography-0.3.tar.gz (208kB): 208kB downloaded
Running setup.py egg_info for package cryptography
OS/X: confusion between 'cc' versus 'gcc' (see issue 123)
will not use '__thread' in the C code
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/Users/agonzamart/.virtualenvs/Parser/build/cryptography/setup.py", line 156, in <module>
"test": PyTest,
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1118, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'cc' failed with exit status 1
Complete output from command python setup.py egg_info:
OS/X: confusion between 'cc' versus 'gcc' (see issue 123)
will not use '__thread' in the C code
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/Users/agonzamart/.virtualenvs/Parser/build/cryptography/setup.py", line 156, in <module>
"test": PyTest,
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
self.fetch_build_eggs(attrs.pop('setup_requires'))
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
parse_requirements(requires), installer=self.fetch_build_egg
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/pkg_resources.py", line 618, in resolve
dist = best[req.key] = env.best_match(req, self, installer)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/pkg_resources.py", line 862, in best_match
return self.obtain(req, installer) # try and download/install
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/pkg_resources.py", line 874, in obtain
return installer(requirement)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/dist.py", line 339, in fetch_build_egg
return cmd.easy_install(req)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 623, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 653, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 849, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1130, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/Users/agonzamart/.virtualenvs/Parser/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1118, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'cc' failed with exit status 1
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /Users/agonzamart/.virtualenvs/Parser/build/cryptography
|
It is due to a change in clang defaults in Xcode 5.1 and Apple not noticing that it would break extension module builds using the system Python. One workaround is to define the following environment variables first:
export CFLAGS=-Qunused-arguments
export CPPFLAGS=-Qunused-arguments
UPDATE [2014-05-16]: As expected, Apple has fixed this problem with updated system Pythons (2.7, 2.6, and 2.5) in OS X 10.9.3 so the workaround is no longer necessary when using the latest Mavericks and Xcode 5.1+. However, as of now, the workaround is still required for OS X 10.8.x (Mountain Lion, currently 10.8.5) if you are using Xcode 5.1+ there.
|
Scheduling Python Script to run every hour accurately
|
Before I ask, Cron Jobs and Task Scheduler will be my last options, this script will be used across Windows and Linux and I'd prefer to have a coded out method of doing this than leaving this to the end user to complete.
Is there a library for Python that I can use to schedule tasks? I will need to run a function once every hour; however, over time, if I run a script once every hour and use .sleep, "once every hour" will drift to a different part of the hour than the previous day, due to the delay inherent in executing the script and/or function.
What is the best way to schedule a function to run at a specific time of day (more than once) without using a Cron Job or scheduling it with Task Scheduler?
Or if this is not possible, I would like your input as well.
APScheduler fit my needs exactly.
Version < 3.0
import datetime
import time
from apscheduler.scheduler import Scheduler
# Start the scheduler
sched = Scheduler()
sched.daemonic = False
sched.start()
def job_function():
    print("Hello World")
    print(datetime.datetime.now())
    time.sleep(20)
# Schedules job_function to be run once each minute
sched.add_cron_job(job_function, minute='0-59')
out:
>Hello World
>2014-03-28 09:44:00.016492
>Hello World
>2014-03-28 09:45:00.014110
Version > 3.0
(From Animesh Pandey's answer below)
from apscheduler.schedulers.blocking import BlockingScheduler
sched = BlockingScheduler()
@sched.scheduled_job('interval', seconds=10)
def timed_job():
    print('This job is run every 10 seconds.')

@sched.scheduled_job('cron', day_of_week='mon-fri', hour=10)
def scheduled_job():
    print('This job is run every weekday at 10am.')
sched.configure(options_from_ini_file)
sched.start()
|
Maybe this can help: Advanced Python Scheduler
Here's a small piece of code from their documentation:
from apscheduler.schedulers.blocking import BlockingScheduler
def some_job():
    print "Decorated job"
scheduler = BlockingScheduler()
scheduler.add_job(some_job, 'interval', hours=1)
scheduler.start()
|
Plot topics with bokeh or matplotlib
|
I'm trying to plot topic visualization from a model.
I want to do something like bokeh covariance implementation.
My data is:
data 1: index, topics.
data 2: index, topics, weights (used for color).
where a topic is just a set of words.
How do I give this data to Bokeh to plot it? From the example, the data handling is not intuitive.
With matplotlib, it looks like this.
Obviously, it is not visually helpful for seeing which topic corresponds to each circle.
Here is my matplotlib code:
x = []
y = []
area = []
for row in joined:
    x.append(row['index'])
    y.append(row['index'])
    #weight.append(row['score'])
    area.append(np.pi * (15 * row['score'])**2)
scale_values = 1000
plt.scatter(x, y, s=scale_values*np.array(area), alpha=0.5)
plt.show()
Any idea/suggestions?
|
UPDATE: The answer below is still correct in all major points, but the API has changed slightly to be more explicit as of Bokeh 0.7. In general, things like:
rect(...)
should be replaced with
p = figure(...)
p.rect(...)
Here are the relevant lines from the Les Mis examples, simplified to your case. Let's take a look:
# A "ColumnDataSource" is like a dict, it maps names to columns of data.
# These names are not special we can call the columns whatever we like.
source = ColumnDataSource(
data=dict(
x = [row['name'] for row in joined],
y = [row['name'] for row in joined],
color = list_of_colors_one_for_each_row,
)
)
# We need a list of the categorical coordinates
names = list(set(row['name'] for row in joined))
# rect takes center coords (x,y) and width and height. We will draw
# one rectangle for each row.
rect('x', 'y', # use the 'x' and 'y' fields from the data source
0.9, 0.9, # use 0.9 for both width and height of each rectangle
color = 'color', # use the 'color' field to set the color
source = source, # use the data source we created above
x_range = names, # sequence of categorical coords for x-axis
y_range = names, # sequence of categorical coords for y-axis
)
A few notes:
For numeric data x_range and y_range usually get supplied automatically. We have to give them explicitly here because we are using categorical coordinates.
You can order the list of names for x_range and y_range however you like, this is the order they are displayed on the plot axis.
I'm assuming you want to use categorical coordinates. :) This is what the Les Mis example does. See the bottom of this answer if you want numerical coordinates.
For more info there is a Bokeh tutorial at http://bokeh.pydata.org/tutorial/index.html
Also, the Les Mis example was a little more complicated (it had a hover tool) which is why we created a ColumnDataSource by hand. If you just need a simple plot you can probably skip creating a data source yourself, and just pass the data in to rect directly:
names = list(set(row['name'] for row in joined))
rect(names, # x (categorical) coordinate for each rectangle
names, # y (categorical) coordinate for each rectangle
0.9, 0.9, # use 0.9 for both width and height of each rectangle
color = some_colors, # color for each rect
x_range = names, # sequence of categorical coords for x-axis
y_range = names, # sequence of categorical coords for y-axis
)
Another note: this only plots rectangles on the diagonal, where the x- and y-coordinates are the same. That seems to be what you want from your description. But just for completeness, it's possible to plot rectangles that have different x- and y-coordinates. The Les Mis example does this.
Finally, maybe you don't actually want categorical axes? If you just want to use the numeric index of the coordinates, its even simpler:
inds = [row['index'] for row in joined]
rect(inds, # x-coordinate for each rectangle
inds, # y-coordinate for each rectangle
0.9, 0.9, # use 0.9 for both width and height of each rectangle
color = some_colors, # color for each rect
)
Edit: Here is a complete runnable example that uses numeric coords:
from bokeh.plotting import *
output_file("foo.html")
inds = [2, 5, 6, 8, 9]
colors = ["red", "orange", "blue", "green", "#4488aa"]
rect(inds, inds, 1.0, 1.0, color=colors)
show()
and here is one that uses the same values as categorical coords:
from bokeh.plotting import *
output_file("foo.html")
inds = [str(x) for x in [2, 5, 6, 8, 9]]
colors = ["red", "orange", "blue", "green", "#4488aa"]
rect(inds, inds, 1.0, 1.0, color=colors, x_range=inds, y_range=inds)
show()
|
Import order coding standard
|
PEP8 suggests that:
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
Is there a way to check if the standard is violated anywhere in the package using static code analysis tools, like pylint, pyflakes, pychecker, pep8?
Example of violation:
from my_package import my_module
from django.db import models
import os
Correct way to import:
import os
from django.db import models
from my_package import my_module
|
Found it! (accidentally, while reading "Hacker's guide to python")
OpenStack Hacking Style Checks project named hacking introduces several unique flake8 extensions. There is hacking_import_groups among them (related commit).
Example:
requirements
tox
flake8
hacking (from the master branch):
$ git clone https://github.com/openstack-dev/hacking.git
$ cd hacking/
$ python setup.py install
files used in the example
tox.ini (we need to tell flake8 that we want to use a custom check)
[hacking]
local-check = hacking.core.hacking_import_groups
UPD: with the newest version of hacking the path to the check changed, now it is hacking.checks.imports.hacking_import_groups.
test.py (target of the check)
import requests
import sys
from my_module import print_smth
print_smth(requests.get('https://google.com'))
print_smth(sys.version)
my_module.py (local import used by test.py)
def print_smth(smth):
print smth
Then, if I run flake8 against test.py:
$ flake8 test.py
test.py:2:1: H305 imports not grouped correctly (requests: third-party, sys: stdlib)
test.py:3:1: H305 imports not grouped correctly (sys: stdlib, my_module.print_smth: project)
test.py:3:1: H306 imports not in alphabetical order (sys, my_module.print_smth)
Then, if I group the imports in the correct order following PEP8:
import sys
import requests
from my_module import print_smth
print_smth(requests.get('https://google.com'))
print_smth(sys.version)
No warnings found:
$ flake8 test.py
$
Hope this will help somebody in the future.
|
Django test coverage vs code coverage
|
I've successfully installed and configured django-nose with coverage
Problem is that if I just run coverage for ./manage.py shell and exit out of that shell - it shows me 37% code coverage. I fully understand that executed code doesn't mean tested code. My only question is -- what now?
What I'm envisioning is being able to import all the python modules and "settle down" before executing any tests, and directly communicating with coverage saying "Ok, start counting reached code here."
Ideally this would be done by nose essentially resetting the "touched" lines of code right before executing each test suite.
I don't know where to start looking/developing. I've searched online and haven't found anything fruitful. Any help/guidelines would be greatly appreciated.
P.S.
I tried executing something like this:
DJANGO_SETTINGS_MODULE=app.settings_dev coverage run app/tests/gme_test.py
And it worked (showed 1% coverage) but I can't figure out how to do this for the entire app
Edit: Here's my coverage config:
[run]
source = .
branch = False
timid = True
[report]
show_missing = False
include = *.py
omit =
tests.py
*_test.py
*_tests.py
*/site-packages/*
*/migrations/*
[html]
title = Code Coverage
directory = local_coverage_report
|
since you use django-nose you have two options on how to run coverage. The first was already pointed out by DaveB:
coverage run ./manage.py test myapp
The above actually runs coverage which then monitors all code executed by the test command.
But then, there is also a nose coverage plugin included by default in the django-nose package (http://nose.readthedocs.org/en/latest/plugins/cover.html). You can use it like this:
./manage.py test myapp --with-coverage
(There are also some additional options like which modules should be covered, whether to include an html report or not etc . These are all documented in the above link - you can also type ./manage.py test --help for some quick info).
Running the nose coverage plugin will result in coverage running after the django bootstrapping code is executed and therefore the corresponding code will not be reported as covered.
Most of the code you see reported as covered when running coverage the original way, are import statements, class definitions, class members etc. As python evaluates them during import time, coverage will naturally mark them as covered. However, running the nose plugin will not report bootstrapping code as covered since the test runner starts after the django environment is loaded. Of course, a side effect of this is you can never achieve 100% coverage (...or close :)) as your global scope statements will never get covered.
After switching back and forth and playing around with coverage options, I now have ended up using coverage like this:
coverage run --source=myapp,anotherapp --omit=*/migrations/* ./manage.py test
so that a) coverage will report import statements, class member definitions etc as covered (which is actually the truth - this code was successfully imported and interpreted) and b) it will only cover my code and not django code, or any other third-party app I use; the coverage percentage will reflect how well my project is covered. Hope this helps!
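If you prefer to keep the command line short, the same source/omit selection can also live under [run] in the coverage config from the question, so that a plain coverage run ./manage.py test picks it up (a sketch, assuming your apps are named myapp and anotherapp):
[run]
source = myapp,anotherapp
omit = */migrations/*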
|
Repeatedly failing to install scrapy and lxml
|
I'd previously used Anaconda to handle Python, but I'm starting to work with virtual environments.
I set up virtualenv and virtualenvwrapper, and have been trying to add modules, specifically scrapy and lxml, for a project I want to try.
Each time I pip install, I hit an error.
For scrapy:
File "/home/philip/Envs/venv/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1003, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
---------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/philip/Envs/venv/build/cryptography
Storing debug log for failure in /home/philip/.pip/pip.log
For lxml:
In file included from src/lxml/lxml.etree.c:346:0:
/home/philip/Envs/venv/build/lxml/src/lxml/includes/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
include "libxml/xmlversion.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up... Command /home/philip/Envs/venv/bin/python -c "import setuptools, tokenize;__file__='/home/philip/Envs/venv/build/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-zIsPdl-record/install-record.txt
--single-version-externally-managed --compile --install-headers /home/philip/Envs/venv/include/site/python2.7 failed with error code 1 in /home/philip/Envs/venv/build/lxml Storing debug log for failure in /home/philip/.pip/pip.log
I tried to install it following scrapy's documentation, but scrapy was still not listed when I called for python's installed modules.
Any ideas? Thanks--really appreciate it!
I'm on Ubuntu 13.10 if it matters. Other modules I've tried have installed fine (though I've only gone for a handful).
|
I had the same problem in Ubuntu 14.04. I've solved it with the instructions of the page linked by @jdigital and the openssl-dev library pointed out by @user3115915. Just to help others:
sudo apt-get install libxslt1-dev libxslt1.1 libxml2-dev libxml2 libssl-dev
sudo pip install scrapy
|
"ImportError: No module named httplib2" even after installation
|
I'm having a hard time understanding why I get ImportError: No module named httplib2 after making sure httplib2 is installed. See below:
$ which -a python
/usr/bin/python
/usr/local/bin/python
$ pip -V
pip 1.4.1 from /usr/local/lib/python2.7/site-packages/pip-1.4.1-py2.7.egg (python 2.7)
$ pip list
google-api-python-client (1.2)
httplib2 (0.8)
pip (1.4.1)
pudb (2013.5.1)
Pygments (1.6)
setuptools (1.3.2)
wsgiref (0.1.2)
$ pip install httplib2
Requirement already satisfied (use --upgrade to upgrade): httplib2 in /usr/local/lib/python2.7/site-packages
Cleaning up...
$ python
Python 2.7.5 (default, Sep 12 2013, 21:33:34)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import httplib2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named httplib2
I've also done
$ find / | grep httplib2
/usr/local/lib/python2.7/site-packages/httplib2
/usr/local/lib/python2.7/site-packages/httplib2/__init__.py
[... edited for brevity]
PLUMBING! >shakes fist at heavens<
|
If there are multiple Python instances (2 & 3), try different pip, for example:
Python 2:
pip2 install httplib2 --upgrade
Python 3:
pip3 install httplib2 --upgrade
To check what's installed and where, try:
pip list
pip2 list
pip3 list
Then make sure you're using the right Python instance (as suggested in the other answer).
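With reasonably recent pip versions you can also sidestep the whole which-pip-belongs-to-which-python question by invoking pip through the exact interpreter you intend to run, so the package can only land in that interpreter's site-packages:
$ /usr/local/bin/python -m pip install httplib2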
|
PyCharm Unresolved reference 'print'
|
I started to learn python language, and decided to try out PyCharm IDE, which looks really nice. But, whenever I write print it says "Unresolved reference 'print'". I can run the program, but this red-underline is really annoying. How can I fix this?
|
I have had the same problem as you, even though I configured Python 3.4.0 as the project's interpreter and all print's in the code were Python 3 compliant function calls.
I got it sorted out by doing this in PyCharm:
File -> Invalidate Caches / Restart... -> Invalidate and Restart
|
ImportError: cannot import name inplace_column_scale
|
Using Python 2.7 with the scikit-learn 0.14 package. It runs well on some examples from the user guide except the Linear Models.
Traceback (most recent call last):
File "E:\P\plot_ols.py", line 28, in <module>
from sklearn import datasets, linear_model
File "C:\Python27\lib\site-packages\sklearn\linear_model\__init__.py", line 12, in <module>
from .base import LinearRegression
File "C:\Python27\lib\site-packages\sklearn\linear_model\base.py", line 29, in <module>
from ..utils.sparsefuncs import mean_variance_axis0, inplace_column_scale
ImportError: cannot import name inplace_column_scale
Thank you~
|
I was able to fix this by going to my python folder and deleting the file:
python27\Lib\site-packages\sklearn\utils\sparsefuncs.pyd
My guess is that the problem was:
An older version of scikit-learn implemented sparsefuncs as a windows DLL
The current version implements it as a python file
If you install a new version on top of an old version it does not delete the old DLL
When you try to import, Python uses the pyd in preference to the py implementation
But the old implementation did not include this function
This suggests that there might be bigger problems caused by installing a new version and it might be wise to delete the whole sklearn directory before reinstalling the new version.
|
Compact way to assign values by slicing list in Python
|
I have the following list
bar = ['a','b','c','x','y','z']
What I want to do is to assign 1st, 4th and 5th values of bar into v1,v2,v3,
is there a more compact way to do it than this:
v1, v2, v3 = [bar[0], bar[3], bar[4]]
Because in Perl you can do something like this:
my($v1, $v2, $v3) = @bar[0,3,4];
|
You can use operator.itemgetter:
>>> from operator import itemgetter
>>> bar = ['a','b','c','x','y','z']
>>> itemgetter(0, 3, 4)(bar)
('a', 'x', 'y')
So for your example you would do the following:
>>> v1, v2, v3 = itemgetter(0, 3, 4)(bar)
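If you'd rather not import anything, a plain generator expression over the indices does the same job:
>>> v1, v2, v3 = (bar[i] for i in (0, 3, 4))
>>> v1, v2, v3
('a', 'x', 'y')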
|
Relevance of typename in namedtuple
|
from collections import namedtuple
Point=namedtuple('whatsmypurpose',['x','y'])
p=Point(11,22)
print(p)
Output:
whatsmypurpose(x=11,y=22)
What's the relevance/use of 'whatsmypurpose'?
|
namedtuple() is a factory function for tuples. Here, whatsmypurpose is the type name. When you create a named tuple, a class with this name (whatsmypurpose) gets created internally.
You can notice this by using the verbose argument like:
Point=namedtuple('whatsmypurpose',['x','y'], verbose=True)
Also you can try type(p) to verify this.
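For example, in an interactive session the generated class (and therefore the repr of its instances) carries that name:
>>> type(p)
<class '__main__.whatsmypurpose'>
>>> type(p).__name__
'whatsmypurpose'
The name also matters for pickling, which looks the class up by name in its module, so in practice the typename should usually match the name you assign the class to (Point here).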
|
python mysql.connector DictCursor?
|
In Python mysqldb I could declare a cursor as a dictionary cursor like this:
cursor = db.cursor(MySQLdb.cursors.DictCursor)
This would enable me to reference columns in the cursor loop by name like this:
for row in cursor: # Using the cursor as iterator
city = row["city"]
state = row["state"]
Is it possible to create a dictionary cursor using this MySQL connector?
http://dev.mysql.com/doc/connector-python/en/connector-python-example-cursor-select.html
Their example only returns a tuple.
I imagine the creators of MySQL would eventually do this for us?
|
A possible solution involves subclassing the MySQLCursor class like this:
class MySQLCursorDict(mysql.connector.cursor.MySQLCursor):
def _row_to_python(self, rowdata, desc=None):
row = super(MySQLCursorDict, self)._row_to_python(rowdata, desc)
if row:
return dict(zip(self.column_names, row))
return None
db = mysql.connector.connect(user='root', database='test')
cursor = db.cursor(cursor_class=MySQLCursorDict)
Now the _row_to_python() method returns a dictionary instead of a tuple.
I found this on the mysql forum, and I believe it was posted by the mysql developers themselves. I hope they add it to the mysql connector package some day.
I tested this and it does work.
UPDATE: As mentioned below by Karl M.W... this subclass is no longer needed in v2 of the mysql.connector. The mysql.connector has been updated and now you can use the following option to enable a dictionary cursor.
cursor = db.cursor(dictionary=True)
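With that option the loop from the question works essentially unchanged (a sketch, assuming a table with city and state columns):
import mysql.connector
cnx = mysql.connector.connect(user='root', database='test')
cursor = cnx.cursor(dictionary=True)
cursor.execute("SELECT city, state FROM addresses")
for row in cursor:
    print(row["city"], row["state"])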
|
auto.arima() equivalent for python
|
I am trying to predict weekly sales using ARMA/ARIMA models. I could not find a function for tuning the order (p,d,q) in statsmodels. Currently R has a function auto.arima() which will tune the (p,d,q) parameters.
How do I go about choosing the right order for my model? Are there any libraries available in python for this purpose?
|
You can implement a number of approaches:
ARIMAResults include aic and bic. By their definition, (see here and here), these criteria penalize for the number of parameters in the model. So you may use these numbers to compare the models. Also scipy has optimize.brute which does grid search on the specified parameters space. So a workflow like this should work:
def objfunc(order, exog, endog):
from statsmodels.tsa.arima_model import ARIMA
fit = ARIMA(endog, order, exog).fit()
return fit.aic  # aic is an attribute of the fit results, not a method
from scipy.optimize import brute
grid = (slice(1, 3, 1), slice(1, 3, 1), slice(1, 3, 1))
brute(objfunc, grid, args=(exog, endog), finish=None)
Make sure you call brute with finish=None.
You may obtain pvalues from ARIMAResults. So a sort of step-forward algorithm is easy to implement where the degree of the model is increased across the dimension which obtains lowest p-value for the added parameter.
Use ARIMAResults.predict to cross-validate alternative models. The best approach would be to keep the tail of the time series (say most recent 5% of data) out of sample, and use these points to obtain the test error of the fitted models.
|
What is the safest way to removing Python framework files that are located in different place than Brew installs
|
I want to remove a Python installed in a location that brew complains about when I run brew doctor
Warning: Python is installed at /Library/Frameworks/Python.framework
What is the best way?
Here are more details / research:
The message from the brew git website:
Important: If you choose to install a Python which isn't either of
these two (system Python or brewed Python), the Homebrew team can only
provide limited support.
I want to make sure I am not missing anything before I remove the extra Python library.
Specifically, I want to remove the entire Python.framework, those files located here.
/Library/Frameworks/Python.framework/
I have Python 2.7.5 installed natively with Mavericks that I'll use instead - located in the path below. (The difference being -- I believe -- that it's put in the root /System folder instead of the root /Library folder.) The good, native location is here:
/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
and I already installed python 3.x with brew install python3, which put it here:
==> Summary
/usr/local/Cellar/python3/3.4.0 : 5076 files, 85M, built in 112 seconds
Can I just delete these files or are there consequences (beyond having to relink)?
/Library/Frameworks/Python.framework/
Here are steps to remove python from a stackoverflow question in 2010 and a similar question here
Is that approach still sound? Is there anything I should be aware of?
|
I'll self-answer. I went through the steps and it's straightforward. PyCharm (the IDE I use) automatically found the new libraries too. Here are the steps I followed to remove the extra Python libraries on Mavericks that were not native to it and not installed by brew.
Step 1:
The native Python 2.7.x version lives here /System/Library/Frameworks/Python.framework/Versions/2.7 (or 2.6, etc), so you can remove any Python that got installed elsewhere.
sudo rm -rf /Library/Frameworks/Python.framework/
Or, according to this article, you should brew install both python 2.7 and python 3.x,
and avoid using system python in Mavericks.
Step 2:
Remove python in Applications directory (the one where all your apps are).
cd into folder /Applications and ls | grep Python to see what have.
Then remove:
sudo rm -rf "Python 3.3"
Step 3:
$ brew prune
sample output:
Pruned 0 dead formula Pruned 46 symbolic links from /usr/local
Step 4: Run steps recommended by brew doctor
sudo brew link python3
Sample output
Linking /usr/local/Cellar/python3/3.4.0... 19 symlinks created
|
How to make nosetests use python3
|
I try to use nosetests
⯠nosetests '/pathTo/test'
but it uses python 2.7 for my tests:
sys.version_info(major=2, minor=7, micro=5, releaselevel='final', serial=0)
So some of them fail, because they were written in Python 3.3.
I work it around and installed virtual environment:
pyvenv-3.3 py3env
Activated it:
source ~/py3env/bin/activate
Check the Python version in the virtual environment:
⯠python --version â
Python 3.3.3
(py3env)
Ok.
But nosetests still uses Python 2.7 even in the virtual environment:
sys.version_info(major=2, minor=7, micro=5, releaselevel='final', serial=0)
So my tests fails.
How to make nose use python3?
|
In Python 3.4 and higher versions: in order to make nose use python3 just run ...
python3 -m "nose"
... in the target directory with the tests.
The environment setups are not required.
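Another option, assuming the py3env virtualenv from the question is active, is to install nose into that environment so that the nosetests script itself is bound to the venv's Python 3:
(py3env) $ pip install nose
(py3env) $ nosetests /pathTo/test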
|
How to avoid HTTP error 429 (Too Many Requests) python
|
I am trying to use Python to log in to a website and gather information from several webpages, and I get the following error:
Traceback (most recent call last):
File "extract_test.py", line 43, in <module>
response=br.open(v)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
return self._mech_open(url, data, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 429: Unknown Response Code
I used time.sleep() and it works, but it seems unintelligent and unreliable, is there any other way to dodge this error?
Here's my code:
import mechanize
import cookielib
import re
first=("example.com/page1")
second=("example.com/page2")
third=("example.com/page3")
fourth=("example.com/page4")
## I have seven URL's I want to open
urls_list=[first,second,third,fourth]
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Log in credentials
br.open("example.com")
br.select_form(nr=0)
br["username"] = "username"
br["password"] = "password"
br.submit()
for url in urls_list:
br.open(url)
print re.findall("Some String")
|
Receiving a status 429 is not an error, it is the other server "kindly" asking you to please stop spamming requests. Obviously, your rate of requests has been too high and the server is not willing to accept this.
You should not seek to "dodge" this, or even try to circumvent server security settings by trying to spoof your IP, you should simply respect the server's answer by not sending too many requests.
If everything is set up properly, you will also have received a "Retry-after" header along with the 429 response. This header specifies the number of seconds you should wait before making another call. The proper way to deal with this "problem" is to read this header and to sleep your process for that many seconds.
You can find more information on status 429 here: http://tools.ietf.org/html/rfc6585#page-3
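A sketch of that approach, using requests rather than the question's mechanize for brevity (the URL and the 30-second fallback are placeholders):
import time
import requests

def get_respecting_rate_limit(url, max_retries=5):
    for _ in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            return response
        # Retry-After is usually a number of seconds; it can also be an HTTP
        # date, in which case you'd have to parse it instead of calling int().
        wait = int(response.headers.get("Retry-After", 30))
        time.sleep(wait)
    raise RuntimeError("Still rate-limited after %d retries" % max_retries)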
|
How to have clusters of stacked bars with python (Pandas)
|
So here is how my data set looks like :
In [1]: df1=pd.DataFrame(np.random.rand(4,2),index=["A","B","C","D"],columns=["I","J"])
In [2]: df2=pd.DataFrame(np.random.rand(4,2),index=["A","B","C","D"],columns=["I","J"])
In [3]: df1
Out[3]:
I J
A 0.675616 0.177597
B 0.675693 0.598682
C 0.631376 0.598966
D 0.229858 0.378817
In [4]: df2
Out[4]:
I J
A 0.939620 0.984616
B 0.314818 0.456252
C 0.630907 0.656341
D 0.020994 0.538303
I want to have a stacked bar plot for each dataframe, but since they have the same index, I'd like to have 2 stacked bars per index.
I've tried to plot both on the same axes :
In [5]: ax = df1.plot(kind="bar", stacked=True)
In [5]: ax2 = df2.plot(kind="bar", stacked=True, ax = ax)
But it overlaps.
Then I tried to concat the two datasets first:
pd.concat(dict(df1 = df1, df2 = df2),axis = 1).plot(kind="bar", stacked=True)
but here everything is stacked
My best try is :
pd.concat(dict(df1 = df1, df2 = df2),axis = 0).plot(kind="bar", stacked=True)
Which gives :
This is basically what I want, except that I want the bar ordered as
(df1,A) (df2,A) (df1,B) (df2,B) etc...
I guess there is a trick but I can't found it !
After @bgschiller's answer I got this :
Which is almost what I want. I would like the bar to be clustered by index, in order to have something visually clear.
Bonus : Having the x-label not redundant, something like :
df1 df2 df1 df2
_______ _______ ...
A B
Thanks for helping.
|
So, I eventually found a trick :
Here it is with a more complete example :
import pandas as pd
import matplotlib.cm as cm
import numpy as np
import matplotlib.pyplot as plt
def plot_clustered_stacked(dfall, labels=None, title="multiple stacked bar plot", H="/", **kwargs):
"""Given a list of dataframes, with identical columns and index, create a clustered stacked bar plot.
labels is a list of the names of the dataframe, used for the legend
title is a string for the title of the plot
H is the hatch used for identification of the different dataframe"""
n_df = len(dfall)
n_col = len(dfall[0].columns)
n_ind = len(dfall[0].index)
axe = plt.subplot(111)
for df in dfall : # for each data frame
axe = df.plot(kind="bar",
linewidth=0,
stacked=True,
ax=axe,
legend=False,
grid=False,
**kwargs) # make bar plots
h,l = axe.get_legend_handles_labels() # get the handles we want to modify
for i in range(0, n_df * n_col, n_col): # len(h) = n_col * n_df
for j, pa in enumerate(h[i:i+n_col]):
for rect in pa.patches: # for each index
rect.set_x(rect.get_x() + 1 / float(n_df + 1) * i / float(n_col))
rect.set_hatch(H * int(i / n_col)) #edited part
rect.set_width(1 / float(n_df + 1))
axe.set_xticks((np.arange(0, 2 * n_ind, 2) + 1 / float(n_df + 1)) / 2.)
axe.set_xticklabels(df.index, rotation = 0)
axe.set_title(title)
# Add invisible data to add another legend
n=[]
for i in range(n_df):
n.append(axe.bar(0, 0, color="gray", hatch=H * i))
l1 = axe.legend(h[:n_col], l[:n_col], loc=[1.01, 0.5])
if labels is not None:
l2 = plt.legend(n, labels, loc=[1.01, 0.1])
axe.add_artist(l1)
return axe
# create fake dataframes
df1 = pd.DataFrame(np.random.rand(4, 5),
index=["A", "B", "C", "D"],
columns=["I", "J", "K", "L", "M"])
df2 = pd.DataFrame(np.random.rand(4, 5),
index=["A", "B", "C", "D"],
columns=["I", "J", "K", "L", "M"])
df3 = pd.DataFrame(np.random.rand(4, 5),
index=["A", "B", "C", "D"],
columns=["I", "J", "K", "L", "M"])
# Then, just call :
plot_clustered_stacked([df1, df2, df3],["df1", "df2", "df3"])
And it gives that :
Tell me if you don't understand something in the code.
Feel free to re-use this code which is under CC0.
|
str.isdecimal() and str.isdigit() difference example
|
Reading the Python docs I have come to the .isdecimal() and .isdigit() string methods, and I'm not finding the literature too clear on how they usefully differ. Could someone supply me with code examples of where these two functions differ, please?
Similar behaviour:
>>> str.isdecimal('1')
True
>>> str.isdigit('1')
True
>>> str.isdecimal('1.0')
False
>>> str.isdigit('1.0')
False
>>> str.isdecimal('1/2')
False
>>> str.isdigit('1/2')
False
|
There are differences, but they're somewhat rare*. It mainly crops up with various Unicode characters, such as the superscript two '²' (U+00B2):
>>> c = '\u00B2'
>>> c.isdecimal()
False
>>> c.isdigit()
True
You can also go further down the careful-unicode-distinction rabbit hole with the isnumeric method:
>>> c = '\u00BD' # ½
>>> c.isdecimal()
False
>>> c.isdigit()
False
>>> c.isnumeric()
True
*At least, I've never encountered production code that needs to distinguish between strings that contain different types of these exceptional situations, but surely use cases exist somewhere.
|
Why is copying a list using a slice[:] faster than using the obvious way?
|
Why is shallow-copying a list using a slice so much faster than using list builtin?
In [1]: x = range(10)
In [2]: timeit x_ = x[:]
10000000 loops, best of 3: 83.2 ns per loop
In [3]: timeit x_ = list(x)
10000000 loops, best of 3: 147 ns per loop
Usually when I see weird things like this, they're fixed in python3 - but this discrepancy is still there:
In [1]: x = list(range(10))
In [2]: timeit x_ = x[:]
10000000 loops, best of 3: 100 ns per loop
In [3]: timeit x_ = list(x)
10000000 loops, best of 3: 178 ns per loop
|
The difference is in additional function call (just SLICE+0 vs CALL_FUNCTION 1 with extra stack operations):
>>> import dis
>>> def f(lst):
... return lst[:]
...
>>> def f1(lst):
... return list(lst)
...
>>> dis.dis(f)
2 0 LOAD_FAST 0 (lst)
3 SLICE+0
4 RETURN_VALUE
>>> dis.dis(f1)
2 0 LOAD_GLOBAL 0 (list)
3 LOAD_FAST 0 (lst)
6 CALL_FUNCTION 1
9 RETURN_VALUE
From dis docs:
SLICE+0()
Implements TOS = TOS[:].
(TOS - top of stack)
CALL_FUNCTION(argc)
Calls a function. The low byte of argc indicates
the number of positional parameters, the high byte the number of
keyword parameters. On the stack, the opcode finds the keyword
parameters first. For each keyword argument, the value is on top of
the key. Below the keyword parameters, the positional parameters are
on the stack, with the right-most parameter on top. Below the
parameters, the function object to call is on the stack. Pops all
function arguments, and the function itself off the stack, and pushes
the return value.
|
BeautifulSoup4 get_text still has javascript
|
I'm trying to remove all the html/javascript using bs4, however, it doesn't get rid of javascript. I still see it there with the text. How can I get around this?
I tried using nltk, which works fine; however, clean_html and clean_url will be removed moving forward. Is there a way to use soup's get_text and get the same result?
I tried looking at these other pages:
BeautifulSoup get_text does not strip all tags and JavaScript
Currently i'm using the nltk's deprecated functions.
EDIT
Here's an example:
import urllib
from bs4 import BeautifulSoup
url = "http://www.cnn.com"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
print soup.get_text()
I still see the following for CNN:
$j(function() {
"use strict";
if ( window.hasOwnProperty('safaripushLib') && window.safaripushLib.checkEnv() ) {
var pushLib = window.safaripushLib,
current = pushLib.currentPermissions();
if (current === "default") {
pushLib.checkPermissions("helloClient", function() {});
}
}
});
/*globals MainLocalObj*/
$j(window).load(function () {
'use strict';
MainLocalObj.init();
});
How can I remove the js?
Only other options I found are:
https://github.com/aaronsw/html2text
The problem with html2text is that it's really really slow at times, and creates noticeable lag, which is one thing nltk was always very good with.
|
Based partly on Can I remove script tags with BeautifulSoup?
import urllib
from bs4 import BeautifulSoup
url = "http://www.cnn.com"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
# kill all script and style elements
for script in soup(["script", "style"]):
script.extract() # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)
print(text)
|
Remove a column from a nested list in Python
|
I need help figuring how to work around removing a 'column' from a nested list to modify it.
Say I have
L = [[1,2,3,4],
[5,6,7,8],
[9,1,2,3]]
and I want to remove the second column (so values 2,6,1) to get:
L = [[1,3,4],
[5,7,8],
[9,2,3]]
I'm stuck with how to modify the list with just taking out a column. I've done something sort of like this before? Except we were printing it instead, and of course it wouldn't work in this case because I believe the break conflicts with the rest of the values I want in the list.
def L_break(L):
i = 0
while i < len(L):
k = 0
while k < len(L[i]):
print( L[i][k] , end = " ")
if k == 1:
break
k = k + 1
print()
i = i + 1
So, how would you go about modifying this nested list?
Is my mind in the right place comparing it to the code I have posted or does this require something different?
|
You can simply delete the appropriate element from each row using del:
L = [[1,2,3,4],
[5,6,7,8],
[9,1,2,3]]
for row in L:
del row[1] # 0 for column 1, 1 for column 2, etc.
print L
# outputs [[1, 3, 4], [5, 7, 8], [9, 2, 3]]
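If you'd rather build a new list than mutate L in place, a list comprehension that skips the column works too:
L = [row[:1] + row[2:] for row in L]  # everything except column index 1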
|
Django ORM - objects.filter() vs. objects.all().filter() - which one is preferred?
|
Very often I see constructs like
MyModel.objects.all().filter(...)
which will return a QuerySet of the default Mananger. At first all() seems to be quite redundant, because
MyMode.objects.filter(...)
delivers the same result.
However, this seems to be save for the default Manager only, because of the following two statements in the Django documentation:
Excerpt fromt the Chapter "Adding extra manager methods"
A custom Manager method can return anything you want. It doesn't have
to return a QuerySet.
Definition of the all() manager method:
all()
Returns a copy of the current QuerySet (or QuerySet subclass).
This can be useful in situations where you might want to pass in
either a model manager or a QuerySet and do further filtering on the
result. After calling all() on either object, you'll definitely have a
QuerySet to work with.
This seems a bit like a contradiction to me. On one hand Django offers the freedom to let a manager method return whatever preferred and on the other hand it requires a QuerySet for the all() method. I'm aware that each manager has a get_queryset method which is called by all(). But who stops me from overriding all() in my custom manager? Although I agree it would be bad design to do so.
So as far as I can see, the all() method does not guarantee to return a QuerySet. What exactly does MyModel.objects return? Does this statement call all() or get_queryset()?
Do you prefer MyModel.objects.filter(...) or MyModel.objects.all().filter(...). And if so, why?
Have you ever encountered wonky managers that would mess with those methods in a undesirable way?
|
The method all() on a manager just delegates to get_queryset(), as you can see in the Django source code:
def all(self):
return self.get_queryset()
So it's just a way to get the queryset from the manager. As they explain in the documentation, this can be handy to ensure that you're dealing with a queryset and not a manager, because MyModel.objects returns a model manager.
For example, if you want to iterate over all the items, you can't do this:
for item in MyModel.objects:
# do something with item
Because you can't iterate over a model manager. all() returns the queryset, you can iterate over a queryset:
for item in MyModel.objects.all():
# do something with item
Generally, you should never overwrite all(). You can overwrite get_queryset() but this method must return a queryset.
If you would use a filter method like filter() or exclude(), you would already have the queryset, because these methods are proxied to the queryset. So you don't have to do something like all().filter().
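As a sketch of a legitimate get_queryset() override (the published field and the model are made up for the example), the override still returns a QuerySet, just a pre-filtered one:
from django.db import models

class PublishedManager(models.Manager):
    def get_queryset(self):
        return super(PublishedManager, self).get_queryset().filter(published=True)

class Article(models.Model):
    published = models.BooleanField(default=False)
    objects = models.Manager()   # default manager
    live = PublishedManager()    # Article.live.filter(...) chains like any queryset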
|
Use endswith with multiple extensions
|
I'm trying to detect files with a list of extensions.
ext = [".3g2", ".3gp", ".asf", ".asx", ".avi", ".flv", \
".m2ts", ".mkv", ".mov", ".mp4", ".mpg", ".mpeg", \
".rm", ".swf", ".vob", ".wmv"]
if file.endswith(ext): # how to use the list ?
command 1
elif file.endswith(""): # it should be a folder
command 2
elif file.endswith(".other"): # not a video, not a folder
command 3
|
Use a tuple for it.
>>> ext = [".3g2", ".3gp", ".asf", ".asx", ".avi", ".flv", \
".m2ts", ".mkv", ".mov", ".mp4", ".mpg", ".mpeg", \
".rm", ".swf", ".vob", ".wmv"]
>>> ".wmv".endswith(tuple(ext))
True
>>> ".rand".endswith(tuple(ext))
False
Instead of converting it every time, just convert it to a tuple once.
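Applied to the code in the question, that might look like:
ext = (".3g2", ".3gp", ".asf", ".asx", ".avi", ".flv",
       ".m2ts", ".mkv", ".mov", ".mp4", ".mpg", ".mpeg",
       ".rm", ".swf", ".vob", ".wmv")
if file.endswith(ext):  # it's a video
    ...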
|
What is a Python bytestring?
|
What's a Python bytestring?
All I can find are topics on how to encode to bytestring or decode to ascii or utf-8. I'm trying to understand how it works under the hood. In a normal ASCII string, it's an array or list of characters, and each character represents an ASCII value from 0-255, so that's how you know what character is represented by the number. In Unicode, it's the 8- or 16-byte representation for the character that tells you what character it is.
So what is a bytestring? How does Python know which characters to represent as what? How does it work under the hood? Since you can print or even return these strings and it shows you the string representation, I don't quite get it...
Ok, so my point is definitely getting missed here. I've been told that it's an immutable sequence of bytes without any particular interpretation.
A sequence of bytes.. Okay, let's say one byte:
'a'.encode() returns b'a'.
Simple enough. Why can I read the a?
Say I get the ASCII value for a, by doing this:
printf "%d" "'a"
It returns 97. Okay, good, the integer value for the ASCII character a. If we interpret 97 as ASCII, say in a C char, then we get the letter a. Fair enough. If we convert the byte representation to bits, we get this:
01100001
2^0 + 2^5 + 2^6 = 97. Cool.
So why is 'a'.encode() returning b'a' instead of 01100001??
If it's without a particular interpretation, shouldn't it be returning something like b'01100001'?
It seems like it's interpreting it like ASCII.
Someone mentioned that it's calling __repr__ on the bytestring, so it's displayed in human-readable form. However, even if I do something like:
with open('testbytestring.txt', 'wb') as f:
f.write(b'helloworld')
It will still insert helloworld as a regular string into the file, not as a sequence of bytes... So is a bytestring in ASCII?
|
It is a common misconception that text is ascii or utf8 or cp1252, and therefore bytes are text.
Text is only text, in the way that images are only images. The matter of storing text or images to disk is a matter of encoding that data into a sequence of bytes. There are many ways to encode images into bytes: Jpeg, png, svg, and likewise many ways to encode text, ascii, utf8 or cp1252.
Once encoding has happened, bytes are just bytes. Bytes are not images anymore; they have forgotten the colors they mean, although an image format decoder can recover that information. Bytes have similarly forgotten the letters they used to be. In fact, bytes don't remember whether they were images or text at all. Only out-of-band knowledge (filename, media headers, etcetera) can guess what those bytes should mean, and even that can be wrong (in case of data corruption).
so, in python (py3), we have two types for things that might otherwise look similar; For text, we have str, which knows it's text; it knows which letters it's supposed to mean. It doesn't know which bytes that might be, since letters are not bytes. We also have bytestring, which doesn't know if it's text or images or any other kind of data.
The two types are superficially similar, since they are both sequences of things, but the things that they are sequences of is quite different.
Implementationally, str is stored in memory as UCS-? where the ? is implementation defined, it may be UCS4, UCS2 or UCS1, depending on compile time options and which codepoints are present in the represented string.
edit "but why"?
Some things that look like text are actually defined in other terms. A really good example of this are the many internet protocols of the world. For instance, HTTP is a "text" protocol that is in fact defined using the ABNF syntax common in RFC's. These protocols are expressed in terms of octets, not characters, although an informal encoding may also be suggested:
2.3. Terminal Values
Rules resolve into a string of terminal values, sometimes called
characters. In ABNF, a character is merely a non-negative integer.
In certain contexts, a specific mapping (encoding) of values into a
character set (such as ASCII) will be specified.
This distinction is important, because it's not possible to send text over the internet, the only thing you can do is send bytes. saying "text but in 'foo' encoding" makes the format that much more complex, since clients and servers need to now somehow figure out the encoding business on their own, hopefully in the same way, since they must ultimately pass data around as bytes anyway. This is doubly useless since these protocols are seldom about text handling anyway, and is only a convenience for implementers. Neither the server owners nor end users are ever interested in reading the words Transfer-Encoding: chunked, so long as both the server and the browser understand it correctly.
By comparison, when working with text, you don't really care how it's encoded. You can express the "Heävy Mëtal Ümlaüts" any way you like, except "Heδvy Mλtal άmlaÏts"
the distinct types thus give you a way to say "this value 'means' text" or "bytes".
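To make the b'a' point from the question concrete (Python 3): a bytes object really is a sequence of small integers, and b'a' is just a friendlier way for the repr to display the single byte 0x61.
>>> list(b'a')
[97]
>>> b'a'[0]
97
>>> bytes([97])
b'a'
>>> 'a'.encode('utf-8') == b'\x61' == b'a'
True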
|
How to get tkinter canvas to dynamically resize to window width?
|
I need to get a canvas in Python to set its width to the width of the window, and then dynamically re-size the canvas when the user makes the window smaller/bigger. Is there any way of doing this (easily)? Thank-you in advance.
|
I thought I would add in some extra code to expand on @fredtantini's answer, as it doesn't deal with how to update the shape of widgets drawn on the Canvas.
To do this you need to use the scale method and tag all of the widgets. A complete example is below.
from Tkinter import *
# a subclass of Canvas for dealing with resizing of windows
class ResizingCanvas(Canvas):
def __init__(self,parent,**kwargs):
Canvas.__init__(self,parent,**kwargs)
self.bind("<Configure>", self.on_resize)
self.height = self.winfo_reqheight()
self.width = self.winfo_reqwidth()
def on_resize(self,event):
# determine the ratio of old width/height to new width/height
wscale = float(event.width)/self.width
hscale = float(event.height)/self.height
self.width = event.width
self.height = event.height
# resize the canvas
self.config(width=self.width, height=self.height)
# rescale all the objects tagged with the "all" tag
self.scale("all",0,0,wscale,hscale)
def main():
root = Tk()
myframe = Frame(root)
myframe.pack(fill=BOTH, expand=YES)
mycanvas = ResizingCanvas(myframe,width=850, height=400, bg="red", highlightthickness=0)
mycanvas.pack(fill=BOTH, expand=YES)
# add some widgets to the canvas
mycanvas.create_line(0, 0, 200, 100)
mycanvas.create_line(0, 100, 200, 0, fill="red", dash=(4, 4))
mycanvas.create_rectangle(50, 25, 150, 75, fill="blue")
# tag all of the drawn widgets
mycanvas.addtag_all("all")
root.mainloop()
if __name__ == "__main__":
main()
|
What is the inverse of regularization strength in Logistic Regression? How should it affect my code?
|
I am using sklearn.linear_model.LogisticRegression in scikit learn to run a Logistic Regression.
C : float, optional (default=1.0) Inverse of regularization strength;
must be a positive float. Like in support vector machines, smaller
values specify stronger regularization.
What does C mean here in simple terms please? What is regularization strength?
|
Regularization is applying a penalty to increasing the magnitude of parameter values in order to reduce overfitting. When you train a model such as a logistic regression model, you are choosing parameters that give you the best fit to the data. This means minimizing the error between what the model predicts for your dependent variable given your data compared to what your dependent variable actually is.
The problem comes when you have a lot of parameters (a lot of independent variables) but not too much data. In this case, the model will often tailor the parameter values to idiosyncrasies in your data -- which means it fits your data almost perfectly. However because those idiosyncrasies don't appear in future data you see, your model predicts poorly.
To solve this, in addition to minimizing the error as already discussed, you also minimize a function that penalizes large values of the parameters. Most often the function is λΣθ_j², which is some constant λ times the sum of the squared parameter values θ_j². The larger λ is, the less likely it is that the parameters will be increased in magnitude simply to adjust for small perturbations in the data. In your case however, rather than specifying λ, you specify C = 1/λ.
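A minimal sketch of how C plays out in scikit-learn (the iris data is just a stand-in): smaller C means larger λ, so the fitted coefficients get pulled harder toward zero.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
X, y = data.data, data.target

for C in (0.01, 1.0, 100.0):
    model = LogisticRegression(C=C).fit(X, y)
    # the total coefficient magnitude shrinks as C gets smaller
    print(C, abs(model.coef_).sum())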
|
How to get the numerical fitting results when plotting a regression in seaborn?
|
If I use the seaborn library in Python to plot the result of a linear regression, is there a way to find out the numerical results of the regression? For example, I might want to know the fitting coefficients or the R2 of the fit.
I could re-run the same fit using the underlying statsmodels interface, but that would seem to be unnecessary duplicate effort, and anyway I'd want to be able to compare the resulting coefficients to be sure the numerical results are the same as what I'm seeing in the plot.
|
There's no way to do this.
In my opinion, asking a visualization library to give you statistical modeling results is backwards. statsmodels, a modeling library, lets you fit a model and then draw a plot that corresponds exactly to the model you fit. If you want that exact correspondence, this order of operations makes more sense to me.
You might say "but the plots in statsmodels don't have as many aesthetic options as seaborn". But I think that makes sense: statsmodels is a modeling library that sometimes uses visualization in the service of modeling. seaborn is a visualization library that sometimes uses modeling in the service of visualization. It is good to specialize, and bad to try to do everything.
Fortunately, both seaborn and statsmodels use tidy data. That means that you really need very little effort duplication to get both plots and models through the appropriate tools.
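For example, a minimal sketch of the statsmodels route (the toy DataFrame just stands in for whatever you handed to seaborn):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({"x": np.arange(50.0)})
df["y"] = 2 * df["x"] + np.random.normal(size=50)

results = smf.ols("y ~ x", data=df).fit()
print(results.params)    # fitted intercept and slope
print(results.rsquared)  # R^2 of the fit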
|
nose vs pytest - what are the (subjective) differences that should make me pick either?
|
I've started working on a rather big (multithreaded) Python project, with loads of (unit)tests. The most important problem there is that running the application requires a preset environment, which is implemented by a context manager. So far we made use of a patched version of the unittest runner that would run the tests inside this manager, but that doesn't allow switching context between different test-modules.
Both nose and pytest do support such a thing, because they support fixtures at many granularities, so we're looking into switching to nose or pytest. Both these libraries would also support 'tagging' tests and running only these tagged subsets, which is something we also would like to do.
I have been looking through the documentation of both nose and pytest a bit, and as far as I can see the bigger part of those libraries essentially support the same functionality, except that it may be named differently, or require slightly different syntax. Also I noted some small differences in the available plugins (nose has multiprocess-support, pytest doesn't seem to for instance)
So it seems, the devil is in the detail, which means (often at least) in personal taste and we better go with the library that fits our personal taste best.
So I'd like to ask for subjective argumentation on why I should go with nose or pytest, in order to choose the library/community combo that best fits our needs.
|
I used to use Nose because it was the default with Pylons. I didn't like it at all. It had configuration tendrils in multiple places, virtually everything seemed to be done with an underdocumented plugin which made it all even more indirect and confusing, and because it did unittest tests by default, it regularly broke with Unicode tracebacks, hiding the sources of errors.
I've been pretty happy with py.test the last couple years. Being able to just write a test with assert out of the box makes me hate writing tests way less, and hacking whatever I need atop the core has been pretty easy. Rather than a fixed plugin interface it just has piles of hooks, and pretty understandable source code should you need to dig further. I even wrote an adapter for running Testify tests under py.test, and had more trouble with Testify than with py.test.
That said, I hear nose has plugins for classless tests and assert introspection nowadays, so you'll probably do fine with either. I still feel like I can hit the ground running with py.test, though, and I can understand what's going on when it breaks.
|
Include html file in Jinja2 template
|
I am using Flask microframework for my server which uses Jinja templates. I have parent template.html and some childs child1.html, child2.html. Some of these childs are pretty large html files and I would like to somehow split them for better lucidity over my work.
the main.py:
from flask import Flask, request, render_template
app = Flask(__name__)
@app.route('/')
@app.route('/<task>')
def home(task=''):
return render_template('child1.html', task=task)
app.run()
The simplified template.html:
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div class="container">
{% block content %}
{% endblock %}
</div>
</body>
</html>
And the magic is in child1.html:
{% extends 'template.html' %}
{% block content %}
{% if task == 'content1' %}
<!-- include content1.html -->
{% endif %}
{% if task == 'content2' %}
<!-- include content2.html -->
{% endif %}
{% endblock %}
Instead of the comments
<!-- include content1.html -->
I have a lot of html text. And it is very hard to keep track of changes and not to make some mistakes, which are then pretty hard to find and correct. So I would like to just load the content1.html instead of writing it all in child1.html. I came across this question Include another HTML file in a HTML file , but I had problems implementing it. I think Jinja2 might have a better tool for that.
Note.
The code above might not be working properly, I just wrote it to illustrate the problem.
|
Use the jinja2 {% include %} directive.
{% extends 'template.html' %}
{% block content %}
{% if task == 'content1' %}
{% include 'content1.html' %}
{% endif %}
{% if task == 'content2' %}
{% include 'content2.html' %}
{% endif %}
{% endblock %}
This will include the content from the correct content-file.
|
Is the list of Python reserved words and builtins available in a library?
|
Is the list of Python reserved words and builtins available in a library? I want to do something like:
from x.y import reserved_words_and_builtins
if x in reserved_words_and_builtins:
x += '_'
|
To verify that a string is a keyword you can use keyword.iskeyword; to get the list of reserved keywords you can use keyword.kwlist:
>>> import keyword
>>> keyword.iskeyword('break')
True
>>> keyword.kwlist
['and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'exec', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'not', 'or', 'pass', 'print', 'raise', 'return', 'try', 'while', 'with', 'yield']
If you want to include built-in names as well(Python 2), then :
>>> import __builtin__
>>> dir(__builtin__)
['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException', 'BufferError', 'BytesWarning', 'DeprecationWarning', 'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False', 'FloatingPointError', 'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError', 'ImportWarning', 'IndentationError', 'IndexError', 'KeyError', 'KeyboardInterrupt', 'LookupError', 'MemoryError', 'NameError', 'None', 'NotImplemented', 'NotImplementedError', 'OSError', 'OverflowError', 'PendingDeprecationWarning', 'ReferenceError', 'RuntimeError', 'RuntimeWarning', 'StandardError', 'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError', 'SystemExit', 'TabError', 'True', 'TypeError', 'UnboundLocalError', 'UnicodeDecodeError', 'UnicodeEncodeError', 'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning', 'UserWarning', 'ValueError', 'Warning', 'WindowsError', 'ZeroDivisionError', '_', '__debug__', '__doc__', '__import__', '__name__', '__package__', 'abs', 'all', 'any', 'apply', 'basestring', 'bin', 'bool', 'buffer', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod', 'cmp', 'coerce', 'compile', 'complex', 'copyright', 'credits', 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'execfile', 'exit', 'file', 'filter', 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'help', 'hex', 'id', 'input', 'int', 'intern', 'isinstance', 'issubclass', 'iter', 'len', 'license', 'list', 'locals', 'long', 'map', 'max', 'memoryview', 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', 'property', 'quit', 'range', 'raw_input', 'reduce', 'reload', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'unichr', 'unicode', 'vars', 'xrange', 'zip']
For Python 3 you'll need to use builtins module:
>>> import builtins
>>> dir(builtins)
['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException', 'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning', 'ChildProcessError', 'ConnectionAbortedError', 'ConnectionError', 'ConnectionRefusedError', 'ConnectionResetError', 'DeprecationWarning', 'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'False', 'FileExistsError', 'FileNotFoundError', 'FloatingPointError', 'FutureWarning', 'GeneratorExit', 'IOError', 'ImportError', 'ImportWarning', 'IndentationError', 'IndexError', 'InterruptedError', 'IsADirectoryError', 'KeyError', 'KeyboardInterrupt', 'LookupError', 'MemoryError', 'NameError', 'None', 'NotADirectoryError', 'NotImplemented', 'NotImplementedError', 'OSError', 'OverflowError', 'PendingDeprecationWarning', 'PermissionError', 'ProcessLookupError', 'ReferenceError', 'ResourceWarning', 'RuntimeError', 'RuntimeWarning', 'StopIteration', 'SyntaxError', 'SyntaxWarning', 'SystemError', 'SystemExit', 'TabError', 'TimeoutError', 'True', 'TypeError', 'UnboundLocalError', 'UnicodeDecodeError', 'UnicodeEncodeError', 'UnicodeError', 'UnicodeTranslateError', 'UnicodeWarning', 'UserWarning', 'ValueError', 'Warning', 'WindowsError', 'ZeroDivisionError', '_', '__build_class__', '__debug__', '__doc__', '__import__', '__loader__', '__name__', '__package__', '__spec__', 'abs', 'all', 'any', 'ascii', 'bin', 'bool', 'bytearray', 'bytes', 'callable', 'chr', 'classmethod', 'compile', 'complex', 'copyright', 'credits', 'delattr', 'dict', 'dir', 'divmod', 'enumerate', 'eval', 'exec', 'exit', 'filter', 'float', 'format', 'frozenset', 'getattr', 'globals', 'hasattr', 'hash', 'help', 'hex', 'id', 'input', 'int', 'isinstance', 'issubclass', 'iter', 'len', 'license', 'list', 'locals', 'map', 'max', 'memoryview', 'min', 'next', 'object', 'oct', 'open', 'ord', 'pow', 'print', 'property', 'quit', 'range', 'repr', 'reversed', 'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum', 'super', 'tuple', 'type', 'vars', 'zip']
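Putting the two together gives roughly the lookup the question asked for (builtins is the Python 3 name; use __builtin__ on Python 2):
import keyword
import builtins

reserved_words_and_builtins = frozenset(keyword.kwlist) | frozenset(dir(builtins))

x = "class"
if x in reserved_words_and_builtins:
    x += "_"
print(x)  # class_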
|
Git Bash won't run my python files?
|
I have been trying to run my Python files in Git Bash but I keep getting an error and can't figure out how to fix it. The command I run in the Git Bash executable is python filename.py, and then it says
"Bash.exe": python.exe: command not found
I'm a windows user and I have added the path to my environment variables like so C:\Python27\python.exe;C:\Program Files\Git\bin\bash.exe
I have been looking around but I can't find anyone that has had this problem, or they don't give a straightforward answer. Please help.
Also I have never used Git before this is my first time.
|
Adapting the PATH should work. Just tried on my Git bash:
$ python --version
sh.exe": python: command not found
$ PATH=$PATH:/c/Python27/
$ python --version
Python 2.7.6
In particular, only provide the directory; don't specify the .exe on the PATH; and use slashes.
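To make that stick across sessions, you could append the export to your ~/.bashrc (this assumes your Git Bash sources ~/.bashrc and that Python lives in C:\Python27):
$ echo 'export PATH=$PATH:/c/Python27/' >> ~/.bashrc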
|
Error installing bcrypt with pip on OS X: cant find ffi.h (libffi is installed)
|
I'm getting this error when trying to install bcrypt with pip. I have libffi installed in a couple places (the Xcode OS X SDK, and from homebrew), but I don't know how to tell pip to look for it. Any suggestions?
Downloading/unpacking bcrypt==1.0.2 (from -r requirements.txt (line 41))
Running setup.py egg_info for package bcrypt
OS/X: confusion between 'cc' versus 'gcc' (see issue 123)
will not use '__thread' in the C code
c/_cffi_backend.c:14:10: fatal error: 'ffi.h' file not found
#include <ffi.h>
^
1 error generated.
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/Users/cody/virtualenvs/analytics/build/bcrypt/setup.py", line 104, in <module>
"Programming Language :: Python :: 3.3",
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "build/bdist.macosx-10.9-intel/egg/setuptools/dist.py", line 239, in __init__
File "build/bdist.macosx-10.9-intel/egg/setuptools/dist.py", line 264, in fetch_build_eggs
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 620, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 858, in best_match
return self.obtain(req, installer) # try and download/install
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 870, in obtain
return installer(requirement)
File "build/bdist.macosx-10.9-intel/egg/setuptools/dist.py", line 314, in fetch_build_egg
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 593, in easy_install
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 623, in install_item
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 811, in install_eggs
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 1017, in build_and_install
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 1005, in run_setup
distutils.errors.DistutilsError: Setup script exited with error: command 'cc' failed with exit status 1
Complete output from command python setup.py egg_info:
OS/X: confusion between 'cc' versus 'gcc' (see issue 123)
will not use '__thread' in the C code
c/_cffi_backend.c:14:10: fatal error: 'ffi.h' file not found
#include <ffi.h>
^
1 error generated.
Traceback (most recent call last):
File "<string>", line 16, in <module>
File "/Users/cody/virtualenvs/analytics/build/bcrypt/setup.py", line 104, in <module>
"Programming Language :: Python :: 3.3",
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "build/bdist.macosx-10.9-intel/egg/setuptools/dist.py", line 239, in __init__
File "build/bdist.macosx-10.9-intel/egg/setuptools/dist.py", line 264, in fetch_build_eggs
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 620, in resolve
dist = best[req.key] = env.best_match(req, ws, installer)
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 858, in best_match
return self.obtain(req, installer) # try and download/install
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 870, in obtain
return installer(requirement)
File "build/bdist.macosx-10.9-intel/egg/setuptools/dist.py", line 314, in fetch_build_egg
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 593, in easy_install
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 623, in install_item
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 811, in install_eggs
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 1017, in build_and_install
File "build/bdist.macosx-10.9-intel/egg/setuptools/command/easy_install.py", line 1005, in run_setup
distutils.errors.DistutilsError: Setup script exited with error: command 'cc' failed with exit status 1
----------------------------------------
Command python setup.py egg_info failed with error code 1 in /Users/cody/virtualenvs/analytics/build/bcrypt
|
Without using sudo and CFLAGS and CPPFLAGS (unnecessary for pip):
$ brew install pkg-config libffi
$ export PKG_CONFIG_PATH=/usr/local/Cellar/libffi/3.0.13/lib/pkgconfig/
$ pip install bcrypt
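Note that the 3.0.13 path above is version-specific; a more general variant (assuming a reasonably recent Homebrew) is to ask brew for the prefix:
$ export PKG_CONFIG_PATH=$(brew --prefix libffi)/lib/pkgconfig/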
|
Splitting a string with repeated characters into a list using regex
|
I am not well experienced with Regex but I have been reading a lot about it. Assume there's a string s = '111234' I want a list with the string split into L = ['111', '2', '3', '4']. My approach was to make a group checking if it's a digit or not and then check for a repetition of the group. Something like this
L = re.findall('\d[\1+]', s)
I think that \d[\1+] will basically check for either a single digit or a digit followed by repetitions of that same digit. I think this might do what I want. Can anyone help?
|
Use re.finditer():
>>> s='111234'
>>> [m.group(0) for m in re.finditer(r"(\d)\1*", s)]
['111', '2', '3', '4']
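If you'd rather avoid regex entirely, itertools.groupby gives the same result for this case, since it groups consecutive equal characters:
>>> from itertools import groupby
>>> [''.join(g) for _, g in groupby(s)]
['111', '2', '3', '4']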
|
What is the difference between Lock and RLock
|
From the docs:
threading.RLock() --
A factory function that returns a new reentrant lock object. A reentrant lock must be released by the thread that acquired it. Once a thread has acquired a reentrant lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it.
I am not sure why do we need this?
what's the difference between Rlock and Lock?
thanks
|
The main difference is that a Lock can only be acquired once. It cannot be acquired again, until it is released. (After it's been released, it can be re-acquired by any thread).
An RLock on the other hand, can be acquired multiple times, by the same thread. It needs to be released the same number of times in order to be "unlocked".
Another difference is that an acquired Lock can be released by any thread, while an acquired RLock can only be released by the thread which acquired it.
Here's an example demonstrating why RLock is useful at times. Suppose you have:
def f():
g()
h()
def g():
h()
do_something1()
def h():
do_something2()
Let's say all of f, g, and h are public (i.e. can be called directly by an external caller), and all of them require synchronization.
Using a Lock, you can do something like:
lock = Lock()
def f():
with lock:
_g()
_h()
def g():
with lock:
_g()
def _g():
_h()
do_something1()
def h():
with lock:
_h()
def _h():
do_something2()
Basically, since f cannot call g after acquiring the lock, it needs to call a "raw" version of g (i.e. _g). So you end up with a "synced" version and a "raw" version of each function.
Using an RLock elegantly solves the problem:
lock = RLock()
def f():
with lock:
g()
h()
def g():
with lock:
h()
do_something1()
def h():
with lock:
do_something2()
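A minimal sketch of the reentrancy difference (nested acquisition by the same thread):
from threading import Lock, RLock

rlock = RLock()
with rlock:
    with rlock:   # fine: the owning thread may re-acquire an RLock
        pass

lock = Lock()
with lock:
    pass          # nesting another "with lock:" here would deadlock, since a Lock cannot be re-acquired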
|
Error installing gnureadline via pip
|
I broke my IPython setup whilst trying to upgrade to IPython 2.0. The installation fails when gnureadline is being installed. I originally had this problem but I fixed it. Now I'm getting this error:
cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. -Qunused-arguments -Qunused-arguments build/temp.macosx-10.9-intel-2.7/Modules/2.x/readline.o readline/libreadline.a readline/libhistory.a -lncurses -o build/lib.macosx-10.9-intel-2.7/gnureadline.so
clang: error: no such file or directory: 'readline/libreadline.a'
clang: error: no such file or directory: 'readline/libhistory.a'
error: command 'cc' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools;__file__='/private/var/folders/jj/0w0dd3n16jq4g5579g6c7h040000gn/T/pip_build_root/gnureadline/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/jj/0w0dd3n16jq4g5579g6c7h040000gn/T/pip-iJITYv-record/install-record.txt --single-version-externally-managed failed with error code 1 in /private/var/folders/jj/0w0dd3n16jq4g5579g6c7h040000gn/T/pip_build_root/gnureadline
Storing complete log in /Users/mc/Library/Logs/pip.log
Thanks for any help.
|
sudo apt-get install libncurses5-dev
Reference:
Ipython no readline available and pip install readline error
|
filtering pandas dataframes on dates
|
I have a pandas data frame with a 'date' column. Now i need to filter out all rows in the dataframe that have dates outside of the next two months.
Essentially, I only need to retain the row that are within the next two months.
What is the best way to achieve this.
|
If it's the index, you should use the .ix or .loc selector.
For example:
df.ix['2014-01-01':'2014-02-01']
See details here http://pandas.pydata.org/pandas-docs/stable/dsintro.html#indexing-selection
I guess it's smart to read up on pandas quite extensively before you start, as it's a very powerful library. Otherwise it will be hard to get anything meaningful done.
UPDATE:
If the column is not the index you have two choices:
Make it the index (either temporarily or permanently if it's time-series data)
Use a boolean mask, for example:
df[(df['date'] > '2013-01-01') & (df['date'] < '2013-02-01')]
See here for the general explanation
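For the "next two months" part of the question, a rough sketch (this assumes the date column holds datetimes; 60 days is used as an approximation of two months):
import datetime as dt

now = dt.datetime.now()
df[(df['date'] >= now) & (df['date'] <= now + dt.timedelta(days=60))]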
|
Keyboard interrupt in debug mode PyCharm
|
Is there any way to send a keyboard interrupt event in PyCharm IDE (3.1) while in the debugging mode?
|
Unfortunately, there is no simple way to do this. You will need to use psutil and the signal module. For this to work you need to install psutil and the best way to do that is through pip:
pip install psutil
So, let's say we have here, exhibit A:
import time

while True:
try:
time.sleep(3)
print "Zzzz"
time.sleep(3)
print("gong!")
except KeyboardInterrupt as e:
print "Closed by an Interrupt"
break
And you're running this in PyCharm. Make sure that the interpreter you're using has psutils installed. You can check:
Make sure you've set your interpreter correctly:
If you haven't installed psutil, you can always do so though the Install button.
Okay then, so now that we have everything set up, lets debug the program:
Now all we have to do is get the process ID, and we can get that at the very start of the program:
So, let's fire up our console, and send a signal:
And if that worked properly, you should see the while loop ending:
You can further streamline the process by adding a function to send an interrupt in the starting script for your console:
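# A sketch of such a helper (assuming psutil is installed for the console's
# interpreter; on Windows you would likely need CTRL_C_EVENT rather than SIGINT):
import signal
import psutil

def interrupt(pid):
    # send SIGINT (a KeyboardInterrupt) to the process with the given pid
    psutil.Process(pid).send_signal(signal.SIGINT)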
Once you're done with all of that, all you need to do is call interrupt(<pid here>) to call a keyboard interrupt on your process.
I hope that answers your question.
|
Why does `for x in list[None:None]:` work?
|
I have a script that attempts to read the begin and end point for a subset via a binary search, these values are then used to create a slice for further processing.
I noticed that when these variables did not get set (the search returned None) the code would still run and in the end I noticed that a slice spanning from None to None works as if examining the entire list (see example below).
#! /usr/bin/env python
list = [1,2,3,4,5,6,7,8,9,10]
for x in list[None:None]:
print x
Does anyone know why the choice was made to see the list[None:None] simply as list[:], at least that's what I think that happens (correct me if I'm wrong). I personally would think that throwing a TypeError would be desirable in such a case.
|
Because None is the default for slice positions. You can use either None or omit the value altogether, at which point None is passed in for you.
None is the default because you can use a negative stride, at which point the default start and end positions change. Compare list[0:len(list):-1] to list[None:None:-1], for example.
Python uses None for 'value not specified' throughout the standard library; this is no exception.
Note that if your class implements the object.__getitem__ hook, you'll get passed a slice() object with the start, end and stride attributes set to None as well:
>>> class Foo(object):
... def __getitem__(self, key):
... print key
...
>>> Foo()[:]
slice(None, None, None)
Since Foo() doesn't even implement a __len__ having the defaults use None is entirely logical here.
|
Can I use CountVectorizer in scikit-learn to count frequency of documents that were not used to extract the tokens?
|
I have been working with the CountVectorizer class in scikit-learn.
I understand that if used in the manner shown below, the final output will consist of an array containing counts of features, or tokens.
These tokens are extracted from a set of keywords, i.e.
tags = [
"python, tools",
"linux, tools, ubuntu",
"distributed systems, linux, networking, tools",
]
The next step is:
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(tokenizer=tokenize)
data = vec.fit_transform(tags).toarray()
print data
Where we get
[[0 0 0 1 1 0]
[0 1 0 0 1 1]
[1 1 1 0 1 0]]
This is fine, but my situation is just a little bit different.
I want to extract the features the same way as above, but I don't want the rows in data to be the same documents that the features were extracted from.
In other words, how can I get counts of another set of documents, say,
list_of_new_documents = [
["python, chicken"],
["linux, cow, ubuntu"],
["machine learning, bird, fish, pig"]
]
And get:
[[0 0 0 1 0 0]
[0 1 0 0 0 1]
[0 0 0 0 0 0]]
I have read the documentation for the CountVectorizer class, and came across the vocabulary argument, which is a mapping of terms to feature indices. I can't seem to get this argument to help me, however.
Any advice is appreciated.
PS: all credit due to Matthias Friedrich's Blog for the example I used above.
|
You're right that vocabulary is what you want. It works like this:
>>> cv = sklearn.feature_extraction.text.CountVectorizer(vocabulary=['hot', 'cold', 'old'])
>>> cv.fit_transform(['pease porridge hot', 'pease porridge cold', 'pease porridge in the pot', 'nine days old']).toarray()
array([[1, 0, 0],
[0, 1, 0],
[0, 0, 0],
[0, 0, 1]], dtype=int64)
So you pass it a dict with your desired features as the keys (an iterable of terms, as in the example above, works too).
If you used CountVectorizer on one set of documents and then you want to use the set of features from those documents for a new set, use the vocabulary_ attribute of your original CountVectorizer and pass it to the new one. So in your example, you could do
newVec = CountVectorizer(vocabulary=vec.vocabulary_)
to create a new tokenizer using the vocabulary from your first one.
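Putting that together for the new documents in the question (a sketch; it assumes each new document is a plain string, and that you pass the same custom tokenizer you used the first time, if any):
new_docs = ["python, chicken", "linux, cow, ubuntu", "machine learning, bird, fish, pig"]
new_counts = newVec.fit_transform(new_docs).toarray()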
|
elasticsearch-py scan and scroll to return all documents
|
I am using elasticsearch-py to connect to my ES database which contains over 3 million documents. I want to return all the documents so I can abstract data and write it to a csv. I was able to accomplish this easily for 10 documents (the default return) using the following code.
es=Elasticsearch("glycerin")
query={"query" : {"match_all" : {}}}
response= es.search(index="_all", doc_type="patent", body=query)
for hit in response["hits"]["hits"]:
print hit
Unfortunately, when I attempted to implement the scan & scroll so I could get all the documents I ran into issues. I tried it two different ways with no success.
Method 1:
scanResp= es.search(index="_all", doc_type="patent", body=query, search_type="scan", scroll="10m")
scrollId= scanResp['_scroll_id']
response= es.scroll(scroll_id=scrollId, scroll= "10m")
print response
After scroll/ it gives the scroll id and then ends with ?scroll=10m (Caused by <class 'httplib.BadStatusLine'>: ''))
Method 2:
query={"query" : {"match_all" : {}}}
scanResp= helpers.scan(client= es, query=query, scroll= "10m", index="", doc_type="patent", timeout="10m")
for resp in scanResp:
print "Hiya"
If I print out scanResp before the for loop I get <generator object scan at 0x108723dc0>. Because of this I'm relatively certain that I'm messing up my scroll somehow, but I'm not sure where or how to fix it.
Results:
Again, after scroll/ it gives the scroll id and then ends with ?scroll=10m (Caused by <class 'httplib.BadStatusLine'>: ''))
I tried increasing the Max retries for the transport class, but that didn't make a difference.I would very much appreciate any insight into how to fix this.
Note: My ES is located on a remote desktop on the same network.
|
The python scan method is generating a GET call to the rest api. It is trying to send over your scroll_id over http. The most likely case here is that your scroll_id is too large to be sent over http and so you are seeing this error because it returns no response.
Because the scroll_id grows based on the number of shards you have it is better to use a POST and send the scroll_id in JSON as part of the request. This way you get around the limitation of it being too large for an http call.
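A rough sketch of working around it with a plain POST (this assumes an Elasticsearch 1.x-era cluster on the default port 9200, where the scroll endpoint accepts the raw scroll id as the request body):
import requests

resp = requests.post("http://glycerin:9200/_search/scroll?scroll=10m", data=scrollId).json()
for hit in resp["hits"]["hits"]:
    print hit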
|
if __name__ == '__main__' in IPython
|
I have Python scripts that use the if __name__ == '__main__' trick to have some code only run when the script is called as a script and not when it is loaded into the interactive interpreter. However, when I edit these scripts from IPython using the %edit command, IPython apparently sets __name__ to '__main__' and so the code gets run every time I exit the editing session. Is there a good way to make this code not run when the module is edited from IPython?
|
It sounds like you might just need the -x switch:
In [1]: %edit
IPython will make a temporary file named: /tmp/ipython_edit_J8j9Wl.py
Editing... done. Executing edited code...
Name is main -- executing
Out[1]: "if __name__ == '__main__':\n print 'Name is main -- executing'\n"
In [2]: %edit -x /tmp/ipython_edit_J8j9Wl
Editing...
When you call %edit -x the code is not executed after you exit your editor.
|
Calculate Pandas DataFrame Time Difference Between Two Columns in Hours and Minutes
|
I have two columns, fromdate and todate, in a dataframe.
When I try to add a new column diff to find the difference between the two dates using
df['diff'] = df['todate'] - df['fromdate']
I get the diff column in days if more than 24 hours.
2014-01-24 13:03:12.050000,2014-01-26 23:41:21.870000,"2 days, 10:38:09.820000"
2014-01-27 11:57:18.240000,2014-01-27 15:38:22.540000,03:41:04.300000
2014-01-23 10:07:47.660000,2014-01-23 18:50:41.420000,08:42:53.760000
How do I convert my results into only hours and minutes, ignoring days and even seconds?
|
Pandas timestamp differences return a datetime.timedelta object. This can easily be converted into hours by using the astype method, like so
import pandas
df = pandas.DataFrame(columns=['to','fr','ans'])
df.to = [pandas.Timestamp('2014-01-24 13:03:12.050000'), pandas.Timestamp('2014-01-27 11:57:18.240000'), pandas.Timestamp('2014-01-23 10:07:47.660000')]
df.fr = [pandas.Timestamp('2014-01-26 23:41:21.870000'), pandas.Timestamp('2014-01-27 15:38:22.540000'), pandas.Timestamp('2014-01-23 18:50:41.420000')]
(df.fr-df.to).astype('timedelta64[h]')
to yield,
0 58
1 3
2 8
dtype: float64
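If you also want the leftover minutes (ignoring days and seconds, as asked), one possible approach is to work in total minutes and split:
total_minutes = (df.fr - df.to).astype('timedelta64[m]')
hours = total_minutes // 60
minutes = total_minutes % 60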
|
Selecting specific rows and columns from NumPy array
|
I've been going crazy trying to figure out what stupid thing I'm doing wrong here.
I'm using NumPy, and I have specific row indices and specific column indices that I want to select from. Here's the gist of my problem:
import numpy as np
a = np.arange(20).reshape((5,4))
# array([[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11],
# [12, 13, 14, 15],
# [16, 17, 18, 19]])
# If I select certain rows, it works
print a[[0, 1, 3], :]
# array([[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [12, 13, 14, 15]])
# If I select certain rows and a single column, it works
print a[[0, 1, 3], 2]
# array([ 2, 6, 14])
# But if I select certain rows AND certain columns, it fails
print a[[0,1,3], [0,2]]
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# ValueError: shape mismatch: objects cannot be broadcast to a single shape
Why is this happening? Surely I should be able to select the 1st, 2nd, and 4th rows, and 1st and 3rd columns? The result I'm expecting is:
a[[0,1,3], [0,2]] => [[0, 2],
[4, 6],
[12, 14]]
|
Fancy indexing requires you to provide all indices for each dimension. You are providing 3 indices for the first one, and only 2 for the second one, hence the error. You want to do something like this:
>>> a[[[0, 0], [1, 1], [3, 3]], [[0,2], [0,2], [0, 2]]]
array([[ 0, 2],
[ 4, 6],
[12, 14]])
That is of course a pain to write, so you can let broadcasting help you:
>>> a[[[0], [1], [3]], [0, 2]]
array([[ 0, 2],
[ 4, 6],
[12, 14]])
This is much simpler to do if you index with arrays, not lists:
>>> row_idx = np.array([0, 1, 3])
>>> col_idx = np.array([0, 2])
>>> a[row_idx[:, None], col_idx]
array([[ 0, 2],
[ 4, 6],
[12, 14]])
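numpy also provides np.ix_, which builds the open mesh for you from the row and column index lists:
>>> a[np.ix_([0, 1, 3], [0, 2])]
array([[ 0,  2],
       [ 4,  6],
       [12, 14]])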
|
How to clear variables in ipython?
|
Sometimes I rerun a script within the same ipython session and I get bad surprises when variables haven't been cleared. How do I clear all variables?
And is it possible to force this somehow every time I invoke the magic command %run?
Thanks
|
%reset seems to clear defined variables.
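If the confirmation prompt gets in the way (for example when scripting it), %reset -f forces the reset without asking:
In [1]: %reset -f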
|
Error trying to install Postgres for python (psycopg2)
|
I tried to install psycopg2 to my environment, but I get the following error:
(venv)avlahop@apostolos-laptop:~/development/django/rhombus-dental$ sudo pip install psycopg2
Downloading/unpacking psycopg2,
Downloading psycopg2-2.5.2.tar.gz (685kB): 685kB downloaded
Running setup.py egg_info for package psycopg2
Installing collected packages: psycopg2
Running setup.py install for psycopg2
building 'psycopg2._psycopg' extension
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.2 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010D -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:27:0:
./psycopg/psycopg.h:30:20: fatal error: Python.h: Δεν υπάρχει τέτοιο αρχείο ή κατάλογος (No such file or directory)
#include <Python.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Complete output from command /usr/bin/python -c "import setuptools;__file__='/tmp/pip_build_root/psycopg2/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-SgfQCA-record/install-record.txt --single-version-externally-managed:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/psycopg2
copying lib/pool.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/errorcodes.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/__init__.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/_json.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/_range.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/extensions.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/psycopg1.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/tz.py -> build/lib.linux-x86_64-2.7/psycopg2
copying lib/extras.py -> build/lib.linux-x86_64-2.7/psycopg2
creating build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/testconfig.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_bug_gc.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_dates.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_copy.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_cancel.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_bugX000.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_extras_dictcursor.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_psycopg2_dbapi20.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_types_basic.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_async.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_lobject.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_cursor.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_with.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/__init__.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_types_extras.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/testutils.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_notify.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_green.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_quote.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_connection.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_transaction.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/dbapi20.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/test_module.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
copying tests/dbapi20_tpc.py -> build/lib.linux-x86_64-2.7/psycopg2/tests
running build_ext
building 'psycopg2._psycopg' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/psycopg
x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.2 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010D -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement
In file included from psycopg/psycopgmodule.c:27:0:
./psycopg/psycopg.h:30:20: fatal error: Python.h: No such file or directory
#include <Python.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools;__file__='/tmp/pip_build_root/psycopg2/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-SgfQCA-record/install-record.txt --single-version-externally-managed failed with error code 1 in /tmp/pip_build_root/psycopg2
Traceback (most recent call last):
File "/usr/bin/pip", line 9, in <module>
load_entry_point('pip==1.4.1', 'console_scripts', 'pip')()
File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 148, in main
return command.main(args[1:], options)
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 169, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xce in position 54: ordinal not in range(128)
I also get errors trying to install gunicorn. I followed this example here but nothing worked. My virtual environment was created using the --no-site-packages flag. I use Ubuntu 13.10 and a Greek environment. Maybe it's that?
|
The python-dev package is required for compilation of Python extensions written in C or C++, like psycopg2. If you're running a Debian-based distribution (e.g. Ubuntu), you can install python-dev by running
apt-get install python-dev
or
apt-get install python3-dev
depending on your python version.
After that, proceed to install psycopg2 in your virtualenv environment as usual.
|
sum of absolute differences of a number in an array
|
I want to calculate the sum of absolute differences of a number at index i with all integers up to index i-1 in O(n). But I am not able to think of any approach better than O(n^2).
For E.g. :
[3,5,6,7,1]
array with absolute sum will be(for integer at index i sum will be at index i in another array):
[0, 2, 4, 7, 17]
Can anyone help me to reduce the complexity to O(n) (if not possible then at least a better optimization in terms of time complexity)?
Here my python code:
a=[3,5,6,7,1]
n=5
absoluteSumArray=[]
for i in range(0,n):
Sum=0
for j in range(0,i):
Sum+=abs(int(a[i])-int(a[j]))
absoluteSumArray.append(Sum)
|
I can offer an O(n log n) solution for a start: Let fi be the i-th number of the result. We have:
fi = sum over j < i of |ai - aj| = ai * (count of aj <= ai) - (sum of aj <= ai) + (sum of aj > ai) - ai * (count of aj > ai), with all counts and sums taken over j < i.
When walking through the array from left to right and maintain a binary search tree of the elements a0 to ai-1, we can solve all parts of the formula in O(log n):
Keep subtree sizes to count the elements larger than/smaller than a given one
Keep cumulative subtree sums to answer the sum queries for elements larger than/smaller than a given one
We can replace the augmented search tree with some simpler data structures if we want to avoid the implementation cost:
Sort the array beforehand. Assign every number its rank in the sorted order
Keep a binary indexed tree of 0/1 values to calculate the number of elements smaller than a given value
Keep another binary indexed tree of the array values to calculate the sums of elements smaller than a given value
TBH I don't think this can be solved in O(n) in the general case. At the very least you would need to sort the numbers at some point. But maybe the numbers are bounded or you have some other restriction, so you might be able to implement the sum and count operations in O(1).
An implementation:
# binary-indexed tree, allows point updates and prefix sum queries
class Fenwick:
def __init__(self, n):
self.tree = [0]*(n+1)
self.n = n
def update_point(self, i, val): # O(log n)
i += 1
while i <= self.n:
self.tree[i] += val
i += i & -i
def read_prefix(self, i): # O(log n)
i += 1
sum = 0
while i > 0:
sum += self.tree[i]
i -= i & -i
return sum
def solve(a):
rank = { v : i for i, v in enumerate(sorted(a)) }
res = []
counts, sums = Fenwick(len(a)), Fenwick(len(a))
total_sum = 0
for i, x in enumerate(a):
r = rank[x]
num_smaller = counts.read_prefix(r)
sum_smaller = sums.read_prefix(r)
res.append(total_sum - 2*sum_smaller + x * (2*num_smaller - i))
counts.update_point(r, 1)
sums.update_point(r, x)
total_sum += x
return res
print(solve([3,5,6,7,1])) # [0, 2, 4, 7, 17]
print(solve([2,0,1])) # [0, 2, 2]
|
Check if requirements are up to date
|
I'm using pip requirements files for keeping my dependency list.
I also try to follow best practices for managing dependencies and provide precise package versions inside the requirements file. For example:
Django==1.5.1
lxml==3.0
The question is: Is there a way to tell that there are any newer package versions available in the Python Package Index for packages listed inside requirements.txt?
For this particular example, currently latest available versions are 1.6.2 and 3.3.4 for Django and lxml respectively.
I've tried pip install --upgrade -r requirements.txt, but it says that all is up-to-date:
$ pip install --upgrade -r requirements.txt
Requirement already up-to-date: Django==1.5.1 ...
Note that at this point I don't want to run an actual upgrade - I just want to see if there are any updates available.
|
Pip has this functionality built-in. Assuming that you're inside your virtualenv type:
$ pip list --outdated
psycopg2 (Current: 2.5.1 Latest: 2.5.2)
requests (Current: 2.2.0 Latest: 2.2.1)
$ pip install -U psycopg2 requests
After that new versions of psycopg2 and requests will be downloaded and installed. Then:
$ pip freeze > requirements.txt
And you are done. This is not one command but the advantage is that you don't need any external dependencies.
|
Check requirements for python 3 support
|
I have several python projects with different set of dependencies listed in pip requirements files. I've started to think about porting the code to python 3, but I need to know if my dependencies are already there.
Is it possible to check what packages from a requirements.txt file support python 3 and what don't?
Example requirements.txt contents:
mysql-python==1.2.5
lxml==3.3.4
Fabric==1.8.0
From this list, only lxml supports python 3.
Just a side note.
There is a Python 3 Wall of Superpowers (python3wos project) that shows python 3 support of popular python packages.
As far as I understand, python3wos periodically parses the Python Package Index html pages and checks for the Programming Language :: Python :: 3 text to determine whether a package supports the Python 3 branch or not. Isn't there anything simpler than scraping the html on PyPI?
|
With the help of @thefourtheye and py3readiness.org sources, I've found exactly what I needed.
caniusepython3 module by Brett Cannon:
Determine what projects are blocking you from porting to Python 3
This script takes in a set of dependencies and then figures out which
of them are holding you up from porting to Python 3.
Example (for the requirements.txt from the question):
$ caniusepython3 -r requirements.txt
Finding and checking dependencies ...
You need 2 projects to transition to Python 3.
Of those 2 projects, 2 have no direct dependencies blocking their transition:
fabric
mysql-python
I should note that it still uses the same approach as python3wos - looking for Programming Language :: Python :: 3.x classifiers on the package page.
There is also a web-interface where you can type your dependencies or drop a requirements.txt file.
|
Why does numpy.power return 0 for small exponents while math.pow returns the correct answer?
|
In [25]: np.power(10,-100)
Out[25]: 0
In [26]: math.pow(10,-100)
Out[26]: 1e-100
I would expect both the commands to return 1e-100. This is not a precision issue either, since the issue persists even after increasing precision to 500. Is there some setting which I can change to get the correct answer?
|
Oh, it's much "worse" than that:
In [2]: numpy.power(10,-1)
Out[2]: 0
But this is a hint to what's going on: 10 is an integer, and numpy.power doesn't coerce the numbers to floats. But this works:
In [3]: numpy.power(10.,-1)
Out[3]: 0.10000000000000001
In [4]: numpy.power(10.,-100)
Out[4]: 1e-100
Note, however, that the power operator, **, does convert to float:
In [5]: 10**-1
Out[5]: 0.1
|
Creating a zero-filled pandas data frame
|
What is the best way to create a zero-filled pandas data frame of a given size?
I have used:
zero_data = np.zeros(shape=(len(data),len(feature_list)))
d = pd.DataFrame(zero_data, columns=feature_list)
Is there a better way to do it?
|
You can try this:
d = pd.DataFrame(0, index=np.arange(len(data)), columns=feature_list)
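Note that passing the integer 0 gives integer columns; if the feature matrix should hold floats (a guess at the typical use case), pass 0.0 instead:
d = pd.DataFrame(0.0, index=np.arange(len(data)), columns=feature_list)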
|
pygame installation issue in mac os
|
I have the following issue installing the pygame package.
In file included from src/_numericsurfarray.c:23:
src/pygame.h:106:10: fatal error: 'SDL.h' file not found
#include <SDL.h>
^
1 error generated.
error: Setup script exited with error: command 'gcc' failed with exit status 1
System information
Mac OS-10.9.2
python version- Python 2.7.5 :: Anaconda 1.6.1 (x86_64)
Any suggestion will be greatly appreciate ? Thanks.
|
Here (OS X Mavericks) I was able to install this way:
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
pip install https://bitbucket.org/pygame/pygame/get/default.tar.gz
("default" branch is on commit e3ae850 right now)
Source: https://bitbucket.org/pygame/pygame/issue/139/sdlh-not-found-even-thought-it-exists#comment-3822470
See this other StackOverflow question too: PyGame in a virtualenv on OS X with brew?
|
Why `float` function is slower than multiplying by 1.0?
|
I understand that this could be argued as a non-issue, but I write software for HPC environments, so this 3.5x speed increase actually makes a difference.
In [1]: %timeit 10 / float(98765)
1000000 loops, best of 3: 313 ns per loop
In [2]: %timeit 10 / (98765 * 1.0)
10000000 loops, best of 3: 80.6 ns per loop
I used dis to have a look at the code, and I assume float() will be slower as it requires a function call (unfortunately I couldn't dis.dis(float) to see what it's actually doing).
I guess a second question would be when should I use float(n) and when should I use n * 1.0?
|
Because the peephole optimizer optimizes it by precalculating the result of that multiplication
import dis
dis.dis(compile("10 / float(98765)", "<string>", "eval"))
1 0 LOAD_CONST 0 (10)
3 LOAD_NAME 0 (float)
6 LOAD_CONST 1 (98765)
9 CALL_FUNCTION 1
12 BINARY_DIVIDE
13 RETURN_VALUE
dis.dis(compile("10 / (98765 * 1.0)", "<string>", "eval"))
1 0 LOAD_CONST 0 (10)
3 LOAD_CONST 3 (98765.0)
6 BINARY_DIVIDE
7 RETURN_VALUE
It stores the result of 98765 * 1.0 in the byte code as a constant value. So, it just has to load it and divide, whereas in the first case we have to call the function.
We can see that even more clearly like this
print compile("10 / (98765 * 1.0)", "<string>", "eval").co_consts
# (10, 98765, 1.0, 98765.0)
Since the value is pre-calculated at compile time itself, the second one is faster.
Edit: As pointed out by Davidmh in the comments, the reason why it is not also optimising away the division is that its behaviour depends on flags, like from __future__ import division, and also on the -Q flag.
Quoting the comment from the actual peephole optimizer code for Python 2.7.9,
/* Cannot fold this operation statically since
the result can depend on the run-time presence
of the -Qnew flag */
|
Recovering features names of explained_variance_ratio_ in PCA with sklearn
|
I'm trying to recover, from a PCA done with scikit-learn, which features are selected as relevant.
A classic example with IRIS dataset.
import pandas as pd
import pylab as pl
from sklearn import datasets
from sklearn.decomposition import PCA
# load dataset
iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
# normalize data
df_norm = (df - df.mean()) / df.std()
# PCA
pca = PCA(n_components=2)
pca.fit_transform(df_norm.values)
print pca.explained_variance_ratio_
This returns
In [42]: pca.explained_variance_ratio_
Out[42]: array([ 0.72770452, 0.23030523])
How can I recover which two features allow these two explained variance among the dataset ?
Said differently, how can I get the index of these features in iris.feature_names?
In [47]: print iris.feature_names
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
Thanks in advance for your help.
|
Each principal component is a linear combination of the original variables:
PC_k = Beta_1k * X_1 + Beta_2k * X_2 + ... + Beta_pk * X_p
where the X_i are the original variables, and the Beta_ik are the corresponding weights or so-called coefficients.
To obtain the weights, you may simply pass identity matrix to the transform method:
>>> i = np.identity(df.shape[1]) # identity matrix
>>> i
array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])
>>> coef = pca.transform(i)
>>> coef
array([[ 0.5224, -0.3723],
[-0.2634, -0.9256],
[ 0.5813, -0.0211],
[ 0.5656, -0.0654]])
Each column of the coef matrix above shows the weights in the linear combination which obtains corresponding principal component:
>>> pd.DataFrame(coef, columns=['PC-1', 'PC-2'], index=df.columns)
PC-1 PC-2
sepal length (cm) 0.522 -0.372
sepal width (cm) -0.263 -0.926
petal length (cm) 0.581 -0.021
petal width (cm) 0.566 -0.065
[4 rows x 2 columns]
For example, above shows that the second principal component (PC-2) is mostly aligned with sepal width, which has the highest weight of 0.926 in absolute value;
Since the data were normalized, you can confirm that the principal components have variance 1.0 which is equivalent to each coefficient vector having norm 1.0:
>>> np.linalg.norm(coef,axis=0)
array([ 1., 1.])
One may also confirm that the principal components can be calculated as the dot product of the above coefficients and the original variables:
>>> np.allclose(df_norm.values.dot(coef), pca.fit_transform(df_norm.values))
True
Note that we need to use numpy.allclose instead of regular equality operator, because of floating point precision error.
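Since the data were already centered before fitting, the same weights can also be read straight off the fitted estimator; a small sketch:
>>> pd.DataFrame(pca.components_.T, columns=['PC-1', 'PC-2'], index=df.columns)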
|
Pass extra arguments to Serializer Class in Django Rest Framework
|
I want to pass some arguments to a DRF Serializer class from a Viewset; so far I have tried this:
class OneZeroSerializer(rest_serializer.ModelSerializer):
def __init__(self, *args, **kwargs):
print args # show values that passed
location = rest_serializer.SerializerMethodField('get_alternate_name')
def get_alternate_name(self, obj):
return ''
class Meta:
model = OneZero
fields = ('id', 'location')
Views
class OneZeroViewSet(viewsets.ModelViewSet):
serializer_class = OneZeroSerializer(realpart=1)
#serializer_class = OneZeroSerializer
queryset = OneZero.objects.all()
Basically I want to pass some value based on the querystring from views to the Serializer class, and then these will be allocated to fields.
These fields are not included in the Model; in fact they are dynamically created fields.
The same case is in this StackOverflow question, but I cannot understand the answer.
Can anyone help me in this case or suggest better options?
|
It's very easy with "context" arg for "ModelSerializer" constructor.
For example:
in view:
my_objects = MyModelSerializer(
input_collection,
many=True,
context={'user_id': request.user.id}
).data
in serializers:
class MyModelSerializer(serializers.ModelSerializer):
...
is_my_object = serializers.SerializerMethodField('_is_my_find')
...
def _is_my_find(self, obj):
user_id = self.context.get("user_id")
if user_id:
return user_id in obj.my_objects.values_list("user_id", flat=True)
return False
...
so you can use "self.context" for getting extra params.
Reference
|
Is there a Python equivalent for Scala's Option or Either?
|
I really enjoy using the Option and Either monads in Scala. Are there any equivalent for these things in Python? If there aren't, then what is the pythonic way of handling errors or "absence of value" without throwing exceptions?
|
Well, really, the pythonic way for a function to say "I am not defined at this point" is to raise an exception.
>>> int("blarg")
Traceback (most recent call last):
...
ValueError: invalid literal for int() with base 10: 'blarg'
>>> dict(foo=5)['bar']
Traceback (most recent call last):
...
KeyError: 'bar'
>>> 1 / 0
Traceback (most recent call last):
...
ZeroDivisionError: integer division or modulo by zero
This is, in part because there's no (generally useful) static type checker for python. A python function cannot syntactically state, at compile time, that it has a particular codomain; there's no way to force callers to match all of the cases in the function's return type.
If you prefer, you can write (unpythonically) a Maybe wrapper:
class Maybe(object):
def get_or_else(self, default):
return self.value if isinstance(self, Just) else default
class Just(Maybe):
def __init__(self, value):
self.value = value
class Nothing(Maybe):
pass
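Usage would then look something like:
>>> Just(5).get_or_else(0)
5
>>> Nothing().get_or_else(0)
0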
But I would not do this, unless you're trying to port something from scala to python without changing much
|
How come the Python's logging module doesn't follow PEP8 conventions?
|
This is just a curiosity for historical purposes:
I was wondering if someone knows why the very widely used (and core module) logging doesn't follow the Python's PEP-8 naming convention.
For instance, in
>>> import logging
>>> log = logging.getLogger("hello")
I would expect it to be get_logger, but it isn't.
When it comes to function names, the PEP8 standard says:
mixedCase is allowed only in contexts where that's already the
prevailing style (e.g. threading.py), to retain backwards
compatibility.
Was that the case? If so, with what other logging thingy it had to maintain backwards compatibility? Or was it just that the developers of logging felt like using camel-case naming?
Of course, the module is well documented and is not a big deal at all. I'm just curious.
|
The logging module was developed by a separate company in 2001, and was heavily based on Log4j. As such it follows the naming conventions the original author picked, which mirror the Log4j choices; the latter has a getLogger() method too.
Not until a year later did PEP 282 propose to add it to the standard library, by which time the naming convention was set in stone.
It is a known issue with the package, but it is not the only package to violate the PEP. From the linked Wiki:
PEP8 says - consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is most important.
So True, but can't not be changed, because of backward compatibility. logging2 maybe. -- techtonik
It's a low priority right now, unless there's an initiative to ensure the rest of the stdlib is made to conform to PEP8. -- VinaySajip
Last but not least, the styleguide itself has this to say on applying styleguides:
A Foolish Consistency is the Hobgoblin of Little Minds
A style guide is about consistency. Consistency with this style guide is important. Consistency within a project is more important. Consistency within one module or function is most important.
But most importantly: know when to be inconsistent -- sometimes the style guide just doesn't apply. When in doubt, use your best judgment. Look at other examples and decide what looks best. And don't hesitate to ask!
In particular: do not break backwards compatibility just to comply with this PEP!
'Fixing' logging would break backwards compatibility, which is just not worth it.
|
Pandas: SettingWithCopyWarning
|
I'd like to replace values in a Pandas DataFrame larger than an arbitrary number (100 in this case) with NaN (as values this large are indicative of a failed experiment). Previously I've used this to replace unwanted values:
sve2_all[sve2_all[' Hgtot ng/l'] > 100] = np.nan
However, I got the following error:
-c:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
C:\Users\AppData\Local\Enthought\Canopy32\User\lib\site-packages\pandas\core\indexing.py:346: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
self.obj[item] = s
From this StackExchange question, it seems that sometimes this warning can be ignored, but I can't follow the discussion well enough to be certain whether this applies to my situation. Is the warning basically letting me know that I'll be overwriting some of the values in my DataFrame?
Edit: As far as I can tell, everything behaved as it should. As a follow up is my method of replacing values non-standard? Is there a better way to replace values?
|
As suggested in the error message, you should use loc to do this:
sve2_all.loc[sve2_all[' Hgtot ng/l'] > 100] = np.nan
The warning is here to stop you modifying a copy (here sve2_all[sve2_all[' Hgtot ng/l'] > 100] is potentially a copy, and if it is then any modifications would not change the original frame). It could be that it works correctly in some cases but pandas cannot guarantee it will work in all cases... use at your own risk (consider yourself warned! ;) ).
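If the intent is to blank out only that measurement column rather than the whole row (an assumption about the data), you can give loc the column as well:
sve2_all.loc[sve2_all[' Hgtot ng/l'] > 100, ' Hgtot ng/l'] = np.nan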
|
Max retries exceeded with URL
|
I'm trying to get the content of this url "https://itunes.apple.com/in/genre/ios-business/id6000?mt=8" and its showing this error
Traceback (most recent call last):
File "/home/preetham/Desktop/eg.py", line 17, in <module>
page1 = requests.get(ap)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='itunes.apple.com', port=443): Max retries exceeded with url: /in/app/adobe-reader/id469337564?mt=8 (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
the code is
url="https://itunes.apple.com/in/genre/ios-business/id6000?mt=8"
page = requests.get(url)
tree = html.fromstring(page.text)
flist=[]
plist=[]
for i in range(0,100):
app = tree.xpath("//div[@class='column first']/ul/li/a/@href")
ap=app[0]
page1 = requests.get(ap)
When I try the range with (0,2) it works, but when I put the range in the 100s it shows this error.
|
What happened here is that the iTunes server refused your connection (you're sending too many requests from the same IP address in a short period of time).
Max retries exceeded with url: /in/app/adobe-reader/id469337564?mt=8
The error trace is misleading; it should be something like "No connection could be made because the target machine actively refused it".
There is an issue at about python.requests lib at Github, check it out here
To overcome this issue (not so much an issue as it is misleading debug trace) you should catch connection related exceptions like so:
try:
page1 = requests.get(ap)
except requests.exceptions.ConnectionError:
r.status_code = "Connection refused"
Another way to overcome this problem is to leave enough of a time gap between requests to the server; this can be achieved with the sleep(seconds) function in Python (don't forget to import sleep):
from time import sleep
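Combining both ideas, a rough sketch of a retry helper (the names and the 2-second pause are just illustrative):
from time import sleep
import requests

def get_with_retries(url, retries=3, pause=2):
    for _ in range(retries):
        try:
            return requests.get(url)
        except requests.exceptions.ConnectionError:
            sleep(pause)   # give the server a breather before the next attempt
    return None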
All in all, requests is an awesome Python lib; hope that solves your problem.
|
Why do ints require three times as much memory in Python?
|
On a 64-bit system an integer in Python takes 24 bytes. This is 3 times the memory that would be needed in e.g. C for a 64-bit integer. Now, I know this is because Python integers are objects. But what is the extra memory used for? I have my guesses, but it would be nice to know for sure.
|
Remember that the Python int type does not have a limited range like C int has; the only limit is the available memory.
Memory goes to storing the value, the current size of the integer storage (the storage size is variable to support arbitrary sizes), and the standard Python object bookkeeping (a reference to the relevant object and a reference count).
You can look up the longintrepr.h source (the Python 3 int type was traditionally known as the long type in Python 2); it makes effective use of the PyVarObject C type to track integer size:
struct _longobject {
PyObject_VAR_HEAD
digit ob_digit[1];
};
The ob_digit array stores 'digits' of either 15 or 30 bits wide (depending on your platform); so on my 64-bit OS X system, an integer up to (2 ^ 30) - 1 uses 1 'digit':
>>> sys.getsizeof((1 << 30) - 1)
28
but if you use 2 30-bit digits in the number an additional 4 bytes are needed, etc:
>>> sys.getsizeof(1 << 30)
32
>>> sys.getsizeof(1 << 60)
36
>>> sys.getsizeof(1 << 90)
40
The base 24 bytes then are the PyObject_VAR_HEAD structure, holding the object size, the reference count and the type pointer (each 8 bytes / 64 bits on my 64-bit OS X platform).
On Python 2, integers <= sys.maxint but >= -sys.maxint - 1 are stored using a simpler structure storing just the single value:
typedef struct {
PyObject_HEAD
long ob_ival;
} PyIntObject;
because this uses PyObject instead of PyVarObject there is no ob_size field in the struct and the memory size is limited to just 24 bytes; 8 for the long value, 8 for the reference count and 8 for the type object pointer.
|
Fastest way to calculate the centroid of a set of coordinate tuples in python without numpy
|
Good afternoon everyone!
I've been working on a project that is incredibly time sensitive (that unfortunately has to be in python) and one of the functions that is used extensively is a function that calculates the centroid of a list of (x, y) tuples. To illustrate:
def centroid(*points):
x_coords = [p[0] for p in points]
y_coords = [p[1] for p in points]
_len = len(points)
centroid_x = sum(x_coords)/_len
centroid_y = sum(y_coords)/_len
return [centroid_x, centroid_y]
where
>>> centroid((0, 0), (10, 0), (10, 10), (0, 10))
[5, 5]
This function runs fairly quickly, the above example completing in an average of 1.49e-05 seconds on my system but I'm looking for the fastest way to calculate the centroid. Do you guys have any ideas?
One of the other solutions I had was to do the following (where l is the list of tuples):
map(len(l).__rtruediv__, map(sum, zip(*l)))
Which runs in between 1.01e-05 and 9.6e-06 seconds, but unfortunately converting to a list (by surrounding the whole statement in list( ... )) nearly doubles computation time.
All suggestions are welcome! Thanks!
EDIT: Suggestions are welcome in pure python BUT NOT numpy.
EDIT2: Just found out that if a separate variable is kept for the length of the list of tuples, then my above implementation with map runs reliably under 9.2e-06 seconds, but there's still the problem of converting back to a list.
EDIT3:
Now I'm only accepting answers in pure python, NOT in numpy (sorry to those that already answered in numpy!)
|
import numpy as np
data = np.random.randint(0, 10, size=(100000, 2))
this here is fast
def centeroidnp(arr):
length = arr.shape[0]
sum_x = np.sum(arr[:, 0])
sum_y = np.sum(arr[:, 1])
return sum_x/length, sum_y/length
%timeit centeroidnp(data)
10000 loops, best of 3: 181 µs per loop
surprisingly, this is much slower:
%timeit data.mean(axis=0)
1000 loops, best of 3: 1.75 ms per loop
numpy seems very quick to me...
For completeness:
def centeroidpython(data):
x, y = zip(*data)
l = len(x)
return sum(x) / l, sum(y) / l
#take the data conversion out to be fair!
data = list(tuple(i) for i in data)
%timeit centeroidpython(data)
10 loops, best of 3: 57 ms per loop
|
How to test Python 3.4 asyncio code?
|
What's the best way to write unit tests for code using the Python 3.4 asyncio library? Assume I want to test a TCP client (SocketConnection):
import asyncio
import unittest
class TestSocketConnection(unittest.TestCase):
def setUp(self):
self.mock_server = MockServer("localhost", 1337)
self.socket_connection = SocketConnection("localhost", 1337)
@asyncio.coroutine
def test_sends_handshake_after_connect(self):
yield from self.socket_connection.connect()
self.assertTrue(self.mock_server.received_handshake())
When running this test case with the default test runner, the test will always succeed as the method executes only up until the first yield from instruction, after which it returns before executing any assertions. This causes tests to always succeed.
Is there a prebuilt test runner that is able to handle asynchronous code like this?
|
async_test, suggested by Marvin Killing, definitely can help -- as well as direct calling loop.run_until_complete()
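For reference, such a decorator is typically just a thin wrapper around run_until_complete; a sketch (the exact form suggested may differ):
def async_test(f):
    def wrapper(*args, **kwargs):
        coro = asyncio.coroutine(f)
        future = coro(*args, **kwargs)
        loop = asyncio.get_event_loop()
        loop.run_until_complete(future)
    return wrapper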
But I also strongly recommend recreating a new event loop for every test and directly passing the loop to API calls (at least asyncio itself accepts a loop keyword-only parameter for every call that needs it).
Like
class Test(unittest.TestCase):
def setUp(self):
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(None)
def test_xxx(self):
@asyncio.coroutine
def go():
reader, writer = yield from asyncio.open_connection(
'127.0.0.1', 8888, loop=self.loop)
yield from asyncio.sleep(0.01, loop=self.loop)
self.loop.run_until_complete(go())
That isolates tests in the test case and prevents strange errors, like a long-standing coroutine that was created in test_a but only finishes during test_b's execution.
|
What is the purpose of bare asterix in function arguments?
|
I've seen this SO question (this is not a duplicate): Python bare asterisk in function argument
In python-3.x you can add a bare * to the function arguments, this means that (quote from docs):
Parameters after "*" or "*identifier" are keyword-only parameters and
may only be passed used keyword arguments.
Ok, so, I've defined a function:
>>> def f(a, b, *, c=1, d=2, e=3):
... print('Hello, world!')
...
I can pass c, d and e variable values only by specifying keywords:
>>> f(1, 2, 10, 20, 30)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() takes 2 positional arguments but 5 were given
>>> f(1, 2, c=10, d=20, e=30)
Hello, world!
Questions are:
What is the motivation for this kind of restriction/syntax sugar?
What use cases does it cover?
Is it really used in third-party libraries that switched to python3?
Some "real-world" examples would help a lot. Thanks in advance.
|
PEP 3102 explains the rationale pretty clearly: the point is to allow functions to accept various "options" that are essentially orthogonal in nature. Specifying these positionally is awkward both on the defining and calling side, since they don't have any obvious "priority" that would translate into a positional order.
There are lots of example of functions that would benefit from this in various libraries. For instance, the call signature of pandas.read_csv is:
def parser_f(filepath_or_buffer,
sep=sep,
dialect=None,
compression=None,
doublequote=True,
escapechar=None,
quotechar='"',
quoting=csv.QUOTE_MINIMAL,
skipinitialspace=False,
lineterminator=None,
header='infer',
index_col=None,
names=None,
prefix=None,
skiprows=None,
skipfooter=None,
skip_footer=0,
na_values=None,
na_fvalues=None,
true_values=None,
false_values=None,
delimiter=None,
converters=None,
dtype=None,
usecols=None,
engine='c',
delim_whitespace=False,
as_recarray=False,
na_filter=True,
compact_ints=False,
use_unsigned=False,
low_memory=_c_parser_defaults['low_memory'],
buffer_lines=None,
warn_bad_lines=True,
error_bad_lines=True,
keep_default_na=True,
thousands=None,
comment=None,
decimal=b'.',
parse_dates=False,
keep_date_col=False,
dayfirst=False,
date_parser=None,
memory_map=False,
nrows=None,
iterator=False,
chunksize=None,
verbose=False,
encoding=None,
squeeze=False,
mangle_dupe_cols=True,
tupleize_cols=False,
infer_datetime_format=False):
Except for the filepath, most of these are orthogonal options that specify different aspects of how a CSV file is to be parsed. There's no particular reason why they would be passed in any particular order. You'd go nuts keeping track of any positional order for these. It makes more sense to pass them as keywords.
Now, you can see that pandas doesn't actually define them as keyword-only arguments, presumably to maintain compatibility with Python 2. I would imagine that many libraries have refrained from using the syntax for the same reason. I don't know offhand which libraries (if any) have started using it.
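A hypothetical sketch (the function name and options below are made up, not pandas' real signature) of what making such options keyword-only looks like in Python 3:
def read_options(filepath, *, sep=',', header='infer', skiprows=None, encoding=None):
    # Options after the bare * can only be supplied by name.
    print(sep, header, skiprows, encoding)

read_options('data.csv', sep=';', encoding='utf-8')   # OK
# read_options('data.csv', ';')  # TypeError: takes 1 positional argument but 2 were given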
|
Manage empty list/invalid input when finding max/min value of list (Python)
|
I'm finding max value and min value of a list by using max(list) and min(list) in Python. However, I wonder how to manage empty lists.
For example if the list is an empty list [], the program raises 'ValueError: min() arg is an empty sequence' but I would like to know how to make the program just print 'empty list or invalid input' instead of just crashing. How to manage those errors?
|
Specifying a default in earlier versions of Python (before the default keyword argument was added to max() and min() in 3.4):
max(list or [0])
max(list or ['empty list'])
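In Python 3.4+ there is also a default keyword argument, and catching the ValueError works in any version; a small sketch (values stands in for your list):
values = []

# Python 3.4+: max()/min() accept a default for empty iterables
print(max(values, default='empty list or invalid input'))

# any version: handle the ValueError explicitly
try:
    print(max(values))
except ValueError:
    print('empty list or invalid input')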
|
Open files in 'rt' and 'wt' modes
|
Several times here on SO I've seen people using rt and wt modes for reading and writing files.
For example:
with open('input.txt', 'rt') as input_file:
with open('output.txt', 'wt') as output_file:
...
I don't see the modes documented, but since open() doesn't throw an error - looks like it's pretty much legal to use.
What is it for and is there any difference between using wt vs w and rt vs r?
|
t refers to the text mode. There is no difference between r and rt or w and wt since text mode is the default.
Documented here:
Character Meaning
'r' open for reading (default)
'w' open for writing, truncating the file first
'x' open for exclusive creation, failing if the file already exists
'a' open for writing, appending to the end of the file if it exists
'b' binary mode
't' text mode (default)
'+' open a disk file for updating (reading and writing)
'U' universal newlines mode (deprecated)
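For example (assuming an existing input.txt), the following shows the equivalence:
# 'r' and 'rt' both open the file for reading in text mode
with open('input.txt', 'r') as f1, open('input.txt', 'rt') as f2:
    assert f1.read() == f2.read()

# binary mode must be requested explicitly and yields bytes, not str
with open('input.txt', 'rb') as f:
    data = f.read()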
|
Open files in "rock&roll" mode
|
I'm wondering what is going on with the file open() mode validation (Python2.7):
>>> with open('input.txt', 'illegal') as f:
...     for line in f:
...         print line
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: mode string must begin with one of 'r', 'w', 'a' or 'U', not 'illegal'
>>> with open('input.txt', 'rock&roll') as f:
...     for line in f:
...         print line
...
1
2
3
So, I cannot open the file in illegal mode, but I can open it in rock&roll mode. What mode is actually used for opening the file in this case?
Note that on Python 3 I can use neither illegal nor rock&roll:
>>> with open('input.txt', 'rock&roll') as f:
...     for line in f:
...         print(line)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid mode: 'rock&roll'
>>> with open('input.txt', 'illegal') as f:
...     for line in f:
...         print(line)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid mode: 'illegal'
And, confusingly, why is the behavior different on Python 3.x?
|
The Python 2.x open function essentially delegates its work to the C library fopen function. On my system, the documentation for fopen contains:
The argument mode points to a string beginning with one of the following sequences (Additional characters may follow these sequences.):
The leading 'r' satisfies that requirement; the remaining ock&roll is just "additional characters".
In Python 3, open() validates the mode string itself rather than handing it straight to the C library, so only valid mode strings are accepted.
|
Does the SVM in sklearn support incremental (online) learning?
|
I am currently in the process of designing a recommender system for text articles (a binary case of 'interesting' or 'not interesting'). One of my specifications is that it should continuously update to changing trends.
From what I can tell, the best way to do this is to make use of machine learning algorithm that supports incremental/online learning.
Algorithms like the Perceptron and Winnow support online learning but I am not completely certain about Support Vector Machines. Does the scikit-learn python library support online learning and if so, is a support vector machine one of the algorithms that can make use of it?
I am obviously not completely tied down to using support vector machines, but they are usually the go to algorithm for binary classification due to their all round performance. I would be willing to change to whatever fits best in the end.
|
While online algorithms for SVMs do exist, it has become important to specify if you want kernel or linear SVMs, as many efficient algorithms have been developed for the special case of linear SVMs.
For the linear case, if you use the SGD classifier in scikit-learn with the hinge loss and L2 regularization you will get an SVM that can be updated online/incrementally. You can combine this with feature transforms that approximate a kernel to get something similar to an online kernel SVM.
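A minimal sketch of that linear case (the toy data below stands in for your article feature vectors and interesting/not-interesting labels):
import numpy as np
from sklearn.linear_model import SGDClassifier

# hinge loss + L2 penalty gives a linear SVM that supports incremental updates
clf = SGDClassifier(loss='hinge', penalty='l2')

# toy data standing in for article features and binary labels
X_batch, y_batch = np.random.rand(20, 5), np.random.randint(0, 2, 20)

# the full set of classes must be declared on the first partial_fit call
clf.partial_fit(X_batch, y_batch, classes=[0, 1])

# later, as newly labelled articles arrive:
X_new, y_new = np.random.rand(10, 5), np.random.randint(0, 2, 10)
clf.partial_fit(X_new, y_new)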
One of my specifications is that it should continuously update to changing trends.
This is referred to as concept drift, and will not be handled well by a simple online SVM. Using the PassiveAggressive classifier will likely give you better results, as its learning rate does not decrease over time.
Assuming you get feedback while training / running, you can attempt to detect decreases in accuracy over time and begin training a new model when the accuracy starts to decrease (and switch to the new one when you believe that it has become more accurate). JSAT has 2 drift detection methods (see jsat.driftdetectors) that can be used to track accuracy and alert you when it has changed.
It also has more online linear and kernel methods.
(bias note: I'm the author of JSAT).
|
How to get all mappings between two lists?
|
We have two lists, A and B:
A = ['a','b','c']
B = [1, 2]
Is there a pythonic way to build the set of all maps from A to B, containing len(B)**len(A) elements (here 2**3 = 8)? That is:
[(a,1), (b,1), (c,1)]
[(a,1), (b,1), (c,2)]
[(a,1), (b,2), (c,1)]
[(a,1), (b,2), (c,2)]
[(a,2), (b,1), (c,1)]
[(a,2), (b,1), (c,2)]
[(a,2), (b,2), (c,1)]
[(a,2), (b,2), (c,2)]
Using itertools.product, it's possible to get all the tuples:
import itertools as it
P = it.product(A, B)
[p for p in P]
Which gives:
Out[3]: [('a', 1), ('a', 2), ('b', 1), ('b', 2), ('c', 1), ('c', 2)]
|
You can do this with itertools.product and zip
from itertools import product
print [zip(A, item) for item in product(B, repeat=len(A))]
Output
[[('a', 1), ('b', 1), ('c', 1)],
[('a', 1), ('b', 1), ('c', 2)],
[('a', 1), ('b', 2), ('c', 1)],
[('a', 1), ('b', 2), ('c', 2)],
[('a', 2), ('b', 1), ('c', 1)],
[('a', 2), ('b', 1), ('c', 2)],
[('a', 2), ('b', 2), ('c', 1)],
[('a', 2), ('b', 2), ('c', 2)]]
product(B, repeat=len(A)) produces
[(1, 1, 1),
(1, 1, 2),
(1, 2, 1),
(1, 2, 2),
(2, 1, 1),
(2, 1, 2),
(2, 2, 1),
(2, 2, 2)]
Then we pick each element from the product and zip it with A, to get your desired output.
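If you are on Python 3 (where zip returns an iterator and print is a function), a roughly equivalent sketch would be:
from itertools import product

A = ['a', 'b', 'c']
B = [1, 2]

# zip() is lazy in Python 3, so materialize each mapping with list()
mappings = [list(zip(A, combo)) for combo in product(B, repeat=len(A))]
print(mappings)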
|
Plot histogram with colors taken from colormap
|
I want to plot a simple 1D histogram where the bars should follow the color-coding of a given colormap.
Here's an MWE:
import numpy as n
import matplotlib.pyplot as plt
# Random gaussian data.
Ntotal = 1000
data = 0.05 * n.random.randn(Ntotal) + 0.5
# This is the colormap I'd like to use.
cm = plt.cm.get_cmap('RdYlBu_r')
# Plot histogram.
n, bins, patches = plt.hist(data, 25, normed=1, color='green')
plt.show()
which outputs this:
Instead of the color being green for the entire histogram, I'd like the columns to follow a color-coding given by the colormap defined in cm and the values of the bins. This would mean that bins closer to zero (not in height but in position) should look bluer and those closer to one redder, according to the chosen colormap RdYlBu_r.
Since plt.hist doesn't take a cmap argument I don't know how to tell it to use the colormap defined in cm.
|
The hist command returns a list of patches, so you can iterate over them and set their color like so:
import numpy as n
import matplotlib.pyplot as plt
# Random gaussian data.
Ntotal = 1000
data = 0.05 * n.random.randn(Ntotal) + 0.5
# This is the colormap I'd like to use.
cm = plt.cm.get_cmap('RdYlBu_r')
# Plot histogram.
n, bins, patches = plt.hist(data, 25, normed=1, color='green')
bin_centers = 0.5 * (bins[:-1] + bins[1:])
# scale values to interval [0,1]
col = bin_centers - min(bin_centers)
col /= max(col)
for c, p in zip(col, patches):
    plt.setp(p, 'facecolor', cm(c))
plt.show()
To get the colors, you need to call the colormap with a value between 0 and 1. Resulting figure:
|
Compiler problems with pip during numpy install under Windows 8.1, 7 Enterprise and 7 Home Editions
|
I am unable to install numpy via pip install numpy on my computer running Python 3.4 due to various errors I receive linked to compilation issues (This is only the case on a 64-bit installation of Python).
This is a problem that has been reported extensively and I had a related question about unable to find vcvarsall.bat. I have tested this on three different clean machines running Windows 8.1, 7 Enterprise and 7 Home Editions and it always comes up.
Installing Visual Studio 2010 Express C++ gets rid of the first error in the link - i.e. Unable to find vcvarsall.bat but throws out a next exception ending with a ValueError as here:
File "C:\Python34\lib\distutils\msvc9compiler.py", line 371, in initialize
vc_env = query_vcvarsall(VERSION, plat_spec)
File "C:\Python34\lib\distutils\msvc9compiler.py", line 287, in query_vcvarsall
raise ValueError(str(list(result.keys())))
ValueError: ['path']
I have then followed this advice and patched the file as linked in the discussion forum which resulted in a KEY_BASE error.
File "C:\Users\Matej\AppData\Local\Temp\pip_build_Matej\numpy\numpy\distutils\command\config.py", line 18, in <module>
from numpy.distutils.mingw32ccompiler import generate_manifest
File "C:\Users\Matej\AppData\Local\Temp\pip_build_Matej\numpy\numpy\distutils\mingw32ccompiler.py", line 36, in <module>
from distutils.msvccompiler import get_build_version as get_build_msvc_version
File "C:\Python34\lib\distutils\msvccompiler.py", line 638, in <module>
from distutils.msvc9compiler import MSVCCompiler
File "C:\Python34\lib\distutils\msvc9compiler.py", line 71, in <module>
r"v%sA"
File "C:\Python34\lib\distutils\msvc9compiler.py", line 67, in <listcomp>
WINSDK_PATH_KEYS = [KEY_BASE + "Microsoft SDKs\\Windows\\" + rest for rest in (
NameError: name 'KEY_BASE' is not defined
Following the advice in the same link, I have added the following definition of KEY_BASE before the variable gets called in msvc9compiler.py:
KEY_BASE = r"Software\Microsoft\\"
Which results in the final error I was not able to troubleshoot:
File "C:\Users\Matej\AppData\Local\Temp\pip_build_Matej\numpy\numpy\distutils\command\build_src.py", line 164, in build_sources
self.build_library_sources(*libname_info)
File "C:\Users\Matej\AppData\Local\Temp\pip_build_Matej\numpy\numpy\distutils\command\build_src.py", line 299, in build_library_sources
sources = self.generate_sources(sources, (lib_name, build_info))
File "C:\Users\Matej\AppData\Local\Temp\pip_build_Matej\numpy\numpy\distutils\command\build_src.py", line 386, in generate_sources
source = func(extension, build_dir)
File "numpy\core\setup.py", line 682, in get_mathlib_info
raise RuntimeError("Broken toolchain: cannot link a simple C program")
RuntimeError: Broken toolchain: cannot link a simple C program
I have tried the following but none of it resolved the Broken toolchain error:
This link that includes a further patch to msvc9compiler.py
This link by Peter Cock (This helps on the 32-bit install of Python3.4)
Tried installing using easy_install as some users suggested it might work that way
NOTE 1: I am aware of the workaround using the compiled binaries on this website. I am specifically looking for a solution using distutils, if there is one.
NOTE 2: The error logs are obviously larger and I cut them for readability.
|
I was able to reproduce all these errors in Windows 7 Professional (64 bit).
Your final issue (Broken toolchain) is caused by further manifest-related problems. I was able to work around this by changing the following line (in msvc9compiler.py):
mfinfo = self.manifest_get_embed_info(target_desc, ld_args)
to
mfinfo = None
thus bypassing the if statement which immediately follows. After this change numpy successfully compiled for me.
|
Django Rest Framework - How to test ViewSet?
|
I'm having trouble testing a ViewSet:
class ViewSetTest(TestCase):
    def test_view_set(self):
        factory = APIRequestFactory()
        view = CatViewSet.as_view()
        cat = Cat(name="bob")
        cat.save()
        request = factory.get(reverse('cat-detail', args=(cat.pk,)))
        response = view(request)
I'm trying to replicate the syntax here:
http://www.django-rest-framework.org/api-guide/testing#forcing-authentication
But I think their AccountDetail view is different from my ViewSet, so I'm getting this error from the last line:
AttributeError: 'NoneType' object has no attribute 'items'
Is there a correct syntax here or am I mixing up concepts? My APIClient tests work, but I'm using the factory here because I would eventually like to add "request.user = some_user". Thanks in advance!
Oh and the client test works fine:
def test_client_view(self):
    response = APIClient().get(reverse('cat-detail', args=(cat.pk,)))
    self.assertEqual(response.status_code, 200)
|
I think I found the correct syntax, but not sure if it is conventional (still new to Django):
def test_view_set(self):
    request = APIRequestFactory().get("")
    cat_detail = CatViewSet.as_view({'get': 'retrieve'})
    cat = Cat.objects.create(name="bob")
    response = cat_detail(request, pk=cat.pk)
    self.assertEqual(response.status_code, 200)
So now this passes and I can assign request.user, which allows me to customize the retrieve method under CatViewSet to consider the user.
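For the authenticated case, DRF's force_authenticate helper works with factory requests; a sketch (the username, test method name, and default User model are assumptions):
from django.contrib.auth.models import User
from rest_framework.test import APIRequestFactory, force_authenticate

def test_view_set_as_user(self):
    user = User.objects.create_user('alice', password='secret')
    request = APIRequestFactory().get("")
    force_authenticate(request, user=user)   # attaches the user to the request
    cat = Cat.objects.create(name="bob")
    view = CatViewSet.as_view({'get': 'retrieve'})
    response = view(request, pk=cat.pk)
    self.assertEqual(response.status_code, 200)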
|
How to "test" NoneType in python?
|
I have a method that sometimes returns a NoneType value. So how can I question a variable that is a NoneType? I need to use an if statement, for example:
if not new:
    new = '#'
I know that is the wrong way and I hope you understand what I meant.
|
So how can I question a variable that is a NoneType?
Use the is operator, like this:
if variable is None:
Why does this work?
Since None is the sole singleton instance of NoneType in Python, we can use the is operator to check whether a variable is None.
Quoting from the is documentation,
The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. x is not y yields the inverse truth value.
Since there can be only one instance of None, is is the preferred way to check for None.
Hear it from the horse's mouth
Quoting Python's Coding Style Guidelines - PEP 8 (co-authored by Guido himself),
Comparisons to singletons like None should always be done with is or is not, never the equality operators.
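Applied to your snippet (some_method below is a placeholder for the method that sometimes returns None):
new = some_method()   # placeholder call; it may return None

if new is None:
    new = '#'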
|
How to scrape a website which requires login using python and beautifulsoup?
|
If I want to scrape a website that requires login with password first, how can I start scraping it with python using beautifulsoup4 library? Below is what I do for websites that do not require login.
from bs4 import BeautifulSoup
import urllib2
url = urllib2.urlopen("http://www.python.org")
content = url.read()
soup = BeautifulSoup(content)
How should the code be changed to accommodate login? Assume that the website I want to scrape is a forum that requires login. An example is http://forum.arduino.cc/index.php
|
You can use mechanize:
import mechanize
from bs4 import BeautifulSoup
import urllib2
import cookielib
cj = cookielib.CookieJar()
br = mechanize.Browser()
br.set_cookiejar(cj)
br.open("https://id.arduino.cc/auth/login/")
br.select_form(nr=0)
br.form['username'] = 'username'
br.form['password'] = 'password.'
br.submit()
print br.response().read()
Or urllib - Login to website using urllib2
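If you prefer something that also works on Python 3, a rough sketch with the requests library (the form field names and login endpoint are assumptions you would need to verify against the real login form):
import requests
from bs4 import BeautifulSoup

session = requests.Session()

# The field names ('username', 'password') and the login URL are assumptions;
# inspect the actual login form to find the correct ones.
payload = {'username': 'your_user', 'password': 'your_password'}
session.post('https://id.arduino.cc/auth/login/', data=payload)

# The session keeps the login cookies, so later requests are made as the logged-in user.
page = session.get('http://forum.arduino.cc/index.php')
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.title)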
|