Docker Compose vs. Dockerfile - which is better?
|
I have been reading up and learning about Docker, and am trying to correctly choose the Django setup to use. So far there is either:
Docker Compose or Dockerfile
I understand that Dockerfiles are used in Docker Compose, but I am not sure if it is good practice to put everything in one large Dockerfile with multiple FROM commands for the different images?
I want to use several different images that include:
uwsgi
nginx
postgres
redis
rabbitmq
celery with cron
Please advise on what is best practices in setting up this type of environment using Docker.
If it helps, I am on a Mac, so using boot2docker.
Some Issues I've had:
Docker Compose is not compatible with Python3
I want to containerize my project, so if one large Dockerfile
is not ideal, then I feel I'd need to break it up using Docker Compose
I am ok to make the project Py2 & Py3 compatible, so am leaning towards docker-compose
|
The answer is neither.
Docker Compose (herein referred to as compose) will use the Dockerfile if you add the build command to your project's docker-compose.yml.
Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use compose to assemble the images using the build command.
You can specify the path to your individual Dockerfiles using build /path/to/dockerfiles/blah where /path/to/dockerfiles/blah is where blah's Dockerfile lives.
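For illustration, here is a minimal sketch of what such a docker-compose.yml might look like (service names and paths are hypothetical, using the Compose v1 syntax current at the time):
web:
  build: ./web
  links:
    - postgres
    - redis
  ports:
    - "8000:8000"
postgres:
  image: postgres
redis:
  image: redis
Each build entry points at a directory containing that service's Dockerfile, while stock services such as postgres and redis can simply pull an existing image.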
|
How can I tell if a string repeats itself in Python?
|
I'm looking for a way to test whether or not a given string repeats itself for the entire string.
Examples:
[
'0045662100456621004566210045662100456621', # '00456621'
'0072992700729927007299270072992700729927', # '00729927'
'001443001443001443001443001443001443001443', # '001443'
'037037037037037037037037037037037037037037037', # '037'
'047619047619047619047619047619047619047619', # '047619'
'002457002457002457002457002457002457002457', # '002457'
'001221001221001221001221001221001221001221', # '001221'
'001230012300123001230012300123001230012300123', # '00123'
'0013947001394700139470013947001394700139470013947', # '0013947'
'001001001001001001001001001001001001001001001001001', # '001'
'001406469760900140646976090014064697609', # '0014064697609'
]
are strings which repeat themselves, and
[
'004608294930875576036866359447',
'00469483568075117370892018779342723',
'004739336492890995260663507109',
'001508295625942684766214177978883861236802413273',
'007518796992481203',
'0071942446043165467625899280575539568345323741',
'0434782608695652173913',
'0344827586206896551724137931',
'002481389578163771712158808933',
'002932551319648093841642228739',
'0035587188612099644128113879',
'003484320557491289198606271777',
'00115074798619102416570771',
]
are examples of ones that do not.
The repeating sections of the strings I'm given can be quite long, and the strings themselves can be 500 or more characters, so looping through each character trying to build a pattern then checking the pattern vs the rest of the string seems awful slow. Multiply that by potentially hundreds of strings and I can't see any intuitive solution.
I've looked into regexes a bit and they seem good for when you know what you're looking for, or at least the length of the pattern you're looking for. Unfortunately, I know neither.
How can I tell if a string is repeating itself and if it is, what the shortest repeating subsequence is?
|
Here's a concise solution which avoids regular expressions and slow in-Python loops:
def principal_period(s):
    i = (s+s).find(s, 1, -1)
    return None if i == -1 else s[:i]
See the Community Wiki answer started by @davidism for benchmark results. In summary,
David Zhang's solution is the clear winner, outperforming all others by at least 5x for the large example set.
(That answer's words, not mine.)
This is based on the observation that a string is periodic if and only if it is equal to a nontrivial rotation of itself. Kudos to @AleksiTorhamo for realizing that we can then recover the principal period from the index of the first occurrence of s in (s+s)[1:-1], and for informing me of the optional start and end arguments of Python's string.find.
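For example, a quick check against two of the strings from the question:
>>> principal_period('0045662100456621004566210045662100456621')
'00456621'
>>> principal_period('004608294930875576036866359447') is None
True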
|
How to get all values from python enum class?
|
I'm using Enum4 library to create an enum class as follows:
class Color(Enum):
    RED = 1
    BLUE = 2
I want to print [1, 2] as a list somewhere. How can I achieve this?
|
You can do the following:
[e.value for e in Color]
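Putting it together as a runnable sketch (assuming the enum34 backport on Python 2, or the built-in enum module on Python 3.4+):
from enum import Enum

class Color(Enum):
    RED = 1
    BLUE = 2

print([e.value for e in Color])  # [1, 2]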
|
Why do new style class and old style class have different behavior in this case?
|
I found something interesting, here is a snippet of code:
class A(object):
    def __init__(self):
        print "A init"
    def __del__(self):
        print "A del"

class B(object):
    a = A()
If I run this code, I will get:
A init
But if I change class B(object) to class B(), I will get:
A init
A del
I found a note in the __del__ doc:
It is not guaranteed that __del__() methods are called for objects
that still exist when the interpreter exits.
Then, I guess it's because B.a is still referenced (by class B) when the interpreter exits.
So, I manually added a del B before the interpreter exits, and then I found that a.__del__() was called.
Now, I am a little confused about that. Why is a.__del__() called when using old style class? Why do new and old style classes have different behavior?
I found a similar question here, but I think the answers are not clear enough.
|
TL;DR: this is an old issue in CPython, that was finally fixed in CPython 3.4. Objects kept live by reference cycles that are referred to by module globals are not properly finalized on interpreter exit in CPython versions prior to 3.4. New-style classes have implicit cycles in their type instances; old-style classes (of type classobj) do not have implicit reference cycles.
Even though fixed in this case, the CPython 3.4 documentation still recommends to not depend on __del__ being called on interpreter exit - consider yourself warned.
New style classes have reference cycles in themselves: most notably
>>> class A(object):
... pass
>>> A.__mro__[0] is A
True
This means that they cannot be deleted instantly, but only when the garbage collector runs. Since a reference to them is held by the main module, they stay in memory until interpreter shutdown. At the end, during module clean-up, all the global names in the main module are set to None; any object whose reference count thereby drops to zero (your old-style class, for example) is deleted as well. However, the new-style classes, having reference cycles, are not released/finalized by this.
The cyclic garbage collector is not run at interpreter exit, which is allowed by the CPython documentation:
It is not guaranteed that __del__() methods are called for objects that still exist when the interpreter exits.
Now, old-style classes in Python 2 do not have implicit cycles. When the CPython module cleanup/shutdown code sets the global variables to None, the only remaining reference to class B is dropped; then B is deleted, and the last reference to a is dropped, and a too is finalized.
To demonstrate the fact that the new-style classes have cycles and require a GC sweep, whereas the old-style classes do not, you can try the following program in CPython 2 (CPython 3 does not have old-style classes any more):
import gc

class A(object):
    def __init__(self):
        print("A init")
    def __del__(self):
        print("A del")

class B(object):
    a = A()

del B
print("About to execute gc.collect()")
gc.collect()
With B as new-style class as above, the output is
A init
About to execute gc.collect()
A del
With B as old-style class (class B:), the output is
A init
A del
About to execute gc.collect()
That is, the new-style class was deleted only after gc.collect() even though the last outside reference to it was dropped already; but the old-style class was deleted instantly.
Much of this was fixed in Python 3.4, thanks to PEP 442, which changed the module shutdown procedure to use the garbage collector. Now, even on interpreter exit, the module globals are finalized using ordinary garbage collection. If you run your program under Python 3.4, it will print
A init
A del
Whereas with Python <=3.3 it will print
A init
(Do note that other implementations might or might not execute __del__ at this point, regardless of whether their version is above, at, or below 3.4.)
|
Celery & RabbitMQ running as docker containers: Received unregistered task of type '...'
|
I am relatively new to docker, celery and rabbitMQ.
In our project we currently have the following setup:
1 physical host with multiple docker containers running:
1x rabbitmq:3-management container
# pull image from docker hub and install
docker pull rabbitmq:3-management
# run docker image
docker run -d -e RABBITMQ_NODENAME=my-rabbit --name some-rabbit -p 8080:15672 -p 5672:5672 rabbitmq:3-management
1x celery container
# pull docker image from docker hub
docker pull celery
# run celery container
docker run --link some-rabbit:rabbit --name some-celery -d celery
(there are some more containers, but they should not have to do anything with the problem)
Task File
To get to know celery and rabbitmq a bit, I created a tasks.py file on the physical host:
from celery import Celery

app = Celery('tasks', backend='amqp', broker='amqp://guest:guest@172.17.0.81/')

@app.task(name='tasks.add')
def add(x, y):
    return x + y
The whole setup seems to be working quite fine actually. So when I open a python shell in the directory where tasks.py is located and run
>>> from tasks import add
>>> add.delay(4,4)
The task gets queued and directly pulled from the celery worker.
However, the celery worker does not know the tasks module, according to the logs:
$ docker logs some-celery
[2015-04-08 11:25:24,669: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'callbacks': None, 'timelimit': (None, None), 'retries': 0, 'id': '2b5dc209-3c41-4a8d-8efe-ed450d537e56', 'args': (4, 4), 'eta': None, 'utc': True, 'taskset': None, 'task': 'tasks.add', 'errbacks': None, 'kwargs': {}, 'chord': None, 'expires': None} (256b)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/celery/worker/consumer.py", line 455, in on_task_received
strategies[name](message, body,
KeyError: 'tasks.add'
So the problem obviously seems to be, that the celery workers in the celery container do not know the tasks module.
Now as I am not a docker specialist, I wanted to ask how I would best import the tasks module into the celery container?
Any help is appreciated :)
EDIT 4/8/2015, 21:05:
Thanks to Isowen for the answer. Just for completeness here is what I did:
Let's assume my tasks.py is located on my local machine in /home/platzhersh/celerystuff. Now I created a celeryconfig.py in the same directory with the following content:
CELERY_IMPORTS = ('tasks')
CELERY_IGNORE_RESULT = False
CELERY_RESULT_BACKEND = 'amqp'
As mentioned by Isowen, celery searches /home/user of the container for tasks and config files. So we mount the /home/platzhersh/celerystuff into the container when starting:
docker run -v /home/platzhersh/celerystuff:/home/user --link some-rabbit:rabbit --name some-celery -d celery
This did the trick for me. Hope this helps some other people with similar problems.
I'll now try to expand that solution by putting the tasks also in a separate docker container.
|
As you suspect, the issue is because the celery worker does not know the tasks module. There are two things you need to do:
Get your tasks definitions "into" the docker container.
Configure the celery worker to load those task definitions.
For Item (1), the easiest way is probably to use a "Docker Volume" to mount a host directory of your code onto the celery docker instance. Something like:
docker run --link some-rabbit:rabbit -v /path/to/host/code:/home/user --name some-celery -d celery
Where /path/to/host/code is your host path, and /home/user is the path to mount it on the instance. Why /home/user in this case? Because the Dockerfile for the celery image defines the working directory (WORKDIR) as /home/user.
(Note: Another way to accomplish Item (1) would be to build a custom docker image with the code "built in", but I will leave that as an exercise for the reader.)
For Item (2), you need to create a celery configuration file that imports the tasks file. This is a more general issue, so I will point to a previous stackoverflow answer: Celery Received unregistered task of type (run example)
|
Add column to dataframe with default value
|
I have an existing dataframe which I need to add an additional column to which will contain the same value for every row.
Existing df:
Date, Open, High, Low, Close
01-01-2015, 565, 600, 400, 450
New df:
Name, Date, Open, High, Low, Close
abc, 01-01-2015, 565, 600, 400, 450
I know how to append an existing series / dataframe column. But this is a different situation, because all I need is to add the 'Name' column and set every row to the same value, in this case 'abc'.
Im not entirely sure how to do that.
|
df['Name']='abc' will add the new column and set all rows to that value:
In [79]:
df
Out[79]:
Date, Open, High, Low, Close
0 01-01-2015, 565, 600, 400, 450
In [80]:
df['Name'] = 'abc'
df
Out[80]:
Date, Open, High, Low, Close Name
0 01-01-2015, 565, 600, 400, 450 abc
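If you would rather not mutate the original DataFrame, a small sketch using DataFrame.assign (available from pandas 0.16 onwards) returns a copy with the constant column added:
df = df.assign(Name='abc')  # new DataFrame with 'Name' set to 'abc' in every row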
|
Produce a RA vs DEC equatorial coordinates plot with python
|
I'm trying to generate an equatorial coordinates plot that should look more or less like this one:
(The figure is taken from this article, and it shows the position of the Large and Small MCs in equatorial coordinates)
Important things to notice about this plot:
The theta axis (ie: the right ascension) is in h:m:s (hours, minutes, seconds) as it is accustomed in astronomy, rather than in degrees as the default polar option does in matplotlib.
The r axis (ie: the declination) increases outward from -90º and the grid is centered in (0h, -90º).
The plot is clipped, meaning only a portion of it shows as opposed to the entire circle (as matplotlib does by default).
Using the polar=True option in matplotlib, the closest plot I've managed to produce is this (MWE below, data file here; some points are not present compared to the image above since the data file is a bit smaller):
I also need to add a third column of data to the plot, which is why I add a colorbar and color each point accordingly to a z array:
So what I mostly need right now is a way to clip the plot. Based mostly on this question and this example @cphlewis came quite close with his answer, but several things are still missing (mentioned in his answer).
Any help and/or pointers with this issue will be greatly appreciated.
MWE
(Notice I use gridspec to position the subplot because I need to generate several of these in the same output image file)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
def skip_comments(f):
    '''
    Read lines that DO NOT start with a # symbol.
    '''
    for line in f:
        if not line.strip().startswith('#'):
            yield line

def get_data_bb():
    '''RA, DEC data file.
    '''
    # Path to data file.
    out_file = 'bb_cat.dat'
    # Read data file
    with open(out_file) as f:
        ra, dec = [], []
        for line in skip_comments(f):
            ra.append(float(line.split()[0]))
            dec.append(float(line.split()[1]))
    return ra, dec
# Read RA, DEC data from file.
ra, dec = get_data_bb()
# Convert RA from decimal degrees to radians.
ra = [x / 180.0 * 3.141593 for x in ra]
# Make plot.
fig = plt.figure(figsize=(20, 20))
gs = gridspec.GridSpec(4, 2)
# Position plot in figure using gridspec.
ax = plt.subplot(gs[0], polar=True)
ax.set_ylim(-90, -55)
# Set x,y ticks
angs = np.array([330., 345., 0., 15., 30., 45., 60., 75., 90., 105., 120.])
plt.xticks(angs * np.pi / 180., fontsize=8)
plt.yticks(np.arange(-80, -59, 10), fontsize=8)
ax.set_rlabel_position(120)
ax.set_xticklabels(['$22^h$', '$23^h$', '$0^h$', '$1^h$', '$2^h$', '$3^h$',
'$4^h$', '$5^h$', '$6^h$', '$7^h$', '$8^h$'], fontsize=10)
ax.set_yticklabels(['$-80^{\circ}$', '$-70^{\circ}$', '$-60^{\circ}$'],
fontsize=10)
# Plot points.
ax.scatter(ra, dec, marker='o', c='k', s=1, lw=0.)
# Use this block to generate colored points with a colorbar.
#cm = plt.cm.get_cmap('RdYlBu_r')
#z = np.random.random((len(ra), 1)) # RGB values
#SC = ax.scatter(ra, dec, marker='o', c=z, s=10, lw=0., cmap=cm)
# Colorbar
#cbar = plt.colorbar(SC, shrink=1., pad=0.05)
#cbar.ax.tick_params(labelsize=8)
#cbar.set_label('colorbar', fontsize=8)
# Output png file.
fig.tight_layout()
plt.savefig('ra_dec_plot.png', dpi=300)
|
Chewing on the AxisArtist example is actually pretty promising (this combines two AxisArtist examples -- I wouldn't be surprised if AxisArtist was written with RA plots in mind):
Still to do:
Declination should run from -90 at the origin to 0
Be able to use
and add a colorbar
adjust limits if plotting outside them
aesthetic:
Serif font in axis labels
Dashed gridlines for ascension
anything else?
"""
An experimental support for curvilinear grid.
"""
import numpy as np
import mpl_toolkits.axisartist.angle_helper as angle_helper
import matplotlib.cm as cmap
from matplotlib.projections import PolarAxes
from matplotlib.transforms import Affine2D
from mpl_toolkits.axisartist import SubplotHost
from mpl_toolkits.axisartist import GridHelperCurveLinear
def curvelinear_test2(fig):
    """
    polar projection, but in a rectangular box.
    """
    global ax1
    # see demo_curvelinear_grid.py for details
    tr = Affine2D().scale(np.pi/180., 1.) + PolarAxes.PolarTransform()
    extreme_finder = angle_helper.ExtremeFinderCycle(10, 60,
                                                     lon_cycle=360,
                                                     lat_cycle=None,
                                                     lon_minmax=None,
                                                     lat_minmax=(0, np.inf),
                                                     )
    grid_locator1 = angle_helper.LocatorHMS(12)  # changes theta gridline count
    tick_formatter1 = angle_helper.FormatterHMS()
    grid_locator2 = angle_helper.LocatorDMS(6)
    tick_formatter2 = angle_helper.FormatterDMS()
    grid_helper = GridHelperCurveLinear(tr,
                                        extreme_finder=extreme_finder,
                                        grid_locator1=grid_locator1,
                                        tick_formatter1=tick_formatter1,
                                        grid_locator2=grid_locator2,
                                        tick_formatter2=tick_formatter2
                                        )
    ax1 = SubplotHost(fig, 1, 1, 1, grid_helper=grid_helper)
    # make ticklabels of right and top axis visible.
    ax1.axis["right"].major_ticklabels.set_visible(True)
    ax1.axis["top"].major_ticklabels.set_visible(True)
    ax1.axis["bottom"].major_ticklabels.set_visible(True)  # Turn off?
    # let right and bottom axis show ticklabels for 1st coordinate (angle)
    ax1.axis["right"].get_helper().nth_coord_ticks = 0
    ax1.axis["bottom"].get_helper().nth_coord_ticks = 0
    fig.add_subplot(ax1)
    grid_helper = ax1.get_grid_helper()
    ax1.set_aspect(1.)
    ax1.set_xlim(-4, 15)  # moves the origin left-right in ax1
    ax1.set_ylim(-3, 20)  # moves the origin up-down
    ax1.set_ylabel('90$^\circ$ + Declination')
    ax1.set_xlabel('Ascension')
    ax1.grid(True)
    #ax1.grid(linestyle='--', which='x') # either keyword applies to both
    #ax1.grid(linestyle=':', which='y') # sets of gridlines
    return tr
import matplotlib.pyplot as plt
fig = plt.figure(1, figsize=(5, 5))
fig.clf()
tr = curvelinear_test2(fig) # tr.transform_point((x, 0)) is always (0,0)
# => (theta, r) in but (r, theta) out...
r_test = [0, 1.2, 2.8, 3.8, 5, 8, 10, 13.3, 17] # distance from origin
deg_test = [0, -7, 12, 28, 45, 70, 79, 90, 100] # degrees ascension
out_test = tr.transform(zip(deg_test, r_test))
sizes = [40, 30, 10, 30, 80, 33, 12, 48, 45]
#hues = [.9, .3, .2, .8, .6, .1, .4, .5,.7] # Oddly, floats-to-colormap worked for a while.
hues = np.random.random((9,3)) #RGB values
ax1.scatter(out_test[:,0],  # ax1 is a global
            out_test[:,1],
            s=sizes,
            c=hues,
            #cmap=cmap.RdYlBu_r,
            zorder=9)  # on top of gridlines
plt.show()
|
SQLAlchemy ORM conversion to pandas DataFrame
|
This topic hasn't been addressed in a while, here or elsewhere. Is there a solution for converting a SQLAlchemy <Query object> to a pandas DataFrame?
Pandas has the capability to use pandas.read_sql, but this requires use of raw SQL. I have two reasons for wanting to avoid it: 1) I already have everything using the ORM (a good reason in and of itself) and 2) I'm using python lists as part of the query (e.g. db.session.query(Item).filter(Item.symbol.in_(add_symbols)) where Item is my model class and add_symbols is a list). This is the equivalent of SQL SELECT ... from ... WHERE ... IN.
Is anything possible?
|
Below should work in most cases:
df = pd.read_sql(query.statement, query.session.bind)
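For example, here is a self-contained sketch with an in-memory SQLite database and a hypothetical Item model standing in for the one mentioned in the question:
import pandas as pd
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Item(Base):
    # hypothetical model, analogous to the question's Item class
    __tablename__ = 'items'
    id = Column(Integer, primary_key=True)
    symbol = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

add_symbols = ['AAPL', 'MSFT']  # hypothetical list, as in the question
query = session.query(Item).filter(Item.symbol.in_(add_symbols))
df = pd.read_sql(query.statement, query.session.bind)
The ORM query stays exactly as you would normally write it; only the final line hands its compiled statement and bind to pandas.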
|
Python pandas: check if any value is NaN in DataFrame
|
In python pandas, what's the best way to check whether a DataFrame has one (or more) NaN values?
I know about the function pd.isnan, but this returns a DataFrame of booleans for each element. This post right here doesn't exactly answer my question either.
|
jwilner's response is spot on. I was exploring to see if there's a faster option, since in my experience, summing flat arrays is (strangely) faster than counting. This code seems faster:
df.isnull().values.any()
For example:
In [2]: df = pd.DataFrame(np.random.randn(1000,1000))
In [3]: df[df > 0.9] = pd.np.nan
In [4]: %timeit df.isnull().any().any()
100 loops, best of 3: 14.7 ms per loop
In [5]: %timeit df.isnull().values.sum()
100 loops, best of 3: 2.15 ms per loop
In [6]: %timeit df.isnull().sum().sum()
100 loops, best of 3: 18 ms per loop
In [7]: %timeit df.isnull().values.any()
1000 loops, best of 3: 948 µs per loop
df.isnull().sum().sum() is a bit slower, but of course, has additional information -- the number of NaNs.
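If you also want to know which columns contain the NaNs, dropping .values gives a per-column breakdown instead of a single boolean, for example:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan], 'b': [3.0, 4.0]})
print(df.isnull().values.any())  # True: there is at least one NaN somewhere
print(df.isnull().any())         # per-column Series: a -> True, b -> False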
|
What's the best way to present a flask interface to an ongoing background task?
|
I have a long running process that continuously reads from a telnet port and may occasionally write to it. Sometimes I want to send an HTTP request to it to fetch the info its read since the last time I asked. Sometimes I may send an HTTP request to write certain data to another telnet port.
Should I do this with 2 threads, and if so, should I use a mutex or an instruction queue? How do you do threading with flask anyway? Should I use multiprocessing? Something else?
The reason I ask this is that I had a similar problem before (but with serial ports instead of telnet ports, and directly in the app instead of a local/remote HTTP service) and ended up with the non-data-reading thread somehow almost never running, even when I inserted tons of sleep calls. I ended up re-writing it from mutex to queues and then to using multiprocessing with queues.
Edit: The telnet ports are connections to an application which communicates (mainly reads debug data) with hardware (a printer). The flask HTTP service I want to write would be accessed by tests running against the printer (either on the same machine or a different machine than the HTTP service); none of this involves a web browser!
|
These kinds of long-polling jobs are best achieved using sockets; they don't really fit the Flask/WSGI model, which is not geared toward asynchronous operations. You may want to look at twisted or tornado.
That said, your back-end process that reads/writes to telnet could run in a separate thread that may or may not be initiated from an HTTP request. Once you kick off a thread from the flask app, it won't block the response.
You can then read from the data store it writes to by occasionally polling the Flask app for new data. This could be achieved client-side in a browser using javascript and timeouts, but it's a bit hacky.
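As a rough illustration of the thread approach, here is a minimal sketch; the telnet reader is replaced by a stand-in function and all names are hypothetical:
import threading
import time
from collections import deque

from flask import Flask, jsonify

app = Flask(__name__)
collected = deque()  # shared store the reader thread appends to

def read_from_telnet():
    # stand-in for the real blocking telnet read
    time.sleep(1)
    return 'debug line'

def reader_loop():
    while True:
        collected.append(read_from_telnet())

worker = threading.Thread(target=reader_loop)
worker.daemon = True  # don't keep the process alive on shutdown
worker.start()

@app.route('/data')
def data():
    # hand back everything gathered since the last request
    items = [collected.popleft() for _ in range(len(collected))]
    return jsonify(items=items)
Since deque.append and deque.popleft are atomic, this particular pattern gets away without a mutex; anything more involved is usually easier to reason about with a Queue or a separate process, as mentioned in the question.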
|
Python 2 - How would you round up/down to the nearest 6 minutes?
|
There are numerous examples of people rounding to the nearest ten minutes but I can't figure out the logic behind rounding to the nearest six. I thought it would be a matter of switching a few numbers around but I can't get it to work.
The code I'm working with is located at my Github. The block I've got that isn't even close to working (won't give any output) is:
def companyTimer():
    if minutes % 6 > .5:
        companyMinutes = minutes + 1
    elif minutes % 6 < 5:
        companyMinutes = minutes - 1
    else:
        companyMinutes = minutes
    print companyMinutes
Looking at it now, I see that my logic is incorrect - even if it were working, the add and subtract 1 minute portion of the code doesn't make sense.
Anyway, I have no idea how to remedy this - could someone point me in the right direction, please?
PS - this is something I'm making for personal use at work.. not asking for help with my job but this will help me keep track of my hours at work. Don't want there to be any issues with that.
Thanks!
|
Here's a general function to round to nearest x:
def round_to_nearest(num, base):
    n = num + (base//2)
    return n - (n % base)
[round_to_nearest(i, 6) for i in range(20)]
# [0, 0, 0, 6, 6, 6, 6, 6, 6, 12, 12, 12, 12, 12, 12, 18, 18, 18, 18, 18]
Explanation:
n % base is the remainder left over when dividing n by base. Also known as the modulo operator.
Simply subtracting num%6 from num would give you 0 for 0-5, 6 for 6-11, and so on.
Since we want to "round" instead of "floor", we can bias this result by adding half of the base (base//2) beforehand.
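Applied to the actual goal of rounding a clock time to the nearest 6 minutes, here is a small sketch that reuses round_to_nearest from above on a datetime value:
from datetime import datetime, timedelta

def round_time_to_nearest(dt, minutes=6):
    # work in seconds past the hour, round, then rebuild the datetime
    block = minutes * 60
    seconds = dt.minute * 60 + dt.second
    rounded = round_to_nearest(seconds, block)
    return dt.replace(minute=0, second=0, microsecond=0) + timedelta(seconds=rounded)

print(round_time_to_nearest(datetime(2015, 4, 8, 14, 33)))  # 2015-04-08 14:36:00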
|
Algorithm to group sets of points together that follow a direction
|
Note: I am placing this question in both the MATLAB and Python tags as I am the most proficient in these languages. However, I welcome solutions in any language.
Question Preamble
I have taken an image with a fisheye lens. This image consists of a pattern with a bunch of square objects. What I want to do with this image is detect the centroid of each of these squares, then use these points to perform an undistortion of the image - specifically, I am seeking the right distortion model parameters. It should be noted that not all of the squares need to be detected. As long as a good majority of them are, then that's totally fine.... but that isn't the point of this post. The parameter estimation algorithm I have already written, but the problem is that it requires points that appear collinear in the image.
The base question I want to ask is given these points, what is the best way to group them together so that each group consists of a horizontal line or vertical line?
Background to my problem
This isn't really important with regards to the question I'm asking, but if you'd like to know where I got my data from and to further understand the question I'm asking, please read. If you're not interested, then you can skip right to the Problem setup section below.
An example of an image I am dealing with is shown below:
It is a 960 x 960 image. The image was originally higher resolution, but I subsample the image to facilitate faster processing time. As you can see, there are a bunch of square patterns that are dispersed in the image. Also, the centroids I have calculated are with respect to the above subsampled image.
The pipeline I have set up to retrieve the centroids is the following:
Perform a Canny Edge Detection
Focus on a region of interest that minimizes false positives. This region of interest is basically the squares without any of the black tape that covers one of their sides.
Find all distinct closed contours
For each distinct closed contour...
a. Perform a Harris Corner Detection
b. Determine if the result has 4 corner points
c. If this does, then this contour belonged to a square and find the centroid of this shape
d. If it doesn't, then skip this shape
Place all detected centroids from Step #4 into a matrix for further examination.
Here's an example result with the above image. Each detected square has the four points colour coded according to the location of where it is with respect to the square itself. For each centroid that I have detected, I write an ID right where that centroid is in the image itself.
With the above image, there are 37 detected squares.
Problem Setup
Suppose I have some image pixel points stored in a N x 3 matrix. The first two columns are the x (horizontal) and y (vertical) coordinates where in image coordinate space, the y coordinate is inverted, which means that positive y moves downwards. The third column is an ID associated with the point.
Here is some code written in MATLAB that takes these points, plots them onto a 2D grid and labels each point with the third column of the matrix. If you read the above background, these are the points that were detected by my algorithm outlined above.
data = [ 475. , 605.75, 1.;
571. , 586.5 , 2.;
233. , 558.5 , 3.;
669.5 , 562.75, 4.;
291.25, 546.25, 5.;
759. , 536.25, 6.;
362.5 , 531.5 , 7.;
448. , 513.5 , 8.;
834.5 , 510. , 9.;
897.25, 486. , 10.;
545.5 , 491.25, 11.;
214.5 , 481.25, 12.;
271.25, 463. , 13.;
646.5 , 466.75, 14.;
739. , 442.75, 15.;
340.5 , 441.5 , 16.;
817.75, 421.5 , 17.;
423.75, 417.75, 18.;
202.5 , 406. , 19.;
519.25, 392.25, 20.;
257.5 , 382. , 21.;
619.25, 368.5 , 22.;
148. , 359.75, 23.;
324.5 , 356. , 24.;
713. , 347.75, 25.;
195. , 335. , 26.;
793.5 , 332.5 , 27.;
403.75, 328. , 28.;
249.25, 308. , 29.;
495.5 , 300.75, 30.;
314. , 279. , 31.;
764.25, 249.5 , 32.;
389.5 , 249.5 , 33.;
475. , 221.5 , 34.;
565.75, 199. , 35.;
802.75, 173.75, 36.;
733. , 176.25, 37.];
figure; hold on;
axis ij;
scatter(data(:,1), data(:,2),40, 'r.');
text(data(:,1)+10, data(:,2)+10, num2str(data(:,3)));
Similarly in Python, using numpy and matplotlib, we have:
import numpy as np
import matplotlib.pyplot as plt
data = np.array([[ 475. , 605.75, 1. ],
[ 571. , 586.5 , 2. ],
[ 233. , 558.5 , 3. ],
[ 669.5 , 562.75, 4. ],
[ 291.25, 546.25, 5. ],
[ 759. , 536.25, 6. ],
[ 362.5 , 531.5 , 7. ],
[ 448. , 513.5 , 8. ],
[ 834.5 , 510. , 9. ],
[ 897.25, 486. , 10. ],
[ 545.5 , 491.25, 11. ],
[ 214.5 , 481.25, 12. ],
[ 271.25, 463. , 13. ],
[ 646.5 , 466.75, 14. ],
[ 739. , 442.75, 15. ],
[ 340.5 , 441.5 , 16. ],
[ 817.75, 421.5 , 17. ],
[ 423.75, 417.75, 18. ],
[ 202.5 , 406. , 19. ],
[ 519.25, 392.25, 20. ],
[ 257.5 , 382. , 21. ],
[ 619.25, 368.5 , 22. ],
[ 148. , 359.75, 23. ],
[ 324.5 , 356. , 24. ],
[ 713. , 347.75, 25. ],
[ 195. , 335. , 26. ],
[ 793.5 , 332.5 , 27. ],
[ 403.75, 328. , 28. ],
[ 249.25, 308. , 29. ],
[ 495.5 , 300.75, 30. ],
[ 314. , 279. , 31. ],
[ 764.25, 249.5 , 32. ],
[ 389.5 , 249.5 , 33. ],
[ 475. , 221.5 , 34. ],
[ 565.75, 199. , 35. ],
[ 802.75, 173.75, 36. ],
[ 733. , 176.25, 37. ]])
plt.figure()
plt.gca().invert_yaxis()
plt.plot(data[:,0], data[:,1], 'r.', markersize=14)
for idx in np.arange(data.shape[0]):
    plt.text(data[idx,0]+10, data[idx,1]+10, str(int(data[idx,2])), size='large')
plt.show()
We get:
Back to the question
As you can see, these points are more or less in a grid pattern and you can see that we can form lines between the points. Specifically, you can see that there are lines that can be formed horizontally and vertically.
For example, if you reference the image in the background section of my problem, we can see that there are 5 groups of points that can be grouped in a horizontal manner. For example, points 23, 26, 29, 31, 33, 34, 35, 37 and 36 form one group. Points 19, 21, 24, 28, 30 and 32 form another group and so on and so forth. Similarly in a vertical sense, we can see that points 26, 19, 12 and 3 form one group, points 29, 21, 13 and 5 form another group and so on.
Question to ask
My question is this: What is a method that can successfully group points in horizontal groupings and vertical groupings separately, given that the points could be in any orientation?
Conditions
There must be at least three points per line. If there is anything less than that, then this does not qualify as a segment. Therefore, the points 36 and 10 don't qualify as a vertical line, and similarly the isolated point 23 shouldn't qualify as a vertical line, but it is part of the first horizontal grouping.
The above calibration pattern can be in any orientation. However, for what I'm dealing with, the worst kind of orientation you can get is what you see above in the background section.
Expected Output
The output would be a pair of lists where the first list has elements where each element gives you a sequence of point IDs that form a horizontal line. Similarly, the second list has elements where each element gives you a sequence of point IDs that form a vertical line.
Therefore, the expected output for the horizontal sequences would look something like this:
MATLAB
horiz_list = {[23, 26, 29, 31, 33, 34, 35, 37, 36], [19, 21, 24, 28, 30, 32], ...};
vert_list = {[26, 19, 12, 3], [29, 21, 13, 5], ....};
Python
horiz_list = [[23, 26, 29, 31, 33, 34, 35, 37, 36], [19, 21, 24, 28, 30, 32], ....]
vert_list = [[26, 19, 12, 3], [29, 21, 13, 5], ...]
What I have tried
Algorithmically, what I have tried is to undo the rotation that is experienced at these points. I've performed Principal Components Analysis and I tried projecting the points with respect to the computed orthogonal basis vectors so that the points would more or less be on a straight rectangular grid.
Once I have that, it's just a simple matter of doing some scanline processing where you could group points based on a differential change on either the horizontal or vertical coordinates. You'd sort the coordinates by either the x or y values, then examine these sorted coordinates and look for a large change. Once you encounter this change, then you can group points in between the changes together to form your lines. Doing this with respect to each dimension would give you either the horizontal or vertical groupings.
With regards to PCA, here's what I did in MATLAB and Python:
MATLAB
%# Step #1 - Get just the data - no IDs
data_raw = data(:,1:2);
%# Decentralize mean
data_nomean = bsxfun(@minus, data_raw, mean(data_raw,1));
%# Step #2 - Determine covariance matrix
%# This already decentralizes the mean
cov_data = cov(data_raw);
%# Step #3 - Determine right singular vectors
[~,~,V] = svd(cov_data);
%# Step #4 - Transform data with respect to basis
F = V.'*data_nomean.';
%# Visualize both the original data points and transformed data
figure;
plot(F(1,:), F(2,:), 'b.', 'MarkerSize', 14);
axis ij;
hold on;
plot(data(:,1), data(:,2), 'r.', 'MarkerSize', 14);
Python
import numpy as np
import numpy.linalg as la
# Step #1 and Step #2 - Decentralize mean
centroids_raw = data[:,:2]
mean_data = np.mean(centroids_raw, axis=0)
# Transpose for covariance calculation
data_nomean = (centroids_raw - mean_data).T
# Step #3 - Determine covariance matrix
# Doesn't matter if you do this on the decentralized result
# or the normal result - cov subtracts the mean off anyway
cov_data = np.cov(data_nomean)
# Step #4 - Determine right singular vectors via SVD
# Note - This is already V^T, so there's no need to transpose
_,_,V = la.svd(cov_data)
# Step #5 - Transform data with respect to basis
data_transform = np.dot(V, data_nomean).T
plt.figure()
plt.gca().invert_yaxis()
plt.plot(data[:,0], data[:,1], 'b.', markersize=14)
plt.plot(data_transform[:,0], data_transform[:,1], 'r.', markersize=14)
plt.show()
The above code not only reprojects the data, but it also plots both the original points and the projected points together in a single figure. However, when I tried reprojecting my data, this is the plot I get:
The points in red are the original image coordinates while the points in blue are reprojected onto the basis vectors to try and remove the rotation. It still doesn't quite do the job. There is still some orientation with respect to the points so if I tried to do my scanline algorithm, points from the lines below for horizontal tracing or to the side for vertical tracing would be inadvertently grouped and this isn't correct.
Perhaps I'm overthinking the problem, but any insights you have regarding this would be greatly appreciated. If the answer is indeed superb, I would be inclined to award a high bounty as I've been stuck on this problem for quite some time.
I hope this question wasn't long winded. If you don't have an idea of how to solve this, then I thank you for your time in reading my question regardless.
Looking forward to any insights that you may have. Thanks very much!
|
Note 1: It has a number of settings which may need to be altered for other images to get the result you want (see "% Settings - play around with these values").
Note 2: It doesn't find all of the lines you want, but it's a starting point....
To call this function, invoke this in the command prompt:
>> [h, v] = testLines;
We get:
>> celldisp(h)
h{1} =
1 2 4 6 9 10
h{2} =
3 5 7 8 11 14 15 17
h{3} =
1 2 4 6 9 10
h{4} =
3 5 7 8 11 14 15 17
h{5} =
1 2 4 6 9 10
h{6} =
3 5 7 8 11 14 15 17
h{7} =
3 5 7 8 11 14 15 17
h{8} =
1 2 4 6 9 10
h{9} =
1 2 4 6 9 10
h{10} =
12 13 16 18 20 22 25 27
h{11} =
13 16 18 20 22 25 27
h{12} =
3 5 7 8 11 14 15 17
h{13} =
3 5 7 8 11 14 15
h{14} =
12 13 16 18 20 22 25 27
h{15} =
3 5 7 8 11 14 15 17
h{16} =
12 13 16 18 20 22 25 27
h{17} =
19 21 24 28 30
h{18} =
21 24 28 30
h{19} =
12 13 16 18 20 22 25 27
h{20} =
19 21 24 28 30
h{21} =
12 13 16 18 20 22 24 25
h{22} =
12 13 16 18 20 22 24 25 27
h{23} =
23 26 29 31 33 34 35
h{24} =
23 26 29 31 33 34 35 37
h{25} =
23 26 29 31 33 34 35 36 37
h{26} =
33 34 35 37 36
h{27} =
31 33 34 35 37
>> celldisp(v)
v{1} =
33 28 18 8 1
v{2} =
34 30 20 11 2
v{3} =
26 19 12 3
v{4} =
35 22 14 4
v{5} =
29 21 13 5
v{6} =
25 15 6
v{7} =
31 24 16 7
v{8} =
37 32 27 17 9
A figure is also generated that draws the lines through each proper set of points:
function [horiz_list, vert_list] = testLines
global counter;
global colours;
close all;
data = [ 475. , 605.75, 1.;
571. , 586.5 , 2.;
233. , 558.5 , 3.;
669.5 , 562.75, 4.;
291.25, 546.25, 5.;
759. , 536.25, 6.;
362.5 , 531.5 , 7.;
448. , 513.5 , 8.;
834.5 , 510. , 9.;
897.25, 486. , 10.;
545.5 , 491.25, 11.;
214.5 , 481.25, 12.;
271.25, 463. , 13.;
646.5 , 466.75, 14.;
739. , 442.75, 15.;
340.5 , 441.5 , 16.;
817.75, 421.5 , 17.;
423.75, 417.75, 18.;
202.5 , 406. , 19.;
519.25, 392.25, 20.;
257.5 , 382. , 21.;
619.25, 368.5 , 22.;
148. , 359.75, 23.;
324.5 , 356. , 24.;
713. , 347.75, 25.;
195. , 335. , 26.;
793.5 , 332.5 , 27.;
403.75, 328. , 28.;
249.25, 308. , 29.;
495.5 , 300.75, 30.;
314. , 279. , 31.;
764.25, 249.5 , 32.;
389.5 , 249.5 , 33.;
475. , 221.5 , 34.;
565.75, 199. , 35.;
802.75, 173.75, 36.;
733. , 176.25, 37.];
figure; hold on;
axis ij;
% Change due to Benoit_11
scatter(data(:,1), data(:,2),40, 'r.'); text(data(:,1)+10, data(:,2)+10, num2str(data(:,3)));
text(data(:,1)+10, data(:,2)+10, num2str(data(:,3)));
% Process your data as above then run the function below(note it has sub functions)
counter = 0;
colours = 'bgrcmy';
[horiz_list, vert_list] = findClosestPoints ( data(:,1), data(:,2) );
function [horiz_list, vert_list] = findClosestPoints ( x, y )
% calc length of points
nX = length(x);
% set up place holder flags
modelledH = false(nX,1);
modelledV = false(nX,1);
horiz_list = {};
vert_list = {};
% loop for all points
for p=1:nX
% have we already modelled a horizontal line through these?
% second last param - true - horizontal, false - vertical
if modelledH(p)==false
[modelledH, index] = ModelPoints ( p, x, y, modelledH, true, true );
horiz_list = [horiz_list index];
else
[~, index] = ModelPoints ( p, x, y, modelledH, true, false );
horiz_list = [horiz_list index];
end
% make a temp copy of the x and y and remove any of the points modelled
% from the horizontal -> this is to avoid them being found in the
% second call.
tempX = x;
tempY = y;
tempX(index) = NaN;
tempY(index) = NaN;
tempX(p) = x(p);
tempY(p) = y(p);
% Have we found a vertical line?
if modelledV(p)==false
[modelledV, index] = ModelPoints ( p, tempX, tempY, modelledV, false, true );
vert_list = [vert_list index];
end
end
end
function [modelled, index] = ModelPoints ( p, x, y, modelled, method, fullRun )
% p - row in your original data matrix
% x - data(:,1)
% y - data(:,2)
% modelled - array of flags to whether rows have been modelled
% method - horizontal or vertical (used to calc gradients)
% fullRun - full calc or just to get indexes
% this could be made better by storing the indexes of each horizontal in the method above
% Settings - play around with these values
gradDelta = 0.2; % find points where gradient is less than this value
gradLimit = 0.45; % if mean gradient of line is above this ignore
numberOfPointsToCheck = 7; % number of points to check when looking along the line
% to find other points (this reduces the chance of it
% finding other points far away)
% I optimised this for your example to be 7
% Try varying it and you will see how it affects the result.
% Find the index of points which are inline.
[index, grad] = CalcIndex ( x, y, p, gradDelta, method );
% check gradient of line
if abs(mean(grad))>gradLimit
index = [];
return
end
% add point of interest to index
index = [p index];
% loop through all points found above to find any other points which are in
% line with these points (this allows for slight curvature)
combineIndex = [];
for ii=2:length(index)
% Find index of the points found above (find points on curve)
[index2] = CalcIndex ( x, y, index(ii), gradDelta, method, numberOfPointsToCheck, grad(ii-1) );
% Check that the points on this line are on the original (i.e. in line, not at a large angle)
if any(ismember(index,index2))
% store points found
combineIndex = unique([index2 combineIndex]);
end
end
% copy to index
index = combineIndex;
if fullRun
% do some plotting
% TODO: here you would need to calculate your arrays to output.
xx = x(index);
[sX,sOrder] = sort(xx);
% Check its found at least 3 points
if length ( index(sOrder) ) > 2
% flag the modelled on the points found
modelled(index(sOrder)) = true;
% plot the data
plot ( x(index(sOrder)), y(index(sOrder)), colours(mod(counter,numel(colours)) + 1));
counter = counter + 1;
end
index = index(sOrder);
end
end
function [index, gradCheck] = CalcIndex ( x, y, p, gradLimit, method, nPoints2Consider, refGrad )
% x - data(:,1)
% y - data(:,2)
% p - point of interest
% method (x/y) or (y\x)
% nPoints2Consider - only look at N points (options)
% refgrad - rather than looking for gradient of closest point -> use this
% - reference gradient to find similar points (finds points on curve)
nX = length(x);
% calculate gradient
for g=1:nX
if method
grad(g) = (x(g)-x(p))\(y(g)-y(p));
else
grad(g) = (y(g)-y(p))\(x(g)-x(p));
end
end
% find distance to all other points
delta = sqrt ( (x-x(p)).^2 + (y-y(p)).^2 );
% set its self = NaN
delta(delta==min(delta)) = NaN;
% find the closest points
[m,order] = sort(delta);
if nargin == 7
% for finding along curve
% set any far away points to be NaN
grad(order(nPoints2Consider+1:end)) = NaN;
% find the closest points to the reference gradient within the allowable limit
index = find(abs(grad-refGrad)<gradLimit==1);
% store output
gradCheck = grad(index);
else
% find the points which are closest to the gradient of the closest point
index = find(abs(grad-grad(order(1)))<gradLimit==1);
% store gradients to output
gradCheck = grad(index);
end
end
end
|
Why is "if not (a and b)" faster than "if not a or not b"?
|
On a whim, I recently tested these two methods with timeit, to see which evaluation method was faster:
import timeit
"""Test method returns True if either argument is falsey, else False."""
def and_chk((a, b)):
    if not (a and b):
        return True
    return False

def not_or_chk((a, b)):
    if not a or not b:
        return True
    return False
...and got these results:
VALUES FOR a,b ->     0,0       0,1       1,0       1,1
method
and_chk(a,b)          0.95559   0.98646   0.95138   0.98788
not_or_chk(a,b)       0.96804   1.07323   0.96015   1.05874
...seconds per 1,111,111 cycles.
The difference in efficiency is between one and nine percent, always in favour of if not (a and b), which is the opposite of what I might expect since I understand that if not a or not b will evaluate its terms (if not a and then if not b) in order, running the if block once it encounters a true expression (and there are no and clauses). In contrast, the and_chk method needs to evaluate both clauses before it can return any result to the if not.. that wraps it.
The timing results, however, disprove this understanding. How, then, is the if condition being evaluated? I am perfectly aware of the fact that this degree of microoptimization is practically, if not completely, pointless. I just want to understand how Python is going about it.
For completion's sake, this is how I set up timeit...
cyc = 1111111
bothFalse_and = iter([(0,0)] * cyc)
zeroTrue_and = iter([(1,0)] * cyc)
oneTrue_and = iter([(0,1)] * cyc)
bothTrue_and = iter([(1,1)] * cyc)
bothFalse_notor = iter([(0,0)] * cyc)
zeroTrue_notor = iter([(1,0)] * cyc)
oneTrue_notor = iter([(0,1)] * cyc)
bothTrue_notor = iter([(1,1)] * cyc)
time_bothFalse_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import bothFalse_and as tups, and_chk')
time_zeroTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import zeroTrue_and as tups, and_chk')
time_oneTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import oneTrue_and as tups, and_chk')
time_bothTrue_and = timeit.Timer('and_chk(next(tups))', 'from __main__ import bothTrue_and as tups, and_chk')
time_bothFalse_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import bothFalse_notor as tups, not_or_chk')
time_zeroTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import zeroTrue_notor as tups, not_or_chk')
time_oneTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import oneTrue_notor as tups, not_or_chk')
time_bothTrue_notor = timeit.Timer('not_or_chk(next(tups))', 'from __main__ import bothTrue_notor as tups, not_or_chk')
...then ran each timeit.Timer(..) function with .timeit(cyc) to get the results posted.
|
TL;DR
The not_or_chk function requires two unary operations in addition to two jumps (in the worst case), while the and_chk function only has the two jumps (again, in the worst case).
Details
The dis module to the rescue! It lets you take a look at the Python bytecode disassembly of your code. For example:
import dis
"""Test method returns True if either argument is falsey, else False."""
def and_chk((a, b)):
    if not (a and b):
        return True
    return False

def not_or_chk((a, b)):
    if not a or not b:
        return True
    return False
print("And Check:\n")
print(dis.dis(and_chk))
print("Or Check:\n")
print(dis.dis(not_or_chk))
Produces this output:
And Check:
5 0 LOAD_FAST 0 (.0)
3 UNPACK_SEQUENCE 2
6 STORE_FAST 1 (a)
9 STORE_FAST 2 (b)
6 12 LOAD_FAST 1 (a) * This block is the *
15 JUMP_IF_FALSE_OR_POP 21 * disassembly of *
18 LOAD_FAST 2 (b) * the "and_chk" *
>> 21 POP_JUMP_IF_TRUE 28 * function *
7 24 LOAD_GLOBAL 0 (True)
27 RETURN_VALUE
8 >> 28 LOAD_GLOBAL 1 (False)
31 RETURN_VALUE
None
Or Check:
10 0 LOAD_FAST 0 (.0)
3 UNPACK_SEQUENCE 2
6 STORE_FAST 1 (a)
9 STORE_FAST 2 (b)
11 12 LOAD_FAST 1 (a) * This block is the *
15 UNARY_NOT * disassembly of *
16 POP_JUMP_IF_TRUE 26 * the "not_or_chk" *
19 LOAD_FAST 2 (b) * function *
22 UNARY_NOT
23 POP_JUMP_IF_FALSE 30
12 >> 26 LOAD_GLOBAL 0 (True)
29 RETURN_VALUE
13 >> 30 LOAD_GLOBAL 1 (False)
33 RETURN_VALUE
None
Take a look at the two blocks of Python bytecode that I've marked with the asterisks. Those blocks are your two disassembled functions. Note that and_chk only has two jumps, and the calculations in the function are made while deciding whether or not to take the jump.
On the other hand, the not_or_chk function requires the not operation to be carried out twice in the worst case, in addition to the interpreter deciding whether or not to take the jump.
|
Using Django's collectstatic with boto S3 throws "Error 32: Broken Pipe" after a while
|
I'm using boto with S3 to store my Django site's static files. When using the collectstatic command, it uploads a good chunk of the files perfectly before stopping at a file and throwing "Error 32: Broken Pipe." When I try to run the command again, it skips over the files it has already uploaded and starts at the file where it left off, before throwing the same error without having uploaded anything new.
|
The key seems to be to specify which AWS Endpoint your bucket is located in. I tried doing this a bunch of different ways, but the solution that finally worked for me was to create a config file for boto as specified in the documentation.
Here are the contents of the config file I created at ~/.boto:
[Credentials]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[s3]
host=s3-us-west-2.amazonaws.com
|
How to suppress the deprecation warnings in Django?
|
Every time I'm using the django-admin command, even on TAB-completion, it throws a RemovedInDjango19Warning (and a lot more if I use the test command). How can I suppress those warnings?
I'm using Django 1.8 with Python 3.4 (in a virtual environment).
As far as I can tell, all those warnings come from libraries not from my code, here are some examples:
…/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.contrib.contenttypes.generic is deprecated and will be removed in Django 1.9. Its contents have been moved to the fields, forms, and admin submodules of django.contrib.contenttypes.
  return f(*args, **kwds)
…/lib/python3.4/site-packages/django/contrib/admin/util.py:7: RemovedInDjango19Warning: The django.contrib.admin.util module has been renamed. Use django.contrib.admin.utils instead.
  "Use django.contrib.admin.utils instead.", RemovedInDjango19Warning)
…/lib/python3.4/site-packages/django/templatetags/future.py:25: RemovedInDjango19Warning: Loading the ``url`` tag from the ``future`` library is deprecated and will be removed in Django 1.9. Use the default ``url`` tag instead.
  RemovedInDjango19Warning)
|
Adding a logging filter to settings.py can suppress these console warnings (at least for manage.py commands in Django 1.7, Python 3.4).
A filter can selectively suppress warnings. The following code creates a new "suppress_deprecated" filter for the console and appends it to the default logging filters. Add this block to settings.py to configure the LOGGING variable:
import logging, copy
from django.utils.log import DEFAULT_LOGGING

LOGGING = copy.deepcopy(DEFAULT_LOGGING)
LOGGING['filters']['suppress_deprecated'] = {
    '()': 'mysite.settings.SuppressDeprecated'
}
LOGGING['handlers']['console']['filters'].append('suppress_deprecated')

class SuppressDeprecated(logging.Filter):
    def filter(self, record):
        WARNINGS_TO_SUPPRESS = [
            'RemovedInDjango18Warning',
            'RemovedInDjango19Warning'
        ]
        # Return false to suppress message.
        return not any([warn in record.getMessage() for warn in WARNINGS_TO_SUPPRESS])
The 'mysite.settings.SuppressDeprecated' string needs to change if the root website module (or filter location and/or name) is different.
|
Why don't I get any syntax errors when I execute my Python script with Perl?
|
I just wrote some testing python code into test.py, and I'm launching it as follows:
perl test.py
After a while I realized my mistake. I say "after a while", because the
Python code gets actually correctly executed, as if in Python interpreter!
Why is my Perl interpreting my Python? test.py looks like this:
#!/usr/bin/python
...Python code here...
Interestingly, if I do the opposite (i.e. call python something.pl) I get a good deal of syntax errors.
|
From perlrun,
If the #! line does not contain the word "perl" nor the word "indir" the program named after the #! is executed instead of the Perl interpreter. This is slightly bizarre, but it helps people on machines that don't do #! , because they can tell a program that their SHELL is /usr/bin/perl, and Perl will then dispatch the program to the correct interpreter for them.
For example,
$ cat >a
#!/bin/cat
meow
$ perl a
#!/bin/cat
meow
|
Using lxml to parse namepaced HTML?
|
This is driving me totally nuts, I've been struggling with it for many hours. Any help would be much appreciated.
I'm using PyQuery 1.2.9 (which is built on top of lxml) to scrape this URL. I just want to get a list of all the links in the .linkoutlist section.
This is my request in full:
response = requests.get('http://www.ncbi.nlm.nih.gov/pubmed/?term=The%20cost-effectiveness%20of%20mirtazapine%20versus%20paroxetine%20in%20treating%20people%20with%20depression%20in%20primary%20care')
doc = pq(response.content)
links = doc('#maincontent .linkoutlist a')
print links
But that returns an empty array. If I use this query instead:
links = doc('#maincontent .linkoutlist')
Then I get this back this HTML:
<div xmlns="http://www.w3.org/1999/xhtml" xmlns:xi="http://www.w3.org/2001/XInclude" class="linkoutlist">
<h4>Full Text Sources</h4>
<ul>
<li><a title="Full text at publisher's site" href="http://meta.wkhealth.com/pt/pt-core/template-journal/lwwgateway/media/landingpage.htm?issn=0268-1315&volume=19&issue=3&spage=125" ref="itool=Abstract&PrId=3159&uid=15107654&db=pubmed&log$=linkoutlink&nlmid=8609061" target="_blank">Lippincott Williams & Wilkins</a></li>
<li><a href="http://ovidsp.ovid.com/ovidweb.cgi?T=JS&PAGE=linkout&SEARCH=15107654.ui" ref="itool=Abstract&PrId=3682&uid=15107654&db=pubmed&log$=linkoutlink&nlmid=8609061" target="_blank">Ovid Technologies, Inc.</a></li>
</ul>
<h4>Other Literature Sources</h4>
...
</div>
So the parent selectors do return HTML with lots of <a> tags. This also appears to be valid HTML.
More experimenting reveals that lxml does not like the xmlns attribute on the opening div, for some reason.
How can I ignore this in lxml, and just parse it like regular HTML?
UPDATE: Trying ns_clean, still failing:
parser = etree.XMLParser(ns_clean=True)
tree = etree.parse(StringIO(response.content), parser)
sel = CSSSelector('#maincontent .rprt_all a')
print sel(tree)
|
You need to handle namespaces, including an empty one.
Working solution:
from pyquery import PyQuery as pq
import requests
response = requests.get('http://www.ncbi.nlm.nih.gov/pubmed/?term=The%20cost-effectiveness%20of%20mirtazapine%20versus%20paroxetine%20in%20treating%20people%20with%20depression%20in%20primary%20care')
namespaces = {'xi': 'http://www.w3.org/2001/XInclude', 'test': 'http://www.w3.org/1999/xhtml'}
links = pq('#maincontent .linkoutlist test|a', response.content, namespaces=namespaces)
for link in links:
    print link.attrib.get("title", "No title")
Prints titles of all links matching the selector:
Full text at publisher's site
No title
Free resource
Free resource
Free resource
Free resource
Or, just set the parser to "html" and forget about namespaces:
links = pq('#maincontent .linkoutlist a', response.content, parser="html")
for link in links:
    print link.attrib.get("title", "No title")
|
Avoiding code repetition in default arguments in Python
|
Consider a typical function with default arguments:
def f(accuracy=1e-3, nstep=10):
    ...
This is compact and easy to understand. But what if we have another function g that will call f, and we want to pass on some arguments of g to f? A natural way of doing this is:
def g(accuracy=1e-3, nstep=10):
    f(accuracy, nstep)
    ...
The problem with this way of doing things is that the default values of the optional arguments get repeated. Usually when propagating default arguments like this, one wants the same default in the upper function (g) as in the lower function (f), and hence any time the default changes in f one needs to go through all the functions that call it and update the defaults of any of their arguments they would propagate to f.
Another way of doing this is to use a placeholder argument, and fill in its value inside the function:
def f(accuracy=None, nstep=None):
    if accuracy is None: accuracy = 1e-3
    if nstep is None: nstep = 10
    ...

def g(accuracy=None, nstep=None):
    f(accuracy, nstep)
    ...
Now the calling function doesn't need to know what f's defaults are. But the f interface is now a bit more cumbersome, and less clear. This is the typical approach in languages without explicit default argument support, like fortran or javascript. But if one does everything this way in python, one is throwing away most of the language's default argument support.
Is there a better approach than these two? What is the standard, pythonic way of doing this?
|
Define global constants:
ACCURACY = 1e-3
NSTEP = 10
def f(accuracy=ACCURACY, nstep=NSTEP):
    ...

def g(accuracy=ACCURACY, nstep=NSTEP):
    f(accuracy, nstep)
If f and g are defined in different modules, then you could make a constants.py module too:
ACCURACY = 1e-3
NSTEP = 10
and then define f with:
from constants import ACCURACY, NSTEP
def f(accuracy=ACCURACY, nstep=NSTEP):
    ...
and similarly for g.
|
Generators and for loops in Python
|
So I have a generator function, that looks like this.
def generator():
    while True:
        for x in range(3):
            for j in range(5):
                yield x
After I load up this function and call "next" a bunch of times, I'd expect it to yield values
0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 0 0 0 0 0 ...
But instead it just yields 0 all the time. Why is that?
>>> execfile("test.py")
>>> generator
<function generator at 0x10b6121b8>
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
>>> generator().next()
0
|
generator() initializes new generator object:
In [4]: generator() is generator() # Creating 2 separate objects
Out[4]: False
Then generator().next() gets the first value from the newly created generator object (0 in your case).
You should call generator once:
In [5]: gen = generator() # Storing new generator object, will reuse it
In [6]: [gen.next() for _ in range(6)] # Get first 6 values for demonstration purposes
Out[6]: [0, 0, 0, 0, 0, 1]
Note: generator.next was removed from Python 3 (PEP 3114) - use next function instead.
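In Python 3 the same demonstration would use the built-in next (a small sketch adapted from the snippet above):
gen = generator()
print([next(gen) for _ in range(6)])  # [0, 0, 0, 0, 0, 1]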
|
Shuffle DataFrame rows
|
I have the following DataFrame:
Col1 Col2 Col3 Type
0 1 2 3 1
1 4 5 6 1
...
20 7 8 9 2
21 10 11 12 2
...
45 13 14 15 3
46 16 17 18 3
...
The DataFrame is read from a csv file. All rows which have Type 1 are on top, followed by the rows with Type 2, followed by the rows with Type 3, etc.
I would like to shuffle the DataFrame's rows, so that all Type's are mixed. A possible result could be:
Col1 Col2 Col3 Type
0 7 8 9 2
1 13 14 15 3
...
20 1 2 3 1
21 10 11 12 2
...
45 4 5 6 1
46 16 17 18 3
...
As can be seen from the result, the order of the rows is shuffled, but the columns remain the same. I don't know if I am explaining this clearly. Let me know if I don't.
How can I achieve this?
|
The more idiomatic way to do this with pandas is to use the .sample method of your dataframe, i.e.
df.sample(frac=1)
The frac keyword argument specifies the fraction of rows to return in the random sample, so frac=1 means return all rows (in random order).
Note:
If you wish to shuffle your dataframe in-place and reset the index, you could do e.g.
df = df.sample(frac=1).reset_index(drop=True)
Here, specifying drop=True prevents .reset_index from creating a column containing the old index entries.
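If you need the shuffle to be reproducible, sample also accepts a random_state argument, e.g.:
df = df.sample(frac=1, random_state=42).reset_index(drop=True)
so the same "random" order can be regenerated later.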
|
Still can't install scipy due to missing fortran compiler after brew install gcc on Mac OC X
|
I have read and followed this answer to install scipy/numpy/theano. However, it still failed with the same error about a missing Fortran compiler after brew install gcc. While Homebrew installed gcc-4.8, it didn't install any gfortran or g95 commands. I figured gfortran may just be a synonym for gcc, so I created a symlink
$ cd /usr/local/bin
$ ln -s gcc-4.8 gfortran
$ pip install scipy
Then it detects the gfortran command but still complains that there is no Fortran compiler:
customize Gnu95FCompiler
Found executable /usr/local/bin/gfortran
customize NAGFCompiler
Could not locate executable f95
customize AbsoftFCompiler
Could not locate executable f90
Could not locate executable f77
customize IBMFCompiler
Could not locate executable xlf90
Could not locate executable xlf
customize IntelFCompiler
Could not locate executable ifort
Could not locate executable ifc
customize GnuFCompiler
Could not locate executable g77
customize G95FCompiler
Could not locate executable g95
customize PGroupFCompiler
Could not locate executable pgfortran
don't know how to compile Fortran code on platform 'posix'
building 'dfftpack' library
error: library dfftpack has Fortran sources but no Fortran compiler found
What else should I do?
|
Fixed by upgrading pip, even though I had just installed pip/virtualenv for the first time on the same day.
(mypy)MAC0227: $ pip install --upgrade pip
...
(mypy)MAC0227: $ pip install theano
/Users/me/.virtualenvs/mypy/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Requirement already satisfied (use --upgrade to upgrade): theano in /Users/me/.virtualenvs/mypy/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.6.2 in /Users/me/.virtualenvs/mypy/lib/python2.7/site-packages (from theano)
Collecting scipy>=0.11 (from theano)
/Users/me/.virtualenvs/mypy/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
Downloading scipy-0.15.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (19.8MB)
100% |████████████████████████████████| 19.8MB 23kB/s
Installing collected packages: scipy
Successfully installed scipy-0.15.1
|
Plot width settings in ipython notebook
|
I've got the following plots:
It would look nicer if they have the same width. Do you have any idea how to do it in ipython notebook when I am using %matplotlib inline?
UPDATE:
To generate both figures I am using the following functions:
import numpy as np
import matplotlib.pyplot as plt
def show_plots2d(title, plots, points, xlabel = '', ylabel = ''):
"""
Shows 2D plot.
Arguments:
title : string
Title of the plot.
plots : array_like of pairs like array_like and array_like
List of pairs,
where first element is x axis and the second is the y axis.
points : array_like of pairs like integer and integer
List of pairs,
where first element is x coordinate
and the second is the y coordinate.
xlabel : string
Label of x axis
ylabel : string
Label of y axis
"""
xv, yv = zip(*plots)
y_exclNone = [y[y != np.array(None)] for y in yv]
y_mins, y_maxs = zip(*
[(float(min(y)), float(max(y))) for y in y_exclNone]
)
y_min = min(y_mins)
y_max = max(y_maxs)
y_amp = y_max - y_min
plt.figure().suptitle(title)
plt.axis(
[xv[0][0], xv[0][-1], y_min - 0.3 * y_amp, y_max + 0.3 * y_amp]
)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
for x, y in plots:
plt.plot(x, y)
for x, y in points:
plt.plot(x, y, 'bo')
plt.show()
def show_plot3d(title, x, y, z, xlabel = '', ylabel = '', zlabel = ''):
"""
Shows 3D plot.
Arguments:
title : string
Title of the plot.
x : array_like
List of x coordinates
y : array_like
List of y coordinates
z : array_like
List of z coordinates
xlabel : string
Label of x axis
ylabel : string
Label of y axis
zlabel : string
Label of z axis
"""
plt.figure().suptitle(title)
plt.pcolormesh(x, y, z)
plt.axis([x[0], x[-1], y[0], y[-1]])
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.colorbar().set_label(zlabel)
plt.show()
|
If you use %pylab inline you can (on a new line) insert the following command:
%pylab inline
pylab.rcParams['figure.figsize'] = (10, 6)
This will set all figures in your document (unless otherwise specified) to be of the size (10, 6), where the first entry is the width and the second is the height.
See this SO post for more details. http://stackoverflow.com/a/17231361/1419668
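If you would rather stay with %matplotlib inline, the same setting can be applied through matplotlib's rcParams directly (a minimal sketch; the numbers are just examples):
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 6)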
|
Is there an idiomatic file extension for Jinja templates?
|
I need to programmatically distinguish between Jinja template files, other template files (such as ERB), and template-less plain text files.
According to Jinja documentation:
A Jinja template doesnât need to have a specific extension: .html, .xml, or any other extension is just fine.
But what should I use when an explicit extension is required? .py is misleading, and any search including the words "jinja" and "extension" is badly searchwashed by discussion around Jinja Extensions.
I could easily dictate a project-wide convention (.jnj or .ja come to mind) but this is for open source so I don't want to buck the trend if there's already established practice somewhere.
EDIT 1: Again, I understand that the Jinja project — purposefully — does not define a default file extension. I'm asking if there are any unofficial conventions that have emerged for circumstances where one is desired for some project-specific reason.
EDIT 2: Clarification: This is not for HTML content.
|
The Jinja authors did not define a default extension. Most Jinja template editors (the Vim, TextMate, and Emacs extensions, as well as PyCharm) mention no default extension for enforcing Jinja highlighting.
Django had already a similar debate about setting a default extension, and ended as a wontfix issue. I quote from the closing message:
Filetype detection based on extension is flawed for the very reasons described in these comments, so you have to do some internal inspection, just like MIME type detection works.
I suggest that you should use your own since there is no common one.
|
How should I handle inclusive ranges in Python?
|
I am working in a domain in which ranges are conventionally described inclusively. I have human-readable descriptions such as from A to B, which represent ranges that include both end points - e.g. from 2 to 4 means 2, 3, 4.
What is the best way to work with these ranges in Python code? The following code works to generate inclusive ranges of integers, but I also need to perform inclusive slice operations:
def inclusive_range(start, stop, step):
return range(start, (stop + 1) if step >= 0 else (stop - 1), step)
The only complete solution I see is to explicitly use + 1 (or - 1) every time I use range or slice notation (e.g. range(A, B + 1), l[A:B+1], range(B, A - 1, -1)). Is this repetition really the best way to work with inclusive ranges?
Edit: Thanks to L3viathan for answering. Writing an inclusive_slice function to complement inclusive_range is certainly an option, although I would probably write it as follows:
def inclusive_slice(start, stop, step):
...
return slice(start, (stop + 1) if step >= 0 else (stop - 1), step)
... here represents code to handle negative indices, which are not straightforward when used with slices - note, for example, that L3viathan's function gives incorrect results if slice_to == -1.
However, it seems that an inclusive_slice function would be awkward to use - is l[inclusive_slice(A, B)] really any better than l[A:B+1]?
Is there any better way to handle inclusive ranges?
Edit 2: Thank you for the new answers. I agree with Francis and Corley that changing the meaning of slice operations, either globally or for certain classes, would lead to significant confusion. I am therefore now leaning towards writing an inclusive_slice function.
To answer my own question from the previous edit, I have come to the conclusion that using such a function (e.g. l[inclusive_slice(A, B)]) would be better than manually adding/subtracting 1 (e.g. l[A:B+1]), since it would allow edge cases (such as B == -1 and B == None) to be handled in a single place. Can we reduce the awkwardness in using the function?
Edit 3: I have been thinking about how to improve the usage syntax, which currently looks like l[inclusive_slice(1, 5, 2)]. In particular, it would be good if the creation of an inclusive slice resembled standard slice syntax. In order to allow this, instead of inclusive_slice(start, stop, step), there could be a function inclusive that takes a slice as a parameter. The ideal usage syntax for inclusive would be line 1:
l[inclusive(1:5:2)] # 1
l[inclusive(slice(1, 5, 2))] # 2
l[inclusive(s_[1:5:2])] # 3
l[inclusive[1:5:2]] # 4
l[1:inclusive(5):2] # 5
Unfortunately this is not permitted by Python, which only allows the use of : syntax within []. inclusive would therefore have to be called using either syntax 2 or 3 (where s_ acts like the version provided by numpy).
Other possibilities are to make inclusive into an object with __getitem__, permitting syntax 4, or to apply inclusive only to the stop parameter of the slice, as in syntax 5. Unfortunately I do not believe the latter can be made to work since inclusive requires knowledge of the step value.
Of the workable syntaxes (the original l[inclusive_slice(1, 5, 2)], plus 2, 3 and 4), which would be the best to use? Or is there another, better option?
Final Edit: Thank you all for the replies and comments, this has been very interesting. I have always been a fan of Python's "one way to do it" philosophy, but this issue has been caused by a conflict between Python's "one way" and the "one way" prescribed by the problem domain. I have definitely gained some appreciation for TIMTOWTDI in language design.
For giving the first and highest-voted answer, I award the bounty to L3viathan.
|
Write an additional function for inclusive slice, and use that instead of slicing. While it would be possible to e.g. subclass list and implement a __getitem__ reacting to a slice object, I would advise against it, since your code will behave contrary to expectation for anyone but you, and probably to you, too, in a year.
inclusive_slice could look like this:
def inclusive_slice(myList, slice_from=None, slice_to=None, step=1):
if slice_to is not None:
slice_to += 1 if step > 0 else -1
if slice_to == 0:
slice_to = None
return myList[slice_from:slice_to:step]
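A quick usage check of the helper above (my own example values, not from the original post):
l = [0, 1, 2, 3, 4, 5]
inclusive_slice(l, 2, 4)      # [2, 3, 4] - both endpoints included
inclusive_slice(l, 4, 2, -1)  # [4, 3, 2]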
What I would do personally, is just use the "complete" solution you mentioned (range(A, B + 1), l[A:B+1]) and comment well.
|
import check_arrays from sklearn
|
I'm trying to use an SVM function from the scikit-learn package for Python but I get the error message:
from sklearn.utils.validation import check_arrays
ImportError: cannot import name 'check_arrays'
I'm using Python 3.4. Can anyone give me any advice? Thanks in advance.
|
This method was removed in 0.16, replaced by a (very different) check_array function.
You are likely getting this error because you didn't upgrade from 0.15 to 0.16 properly. [Or because you relied on a not-really-public function in sklearn]. See http://scikit-learn.org/dev/install.html#canopy-and-anaconda-for-all-supported-platforms .
If you installed using anaconda / conda, you should use the conda mechanism to upgrade, not pip. Otherwise old .pyc files might remain in your folder.
|
How to uninstall mini conda? python
|
I've install the conda package as such:
$ wget http://bit.ly/miniconda
$ bash miniconda
$ conda install numpy pandas scipy matplotlib scikit-learn nltk ipython-notebook seaborn
I want to uninstall it because it's messing up my pips and environment.
How do I uninstall conda totally?
Will it uninstall also my pip managed packages? If so, is there a way to uninstall conda safely without uninstalling packages managed by pip?
|
In order to uninstall miniconda, simply remove the miniconda folder,
rm -r ~/miniconda/
this should not remove any of your pip installed packages (but you should check the contents of the ~/miniconda folder to confirm).
To avoid conflicts between different Python environments, you can use virtualenv. In particular, with miniconda, the following workflow could be used:
$ wget http://bit.ly/miniconda
$ bash miniconda
$ conda env remove --yes -n new_env # remove the environment new_env if it exists (optional)
$ conda create --yes -n new_env pip numpy pandas scipy matplotlib scikit-learn nltk ipython-notebook seaborn python=2
$ activate new_env
$ # pip install modules if needed, run python scripts, etc
# everything will be installed in the new_env
# located in ~/miniconda/envs/new_env
$ deactivate
|
regex.sub() gives different results to re.sub()
|
I work with Czech accented text in Python 3.4.
Calling re.sub() to perform substitution by regex on an accented sentence works well, but using a regex compiled with re.compile() and then calling regex.sub() fails.
Here is the case, where I use the same arguments for re.sub() and regex.sub()
import re
pattern = r'(?<!\*)(Poplatn[ií]\w+ da[nň]\w+)'
flags = re.I|re.L
compiled = re.compile(pattern, flags)
text = 'Poplatníkem daně z pozemků je vlastník pozemku'
mark = r'**\1**' # wrap 1st matching group in double stars
print(re.sub(pattern, mark, text, flags))
# outputs: **Poplatníkem daně** z pozemků je vlastník pozemku
# substitution works
print(compiled.sub(mark, text))
# outputs: Poplatníkem daně z pozemků je vlastník pozemku
# substitution fails
I believe that the reason is accents, because for a non-accented sentence re.sub() and regex.sub() work identically.
But it seems to me like a bug, because passing the same arguments returns different results, which should not happen. This topic is complicated by different platforms and locales, so it may not be reproducible on your system. Here is a screenshot of my console.
Do you see any fault in my code, or should I report it as a bug?
|
As Padraic Cunningham figured out, this is not actually a bug.
However, it is related to a bug which you didn't run into, and to you using a flag you probably shouldn't be using, so I'll leave my earlier answer below, even though his is the right answer to your problem.
There's a recent-ish change (somewhere between 3.4.1 and 3.4.3, and between 2.7.3 and 2.7.8) that affects this. Before that change, you can't even compile that pattern without raising an OverflowError.
More importantly, why are you using re.L? The re.L mechanism does not mean "use the Unicode rules for my locale", it means "use some unspecified non-Unicode rules that only really make sense for Latin-1-derived locales and may not work right on Windows". Or, as the docs put it:
Make \w, \W, \b, \B, \s and \S dependent on the current locale. The use of this flag is discouraged as the locale mechanism is very unreliable, and it only handles one "culture" at a time anyway; you should use Unicode matching instead, which is the default in Python 3 for Unicode (str) patterns.
See bug #22407 and the linked python-dev thread for some recent discussion of this.
And if I remove the re.L flag, the code now compiles just fine on 3.4.1. (I also get the "right" results on both 3.4.1 and 3.4.3, but that's just a coincidence; I'm now intentionally not passing the screwy flag and screwing it up in the first version, and still accidentally not passing the screwy flag and screwing it up in the second, so they match...)
So, even if this were a bug, there's a good chance it would be closed WONTFIX. The resolution for #22407 was to deprecate re.L for non-bytes patterns in 3.5 and remove it in 3.6, so I doubt anyone's going to care about fixing bugs with it now. (Not to mention that re itself is theoretically going away in favor of regex one of these decades... and IIRC, regex also deprecated the L flag unless you're using a bytes pattern and re-compatible mode.)
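As a concrete illustration of that advice (my sketch, not part of the original answer), compiling the pattern without re.L and relying on the default Unicode matching for str patterns gives the expected substitution:
import re
pattern = r'(?<!\*)(Poplatn[ií]\w+ da[nň]\w+)'
compiled = re.compile(pattern, re.I)  # no re.L; Unicode rules are the default for str patterns
print(compiled.sub(r'**\1**', 'Poplatníkem daně z pozemků je vlastník pozemku'))
# **Poplatníkem daně** z pozemků je vlastník pozemku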
|
python Spark avro
|
When attempting to write avro, I get the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in stage 35.0 failed 1 times, most recent failure: Lost task 7.0 in stage 35.0 (TID 110, localhost): java.lang.ClassCastException: java.util.HashMap cannot be cast to org.apache.avro.mapred.AvroWrapper
I had read in an avro file with 3 records using:
avro_rdd = sc.newAPIHadoopFile(
"threerecords.avro",
"org.apache.avro.mapreduce.AvroKeyInputFormat",
"org.apache.avro.mapred.AvroKey",
"org.apache.hadoop.io.NullWritable",
keyConverter="org.apache.spark.examples.pythonconverters.AvroWrapperToJavaConverter",
conf=None)
output = avro_rdd.map(lambda x: x[0]).collect()
Then I tried to write out a single record (output kept in avro) with:
conf = {"avro.schema.input.key": reduce(lambda x, y: x + y, sc.textFile("myschema.avsc", 1).collect())}
sc.parallelize([output[0]]).map(lambda x: (x, None)).saveAsNewAPIHadoopFile(
"output.avro",
"org.apache.avro.mapreduce.AvroKeyOutputFormat",
"org.apache.avro.mapred.AvroKey",
"org.apache.hadoop.io.NullWritable",
keyConverter="org.apache.spark.examples.pythonconverters.AvroWrapperToJavaConverter",
conf=conf)
How do I get around that error and write out an individual avro record successfully? I know my schema is correct because it is from the avro itself.
|
It looks like this isn't supported at the moment. You are trying to use the Java map as an Avro Record and convert it to a Java map again. That's why you get the error about the Java HashMap.
There is a pull request from staslos to add the Avro output format, see link for the pull request and the example.
A converter is required to go from the Java map back to the Avro format, but it is missing in AvroConverters.scala.
|
ANTLR4 grammar token recognition error after import
|
I am using a parser grammar and a lexer grammar for antlr4 from GitHub to parse PHP in Python3.
When I use these grammars directly my PoC code works:
antlr-test.py
from antlr4 import *
# from PHPParentLexer import PHPParentLexer
# from PHPParentParser import PHPParentParser
# from PHPParentParser import PHPParentListener
from PHPLexer import PHPLexer as PHPParentLexer
from PHPParser import PHPParser as PHPParentParser
from PHPParser import PHPParserListener as PHPParentListener
class PhpGrammarListener(PHPParentListener):
def enterFunctionInvocation(self, ctx):
print("enterFunctionInvocation " + ctx.getText())
if __name__ == "__main__":
scanner_input = FileStream('test.php')
lexer = PHPParentLexer(scanner_input)
stream = CommonTokenStream(lexer)
parser = PHPParentParser(stream)
tree = parser.htmlDocument()
walker = ParseTreeWalker()
printer = PhpGrammarListener()
walker.walk(printer, tree)
which gives the output
/opt/local/bin/python3.4 /Users/d/PycharmProjects/name/antlr-test.py
enterFunctionInvocation echo("hi")
enterFunctionInvocation another_method("String")
enterFunctionInvocation print("print statement")
Process finished with exit code 0
When I use the following PHPParent.g4 grammar, I get a lot of errors:
grammar PHPParent;
options { tokenVocab=PHPLexer; }
import PHPParser;
After swapping the comments on the Python imports, I get this error:
/opt/local/bin/python3.4 /Users/d/PycharmProjects/name/antlr-test.py
line 1:1 token recognition error at: '?'
line 1:2 token recognition error at: 'p'
line 1:3 token recognition error at: 'h'
line 1:4 token recognition error at: 'p'
line 1:5 token recognition error at: '\n'
...
line 2:8 no viable alternative at input '<('
line 2:14 mismatched input ';' expecting {<EOF>, '<', '{', '}', ')', '?>', 'list', 'global', 'continue', 'return', 'class', 'do', 'switch', 'function', 'break', 'if', 'for', 'foreach', 'while', 'new', 'clone', '&', '!', '-', '~', '@', '$', <INVALID>, 'Interface', 'abstract', 'static', Array, RequireOperator, DecimalNumber, HexNumber, OctalNumber, Float, Boolean, SingleQuotedString, DoubleQuotedString_Start, Identifier, IncrementOperator}
line 3:28 mismatched input ';' expecting {<EOF>, '<', '{', '}', ')', '?>', 'list', 'global', 'continue', 'return', 'class', 'do', 'switch', 'function', 'break', 'if', 'for', 'foreach', 'while', 'new', 'clone', '&', '!', '-', '~', '@', '$', <INVALID>, 'Interface', 'abstract', 'static', Array, RequireOperator, DecimalNumber, HexNumber, OctalNumber, Float, Boolean, SingleQuotedString, DoubleQuotedString_Start, Identifier, IncrementOperator}
line 4:28 mismatched input ';' expecting {<EOF>, '<', '{', '}', ')', '?>', 'list', 'global', 'continue', 'return', 'class', 'do', 'switch', 'function', 'break', 'if', 'for', 'foreach', 'while', 'new', 'clone', '&', '!', '-', '~', '@', '$', <INVALID>, 'Interface', 'abstract', 'static', Array, RequireOperator, DecimalNumber, HexNumber, OctalNumber, Float, Boolean, SingleQuotedString, DoubleQuotedString_Start, Identifier, IncrementOperator}
However I get no errors when running the antlr4 tool over the grammars. I'm stumped here - what could be causing this issue?
$ a4p PHPLexer.g4
warning(146): PHPLexer.g4:363:0: non-fragment lexer rule DoubleQuotedStringBody can match the empty string
$ a4p PHPParser.g4
warning(154): PHPParser.g4:523:0: rule doubleQuotedString contains an optional block with at least one alternative that can match an empty string
$ a4p PHPParent.g4
warning(154): PHPParent.g4:523:0: rule doubleQuotedString contains an optional block with at least one alternative that can match an empty string
|
Import in ANTLR4 is kind of messy.
First, tokenVocab cannot generate the lexer you need. It just means that this grammar is using the tokens of PHPLexer. If you delete PHPLexer.tokens, it won't even compile!
Take a look at PHPParser.g4 where we also use options { tokenVocab=PHPLexer; }. Yet in the Python script we still need to use the lexer from PHPLexer to make it work. So this PHPParentLexer is not usable at all. That's why you got all the errors.
To generate a new lexer out of combined grammar, you need to import it like this:
grammar PHPParent;
import PHPLexer;
However, mode is not supported when importing. PHPLexer itself uses mode a lot. So it's also not an option.
Can we simply replace PHPParentLexer with PHPLexer? Sadly, no. Because PHPParentParser is generated with PHPParentLexer, they are tightly coupled and cannot be used separately. If you use PHPLexer, PHPParentParser also won't work. As for this grammar, thanks to the error recovery, it actually works, but gives some errors.
There seems to be no better way but to rewrite some of the grammar. There are definitely some design issues in this import part of ANTLR4.
|
Better approach to handling sqlalchemy disconnects
|
We've been experimenting with sqlalchemy's disconnect handling, and how it integrates with ORM. We've studied the docs, and the advice seems to be to catch the disconnect exception, issue a rollback() and retry the code.
eg:
import sqlalchemy as SA
retry = 2
while retry:
retry -= 1
try:
for name in session.query(Names):
print name
break
except SA.exc.DBAPIError as exc:
if retry and exc.connection_invalidated:
session.rollback()
else:
raise
I follow the rationale -- you have to rollback any active transactions and replay them to ensure a consistent ordering of your actions.
BUT -- this means a lot of extra code added to every function that wants to work with data. Furthermore, in the case of SELECT, we're not modifying data and the concept of rollback/re-request is not only unsightly, but a violation of the principle of DRY (don't repeat yourself).
I was wondering if others would mind sharing how they handle disconnects with sqlalchemy.
FYI: we're using sqlalchemy 0.9.8 and Postgres 9.2.9
|
The way I like to approach this is to place all my database code in a lambda or closure, and pass that into a helper function that will handle catching the disconnect exception and retrying.
So with your example:
import sqlalchemy as SA
def main():
def query():
for name in session.query(Names):
print name
run_query(query)
def run_query(f, retry=2):
while retry:
retry -= 1
try:
return f() # "break" if query was successful and return any results
except SA.exc.DBAPIError as exc:
if retry and exc.connection_invalidated:
session.rollback()
else:
raise
You can make this more fancy by passing a boolean into run_query to handle the case where you are only doing a read, and therefore want to retry without rolling back.
This helps you satisfy the DRY principle since all the ugly boiler-plate code for managing retries + rollbacks is placed in one location.
|
Django 1.9 deprecation warnings app_label
|
I've just updated to Django v1.8, and while testing my local setup before updating my project I've had a deprecation warning that I've never seen before, nor does it make any sense to me. I may just be overlooking something or misunderstanding the documentation.
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:6: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Difficulty doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Difficulty(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:21: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Zone doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Zone(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:49: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Boss doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Boss(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/raiding/models.py:79: RemovedInDjango19Warning: Model class ankylosguild.apps.raiding.models.Item doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Item(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:14: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Category doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Category(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:36: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Comment doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Comment(ScoreMixin, ProfileMixin, models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:64: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Forum doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Forum(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:88: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.Post doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Post(ScoreMixin, ProfileMixin, models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:119: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.CommentPoint doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class CommentPoint(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/forum/models.py:127: RemovedInDjango19Warning: Model class ankylosguild.apps.forum.models.TopicPoint doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class TopicPoint(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/auctionhouse/models.py:10: RemovedInDjango19Warning: Model class ankylosguild.apps.auctionhouse.models.Auction doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Auction(models.Model):
/Users/neilhickman/Sites/guild/ankylosguild/apps/auctionhouse/models.py:83: RemovedInDjango19Warning: Model class ankylosguild.apps.auctionhouse.models.Bid doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Bid(models.Model):
Now this poses 3 questions for me.
According to the documentation, Options.app_label isn't a requirement unless the model is outside of the application module, which in my case, it isn't. Secondly, this behaviour was deprecated in 1.7 anyway, so why is it even an issue?
The applications are all in the INSTALLED_APPS tuple, so it surely can't be that?
Why would the applications not be loaded before they are called if everything is in the INSTALLED_APPS tuple?
If I am indeed doing something wrong, what is the correct way of doing it as the docs don't really clear up what is causing this problem or how to rectify it.
|
Similar error. In my case the error was:
RemovedInDjango19Warning: Model class django.contrib.sites.models.Site doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
class Site(models.Model):
My solution was:
Added 'django.contrib.sites' to INSTALLED_APPS
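A minimal sketch of the settings change (SITE_ID is the usual companion setting when enabling the sites framework; the value is illustrative):
INSTALLED_APPS = (
    # ...
    'django.contrib.sites',
)
SITE_ID = 1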
|
igraph Graph from numpy or pandas adjacency matrix
|
I have an adjacency matrix stored as a pandas.DataFrame:
node_names = ['A', 'B', 'C']
a = pd.DataFrame([[1,2,3],[3,1,1],[4,0,2]],
index=node_names, columns=node_names)
a_numpy = a.as_matrix()
I'd like to create an igraph.Graph from either the pandas or the numpy adjacency matrices. In an ideal world the nodes would be named as expected.
Is this possible? The tutorial seems to be silent on the issue.
|
In igraph you can use igraph.Graph.Adjacency to create a graph from an adjacency matrix without having to use zip. There are some things to be aware of when a weighted adjacency matrix is used and stored in a np.array or pd.DataFrame.
igraph.Graph.Adjacency can't take an np.array as argument, but that is easily solved using tolist.
Integers in adjacency-matrix are interpreted as number of edges between nodes rather than weights, solved by using adjacency as boolean.
An example of how to do it:
import igraph
import pandas as pd
node_names = ['A', 'B', 'C']
a = pd.DataFrame([[1,2,3],[3,1,1],[4,0,2]], index=node_names, columns=node_names)
# Get the values as np.array, it's more convenient.
A = a.values
# Create graph, A.astype(bool).tolist() or (A / A).tolist() can also be used.
g = igraph.Graph.Adjacency((A > 0).tolist())
# Add edge weights and node labels.
g.es['weight'] = A[A.nonzero()]
g.vs['label'] = node_names # or a.index/a.columns
You can reconstruct your adjacency dataframe using get_adjacency by:
df_from_g = pd.DataFrame(g.get_adjacency(attribute='weight').data,
columns=g.vs['label'], index=g.vs['label'])
(df_from_g == a).all().all() # --> True
|
Normalize numpy array columns in python
|
I have a numpy array where each cell of a specific row represents a value for a feature. I store all of them in a 100*4 matrix.
A B C
1000 10 0.5
765 5 0.35
800 7 0.09
Any idea how I can normalize rows of this numpy.array where each value is between 0 and 1?
My desired output is:
A B C
1 1 1
0.765 0.5 0.7
0.8 0.7 0.18(which is 0.09/0.5)
Thanks in advance :)
|
If I understand correctly, what you want to do is divide by the maximum value in each column. You can do this easily using broadcasting.
Starting with your example array:
import numpy as np
x = np.array([[1000, 10, 0.5],
[ 765, 5, 0.35],
[ 800, 7, 0.09]])
x_normed = x / x.max(axis=0)
print(x_normed)
# [[ 1. 1. 1. ]
# [ 0.765 0.5 0.7 ]
# [ 0.8 0.7 0.18 ]]
x.max(0) takes the maximum over the 0th dimension (i.e. rows). This gives you a vector of size (ncols,) containing the maximum value in each column. You can then divide x by this vector in order to normalize your values such that the maximum value in each column will be scaled to 1.
If x contains negative values you would need to subtract the minimum first:
x_normed = (x - x.min(0)) / x.ptp(0)
Here, x.ptp(0) returns the "peak-to-peak" (i.e. the range, max - min) along axis 0. This normalization also guarantees that the minimum value in each column will be 0.
|
Strange if statement
|
I found this strange if-statement in somebody else's code:
if variable & 1 == 0:
I don't understand it. It should have two ==, right?
Can somebody explain this?
|
The conditional is a bitwise operator comparison:
>>> 1 & 1
1
>>> 0 & 1
0
>>> a = 1
>>> a & 1 == 0
False
>>> b = 0
>>> b & 1 == 0
True
As many of the comments say, for integers this conditional is True for evens and False for odds. The prevalent way to write this is if variable % 2 == 0: or if not variable % 2:
Using timeit we can see that there isn't much difference in performance.
n & 1 ("== 0" and "not")
>>> timeit.Timer("bitwiseIsEven(1)", "def bitwiseIsEven(n): return n & 1 == 0").repeat(4, 10**6)
[0.2037370204925537, 0.20333600044250488, 0.2028651237487793, 0.20192503929138184]
>>> timeit.Timer("bitwiseIsEven(1)", "def bitwiseIsEven(n): return not n & 1").repeat(4, 10**6)
[0.18392395973205566, 0.18273091316223145, 0.1830739974975586, 0.18445897102355957]
n % 2 ("== 0" and "not")
>>> timeit.Timer("modIsEven(1)", "def modIsEven(n): return n % 2 == 0").repeat(4, 10**6)
[0.22193098068237305, 0.22170782089233398, 0.21924591064453125, 0.21947598457336426]
>>> timeit.Timer("modIsEven(1)", "def modIsEven(n): return not n % 2").repeat(4, 10**6)
[0.20426011085510254, 0.2046220302581787, 0.2040550708770752, 0.2044820785522461]
Overloaded Operators:
Both the % and & operators are overloaded.
The bitwise and operator is overloaded for set. s.intersection(t) is equivalent to s & t and returns a "new set with elements common to s and t".
>>> {1} & {1}
set([1])
This doesn't affect our conditional:
>>> def bitwiseIsEven(n):
... return n & 1 == 0
>>> bitwiseIsEven('1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in bitwiseIsEven
TypeError: unsupported operand type(s) for &: 'str' and 'int'
>>> bitwiseIsEven({1})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in bitwiseIsEven
TypeError: unsupported operand type(s) for &: 'set' and 'int'
The modulo operator will also throw TypeError: unsupported operand type(s) for most non-ints.
>>> def modIsEven(n):
... return n % 2 == 0
>>> modIsEven({1})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in modIsEven
TypeError: unsupported operand type(s) for %: 'set' and 'int'
It is overloaded as a string interpolation operator for the old %-formatting. It throws TypeError: not all arguments converted during string formatting if a string is used for the comparison.
>>> modIsEven('1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in modIsEven
TypeError: not all arguments converted during string formatting
This won't throw if the string includes a valid conversion specifier.
>>> modIsEven('%d')
False
|
Python app does not print anything when running detached in docker
|
I have a Python (2.7) app which is started in my dockerfile:
CMD ["python","main.py"]
main.py prints some strings when it is started and goes into a loop afterwards:
print "App started"
while True:
time.sleep(1)
As long as I start the container with the -it flag, everything works as expected:
$ docker run --name=myapp -it myappimage
> App started
And I can see the same output via logs later:
$ docker logs myapp
> App started
If I try to run the same container with the -d flag, the container seems to start normally, but I can't see any output:
$ docker run --name=myapp -d myappimage
> b82db1120fee5f92c80000f30f6bdc84e068bafa32738ab7adb47e641b19b4d1
$ docker logs myapp
$ (empty)
But the container still seems to run;
$ docker ps
Container Status ...
myapp up 4 minutes ...
Attach does not display anything either:
$ docker attach --sig-proxy=false myapp
(working, no output)
Any ideas what's going wrong? Does "print" behave differently when run in the background?
Docker version:
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.2
Git commit (client): a8a31ef
OS/Arch (client): linux/arm
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.2
Git commit (server): a8a31ef
|
Finally I found a solution to see Python output when running daemonized in Docker, thanks to @ahmetalpbalkan over at GitHub. Answering it here myself for further reference:
Using unbuffered output with
CMD ["python","-u","main.py"]
instead of
CMD ["python","main.py"]
solves the problem; you can see the output (both stderr and stdout) via
docker logs myapp
now!
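An alternative that should work just as well (not part of the original fix) is to put ENV PYTHONUNBUFFERED=1 in the Dockerfile; setting that variable forces unbuffered stdout/stderr for every Python process in the image without touching the CMD line.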
|
What should I use instead of syncdb in Django 1.9?
|
Take a look at this:
$ pypy ./manage.py syncdb
/usr/lib64/pypy-2.4.0/site-packages/django/core/management/commands/syncdb.py:24: RemovedInDjango19Warning: The syncdb command will be removed in Django 1.9
warnings.warn("The syncdb command will be removed in Django 1.9", RemovedInDjango19Warning)
(cut)
I ran a quick google search, but could not find the answer - what should I be using instead of syncdb?
|
syncdb is deprecated because of the migration system [1].
Now you can track your changes using makemigrations. This transforms your model changes into python code to make them deployable to another databases.
After you created the migrations you have to apply them: migrate.
So instead of using syncdb you should use makemigrations and then migrate.
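For a typical project that means running:
python manage.py makemigrations
python manage.py migrate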
[1] IMHO: the best new feature of Django.
|
Gettext message catalogues from virtual dir within PYZ for GtkBuilder widgets
|
Is there an established approach to embed gettext locale/xy/LC_MESSAGES/* in a PYZ bundle? Specifically to have Gtks automatic widget translation pick them up from within the ZIP archive.
For other embedded resources pkgutil.get_data or inspect.getsource work well enough. But the system and Python gettext APIs depend on bindtextdomain being supplied a plain old localedir; no resources or strings etc.
So I couldn't contrive a workable or even remotely practical workaround:
Virtual gvfs/gio paths
Now using archive://file%3A%2F%2Fmypkg.pyz%2Fmessages%2F IRIs would be an alternative to read other files directly from a zip. But glibs g_dgettext is still just a thin wrapper around the system lib. And therefore any such URLs can't be used as localedir.
Partially extracting the zip
That's how PyInstaller works, I think. But it's of course somewhat ridiculous to bundle something as a .pyz application, only to have it pre-extracted on each invocation.
Userland gettext .mo/.po extraction
Now reading out the message catalogues manually or just using trivial dicts instead would be an option. But only for in-application strings. That's again no way to have Gtk/GtkBuilder pick them up implicitly.
Thus I had to manually traverse the whole widget tree, Labels, text, inner widgets, markup_text, etc. Possible, but meh.
FUSE mounting
This would be superflaky. But of course, the zip contents could be accessed gvfs-mount etc. Just seems like a certain memory hog. And I doubt it's gonna stay reliable with e.g. two app instances running, or a previous uncleanly terminated. (Like dunno, due to a system library, like gettext, stumbling over a fragile zip fuse point..)
Gtk signal/event for translation(?)
I've found squat about this, so I'm somewhat certain there's no alternative mechanism for widget translations in Gtk/PyGtk/GI. Gtk/Builder expects and is tied to gettext.
Is there a more dependable approach perhaps?
|
This is my example Glade/GtkBuilder/Gtk application. I've defined a function xml_gettext which transparently translates glade XML and passes it to the gtk.Builder instance as a string.
import mygettext as gettext
import os
import sys
import gtk
from gtk import glade
glade_xml = '''<?xml version="1.0" encoding="UTF-8"?>
<interface>
<!-- interface-requires gtk+ 3.0 -->
<object class="GtkWindow" id="window1">
<property name="can_focus">False</property>
<signal name="delete-event" handler="onDeleteWindow" swapped="no"/>
<child>
<object class="GtkButton" id="button1">
<property name="label" translatable="yes">Welcome to Python!</property>
<property name="use_action_appearance">False</property>
<property name="visible">True</property>
<property name="can_focus">True</property>
<property name="receives_default">True</property>
<property name="use_action_appearance">False</property>
<signal name="pressed" handler="onButtonPressed" swapped="no"/>
</object>
</child>
</object>
</interface>'''
class Handler:
def onDeleteWindow(self, *args):
gtk.main_quit(*args)
def onButtonPressed(self, button):
print('locale: {}\nLANGUAGE: {}'.format(
gettext.find('myapp','locale'),os.environ['LANGUAGE']))
def main():
builder = gtk.Builder()
translated_xml = gettext.xml_gettext(glade_xml)
builder.add_from_string(translated_xml)
builder.connect_signals(Handler())
window = builder.get_object("window1")
window.show_all()
gtk.main()
if __name__ == '__main__':
main()
I've archived my locale directories into locale.zip which is included in the pyz bundle.
These are the contents of locale.zip:
(u'/locale/fr_FR/LC_MESSAGES/myapp.mo',
u'/locale/en_US/LC_MESSAGES/myapp.mo',
u'/locale/en_IN/LC_MESSAGES/myapp.mo')
To expose locale.zip as a filesystem I use ZipFS from fs.
Fortunately, Python's gettext is not GNU gettext: it is pure Python; it doesn't use GNU gettext but mimics it. gettext has two core functions, find and translation. I've redefined these two in a separate module named mygettext to make them use files from the ZipFS.
gettext uses os.path, os.path.exists and open to find files and open them, which I replace with the equivalent ones from the fs module.
These are the contents of my application:
pyzzer.pyz -i glade_v1.pyz
# A zipped Python application
# Built with pyzzer
Archive contents:
glade_dist/glade_example.py
glade_dist/locale.zip
glade_dist/__init__.py
glade_dist/mygettext.py
__main__.py
Because pyz files have text, usually a shebang, prepended to them, I skip this line after opening the pyz file in binary mode. Other modules in the application that want to use the gettext.gettext function should instead import zfs_gettext from mygettext and make it an alias for _.
Here goes mygettext.py.
from errno import ENOENT
from gettext import _expand_lang, _translations, _default_localedir
from gettext import GNUTranslations, NullTranslations
import gettext
import copy
import os
import sys
from xml.etree import ElementTree as ET
import zipfile
import fs
from fs.zipfs import ZipFS
zfs = None
if zipfile.is_zipfile(sys.argv[0]):
try:
myself = open(sys.argv[0],'rb')
next(myself)
zfs = ZipFS(ZipFS(myself,'r').open('glade_dist/locale.zip','rb'))
except:
pass
else:
try:
zfs = ZipFS('locale.zip','r')
except:
pass
if zfs:
os.path = fs.path
os.path.exists = zfs.exists
open = zfs.open
def find(domain, localedir=None, languages=None, all=0):
# Get some reasonable defaults for arguments that were not supplied
if localedir is None:
localedir = _default_localedir
if languages is None:
languages = []
for envar in ('LANGUAGE', 'LC_ALL', 'LC_MESSAGES', 'LANG'):
val = os.environ.get(envar)
if val:
languages = val.split(':')
break
if 'C' not in languages:
languages.append('C')
# now normalize and expand the languages
nelangs = []
for lang in languages:
for nelang in _expand_lang(lang):
if nelang not in nelangs:
nelangs.append(nelang)
# select a language
if all:
result = []
else:
result = None
for lang in nelangs:
if lang == 'C':
break
mofile = os.path.join(localedir, lang, 'LC_MESSAGES', '%s.mo' % domain)
mofile_lp = os.path.join("/usr/share/locale-langpack", lang,
'LC_MESSAGES', '%s.mo' % domain)
# first look into the standard locale dir, then into the
# langpack locale dir
# standard mo file
if os.path.exists(mofile):
if all:
result.append(mofile)
else:
return mofile
# langpack mofile -> use it
if os.path.exists(mofile_lp):
if all:
result.append(mofile_lp)
else:
return mofile_lp
return result
def translation(domain, localedir=None, languages=None,
class_=None, fallback=False, codeset=None):
if class_ is None:
class_ = GNUTranslations
mofiles = find(domain, localedir, languages, all=1)
if not mofiles:
if fallback:
return NullTranslations()
raise IOError(ENOENT, 'No translation file found for domain', domain)
# Avoid opening, reading, and parsing the .mo file after it's been done
# once.
result = None
for mofile in mofiles:
key = (class_, os.path.abspath(mofile))
t = _translations.get(key)
if t is None:
with open(mofile, 'rb') as fp:
t = _translations.setdefault(key, class_(fp))
# Copy the translation object to allow setting fallbacks and
# output charset. All other instance data is shared with the
# cached object.
t = copy.copy(t)
if codeset:
t.set_output_charset(codeset)
if result is None:
result = t
else:
result.add_fallback(t)
return result
def xml_gettext(xml_str):
root = ET.fromstring(xml_str)
labels = root.findall('.//*[@name="label"][@translatable="yes"]')
for label in labels:
label.text = _(label.text)
return ET.tostring(root)
gettext.find = find
gettext.translation = translation
_ = zfs_gettext = gettext.gettext
gettext.bindtextdomain('myapp','locale')
gettext.textdomain('myapp')
The following two shouldn't be called because glade doesn't use Python gettext.
glade.bindtextdomain('myapp','locale')
glade.textdomain('myapp')
|
Creating classes with a lot of imported functions here and there
|
Let's say I have a lot of functions in alotoffunc.py that are used by more than one type of object.
Let's say ObjectI, ObjectII and ObjectXI all use some functions in alotoffunc.py. Each of the objects uses a different set of functions, but all the objects have the variable object.table.
alotoffunc.py:
def abc(obj, x):
return obj.table(x) * 2
def efg(obj, x):
return obj.table(x) * obj.table(x)
def hij(obj, x, y):
return obj.table(x) * obj.table(y)
def klm(obj, x, y):
return obj.table(x) *2 - obj.table(y)
And then I import the functions and overload them:
import alotoffunc
class ObjectI:
def abc(self, x):
return alotoffunc.abc(self, x)
def efg(self, x):
return alotoffunc.efg(self, x)
class ObjectII:
def efg(self, x):
return alotoffunc.efg(self, x)
def klm(self, x, y):
return alotoffunc.klm(self, x, y)
class ObjectXI:
def abc(self, x):
return alotoffunc.abc(self, x)
def klm(self, x, y):
return alotoffunc.klm(self, x, y)
It looks like a big mess now; how should I go about building my object classes and arranging my alotoffunc.py?
|
(1) You can have a base class that implements all the methods then override the unnecessary ones to raise a NotImplementedError in the subclasses.
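A minimal sketch of option (1), reusing the OP's alotoffunc module (the class layout here is just illustrative):
import alotoffunc
class BaseObject:
    def abc(self, x):
        return alotoffunc.abc(self, x)
    def efg(self, x):
        return alotoffunc.efg(self, x)
    def hij(self, x, y):
        return alotoffunc.hij(self, x, y)
    def klm(self, x, y):
        return alotoffunc.klm(self, x, y)
class ObjectI(BaseObject):
    def hij(self, x, y):
        raise NotImplementedError  # ObjectI doesn't support hij
    def klm(self, x, y):
        raise NotImplementedError  # ... nor klm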
(2) You can have mixins to reduce repetition:
import alotoffunc
class MixinAbc:
def abc(self, x):
return alotoffunc.abc(self, x)
class MixinEfg:
def efg(self, x):
return alotoffunc.efg(self, x)
class MixinKlm:
def klm(self, x, y):
return alotoffunc.klm(self, x, y)
class ObjectI(MixinAbc, MixinEfg):
pass
class ObjectII(MixinEfg, MixinKlm):
pass
class ObjectXI(MixinAbc, MixinKlm):
pass
You can also combine this method with that of @cpburnz.
|
mean, nanmean and warning: Mean of empty slice
|
Say I construct two numpy arrays:
a = np.array([np.NaN, np.NaN])
b = np.array([np.NaN, np.NaN, 3])
Now I find that np.mean returns nan for both a and b:
>>> np.mean(a)
nan
>>> np.mean(b)
nan
Since numpy 1.8, we've been blessed with nanmean, which ignores nan values:
>>> np.nanmean(b)
3.0
However, when the array has nothing but nan values, it raises a warning:
>>> np.nanmean(a)
nan
C:\python-3.4.3\lib\site-packages\numpy\lib\nanfunctions.py:598: RuntimeWarning: Mean of empty slice
warnings.warn("Mean of empty slice", RuntimeWarning)
I don't like suppressing warnings; is there a better function I can use to get the behaviour of nanmean without that warning?
|
I really can't see any good reason not to just suppress the warning.
The safest way would be to use the warnings.catch_warnings context manager to suppress the warning only where you anticipate it occurring - that way you won't miss any additional RuntimeWarnings that might be unexpectedly raised in some other part of your code:
import numpy as np
import warnings
x = np.ones((1000, 1000)) * np.nan
# I expect to see RuntimeWarnings in this block
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=RuntimeWarning)
foo = np.nanmean(x, axis=1)
dawg's solution is elegant, but ultimately any additional steps that you have to take in order to avoid computing np.nanmean on an array of all NaNs are going to incur some extra overhead that you could avoid by just suppressing the warning. Also your intent will be much more clearly reflected in the code.
|
auth_user error with Django 1.8 and syncdb / migrate
|
When upgrading to Django 1.8 (with zc.buildout) and running syncdb or migrate, I get this message:
django.db.utils.ProgrammingError: relation "auth_user" does not exist
One of my models contains django.contrib.auth.models.User:
user = models.ForeignKey(
User, related_name='%(app_label)s_%(class)s_user',
blank=True, null=True, editable=False
)
Downgrading to Django 1.7 removes the error. Do I have to include the User object differently in Django 1.8?
|
I fix this by running auth first, then the rest of my migrations:
python manage.py migrate auth
python manage.py migrate
|
Django-cms installs, but pull-downs and other JS doesn't work - ideas for fixing?
|
I've installed Django-CMS onto an existing site and while it isn't throwing errors, it isn't working. In particular, the header on a given page appears when I use "/?edit" but none of the pull down menus work, and very little (possibly none) of the JavaScript works.
Other facets:
I've done this on a local install of Django with largely development components (for example, SQLite, and the server provided via the django tutorial)
I've done this with the same result on an install on WebFactional using MySQL and an apache server
The install is basically the process described here:
http://docs.django-cms.org/en/support-3.0.x/how_to/install.html
The DB install worked w/out errors and the /admin site has a section for CMS
The CMS check showed 1 test skipped, and all other tests passed.
I'm using Django 1.6.5
This isn't the only time I've had trouble getting django to deliver javascript in a way that executes properly on a project - I had problems with fairly simple drop down menus in the past that I never resolved.
Any ideas on what I could be doing wrong? My configuration changes could be seen here:
https://github.com/bethlakshmi/GBE2/compare/GBE-398
The Local Settings (recent edit)
DEBUG = True
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['*domain of server*']
LOGIN_REDIRECT_URL = '/'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': '*db name*',
'USER': '*username*',
'PASSWORD': '*password*',
'HOST': '',
'PORT': '',
}
}
STATIC_ROOT = '*path to the static host in the file system*'
#STATIC_ROOT = '/'
EMAIL_HOST = '* email settings*'
EMAIL_HOST_USER = '*email settings*'
EMAIL_HOST_PASSWORD = '*email settings*'
DEFAULT_FROM_EMAIL = '*valid email*'
SERVER_EMAIL = '*valid email*'
domain of server - the site is hosted on a subdomain: - prototypecms.gbeadmin.webfactional.com, the Allowed Host is "gbeadmin.webfactional.com"
db name, username, password - the correct settings for the locally hosted database. The website itself works just fine with these database settings. I can log in with the same info using PHP Admin from the console. And when I look in the DB, I see the cms_* tables that came from django-cms during a syncdb.
path to the static host in the file system - its a valid location in the server's file system. The CSS and JS are there and when I download the source page in the browser and look at /static links it references, I get the correct JS or CSS that I would expect from the server. The host recommends a particular separate area for static files and a particular configuration - which I've followed and gotten working successfully in the pre-django-cms application. If it wasn't working, I believe the CSS would not render correctly, and it works fine.
email settings - are the email settings for the server. Right now they are not working and need to be tested and fixed, but I have a large amount of doubt that email settings could be a factor here.
valid email the various email settings used by django in creating a mail. These are valid addresses relevant to the business.
|
After staring at this for about 1.5 weeks, I think I found the answer.
The eventual process to the solution was to get the tutorial up and running in the same environment and start slavishly comparing settings and templates. With a working tutorial, I could see what was there and slavishly imitate it.
The settings.py and local_settings.py was a rat hole - they worked just fine.
The eventual answer was that the pre-existing site and django-cms were contending over base.html and the block for "content" - there was a mapping for "/" in the base site urls, which meant that it didn't connect to a template and didn't have any content blocks. This really seemed to confuse the Django-CMS site to the point where it would offer no pulldowns. Once I got base.html (now base.tmpl) to more closely imitate the tutorial, I was able to get the pull-downs working.
The commit of the original solution was:
https://github.com/bethlakshmi/GBE2/commit/8286a9afd6e3ba8688dfefc4c9d888f5a2fd320f
And on the branch here:
https://github.com/bethlakshmi/GBE2/tree/GBE-398
There have been many more refinements.
The areas to look at would be gbe/base.tmpl, and also the landing and landing_page areas as the first thing that got executed was predictably the url resolution of "/" - so that was a particular blocker.
This is the leap forward I needed, but still a partial solution as there is a massive amount of integration yet to be done here.
|
Making an object x such that "x in [x]" returns False
|
If we make a pathological potato like this:
>>> class Potato:
... def __eq__(self, other):
... return False
... def __hash__(self):
... return random.randint(1, 10000)
...
>>> p = Potato()
>>> p == p
False
We can break sets and dicts this way (note: it's the same even if __eq__ returns True, it's mucking with the hash that broke them):
>>> p in {p}
False
>>> p in {p: 0}
False
Also len({p: 0, p: 0}) == 2, and {p: 0}[p] raises KeyError, basically all mapping related stuff goes out the window, as expected.
But what I didn't expect is that we can't break lists
>>> p in [p]
True
Why is that? It seems that list.__contains__ iterates, but it's first checking identity before checking equality. Since it is not the case that identity implies equality (see for example NaN object), what is the reason for lists short-circuiting on identity comparisons?
|
list, tuple, etc., do indeed do an identity check before an equality check, and this behavior is motivated by these invariants:
assert a in [a]
assert a in (a,)
assert [a].count(a) == 1
for a in container:
assert a in container # this should ALWAYS be true
Unfortunately, dicts, sets, and friends operate by hashes, so if you mess with those you can indeed effectively break them.
See this issue and this issue for some history.
|
My answer is changing with the same code
|
I am a complete python beginner and I am trying to solve this problem :
A number is called triangular if it is the sum of the first n positive
integers for some n For example, 10 is triangular because 10 = 1+2+3+4
and 21 is triangular because 21 = 1+2+3+4+5+6. Write a Python program
to find the smallest 6-digit triangular number. Enter it as your
answer below.
I have written this program:
n = 0
trinum = 0
while len(str(trinum)) < 6:
    trinum = n*(n+1)/2
    n += 1
print(trinum)
And it only works in the Python I have installed on my computer if I say while len(str(trinum)) < 8:, but it is supposed to be while len(str(trinum)) < 6:. So I went to http://www.skulpt.org/ and ran my code there, and it gave me the right answer with while len(str(trinum)) < 6: like it's supposed to. But it doesn't work with 6 in the Python I have installed on my computer. Does anyone have any idea what's going on?
|
Short Answer
In Python 3, the / operator always performs floating point (true) division. So trinum becomes a float, and its string form carries a trailing .0 (on the first pass you get str(trinum) == '0.0'). Those extra characters inflate the length check, which isn't what you want.
You're looking for integer division. The operator for that is //.
Long Answer
The division operator changed in Python 2.x to 3.x. Previously, the type of the result was dependent on the arguments. So 1/2 does integer division, but 1./2 does floating point division.
To clean this up, a new operator was introduced: //. This operator will always do integer division.
So in Python 3.x, this expression (4 * 5)/2 is equal to 10.0. Note that this number is less than 100, but it has 4 characters in it.
If instead, we did (4*5)//2, we would get the integer 10 back. Which would allow your condition to hold true.
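For reference, here is the question's program with / swapped for //, which keeps trinum an int and makes the < 6 check behave:
n = 0
trinum = 0
while len(str(trinum)) < 6:
    trinum = n * (n + 1) // 2   # floor division: trinum stays an int
    n += 1
print(trinum)   # 100128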
|
Finding highest product of three numbers
|
Given an array of ints, arrayofints, find the highest product, Highestproduct, you can get from three of the integers. The input array of ints will always have at least three integers.
So I've popped three numbers from arrayofints and stuck them in highestproduct:
Highestproduct = arrayofints[:2]
for item in arrayofints[3:]:
If min(Highestproduct) < item:
Highestproduct[highestproduct.index(min(Highestproduct))] = item
If the min of highestproduct is less than item: replace the lowest number with the current number.
This would end up with highest product, but apparently there is a better solution. What's wrong with my approach? Would my solution be O(n)?
|
Keep track of the two minimal elements and three maximal elements, the answer should be min1 * min2 * max1 or max1 * max2 * max3.
To get the maximum product of 3 ints we have to choose 3 maximum elements. However there is a catch that we can substitute 2 of the smallest of 3 max elements with the 2 min ints. If both smallest ints are negative their product is positive so min1 * min2 might be bigger than max2 * max3 (where max2 and max3 are 2 of the smallest of 3 max elements from the array).
This runs in O(n) time.
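A minimal sketch of that idea (the function name is mine; heapq.nlargest/nsmallest with a fixed k keep this a single O(n) pass):
import heapq

def highest_product_of_three(nums):
    max1, max2, max3 = heapq.nlargest(3, nums)   # three largest
    min1, min2 = heapq.nsmallest(2, nums)        # two smallest (possibly large negatives)
    return max(max1 * max2 * max3, max1 * min1 * min2)

print(highest_product_of_three([-10, -10, 1, 3, 2]))  # 300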
|
PySpark groupByKey returning pyspark.resultiterable.ResultIterable
|
I am trying to figure out why my groupByKey is returning the following:
[(0, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a210>), (1, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a4d0>), (2, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a390>), (3, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a290>), (4, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a450>), (5, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a350>), (6, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a1d0>), (7, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a490>), (8, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a050>), (9, <pyspark.resultiterable.ResultIterable object at 0x7fc659e0a650>)]
I have flatMapped values that look like this:
[(0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D'), (0, u'D')]
I'm doing just a simple:
groupRDD = columnRDD.groupByKey()
|
What you're getting back is an object which allows you to iterate over the results. You can turn the results of groupByKey into a list by calling list() on the values, e.g.
example = sc.parallelize([(0, u'D'), (0, u'D'), (1, u'E'), (2, u'F')])
example.groupByKey().collect()
# Gives [(0, <pyspark.resultiterable.ResultIterable object ......]
example.groupByKey().map(lambda x : (x[0], list(x[1]))).collect()
# Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]
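Equivalently, mapValues applies list to just the value side of each pair, which some find a little more readable (same example RDD assumed):
example.groupByKey().mapValues(list).collect()
# Gives [(0, [u'D', u'D']), (1, [u'E']), (2, [u'F'])]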
|
Python fails to open 11gb csv in r+ mode but opens in r mode
|
I'm having problems with some code that loops through a bunch of .csvs and deletes the final line if there's nothing in it (i.e. files that end with the \n newline character)
My code works successfully on all files except one, which is the largest file in the directory at 11gb. The second largest file is 4.5gb.
The line it fails on is simply:
with open(path_str,"r+") as my_file:
and I get the following message:
IOError: [Errno 22] invalid mode ('r+') or filename: 'F:\\Shapefiles\\ab_premium\\processed_csvs\\a.csv'
The path_str I create using os.file.join to avoid errors, and I tried renaming the file to a.csv just to make sure there wasn't anything odd going on with the filename. This made no difference.
Even more strangely, the file is happy to open in r mode. I.e. the following code works fine:
with open(path_str,"r") as my_file:
I have tried navigating around the file in read mode, and it's happy to read characters at the start, end, and in the middle of the file.
Does anyone know of any limits on the size of file that Python can deal with or why I might be getting this error? I'm on Windows 7 64bit and have 16gb of RAM.
|
The default I/O stack in Python 2 is layered over CRT FILE streams. On Windows these are built on top of a POSIX emulation API that uses file descriptors (which in turn is layered over the user-mode Windows API, which is layered over the kernel-mode I/O system, which itself is a deeply layered system based on I/O request packets; the hardware is down there somewhere...). In the POSIX layer, opening a file with _O_RDWR | _O_TEXT mode (as in "r+"), requires seeking to the end of the file to remove CTRL+Z, if it's present. Here's a quote from the CRT's fopen documentation:
Open in text (translated) mode. In this mode, CTRL+Z is interpreted as
an end-of-file character on input. In files opened for reading/writing
with "a+", fopen checks for a CTRL+Z at the end of the file and
removes it, if possible. This is done because using fseek and ftell to
move within a file that ends with a CTRL+Z, may cause fseek to behave
improperly near the end of the file.
The problem here is that the above check calls the 32-bit _lseek (bear in mind that sizeof long is 4 bytes on 64-bit Windows, unlike most other 64-bit platforms), instead of _lseeki64. Obviously this fails for an 11 GB file. Specifically, SetFilePointer fails because it gets called with a NULL value for lpDistanceToMoveHigh. Here's the return value and LastErrorValue for the latter call:
0:000> kc 2
Call Site
KERNELBASE!SetFilePointer
MSVCR90!lseek_nolock
0:000> r rax
rax=00000000ffffffff
0:000> dt _TEB @$teb LastErrorValue
ntdll!_TEB
+0x068 LastErrorValue : 0x57
The error code 0x57 is ERROR_INVALID_PARAMETER. This is referring to lpDistanceToMoveHigh being NULL when trying to seek from the end of a large file.
To work around this problem with CRT FILE streams, I recommend opening the file using io.open instead. This is a backported implementation of Python 3's I/O stack. It always opens files in raw binary mode (_O_BINARY), and it implements its own buffering and text-mode layers on top of the raw layer.
>>> import io
>>> f = io.open('a.csv', 'r+')
>>> f
<_io.TextIOWrapper name='a.csv' encoding='cp1252'>
>>> f.buffer
<_io.BufferedRandom name='a.csv'>
>>> f.buffer.raw
<_io.FileIO name='a.csv' mode='rb+'>
>>> f.seek(0, os.SEEK_END)
11811160064L
|
How can i get all models in django 1.8
|
I am using this code in my admin.py
from django.db.models import get_models, get_app
for model in get_models(get_app('myapp')):
admin.site.register(model)
But i get warning that get_models is deprecated
How can i do that in django 1.8
|
This should work,
from django.apps import apps
apps.get_models()
The get_models method returns a list of all installed models. You can also pass three keyword arguments include_auto_created, include_deferred and include_swapped.
If you want to get the models for a specific app, you can do something like this.
from django.apps import apps
myapp = apps.get_app_config('myapp')
myapp.models
This will return an OrderedDict instance of the models for that app.
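So the loop from the question could be rewritten along these lines for 1.8 (a sketch; 'myapp' is your app label):
from django.apps import apps
from django.contrib import admin

for model in apps.get_app_config('myapp').get_models():
    admin.site.register(model)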
|
Error packaging Kivy with numpy library for Android using buildozer
|
I am trying to create an Android package of my Kivy application using buildozer but I am getting this error when I try to include the numpy:
A summary of the error:
compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/include -Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/home/joao/github/buildozer/.buildozer/android/platform/python-for-android/build/python-install/include/python2.7 -Ibuild/src.linux-x86_64-2.7/numpy/core/src/multiarray -Ibuild/src.linux-x86_64-2.7/numpy/core/src/umath -c'
ccache: numpy/linalg/lapack_litemodule.c
ccache: numpy/linalg/python_xerbla.c
/usr/bin/gfortran -Wall -lm build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib -L/home/joao/github/buildozer/.buildozer/android/platform/python-for-android/build/python-install/lib -Lbuild/temp.linux-x86_64-2.7 -llapack -lblas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
/usr/bin/ld: build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: Relocations in generic ELF (EM: 40)
build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status
unable to execute _configtest: Exec format error
error: Command "/usr/bin/gfortran -Wall -lm build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib -L/home/joao/github/buildozer/.buildozer/android/platform/python-for-android/build/python-install/lib -Lbuild/temp.linux-x86_64-2.7 -llapack -lblas -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1
Does anyone know how to solve it?
P.S. I am using Ubuntu 14.04 64-bit
|
Try sudo apt-get install libatlas-base-dev; it looks like you're missing some libraries.
|
Append to a list defined in a tuple - is it a bug?
|
So I have this code:
tup = ([1,2,3],[7,8,9])
tup[0] += (4,5,6)
which generates this error:
TypeError: 'tuple' object does not support item assignment
While this code:
tup = ([1,2,3],[7,8,9])
try:
tup[0] += (4,5,6)
except TypeError:
print tup
prints this:
([1, 2, 3, 4, 5, 6], [7, 8, 9])
Is this behavior expected?
Note
I realize this is not a very common use case. However, while the error is expected, I did not expect the list change.
|
Yes it's expected.
A tuple cannot be changed. A tuple, like a list, is a structure that points to other objects. It doesn't care about what those objects are. They could be strings, numbers, tuples, lists, or other objects.
So doing anything to one of the objects contained in the tuple, including appending to that object if it's a list, isn't relevant to the semantics of the tuple.
(Imagine if you wrote a class that had methods on it that cause its internal state to change. You wouldn't expect it to be impossible to call those methods on an object based on where it's stored).
Or another example:
>>> l1 = [1, 2, 3]
>>> l2 = [4, 5, 6]
>>> t = (l1, l2)
>>> l3 = [l1, l2]
>>> l3[1].append(7)
Two mutable lists referenced by a list and by a tuple. Should I be able to do the last line (answer: yes). If you think the answer's no, why not? Should t change the semantics of l3 (answer: no).
If you want an immutable object of sequential structures, it should be tuples all the way down.
Why does it error?
This example uses the infix operator:
Many operations have an "in-place" version. The following functions
provide a more primitive access to in-place operators than the usual
syntax does; for example, the statement x += y is equivalent to x =
operator.iadd(x, y). Another way to put it is to say that z =
operator.iadd(x, y) is equivalent to the compound statement z = x; z
+= y.
https://docs.python.org/2/library/operator.html
So this:
l = [1, 2, 3]
tup = (l,)
tup[0] += (4,5,6)
is equivalent to this:
l = [1, 2, 3]
tup = (l,)
x = tup[0]
x = x.__iadd__([4, 5, 6]) # like extend, but returns x instead of None
tup[0] = x
The __iadd__ line succeeds, and modifies the first list. So the list has been changed. The __iadd__ call returns the mutated list.
The second line tries to assign the list back to the tuple, and this fails.
So, at the end of the program, the list has been extended but the second part of the += operation failed. For the specifics, see this question.
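If the goal is simply to grow the list that the tuple refers to, calling the list's own mutating method sidesteps the failing tuple item assignment entirely:
tup = ([1, 2, 3], [7, 8, 9])
tup[0].extend([4, 5, 6])   # mutates the list in place; nothing is assigned back into the tuple
print(tup)                 # ([1, 2, 3, 4, 5, 6], [7, 8, 9])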
|
How to find a Python package's dependencies
|
How can you programmatically get a Python package's list of dependencies?
The standard setup.py has these documented, but I can't find an easy way to access it from either Python or the command line.
Ideally, I'm looking for something like:
$ pip install somepackage --only-list-deps
kombu>=3.0.8
billiard>=3.3.0.13
boto>=2.26
or:
>>> import package_deps
>>> package = package_deps.find('somepackage')
>>> print package.dependencies
['kombu>=3.0.8', 'billiard>=3.3.0.13', 'boto>=2.26']
Note, I'm not talking about importing a package and finding all referenced modules. While this might find most of the dependent packages, it wouldn't be able to find the minimum version number required. That's only stored in the setup.py.
|
Try to use show command in pip, for example:
$ pip show tornado
---
Name: tornado
Version: 4.1
Location: *****
Requires: certifi, backports.ssl-match-hostname
Update (retrieve deps with specified version):
from pip._vendor import pkg_resources
_package_name = 'somepackage'
_package = pkg_resources.working_set.by_key[_package_name]
print([str(r) for r in _package.requires()]) # retrieve deps from setup.py
Output: ['kombu>=3.0.8',
'billiard>=3.3.0.13',
'boto>=2.26']
|
Scope of variables in python decorator
|
I'm having a very weird problem in a Python 3 decorator.
If I do this:
def rounds(nr_of_rounds):
    def wrapper(func):
        @wraps(func)
        def inner(*args, **kwargs):
            return nr_of_rounds
        return inner
    return wrapper
it works just fine. However, if I do this:
def rounds(nr_of_rounds):
    def wrapper(func):
        @wraps(func)
        def inner(*args, **kwargs):
            lst = []
            while nr_of_rounds > 0:
                lst.append(func(*args, **kwargs))
                nr_of_rounds -= 1
            return max(lst)
        return inner
    return wrapper
I get:
while nr_of_rounds > 0:
UnboundLocalError: local variable 'nr_of_rounds' referenced before assignment
In other words, I can use nr_of_roundsin the inner function if I use it in a return, but I can't do anything else with it. Why is that?
|
Since nr_of_rounds is picked up by the closure, you can think of it as a "read-only" variable. If you want to write to it (e.g. to decrement it), you need to tell Python explicitly -- in this case, the Python 3.x nonlocal keyword would work.
As a brief explanation, what CPython does when it encounters a function definition is look at the code and decide whether each variable is local or non-local. Local variables (by default) are anything that appears on the left-hand side of an assignment statement, loop variables and the input arguments. Every other name is non-local. This allows some neat optimizations1. To use a non-local variable the same way you would a local, you need to tell python explicitly either via a global or nonlocal statement. When python encounters something that it thinks should be a local, but really isn't, you get an UnboundLocalError.
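A minimal sketch of the question's decorator with nonlocal applied (Python 3 only; note that the decremented counter persists across calls to the decorated function, so in practice you may prefer to copy it into a local such as remaining = nr_of_rounds instead):
from functools import wraps

def rounds(nr_of_rounds):
    def wrapper(func):
        @wraps(func)
        def inner(*args, **kwargs):
            nonlocal nr_of_rounds   # allow the closure variable to be rebound
            lst = []
            while nr_of_rounds > 0:
                lst.append(func(*args, **kwargs))
                nr_of_rounds -= 1
            return max(lst)
        return inner
    return wrapper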
1 The CPython bytecode generator turns local names into indices into an array, so that local name lookup (the LOAD_FAST bytecode instruction) is as fast as an array index plus the normal bytecode overhead.
|
How can I resolve 'django_content_type already exists'?
|
After upgrading to django 1.8 I'm receiving this error during migration:
ProgrammingError: relation "django_content_type" already exists
I'd be interested in the background behind this error, but more importantly,
How can I resolve it?
|
Initial migrations on a project can sometimes be troubleshot using --fake-initial
python manage.py migrate --fake-initial
It's new in 1.8. In 1.7, --fake-initial was an implicit default, but explicit in 1.8.
From the Docs:
The --fake-initial option can be used to allow Django to skip an app's initial migration if all database tables with the names of all models created by all CreateModel operations in that migration already exist. This option is intended for use when first running migrations against a database that preexisted the use of migrations. This option does not, however, check for matching database schema beyond matching table names and so is only safe to use if you are confident that your existing schema matches what is recorded in your initial migration.
https://docs.djangoproject.com/en/1.8/ref/django-admin/#django-admin-option---fake-initial
|
Joining elements in a list without the join command
|
I need to join the elements in a list without using the join command, so if for example I have the list:
[12,4,15,11]
The output should be:
1241511
Here is my code so far:
def lists(list1):
    answer = 0
    h = len(list1)
    while list1 != []:
        answer = answer + list1[0] * 10 ** h
        h = h - 1
        list1.pop(0)
    print(answer)
But, in the end, the answer ends up being 125610 which is clearly wrong.
I think the logic is OK, but I can't find the problem?
|
If you just want to print the number rather than return an actual int:
>>> a = [12,4,15,11]
>>> print(*a, sep='')
1241511
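If you need the actual integer rather than printed output, a small sketch that fixes the arithmetic idea from the question is to shift by one decimal digit at a time, so multi-digit items keep their width:
a = [12, 4, 15, 11]
result = 0
for item in a:
    for digit in str(item):
        result = result * 10 + int(digit)
print(result)  # 1241511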
|
How / why does Python type hinting syntax work?
|
I have just seen the following example in PEP 484:
def greeting(name: str) -> str:
return 'Hello ' + name
print(greeting('Martin'))
print(greeting(1))
As expected, this does not work in Python 2:
File "test.py", line 1
def greeting(name: str) -> str:
^
SyntaxError: invalid syntax
However, it works for Python 3:
Hello Martin
Traceback (most recent call last):
File "test.py", line 5, in <module>
print(greeting(1))
File "test.py", line 2, in greeting
return 'Hello ' + name
TypeError: Can't convert 'int' object to str implicitly
This was unexpected. It does not really check types yet, as you can see with the following example (it runs, but does not throw an exception):
def greeting(name: str) -> int:
return 'Hello ' + name
print(greeting('Martin'))
It seems as if whatever comes after the : has to be a name (here a function), but the function seems to be ignored:
def aha(something):
print("aha")
return something+"!"
def greeting(name: aha, foo) -> int:
return 'Hello ' + name + foo
print(greeting('Martin', 'ad'))
The same seems to be true for the name after ->.
Is this type hinting syntax using something else (like Java Modeling language makes use of comments)? When was this syntax introduced to Python? Is there a way to do static type checking already with this Syntax? Does it always break Python 2 compatibility?
|
There is no type hinting going on here. All you did was provide annotations; these were introduced with PEP 3107 (only in Python 3, there is no support for this in Python 2); they let you annotate arguments and return values with arbitrary information for later inspection:
>>> greeting.__annotations__
{'name': <class 'str'>, 'return': <class 'str'>}
They are otherwise not consulted at all here. Instead, the error message you got is from trying to concatenate string and integer values in the body of the function:
>>> 'Hello ' + 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'int' object to str implicitly
It is a custom type error aimed at providing additional information as to why the str + int concatenation failed; it is thrown by the str.__add__ method for any type that is not str:
>>> ''.__add__(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'int' object to str implicitly
>>> ''.__add__(True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: Can't convert 'bool' object to str implicitly
PEP 484 then proposes to make use of those annotations to do actual static type checking with additional tools, but as the introduction of the PEP states:
While these annotations are available at runtime through the usual __annotations__ attribute, no type checking happens at runtime. Instead, the proposal assumes the existence of a separate off-line type checker which users can run over their source code voluntarily. Essentially, such a type checker acts as a very powerful linter.
Emphasis in the original.
The PEP was inspired by existing tools that use PEP 3107 annotations; specifically the mypy project (which is looping right back by adopting PEP 484), but also the type hinting support in the PyCharm IDE and the pytypedecl project. See Guido van Rossum's original email kickstarting this effort as well as a follow-up email.
mypy apparently supports Python 2 by preprocessing the annotations, removing them before byte-compiling the source code for you, but you otherwise cannot normally use the annotation syntax in Python code meant to work in Python 2.
PEP 484 also describes the use of stub files, which sit next to the regular Python files; these use the .pyi extension and only contain the signatures (with type hints), leaving the main .py files annotation free and thus usable on Python 2 (provided you wrote Polyglot Python code otherwise).
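To actually get static checking from these hints you run the checker as a separate step, for example (an illustrative sketch; the exact report wording depends on the mypy version):
$ pip install mypy
$ mypy test.py   # reports that greeting(1) passes an int where str is expected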
|
Make ipython notebook print in real time
|
Ipython Notebook doesn't seem to print results in real time, but seems to buffer in a certain way and then bulk output the prints. How can I make ipython print my results as soon as the print command is processed?
Example code:
import time
def printer():
    for i in range(100):
        time.sleep(5)
        print i
Supposing that the above code is in a file that is imported. How could I make it that when I call the printer function it prints a number every 5 seconds and not all the numbers at the very end?
Please note that I cannot edit the function printer() because I get it from some external module. I want to change the configs of ipython notebook somehow so that it doesn't use a buffer. Therefore, I also do not wish to use sys.stdout.flush(); I want it done in real time as described in the question - I don't want any buffer to start with.
I also tried loading ipython notebook with the command:
ipython notebook --cache-size=0
but that also doesn't seem to work.
|
This is merely one of the answers to the question suggested by Carsten incorporating the __getattr__ delegation suggested by diedthreetimes in a comment:
import sys
oldsysstdout = sys.stdout
class flushfile():
    def __init__(self, f):
        self.f = f
    def __getattr__(self, name):
        return object.__getattribute__(self.f, name)
    def write(self, x):
        self.f.write(x)
        self.f.flush()
    def flush(self):
        self.f.flush()
sys.stdout = flushfile(sys.stdout)
In the original answer, the __getattr__ method is not implemented. Without that, it fails. Other variants in answers to that question also fail in a notebook.
In a notebook, sys.stdout is an instance of IPython.kernel.zmq.iostream.OutStream and has a number of methods and attributes not present in the usual sys.stdout. Delegating __getattr__ allows a flushfile to masquerade as a ...zmq.iostream.OutStream duck.
This works in a python 2.7 notebook run with ipython 3.1.0
|
Run specific Django tests (with django-nose?)
|
I have a very complicated tests.py file.
The test classes and methods are actually generated at run time with type() (to account for data listed in auxiliary files). I am doing things in the following fashion (see below for more code):
klass = type(name, (TestCase,), attrs)
setattr(current_module, name, klass)
FYI, with the usual django test runner, all those tests get run when doing ./manage.py test myapp (thanks to the setattr shown above).
What I want to do is run only part of those tests, without listing their names by hand.
For example, I could give each test "tags" in the class names or method names so that I could filter on them. For example, I would then perform: run all tests whose method name contains the string "test_postgres_backend_"
I tried using django-nose because of nose's -m option, which should be able to select tests based on regular expressions, an ideal solution to my problem.
Unfortunately, here is what is happening when using django-nose as the django test runner:
./manage.py test myapp is not finding automatically the type-generated test classes (contrarily to the django test runner)
neither ./manage.py test -m ".*" myapp nor ./manage.py test myapp -m ".*" find ANY test, even if normal TestCase classes are present in the file
So:
Do you have another kind of solution to my general problem, rather than trying to use django-nose -m?
With django-nose, do you know how to make the -m work?
mcve
Add the following to an empty myapp/tests.py file:
from django.test import TestCase
from sys import modules

current_module = modules[__name__]

def passer(self, *args, **kw):
    self.assertEqual(1, 1)

def failer(self, *args, **kw):
    self.assertEqual(1, 2)

# Create a hundred ...
for i in xrange(100):
    # ... of a stupid TestCase class that has 1 method that passes if `i` is
    # even and fails if `i` is odd
    klass_name = "Test_%s" % i
    if i % 2:  # Test passes if even
        klass_attrs = {
            'test_something_%s' % i: passer
        }
    else:  # Fail if odd
        klass_attrs = {
            'test_something_%s' % i: failer
        }
    klass = type(klass_name, (TestCase,), klass_attrs)
    # Set the class as "child" of the current module so that django test runner
    # finds it
    setattr(current_module, klass_name, klass)
It makes for this output when run (in alphabetical order) by the django test runner:
F.F.F.F.F.F.FF.F.F.F.F..F.F.F.F.F.FF.F.F.F.F..F.F.F.F.F.FF.F.F.F.F..F.F.F.F.F.FF.F.F.F.F..F.F.F.F.F..
If you change to django_nose test runner, nothing happens on ./manage.py test myapp.
After fixing this, I would then like to be able to run only the test methods whose names end with a 0 (or some other kind of regexable filtering)
|
The problem you ran into is that Nose determines whether or not to include a method into the set of tests to run by looking at the name recorded on the function itself, rather than the attribute that gives access to the function. If I rename your passer and failer to test_pass and test_fail then Nose is able to find the tests. So the functions themselves have to be named in a way that will be matched by what is given to -m (or its default value).
Here's the modified code that gives the expected results:
from django.test import TestCase
from sys import modules

current_module = modules[__name__]

def test_pass(self, *args, **kw):
    self.assertEqual(1, 1)

def test_fail(self, *args, **kw):
    self.assertEqual(1, 2)

# Create a hundred ...
for i in xrange(100):
    # ... of a stupid TestCase class that has 1 method that passes if `i` is
    # even and fails if `i` is odd
    klass_name = "Test_%s" % i
    if i % 2:  # Test passes if even
        klass_attrs = {
            'test_something_%s' % i: test_pass
        }
    else:  # Fail if odd
        klass_attrs = {
            'test_something_%s' % i: test_fail
        }
    klass = type(klass_name, (TestCase,), klass_attrs)
    # Set the class as "child" of the current module so that django test runner
    # finds it
    setattr(current_module, klass_name, klass)

# This prevents Nose from seeing them as tests after the loop is over.
test_pass = None
test_fail = None
Without the final two assignments to None, Nose will consider the two top level functions to be module-level tests and will run them in addition to the tests in the classes.
Another way to get the same results would be to define __test__ on your two functions:
def passer(self, *args, **kw):
    self.assertEqual(1, 1)
passer.__test__ = 1

def failer(self, *args, **kw):
    self.assertEqual(1, 2)
failer.__test__ = 1
And at the end of the file:
# This prevents Nose from seeing them as tests after the loop is over.
passer = None
failer = None
Nose looks for the presence of this attribute on functions, and if it is present and set to a value that is considered true, it will take the function as a test case.
The logic governing the selection of methods can be found in Nose's selector.py file, at the wantMethod method:
def wantMethod(self, method):
    """Is the method a test method?
    """
    try:
        method_name = method.__name__
    except AttributeError:
        # not a method
        return False
    if method_name.startswith('_'):
        # never collect 'private' methods
        return False
    declared = getattr(method, '__test__', None)
    if declared is not None:
        wanted = declared
    else:
        wanted = self.matches(method_name)
    plug_wants = self.plugins.wantMethod(method)
    if plug_wants is not None:
        wanted = plug_wants
    log.debug("wantMethod %s? %s", method, wanted)
    return wanted
I'm not seeing a clear way to use -m to run only some tests the way you want it. The problem is that -m matches file, directory, module, class, and function names equally. If you set something like -m '0$', then all the individual parts I just listed must match the regular expression for the test to be selected. (Nose does not combine them and then match on the combination.) It is possible to list tests individually on the command line, but this is a poor substitute for a regular expression match.
|
Given 2 int values, return True if one is negative and other is positive
|
def logical_xor(a, b): # for example, -1 and 1
    print (a < 0) # evaluates to True
    print (b < 0) # evaluates to False
    print (a < 0 != b < 0) # EVALUATES TO FALSE! why??? it's True != False
    return (a < 0 != b < 0) # returns False when it should return True
print ( logical_xor(-1, 1) ) # returns FALSE!
# now for clarification
print ( True != False) # PRINTS TRUE!
Could someone explain what is happening? I'm trying to make a one liner:
lambda a, b: (a < 0 != b < 0)
|
All comparison operators in Python have the same precedence. In addition, Python does chained comparisons. Thus,
(a < 0 != b < 0)
breaks down as:
(a < 0) and (0 != b) and (b < 0)
If any one of these is false, the total result of the expression will be False.
What you want to do is evaluate each condition separately, like so:
(a < 0) != (b < 0)
Other variants, from comments:
(a < 0) is not (b < 0) # True and False are singletons so identity-comparison works
(a < 0) ^ (b < 0) # bitwise-xor does too, as long as both sides are boolean
(a ^ b < 0) # or you could directly bitwise-xor the integers;
# the sign bit will only be set if your condition holds
# this one fails when you mix ints and floats though
(a * b < 0) # perhaps most straightforward, just multiply them and check the sign
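So the one-liner from the question just needs the parentheses made explicit, for example:
logical_xor = lambda a, b: (a < 0) != (b < 0)
print(logical_xor(-1, 1))   # True
print(logical_xor(1, 3))    # False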
|
Delaunay Triangulation of points from 2D surface in 3D with python?
|
I have a collection of 3D points. These points are sampled at constant levels (z=0,1,...,7). An image should make it clear:
These points are in a numpy ndarray of shape (N, 3) called X. The above plot is created using:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
X = load('points.npy')
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_wireframe(X[:,0], X[:,1], X[:,2])
ax.scatter(X[:,0], X[:,1], X[:,2])
plt.draw()
I'd like to instead triangulate only the surface of this object, and plot the surface. I do not want the convex hull of this object, however, because this loses subtle shape information I'd like to be able to inspect.
I have tried ax.plot_trisurf(X[:,0], X[:,1], X[:,2]), but this results in the following mess:
Any help?
Example data
Here's a snippet to generate 3D data that is representative of the problem:
import numpy as np

X = []
for i in range(8):
    t = np.linspace(0, 2*np.pi, np.random.randint(30, 50))
    for j in range(t.shape[0]):
        # random circular objects...
        X.append([
            (-0.05*(i-3.5)**2+1)*np.cos(t[j])+0.1*np.random.rand()-0.05,
            (-0.05*(i-3.5)**2+1)*np.sin(t[j])+0.1*np.random.rand()-0.05,
            i
        ])
X = np.array(X)
Example data from original image
Here's a pastebin to the original data:
http://pastebin.com/YBZhJcsV
Here are the slices along constant z:
|
I realise that you mentioned in your question that you didn't want to use the convex hull because you might lose some shape information. I have a simple solution that works pretty well for your 'jittered spherical' example data, although it does use scipy.spatial.ConvexHull. I thought I would share it here anyway, just in case it's useful for others:
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation
from scipy.spatial import ConvexHull
# compute the convex hull of the points
cvx = ConvexHull(X)
x, y, z = X.T
# cvx.simplices contains an (nfacets, 3) array specifying the indices of
# the vertices for each simplical facet
tri = Triangulation(x, y, triangles=cvx.simplices)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.hold(True)
ax.plot_trisurf(tri, z)
ax.plot_wireframe(x, y, z, color='r')
ax.scatter(x, y, z, color='r')
plt.draw()
It does pretty well in this case, since your example data ends up lying on a more-or-less convex surface. Perhaps you could make some more challenging example data? A toroidal surface would be a good test case which the convex hull method would obviously fail.
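As a sketch of such a test case (standard parametric torus equations; the variable names are mine), you could generate a toroidal point cloud like this and watch the convex-hull approach fill in the hole:
import numpy as np

# parametric torus: R is the distance from the tube centre to the torus centre, r the tube radius
theta, phi = np.meshgrid(np.linspace(0, 2 * np.pi, 40),
                         np.linspace(0, 2 * np.pi, 20))
R, r = 2.0, 0.5
X = np.column_stack([((R + r * np.cos(phi)) * np.cos(theta)).ravel(),
                     ((R + r * np.cos(phi)) * np.sin(theta)).ravel(),
                     (r * np.sin(phi)).ravel()])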
Mapping an arbitrary 3D surface from a point cloud is a really tough problem. Here's a related question containing some links that might be helpful.
|
Python multiprocessing doesn't seem to use more than one core
|
I want to use Python multiprocessing to run grid search for a predictive model.
When I look at core usage, it always seems to be using only one core. Any idea what I'm doing wrong?
import multiprocessing
from sklearn import svm
import itertools
#first read some data
#X will be my feature Numpy 2D array
#y will be my 1D Numpy array of labels
#define the grid
C = [0.1, 1]
gamma = [0.0]
params = [C, gamma]
grid = list(itertools.product(*params))
GRID_hx = []
def worker(par, grid_list):
    #define a sklearn model
    clf = svm.SVC(C=g[0], gamma=g[1],probability=True,random_state=SEED)
    #run a cross validation fuction: returns error
    ll = my_cross_validation_function(X, y, model=clf, n=1, test_size=0.2)
    print(par, ll)
    grid_list.append((par, ll))

if __name__ == '__main__':
    manager = multiprocessing.Manager()
    GRID_hx = manager.list()
    jobs = []
    for g in grid:
        p = multiprocessing.Process(target=worker, args=(g,GRID_hx))
        jobs.append(p)
        p.start()
        p.join()
    print("\n-------------------")
    print("SORTED LIST")
    print("-------------------")
    L = sorted(GRID_hx, key=itemgetter(1))
    for l in L[:5]:
        print l
|
Your problem is that you join each job immediately after you started it:
for g in grid:
    p = multiprocessing.Process(target=worker, args=(g,GRID_hx))
    jobs.append(p)
    p.start()
    p.join()
join blocks until the respective process has finished working. This means that your code starts only one process at once, waits until it is finished and then starts the next one.
In order for all processes to run in parallel, you need to first start them all and then join them all:
jobs = []
for g in grid:
    p = multiprocessing.Process(target=worker, args=(g,GRID_hx))
    jobs.append(p)
    p.start()

for j in jobs:
    j.join()
Documentation: link
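As an aside, a multiprocessing.Pool is often a simpler way to express this pattern, since it caps the number of worker processes at the number of cores and gathers results for you. A minimal self-contained sketch (the worker body here is just a stand-in for the real cross-validation call):
import multiprocessing

def worker(par):
    c, gamma = par
    return par, c + gamma   # stand-in for my_cross_validation_function

if __name__ == '__main__':
    grid = [(0.1, 0.0), (1, 0.0)]
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    results = pool.map(worker, grid)
    pool.close()
    pool.join()
    print(sorted(results, key=lambda t: t[1]))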
|
Why use re.match(), when re.search() can do the same thing?
|
From the documentation, it's very clear that:
match() -> apply pattern match at the beginning of the string
search() -> search through the string and return first match
And search with '^' and without re.M flag would work the same as match.
Then why does python have match()? Isn't it redundant?
Are there any performance benefits to keeping match() in python?
|
The pos argument behaves differently in important ways:
>>> s = "a ab abc abcd"
>>> re.compile('a').match(s, pos=2)
<_sre.SRE_Match object; span=(2, 3), match='a'>
>>> re.compile('^a').search(s, pos=2)
None
match makes it possible to write a tokenizer, and ensure that characters are never skipped. search has no way of saying "start from the earliest allowable character".
Example use of match to break up a string with no gaps:
def tokenize(s, patt):
    at = 0
    while at < len(s):
        m = patt.match(s, pos=at)
        if not m:
            raise ValueError("Did not expect character at location {}".format(at))
        at = m.end()
        yield m
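For example, with a pattern of my own choosing (optional whitespace followed by a word), the generator above walks the string without skipping anything:
import re

patt = re.compile(r'\s*\w+')
for m in tokenize("one two three", patt):
    print(m.group().strip())   # prints one, two, three on separate lines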
|
Different ways of deleting lists
|
I want to understand why:
a = [];
del a; and
del a[:];
behave so differently.
I ran a test for each to illustrate the differences I witnessed:
>>> # Test 1: Reset with a = []
...
>>> a = [1,2,3]
>>> b = a
>>> a = []
>>> a
[]
>>> b
[1, 2, 3]
>>>
>>> # Test 2: Reset with del a
...
>>> a = [1,2,3]
>>> b = a
>>> del a
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
>>> b
[1, 2, 3]
>>>
>>> # Test 3: Reset with del a[:]
...
>>> a = [1,2,3]
>>> b = a
>>> del a[:]
>>> a
[]
>>> b
[]
I did find Clearing Python lists, but I didn't find an explanation for the differences in behaviour. Can anyone clarify this?
|
Test 1
>>> a = [1,2,3] # set a to point to a list [1, 2, 3]
>>> b = a # set b to what a is currently pointing at
>>> a = [] # now you set a to point to an empty list
# Step 1: A --> [1 2 3]
# Step 2: A --> [1 2 3] <-- B
# Step 3: A --> [ ] [1 2 3] <-- B
# at this point a points to a new empty list
# whereas b points to the original list of a
Test 2
>>> a = [1,2,3] # set a to point to a list [1, 2, 3]
>>> b = a # set b to what a is currently pointing at
>>> del a # delete the reference from a to the list
# Step 1: A --> [1 2 3]
# Step 2: A --> [1 2 3] <-- B
# Step 3: [1 2 3] <-- B
# so a no longer exists because the reference
# was destroyed but b is not affected because
# b still points to the original list
Test 3
>>> a = [1,2,3] # set a to point to a list [1, 2, 3]
>>> b = a # set b to what a is currently pointing at
>>> del a[:] # delete the contents of the original
# Step 1: A --> [1 2 3]
# Step 2: A --> [1 2 3] <-- B
# Step 3: A --> [ ] <-- B
# both a and b are empty because they were pointing
# to the same list whose elements were just removed
|
finding needle in haystack, what is a better solution?
|
so given "needle" and "there is a needle in this but not thisneedle haystack"
I wrote
def find_needle(n, h):
    count = 0
    words = h.split(" ")
    for word in words:
        if word == n:
            count += 1
    return count
This is O(n) but wondering if there is a better approach? maybe not by using split at all?
How would you write tests for this case to check that it handles all edge cases?
|
I don't think it's possible to get below O(n) with this (because you need to iterate through the string at least once). You can do some optimizations, though.
I assume you want to match "whole words", for example looking up foo should match like this:
foo and foo, or foobar and not foo.
^^^ ^^^ ^^^
So splitting just based on space wouldn't do the job, because:
>>> 'foo and foo, or foobar and not foo.'.split(' ')
['foo', 'and', 'foo,', 'or', 'foobar', 'and', 'not', 'foo.']
# ^ ^
This is where re module comes in handy, which will allows you to build fascinating conditions. For example \b inside the regexp means:
Matches the empty string, but only at the beginning or end of a word. A word is defined as a sequence of Unicode alphanumeric or underscore characters, so the end of a word is indicated by whitespace or a non-alphanumeric, non-underscore Unicode character. Note that formally, \b is defined as the boundary between a \w and a \W character (or vice versa), or between \w and the beginning/end of the string. This means that r'\bfoo\b' matches 'foo', 'foo.', '(foo)', 'bar foo baz' but not 'foobar' or 'foo3'.
So r'\bfoo\b' will match only whole word foo. Also don't forget to use re.escape():
>>> re.escape('foo.bar+')
'foo\\.bar\\+'
>>> r'\b{}\b'.format(re.escape('foo.bar+'))
'\\bfoo\\.bar\\+\\b'
All you have to do now is use re.finditer() to scan the string. Based on documentation:
Return an iterator yielding match objects over all non-overlapping matches for the RE pattern in string. The string is scanned left-to-right, and matches are returned in the order found. Empty matches are included in the result unless they touch the beginning of another match.
I assume that matches are generated on the fly, so they never have to be in memory at once (which may come in handy with large strings, with many matched items). And in the end just count them:
>>> r = re.compile(r'\bfoo\b')
>>> it = r.finditer('foo and foo, or foobar and not foo.')
>>> sum(1 for _ in it)
3
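Putting it together, a sketch of the original function rewritten on top of re (whole-word matching, one pass over the string):
import re

def find_needle(n, h):
    pattern = re.compile(r'\b{}\b'.format(re.escape(n)))
    return sum(1 for _ in pattern.finditer(h))

print(find_needle("needle", "there is a needle in this but not thisneedle haystack"))  # 1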
|
How to add a title to Seaborn Facet Plot
|
How do I add a title to this Seaborne plot? Let's give it a title 'I AM A TITLE'.
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col="sex", row="smoker", margin_titles=True)
g.map(sns.plt.scatter, "total_bill", "tip")
|
After those lines:
plt.subplots_adjust(top=0.9)
g.fig.suptitle('THIS IS A TITLE, YOU BET') # can also get the figure from plt.gcf()
If you add a suptitle without adjusting the axis, the seaborn facet titles overlap it.
(With different data):
|
Pandas DataFrame to List of Dictionaries (Dicts)
|
I have the following DataFrame:
customer item1 item2 item3
1 apple milk tomato
2 water orange potato
3 juice mango chips
which I want to translate it to list of dictionaries per row
rows = [{'customer': 1, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
{'customer': 2, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
{'customer': 3, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]
|
Use df.to_dict('records') -- gives the output without having to transpose externally.
In [2]: df.to_dict('records')
Out[2]:
[{'customer': 1L, 'item1': 'apple', 'item2': 'milk', 'item3': 'tomato'},
{'customer': 2L, 'item1': 'water', 'item2': 'orange', 'item3': 'potato'},
{'customer': 3L, 'item1': 'juice', 'item2': 'mango', 'item3': 'chips'}]
|
django.db.utils.ProgrammingError: relation already exists
|
I'm trying to set up the tables for a new django project (that is, the tables do NOT already exist in the database); the django version is 1.7 and the db back end is PostgreSQL. The name of the project is crud. Results of migration attempt follow:
python manage.py makemigrations crud
Migrations for 'crud':
0001_initial.py:
- Create model AddressPoint
- Create model CrudPermission
- Create model CrudUser
- Create model LDAPGroup
- Create model LogEntry
- Add field ldap_groups to cruduser
- Alter unique_together for crudpermission (1 constraint(s))
python manage.py migrate crud
Operations to perform:
Apply all migrations: crud
Running migrations:
Applying crud.0001_initial...Traceback (most recent call last):
File "manage.py", line 18, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/migrate.py", line 161, in handle
executor.migrate(targets, plan, fake=options.get("fake", False))
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/executor.py", line 68, in migrate
self.apply_migration(migration, fake=fake)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/executor.py", line 102, in apply_migration
migration.apply(project_state, schema_editor)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/migration.py", line 108, in apply
operation.database_forwards(self.app_label, schema_editor, project_state, new_state)
File "/usr/local/lib/python2.7/dist-packages/django/db/migrations/operations/models.py", line 36, in database_forwards
schema_editor.create_model(model)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/schema.py", line 262, in create_model
self.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/schema.py", line 103, in execute
cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 82, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 66, in execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python2.7/dist-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/usr/local/lib/python2.7/dist-packages/django/db/backends/utils.py", line 66, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "crud_crudpermission" already exists
Some highlights from the migration file:
dependencies = [
    ('auth', '0001_initial'),
    ('contenttypes', '0001_initial'),
]

migrations.CreateModel(
    name='CrudPermission',
    fields=[
        ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
        ('_created_by', models.CharField(default=b'', max_length=64, null=True, editable=False, blank=True)),
        ('_last_updated_by', models.CharField(default=b'', max_length=64, null=True, editable=False, blank=True)),
        ('_created', models.DateTimeField(null=True, editable=False, blank=True)),
        ('_last_updated', models.DateTimeField(null=True, editable=False, blank=True)),
        ('domain', models.CharField(max_length=32, choices=[(b'town', b'Town'), (b'boe', b'BOE'), (b'police', b'Police')])),
        ('ldap_group', models.CharField(max_length=128, verbose_name=b'LDAP group')),
        ('can_add', models.BooleanField(default=False, verbose_name=b'add')),
        ('can_change', models.BooleanField(default=False, verbose_name=b'change')),
        ('restrict_change_to_own', models.BooleanField(default=False)),
        ('can_delete', models.BooleanField(default=False, verbose_name=b'delete')),
        ('restrict_delete_to_own', models.BooleanField(default=False)),
        ('models', models.ManyToManyField(to='contenttypes.ContentType', null=True, blank=True)),
    ],
    options={
        'verbose_name': 'CRUD permission',
    },
    bases=(models.Model,),
),
migrations.AlterUniqueTogether(
    name='crudpermission',
    unique_together=set([('ldap_group', 'can_add', 'can_change', 'can_delete', 'domain')]),
),
The crud app is not meant to actually do anything, but I use it in another app, so when I try to migrate from that app, I trigger the above problem.
I've found other examples on the web of people with similar issues, but none of their cases seem to apply because
The problem affects an entire relation, not just one column
I am not using multiple inheritance.
Where should I look next to find the underlying problem?
|
This works pretty fine
./manage.py migrate --fake default
Source: -https://github.com/nijel/weblate/issues/587
|
Numpy 1-hot array
|
Let's say I have a 1d numpy array
a=[1,0,3]
I would like to encode this as a 2d 1-hot array
b=[[0,1,0,0], [1,0,0,0], [0,0,0,1]]
Is there a quick way to do this? Quicker than just looping over a to set elements of b, that is.
|
Your array a defines the columns, you just need to define the rows and then use fancy indexing:
>>> a = np.array([1, 0, 3])
>>> b = np.zeros((3, 4))
>>> b[np.arange(3), a] = 1
>>> b
array([[ 0., 1., 0., 0.],
[ 1., 0., 0., 0.],
[ 0., 0., 0., 1.]])
>>>
This is just for illustration. You may want to choose a more appropriate dtype for b such as np.bool.
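Wrapped up as a small helper (the function name and the num_classes argument are my additions), the same fancy-indexing trick generalises to any 1d integer array:
import numpy as np

def one_hot(a, num_classes=None):
    a = np.asarray(a)
    if num_classes is None:
        num_classes = a.max() + 1      # infer the width from the largest label
    b = np.zeros((a.size, num_classes), dtype=bool)
    b[np.arange(a.size), a] = True
    return b

print(one_hot([1, 0, 3]).astype(int))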
|
Optimize the performance of dictionary membership for a list of Keys
|
I am trying to write a code which should return true if any element of list is present in a dictionary. Performance of this piece is really important. I know I can just loop over list and break if I find the first search hit. Is there any faster or more Pythonic way for this than given below?
for x in someList:
    if x in someDict:
        return True
return False
EDIT: I am using Python 2.7. My first preference would be a faster method.
|
Use of builtin any can have some performance edge over two loops
any(x in someDict for x in someList)
but you might need to measure your mileage. If your list and dict remain pretty static and you have to perform the comparison multiple times, you may consider using a set
someSet = set(someList)
someDict.viewkeys() & someSet
Note Python 3.X, by default returns views rather than a sequence, so it would be straight forward when using Python 3.X
someSet = set(someList)
someDict.keys() & someSet
In both the above cases you can wrap the result with a bool to get a boolean result
bool(someDict.keys() & set(someSet ))
Heretic Note
My curiosity got the better of me and I timed all the proposed solutions. It seems that your original solution is better performance wise. Here is the result
Sample Randomly generated Input
def test_data_gen():
    from random import sample
    for i in range(1, 5):
        n = 10**i
        population = set(range(1, 100000))
        some_list = sample(list(population), n)
        population.difference_update(some_list)
        some_dict = dict(zip(sample(population, n),
                             sample(range(1, 100000), n)))
        yield "Population Size of {}".format(n), (some_list, some_dict), {}
The Test Engine
I rewrote the test part of the answer as it was messy and the answer was receiving a decent amount of attention. I created a timeit compare python module and moved it onto github
The Test Result
Timeit repeated for 10 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.000011 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.000014 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.000015 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_imap_any |0.000018 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_any |0.000019 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_ifilter_next |0.000022 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.000024 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.000047 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.000071 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_nested |0.000072 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.000073 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_next |0.000076 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.000082 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.000092 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.000170 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.000638 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_not_not |0.000746 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.000746 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_next |0.000752 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_nested |0.000771 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.000838 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.000842 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.000933 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.001702 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 10000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.007195 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.007410 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_any |0.007491 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.007671 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.008385 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.011327 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.011533 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.018313 |some_dict.viewkeys() & set(some_list )
Timeit repeated for 100 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.000098 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.000124 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.000131 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_imap_any |0.000142 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.000151 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.000158 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.000186 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.000496 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.000661 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_not_not |0.000677 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_nested |0.000683 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_next |0.000684 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.000762 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.000854 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.001291 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.005018 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.007585 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_nested |0.007713 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 3|foo_set_ashwin |0.008256 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.008526 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_any |0.009422 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_ifilter_next |0.010259 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 7|foo_imap_any |0.011414 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.019862 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 10000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_imap_any |0.082221 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.083573 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_nested |0.095736 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 4|foo_set_ashwin |0.103427 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.104589 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_ifilter_not_not |0.117974 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.127739 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.208228 |some_dict.viewkeys() & set(some_list )
Timeit repeated for 1000 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.000953 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.001134 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.001213 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_next |0.001340 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.001407 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.001535 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.002252 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.004701 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.006209 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.006411 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_not_not |0.006657 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_nested |0.006727 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 5|foo_imap_any |0.007562 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.008262 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.012260 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.046773 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_not_not |0.071888 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.072150 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_nested |0.073382 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_any |0.075698 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.077367 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.090623 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.093301 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.177051 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 10000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.701317 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_next |0.706156 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_any |0.723368 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.746650 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.776704 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.832117 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |0.881777 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |1.665962 |some_dict.viewkeys() & set(some_list )
Timeit repeated for 10000 times
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
======================================
Test Run for Population Size of 10
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.010581 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_any |0.013512 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 3|foo_imap_any |0.015321 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_ifilter_not_not |0.017680 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.019334 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.026274 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.030881 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.053605 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 100
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_nested |0.070194 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_not_not |0.078524 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_any |0.079499 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 4|foo_imap_any |0.087349 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 5|foo_ifilter_next |0.093970 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 6|foo_any |0.097948 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 7|foo_set_ashwin |0.130725 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |0.480841 |some_dict.viewkeys() & set(some_list )
======================================
Test Run for Population Size of 1000
======================================
|Rank |FunctionName |Result |Description
+------+---------------------+----------+-----------------------------------------------
| 1|foo_ifilter_any |0.754491 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 2|foo_ifilter_not_not |0.756253 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 3|foo_ifilter_next |0.771382 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+----------+-----------------------------------------------
| 4|foo_nested |0.787152 |Original OPs Code
+------+---------------------+----------+-----------------------------------------------
| 5|foo_set_ashwin |0.818520 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+----------+-----------------------------------------------
| 6|foo_imap_any |0.902947 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+----------+-----------------------------------------------
| 7|foo_any |1.001810 |any(x in some_dict for x in some_list)
+------+---------------------+----------+-----------------------------------------------
| 8|foo_set |2.012781 |some_dict.viewkeys() & set(some_list )
=======================================
Test Run for Population Size of 10000
=======================================
|Rank |FunctionName |Result |Description
+------+---------------------+-----------+-----------------------------------------------
| 1|foo_nested |8.731133 |Original OPs Code
+------+---------------------+-----------+-----------------------------------------------
| 2|foo_ifilter_not_not |9.019190 |not not next(ifilter(some_dict.__contains__...
+------+---------------------+-----------+-----------------------------------------------
| 3|foo_ifilter_next |9.189966 |bool(next(ifilter(some_dict.__contains__...
+------+---------------------+-----------+-----------------------------------------------
| 4|foo_set_ashwin |9.363886 |not set(some_dct).isdisjoint(some_lst)
+------+---------------------+-----------+-----------------------------------------------
| 5|foo_ifilter_any |9.442759 |any(ifilter(some_dict.__contains__, some_list))
+------+---------------------+-----------+-----------------------------------------------
| 6|foo_imap_any |10.071469 |any(imap(some_dict.__contains__, some_list))
+------+---------------------+-----------+-----------------------------------------------
| 7|foo_any |11.127034 |any(x in some_dict for x in some_list)
+------+---------------------+-----------+-----------------------------------------------
| 8|foo_set |18.881414 |some_dict.viewkeys() & set(some_list )
And a graphical comparison generated with the module referred to above.
Conclusion
Premature optimization is evil. It is evident that no single solution performs best across the whole test domain; depending on population size and frequency of iteration, the relative performance of the solutions varies considerably. The results reinforce the point that in Python one should first make code readable, rather than nifty or optimized for certain cases in a way that may not scale.
Note: there was some doubt about why the ifilter-based approaches are not always the fastest:
"In Abhit's answer, he timed the different approaches and found that
ifilter/next was not the fastest; any idea why this would be the case?
"
It is a known fact that in Python there is an overhead when calling C functions, and if the population size is low but the frequency of iteration is high, the accumulated C function call overhead slowly shows up. As can be seen in the graphs, where the population size is low but the iteration count is high, the performance of the ifilter-based approach deviates considerably.
|
How to write a function which takes a slice?
|
I would like to write a function in Python which takes a slice as a parameter. Ideally a user would be able to call the function as follows:
foo(a:b:c)
Unfortunately, this syntax is not permitted by Python - the use of a:b:c is only allowed within [], not ().
I therefore see three possibilities for my function:
Require the user to use a slice "constructor" (where s_ acts like the version provided by numpy):
foo(slice(a, b, c))
foo(s_[a:b:c])
Put the logic of my function into a __getitem__ method:
foo[a:b:c]
Give up trying to take a slice and take start, stop and step individually:
foo(a, b, c)
Is there a way to get the original syntax to work? If not, which of the workaround syntaxes would be preferred? Or is there another, better option?
|
Don't surprise your users.
If you use the slicing syntax consistently with what a developer expects from a slicing syntax, that same developer will expect square brackets operation, i.e. a __getitem__() method.
If instead the returned object is not somehow a slice of the original object, people will be confused if you stick to a __getitem__() solution. Use a function call foo(a, b, c), don't mention slices at all, and optionally assign default values if that makes sense.
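For illustration, here is a minimal sketch of options 2 and 3 (the class and function names below are placeholders, not taken from the question):

# Option 2: expose the slice through __getitem__, so foo[a:b:c] works.
class Foo:
    def __init__(self, data):
        self.data = data
    def __getitem__(self, key):
        # Python passes a slice object when square-bracket syntax is used
        if isinstance(key, slice):
            return self.data[key]
        raise TypeError("expected a slice")

# Option 3: take start, stop and step as ordinary arguments.
def foo(data, start=None, stop=None, step=None):
    return data[slice(start, stop, step)]

print(Foo(list(range(10)))[1:8:2])    # [1, 3, 5, 7]
print(foo(list(range(10)), 1, 8, 2))  # [1, 3, 5, 7]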
|
Python - multiprocessing for matplotlib griddata
|
Following my former question [1], I would like to apply multiprocessing to matplotlib's griddata function. Is it possible to split the griddata into, say 4 parts, one for each of my 4 cores? I need this to improve performance.
For example, try the code below, experimenting with different values for size:
import numpy as np
import matplotlib.mlab as mlab
import time
size = 500
Y = np.arange(size)
X = np.arange(size)
x, y = np.meshgrid(X, Y)
u = x * np.sin(5) + y * np.cos(5)
v = x * np.cos(5) + y * np.sin(5)
test = x + y
tic = time.clock()
test_d = mlab.griddata(
x.flatten(), y.flatten(), test.flatten(), x+u, y+v, interp='linear')
toc = time.clock()
print 'Time=', toc-tic
|
I ran the example code below in Python 3.4.2, with numpy version 1.9.1 and matplotlib version 1.4.2, on a Macbook Pro with 4 physical CPUs (i.e., as opposed to "virtual" CPUs, which the Mac hardware architecture also makes available for some use cases):
import numpy as np
import matplotlib.mlab as mlab
import time
import multiprocessing
# This value should be set much larger than nprocs, defined later below
size = 500
Y = np.arange(size)
X = np.arange(size)
x, y = np.meshgrid(X, Y)
u = x * np.sin(5) + y * np.cos(5)
v = x * np.cos(5) + y * np.sin(5)
test = x + y
tic = time.clock()
test_d = mlab.griddata(
x.flatten(), y.flatten(), test.flatten(), x+u, y+v, interp='linear')
toc = time.clock()
print('Single Processor Time={0}'.format(toc-tic))
# Put interpolation points into a single array so that we can slice it easily
xi = x + u
yi = y + v
# My example test machine has 4 physical CPUs
nprocs = 4
jump = int(size/nprocs)
# Enclose the griddata function in a wrapper which will communicate its
# output result back to the calling process via a Queue
def wrapper(x, y, z, xi, yi, q):
test_w = mlab.griddata(x, y, z, xi, yi, interp='linear')
q.put(test_w)
# Measure the elapsed time for multiprocessing separately
ticm = time.clock()
queue, process = [], []
for n in range(nprocs):
queue.append(multiprocessing.Queue())
# Handle the possibility that size is not evenly divisible by nprocs
if n == (nprocs-1):
finalidx = size
else:
finalidx = (n + 1) * jump
# Define the arguments, dividing the interpolation variables into
# nprocs roughly evenly sized slices
argtuple = (x.flatten(), y.flatten(), test.flatten(),
xi[:,(n*jump):finalidx], yi[:,(n*jump):finalidx], queue[-1])
# Create the processes, and launch them
process.append(multiprocessing.Process(target=wrapper, args=argtuple))
process[-1].start()
# Initialize an array to hold the return value, and make sure that it is
# null-valued but of the appropriate size
test_m = np.asarray([[] for s in range(size)])
# Read the individual results back from the queues and concatenate them
# into the return array
for q, p in zip(queue, process):
test_m = np.concatenate((test_m, q.get()), axis=1)
p.join()
tocm = time.clock()
print('Multiprocessing Time={0}'.format(tocm-ticm))
# Check that the result of both methods is actually the same; should raise
# an AssertionError exception if assertion is not True
assert np.all(test_d == test_m)
and I got the following result:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/matplotlib/tri/triangulation.py:110: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.self._neighbors)
Single Processor Time=8.495998
Multiprocessing Time=2.249938
I'm not really sure what is causing the "future warning" from triangulation.py (evidently my version of matplotlib did not like something about the input values originally provided in the question), but regardless, the multiprocessing does appear to achieve the desired speedup of 8.50/2.25 = 3.8 (edit: see comments), which is roughly the 4X we would expect for a machine with 4 CPUs. The assertion statement at the end also executes successfully, proving that the two methods get the same answer, so in spite of the slightly weird warning message, I believe that the code above is a valid solution.
EDIT: A commenter has pointed out that both my solution and the code snippet posted by the original author are likely using the wrong method, time.clock(), for measuring execution time; he suggests using time.time() instead. I think I'm also coming around to his point of view. (Digging into the Python documentation a bit further, I'm still not convinced that even this solution is 100% correct, as newer versions of Python appear to have deprecated time.clock() in favor of time.perf_counter() and time.process_time(). But regardless, I do agree that whether or not time.time() is absolutely the most correct way of taking this measurement, it's still probably more correct than what I had been using before, time.clock().)
Assuming the commenter's point is correct, the approximately 4X speedup that I thought I had measured is in fact wrong.
However, that does not mean that the underlying code itself wasn't correctly parallelized; rather, it just means that parallelization didn't actually help in this case; splitting up the data and running on multiple processors didn't improve anything. Why would this be? Other users have pointed out that, at least in numpy/scipy, some functions run on multiple cores, and some do not, and it can be a seriously challenging research project for an end-user to try to figure out which ones are which.
Based on the results of this experiment, if my solution correctly achieves parallelization within Python, but no further speedup is observed, then I would suggest the simplest likely explanation is that matplotlib is probably also parallelizing some of its functions "under the hood", so to speak, in compiled C++ libraries, just like numpy/scipy already do. Assuming that's the case, then the correct answer to this question would be that nothing further can be done: further parallelizing in Python will do no good if the underlying C++ libraries are already silently running on multiple cores to begin with.
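For reference, a minimal sketch of the timing functions discussed in the edit above (assuming Python 3.3+; this is only an illustration, not part of the measured solution):

import time

start_wall = time.time()           # wall-clock time, as the commenter suggested
start_perf = time.perf_counter()   # preferred high-resolution wall-clock timer
start_cpu = time.process_time()    # CPU time of the current process only

total = sum(i * i for i in range(10**6))   # some work to measure

print('time.time elapsed: {0:.4f} s'.format(time.time() - start_wall))
print('time.perf_counter elapsed: {0:.4f} s'.format(time.perf_counter() - start_perf))
print('time.process_time elapsed: {0:.4f} s'.format(time.process_time() - start_cpu))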
|
Why is it valid to assign to an empty list but not to an empty tuple?
|
This came up in a recent PyCon talk.
The statement
[] = []
does nothing meaningful, but it does not throw an exception either. I have the feeling this must be due to unpacking rules. You can do tuple unpacking with lists too, e.g.,
[a, b] = [1, 2]
does what you would expect. As logical consequence, this also should work, when the number of elements to unpack is 0, which would explain why assigning to an empty list is valid. This theory is further supported by what happens when you try to assign a non-empty list to an empty list:
>>> [] = [1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: too many values to unpack
I would be happy with this explanation, if the same would also be true for tuples. If we can unpack to a list with 0 elements, we should also be able to unpack to a tuple with 0 elements, no? However:
>>> () = ()
File "<stdin>", line 1
SyntaxError: can't assign to ()
It seems like unpacking rules are not applied for tuples as they are for lists. I cannot think of any explanation for this inconsistency. Is there a reason for this behavior?
|
The comment by @user2357112 that this seems to be coincidence appears to be correct. The relevant part of the Python source code is in Python/ast.c:
switch (e->kind) {
# several cases snipped
case List_kind:
e->v.List.ctx = ctx;
s = e->v.List.elts;
break;
case Tuple_kind:
if (asdl_seq_LEN(e->v.Tuple.elts)) {
e->v.Tuple.ctx = ctx;
s = e->v.Tuple.elts;
}
else {
expr_name = "()";
}
break;
# several more cases snipped
}
/* Check for error string set by switch */
if (expr_name) {
char buf[300];
PyOS_snprintf(buf, sizeof(buf),
"can't %s %s",
ctx == Store ? "assign to" : "delete",
expr_name);
return ast_error(c, n, buf);
}
tuples have an explicit check that the length is not zero and raise an error when it is. lists do not have any such check, so there's no exception raised.
I don't see any particular reason for allowing assignment to an empty list when it is an error to assign to an empty tuple, but perhaps there's some special case that I'm not considering. I'd suggest that this is probably a (trivial) bug and that the behaviors should be the same for both types.
|
How to mock asyncio coroutines?
|
The following code fails with TypeError: 'Mock' object is not iterable in ImBeingTested.i_call_other_coroutines because I've replaced ImGoingToBeMocked by a Mock object.
How can I mock coroutines?
class ImGoingToBeMocked:
@asyncio.coroutine
def yeah_im_not_going_to_run(self):
yield from asyncio.sleep(1)
return "sup"
class ImBeingTested:
def __init__(self, hidude):
self.hidude = hidude
@asyncio.coroutine
def i_call_other_coroutines(self):
return (yield from self.hidude.yeah_im_not_going_to_run())
class TestImBeingTested(unittest.TestCase):
def test_i_call_other_coroutines(self):
mocked = Mock(ImGoingToBeMocked)
ibt = ImBeingTested(mocked)
ret = asyncio.get_event_loop().run_until_complete(ibt.i_call_other_coroutines())
|
Since mock library doesn't support coroutines I create mocked coroutines manually and assign those to mock object. A bit more verbose but it works.
Your example may look like this:
import asyncio
import unittest
from unittest.mock import Mock
class ImGoingToBeMocked:
@asyncio.coroutine
def yeah_im_not_going_to_run(self):
yield from asyncio.sleep(1)
return "sup"
class ImBeingTested:
def __init__(self, hidude):
self.hidude = hidude
@asyncio.coroutine
def i_call_other_coroutines(self):
return (yield from self.hidude.yeah_im_not_going_to_run())
class TestImBeingTested(unittest.TestCase):
def test_i_call_other_coroutines(self):
mocked = Mock(ImGoingToBeMocked)
ibt = ImBeingTested(mocked)
@asyncio.coroutine
def mock_coro():
return "sup"
mocked.yeah_im_not_going_to_run = mock_coro
ret = asyncio.get_event_loop().run_until_complete(
ibt.i_call_other_coroutines())
self.assertEqual("sup", ret)
if __name__ == '__main__':
unittest.main()
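As a side note, on newer Python versions (3.8+) unittest.mock ships AsyncMock, which removes the need for the hand-written coroutine; a minimal sketch using async/await syntax instead of the decorator style above:

import asyncio
from unittest.mock import AsyncMock, Mock

class ImBeingTested:
    def __init__(self, hidude):
        self.hidude = hidude
    async def i_call_other_coroutines(self):
        return await self.hidude.yeah_im_not_going_to_run()

mocked = Mock()
mocked.yeah_im_not_going_to_run = AsyncMock(return_value="sup")
ibt = ImBeingTested(mocked)
print(asyncio.run(ibt.i_call_other_coroutines()))  # sup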
|
Operate on a list in a pythonic way when output depends on other elements
|
I have a task requiring an operation on every element of a list, with the outcome of the operation depending on other elements in the list.
For example, I might like to concatenate a list of strings conditional on them starting with a particular character:
This code solves the problem:
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
concat = []
for element in x:
if element.startswith('*'):
concat.append(element)
else:
concat[len(concat) - 1] += element
resulting in:
concat
Out[16]: ['*abc', '*de', '*f', '*g']
But this seems horribly un-Pythonic. How should one operate on the elements of a list when the outcome of the operation depends on previous outcomes?
|
A few relevant excerpts from import this (the arbiter of what is Pythonic):
Simple is better than complex
Readability counts
Explicit is better than implicit.
I would just use code like this, and not worry about replacing the for loop with something "flatter".
x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
partials = []
for element in x:
if element.startswith('*'):
partials.append([])
partials[-1].append(element)
concat = map("".join, partials)
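For reference, running the snippet above and printing the result (note that on Python 3, map() returns an iterator, so wrap it in list() if you need a list):

x = ['*a', 'b', 'c', '*d', 'e', '*f', '*g']
partials = []
for element in x:
    if element.startswith('*'):
        partials.append([])
    partials[-1].append(element)

concat = list(map("".join, partials))  # list() needed on Python 3
print(concat)                          # ['*abc', '*de', '*f', '*g']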
|
Microsoft Visual C++ Compiler for Python 3.4
|
I know that there is a "Microsoft Visual C++ Compiler for Python 2.7" but is there, currently or planned, a Microsoft Visual C++ Compiler for Python 3.4 or eve Microsoft Visual C++ Compiler for Python 3.x for that matter? It would be supremely beneficial if I didn't have to install a different version of visual studio on my entire lab.
|
Unfortunately, to be able to use extension modules provided by others, you'll be forced to use the same official compiler that was used to compile Python itself. These are:
Visual Studio 2008 for Python 2.7.
See: https://docs.python.org/2.7/using/windows.html#compiling-python-on-windows
Visual Studio 2010 for Python 3.4.
See: https://docs.python.org/3.4/using/windows.html#compiling-python-on-windows
Alternatively, you can use MinGw to compile extensions in a way that won't depend on others.
See: https://docs.python.org/2/install/#gnu-c-cygwin-MinGW or https://docs.python.org/3.4/install/#gnu-c-cygwin-mingw
This allows you to have one compiler to build your extensions for both versions of Python, Python 2.x and Python 3.x.
|
Upgrading from Django 1.6 (with south) to 1.8 doesn't modify 'last_login' on the user table
|
I have upgraded from Django 1.6.5 (with south migrations) to Django 1.8. I have followed the instructions here:
https://docs.djangoproject.com/en/1.8/topics/migrations/#upgrading-from-south
So, I remove South, delete my previous migrations and run python manage.py makemigrations which makes a new migration file. Then I run python manage.py migrate --fake-initial to fake the initial migration. Then I run python manage.py migrate.
It all runs fine with no errors.
I have a custom user model which inherits AbstractBaseUser. In Django 1.8 it seems there is a change to the last_login field where it is now able to accept a null value (https://docs.djangoproject.com/fr/1.8/ref/contrib/auth/#django.contrib.auth.models.User.last_login).
The problem I have is that the migration does not change the last_login field in the database table to accept null values. As a result, when I try to save I get an IntegrityError saying (1048, "Column 'last_login' cannot be null").
How can I fix this? Do I need to manually change the field in the database, or is there a way to fix it with migrations?
Edit
When I run python manage.py migrate --fake-initial the output is this:
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... FAKED
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... FAKED
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying core.0001_initial... FAKED
Applying admin.0001_initial... FAKED
Applying sessions.0001_initial... FAKED
However, when I look in the database I still see user_last_login as not-null.
|
As noted in the 1.8 release notes:
If you are using a custom user model that inherits from AbstractUser, you'll need to run makemigrations and generate a migration for your app that contains that model.
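A minimal sketch of what the generated migration typically looks like for this change (the app and model names below are placeholders, not taken from the question):

# yourapp/migrations/0002_customuser_last_login_null.py -- illustrative only
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ('yourapp', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='customuser',
            name='last_login',
            field=models.DateTimeField(blank=True, null=True, verbose_name='last login'),
        ),
    ]

After generating it with python manage.py makemigrations yourapp, running python manage.py migrate should alter the column to accept NULL values.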
|
Change user agent for selenium driver
|
I have the following code in Python:
from selenium.webdriver import Firefox
from contextlib import closing
with closing(Firefox()) as browser:
browser.get(url)
I would like to print the user-agent HTTP header and
possibly change it. Is it possible?
|
There is no way in Selenium to read the request or response headers. You could do it by instructing your browser to connect through a proxy that records this kind of information.
Setting the User Agent in Firefox
The usual way to change the user agent for Firefox is to set the variable "general.useragent.override" in your Firefox profile. Note that this is independent from Selenium.
You can direct Selenium to use a profile different from the default one, like this:
from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.set_preference("general.useragent.override", "whatever you want")
driver = webdriver.Firefox(profile)
Setting the User Agent in Chrome
With Chrome, what you want to do is use the user-agent command line option. Again, this is not a Selenium thing. You can invoke Chrome at the command line with chrome --user-agent=foo to set the agent to the value foo.
With Selenium you set it like this:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
opts = Options()
opts.add_argument("user-agent=whatever you want")
driver = webdriver.Chrome(chrome_options=opts)
Both methods above were tested and found to work. I don't know about other browsers.
Getting the User Agent
Selenium does not have methods to query the user agent from an instance of WebDriver. Even in the case of Firefox, you cannot discover the default user agent by checking what general.useragent.override would be if not set to a custom value. (This setting does not exist before it is set to some value.)
Once the browser is started, however, you can get the user agent by executing:
agent = driver.execute_script("return navigator.userAgent")
The agent variable will contain the user agent.
|
Find the column name which has maximum value for each row [pandas]
|
I have a dataframe like this one:
In [7]:
frame.head()
Out[7]:
Communications and Search Business General Lifestyle
0 0.745763 0.050847 0.118644 0.084746
0 0.333333 0.000000 0.583333 0.083333
0 0.617021 0.042553 0.297872 0.042553
0 0.435897 0.000000 0.410256 0.153846
0 0.358974 0.076923 0.410256 0.153846
In here, I want to ask how to get column name which has maximum value for each row, the desired output is like this:
In [7]:
frame.head()
Out[7]:
Communications and Search Business General Lifestyle Max
0 0.745763 0.050847 0.118644 0.084746 Communications
0 0.333333 0.000000 0.583333 0.083333 Business
0 0.617021 0.042553 0.297872 0.042553 Communications
0 0.435897 0.000000 0.410256 0.153846 Communications
0 0.358974 0.076923 0.410256 0.153846 Business
|
You can use idxmax() to find the column with the greatest value on each row:
>>> df.idxmax(axis=1)
0 Communications
1 Business
2 Communications
3 Communications
4 Business
dtype: object
To create the new column use df['Max'] = df.idxmax(axis=1).
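A minimal, self-contained sketch of that (column names shortened here for readability):

import pandas as pd

df = pd.DataFrame({
    'Communications': [0.745763, 0.333333],
    'Business': [0.050847, 0.583333],
    'General': [0.118644, 0.000000],
    'Lifestyle': [0.084746, 0.083333],
})

df['Max'] = df.idxmax(axis=1)
print(df['Max'].tolist())  # ['Communications', 'Business']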
|
Function decorated using functools.wraps raises TypeError with the name of the wrapper. Why? How to avoid?
|
def decorated(f):
@functools.wraps(f)
def wrapper():
return f()
return wrapper
@decorated
def g():
pass
functools.wraps does its job at preserving the name of g:
>>> g.__name__
'g'
But if I pass an argument to g, I get a TypeError containing the name of the wrapper:
>>> g(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: wrapper() takes no arguments (1 given)
Where does this name come from? Where is it preserved? And is there a way to make the exception look like g() takes no arguments?
|
The name comes from the code object; both the function and the code object (containing the bytecode to be executed, among others) contain that name:
>>> g.__name__
'g'
>>> g.__code__.co_name
'wrapper'
The attribute on the code object is read-only:
>>> g.__code__.co_name = 'g'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: readonly attribute
You'd have to create a whole new code object to rename that, see a previous answer of mine where I defined a function to do that; using the rename_code_object() function on your decorated function:
>>> g = rename_code_object(g, 'g')
>>> g(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: g() takes no arguments (1 given)
Note, however, that this will entirely mask what code was being run! You generally want to see that a decorator wrapper was involved; it is the wrapper that throws the exception, not the original function, after all.
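A related side note (not part of the original answer): if the wrapper simply forwards its arguments, the binding error is raised by the wrapped function itself and therefore carries the expected name, which sidesteps the problem for many practical decorators:

import functools

def decorated(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)  # argument-binding errors now occur inside f itself
    return wrapper

@decorated
def g():
    pass

g(1)
# TypeError now mentions g, e.g. on Python 2: g() takes no arguments (1 given)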
|
Extract cow number from image
|
Every now and then my mom has to sift through these types of photos to extract the number from the image and rename the file to that number.
I'm trying to use OpenCV, Python and Tesseract to get the process done. I'm really lost trying to extract the portion of the image with the numbers. How could I do this? Any suggestions? I'm really new to OpenCV.
I tried to extract the white rectangular board using thresholds and contours, but to no avail, because the RGB values I choose for the threshold don't always work and I don't know how to choose the right contour.
EDIT:
Looking at this paper http://yoni.wexlers.org/papers/2010TextDetection.pdf . Looks promising.
|
I have been having another look at this, and had a couple of inspirations along the way....
Tesseract can accept custom dictionaries, and if you dig a little more, it appears that from v3.0, it accepts the command-line parameter digits to make it recognise digits only - seems a useful idea for your needs.
It may not be necessary to find the boards with the digits on - it may be easier to run Tesseract multiple times with various slices of the image and let it have a try itself as that is what it is supposed to do.
So, I decided to preprocess the image by changing everything that is within 25% of black to pure black, and everything else to pure white. That gives pre-processed images like this:
Next, I generate a series of images and pass them, one at a time to Tesseract. I decided to assume that the digits are probably between 40% to 10% of the image height, so I made a loop over strips 40, 30, 20 and 10% of the image height. I then slide the strip down the image from top to bottom in 20 steps passing each strip to Tesseract, till the strip is essentially across the bottom of the image.
Here are the 40% strips - each frame of the animation is passed to Tesseract:
Here are the 20% strips - each frame of the animation is passed to Tesseract:
Having got the strips, I resize them nicely for Tesseract's sweet spot and clean them up from noise etc. Then, I pass them into Tesseract and assess the quality of the recognition, somewhat crudely, by counting the number of digits it found. Finally, I sort the output by number of digits - presumably more digits is maybe better...
There are some rough edges and bits that you could dink around with, but it is a start!
#!/bin/bash
image=${1-c1.jpg}
# Make everything that is nearly black go fully black, everything else goes white. Median for noise
# convert -delay 500 c1.jpg c2.jpg c3.jpg -normalize -fuzz 25% -fill black -opaque black -fuzz 0 -fill white +opaque black -median 9 out.gif
convert "${image}" -normalize \
-fuzz 25% -fill black -opaque black \
-fuzz 0 -fill white +opaque black \
-median 9 tmp_$$.png
# Get height of image - h
h=$(identify -format "%h" "${image}")
# Generate strips that are 40%, 30%, 20% and 10% of image height
for pc in 40 30 20 10; do
# Calculate height of this strip in pixels - sh
((sh=(h*pc)/100))
# Calculate offset from top of picture to top of bottom strip - omax
((omax=h-sh))
# Calculate step size, there will be 20 steps
((step=omax/20))
# Cut strips sh pixels high from the picture starting at top and working down in 20 steps
for (( off=0;off<$omax;off+=$step)) do
t=$(printf "%05d" $off)
# Extract strip and resize to 80 pixels tall for tesseract
convert tmp_$$.png -crop x${sh}+0+${off} \
-resize x80 -median 3 -median 3 -median 3 \
-threshold 90% +repage slice_${pc}_${t}.png
# Run slice through tesseract, seeking only digits
tesseract slice_${pc}_${t}.png temp digits quiet
# Now try and assess quality of output :-) ... by counting number of digits
digits=$(tr -cd "[0-9]" < temp.txt)
ndigits=${#digits}
[ $ndigits -gt 0 ] && [ $ndigits -lt 6 ] && echo $ndigits:$digits
done
done | sort -n
Output for Cow 618 (first number is the number of digits found)
2:11
2:11
3:573
5:33613 <--- not bad
Output for Cow 2755 (first number is the number of digits found)
2:51
3:071
3:191
3:517
4:2155 <--- pretty close
4:2755 <--- nailed that puppy :-)
4:2755 <--- nailed that puppy :-)
4:5212
5:12755 <--- pretty close
Output for Cow 3174 (first number is the number of digits found)
3:554
3:734
5:12732
5:31741 <--- pretty close
Cool question - thank you!
|
Why variable = object doesn't work like variable = number
|
These variable assignments work as I expect:
>>> a = 3
>>> b = a
>>> print(a, b)
(3, 3)
>>> b=4
>>> print(a, b)
(3, 4)
However, these assignments behave differently:
>>> class number():
... def __init__(self, name, number):
... self.name = name
... self.number = number
...
>>> c = number("one", 1)
>>> d = c
>>> print(c.number, d.number)
(1, 1)
>>> d.number = 2
>>> print(c.number, d.number)
(2, 2)
Why is c is same as d, unlike in (a, b) example? How can I do something like in (a, b) in (c, d) classes example? That is, copy the object and then change one part of it (that won't affect the object that I borrowed properties from)?
|
These lines:
c = number("one", 1)
d = c
...are effectively:
Create a new instance of number and assign it to c
Assign the existing reference called c to a new variable d
You haven't changed or modified anything about c; d is another name that points to the same instance.
Without cloning the instance or creating a new instance, you can't do anything similar to how the primitive int is behaving.
To correct a bit of information, the explanation above is rather simplified and a bit incomplete in its nature, although it mostly describes what's going on at 10,000 feet.
For a closer look, we have to realize a few things about Python's variables, or "names", and how they interact with this program.
As mentioned above, you have the notion of "names" and "bindings", which are pretty straightforward to reason at:
a = 3
b = a
In this context, a is a name, and b is a binding to a. We haven't modified or changed anything about a.
As noted before, there are two types of data in Python: mutable and immutable. A name that points to immutable data, such as primitives and tuples, can be reassigned without any ill effect to any other bindings present on it, because no state is changing with respect to the binding.
This is why this reassignment does what we would expect it to:
print(a, b)
b = 4
print(a, b)
The result of b = 4 is that b is now pointing at a new copy of an integer, the value 4.
Recall that I did mention tuples as immutable data. You can't change the binding of a particular entity in your tuple...
t = ('foo', 'bar')
t[0] = 'baz' # illegal
...but you can have mutable data structures as part of those bindings.
t = ([1, 2, 3], 'bar')
t[0].append([4, 5, 6]) # ([1, 2, 3, [4, 5, 6]], 'bar')
So where does that leave our example?
c = number("one", 1)
d = c
number is a mutable type which is named as c, and its values can be changed at will between multiple different bindings to c.
Effectively, we've got a name and a binding to a name:
We have a new instance of number and refer to it by the name c.
Bind the reference c to another name d.
Again, nothing's changed about c, but it can be referenced through another name.
Unlike with the immutable data, when we reassign the value of d.number, we're reassigning the same binding that c is aware of:
>>> id(d.number)
36696408
>>> id(c.number)
36696408
This is why you require either a new instance or a copy. You have to refer to a different instance of number. With this simple binding, you're not going to accomplish that.
from copy import copy
c = number("one", 1)
d = copy(c)
id(c) # 140539175695784
id(d) # 140539175695856
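A quick check of the behaviour the question asks for (illustrative only; note that copy() makes a shallow copy, so use deepcopy() if the attributes are themselves mutable containers):

from copy import copy

class number(object):
    def __init__(self, name, number):
        self.name = name
        self.number = number

c = number("one", 1)
d = copy(c)
d.number = 2
print(c.number, d.number)  # 1 2 -- changing d no longer affects c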
|
Calculating the averages for each KEY in a Pairwise (K,V) RDD in Spark with Python
|
I want to share this particular Apache Spark with Python solution because documentation for it is quite poor.
I wanted to calculate the average value of K/V pairs (stored in a Pairwise RDD), by KEY. Here is what the sample data looks like:
>>> rdd1.take(10) # Show a small sample.
[(u'2013-10-09', 7.60117302052786),
(u'2013-10-10', 9.322709163346612),
(u'2013-10-10', 28.264462809917358),
(u'2013-10-07', 9.664429530201343),
(u'2013-10-07', 12.461538461538463),
(u'2013-10-09', 20.76923076923077),
(u'2013-10-08', 11.842105263157894),
(u'2013-10-13', 32.32514177693762),
(u'2013-10-13', 26.249999999999996),
(u'2013-10-13', 10.693069306930692)]
Now the following code sequence is a less than optimal way to do it, but it does work. It is what I was doing before I figured out a better solution. It's not terrible but -- as you'll see in the answer section -- there is a more concise, efficient way.
>>> import operator
>>> countsByKey = sc.broadcast(rdd1.countByKey()) # SAMPLE OUTPUT of countsByKey.value: {u'2013-09-09': 215, u'2013-09-08': 69, ... snip ...}
>>> rdd1 = rdd1.reduceByKey(operator.add) # Calculate the numerators (i.e. the SUMs).
>>> rdd1 = rdd1.map(lambda x: (x[0], x[1]/countsByKey.value[x[0]])) # Divide each SUM by its denominator (i.e. COUNT)
>>> print(rdd1.collect())
[(u'2013-10-09', 11.235365503035176),
(u'2013-10-07', 23.39500642456595),
... snip ...
]
|
Now a much better way to do this is to use the rdd.aggregateByKey() method. Because that method is so poorly documented in the Apache Spark with Python documentation (which is why I'm writing this), until recently I had been using the above code sequence. But again, it's less efficient, so don't do it that way unless you need to.
Here's how to do the same using the rdd.aggregateByKey() method (recommended) ...
By KEY, simultaneously calculate the SUM (numerator for the average we want to compute), and COUNT (denominator for the average we want to compute).
>>> rdd1 = rdd1.aggregateByKey((0,0), lambda a,b: (a[0] + b, a[1] + 1),
lambda a,b: (a[0] + b[0], a[1] + b[1]))
Where the following is true about the meaning of each 'a' and 'b' pair above (just so you can visualize what's happening):
First lambda expression for Within-Partition Reduction Step::
a: is a TUPLE that holds: (runningSum, runningCount).
b: is a SCALAR that holds the next Value
Second lambda expression for Cross-Partition Reduction Step::
a: is a TUPLE that holds: (runningSum, runningCount).
b: is a TUPLE that holds: (nextPartitionsSum, nextPartitionsCount).
Finally, calculate the average for each KEY, and collect results.
>>> finalResult = rdd1.mapValues(lambda v: v[0]/v[1]).collect()
>>> print(finalResult)
[(u'2013-09-09', 11.235365503035176),
(u'2013-09-01', 23.39500642456595),
(u'2013-09-03', 13.53240060820617),
(u'2013-09-05', 13.141148418977687),
... snip ...
]
I hope this aggregateByKey() illustration will help others.
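Putting it all together, here is a minimal self-contained sketch (assuming an existing SparkContext named sc; the dates and values are illustrative only):

data = [(u'2013-10-09', 7.6), (u'2013-10-09', 20.8),
        (u'2013-10-10', 9.3), (u'2013-10-10', 28.3)]
rdd1 = sc.parallelize(data)

sum_count = rdd1.aggregateByKey(
    (0, 0),
    lambda acc, value: (acc[0] + value, acc[1] + 1),            # within a partition
    lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1]))  # across partitions

print(sum_count.mapValues(lambda pair: pair[0] / pair[1]).collect())
# roughly [(u'2013-10-09', 14.2), (u'2013-10-10', 18.8)] -- ordering may vary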
|
Why is `if` so much faster when checked before a statement than after a statement?
|
Here's an example of what I mean:
s = """
if x > 10:
x -= 10
else:
x = 0
"""
import timeit
print(timeit.timeit(s, setup="x=5", number=99999999))
Outputs approximately 3 seconds on my computer, regardless of the setup (x=5 vs x=15, no difference)
If I were to use much shorter code, one that first decreases x -= 10 and only then checks if x < 0, I will get much worse results:
s = """
x -= 10
if x < 0:
x = 0
"""
import timeit
print(timeit.timeit(s, setup="x=5", number=99999999))
It outputs around 6 seconds, again regardless whether the initial value of x was 5 or 15.
I understand that it would be slower when x < 10 since we'd first call x -= 10 and then set x = 0 instead of simply setting x once.
The thing is, 99% of the time the x's initial value in my program is set to a number much higher than 10, so I thought I'd use the shorter version since most of the time I should see no difference in performance.
However, there's a huge difference in performance even when x > 10, why's this?
|
Your premise is wrong. setup only gets run once for the entire timeit. If you make sure that x stays above 10 then the symptoms disappear:
>>> s1 = """
... if x > 10:
... x -= 10
... else:
... x = 0
... """
>>> s2 = """
... x -= 10
... if x < 0:
... x = 0
... """
>>> import timeit
>>> print(timeit.timeit(s1, setup="x=1000000000", number=99999999))
8.934118068675566
>>> print(timeit.timeit(s2, setup="x=1000000000", number=99999999))
8.744505329313448
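To see why the premise fails, here is a minimal sketch simulating a few of timeit's repetitions by hand; setup runs once, so x is not reset between executions of the statement:

x = 5                   # the setup, executed exactly once

for _ in range(3):      # a few of timeit's many repetitions of the first snippet
    if x > 10:
        x -= 10
    else:
        x = 0
# after the first pass x is 0, so every later pass only runs the comparison and the else branch

x = 5                   # setup again, for the second snippet

for _ in range(3):      # repetitions of the second snippet
    x -= 10
    if x < 0:
        x = 0
# every pass performs the subtraction *and* the reset, hence the extra cost per iteration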
|
Multikey Multivalue Non Deterministic python dictionary
|
There is already a multi key dict in python and also a multivalued dict. I needed a python dictionary which is both:
example:
# probabilistically fetch any one of baloon, toy or car
d['red','blue','green']== "baloon" or "car" or "toy"
Probability of d['red']==d['green'] is high and Probability of d['red']!=d['red'] is low but possible
the single output value should be probabilistically determined (fuzzy) based on a rule from keys
eg:in above case rule could be if keys have both "red" and "blue" then return "baloon" 80% of time if only blue then return "toy" 15% of time else "car" 5% of time.
The setitem method should be designed such that following is possible:
d["red", "blue"] =[
("baloon",haseither('red','green'),0.8),
("toy",.....)
,....
]
Above assigns multiple values to the dictionary with a predicate function and corresponding probability. And instead of the assignment list above even a dictionary as assignment would be preferable:
d["red", "blue"] ={
"baloon": haseither('red','green',0.8),
"toy": hasonly("blue",0.15),
"car": default(0.05)
}
In the above baloon will be returned 80% of time if "red" or green is present
, return toy 15% of time if blue present and return car 5% of time without any condition.
Are there any existing data structures in Python which already satisfy the above requirements? If not, how can the multikeydict code be modified to meet them?
If a dictionary is used, there could be a configuration file, or appropriately nested decorators, that configure the probabilistic predicate logic above without having to hard-code if/else statements.
Note: the above is a useful automaton for a rule-based auto-responder application, so do let me know if any similar rule-based framework is available in Python, even if it does not use the dictionary structure.
|
the single output value should be probabilistically determined (fuzzy) based on a rule from keys eg:in above case rule could be if keys have both "red" and "blue" then return "baloon" 80% of time if only blue then return "toy" 15% of time else "car" 5% of time.
Bear in mind your case analysis is not complete, and it's ambiguous, but you can do the following "in spirit" (fleshing out the desired results):
import random
def randomly_return(*colors):
colors = set(*colors)
if 'red' in colors and 'blue' in colors:
if random.random() < 0.8: # 80 % of the time
return "baloon"
if 'blue' in colors and len(colors) == 1: # only blue in colors
if random.random() < 0.15:
return "toy"
else:
if random.random() < 0.05:
return "car"
# other cases to consider
I would keep this as a function, because it is a function! But if you insist to make it dict-like, then python let's you do this by overriding __getitem__ (IMO it's not pythonic).
class RandomlyReturn(object):
def __getitem__(self, *colors):
return randomly_return(*colors)
>>> r = RandomlyReturn()
>>> r["red", "blue"] # 80% of the time it'll return "baloon"
"baloon"
From your clarification, OP wants to pass and generate:
randreturn((haseither(red,blue),baloon:0.8),((hasonly(blue),toy:0.15)),(default(''),car:0.05)))
you want to generate a function as follows:
funcs = {"haseither": lambda needles, haystack: any(n in haystack for n in needles),
         "hasonly": lambda needles, haystack: len(haystack) == 1 and needles[0] in haystack}
def make_random_return(crits, default):
def random_return(*colors):
colors = set(*colors)
for c in crits:
if funcs[c["func"]](c["args"], colors) and random.random() < c["with_prob"]:
return c["return_value"]
return default
return random_return
where the crit and default in this case would be:
crit = [{"func": "haseither", "args": ("red", "blue"), "return_value": "baloon", "with_prob": 0.8}, ...]
default = "car" # ??
my_random_return = make_random_return(crits, default)
As I say, your probabilities are ambiguous/don't add up, so you're most likely going to need to tweak this...
You can extend the class definition by passing crit and default upon instantiation:
class RandomlyReturn(object):
def __init__(self, crit, default):
self.randomly_return = make_random_return(crit, default)
def __getitem__(self, *colors):
return self.randomly_return(*colors)
>>> r = RandomlyReturn(crit, default)
>>> r["red", "blue"] # 80% of the time it'll return "baloon"
"baloon"
|
Set debugger breakpoint at end of a function without return
|
I am debugging method f() that has no return in it.
class A(object):
def __init__(self):
self.X = []
def f(self):
for i in range(10):
self.X.append(i)
I need to see how this method modifies variable X right after it is called. To do that, I insert a return at the end of the method, and set the breakpoint there:
That way, as soon as the method reaches its return, I can see the value of my variable X.
This does the job, but I am pretty sure there is a better way. Editing a method or function every time I need to debug it seems silly.
Question:
Is there a different way (e.g. an option in the debugger) to set a breakpoint at the end of a method that does not have a return?
(Note that setting a breakpoint at the function call and using Step Over would not display X when mouseovering, since the function is called from a different module.)
|
You can add a conditional breakpoint on the last line and set the condition to be something that occurs only in the last iteration.
In this instance the condition is very easy since it's just i == 9, but it may be a lot more complex depending on your loop condition so sometimes adding a statement at the end will be the easier solution.
That screenshot is from IntelliJ IDEA and your screenshot looks like it's from the same IDE, so just right-click the breakpoint to show the dialog and enter your condition.
If you're using some other IDE I'm sure there is capability to make a breakpoint conditional.
Update:
There is no support for breaking at the end of a method in the Python debugger, only at the start of a method:
b(reak) [[filename:]lineno | function[, condition]]
With a lineno argument, set a break there in the current file. With a function argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn't been loaded yet). The file is searched on sys.path. Note that each breakpoint is assigned a number to which all the other breakpoint commands refer.
If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored.
Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any.
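For what it's worth, here is a minimal sketch of driving that same conditional break programmatically via pdb/bdb; the module name, path and line number are hypothetical (assume the class above lives in a.py and the self.X.append(i) statement is on line 6):
import pdb
from a import A                            # hypothetical module holding the class from the question

obj = A()
dbg = pdb.Pdb()
# Interactive equivalent at the (Pdb) prompt:  b a.py:6, i == 9
dbg.set_break('a.py', 6, cond='i == 9')    # hypothetical path and line number
dbg.run('obj.f()', globals(), locals())    # stops only on the final loop iteration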
|
Renaming downloaded images in Scrapy 0.24 with content from an item field while avoiding filename conflicts?
|
I'm attempting to rename the images that are downloaded by my Scrapy 0.24 spider. Right now the downloaded images are stored with a SHA1 hash of their URLs as the file names. I'd like to instead name them the value I extract with item['model']. This question from 2011 outlines what I want, but the answers are for previous versions of Scrapy and don't work with the latest version.
Once I manage to get this working I'll also need to make sure I account for different images being downloaded with the same filename. So I'll need to download each image to its own uniquely named folder, presumably based on the original URL.
Here is a copy of the code I am using in my pipeline. I got this code from a more recent answer in the link above, but it's not working for me. Nothing errors out and the images are downloaded as normal. It doesn't seem my extra code has any effect on the filenames as they still appear as SHA1 hashes.
pipelines.py
class AllenheathPipeline(object):
def process_item(self, item, spider):
return item
import scrapy
from scrapy.contrib.pipeline.images import ImagesPipeline
from scrapy.http import Request
from scrapy.exceptions import DropItem
class MyImagesPipeline(ImagesPipeline):
#Name download version
def file_path(self, request, response=None, info=None):
item=request.meta['item'] # Like this you can use all from item, not just url.
image_guid = request.url.split('/')[-1]
return 'full/%s' % (image_guid)
#Name thumbnail version
def thumb_path(self, request, thumb_id, response=None, info=None):
image_guid = thumb_id + request.url.split('/')[-1]
return 'thumbs/%s/%s.jpg' % (thumb_id, image_guid)
def get_media_requests(self, item, info):
#yield Request(item['images']) # Adding meta. Dunno how to put it in one line :-)
for image in item['images']:
yield Request(image)
def item_completed(self, results, item, info):
image_paths = [x['path'] for ok, x in results if ok]
if not image_paths:
raise DropItem("Item contains no images")
item['image_paths'] = image_paths
return item
settings.py
BOT_NAME = 'allenheath'
SPIDER_MODULES = ['allenheath.spiders']
NEWSPIDER_MODULE = 'allenheath.spiders'
ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1}
IMAGES_STORE = 'c:/allenheath/images'
products.py (my spider)
import scrapy
import urlparse
from allenheath.items import ProductItem
from scrapy.selector import Selector
from scrapy.http import HtmlResponse
class productsSpider(scrapy.Spider):
name = "products"
allowed_domains = ["http://www.allen-heath.com/"]
start_urls = [
"http://www.allen-heath.com/ahproducts/ilive-80/",
"http://www.allen-heath.com/ahproducts/ilive-112/"
]
def parse(self, response):
for sel in response.xpath('/html'):
item = ProductItem()
item['model'] = sel.css('#prodsingleouter > div > div > h2::text').extract() # The value I'd like to use to name my images.
item['shortdesc'] = sel.css('#prodsingleouter > div > div > h3::text').extract()
item['desc'] = sel.css('#tab1 #productcontent').extract()
item['series'] = sel.css('#pagestrip > div > div > a:nth-child(3)::text').extract()
item['imageorig'] = sel.css('#prodsingleouter > div > div > h2::text').extract()
item['image_urls'] = sel.css('#tab1 #productcontent .col-sm-9 img').xpath('./@src').extract()
item['image_urls'] = [urlparse.urljoin(response.url, url) for url in item['image_urls']]
yield item
items.py
import scrapy
class ProductItem(scrapy.Item):
model = scrapy.Field()
itemcode = scrapy.Field()
shortdesc = scrapy.Field()
desc = scrapy.Field()
series = scrapy.Field()
imageorig = scrapy.Field()
image_urls = scrapy.Field()
images = scrapy.Field()
Here's a pastebin of the output I get from the command prompt when I run the spider: http://pastebin.com/ir7YZFqf
Any help would be greatly appreciated!
|
The pipelines.py:
from scrapy.contrib.pipeline.images import ImagesPipeline
from scrapy.http import Request
from scrapy.exceptions import DropItem
from scrapy import log
class MyImagesPipeline(ImagesPipeline):
#Name download version
def file_path(self, request, response=None, info=None):
image_guid = request.meta['model'][0]
log.msg(image_guid, level=log.DEBUG)
return 'full/%s' % (image_guid)
#Name thumbnail version
def thumb_path(self, request, thumb_id, response=None, info=None):
image_guid = thumb_id + request.url.split('/')[-1]
log.msg(image_guid, level=log.DEBUG)
return 'thumbs/%s/%s.jpg' % (thumb_id, image_guid)
def get_media_requests(self, item, info):
yield Request(item['image_urls'][0], meta=item)
You're using the settings.py wrong. You should use this:
ITEM_PIPELINES = {'allenheath.pipelines.MyImagesPipeline': 1}
For thumbnails to work, add this to settings.py:
IMAGES_THUMBS = {
'small': (50, 50),
'big': (100, 100),
}
|
What is the relationship between virtualenv and pyenv?
|
I recently learned how to use virtualenv and virtualenvwrapper in my workflow, but I've seen pyenv mentioned in a few guides and I can't seem to get an understanding of what pyenv is and how it is different/similar to virtualenv. Is pyenv a better/newer replacement for virtualenv or a complementary tool? If the latter, what does it do differently, and how do the two (and virtualenvwrapper if applicable) work together?
|
Pyenv and virtualenv are very different tools that work in different ways to do different things:
Pyenv is a bash extension - it will not work on Windows - that intercepts your calls to python, pip, etc., to direct them to one of several system python tool-chains. So you always have all the libraries that you have installed in the selected python version available - as such it is good for users who have to switch between different versions of python.
VirtualEnv is pure python so works everywhere. It makes a copy of python and pip (optionally of a specific version) local to the activated environment, which may or may not include links to the current system tool-chain; if it does not, you can install just a known subset of libraries into that environment. As such it is almost certainly much better for testing and deployment, as you know exactly which libraries, at which versions, are used and a global change will not impact your module.
Additional Tools
There are a number of tools that it is worth mentioning, and considering, as they can help with the use of one or more of the above:
VirtualEnvWrapper Manage and simplify the use and management of VirtualEnv - Cross Platform.
pyenv-virtualenv, installed by pyenv-installer, which gives PyEnv tools for managing and interfacing to VirtualEnv - with this you can have a base installation that includes more than one version of python and create isolated environments within each of them - Linux Only. Suggested by Johann Visagie
PyInstaller can take your python code, possibly developed & tested under VirtualEnv, and bundle it up so that it can run on platforms that do not have your version of python installed - note that it is not a cross compiler: you will need a Windows (virtual) machine to build Windows installs, etc. - but it can be handy even where you can be sure that python will be installed but cannot be sure that the version of python and all the libraries will be compatible with your code.
|
DRF: Simple foreign key assignment with nested serializers?
|
With Django REST Framework, a standard ModelSerializer will allow ForeignKey model relationships to be assigned or changed by POSTing an ID as an Integer.
What's the simplest way to get this behavior out of a nested serializer?
Note, I am only talking about assigning existing database objects, not nested creation.
I have hacked around this in the past with additional 'id' fields in the serializer and with custom create and update methods, but this is such a seemingly simple and frequent issue for me that I'm curious to know the best way.
class Child(models.Model):
name = CharField(max_length=20)
class Parent(models.Model):
name = CharField(max_length=20)
phone_number = models.ForeignKey(PhoneNumber)
child = models.ForeignKey(Child)
class ChildSerializer(ModelSerializer):
class Meta:
model = Child
class ParentSerializer(ModelSerializer):
# phone_number relation is automatic and will accept ID integers
children = ChildSerializer() # this one will not
class Meta:
model = Parent
|
The best solution here is to use two different fields: one for reading and the other for writing. Without doing some heavy lifting, it is difficult to get what you are looking for in a single field.
The read-only field would be your nested serializer (ChildSerializer in this case) and it will allow you to get the same nested representation that you are expecting. Most people define this as just child, because they already have their front-end written by this point and changing it would cause problems.
The write-only field would be a PrimaryKeyRelatedField, which is what you would typically use for assigning objects based on their primary key. This does not have to be write-only, especially if you are trying to go for symmetry between what is received and what is sent, but it sounds like that might suit you best. This field should have a source set to the foreign key field (child in this example) so it assigns it properly on creation and updating.
This has been brought up on the discussion group a few times, and I think this is still the best solution. Thanks to Sven Maurer for pointing it out.
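As a rough sketch of that two-field approach, building on the models and ChildSerializer from the question (the child/child_id field names, the queryset, and the write_only choice are my assumptions, not something prescribed by the answer):
from rest_framework import serializers

class ParentSerializer(serializers.ModelSerializer):
    # read side: full nested representation
    child = ChildSerializer(read_only=True)
    # write side: accept a plain primary key and assign it to the same FK via source
    child_id = serializers.PrimaryKeyRelatedField(
        queryset=Child.objects.all(), source='child', write_only=True)

    class Meta:
        model = Parent
        fields = ('id', 'name', 'phone_number', 'child', 'child_id')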
|
Finding substring (nonconsecutive)
|
In Python if I have string needle and I want to see if it exists (contiguously) as a substring in haystack, I only need to check if needle in haystack.
What happens in the case of a subsequence?
Example:
haystack = "qabcdzzzefgyyyh"
needle = "acgh"
"acgh" is a subsequence of qabcdzzzefgyyyh -- does not exist contiguously in haystack, but it does non-contiguously. The c appears after the a, the g appears after the c, and the h appears after the g.
|
I don't know if there's builtin function, but it is rather simple to do manually
def exists(a, b):
"""checks if b exists in a as a subsequence"""
pos = 0
for ch in a:
if pos < len(b) and ch == b[pos]:
pos += 1
return pos == len(b)
>>> exists("moo", "mo")
True
>>> exists("moo", "oo")
True
>>> exists("moo", "ooo")
False
>>> exists("haystack", "hack")
True
>>> exists("haystack", "hach")
False
>>>
|
What does this notation do for lists in Python: "someList[:]"?
|
I sometimes come across this way of printing or returning a list - someList[:].
I don't see why people use it, as it returns the full list.
Why not simply write someList, without the [:] part?
|
[:] creates a slice, usually used to get just a part of a list. Without any minimum/maximum index given, it creates a copy of the entire list. Here's a Python session demonstrating it:
>>> a = [1,2,3]
>>> b1 = a
>>> b2 = a[:]
>>> b1.append(50)
>>> b2.append(51)
>>> a
[1, 2, 3, 50]
>>> b1
[1, 2, 3, 50]
>>> b2
[1, 2, 3, 51]
Note how appending to b1 also appended the value to a. Appending to b2 however did not modify a, i.e. b2 is a copy.
|
Finding all keys in a dictionary from a given list QUICKLY
|
I have a (potentially quite big) dictionary and a list of 'possible' keys. I want to quickly find which of the keys have matching values in the dictionary. I've found lots of discussion of single dictionary values here and here, but no discussion of speed or multiple entries.
I've come up with four ways, and for the three that work best I compare their speed on different sample sizes below - are there better methods? If people can suggest sensible contenders I'll subject them to the analysis below as well.
Sample lists and dictionaries are created as follows:
import cProfile
from random import randint
length = 100000
listOfRandomInts = [randint(0,length*length/10-1) for x in range(length)]
dictionaryOfRandomInts = {randint(0,length*length/10-1): "It's here" for x in range(length)}
Method 1: the 'in' keyword:
def way1(theList,theDict):
resultsList = []
for listItem in theList:
if listItem in theDict:
resultsList.append(theDict[listItem])
return resultsList
cProfile.run('way1(listOfRandomInts,dictionaryOfRandomInts)')
32 function calls in 0.018 seconds
Method 2: error handling:
def way2(theList,theDict):
resultsList = []
for listItem in theList:
try:
resultsList.append(theDict[listItem])
except:
pass
return resultsList
cProfile.run('way2(listOfRandomInts,dictionaryOfRandomInts)')
32 function calls in 0.087 seconds
Method 3: set intersection:
def way3(theList,theDict):
return list(set(theList).intersection(set(theDict.keys())))
cProfile.run('way3(listOfRandomInts,dictionaryOfRandomInts)')
26 function calls in 0.046 seconds
Method 4: Naive use of dict.keys():
This is a cautionary tale - it was my first attempt and BY FAR the slowest!
def way4(theList,theDict):
resultsList = []
keys = theDict.keys()
for listItem in theList:
if listItem in keys:
resultsList.append(theDict[listItem])
return resultsList
cProfile.run('way4(listOfRandomInts,dictionaryOfRandomInts)')
12 function calls in 248.552 seconds
EDIT: Bringing the suggestions given in the answers into the same framework that I've used for consistency. Many have noted that more performance gains can be achieved in Python 3.x, particularly list comprehension-based methods. Many thanks for all of the help!
Method 5: Better way of performing intersection (thanks jonrsharpe):
def way5(theList, theDict):
return list(set(theList).intersection(theDict))
25 function calls in 0.037 seconds
Method 6: List comprehension (thanks jonrsharpe):
def way6(theList, theDict):
return [item for item in theList if item in theDict]
24 function calls in 0.020 seconds
Method 7: Using the & keyword (thanks jonrsharpe):
def way7(theList, theDict):
return list(theDict.viewkeys() & theList)
25 function calls in 0.026 seconds
For methods 1-3 and 5-7 I timed them as above with list/dictionaries of length 1000, 10000, 100000, 1000000, 10000000 and 100000000 and show a log-log plot of time taken. Across all lengths the intersection and in-statement method perform better. The gradients are all about 1 (maybe a bit higher), indicating O(n) or perhaps slightly super-linear scaling.
|
Of a couple of additional methods I've tried, the fastest was a simple list comprehension:
def way6(theList, theDict):
return [item for item in theList if item in theDict]
This runs the same process as your fastest approach, way1, but more quickly. For comparison, the quickest set-based way was
def way5(theList, theDict):
return list(set(theList).intersection(theDict))
timeit results:
>>> import timeit
>>> setup = """from __main__ import way1, way5, way6
from random import randint
length = 100000
listOfRandomInts = [randint(0,length*length/10-1) for x in range(length)]
dictionaryOfRandomInts = {randint(0,length*length/10-1): "It's here" for x in range(length)}
"""
>>> timeit.timeit('way1(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
14.550477756582723
>>> timeit.timeit('way5(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
19.597916393388232
>>> timeit.timeit('way6(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
13.652289059326904
Having added @abarnert's suggestion:
def way7(theList, theDict):
return list(theDict.viewkeys() & theList)
and re-run the timing I now get:
>>> timeit.timeit('way1(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
13.110055883138497
>>> timeit.timeit('way5(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
17.292466681101036
>>> timeit.timeit('way6(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
14.351759544463917
>>> timeit.timeit('way7(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
17.206370930653392
way1 and way6 have switched places, so I re-ran again:
>>> timeit.timeit('way1(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
13.648176054011941
>>> timeit.timeit('way6(listOfRandomInts,dictionaryOfRandomInts)', setup=setup, number=1000)
13.847062579316628
So it looks like the set approach is slower than the list approaches, but the difference between the explicit loop and the list comprehension is (surprisingly, to me at least) a bit variable. I'd say just pick one, and not worry about it unless it becomes a real bottleneck later.
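If you are on Python 3, where dict.viewkeys no longer exists, the same idea as way7 can be spelled with dict.keys(), which already returns a set-like view (a sketch, not re-timed here):
def way7_py3(theList, theDict):
    return list(theDict.keys() & theList)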
|
Count number of non-NaN entries in every column of Dataframe
|
I have a really big DataFrame and I was wondering if there was a short (one or two liner) way to get a count of non-NaN entries in a DataFrame. I don't want to do this one column at a time as I have close to 1000 columns.
df1 = pd.DataFrame([(1,2,None),(None,4,None),(5,None,7),(5,None,None)],
columns=['a','b','d'], index = ['A', 'B','C','D'])
a b d
A 1 2 NaN
B NaN 4 NaN
C 5 NaN 7
D 5 NaN NaN
Output:
a: 3
b: 2
d: 1
|
The count() method returns the number of non-NaN values in each column:
>>> df1.count()
a 3
b 2
d 1
dtype: int64
Similarly, count(axis=1) returns the number of non-NaN values in each row.
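For example, with the df1 above, counting along the rows should give:
>>> df1.count(axis=1)
A    2
B    1
C    2
D    1
dtype: int64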
|
Python theano with index computed inside the loop
|
I have installed the Theano library for increasing the speed of a computation, so that I can use the power of a GPU.
However, inside the inner loop of the computation a new index is calculated, based on the loop index and corresponding values of a couple of arrays.
That calculated index is then used to access an element of another array, which, in turn, is used for another calculation.
Is this too complicated to expect any significant speedups from Theano?
So let me rephrase my question, the other way round.
Here is an example of GPU code snippet. Some initialisations are left out for reasons of brevity. Can I translate this to Python/Theano without increasing computation times considerably?
__global__ void SomeKernel(const cuComplex* __restrict__ data,
float* __restrict__ voxels)
{
unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
unsigned int idy = blockIdx.y * blockDim.y + threadIdx.y;
unsigned int pos = (idy * NX + idx);
unsigned int ind1 = pos * 3;
float x = voxels[ind1];
float y = voxels[ind1 + 1];
float z = voxels[ind1 + 2];
int m;
for (m = 0; m < M; ++m)
{
unsigned int ind2 = 3 * m;
float diff_x = x - some_pos[ind2];
float diff_y = y - some_pos[ind2 + 1];
float diff_z = z - some_pos[ind2 + 2];
float distance = sqrtf(diff_x * diff_x
+ diff_y * diff_y
+ diff_z * diff_z);
unsigned int dist = rintf(distance/some_factor);
ind3 = m * another_factor + dist;
cuComplex some_element = data[ind3];
Main calculation starts, involving some_element.
|
No, I see nothing which cannot be done using Tensors instead of a for-loop. This should mean that you might see an increase in speed, but this will really depend on the application. You have an overhead of python+theano as well, especially coming from c-like code.
So, instead of
for (m = 0; m < M; ++m)
{
unsigned int ind2 = 3 * m;
float diff_x = x - some_pos[ind2];
float diff_y = y - some_pos[ind2 + 1];
float diff_z = z - some_pos[ind2 + 2];
float distance = sqrtf(diff_x * diff_x
+ diff_y * diff_y
+ diff_z * diff_z);
unsigned int dist = rintf(distance/some_factor);
ind3 = m * another_factor + dist;
cuComplex some_element = data[ind3];
}
You could do something like (off the top of my head)
diff_xyz = T.Tensor([x,y,z]).dimshuffle('x',0) - some_pos.reshape(-1,3)
distance = T.norm(diff_xyz)
dist = T.round(distance/some_factor)
data = data.reshape(another_factor,-1)
some_elements = data[:,dist]
See? No more loops, therefore a GPU can parallelize this.
However, inside the inner loop of the computation a new index is calculated, based on the loop index and corresponding values of a couple of arrays. (...) Is this too complicated to expect any significant speedups from Theano?
In general: this can be optimized, as long as the loop index has a linear relation with the index needed, by using tensors instead of loops. It however needs a bit of creativity and massaging to get right.
Non-linear relations are also possible using Tensor.take(), but I don't dare to vouch for its speed on GPU. My gut-feeling always told me to stay away from it, as it is probably too flexible to optimize nicely. However, it is possible to use when there are no alternatives.
|
Why might Python's `from` form of an import statement bind a module name?
|
I have a Python project with the following structure:
testapp/
├── __init__.py
├── api
│   ├── __init__.py
│   └── utils.py
└── utils.py
All of the modules are empty except testapp/api/__init__.py which has the following code:
from testapp import utils
print "a", utils
from testapp.api.utils import x
print "b", utils
and testapp/api/utils.py which defines x:
x = 1
Now from the root I import testapp.api:
$ export PYTHONPATH=$PYTHONPATH:.
$ python -c "import testapp.api"
a <module 'testapp.utils' from 'testapp/utils.pyc'>
b <module 'testapp.api.utils' from 'testapp/api/utils.pyc'>
The result of the import surprises me, because it shows that the second import statement has overwritten utils. Yet the docs state that the from statement will not bind a module name:
The from form does not bind the module name: it goes through the list
of identifiers, looks each one of them up in the module found in step
(1), and binds the name in the local namespace to the object thus
found.
And indeed, when in a terminal I use a from ... import ... statement, no module names are introduced:
>>> from os.path import abspath
>>> path
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'path' is not defined
I suspect this has to do with Python, at the time of the second import statement, trying to import testapp.api.utils which refers to testapp.utils and failing but I'm not certain.
What is happening here?
|
From the import system documentation:
When a submodule is loaded using any mechanism (e.g. importlib APIs,
the import or import-from statements, or built-in __import__())
a binding is placed in the parent module's namespace to the submodule
object. For example, if package spam has a submodule foo, after
importing spam.foo, spam will have an attribute foo which is
bound to the submodule. Let's say you have the following directory
structure:
spam/
__init__.py
foo.py
bar.py
and spam/__init__.py has the following lines in it:
from .foo import Foo
from .bar import Bar
then executing the following puts a name binding to foo and bar in
the spam module:
>>> import spam
>>> spam.foo
<module 'spam.foo' from '/tmp/imports/spam/foo.py'>
>>> spam.bar
<module 'spam.bar' from '/tmp/imports/spam/bar.py'>
Given Python's familiar name binding rules this might seem surprising,
but it's actually a fundamental feature of the import system. The
invariant holding is that if you have sys.modules['spam'] and
sys.modules['spam.foo'] (as you would after the above import), the
latter must appear as the foo attribute of the former.
If you do from testapp.api.utils import x, the import statement will not load utils into the local namespace. However, the import machinery will load utils into the testapp.api namespace, to make further imports work right. It just happens that in your case, testapp.api is also the local namespace, so you're getting a surprise.
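A small way to check that invariant with the question's layout (run from the project root with the same PYTHONPATH setup as above; the prints from __init__.py will appear first):
import sys
import testapp.api

print('testapp.api.utils' in sys.modules)   # True: the from-import loaded the submodule
print(sys.modules['testapp.api'].utils)     # <module 'testapp.api.utils' ...>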
|
How to execute Python code from within Visual Studio Code
|
Visual Studio Code was recently released and I liked the look of it and the features it offered, so I figured I would give it a go.
I downloaded the application from the downloads page
fired it up, messed around a bit with some of the features ... and then realized I had no idea how to actually execute any of my Python code!
I really like the look and feel/usability/features of Visual Studio Code, but I can't seem to find out how to run my Python code, a real killer because that's what I program primarily in.
Does anyone know if there is a way to execute Python code in Visual Studio Code?
|
Here is how to Configure Task Runner in Visual Studio Code to run a py file.
In your console press Ctrl+Shift+P (Windows) or Cmd+Shift+P (Apple) and this brings up a search box where you search for "Configure Task Runner"
EDIT: If this is the first time you open the "Task: Configure Task Runner", you need to select "other" at the bottom of the next selection list.
This will bring up the properties which you can then change to suit your preference. In this case you want to change the following properties;
Change the Command property from "tsc" (TypeScript) to "Python"
Change showOutput from "silent" to "Always"
Change args (Arguments) from ["Helloworld.ts"] to ["${file}"] (filename)
Delete the last property problemMatcher
Save the changes made
You can now open your py file and run it nicely with the shortcut Ctrl+Shift+B (Windows) or Cmd+Shift+B (Apple)
Enjoy!
|
Problems obtaining most informative features with scikit learn?
|
I'm trying to obtain the most informative features from a textual corpus. From this well-answered question I know that this task could be done as follows:
def most_informative_feature_for_class(vectorizer, classifier, classlabel, n=10):
labelid = list(classifier.classes_).index(classlabel)
feature_names = vectorizer.get_feature_names()
topn = sorted(zip(classifier.coef_[labelid], feature_names))[-n:]
for coef, feat in topn:
print classlabel, feat, coef
Then:
most_informative_feature_for_class(tfidf_vect, clf, 5)
For this classifier:
X = tfidf_vect.fit_transform(df['content'].values)
y = df['label'].values
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,
y, test_size=0.33)
clf = SVC(kernel='linear', C=1)
clf.fit(X, y)
prediction = clf.predict(X_test)
The problem is the output of most_informative_feature_for_class:
5 a_base_de_bien bastante (0, 2451) -0.210683496368
(0, 3533) -0.173621065386
(0, 8034) -0.135543062425
(0, 10346) -0.173621065386
(0, 15231) -0.154148294738
(0, 18261) -0.158890483047
(0, 21083) -0.297476572586
(0, 434) -0.0596263855375
(0, 446) -0.0753492277856
(0, 769) -0.0753492277856
(0, 1118) -0.0753492277856
(0, 1439) -0.0753492277856
(0, 1605) -0.0753492277856
(0, 1755) -0.0637950312345
(0, 3504) -0.0753492277856
(0, 3511) -0.115802483001
(0, 4382) -0.0668983049212
(0, 5247) -0.315713152154
(0, 5396) -0.0753492277856
(0, 5753) -0.0716096348446
(0, 6507) -0.130661516772
(0, 7978) -0.0753492277856
(0, 8296) -0.144739048504
(0, 8740) -0.0753492277856
(0, 8906) -0.0753492277856
: :
(0, 23282) 0.418623443832
(0, 4100) 0.385906085143
(0, 15735) 0.207958503155
(0, 16620) 0.385906085143
(0, 19974) 0.0936828782325
(0, 20304) 0.385906085143
(0, 21721) 0.385906085143
(0, 22308) 0.301270427482
(0, 14903) 0.314164150621
(0, 16904) 0.0653764031957
(0, 20805) 0.0597723455204
(0, 21878) 0.403750815828
(0, 22582) 0.0226150073272
(0, 6532) 0.525138162099
(0, 6670) 0.525138162099
(0, 10341) 0.525138162099
(0, 13627) 0.278332617058
(0, 1600) 0.326774799211
(0, 2074) 0.310556919237
(0, 5262) 0.176400451433
(0, 6373) 0.290124806858
(0, 8593) 0.290124806858
(0, 12002) 0.282832270298
(0, 15008) 0.290124806858
(0, 19207) 0.326774799211
It is not returning the labels or the words. Why is this happening, and how can I print the words and the labels? Do you think this is happening because I am using pandas to read the data? Another thing I tried is the following, from this question:
def print_top10(vectorizer, clf, class_labels):
"""Prints features with the highest coefficient values, per class"""
feature_names = vectorizer.get_feature_names()
for i, class_label in enumerate(class_labels):
top10 = np.argsort(clf.coef_[i])[-10:]
print("%s: %s" % (class_label,
" ".join(feature_names[j] for j in top10)))
print_top10(tfidf_vect,clf,y)
But I get this traceback:
Traceback (most recent call last):
File "/Users/user/PycharmProjects/TESIS_FINAL/Classification/Supervised_learning/Final/experimentos/RBF/SVM_con_rbf.py", line 237, in <module>
print_top10(tfidf_vect,clf,5)
File "/Users/user/PycharmProjects/TESIS_FINAL/Classification/Supervised_learning/Final/experimentos/RBF/SVM_con_rbf.py", line 231, in print_top10
for i, class_label in enumerate(class_labels):
TypeError: 'int' object is not iterable
Any idea how to solve this, in order to get the features with the highest coefficient values?
|
To solve this specifically for linear SVM, we first have to understand the formulation of the SVM in sklearn and the differences that it has to MultinomialNB.
The reason most_informative_feature_for_class works for MultinomialNB is that its coef_ is essentially the log probability of features given a class (and hence has shape [n_classes, n_features]), due to the formulation of the naive Bayes problem. But if we check the documentation for SVM, coef_ is not that simple. Instead, coef_ for a (linear) SVM has shape [n_classes * (n_classes - 1) / 2, n_features], because a separate binary model is fitted for each pair of classes.
If we do possess some knowledge on which particular coefficient we're interested in, we could alter the function to look like the following:
def most_informative_feature_for_class_svm(vectorizer, classifier, classlabel, n=10):
labelid = ?? # this is the coef we're interested in.
feature_names = vectorizer.get_feature_names()
svm_coef = classifier.coef_.toarray()
topn = sorted(zip(svm_coef[labelid], feature_names))[-n:]
for coef, feat in topn:
print feat, coef
This would work as intended and print out the labels and the top n features according to the coefficient vector that you're after.
As for getting the correct output for a particular class, that would depend on the assumptions and what you aim to output. I suggest reading through the multi-class documentation within the SVM documentation to get a feel for what you're after.
So using the train.txt file which was described in this question, we can get some kind of output, though in this situation it isn't particularly descriptive or helpful to interpret. Hopefully this helps you.
import codecs, re, time
from itertools import chain
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
trainfile = 'train.txt'
# Vectorizing data.
train = []
word_vectorizer = CountVectorizer(analyzer='word')
trainset = word_vectorizer.fit_transform(codecs.open(trainfile,'r','utf8'))
tags = ['bs','pt','es','sr']
# Training NB
mnb = MultinomialNB()
mnb.fit(trainset, tags)
from sklearn.svm import SVC
svcc = SVC(kernel='linear', C=1)
svcc.fit(trainset, tags)
def most_informative_feature_for_class(vectorizer, classifier, classlabel, n=10):
labelid = list(classifier.classes_).index(classlabel)
feature_names = vectorizer.get_feature_names()
topn = sorted(zip(classifier.coef_[labelid], feature_names))[-n:]
for coef, feat in topn:
print classlabel, feat, coef
def most_informative_feature_for_class_svm(vectorizer, classifier, n=10):
labelid = 3 # this is the coef we're interested in.
feature_names = vectorizer.get_feature_names()
svm_coef = classifier.coef_.toarray()
topn = sorted(zip(svm_coef[labelid], feature_names))[-n:]
for coef, feat in topn:
print feat, coef
most_informative_feature_for_class(word_vectorizer, mnb, 'pt')
print
most_informative_feature_for_class_svm(word_vectorizer, svcc)
with output:
pt teve -4.63472898823
pt tive -4.63472898823
pt todas -4.63472898823
pt vida -4.63472898823
pt de -4.22926388012
pt foi -4.22926388012
pt mais -4.22926388012
pt me -4.22926388012
pt as -3.94158180767
pt que -3.94158180767
no 0.0204081632653
parecer 0.0204081632653
pone 0.0204081632653
por 0.0204081632653
relación 0.0204081632653
una 0.0204081632653
visto 0.0204081632653
ya 0.0204081632653
es 0.0408163265306
lo 0.0408163265306
|
How to find the first index of any of a set of characters in a string
|
I'd like to find the index of the first occurrence of any "special" character in a string, like so:
>>> "Hello world!".index([' ', '!'])
5
…except that's not valid Python syntax. Of course, I can write a function that emulates this behavior:
def first_index(s, characters):
i = []
for c in characters:
try:
i.append(s.index(c))
except ValueError:
pass
if not i:
raise ValueError
return min(i)
I could also use regular expressions, but both solutions seem to be a bit overkill. Is there any âsaneâ way to do this in Python?
|
You can use enumerate and next with a generator expression, getting the first match or returning None if no character appears in s:
s = "Hello world!"
st = {"!"," "}
ind = next((i for i, ch in enumerate(s) if ch in st),None)
print(ind)
You can pass any value you want to next as a default return value if there is no match.
If you want to use a function and raise a ValueError:
def first_index(s, characters):
st = set(characters)
ind = next((i for i, ch in enumerate(s) if ch in st), None)
if ind is not None:
return ind
raise ValueError
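Using the question's own example, this should give:
>>> first_index("Hello world!", [' ', '!'])
5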
For smaller inputs using a set won't make much if any difference, but for large strings it will be more efficient.
Some timings:
In the string, last character of character set:
In [40]: s = "Hello world!" * 100
In [41]: string = s
In [42]: %%timeit
st = {"x","y","!"}
next((i for i, ch in enumerate(s) if ch in st), None)
....:
1000000 loops, best of 3: 1.71 µs per loop
In [43]: %%timeit
specials = ['x', 'y', '!']
min(map(lambda x: (string.index(x) if (x in string) else len(string)), specials))
....:
100000 loops, best of 3: 2.64 µs per loop
Not in the string, larger character set:
In [44]: %%timeit
st = {"u","v","w","x","y","z"}
next((i for i, ch in enumerate(s) if ch in st), None)
....:
1000000 loops, best of 3: 1.49 µs per loop
In [45]: %%timeit
specials = ["u","v","w","x","y","z"]
min(map(lambda x: (string.index(x) if (x in string) else len(string)), specials))
....:
100000 loops, best of 3: 5.48 µs per loop
In the string, very first character of the character set:
In [47]: %%timeit
specials = ['H', 'y', '!']
min(map(lambda x: (string.index(x) if (x in string) else len(string)), specials))
....:
100000 loops, best of 3: 2.02 µs per loop
In [48]: %%timeit
st = {"H","y","!"}
next((i for i, ch in enumerate(s) if ch in st), None)
....:
1000000 loops, best of 3: 903 ns per loop
|
Interleave list with fixed element
|
I know that I can interleave two python lists with:
[elem for pair in zip(*lists) for elem in pair]
Now I need to interleave a list with a fixed element like:
list = [1, 2, 3, 4]
# python magic
output = [1, 0, 2, 0, 3, 0, 4]
|
You can try the following itertools magic:
>>> from itertools import repeat, chain, izip
>>> l = [1, 2, 3, 4]
>>> list(chain.from_iterable(izip(l[:-1], repeat(0)))) + l[-1:]
[1, 0, 2, 0, 3, 0, 4]
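On Python 3, where izip no longer exists, the same trick should work with the builtin zip (a sketch):
>>> from itertools import repeat, chain
>>> l = [1, 2, 3, 4]
>>> list(chain.from_iterable(zip(l[:-1], repeat(0)))) + l[-1:]
[1, 0, 2, 0, 3, 0, 4]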
|
What are the differences between Conda and Anaconda
|
I first installed Anaconda on my Ubuntu machine at ~/anaconda. When I was trying to update my Anaconda install, according to the documentation from Continuum Analytics I should use the following commands:
conda update conda
conda update anaconda
Then I realized that I did not have conda installed, so I installed it using the documentation from here.
After conda is installed, when I run conda update anaconda, I got the following error:
Error: package 'anaconda' is not installed in /home/xiang/miniconda
It appears conda is assuming my anaconda is installed under /home/xiang/miniconda which is NOT true.
My questions are:
What is the differences between conda and anaconda?
How can I tell conda where my anaconda is installed?
|
conda is the package manager. Anaconda is a set of about a hundred packages including conda, numpy, scipy, ipython notebook, and so on.
You installed Miniconda, which is a smaller alternative to Anaconda that is just conda and its dependencies (as opposed to Anaconda, which is conda and a bunch of other packages like numpy, scipy, ipython notebook, etc.). Once you have Miniconda, you can easily install Anaconda into it with conda install anaconda.
|
Python self and super in multiple inheritance
|
In Raymond Hettinger's talk "Super considered super!" at PyCon 2015 he explains the advantages of using super in Python in a multiple inheritance context. This is one of the examples that Raymond used during his talk:
class DoughFactory(object):
def get_dough(self):
return 'insecticide treated wheat dough'
class Pizza(DoughFactory):
def order_pizza(self, *toppings):
print('Getting dough')
dough = super().get_dough()
print('Making pie with %s' % dough)
for topping in toppings:
print('Adding: %s' % topping)
class OrganicDoughFactory(DoughFactory):
def get_dough(self):
return 'pure untreated wheat dough'
class OrganicPizza(Pizza, OrganicDoughFactory):
pass
if __name__ == '__main__':
OrganicPizza().order_pizza('Sausage', 'Mushroom')
Somebody in the audience asked Raymond about the difference between using self.get_dough() instead of super().get_dough(). I didn't fully understand Raymond's brief answer, so I coded both implementations of this example to see the differences. The output is the same for both cases:
Getting dough
Making pie with pure untreated wheat dough
Adding: Sausage
Adding: Mushroom
If you alter the class order from OrganicPizza(Pizza, OrganicDoughFactory) to OrganicPizza(OrganicDoughFactory, Pizza) using self.get_dough(), you will get this result:
Making pie with pure untreated wheat dough
However if you use super().get_dough() this is the output:
Making pie with insecticide treated wheat dough
I understand the super() behavior as Raymond explained it. But what is the expected behavior of self in a multiple inheritance scenario?
|
Just to clarify, there are four cases, based on changing the second line in Pizza.order_pizza and the definition of OrganicPizza:
super(), (Pizza, OrganicDoughFactory) (original): 'Making pie with pure untreated wheat dough'
self, (Pizza, OrganicDoughFactory): 'Making pie with pure untreated wheat dough'
super(), (OrganicDoughFactory, Pizza): 'Making pie with insecticide treated wheat dough'
self, (OrganicDoughFactory, Pizza): 'Making pie with pure untreated wheat dough'
Case 3 is the one that surprised you; if we switch the order of inheritance but still use super, we apparently end up calling the original DoughFactory.get_dough.
What super really does is ask "which is next in the MRO (method resolution order)?" So what does OrganicPizza.mro() look like?
(Pizza, OrganicDoughFactory): [<class '__main__.OrganicPizza'>, <class '__main__.Pizza'>, <class '__main__.OrganicDoughFactory'>, <class '__main__.DoughFactory'>, <class 'object'>]
(OrganicDoughFactory, Pizza): [<class '__main__.OrganicPizza'>, <class '__main__.OrganicDoughFactory'>, <class '__main__.Pizza'>, <class '__main__.DoughFactory'>, <class 'object'>]
The crucial question here is: which comes after Pizza? As we're calling super from inside Pizza, that is where Python will go to find get_dough*. For 1. and 2. it's OrganicDoughFactory, so we get the pure, untreated dough, but for 3. and 4. it's the original, insecticide-treated DoughFactory.
Why is self different, then? self is always the instance, so Python goes looking for get_dough from the start of the MRO. In both cases, as shown above, OrganicDoughFactory is earlier in the list than DoughFactory, which is why the self versions always get untreated dough; self.get_dough always resolves to OrganicDoughFactory.get_dough(self).
* I think that this is actually clearer in the two-argument form of super used in Python 2.x, which would be super(Pizza, self).get_dough(); the first argument is the class to skip (i.e. Python looks in the rest of the MRO after that class).
|
Is there a way to compare Arabic characters without regard to their initial/medial/final form?
|
In Latin script, letters have an upper case and a lower case form. In Python, if you want to compare two strings without regard to their case, you can convert them to the same case using 'string'.upper() or 'string'.lower()
In Arabic script, letters can have an initial, medial, or final form. Is there a similar way to compare strings of Arabic characters without caring which form the letters are in?
|
There are two parts to this, which should work for all languages:*
Your strings must be into NFKD normalization to guarantee that two equal strings have equal code units.
To ignore case in comparing two NFKD strings, use the Unicode case-folding algorithm.
Between the two, this handles English upper and lower case, Arabic initial/medial/final (plus isolated), German ß vs. ss, é as a single code point vs. e\N{COMBINING ACUTE ACCENT}, Chinese rotated characters, Japanese half-width kana, and probably all kinds of other things you haven't thought of.
In Python, that looks like this:
>>> import unicodedata
>>> s1 = 'ﻧ'
>>> s2 = 'ﻨ'
>>> unicodedata.normalize('NFKD', s1).casefold() == unicodedata.normalize('NFKD', s2).casefold()
True
Note that casefold wasn't added until Python 3.3. If you're using an earlier version of Python, there are implementations on PyPI; using them should be similar to using the 3.3+ builtin.
If you're interested in exactly how this works for Arabic, rather than just the fact that it works for Arabic along with every other language, you can read the algorithms and tables at unicode.org. IIRC, the W3C document that recommends doing this explains why it works using Arabic as an example. I believe it's because Unicode treats initial, medial, final, and isolated as compatibility-equivalent presentation forms of the same character, so normalizing to decomposed gives you effectively the isolated form plus a modifier that casefolding can skip or transform, even though casefolding directly on a combined character just returns the character itself.
* There are a few cases where two different languages or cultures use the same script, but have different case-folding rules; in that case, you need locale-specific casefolding, which Python doesn't include. But that shouldn't be relevant here.
|
Truth value of numpy array with one falsey element seems to depend on dtype
|
import numpy as np
a = np.array([0])
b = np.array([None])
c = np.array([''])
d = np.array([' '])
Why should we have this inconsistency:
>>> bool(a)
False
>>> bool(b)
False
>>> bool(c)
True
>>> bool(d)
False
|
I'm pretty sure the answer is, as explained in Scalars, that:
Array scalars have the same attributes and methods as ndarrays. [1] This allows one to treat items of an array partly on the same footing as arrays, smoothing out rough edges that result when mixing scalar and array operations.
So, if it's acceptable to call bool on a scalar, it must be acceptable to call bool on an array of shape (1,), because they are, as far as possible, the same thing.
And, while it isn't directly said anywhere in the docs that I know of, it's pretty obvious from the design that NumPy's scalars are supposed to act like native Python objects.
So, that explains why np.array([0]) is falsey rather than truthy, which is what you were initially surprised about.
So, that explains the basics. But what about the specifics of case c?
First, note that your array np.array(['']) is not an array of one Python object, but an array of one NumPy <U1 null-terminated character string of length 1. Fixed-length-string values don't have the same truthiness rule as Python strings - and they really couldn't; for a fixed-length-string type, "false if empty" doesn't make any sense, because they're never empty. You could argue about whether NumPy should have been designed that way or not, but it clearly does follow that rule consistently, and I don't think the opposite rule would be any less confusing here, just different.
But there seems to be something else weird going on with strings. Consider this:
>>> np.array(['a', 'b']) != 0
True
That's not doing an elementwise comparison of the <U2 strings to 0 and returning array([True, True]) (as you'd get from np.array(['a', 'b'], dtype=object)), it's doing an array-wide comparison and deciding that no array of strings is equal to 0, which seems odd… I'm not sure whether this deserves a separate answer here or even a whole separate question, but I am pretty sure I'm not going to be the one who writes that answer, because I have no clue what's going on here. :)
Beyond arrays of shape (1,), arrays of shape () are treated the same way, but anything else is a ValueError, because otherwise it would be very easy to misuse arrays with and and other Python operators that NumPy can't automagically convert into elementwise operations.
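A quick illustration of that shape rule (the exact wording of the error message may vary between NumPy versions):
>>> bool(np.array(0))      # shape () behaves like the scalar
False
>>> bool(np.array([0]))    # shape (1,) likewise
False
>>> bool(np.array([0, 1])) # anything bigger is ambiguous
Traceback (most recent call last):
  ...
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()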
I personally think being consistent with other arrays would be more useful than being consistent with scalars here; in other words, just raise a ValueError. I also think that, if being consistent with scalars were important here, it would be better to be consistent with the unboxed Python values. In other words, if bool(array([v])) and bool(array(v)) are going to be allowed at all, they should always return exactly the same thing as bool(v), even if that's not consistent with np.nonzero. But I can see the argument the other way.
|