Is there a HAML implementation for use with Python and Django
|
I happened to stumble across HAML, an interesting and beautiful way to mark up contents and write templates for HTML.
Since I use Python and Django for my web development needs, I would like to know if there is a Python implementation of HAML (or a similar concept -- it need not be exactly identical) that can be used to replace the Django template engine.
|
You might be interested in SHPAML:
http://shpaml.com/
I am actively maintaining it. It is a simple preprocessor, so it is not tied to any other tools like Genshi. I happen to use it with Django, so there is a little bit of Django support, but it should not interfere with most other use cases.
|
Cross-platform gui toolkit for deploying Python applications
|
Building on:
http://www.reddit.com/r/Python/comments/7v5ra/whats_your_favorite_gui_toolkit_and_why/
Merits:
1 - ease of design / integration - learning curve
2 - support / availability for *nix, Windows, Mac, extra points for native l&f, support for mobile or web
3 - pythonic API
4 - quality of documentation - I want to do something a bit more complicated, now what?
5 - light weight packaging so it's not necessary to include a full installer (py2exe, py2app would ideally work as-is and not generate a gazillion MBs file)
6 - licensing
7 - others? (specify)
Contenders:
1 - tkinter, as currently supported (as of 2.6, 3.0)
2 - pyttk library
3 - pyGTK
4 - pyQt
5 - wxPython
6 - HTML-CGI via Python-based framework (Django, Turbogears, web.py, Pylons...) or Paste
7 - others? (specify)
|
Please don't hesitate to expand this answer.
Tkinter
Tkinter is the toolkit that comes with python. That means you already have everything you need to write a GUI. What that also means is that if you choose to distribute your program, most likely everyone else already has what they need to run your program.
Tkinter is mature and stable, and is (at least arguably) quite easy to use. I found it easier to use than wxPython, but obviously that's somewhat subjective.
Tkinter gets a bad rap for looking ugly and out of date. While it's true that it's easy to create ugly GUIs with Tkinter, it's also pretty easy to create nice looking GUIs. Tkinter doesn't hold your hand, but it doesn't much get in the way, either. Tkinter looks best on the Mac and Windows since it uses native widgets there, but it looks OK on linux, too.
The other point about the look of Tkinter is that, for the most part, look isn't as important as people make it out to be. Most applications written with toolkits such as Tkinter, wxPython, PyQt, etc. are special-purpose applications. For the types of applications these toolkits are used for, usability trumps looks. If the look of the application is important, it's easy enough to polish up a Tkinter application.
Tkinter has some features that other toolkits don't come close to matching. Variable traces, named fonts, geometry (layout) managers, and the way Tkinter processes events are still the standard to which other toolkits should be judged.
On the downside, Tkinter is a wrapper around a Tcl interpreter that runs inside python. This is mostly invisible to anyone developing with Tkinter, but it sometimes results in error messages that expose this architecture. You'll get an error complaining about a widget with a name like ".1245485.67345" which will make almost no sense to anyone unless you're also familiar with how Tcl/tk works.
Another downside is that Tkinter doesn't have as many pre-built widgets as wxPython. The hierarchical tree widget in Tkinter is a little weak, for example, and there's no built-in table widget. On the other hand, Tkinter's canvas and text widgets are extremely powerful and easy to use. For most types of applications you will write, however, you'll have everything you need. Just don't expect to replicate Microsoft Word or Photoshop with Tkinter.
I don't know what the license for Tkinter is; I assume the same as for Python as a whole. Tcl/tk has a BSD-style license.
PyQt
It's built on top of Qt, a C++ framework. It's quite advanced and has some good tools, such as Qt Designer, for designing your applications. You should be aware, though, that it doesn't feel 100% like Python, but close to it. The documentation is excellent.
This framework is really good. It's being actively developed by Trolltech, which is owned by Nokia. The bindings for Python are developed by Riverbank.
PyQt is available under the GPL license or a commercial one. The price of a Riverbank PyQt license is about 400 euros per developer.
Qt is not only a GUI framework but has a lot of other classes too; one can create an application by just using Qt classes (for SQL, networking, scripting, …).
Qt used to emulate GUI elements on every platform but now uses the native styles of the platforms (although not native GUI toolkits): see the documentation for Mac OS X and the Windows XP style.
Packaging is as simple as running py2exe or pyInstaller. The content of my PyQt app looks like this on windows (I have used InnoSetup on top of it for proper installation):
pyticroque.exe PyQt4.QtGui.pyd unicodedata.pyd
MSVCP71.dll PyQt4._qt.pyd unins000.dat
MSVCR71.dll python25.dll unins000.exe
PyQt4.QtCore.pyd sip.pyd _socket.pyd
Qt comes with a widget designer, and recent versions even include an IDE to help design Qt software.
PySide
PySide is an LGPL binding to Qt. It's developed by Nokia as a replacement for the GPL-only PyQt.
Although based on a different technology than the existing GPL-licensed PyQt bindings, PySide will initially aim to be API-compatible with them. In addition to the PyQt-compatible API, a more Pythonic API will be provided in the future.
wxPython
wxPython is a binding for Python using the wxWidgets-Framework. This framework is under the LGPL licence and is developed by the open source community.
What I'm really missing is a good tool to design the interface; there are about three of them, but none is usable.
One thing I should mention is that I found a bug in the tab-view despite the fact that I didn't use anything advanced. (Only on Mac OS X) I think wxWidgets isn't as polished as Qt.
wxPython is really only about the GUI-classes, there isn't much else.
wxWidgets uses native GUI elements.
An advantage wxPython has over Tkinter is that wxPython has a much larger library of widgets from which to choose.
Others
I haven't got any experience with other GUI frameworks, maybe someone else has.
|
What's the cleanest way to extract URLs from a string using Python?
|
Although I know I could use some hugeass regex such as the one posted here I'm wondering if there is some tweaky as hell way to do this either with a standard module or perhaps some third-party add-on?
Simple question, but nothing jumped out on Google (or Stackoverflow).
Look forward to seeing how y'all do this!
|
Look at Django's approach here: django.utils.html.urlize(). Regexes are too limited for the job and you have to use heuristics to get results that are mostly right.
|
Python loops with multiple lists?
|
<edit>
Thanks to everyone who has answered so far. The zip and os.path.join are really helpful. Any suggestions on ways to list the counter in front, without doing something like this:
zip(range(len(files)), files, directories)
</edit>
Hi,
I'm in the process of learning Python, but I come from a background where the following pseudocode is typical:
directories = ['directory_0', 'directory_1', 'directory_2']
files = ['file_a', 'file_b', 'file_c']
for(i = 0; i < directories.length; i++) {
print (i + 1) + '. ' + directories[i] + '/' + files[i] + '\n'
}
# Output:
# 1. directory_0/file_a
# 2. directory_1/file_b
# 3. directory_2/file_c
In Python, the way I would write the above right now, would be like this:
directories = ['directory_0', 'directory_1', 'directory_2']
files = ['file_a', 'file_b', 'file_c']
for i in range(len(directories)):
    print '%s. %s/%s' % ((i + 1), directories[i], files[i])
# Output:
# 1. directory_0/file_a
# 2. directory_1/file_b
# 3. directory_2/file_c
While reading Dive into Python, Mark Pilgrim says that using for loops for counters is "Visual Basic-style thinking" (Simple Counters). He goes on to show how to use loops with dictionaries, but never really addresses a Python solution for how for-loop counters are typically used in other languages.
I was hoping somebody could show me how to properly write the above scenario in Python. Is it possible to do it a different way?
If I took out the incrementing line count, is it possible to just match the two lists together using some kind of list comprehension?
For example, if all I wanted from the output was this (no counters, is that possible with list comprehension):
# Output:
# directory_0/file_a
# directory_1/file_b
# directory_2/file_c
Thanks in advance for any help.
|
import os.path

for dir, file in zip(directories, files):
    print(os.path.join(dir, file))

You can have it as a list comprehension as well, creating a list of strings and printing afterwards (see the sketch below).

With a counter:

for i, (dir, file) in enumerate(zip(directories, files)):
    print(i + 1, os.path.join(dir, file))
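A fuller sketch of the list-comprehension variant, using the sample lists from the question:

directories = ['directory_0', 'directory_1', 'directory_2']
files = ['file_a', 'file_b', 'file_c']

# Build all the joined paths in one expression, then print them.
paths = [os.path.join(d, f) for d, f in zip(directories, files)]
print('\n'.join(paths))
# directory_0/file_a
# directory_1/file_b
# directory_2/file_c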
|
Why True/False is capitalized in Python?
|
All members are camel case, right? Why True/False but not true/false, which is more relaxed?
|
From PEP 285:

Should the constants be called 'True' and 'False' (similar to None) or 'true' and 'false' (as in C++, Java and C99)?
=> True and False.
Most reviewers agree that consistency within Python is more important than consistency with other languages.
This, as Andrew points out, is probably because all (most)? built-in constants are capitalized.
|
How do i get python's pprint to return a string instead of printing?
|
In other words, what's the sprintf equivalent to pprint?
|
The pprint module has a function named pformat, for just that purpose.
From the documentation:
Return the formatted representation of object as a string. indent, width and depth will be passed to the PrettyPrinter constructor as formatting parameters.
Example:
>>> import pprint
>>> people = [
...     {"first": "Brian", "last": "Kernighan"},
...     {"first": "Dennis", "last": "Ritchie"},
... ]
>>> pprint.pformat(people, indent=4)
"[ { 'first': 'Brian', 'last': 'Kernighan'},\n { 'first': 'Dennis', 'last': 'Ritchie'}]"
|
Is there a way to decode numerical COM error-codes in pywin32
|
Here is part of a stack-trace from a recent run of an unreliable application written in Python which controls another application written in Excel:
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2146788248), None)
Obviously something has gone wrong ... but what?[1] These COM error codes seem to be excessively cryptic.
How can I decode this error message? Is there a table somewhere that allows me to convert this numerical error code into something more meaningful?
[1] I actually know what went wrong in this case; it was attempting to access a Name property on a Range object which did not have a Name property... not all bugs are this easy to find!
|
You are not doing anything wrong. The first item in your stack trace (the number) is the error code returned by the COM object. The second item is the description associated with the error code which in this case is "Exception Occurred". pywintypes.com_error already called the equivalent of win32api.FormatMessage(errCode) for you. We'll look at the second number in a minute.
By the way, you can use the "Error Lookup" utility that comes in Visual Studio (C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\ErrLook.exe) as a quick launching pad to check COM error codes. That utility also calls FormatMessage for you and displays the result. Not all error codes will work with this mechanism, but many will. That's usually my first stop.
Error handling and reporting in COM is a bit messy. I'll try to give you some background.
All COM method calls will return a numeric code called an HRESULT that can indicate success or failure. All forms of error reporting in COM build on top of that.
The codes are commonly expressed in hex, although sometimes you will see them as large 32-bit numbers, like in your stack trace. There are all kinds of predefined return codes for common results and problems, or the object can return custom numeric codes for special situations. For example, the value 0 (called S_OK) universally means "No error" and 0x80000002 is E_OUTOFMEMORY. Sometimes the HRESULT codes are returned by the object, sometimes by the COM infrastructure.
A COM object can also choose to provide much richer error information by implementing an interface called IErrorInfo. When an object implements IErrorInfo, it can provide all kinds of detail about what happened, such as a detailed custom error message and even the name of a help file that describes the problem. In VB6 and VBA. the Err object allows you to access all that information (Err.Description, etc).
To complicate matters, late bound COM objects (which use a mechanism called COM Automation or IDispatch) add some layers that need to be peeled off to get information out. Excel is usually manipulated via late binding.
Now let's look at your situation again. What you are getting as the first number is a fairly generic error code: DISP_E_EXCEPTION. Note: you can usually figure out the official name of an HRESULT by googling the number, although sometimes you will have to use the hex version to find anything useful.
Errors that begin with DISP_ are IDISPATCH error codes. The error loosely means "There was a COM exception thrown by the object", with more information packed elsewhere (although I don't quite know where; I'll have to look it up).
From what I understand of pywintypes.com_error, the last number in your message is the actual error code that was returned by the object during the exception. It's the actual numeric code that you would get out of VBA's Err.Number.
Unfortunately, that second code -2146788248 (0x800A9C68) is in the range reserved for custom application-defined error messages (in VBA: VbObjectError + someCustomErrorNumber), so there is no centralized meaning. The same number can mean entirely different things for different programs.
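As a quick sketch (mine, not part of pywin32): Python shows HRESULTs as signed integers, so to get the hex form used in COM documentation you can mask to 32 bits:

def hresult_to_hex(code):
    # Reinterpret a signed 32-bit HRESULT as unsigned and format as hex.
    return '0x%08X' % (code & 0xFFFFFFFF)

print(hresult_to_hex(-2147352567))  # 0x80020009, i.e. DISP_E_EXCEPTION
print(hresult_to_hex(-2146788248))  # 0x800A9C68, the custom Excel code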
In this case, we have reached a dead end:
The error code is "custom", and the application needs to document what it is, except that Excel doesn't. Also, Excel (or the actual source of the error) doesn't seem to be providing any more information via IErrorInfo.
Excel is notorious (to me at least) for cryptic error codes from automation and obscure situations that cause them. This is especially so for errors that one could consider "design-time errors" ("you should have known better than calling a method that doesn't exist in the object"). Instead of a nice "Could not read the Name property", you get "Run-time error '1004': Application defined or object-defined error" (which I just got by trying to access a Name property on a Range, from VBA in Excel). That is NOT very helpful.
The problem is not rooted in Python or its interface to Excel. Excel itself doesn't explain what happened, even to VBA.
However, the general procedure above remains valid. If you get an error from Excel in the future, you might get a better error message that you can track the same way.
Good luck!
|
Finding first and last index of some value in a list in Python
|
Is there any built-in methods that are part of lists that would give me the first and last index of some value, like:
verts.IndexOf(12.345)
verts.LastIndexOf(12.345)
|
Sequences have a method index(value) which returns index of first occurrence.
You can run it on verts[::-1] to find out the last index.
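For example, a small sketch with a hypothetical verts list:

>>> verts = [7.5, 12.345, 3.0, 12.345, 9.9]
>>> verts.index(12.345)                         # first occurrence
1
>>> len(verts) - 1 - verts[::-1].index(12.345)  # last occurrence
3

Note the index arithmetic: index() on the reversed list counts from the end, so you have to map it back to an index in the original list.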
|
Accessing the index in Python 'for' loops
|
How do I access the index itself for a list like the following?
ints = [8, 23, 45, 12, 78]
When I loop through it using a for loop, how do I access the loop index, from 1 to 5 in this case?
|
Using an additional state variable, such as an index variable (which you would normally use in languages such as C or PHP), is considered non-pythonic.
The better option is to use the built-in function enumerate(), available in both Python 2 and 3:
for idx, val in enumerate(ints):
    print(idx, val)
Check out PEP 279 for more.
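Since you asked for indexes running from 1 to 5 rather than 0 to 4: on Python 2.6 and later, enumerate() also accepts a start argument, e.g.:

ints = [8, 23, 45, 12, 78]
for idx, val in enumerate(ints, start=1):
    print(idx, val)  # 1 8, then 2 23, and so on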
|
How can I strip Python logging calls without commenting them out?
|
Today I was thinking about a Python project I wrote about a year back where I used logging pretty extensively. I remember having to comment out a lot of logging calls in inner-loop-like scenarios (the 90% code) because of the overhead (hotshot indicated it was one of my biggest bottlenecks).
I wonder now if there's some canonical way to programmatically strip out logging calls in Python applications without commenting and uncommenting all the time. I'd think you could use inspection/recompilation or bytecode manipulation to do something like this and target only the code objects that are causing bottlenecks. This way, you could add a manipulator as a post-compilation step and use a centralized configuration file, like so:
[Leave ERROR and above]
my_module.SomeClass.method_with_lots_of_warn_calls
[Leave WARN and above]
my_module.SomeOtherClass.method_with_lots_of_info_calls
[Leave INFO and above]
my_module.SomeWeirdClass.method_with_lots_of_debug_calls
Of course, you'd want to use it sparingly and probably with per-function granularity -- only for code objects that have shown logging to be a bottleneck. Anybody know of anything like this?
Note: There are a few things that make this more difficult to do in a performant manner because of dynamic typing and late binding. For example, any calls to a method named debug may have to be wrapped with an if not isinstance(log, Logger). In any case, I'm assuming all of the minor details can be overcome, either by a gentleman's agreement or some run-time checking. :-)
|
What about using logging.disable?
I've also found I had to use logging.isEnabledFor if the logging message is expensive to create.
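A small sketch combining both suggestions (expensive_repr is a hypothetical costly function standing in for your real message formatting):

import logging

logging.disable(logging.DEBUG)  # globally suppress DEBUG and below

log = logging.getLogger(__name__)
if log.isEnabledFor(logging.DEBUG):
    log.debug("state: %s", expensive_repr())  # only built when DEBUG is enabled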
|
Python + Django page redirect
|
How do I accomplish a simple redirect in Django (e.g. cflocation in ColdFusion, or header('Location: http://...') in PHP)?
|
It's simple:
from django.http import HttpResponseRedirect

def myview(request):
    ...
    return HttpResponseRedirect("/path/")
More info in the official Django docs
Update: Django 1.0
There is apparently a better way of doing this in Django now using generic views.
Example -
from django.views.generic.simple import redirect_to

urlpatterns = patterns('',
    (r'^one/$', redirect_to, {'url': '/another/'}),
    # etc...
)
There is more in the generic views documentation.
Credit - Carles Barrobés.
Update #2: Django 1.3+
In Django 1.5, *redirect_to* no longer exists; it has been replaced by RedirectView, which is available since Django 1.3. Credit to Yonatan.

from django.views.generic import RedirectView

urlpatterns = patterns('',
    (r'^one/$', RedirectView.as_view(url='/another/')),
)
|
Regular expression to detect semi-colon terminated C++ for & while loops
|
In my Python application, I need to write a regular expression that matches a C++ for or while loop that has been terminated with a semi-colon (;). For example, it should match this:
for (int i = 0; i < 10; i++);
... but not this:
for (int i = 0; i < 10; i++)
This looks trivial at first glance, until you realise that the text between the opening and closing parentheses may contain other parentheses, for example:
for (int i = funcA(); i < funcB(); i++);
I'm using the python.re module. Right now my regular expression looks like this (I've left my comments in so you can understand it easier):
# match any line that begins with a "for" or "while" statement:
^\s*(for|while)\s*
\(              # match the initial opening parenthesis
# Now make a named group 'balanced' which matches a balanced substring.
(?P<balanced>
    # A balanced substring is either something that is not a parenthesis:
    [^()]
    |           # ...or a parenthesised string:
    \(          # A parenthesised string begins with an opening parenthesis
    (?P=balanced)*  # ...followed by a sequence of balanced substrings
    \)          # ...and ends with a closing parenthesis
)*              # Look for a sequence of balanced substrings
\)              # Finally, the outer closing parenthesis.
# must end with a semi-colon to match:
\s*;\s*
This works perfectly for all the above cases, but it breaks as soon as you try and make the third part of the for loop contain a function, like so:
for (int i = 0; i < 10; doSomethingTo(i));
I think it breaks because as soon as you put some text between the opening and closing parentheses, the "balanced" group matches that contained text, and thus the (?P=balanced) part doesn't work any more, since it won't match (due to the fact that the text inside the parentheses is different).
In my Python code I'm using the VERBOSE and MULTILINE flags, and creating the regular expression like so:
REGEX_STR = r"""# match any line that begins with a "for" or "while" statement:
^\s*(for|while)\s*
\(              # match the initial opening parenthesis
# Now make a named group 'balanced' which matches
# a balanced substring.
(?P<balanced>
    # A balanced substring is either something that is not a parenthesis:
    [^()]
    |           # ...or a parenthesised string:
    \(          # A parenthesised string begins with an opening parenthesis
    (?P=balanced)*  # ...followed by a sequence of balanced substrings
    \)          # ...and ends with a closing parenthesis
)*              # Look for a sequence of balanced substrings
\)              # Finally, the outer closing parenthesis.
# must end with a semi-colon to match:
\s*;\s*"""
REGEX_OBJ = re.compile(REGEX_STR, re.MULTILINE | re.VERBOSE)
Can anyone suggest an improvement to this regular expression? It's getting too complicated for me to get my head around.
|
You could write a little, very simple routine that does it, without using a regular expression (a sketch follows below):
Set a position counter pos so that it points just before the opening bracket after your for or while.
Set an open-brackets counter openBr to 0.
Now keep incrementing pos, reading the characters at the respective positions, and increment openBr when you see an opening bracket and decrement it when you see a closing bracket. That will increment it once at the beginning, for the first opening bracket in "for (", increment and decrement it some more for brackets in between, and set it back to 0 when your for bracket closes.
So, stop when openBr is 0 again.
The stopping position is your closing bracket of for(...). Now you can check whether a semicolon follows or not.
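A sketch of that routine in Python (the names pos and openBr follow the steps above; this is an illustration, not a hardened C++ parser):

import re

def loop_ends_with_semicolon(line):
    # Find the loop keyword and its opening bracket.
    m = re.search(r'\b(for|while)\s*\(', line)
    if not m:
        return False
    openBr = 0
    for pos in range(m.end() - 1, len(line)):
        if line[pos] == '(':
            openBr += 1
        elif line[pos] == ')':
            openBr -= 1
            if openBr == 0:
                # Loop header just closed; is the next non-space char a semicolon?
                return line[pos + 1:].lstrip().startswith(';')
    return False  # brackets never balanced on this line

print(loop_ends_with_semicolon('for (int i = 0; i < 10; doSomethingTo(i));'))  # True
print(loop_ends_with_semicolon('for (int i = 0; i < 10; i++)'))                # False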
|
Does Python have class prototypes (or forward declarations)?
|
I have a series of Python classes in a file. Some classes reference others.
My code is something like this:
class A():
    pass

class B():
    c = C()

class C():
    pass
Trying to run that, I get NameError: name 'C' is not defined. Fair enough, but is there any way to make it work, or do I have to manually re-order my classes to accommodate? In C++, I can create a class prototype. Does Python have an equivalent?
(I'm actually playing with Django models, but I tried not complicate matters).
|
Actually, all of the above are great observations about Python, but none of them will solve your problem.
Django needs to introspect stuff.
The right way to do what you want is the following:
class Car(models.Model):
manufacturer = models.ForeignKey('Manufacturer')
# ...
class Manufacturer(models.Model):
# ...
Note the use of the class name as a string rather than the literal class reference. Django offers this alternative to deal with exactly this problem, since Python doesn't provide forward declarations.
This question reminds me of the classic support question that you should always ask any customer with an issue: "What are you really trying to do?"
|
NumPy, PIL adding an image
|
I'm trying to add two images together using NumPy and PIL. The way I would do this in MATLAB would be something like:
>> M1 = imread('_1.jpg');
>> M2 = imread('_2.jpg');
>> resM = M1 + M2;
>> imwrite(resM, 'res.jpg');
I get something like this:
Using a compositing program and adding the images the MATLAB result seems to be right.
In Python I'm trying to do the same thing like this:
from PIL import Image
from numpy import *
im1 = Image.open('/Users/rem7/Desktop/_1.jpg')
im2 = Image.open('/Users/rem7/Desktop/_2.jpg')
im1arr = asarray(im1)
im2arr = asarray(im2)
addition = im1arr + im2arr
resultImage = Image.fromarray(addition)
resultImage.save('/Users/rem7/Desktop/a.jpg')
and I get something like this:
Why am I getting all those funky colors? I also tried using ImageMath.eval("a+b", a=im1, b=im2), but I get an error about RGB unsupported.
I also saw that there is an Image.blend() but that requires an alpha.
What's the best way to achieve what I'm looking for?
Source images (images have been removed).
|
As everyone suggested already, the weird colors you're observing are overflow. And as you point out in the comment of schnaader's answer you still get overflow if you add your images like this:
addition=(im1arr+im2arr)/2
The reason for this overflow is that your NumPy arrays (im1arr, im2arr) are of the uint8 type (i.e. 8-bit). This means each element of the array can only hold values up to 255, so when your sum exceeds 255, it wraps around to 0:
>>> array([255, 10, 100], dtype='uint8') + array([1, 10, 160], dtype='uint8')
array([ 0, 20, 4], dtype=uint8)
To avoid overflow, your arrays should be able to contain values beyond 255. You need to convert them to floats for instance, perform the blending operation and convert the result back to uint8:
im1arrF = im1arr.astype('float')
im2arrF = im2arr.astype('float')
additionF = (im1arrF+im2arrF)/2
addition = additionF.astype('uint8')
You should not do this:
addition = im1arr/2 + im2arr/2
as you lose information by squashing the dynamic range of the image (you effectively make the images 7-bit) before you perform the blending operation.
MATLAB note: the reason you don't see this problem in MATLAB, is probably because MATLAB takes care of the overflow implicitly in one of its functions.
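If you want MATLAB-style saturation rather than averaging, a sketch using clip (available directly via the question's from numpy import *):

im1arrF = im1arr.astype('float')
im2arrF = im2arr.astype('float')
# Clamp the sum at 255 instead of letting uint8 arithmetic wrap around.
addition = clip(im1arrF + im2arrF, 0, 255).astype('uint8')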
|
Django templates: create a "back" link?
|
I'm tooling around with Django and I'm wondering if there is a simple way to create a "back" link to the previous page using the template system.
I figure that in the worst case I can get this information from the request object in the view function, and pass it along to the template rendering method, but I'm hoping I can avoid all this boilerplate code somehow.
I've checked the Django template docs and I haven't seen anything that mentions this explicitly.
|
Actually it's go(-1):

<input type="button" value="Previous Page" onclick="history.go(-1);">
|
Embedding icon in .exe with py2exe, visible in Vista?
|
I've been trying to embed an icon (.ico) into my "compyled" .exe with py2exe.
Py2Exe does have a way to embed an icon:
windows=[{
    'script': 'MyScript.py',
    'icon_resources': [(1, 'MyIcon.ico')]
}]
And that's what I am using. The icon shows up fine on Windows XP or lower, but doesn't show at all on Vista. I suppose this is because of the new Vista icon format, which can be in PNG format, up to 256x256 pixels.
So, how can I get py2exe to embed them into my executable, without breaking the icons on Windows XP?
I'm cool with doing it with an external utility rather than py2exe - I've tried this command-line utility to embed it, but it always corrupts my exe and truncates its size for some reason.
|
Vista uses high-resolution 256x256-pixel icons, stored using PNG-based compression. The problem is that if you simply make the icon and save it in the standard XP ICO format, the resulting file will be about 400Kb on disk. The solution is to compress the images. The compression scheme used is PNG (Portable Network Graphics) because it has a good lossless ratio and supports an alpha channel.
And use
png2ico myicon.ico logo16x16.png logo32x32.png logo256x256.png
It creates an ICO file from one or more PNGs and handles multiple sizes, etc. And I guess XP would have no problem with that.
|
Regular expression: match start or whitespace
|
Can a regular expression match whitespace or the start of a string?
I'm trying to replace the currency abbreviation GBP with a £ symbol. I could just match anything starting with GBP, but I'd like to be a bit more conservative and look for certain delimiters around it.
>>> import re
>>> text = u'GBP 5 Off when you spend GBP75.00'
>>> re.sub(ur'GBP([\W\d])', ur'£\g<1>', text) # matches GBP with any prefix
u'\xa3 5 Off when you spend \xa375.00'
>>> re.sub(ur'^GBP([\W\d])', ur'£\g<1>', text) # matches at start only
u'\xa3 5 Off when you spend GBP75.00'
>>> re.sub(ur'(\W)GBP([\W\d])', ur'\g<1>£\g<2>', text) # matches whitespace prefix only
u'GBP 5 Off when you spend \xa375.00'
Can I do both of the latter examples at the same time?
|
Use the OR "|" operator:
>>> re.sub(r'(^|\W)GBP([\W\d])', u'\g<1>£\g<2>', text)
u'\xa3 5 Off when you spend \xa375.00'
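An alternative sketch uses a word boundary for the prefix, which covers both the start of the string and any non-word character (including whitespace) before GBP:

>>> re.sub(ur'\bGBP([\W\d])', ur'£\g<1>', text)
u'\xa3 5 Off when you spend \xa375.00'

Since \b only matches between a non-word character (or the string edge) and the G, something like 'XGBP' is left alone.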
|
Probability distribution in Python
|
I have a bunch of keys that each have an unlikeliness variable. I want to randomly choose one of these keys, yet I want it to be more unlikely for unlikely (key, values) to be chosen than a less unlikely (a more likely) object. I am wondering if you would have any suggestions, preferably an existing python module that I could use, else I will need to make it myself.
I have checked out the random module; it does not seem to provide this.
I have to make such choices many millions of times, for 1000 different sets of objects, each containing 2,455 objects. Each set will exchange objects among each other, so the random chooser needs to be dynamic. With 1000 sets of 2,455 objects, that is roughly 2.5 million objects; low memory consumption is crucial. And since these choices are not the bulk of the algorithm, I need this process to be quite fast; CPU time is limited.
Thx
Update:
Ok, I tried to consider your suggestions wisely, but time is so limited...
I looked at the binary search tree approach and it seems too risky (complex and complicated). The other suggestions all resemble the ActiveState recipe. I took it and modified it a little in the hope of making it more efficient:
def windex(dict, sum, max):
    '''an attempt to make a random.choose() function that makes
    weighted choices; accepts a dictionary with the item_key and
    certainty_value as a pair like:
    >>> x = [('one', 20), ('two', 2), ('three', 50)], the
    maximum certainty value (max) and the sum of all certainties.'''
    n = random.uniform(0, 1)
    sum = max * len(dict) - sum
    for key, certainty in dict.iteritems():
        weight = float(max - certainty) / sum
        if n < weight:
            break
        n = n - weight
    return key

I am hoping to get an efficiency gain from dynamically maintaining the sum of certainties and the maximum certainty. Any further suggestions are welcome. You guys save me so much time and effort while increasing my effectiveness; it is crazy. Thx! Thx! Thx!
Update2:
I decided to make it more efficient by letting it choose more choices at once. This will result in an acceptable loss in precision in my algo for it is dynamic in nature. Anyway, here is what I have now:
def weightedChoices(dict, sum, max, choices=10):
    '''an attempt to make a random.choose() function that makes
    weighted choices; accepts a dictionary with the item_key and
    certainty_value as a pair like:
    >>> x = [('one', 20), ('two', 2), ('three', 50)], the
    maximum certainty value (max) and the sum of all certainties.'''
    list = [random.uniform(0, 1) for i in range(choices)]
    list.sort()  # sort() works in place and returns None
    (n, list) = relavate(list)
    keys = []
    sum = max * len(list) - sum
    for key, certainty in dict.iteritems():
        weight = float(max - certainty) / sum
        if n < weight:
            keys.append(key)
            if list: (n, list) = relavate(list)
            else: break
        n = n - weight
    return keys

def relavate(list):
    min = list[0]
    new = [l - min for l in list[1:]]
    return (min, new)
I haven't tried it out yet. If you have any comments/suggestions, please do not hesitate. Thx!
Update3:
I have been working all day on a task-tailored version of Rex Logan's answer. Instead of two arrays of objects and weights, it is actually a special dictionary class, which makes things quite complex since Rex's code generates a random index... I also coded a test case that kind of resembles what will happen in my algo (but I can't really know until I try!). The basic principle is: the more often a key has been randomly chosen, the more unlikely it is to be chosen again:
import random, time
import psyco
psyco.full()

class ProbDict():
    """
    Modified version of Rex Logan's RandomObject class. The more a key is randomly
    chosen, the more unlikely it will further be randomly chosen.
    """
    def __init__(self, keys_weights_values={}):
        self._kw = keys_weights_values
        self._keys = self._kw.keys()
        self._len = len(self._keys)
        self._findSeniors()
        self._effort = 0.15
        self._fails = 0
    def __iter__(self):
        return self.next()
    def __getitem__(self, key):
        return self._kw[key]
    def __setitem__(self, key, value):
        self.append(key, value)
    def __len__(self):
        return self._len
    def next(self):
        key = self._key()
        while key:
            yield key
            key = self._key()
    def __contains__(self, key):
        return key in self._kw
    def items(self):
        return self._kw.items()
    def pop(self, key):
        try:
            (w, value) = self._kw.pop(key)
            self._len -= 1
            if w == self._seniorW:
                self._seniors -= 1
                if not self._seniors:
                    # costly but unlikely:
                    self._findSeniors()
            return [w, value]
        except KeyError:
            return None
    def popitem(self):
        return self.pop(self._key())
    def values(self):
        values = []
        for key in self._keys:
            try:
                values.append(self._kw[key][1])
            except KeyError:
                pass
        return values
    def weights(self):
        weights = []
        for key in self._keys:
            try:
                weights.append(self._kw[key][0])
            except KeyError:
                pass
        return weights
    def keys(self, imperfect=False):
        if imperfect: return self._keys
        return self._kw.keys()
    def append(self, key, value=None):
        if key not in self._kw:
            self._len += 1
            self._kw[key] = [0, value]
            self._keys.append(key)
        else:
            self._kw[key][1] = value
    def _key(self):
        for i in range(int(self._effort * self._len)):
            ri = random.randint(0, self._len - 1)  # choose a random object
            rx = random.uniform(0, self._seniorW)
            rkey = self._keys[ri]
            try:
                w = self._kw[rkey][0]
                if rx >= w:  # test to see if that is the value we want
                    w += 1
                    self._warnSeniors(w)
                    self._kw[rkey][0] = w
                    return rkey
            except KeyError:
                self._keys.pop(ri)
        # if you do not find one after 100 tries then just get a random one
        self._fails += 1  # for confirming effectiveness only
        for key in self._keys:
            if key in self._kw:
                w = self._kw[key][0] + 1
                self._warnSeniors(w)
                self._kw[key][0] = w
                return key
        return None
    def _findSeniors(self):
        '''this function finds the seniors, counts them and assesses their age. It
        is costly but unlikely.'''
        seniorW = 0
        seniors = 0
        for w, value in self._kw.itervalues():
            if w >= seniorW:
                if w == seniorW:
                    seniors += 1
                else:
                    seniorW = w
                    seniors = 1
        self._seniors = seniors
        self._seniorW = seniorW
    def _warnSeniors(self, w):
        # a weight can only be incremented... good
        if w >= self._seniorW:
            if w == self._seniorW:
                self._seniors += 1
            else:
                self._seniors = 1
                self._seniorW = w

def test():
    # test code
    iterations = 200000
    size = 2500
    nextkey = size
    pd = ProbDict(dict([(i, [0, i]) for i in xrange(size)]))
    start = time.clock()
    for i in xrange(iterations):
        key = pd._key()
        w = pd[key][0]
        if random.randint(0, 1 + pd._seniorW - w):
            # the heavier the object, the more unlikely it will be removed
            pd.pop(key)
        probAppend = float(500 + (size - len(pd))) / 1000
        if random.uniform(0, 1) < probAppend:
            nextkey += 1
            pd.append(nextkey)
    print (time.clock() - start) * 1000 / iterations, "msecs / iteration with", pd._fails, "failures /", iterations, "iterations"
    weights = pd.weights()
    weights.sort()
    print "avg weight:", float(sum(weights)) / pd._len, max(weights), pd._seniorW, pd._seniors, len(pd), len(weights)
    print weights

test()
Any comments are still welcome. @Darius: your binary trees are too complex and complicated for me, and I do not think their leaves can be removed efficiently... Thx all
|
This ActiveState recipe gives an easy-to-follow approach, specifically the version in the comments that doesn't require you to pre-normalize your weights:

import random

def weighted_choice(items):
    """items is a list of tuples in the form (item, weight)"""
    weight_total = sum((item[1] for item in items))
    n = random.uniform(0, weight_total)
    for item, weight in items:
        if n < weight:
            return item
        n = n - weight
    return item
This will be slow if you have a large list of items. A binary search would probably be better in that case... but would also be more complicated to write, for little gain if you have a small sample size. Here's an example of the binary search approach in python if you want to follow that route.
(I'd recommend doing some quick performance testing of both methods on your dataset. The performance of different approaches to this sort of algorithm is often a bit unintuitive.)
Edit: I took my own advice, since I was curious, and did a few tests.
I compared four approaches:
1. The weighted_choice function above.
2. A binary-search choice function like so:

import bisect

def weighted_choice_bisect(items):
    added_weights = []
    last_sum = 0
    for item, weight in items:
        last_sum += weight
        added_weights.append(last_sum)
    return items[bisect.bisect(added_weights, random.random() * last_sum)][0]
3. A compiling version of 1:

def weighted_choice_compile(items):
    """returns a function that fetches a random item from items
    items is a list of tuples in the form (item, weight)"""
    weight_total = sum((item[1] for item in items))
    def choice(uniform=random.uniform):
        n = uniform(0, weight_total)
        for item, weight in items:
            if n < weight:
                return item
            n = n - weight
        return item
    return choice
4. A compiling version of 2:

def weighted_choice_bisect_compile(items):
    """Returns a function that makes a weighted random choice from items."""
    added_weights = []
    last_sum = 0
    for item, weight in items:
        last_sum += weight
        added_weights.append(last_sum)
    def choice(rnd=random.random, bis=bisect.bisect):
        return items[bis(added_weights, rnd() * last_sum)][0]
    return choice
I then built a big list of choices like so:
choices = [(random.choice("abcdefg"), random.uniform(0,50)) for i in xrange(2500)]
And an excessively simple profiling function:
import time

def profiler(f, n, *args, **kwargs):
    start = time.time()
    for i in xrange(n):
        f(*args, **kwargs)
    return time.time() - start
The results:
(Seconds taken for 1,000 calls to the function.)
Simple uncompiled: 0.918624162674
Binary uncompiled: 1.01497793198
Simple compiled: 0.287325024605
Binary compiled: 0.00327413797379
The "compiled" results include the average time taken to compile the choice function once. (I timed 1,000 compiles, then divided that time by 1,000, and added the result to the choice function time.)
So: if you have a list of items+weights which change very rarely, the binary compiled method is by far the fastest.
|
Django form fails validation on a unique field
|
I have a simple model that is defined as:
class Article(models.Model):
    slug = models.SlugField(max_length=50, unique=True)
    title = models.CharField(max_length=100, unique=False)

and the form:

class ArticleForm(ModelForm):
    class Meta:
        model = Article

The validation here fails when I try to update an existing row:

if request.method == 'POST':
    form = ArticleForm(request.POST)
    if form.is_valid(): # POOF
        form.save()
Creating a new entry is fine, however, when I try to update any of these fields, the validation no longer passes.
The "errors" property had nothing, but I dropped into the debugger and deep within the Django guts I saw this:
slug: "Article with this None already exists"
So it looks like is_valid() fails on a unique value check, but all I want to do is update the row.
I can't just do:
form.save(force_update=True)
... because the form will fail on validation.
This looks like something very simple, but I just can't figure it out.
I am running Django 1.0.2
What croaks is BaseModelForm.validate_unique() which is called on form initialization.
|
I don't think you are actually updating an existing article, but instead creating a new one, presumably with more or less the same content, especially the slug, and thus you will get an error. It is a bit strange that you don't get better error reporting, but also I do not know what the rest of your view looks like.
What if you were to try something along these lines (I have included a bit more of a possible view function; change it to fit your needs)? I haven't actually tested my code, so I am sure I've made at least one mistake, but you should at least get the general idea:

def article_update(request, id):
    article = get_object_or_404(Article, pk=id)
    if request.method == 'POST':
        form = ArticleForm(request.POST, instance=article)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect(to-some-suitable-url)
    else:
        form = ArticleForm(instance=article)
    return render_to_response('article_update.html', { 'form': form })
The thing is, as taurean noted, you should instantiate your model form with the object you wish to update, otherwise you will get a new one.
|
Intercepting stdout of a subprocess while it is running
|
If this is my subprocess:
import time, sys

for i in range(200):
    sys.stdout.write('reading %i\n' % i)
    time.sleep(.02)
And this is the script controlling and modifying the output of the subprocess:
import subprocess, time, sys

print 'starting'
proc = subprocess.Popen(
    'c:/test_apps/testcr.py',
    shell=True,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE )
print 'process created'

while True:
    #next_line = proc.communicate()[0]
    next_line = proc.stdout.readline()
    if next_line == '' and proc.poll() != None:
        break
    sys.stdout.write(next_line)
    sys.stdout.flush()

print 'done'
Why are readline and communicate waiting until the process is done running? Is there a simple way to pass on (and modify) the subprocess's stdout in real time?
BTW, I've seen this, but I don't need the logging features (and haven't bothered to understand much of it).
I'm on Windows XP.
|
As Charles already mentioned, the problem is buffering. I ran in to a similar problem when writing some modules for SNMPd, and solved it by replacing stdout with an auto-flushing version.
I used the following code, inspired by some posts on ActiveState:
class FlushFile(object):
    """Write-only flushing wrapper for file-type objects."""
    def __init__(self, f):
        self.f = f
    def write(self, x):
        self.f.write(x)
        self.f.flush()

# Replace stdout with an automatically flushing version
sys.stdout = FlushFile(sys.__stdout__)
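On the parent side, an alternative sketch (assuming the child is a Python script you can launch explicitly) is to force unbuffered output with the interpreter's -u switch, so the child needs no changes:

proc = subprocess.Popen(
    ['python', '-u', 'c:/test_apps/testcr.py'],  # -u = unbuffered stdout/stderr
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE)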
|
How to deploy a Python application with libraries as source with no further dependencies?
|
Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako)
The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install.
The Problem
This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access).
I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion.
Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
|
Just use virtualenv - it is a tool to create isolated Python environments. You can create a set-up script and distribute the whole bunch if you want.
|
Python: Pass or Sleep for long running processes?
|
I am writing a queue-processing application which uses threads for waiting on and responding to queue messages delivered to the app. For the main part of the application, it just needs to stay active. For a code example like:
while True:
    pass

or

while True:
    time.sleep(1)
Which one will have the least impact on a system? What is the preferred way to do nothing, but keep a python app running?
|
I would imagine time.sleep() will have less overhead on the system. Using pass will cause the loop to immediately re-evaluate and peg the CPU, whereas using time.sleep will allow the execution to be temporarily suspended.
EDIT: just to prove the point, if you launch the python interpreter and run this:
>>> while True:
...     pass
...
You can watch Python start eating up 90-100% CPU instantly, versus:
>>> import time
>>> while True:
...     time.sleep(1)
...
Which barely even registers on the Activity Monitor (using OS X here but it should be the same for every platform).
|
Traverse a list in reverse order in Python
|
So I can start from len(collection) and end in collection[0].
EDIT: Sorry, I forgot to mention I also want to be able to access the loop index.
|
Use the reversed() built-in function:
>>> a = ["foo", "bar", "baz"]
>>> for i in reversed(a):
...     print i
...
baz
bar
foo
To also access the original index:
>>> for i, e in reversed(list(enumerate(a))):
...     print i, e
...
2 baz
1 bar
0 foo
|
Easy_install cache downloaded files
|
Is there a way to configure easy_install to avoid having to download the files again when an installation fails?
|
pip (http://pypi.python.org/pypi/pip/) is a drop-in replacement for the easy_install tool and can do that.
Just run easy_install pip and set an environment variable PIP_DOWNLOAD_CACHE to the path you want pip to store the files.
Note that the cache won't work with dependencies that checkout from a source code repository (like svn/git/hg/bzr).
Then use pip install instead of easy_install
|
Django - How to prepopulate admin form fields
|
I know that you can prepopulate admin form fields based on other fields. For example, I have a slug field that is automatically populated based on the title field.
However, I would also like to make other automatic prepopulations based on the date. For example, I have an URL field, and I want it to automatically be set to http://example.com/20090209.mp3 where 20090209 is YYYYMMDD.
I would also like to have a text field that automatically starts with something like "Hello my name is author" where author is the current user's name. Of course, I also want the person to be able to edit the field. The point is to just make it so the user can fill out the admin form more easily, and not just to have fields that are completely automatic.
|
I know that you can prepopulate some values via GET; it will be something like this:
http://localhost:8000/admin/app/model/add/?model_field=hello
I had some problems with date fields, but maybe this could help you.
|
Accessing POST Data from WSGI
|
I can't seem to figure out how to access POST data using WSGI. I tried the example on the wsgi.org website and it didn't work. I'm using Python 3.0 right now. Please don't recommend a WSGI framework as that is not what I'm looking for.
I would like to figure out how to get it into a fieldstorage object.
|
Assuming you are trying to get just the POST data into a FieldStorage object:
import cgi

# env is the environment handed to you by the WSGI server.
# I am removing the query string from the env before passing it to the
# FieldStorage so we only have POST data in there.
post_env = env.copy()
post_env['QUERY_STRING'] = ''
post = cgi.FieldStorage(
    fp=env['wsgi.input'],
    environ=post_env,
    keep_blank_values=True
)
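For context, a minimal sketch of how that snippet might sit inside a complete WSGI app (the 'name' form field is hypothetical):

import cgi

def application(env, start_response):
    post_env = env.copy()
    post_env['QUERY_STRING'] = ''
    post = cgi.FieldStorage(
        fp=env['wsgi.input'],
        environ=post_env,
        keep_blank_values=True
    )
    body = 'name = %s' % post.getfirst('name', '')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body.encode('utf-8')]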
|
Python 2.x gotcha's and landmines
|
The purpose of my question is to strengthen my knowledge base with Python and get a better picture of it, which includes knowing its faults and surprises. To keep things specific, I'm only interested in the CPython interpreter.
I'm looking for something similar to what I learned from my PHP landmines question, where some of the answers were well known to me but a couple were borderline horrifying.
Update:
Apparently one maybe two people are upset that I asked a question that's already partially answered outside of Stack Overflow. As some sort of compromise here's the URL
http://www.ferg.org/projects/python_gotchas.html
Note that one or two answers here already are original from what was written on the site referenced above.
|
Expressions in default arguments are calculated when the function is defined, not when it's called.
Example: consider defaulting an argument to the current time:
>>> import time
>>> def report(when=time.time()):
...     print when
...
>>> report()
1210294387.19
>>> time.sleep(5)
>>> report()
1210294387.19
The when argument doesn't change. It is evaluated when you define the function. It won't change until the application is re-started.
Strategy: you won't trip over this if you default arguments to None and then do something useful when you see it:
>>> def report(when=None):
...     if when is None:
...         when = time.time()
...     print when
...
>>> report()
1210294762.29
>>> time.sleep(5)
>>> report()
1210294772.23
Exercise: to make sure you've understood: why is this happening?
>>> def spam(eggs=[]):
...     eggs.append("spam")
...     return eggs
...
>>> spam()
['spam']
>>> spam()
['spam', 'spam']
>>> spam()
['spam', 'spam', 'spam']
>>> spam()
['spam', 'spam', 'spam', 'spam']
|
Parsing datetime strings with microseconds
|
I have a text file with a lot of datetime strings in isoformat. The strings are similar to this:
'2009-02-10 16:06:52.598800'
These strings were generated using str(datetime_object). The problem is that, for some reason, str(datetime_object) generates a different format when the datetime object has microseconds set to zero and some strings look like this:
'2009-02-10 16:06:52'
How can I parse these strings and convert them into a datetime object?
It's very important to get all the data in the object, including microseconds.
I have to use Python 2.5, I've found that the format directive %f for microseconds doesn't exist in 2.5.
|
Alternatively:
from datetime import datetime
def str2datetime(s):
    parts = s.split('.')
    dt = datetime.strptime(parts[0], "%Y-%m-%d %H:%M:%S")
    if len(parts) == 1:
        return dt  # no fractional part, microseconds stay at zero
    return dt.replace(microsecond=int(parts[1]))
Using strptime itself to parse the date/time string (so no need to think up corner cases for a regex).
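A quick check against both formats from the question:

>>> str2datetime('2009-02-10 16:06:52.598800')
datetime.datetime(2009, 2, 10, 16, 6, 52, 598800)
>>> str2datetime('2009-02-10 16:06:52')
datetime.datetime(2009, 2, 10, 16, 6, 52)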
|
Adding a shebang causes No such file or directory error when running my python script
|
I'm trying to run a python script. It works fine when I run it:
python2.5 myscript.py inpt0
The problem starts when I add a shebang:
#!/usr/bin/env python2.5
Result in:
$ myscript.py inpt0
: No such file or directory
Try 2:
#!/usr/local/bin/python2.5
Result in:
$ myscript.py inpt0
: bad interpreter: No such file or directoryon2.5
When I run them directly in the terminal they both work just fine:
$ /usr/local/bin/python2.5
Python 2.5.4 (r254:67916, Feb 9 2009, 12:50:32)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
$ /usr/bin/env python2.5
Python 2.5.4 (r254:67916, Feb 9 2009, 12:50:32)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
Any hints on how to make this work with shebang?
|
I had similar problems and it turned out to be a problem with line endings. Are you using Windows, Linux, or Mac line endings?
Edit: I forgot the name of the conversion tool, but as the OP says, it's dos2unix <filename>
|
Extend standard python logging, to include the line number where a log method was invoked
|
Is it possible to somehow decorate/extend the python standard logging system,
so that when a logging method is invoked it also logs the file and the line number where it was invoked or maybe the method that invoked it.
|
Sure, check formatters in logging docs. Specifically the lineno and pathname variables.
%(pathname)s Full pathname of the source file where the logging call was issued(if available).
%(filename)s Filename portion of pathname.
%(module)s Module (name portion of filename).
%(funcName)s Name of function containing the logging call.
%(lineno)d Source line number where the logging call was issued (if available).
Looks something like this:
formatter = logging.Formatter('[%(asctime)s] p%(process)s {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s','%m-%d %H:%M:%S')
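For completeness, a small sketch of wiring that formatter into a logger (standard library calls only):

import logging

handler = logging.StreamHandler()
handler.setFormatter(formatter)  # the formatter defined above
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning('something happened')
# -> [02-10 16:06:52] p1234 {/path/to/module.py:42} WARNING - something happened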
|
What does 'self' refer to in a @classmethod?
|
I thought I was starting to get a grip on "the Python way" of programming. Methods of a class accept self as the first parameter to refer to the instance of the class whose context the method is being called in. The @classmethod decorator refers to a method whose functionality is associated with the class, but which doesn't reference a specific instance.
So, what does the first parameter of a @classmethod (canonically 'self') refer to if the method is meant to be called without an instance reference?
|
The class itself:

A class method receives the class as implicit first argument, just like an instance method receives the instance.

class C:
    @classmethod
    def f(cls):
        print(cls.__name__, type(cls))

>>> C.f()
C <class 'type'>
and it's cls canonically, btw
|
Dynamic/runtime method creation (code generation) in Python
|
I need to generate code for a method at runtime. It's important to be able to run arbitrary code and have a docstring.
I came up with a solution combining exec and setattr, here's a dummy example:
class Viking(object):
    def __init__(self):
        code = '''
def dynamo(self, arg):
    """ dynamo's a dynamic method!
    """
    self.weight += 1
    return arg * self.weight
'''
        self.weight = 50
        d = {}
        exec code.strip() in d
        setattr(self.__class__, 'dynamo', d['dynamo'])

if __name__ == "__main__":
    v = Viking()
    print v.dynamo(10)
    print v.dynamo(10)
    print v.dynamo.__doc__
Is there a better / safer / more idiomatic way of achieving the same result?
|
Based on Theran's code, but extending it to methods on classes:
class Dynamo(object):
    pass

def add_dynamo(cls, i):
    def innerdynamo(self):
        print "in dynamo %d" % i
    innerdynamo.__doc__ = "docstring for dynamo%d" % i
    innerdynamo.__name__ = "dynamo%d" % i
    setattr(cls, innerdynamo.__name__, innerdynamo)

for i in range(2):
    add_dynamo(Dynamo, i)

d = Dynamo()
d.dynamo0()
d.dynamo1()
Which should print:
in dynamo 0
in dynamo 1
|
What is a mixin, and why are they useful?
|
In "Programming Python", Mark Lutz mentions "mixins". I'm from a C/C++/C# background and I have not heard the term before. What is a mixin?
Reading between the lines of this example (which I've linked to because it's quite long), I'm presuming it's a case of using multiple inheritance to extend a class as opposed to 'proper' subclassing. Is this right?
Why would I want to do that rather than put the new functionality into a subclass? For that matter, why would a mixin/multiple inheritance approach be better than using composition?
What separates a mixin from multiple inheritance? Is it just a matter of semantics?
|
A mixin is a special kind of multiple inheritance. There are two main situations where mixins are used:
You want to provide a lot of optional features for a class.
You want to use one particular feature in a lot of different classes.
For an example of number one, consider werkzeug's request and response system. I can make a plain old request object by saying:
from werkzeug import BaseRequest

class Request(BaseRequest):
    pass

If I want to add accept header support, I would make that

from werkzeug import BaseRequest, AcceptMixin

class Request(BaseRequest, AcceptMixin):
    pass

If I wanted to make a request object that supports accept headers, etags, authentication, and user agent support, I could do this:

from werkzeug import BaseRequest, AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthorizationMixin

class Request(BaseRequest, AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthorizationMixin):
    pass
The difference is subtle, but in the above examples, the mixin classes weren't made to stand on their own. In more traditional multiple inheritance, the AuthenticationMixin (for example) would probably be something more like Authenticator. That is, the class would probably be designed to stand on its own.
|
Get the cartesian product of a series of lists in Python
|
How can I get the Cartesian product (every possible combination of values) from a group of lists?
Input:
somelists = [
    [1, 2, 3],
    ['a', 'b'],
    [4, 5]
]
Desired output:
[(1, 'a', 4), (1, 'a', 5), (1, 'b', 4), (1, 'b', 5), (2, 'a', 4), (2, 'a', 5) ...]
|
In Python 2.6+
import itertools

for element in itertools.product(*somelists):
    print element
Documentation:
Python 2 - itertools.product
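If you want the list shown in the question rather than printing one tuple at a time, wrap the iterator in list() (same itertools.product, just collected):
import itertools
result = list(itertools.product(*somelists))
# [(1, 'a', 4), (1, 'a', 5), (1, 'b', 4), (1, 'b', 5), (2, 'a', 4), ...]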
|
How to create a GUID/UUID in Python
|
How do I create a GUID in Python that is platform independent? I hear there is a method using ActivePython on Windows but it's Windows only because it uses COM. Is there a method using plain Python?
|
If you're using Python 2.5 or later, the uuid module is already included with the Python standard distribution.
Ex:
>>> import uuid
>>> uuid.uuid4()
UUID('5361a11b-615c-42bf-9bdb-e2c3790ada14')
|
Subtracting 2 lists in Python
|
Right now I have vector3 values represented as lists. is there a way to subtract 2 of these like vector3 values, like
[2,2,2] - [1,1,1] = [1,1,1]
Should I use tuples?
If none of them defines these operands on these types, can I define it instead?
If not, should I create a new vector3 class?
|
If this is something you end up doing frequently, and with different operations, you should probably create a class to handle cases like this, or better use some library like Numpy.
Otherwise, look for list comprehensions used with the zip builtin function:
[a_i - b_i for a_i, b_i in zip(a, b)]
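If you do reach for Numpy, subtraction is elementwise on arrays; a quick sketch (assuming numpy is installed):
import numpy as np
a = np.array([2, 2, 2])
b = np.array([1, 1, 1])
print a - b  # [1 1 1]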
|
C++ string parsing (python style)
|
I love how in python I can do something like:
points = []
for line in open("data.txt"):
a,b,c = map(float, line.split(','))
points += [(a,b,c)]
Basically it's reading a list of lines where each one represents a point in 3D space, the point is represented as three numbers separated by commas
How can this be done in C++ without too much headache?
Performance is not very important, this parsing only happens one time, so simplicity is more important.
P.S. I know it sounds like a newbie question, but believe me I've written a lexer in D (pretty much like C++) which involves reading some text char by char and recognizing tokens,
it's just that, coming back to C++ after a long period of python, just makes me not wanna waste my time on such things.
|
I'd do something like this:
ifstream f("data.txt");
string str;
while (getline(f, str)) {
Point p;
sscanf(str.c_str(), "%f, %f, %f\n", &p.x, &p.y, &p.z);
points.push_back(p);
}
The Point members x, y and z must be floats for the %f conversions to match (and points is assumed to be a std::vector<Point>).
And include:
#include <cstdio> // for sscanf
#include <fstream>
#include <string>
#include <vector>
|
Reserve memory for list in Python?
|
When programming in Python, is it possible to reserve memory for a list that will be populated with a known number of items, so that the list will not be reallocated several times while building it? I've looked through the docs for a Python list type, and have not found anything that seems to do this. However, this type of list building shows up in a few hotspots of my code, so I want to make it as efficient as possible.
Edit: Also, does it even make sense to do something like this in a language like Python? I'm a fairly experienced programmer, but new to Python and still getting a feel for its way of doing things. Does Python internally allocate all objects in separate heap spaces, defeating the purpose of trying to minimize allocations, or are primitives like ints, floats, etc. stored directly in lists?
|
Here's four variants:
an incremental list creation
"pre-allocated" list
array.array()
numpy.zeros()
python -mtimeit -s"N=10**6" "a = []; app = a.append;"\
"for i in xrange(N): app(i);"
10 loops, best of 3: 390 msec per loop
python -mtimeit -s"N=10**6" "a = [None]*N; app = a.append;"\
"for i in xrange(N): a[i] = i"
10 loops, best of 3: 245 msec per loop
python -mtimeit -s"from array import array; N=10**6" "a = array('i', [0]*N)"\
"for i in xrange(N):" " a[i] = i"
10 loops, best of 3: 541 msec per loop
python -mtimeit -s"from numpy import zeros; N=10**6" "a = zeros(N,dtype='i')"\
"for i in xrange(N):" " a[i] = i"
10 loops, best of 3: 353 msec per loop
It shows that [None]*N is the fastest and array.array is the slowest in this case.
|
Standard python interpreter has a vi command mode?
|
this is going to sound pretty ignorant, but:
I was working a bit in the python interpreter (python 2.4 on RHEL 5.3), and suddenly found myself in what seems to be a 'vi command mode'. That is, I can edit previous commands with typical vi key bindings, going left with h, deleting with x...
I love it - the only thing is, I don't know how I got here (perhaps it's through one of the modules I've imported: pylab/matplotlib?).
Can anyone shed some light on how to enable this mode in the interpreter?
|
This kind of all depends on a few things.
First of all, the python shell uses readline, and as such, your ~/.inputrc is important here. That's the same with psql the PostgreSQL command-line interpreter and mysql the MySQL shell. All of those can be configured to use vi-style command bindings, with history etc.
<ESC> will put you into vi mode at the python shell once you've got your editing mode set to vi
You may need the following definition in your ~/.inputrc
set editing-mode vi
OSX info
OSX uses libedit which uses ~/.editrc. You can man editrc for more information.
For example, to mimic a popular key combination which searches in your history, you can add the following to your .editrc
bind "^R" em-inc-search-prev
|
Python IDE built into Visual Studio 2008?
|
Hi, I develop in Visual Studio 2008 a lot and would like to find an add-in like vsphp that enables IntelliSense and debugging in Visual Studio. Is IronStudio what I am looking for? As far as I understand, IronStudio is a plugin for .NET.
If there is no plugin for Visual Studio 2008, what's a great IDE for a Python newbie who loves Visual Studio? (I hate NetBeans and Eclipse; sorry, I just don't feel the love.)
Also if IronPython Studio is an IDE do I want the Isolated or Integrated Version? I don't seem to understand the name.
I installed integrated and got this: http://dl.getdropbox.com/u/5910/Jing/2009-02-11_1750.png :( no console like here: http://www.codeplex.com/IronPythonStudio
|
Have a look at PyScripter, I haven't tried it extensively but heard good things about it.
It's not an addon to Visual Studio, it's an independent IDE.
|
Iterating over a string
|
In C++, I could do:
for (int i = 0; i < str.length(); ++i)
std::cout << str[i] << std::endl;
How do I iterate over a string in Python?
|
As Johannes pointed out,
for c in "string":
#do something with c
You can iterate pretty much anything in python using the for loop construct,
for example, open("file.txt") returns a file object (and opens the file), iterating over it iterates over lines in that file
for line in open(filename):
# do something with line
If that seems like magic, well it kinda is, but the idea behind it is really simple.
There's a simple iterator protocol that can be applied to any kind of object to make the for loop work on it.
Simply implement an iterator that defines a next() method, and implement an __iter__ method on a class to make it iterable. (the __iter__ of course, should return an iterator object, that is, an object that defines next())
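For instance, a minimal sketch of the protocol (Python 2 style, hence next() rather than __next__()):
class Countdown(object):
    """Iterates from n down to 1."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self  # this object is its own iterator
    def next(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1
for i in Countdown(3):
    print i  # 3, then 2, then 1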
See official documentation
|
Handling very large numbers in Python
|
I've been considering fast poker hand evaluation in Python. It occurred to me that one way to speed the process up would be to represent all the card faces and suits as prime numbers and multiply them together to represent the hands. To whit:
class PokerCard:
faces = '23456789TJQKA'
suits = 'cdhs'
facePrimes = [11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 53, 59, 61]
suitPrimes = [2, 3, 5, 7]
AND
def HashVal(self):
return PokerCard.facePrimes[self.cardFace] * PokerCard.suitPrimes[self.cardSuit]
This would give each hand a numeric value that, through modulo could tell me how many kings are in the hand or how many hearts. For example, any hand with five or more clubs in it would divide evenly by 2^5; any hand with four kings would divide evenly by 59^4, etc.
The problem is that a seven-card hand like AcAdAhAsKdKhKs has a hash value of approximately 62.7 quadrillion, which would take considerably more than 32 bits to represent internally. Is there a way to store such large numbers in Python that will allow me to perform arithmetic operations on it?
|
Python supports a "bignum" integer type which can work with arbitrarily large numbers. In Python 2.5+, this type is called long and is separate from the int type, but the interpreter will automatically use whichever is more appropriate. In Python 3.0+, the separate long type has been dropped entirely; int itself is now arbitrary-precision.
That's just an implementation detail, though: as long as you have version 2.5 or better, just perform standard math operations and any number which exceeds the boundaries of 32-bit math will be automatically (and transparently) converted to a bignum.
You can find all the gory details in PEP 0237.
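For example, a quick interactive check (Python 2 shown; on Python 3 the type is simply int):
>>> 2 ** 100
1267650600228229401496703205376
>>> type(2 ** 100)
<type 'long'>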
|
Python format timedelta to string
|
I'm a Python newbie (2 weeks) and I'm having trouble formatting a datetime.timedelta object.
Here's what I'm trying to do:
I have a list of objects and one of the members of the class of the object is a timedelta object that shows the duration of an event. I would like to display that duration in the format of hours:minutes.
I have tried a variety of methods for doing this and I'm having difficulty. My current approach is to add methods to the class for my objects that return hours and minutes. I can get the hours by dividing the timedelta.seconds by 3600 and rounding it. I'm having trouble with getting the remainder seconds and converting that to minutes.
By the way, I'm using Google AppEngine with Django Templates for presentation.
If anyone can help or knows of a better way to resolve this, I would be very happy.
Thanks,
|
You can just convert the timedelta to a string with str(). Here's an example:
import datetime
start = datetime.datetime(2009,2,10,14,00)
end = datetime.datetime(2009,2,10,16,00)
delta = end-start
print str(delta)
# prints 2:00:00
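If you specifically want an hours:minutes string, divmod on the total number of seconds keeps the remainder arithmetic simple; a minimal sketch:
def hours_minutes(delta):
    # fold the days into seconds first; timedelta stores them separately
    total_seconds = delta.days * 86400 + delta.seconds
    hours, remainder = divmod(total_seconds, 3600)
    return '%d:%02d' % (hours, remainder // 60)
print hours_minutes(delta)  # prints 2:00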
|
Return a tuple of arguments to be fed to string.format()
|
Currently, I'm trying to get a method in Python to return a list of zero, one, or two strings to plug into a string formatter, and then pass them to the string method. My code looks something like this:
class PairEvaluator(HandEvaluator):
def returnArbitrary(self):
return ('ace', 'king')
pe = PairEvaluator()
cards = pe.returnArbitrary()
print('Two pair, {0}s and {1}s'.format(cards))
When I try to run this code, the interpreter raises an IndexError: tuple index out of range.
How should I structure my return value to pass it as an argument to .format()?
|
print('Two pair, {0}s and {1}s'.format(*cards))
You are missing only the star :D The * unpacks the tuple into separate positional arguments, so format() receives two strings instead of one tuple.
|
Python 3.0 urllib.parse error "Type str doesn't support the buffer API"
|
File "/usr/local/lib/python3.0/cgi.py", line 477, in __init__
self.read_urlencoded()
File "/usr/local/lib/python3.0/cgi.py", line 577, in read_urlencoded
self.strict_parsing):
File "/usr/local/lib/python3.0/urllib/parse.py", line 377, in parse_qsl
pairs = [s2 for s1 in qs.split('&') for s2 in s1.split(';')]
TypeError: Type str doesn't support the buffer API
Can anybody direct me on how to avoid this? I'm getting it through feeding data into the cgi.Fieldstorage and I can't seem to do it any other way.
|
urllib is trying to do:
b'a,b'.split(',')
Which doesn't work. Byte strings and unicode strings mix even less smoothly in Py3k than they used to, quite deliberately, to make encoding problems go wrong sooner rather than later.
So the error is rather opaquely telling you "you can't pass a byte string to urllib.parse". Presumably you are doing a POST request, where the form-encoded string is coming into cgi as a content body; the content body is still a byte string/stream, so it now clashes with the new urllib.
So yeah, it's a bug in cgi.py, yet another victim of 2to3 conversion that hasn't been fixed properly for the new string model. It should be converting the incoming byte stream to characters before passing them to urllib.
Did I mention Python 3.0's libraries (especially web-related ones) still being rather shonky? :-)
|
Is it possible to programmatically construct a Python stack frame and start execution at an arbitrary point in the code?
|
Is it possible to programmatically construct a stack (one or more stack frames) in CPython and start execution at an arbitrary code point? Imagine the following scenario:
You have a workflow engine where workflows can be scripted in Python with some constructs (e.g. branching, waiting/joining) that are calls to the workflow engine.
A blocking call, such as a wait or join sets up a listener condition in an event-dispatching engine with a persistent backing store of some sort.
You have a workflow script, which calls the Wait condition in the engine, waiting for some condition that will be signalled later. This sets up the listener in the event dispatching engine.
The workflow script's state, relevant stack frames including the program counter (or equivalent state) are persisted - as the wait condition could occur days or months later.
In the interim, the workflow engine might be stopped and re-started, meaning that it must be possible to programmatically store and reconstruct the context of the workflow script.
The event dispatching engine fires the event that the wait condition picks up.
The workflow engine reads the serialised state and stack and reconstructs a thread with the stack. It then continues execution at the point where the wait service was called.
The Question
Can this be done with an unmodified Python interpreter? Even better, can anyone point me to some documentation that might cover this sort of thing or an example of code that programmatically constructs a stack frame and starts execution somewhere in the middle of a block of code?
Edit: To clarify 'unmodified python interpreter', I don't mind using the C API (is there enough information in a PyThreadState to do this?) but I don't want to go poking around the internals of the Python interpreter and having to build a modified one.
Update: From some initial investigation, one can get the execution context with PyThreadState_Get(). This returns the thread state in a PyThreadState (defined in pystate.h), which has a reference to the stack frame in frame. A stack frame is held in a struct typedef'd to PyFrameObject, which is defined in frameobject.h. PyFrameObject has a field f_lasti (props to bobince) which has a program counter expressed as an offset from the beginning of the code block.
This last is sort of good news, because it means that as long as you preserve the actual compiled code block, you should be able to reconstruct locals for as many stack frames as necessary and re-start the code. I'd say this means that it is theoretically possible without having to make a modified python interpereter, although it means that the code is still probably going to be fiddly and tightly coupled to specific versions of the interpreter.
The three remaining problems are:
Transaction state and 'saga' rollback, which can probably be accomplished by the sort of metaclass hacking one would use to build an O/R mapper. I did build a prototype once, so I have a fair idea of how this might be accomplished.
Robustly serialising transaction state and arbitrary locals. This might be accomplished by reading __locals__ (which is available from the stack frame) and programatically constructing a call to pickle. However, I don't know what, if any, gotchas there might be here.
Versioning and upgrade of workflows. This is somewhat trickier, as the system is not providing any symbolic anchors for workflow nodes. All we have is the anchor
In order to do this, one would have to identify the offsets of all of the entry points and map them to the new version. Probably feasible to do manually, but I suspect it would be hard to automate. This is probably the biggest obstacle if you want to support this capability.
Update 2: PyCodeObject (code.h) has a list of addr (f_lasti)-> line number mappings in PyCodeObject.co_lnotab (correct me if wrong here). This might be used to facilitate a migration process to update workflows to a new version, as frozen instruction pointers could be mapped to the appropriate place in the new script, done in terms of the line numbers. Still quite messy but a little more promising.
Update 3: I think the answer to this might be Stackless Python. You can suspend tasks and serialise them. I haven't worked out whether this will also work with the stack as well.
|
The expat Python bindings included in the normal Python distribution construct stack frames programmatically. Be warned, though: they rely on undocumented and private APIs.
http://svn.python.org/view/python/trunk/Modules/pyexpat.c?rev=64048&view=auto
|
Extracting extension from filename in Python
|
Is there a function to extract the extension from a filename?
|
Yes. Use os.path.splitext:
>>> import os
>>> filename, file_extension = os.path.splitext('/path/to/somefile.ext')
>>> filename
'/path/to/somefile'
>>> file_extension
'.ext'
|
How to do variable assignment inside a while(expression) loop in Python?
|
I have the variable assignment in order to return the assigned value and compare that to an empty string, directly in the while loop.
Here is how I'm doing it in PHP (written with Python's raw_input for illustration):
while((name = raw_input("Name: ")) != ''):
names.append(name)
What I'm trying to do is identical to this in functionality:
names = []
while(True):
name = raw_input("Name: ")
if (name == ''):
break
names.append(name)
Is there any way to do this in Python?
|
from functools import partial
for name in iter(partial(raw_input, 'Name: '), ''):
    do_something_with(name)
The two-argument form of iter() calls its first argument (any callable) repeatedly until it returns the sentinel value, here the empty string.
or if you want a list:
>>> names = list(iter(partial(raw_input, 'Name: '), ''))
Name: nosklo
Name: Andreas
Name: Aaron
Name: Phil
Name:
>>> names
['nosklo', 'Andreas', 'Aaron', 'Phil']
|
Are there any good build frameworks written in Python?
|
I switched from NAnt to using Python to write build automation scripts. I am curious if whether any build frameworks worth using that are similar to Make, Ant, and NAnt, but, instead, are Python-based. For example, Ruby has Rake. What about Python?
|
Try SCons
Or are you looking for something just to build python projects?
|
Benefits of os.path.splitext over regular .split?
|
In this other question, the votes clearly show that the os.path.splitext function is preferred over the simple .split('.')[-1] string manipulation. Does anyone have a moment to explain exactly why that is? Is it faster, or more accurate, or what? I'm willing to accept that there's something better about it, but I can't immediately see what it might be. Might importing a whole module to do this be overkill, at least in simple cases?
EDIT: The OS specificity is a big win that's not immediately obvious; but even I should've seen the "what if there isn't a dot" case! And thanks to everybody for the general comments on library usage.
|
Well, there are separate implementations for separate operating systems. This means that if the logic to extract the extension of a file differs on Mac from that on Linux, this distinction will be handled by those things. I don't know of any such distinction so there might be none.
Edit: @Brian comments that an example like /directory.ext/file would of course not work with a simple .split('.') call, and you would have to know both that directories can use extensions, as well as the fact that on some operating systems, forward slash is a valid directory separator.
This just emphasizes the "use a library routine unless you have a good reason not to" part of my answer.
Thanks @Brian.
Additionally, where a file doesn't have an extension, you would have to build in logic to handle that case. And what if the thing you try to split is a directory name ending with a backslash? No filename nor an extension.
The rule should be that unless you have a specific reason not to use a library function that does what you want, use it. This will avoid you having to maintain and bugfix code others have perfectly good solutions to.
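A couple of quick cases where the naive split gives the wrong answer (illustrative):
>>> import os.path
>>> os.path.splitext('/directory.ext/file')
('/directory.ext/file', '')
>>> '/directory.ext/file'.split('.')[-1]
'ext/file'
>>> os.path.splitext('noextension')
('noextension', '')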
|
Highlighting unmatched brackets in vim
|
I'm getting burned repeatedly by unmatched parentheses while writing python code in vim. I like how they're handled for C code - vim highlights in red all of the curly braces following the unmatched paren. I looked at the c.vim syntax file briefly to try to understand it, but the section that handles bracket errors is very complex. Can anyone explain how that code works and suggest how I might write something similar for python code?
Example C code with unmatched parens:
int main(void
{ /* brace highlighted in red */
} /* brace highlighted in red */
Since python code doesn't have curly braces to highlight, we'll have to choose something else (perhaps other parentheses).
BTW, I tried out this vim plugin but I wasn't happy with the behavior.
Edit:
I'm using python to generate C++ code (a language that likes parentheses and semicolons). I have a nasty habit of leaving the trailing paren off a file.write() method call. It would be nice if I could get vim to make that mistake more visually obvious.
Update:
Ok, here's what I've tried so far.
:syn region pParen transparent start="(" end=")" contains=ALL
:syn match pError display ")"
:hi def link pError Error
Unfortunately, all this does is highlight as an error the right paren of all balanced parentheses, the opposite of what I want to do. I really don't understand what I'm doing here (just copied off of the existing C syntax file). If anyone could explain what I did (wrong), I would appreciate it.
|
You can get vim to do the opposite: do a
:set showmatch
and it will highlight matching parens. You'll know when you're unbalanced when it doesn't highlight something.
I'm also assuming you're familiar with the '%' command, which bounces you to the matching element.
|
How do I attach a remote debugger to a Python process?
|
I'm tired of inserting
import pdb; pdb.set_trace()
lines into my Python programs and debugging through the console. How do I connect a remote debugger and insert breakpoints from a civilized user interface?
|
Use Winpdb. It is a platform-independent graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, and encrypted communication, and it is up to 20 times faster than pdb.
Features:
GPL license. Winpdb is Free Software.
Compatible with CPython 2.3 through 2.6 and Python 3000
Compatible with wxPython 2.6 through 2.8
Platform independent, and tested on Ubuntu Gutsy and Windows XP.
User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later.
|
Using different versions of python for different projects in Eclipse
|
So, I'm slowly working in some Python 3.0, but I still have a lot of things that rely on 2.5.
But, in Eclipse, every time I change projects between a 3.0 and a 2.5, I need to go through
Project -> Properties -> project type.
Issue 1: if I just switch the interpreter in the drop down box, that doesn't seem to change anything. I need to click "click here to configure an interpreter not listed", and UP the interpreter I wish to use.
Issue 2: That would be fine if I was switching to 3.0 for every project for the rest of my life, but I still am doing a lot of switching between projects and I don't see that changing anytime soon. So, I'm just trying to save a few operations.
Is there a way to configure Eclipse so that it remembers which interpreter I want associated with which project?
What if I created an entirely new workspace? Is "interpreter" a property of a workspace?
Also, it doesn't seem to matter what I choose when I create a new project via File -> New -> Pydev Project. Whatever I last selected through "Properties" is what eclipse is using.
This is Eclipse 3.4.0, running in Windows XP.
|
You can set the interpreter version on a per-script basis through the Run Configurations menu.
To do this go to Run -> Run Configurations, and then make a new entry under Python Run. Fill in your project name and the main script, and then go to the Interpreter tab, where you can pick which interpreter you want to use for that script.
I've used this to have Python 2.2, 2.5, and 3.0 projects in the same workspace.
|
Mapping a global variable from a shared library with ctypes
|
I'd like to map an int value pbs_errno declared as a global in the library libtorque.so using ctypes.
Currently I can load the library like so:
from ctypes import *
libtorque = CDLL("libtorque.so")
and have successfully mapped a bunch of the functions. However, for error checking purposes many of them set the pbs_errno variable so I need access to that as well. However if I try to access it I get:
>>> pytorque.libtorque.pbs_errno
<_FuncPtr object at 0x9fc690>
Of course, it's not a function pointer and attempting to call it results in a seg fault.
It's declared as int pbs_errno; in the main header and extern int pbs_errno; in the API header files.
Objdump shows the symbol as:
00000000001294f8 g DO .bss 0000000000000004 Base pbs_errno
|
There's a section in the ctypes docs about accessing values exported in dlls:
http://docs.python.org/library/ctypes.html#accessing-values-exported-from-dlls
e.g.
def pbs_errno():
return c_int.in_dll(libtorque, "pbs_errno")
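The returned c_int shares memory with the library's global, so reading it after a failed call gives the current error code; a small usage sketch:
err = pbs_errno().value  # the current value of the global int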
|
How do you verify an RSA SHA1 signature in Python?
|
I've got a string, a signature, and a public key, and I want to verify the signature on the string. The key looks like this:
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDfG4IuFO2h/LdDNmonwGNw5srW
nUEWzoBrPRF1NM8LqpOMD45FAPtZ1NmPtHGo0BAS1UsyJEGXx0NPJ8Gw1z+huLrl
XnAVX5B4ec6cJfKKmpL/l94WhP2v8F3OGWrnaEX1mLMoxe124Pcfamt0SPCGkeal
VvXw13PLINE/YptjkQIDAQAB
-----END PUBLIC KEY-----
I've been reading the pycrypto docs for a while, but I can't figure out how to make an RSAobj with this kind of key. If you know PHP, I'm trying to do the following:
openssl_verify($data, $signature, $public_key, OPENSSL_ALGO_SHA1);
Also, if I'm confused about any terminology, please let me know.
|
Use M2Crypto. Here's how to verify for RSA and any other algorithm supported by OpenSSL:
pem = """-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDfG4IuFO2h/LdDNmonwGNw5srW
nUEWzoBrPRF1NM8LqpOMD45FAPtZ1NmPtHGo0BAS1UsyJEGXx0NPJ8Gw1z+huLrl
XnAVX5B4ec6cJfKKmpL/l94WhP2v8F3OGWrnaEX1mLMoxe124Pcfamt0SPCGkeal
VvXw13PLINE/YptjkQIDAQAB
-----END PUBLIC KEY-----""" # your example key
from M2Crypto import BIO, RSA, EVP
bio = BIO.MemoryBuffer(pem)
rsa = RSA.load_pub_key_bio(bio)
pubkey = EVP.PKey()
pubkey.assign_rsa(rsa)
# if you need a different digest than the default 'sha1':
pubkey.reset_context(md='sha1')
pubkey.verify_init()
pubkey.verify_update('test message')
assert pubkey.verify_final(signature) == 1
|
Can I print original variable's name in Python?
|
I have enum and use the variables like myEnum.SomeNameA, myEnum.SomeNameB, etc. When I return one of these variables from a function, can I print their names (such as myEnum.SomeNameA) instead of the value they returned?
|
Short answer: no.
Long answer: this is possible with some ugly hacks using traceback, inspect and the like, but it's generally probably not recommended for production code. For example see:
http://groups.google.com/group/comp.lang.python/msg/237dc92f3629dd9a?pli=1
http://aspn.activestate.com/ASPN/Mail/Message/python-Tutor/330294
Perhaps you can use a workaround to translate the value back to a name/representational string. If you post some more sample code and details about what you're wanting this for maybe we can provide more in-depth assistance.
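As a sketch of such a workaround, you can build a reverse-lookup table from the enum class's namespace (the MyEnum layout here is hypothetical):
class MyEnum(object):
    SomeNameA = 1
    SomeNameB = 2
_NAMES = dict((v, k) for k, v in vars(MyEnum).items() if not k.startswith('_'))
def name_of(value):
    return _NAMES.get(value, '<unknown>')
print name_of(MyEnum.SomeNameB)  # SomeNameB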
|
Best method for reading newline delimited files in Python and discarding the newlines?
|
I am trying to determine the best way to handle getting rid of newlines when reading in newline delimited files in Python.
What I've come up with is the following code, include throwaway code to test.
import os
def getfile(filename,results):
f = open(filename)
filecontents = f.readlines()
for line in filecontents:
foo = line.strip('\n')
results.append(foo)
return results
blahblah = []
getfile('/tmp/foo',blahblah)
for x in blahblah:
print x
Suggestions?
|
lines = open(filename).read().splitlines()
|
Linux: compute a single hash for a given folder & contents?
|
Surely there must be a way to do this easily!
I've tried the linux command-line apps sha1sum & md5sum but they seem only to be able to compute hashes of individual files and output a list of hash values, one for each file.
I need to generate a single hash for the entire contents of a folder (not just the filenames).
I'd like to do something like
sha1sum /folder/of/stuff > singlehashvalue
Edit: to clarify, my files are at multiple levels in a directory tree, they're not all sitting in the same root folder.
|
One possible way would be:
sha1sum path/to/folder/* | sha1sum
If there is a whole directory tree, you're probably better off using find and xargs. One possible command would be
find path/to/folder -type f -print0 | xargs -0 sha1sum | sha1sum
Edit: Good point, it's probably a good thing to sort the list of files, so:
find path/to/folder -type f -print0 | sort -z | xargs -0 sha1sum | sha1sum
And, finally, if you also need to take account of permissions and empty directories:
(find path/to/folder -type f -print0 | sort -z | xargs -0 sha1sum;
find path/to/folder \( -type f -o -type d \) -print0 | sort -z | \
xargs -0 stat -c '%n %a') \
| sha1sum
The arguments to stat will cause it to print the name of the file, followed by its octal permissions. The two finds will run one after the other, causing double the amount of disk IO, the first finding all file names and checksumming the contents, the second finding all file and directory names, printing name and mode. The list of "file names and checksums", followed by "names and directories, with permissions" will then be checksummed, for a smaller checksum.
|
How do I calculate the date six months from the current date using the datetime Python module?
|
I am using the datetime Python module. I am looking to calculate the date 6 months from the current date. Could someone give me a little help doing this?
The reason I want to generate a date 6 months from the current date is to produce a Review Date. If the user enters data into the system it will have a review date of 6 months from the date they entered the data.
|
I found this solution to be good. (This uses the python-dateutil extension)
from datetime import date
from dateutil.relativedelta import relativedelta
six_months = date.today() + relativedelta(months=+6)
The advantage of this approach is that it takes care of issues with 28, 30, 31 days etc. This becomes very useful in handling business rules and scenarios (say invoice generation etc.)
>>> date(2010,12,31)+relativedelta(months=+1)
datetime.date(2011, 1, 31)
>>> date(2010,12,31)+relativedelta(months=+2)
datetime.date(2011, 2, 28)
|
How Do I Perform Introspection on an Object in Python 2.x?
|
I'm using Python 2.x and I have an object I'm summoning from the aether; the documentation on it is not particularly clear. I would like to be able to get a list of properties for that object and the type of each property.
Similarly, I'd like to get a list of methods for that object, as well, plus any other information I could find on that method, such as number of arguments and their respective types.
I have a feeling that I am simply missing the correct jargon in my Google searches. Not that I want to derail with specifics, but it's Active Directory, so that's always fun.
|
Well ... Your first stop will be a simple dir(object). This will show you all the object's members, both fields and methods. Try it in an interactive Python shell, and play around a little.
For instance:
>>> class Foo:
...     def __init__(self):
...         self.a = "bar"
...         self.b = 4711
...
>>> a = Foo()
>>> dir(a)
['__doc__', '__init__', '__module__', 'a', 'b']
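When dir() is not enough, the standard inspect module can classify members for you; a sketch using the Foo instance above:
import inspect
members = inspect.getmembers(a)  # list of (name, value) pairs
methods = [n for n, v in members if inspect.ismethod(v)]
fields = [(n, type(v)) for n, v in members
          if not n.startswith('__') and not inspect.ismethod(v)]
print methods  # e.g. ['__init__']
print fields   # e.g. [('a', <type 'str'>), ('b', <type 'int'>)]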
|
Do OO design principles apply to Python?
|
It seems like many OO discussions use Java or C# as examples (e.g. Head First Design Patterns).
Do these patterns apply equally to Python? Or if I follow the design patterns, will I just end up writing Java in Python (which apparently is a very bad thing)?
|
The biggest differences are that Python is duck typed, meaning that you won't need to plan out class hierarchies in as much detail as in Java, and has first class functions. The strategy pattern, for example, becomes much simpler and more obvious when you can just pass a function in, rather than having to make interfaces, etc. just to simulate higher order functions. More generally, Python has syntactic sugar for a lot of common design patterns, such as the iterator and the aforementioned strategy. It might be useful to understand these patterns (I've read Head First and found it pretty useful), but think about Pythonic ways to implement them rather than just doing things the same way you would in Java.
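For instance, the strategy pattern can collapse to "pass a function in" (a purely illustrative sketch):
def shout(text):
    return text.upper() + '!'
def whisper(text):
    return text.lower() + '...'
def speak(text, strategy):
    # the strategy is any callable; no interface class is required
    print strategy(text)
speak('Hello', shout)    # HELLO!
speak('Hello', whisper)  # hello...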
|
How to dynamically load a Python class
|
Given a string of a Python class, e.g. my_package.my_module.MyClass, what is the best possible way to load it?
In other words I am looking for a equivalent Class.forName() in Java, function in Python. It needs to work on Google App Engine.
Preferably this would be a function that accepts the FQN of the class as a string, and returns a reference to the class:
my_class = load_class('my_package.my_module.MyClass')
my_instance = my_class()
|
From the python documentation, here's the function you want:
def my_import(name):
components = name.split('.')
mod = __import__(components[0])
for comp in components[1:]:
mod = getattr(mod, comp)
return mod
The reason a simple __import__ won't work is because any import of anything past the first dot in a package string is an attribute of the module you're importing. Thus, something like this won't work:
__import__('foo.bar.baz.qux')
You'd have to call the above function like so:
my_import('foo.bar.baz.qux')
Or in the case of your example:
klass = my_import('my_package.my_module.my_class')
some_object = klass()
EDIT: I was a bit off on this. What you're basically wanting to do is this:
from my_package.my_module import my_class
The above function is only necessary if you have an empty fromlist. Thus, the appropriate call would be like this:
mod = __import__('my_package.my_module', fromlist=['my_class'])
klass = getattr(mod, 'my_class')
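On Python 2.7 or 3.1 and later, importlib offers a slightly cleaner spelling of the same two steps; a sketch:
import importlib
def load_class(path):
    module_name, class_name = path.rsplit('.', 1)
    return getattr(importlib.import_module(module_name), class_name)
my_class = load_class('my_package.my_module.MyClass')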
|
Skipping Iterations in Python
|
I have a loop going, but there is the possibility for exceptions to be raised inside the loop. This of course would stop my program all together. To prevent that I catch the exceptions and handle them. But then the rest of the iteration runs even though an exception occurred. Is there a keyword to use in my except: clause to just skip the rest of the current iteration?
|
You are looking for continue.
|
Removing the TK icon on a Tkinter window
|
Does anybody know how to make the icon not show up? I'm looking for a way to have no icon at all.
|
On Windows
Step One:
Create a transparent icon using either an icon editor, or a site like rw-designer. Save it as transparent.ico.
Step Two:
from tkinter import *
tk = Tk()
tk.iconbitmap(default='transparent.ico')
lab = Label(tk, text='Window with transparent icon.')
lab.pack()
tk.mainloop()
On Unix
Something similar, but using an xbm icon.
|
overload print python
|
Am I able to overload the print function and call the normal function from it? What I want to do is, after a specific line, have print call my print, which will call the normal print and also write a copy to a file.
Also, I don't know how to overload print, and I don't know how to do variable-length arguments. I'll look it up soon, but http://stackoverflow.com/questions/550470/overload-print-python/550477#550477 just told me I can't overload print in 2.x, which is what I am using.
|
For those reviewing the previously dated answers, as of version release "Python 2.6" there is a new answer to the original poster's question.
In Python 2.6 and up, you can disable the print statement in favor of the print function, and then override the print function with your own print function:
from __future__ import print_function
# This must be the first statement before other statements.
# You may only put a quoted or triple quoted string,
# Python comments, other future statements, or blank lines before the __future__ line.
try:
import __builtin__
except ImportError:
# Python 3
import builtins as __builtin__
def print(*args, **kwargs):
"""My custom print() function."""
# Adding new arguments to the print function signature
# is probably a bad idea.
# Instead consider testing if custom argument keywords
# are present in kwargs
__builtin__.print('My overridden print() function!')
return __builtin__.print(*args, **kwargs)
Of course you'll need to consider that this print function is only module wide at this point. You could choose to override __builtin__.print, but you'll need to save the original __builtin__.print; likely mucking with the __builtin__ namespace.
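With the module above imported, every print call then produces both lines (illustrative):
print('Hello')
# My overridden print() function!
# Hello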
|
Favorite Django Tips & Features?
|
Inspired by the question series 'Hidden features of ...', I am curious to hear about your favorite Django tips or lesser known but useful features you know of.
Please, include only one tip per answer.
Add Django version requirements if there are any.
|
I'm just going to start with a tip from myself :)
Use os.path.dirname() in settings.py to avoid hardcoded dirnames.
Don't hardcode path's in your settings.py if you want to run your project in different locations. Use the following code in settings.py if your templates and static files are located within the Django project directory:
# settings.py
import os
PROJECT_DIR = os.path.dirname(__file__)
...
STATIC_DOC_ROOT = os.path.join(PROJECT_DIR, "static")
...
TEMPLATE_DIRS = (
os.path.join(PROJECT_DIR, "templates"),
)
Credits: I got this tip from the screencast 'Django From the Ground Up'.
|
Cross-platform way to get PIDs by process name in python
|
Several processes with the same name are running on host. What is the cross-platform way to get PIDs of those processes by name using python or jython?
I want something like pidof but in python. (I don't have pidof anyway.)
I can't parse /proc because it might be unavailable (on HP-UX).
I do not want to run os.popen('ps') and parse the output because I think it is ugly (field sequence may be different in different OS).
Target platforms are Solaris, HP-UX, and maybe others.
|
You can use psutil ( https://github.com/giampaolo/psutil ), which works on Windows and UNIX:
import psutil
PROCNAME = "python.exe"
for proc in psutil.process_iter():
if proc.name() == PROCNAME:
print(proc)
The code above prints the following on my machine:
psutil.Process (PID:2908; PPID:3944; NAME:'python.exe'; PATH:'C:\Python26'; CMDL
INE:['C:\\Python26\\python.exe', 'D:\\pyftpdlib\\svn\\psutil\\foo.py']; UID:-1;
GID:-1;)
|
"Private" (implementation) class in Python
|
I am coding a small Python module composed of two parts:
some functions defining a public interface,
an implementation class used by the above functions, but which is not meaningful outside the module.
At first, I decided to "hide" this implementation class by defining it inside the function using it, but this hampers readability and cannot be used if multiple functions reuse the same class.
So, in addition to comments and docstrings, is there a mechanism to mark a class as "private" or "internal"? I am aware of the underscore mechanism, but as I understand it it only applies to variables, function and methods name.
|
Use a single underscore prefix:
class _Internal:
...
This is the official Python convention for 'internal' symbols; "from module import *" does not import underscore-prefixed objects.
|
Deploying application with Python or another embedded scripting language
|
I'm thinking about using Python as an embedded scripting language in a hobby project written in C++. I would not like to depend on separately installed Python distribution. Python documentation seems to be quite clear about general usage, but I couldn't find a clear answer to this.
Is it feasible to deploy a Python interpreter + standard library with my application? Would some other language like Lua, Javascript (Spidermonkey), Ruby, etc. be better for this use?
Here's the criteria I'm weighing the different languages against:
No/Few dependencies on externally installed packages
Standard library with good feature set
Nice language :)
Doesn't result in a huge install package
edit:
I guess the question should be:
How do I deploy my own python library + standard library with the installer of my program, so that it doesn't matter whether the platform already has python installed or not?
edit2:
One more clarification. I don't need info about specifics of linking C and Python code.
|
Link your application to the python library (pythonXX.lib on Windows) and add the following to your main() function.
Py_NoSiteFlag = 1; // Disable importing site.py
Py_Initialize(); // Create a python interpreter
Put the python standard library bits you need into a zip file (called pythonXX.zip) and place this and pythonXX.dll beside the executable you distribute. Have a look at PyZipFile in the zipfile module.
|
Getting "global name 'foo' is not defined" with Python's timeit
|
I'm trying to find out how much time it takes to execute a Python statement, so I looked online and found that the standard library provides a module called timeit that purports to do exactly that:
import timeit
def foo():
# ... contains code I want to time ...
def dotime():
t = timeit.Timer("foo()")
time = t.timeit(1)
print "took %fs\n" % (time,)
dotime()
However, this produces an error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in dotime
File "/usr/local/lib/python2.6/timeit.py", line 193, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
NameError: global name 'foo' is not defined
I'm still new to Python and I don't fully understand all the scoping issues it has, but I don't know why this snippet doesn't work. Any thoughts?
|
Have you tried changing this line:
t = timeit.Timer("foo()")
This:
t = timeit.Timer("foo()", "from __main__ import foo")
Check out the link you provided, at the very bottom:
"To give the timeit module access to functions you define, you can pass a setup parameter which contains an import statement."
I just tested it on my machine and it worked with the changes.
|
"Interfaces" in Python: Yea or Nay?
|
So I'm starting a project using Python after spending a significant amount of time in static land. I've seen some projects that make "interfaces" which are really just classes without any implementations. Before, I'd scoff at the idea and ignore that section of those projects. But now, I'm beginning to warm up to the idea.
Just so we're clear, an interface in Python would look something like this:
class ISomething(object):
def some_method():
pass
def some_other_method(some_argument):
pass
Notice that you aren't passing self to any of the methods, thus requiring that the method be overridden to be called. I see this as a nice form of documentation and completeness testing.
So what is everyone here's opinion on the idea? Have I been brainwashed by all the C# programming I've done, or is this a good idea?
|
I'm not sure what the point of that is. Interfaces (of this form, anyway) are largely to work around the lack of multiple inheritance. But Python has MI, so why not just make an abstract class?
class Something(object):
def some_method(self):
raise NotImplementedError()
def some_other_method(self, some_argument):
raise NotImplementedError()
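If you are on Python 2.6 or newer, the standard abc module formalizes the same idea and refuses instantiation until the abstract methods are overridden; a minimal sketch:
import abc
class Something(object):
    __metaclass__ = abc.ABCMeta
    @abc.abstractmethod
    def some_method(self):
        pass
    @abc.abstractmethod
    def some_other_method(self, some_argument):
        pass
# Something() now raises TypeError; concrete subclasses must override both methods.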
|
How do I profile memory usage in Python?
|
I've recently become interested in algorithms and have begun exploring them by writing a naive implementation and then optimizing it in various ways.
I'm already familiar with the standard Python module for profiling runtime (for most things I've found the timeit magic function in IPython to be sufficient), but I'm also interested in memory usage so I can explore those tradeoffs as well (e.g. the cost of caching a table of previously computed values versus recomputing them as needed). Is there a module that will profile the memory usage of a given function for me?
|
This one has been answered already here: Python memory profiler
Basically you do something like that (cited from Guppy-PE):
>>> from guppy import hpy; h=hpy()
>>> h.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 25773 53 1612820 49 1612820 49 str
1 11699 24 483960 15 2096780 64 tuple
2 174 0 241584 7 2338364 72 dict of module
3 3478 7 222592 7 2560956 78 types.CodeType
4 3296 7 184576 6 2745532 84 function
5 401 1 175112 5 2920644 89 dict of class
6 108 0 81888 3 3002532 92 dict (no owner)
7 114 0 79632 2 3082164 94 dict of type
8 117 0 51336 2 3133500 96 type
9 667 1 24012 1 3157512 97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> h.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1 33 136 77 136 77 dict (no owner)
1 1 33 28 16 164 93 list
2 1 33 12 7 176 100 int
>>> x=[]
>>> h.iso(x).sp
0: h.Root.i0_modules['__main__'].__dict__['x']
>>>
|
Generate a random date between two other dates
|
How would I generate a random date that has to be between two other given dates?
The functions signature should something like this-
randomDate("1/1/2008 1:30 PM", "1/1/2009 4:50 AM", 0.34)
^ ^ ^
date generated has date generated has random number
to be after this to be before this
and would return a date such as-
"2/4/2008 7:20 PM"
|
Convert both strings to timestamps (in your chosen resolution, e.g. milliseconds, seconds, hours, days, whatever), subtract the earlier from the later, multiply your random number (assuming it is distributed in the range [0, 1]) by that difference, and add the result to the earlier one. Convert the timestamp back to a date string and you have a random time in that range.
Python example (output is almost in the format you specified, other than 0 padding - blame the American time format conventions):
import random
import time
def strTimeProp(start, end, format, prop):
"""Get a time at a proportion of a range of two formatted times.
start and end should be strings specifying times formatted in the
given format (strftime-style), giving an interval [start, end].
prop specifies what proportion of the interval should be taken after
start. The returned time will be in the specified format.
"""
stime = time.mktime(time.strptime(start, format))
etime = time.mktime(time.strptime(end, format))
ptime = stime + prop * (etime - stime)
return time.strftime(format, time.localtime(ptime))
def randomDate(start, end, prop):
return strTimeProp(start, end, '%m/%d/%Y %I:%M %p', prop)
print randomDate("1/1/2008 1:30 PM", "1/1/2009 4:50 AM", random.random())
|
Can you use a string to instantiate a class in python?
|
I'm using a builder pattern to separate a bunch of different configuration possibilities. Basically, I have a bunch of classes that are named after an ID (something like ID12345). These all inherit from the base builder class. In my script, I need to instantiate an instance for each class (about 50) every time this app runs. So, I'm trying to see if instead of doing something like this:
ProcessDirector = ProcessDirector()
ID12345 = ID12345()
ID01234 = ID01234()
ProcessDirector.construct(ID12345)
ProcessDirector.construct(ID01234)
ID12345.run()
ID01234.run()
Can I do something like this (I know this doesn't work):
IDS = ["ID12345", "ID01234"]
ProcessDirector = ProcessDirector()
for id in IDS:
builder = id() #some how instantiate class from string
ProcessDirector.construct(builder)
builder.run()
That way, when I need to add a new one in the future, all I have to do is add the id to the IDS list, rather than peppering the new ID throughout the code.
EDIT
Looks like there are some different opinions based on where the data is coming from. These IDs are entered in a file that no one else has access to. I'm not reading the strings from the command line, and I'd like to be able to do as little alteration when adding a new ID in the future.
|
If you wanted to avoid an eval(), you could just do:
id = "1234asdf"
constructor = globals()[id]
instance = constructor()
Provided that the class is defined in (or imported into) your current scope.
|
How do I prevent Python's urllib(2) from following a redirect
|
I am currently trying to log into a site using Python however the site seems to be sending a cookie and a redirect statement on the same page. Python seems to be following that redirect thus preventing me from reading the cookie send by the login page. How do I prevent Python's urllib (or urllib2) urlopen from following the redirect?
|
You could do a couple of things:
Build your own HTTPRedirectHandler that intercepts each redirect
Create an instance of HTTPCookieProcessor and install that opener so that you have access to the cookiejar.
This is a quick little thing that shows both
import urllib2
#redirect_handler = urllib2.HTTPRedirectHandler()
class MyHTTPRedirectHandler(urllib2.HTTPRedirectHandler):
def http_error_302(self, req, fp, code, msg, headers):
print "Cookie Manip Right Here"
return urllib2.HTTPRedirectHandler.http_error_302(self, req, fp, code, msg, headers)
http_error_301 = http_error_303 = http_error_307 = http_error_302
cookieprocessor = urllib2.HTTPCookieProcessor()
opener = urllib2.build_opener(MyHTTPRedirectHandler, cookieprocessor)
urllib2.install_opener(opener)
response = urllib2.urlopen("WHEREEVER")
print response.read()
print cookieprocessor.cookiejar
|
Character Translation using Python (like the tr command)
|
Is there a way to do character translation (kind of like the tr command) using python
|
See string.translate
import string
"abc".translate(string.maketrans("abc", "def")) # => "def"
Note the doc's comments about subtleties in the translation of unicode strings.
Edit: Since tr is a bit more advanced, also consider using re.sub.
|
ImportError: No module named copy_reg pickle
|
I'm trying to unpickle an object stored as a blob in a MySQL database. I've manually generated and stored the pickled object in the database, but when I try to unpickle the object, I get the following rather cryptic exception:
ImportError: No module named copy_reg
Any ideas as to why this happens?
Method of Reproduction
Note: Must do step 1 on a Windows PC and steps 3 and 4 on a Linux PC.
1) On a Windows PC:
file = open("test.txt", "w")
thing = {'a': 1, 'b':2}
cPickle.dump(thing, file)
2) Manually insert contents of text.txt into blob field of MySQL database running on linux
3) In Python running on a linux machine, fetch the contents of column from MySQL
4) Assuming that you put the contents of the blob column into a variable called data, try this:
cPickle.loads(data)
|
It seems this might be caused by my method of exporting the pickled object.
This bug report seens to suggest that my issue can be resolved by exporting to a file writen in binary mode. I'm going to give this a go now and see if this solves my issue.
UPDATE: This works. The solution is to make sure you export your pickled object to a file opened in binary mode, even if you are using the default protocol 0 (commonly referred to as being "text").
Correct code based on the original example in the question:
file = open("test.txt", 'wb')
thing = {'a': 1, 'b':2}
cPickle.dump(thing, file)
|
Python list serialization - fastest method
|
I need to load (de-serialize) a pre-computed list of integers from a file in a Python script (into a Python list). The list is large (upto millions of items), and I can choose the format I store it in, as long as loading is fastest.
Which is the fastest method, and why?
Using import on a .py file that just contains the list assigned to a variable
Using cPickle's load
Some other method (perhaps numpy?)
Also, how can one benchmark such things reliably?
Addendum: measuring this reliably is difficult, because import is cached, so it can't be executed multiple times in a test. The loading with pickle also gets faster after the first time, probably because of page caching by the OS. Loading 1 million numbers with cPickle takes 1.1 sec the first time it is run, and 0.2 sec on subsequent executions of the script.
Intuitively I feel cPickle should be faster, but I'd appreciate numbers (this is quite a challenge to measure, I think).
And yes, it's important for me that this performs quickly.
Thanks
|
I would guess cPickle will be fastest if you really need the thing in a list.
If you can use an array, which is a built-in sequence type, I timed this at a quarter of a second for 1 million integers:
from array import array
from datetime import datetime
def WriteInts(theArray,filename):
f = file(filename,"wb")
theArray.tofile(f)
f.close()
def ReadInts(filename):
d = datetime.utcnow()
theArray = array('i')
f = file(filename,"rb")
try:
theArray.fromfile(f,1000000000)
except EOFError:
pass
print "Read %d ints in %s" % (len(theArray),datetime.utcnow() - d)
return theArray
if __name__ == "__main__":
a = array('i')
a.extend(range(0,1000000))
filename = "a_million_ints.dat"
WriteInts(a,filename)
r = ReadInts(filename)
print "The 5th element is %d" % (r[4])
|
How to re-use a reusable app in Django
|
Hello Django experts
I am trying to create my first site in Django and as I'm looking for example apps out there to draw inspiration from, I constantly stumble upon a term called "reusable apps".
I understand the concept of an app that is reusable easy enough, but the means of reusing an app in Django are quite lost for me. Few questions that are bugging me in the whole business are:
What is the preferred way to re-use an existing Django app? Where do I put it and how do I reference it?
From what I understand, the recommendation is to put it on your "PYTHONPATH", but that breaks as soon as I need to deploy my app to a remote location that I have limited access to (e.g. on a hosting service).
So, if I develop my site on my local computer and intend to deploy it on an ISP where I only have ftp access, how do I re-use 3rd party Django apps so that if I deploy my site, the site keeps working (e.g. the only thing I can count on is that the service provider has Python 2.5 and Django 1.x installed)?
How do I organize my Django project so that I could easily deploy it along with all of the reusable apps I want to use?
|
In general, the only thing required to use a reusable app is to make sure it's on sys.path, so that you can import it from Python code. In most cases (if the author follows best practice), the reusable app tarball or bundle will contain a top-level directory with docs, a README, a setup.py, and then a subdirectory containing the actual app (see django-voting for an example; the app itself is in the "voting" subdirectory). This subdirectory is what needs to be placed in your Python path. Possible methods for doing that include:
running pip install appname, if the app has been uploaded to PyPI (these days most are)
installing the app with setup.py install (this has the same result as pip install appname, but requires that you first download and unpack the code yourself; pip will do that for you)
manually symlinking the code directory to your Python site-packages directory
using software like virtualenv to create a "virtual Python environment" that has its own site-packages directory, and then running setup.py install or pip install appname with that virtualenv active, or placing or symlinking the app in the virtualenv's site-packages (highly recommended over all the "global installation" options, if you value your future sanity)
placing the application in some directory where you intend to place various apps, and then adding that directory to the PYTHONPATH environment variable
You'll know you've got it in the right place if you can fire up a Python interpreter and "import voting" (for example) without getting an ImportError.
On a server where you have FTP access only, your only option is really the last one, and they have to set it up for you. If they claim to support Django they must provide some place where you can upload packages and they will be available for importing in Python. Without knowing details of your webhost, it's impossible to say how they structure that for you.
|
How do I use easy_install and buildout when pypi is down?
|
I am using buildout to automatically download and setup the many dependencies of my Plone installation. buildout more or less uses easy_install to download and install a bunch of Python eggs. This usually works, but it doesn't work if any of the dependencies cannot be downloaded or if I want buildout to find an internal package not appropriate for pypi. How can I set up my own local version of pypi to automatically mirror the packages I need? Will it still depend on third-party servers for packages that use pypi for their metadata but not their code?
|
Here are instructions on how to set up your own PyPI mirror. The homepage of this project is here. There also seems to be a growing number of mirrors out there.
For instructions on how to set up your own package index, check out this blog post, where one solution is explained at the end. Then you can also host your own internal packages in there. The advantage is also that the versions are fixed that way. (For a way to pin the versions directly in buildout, check out this post.)
If there is only metadata on PyPI and the archive is stored somewhere else, you might of course copy that over to your index as well. If you just use a PyPI mirror, I assume that you still need access to those servers.
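Once your index or mirror is up, pointing buildout at it is a one-line change in buildout.cfg; a sketch, with a placeholder URL:
[buildout]
index = http://packages.example.internal/simple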
|
Function to determine if two numbers are nearly equal when rounded to n significant decimal digits
|
I have been asked to test a library provided by a 3rd party. The library is known to be accurate to n significant figures. Any less-significant errors can safely be ignored. I want to write a function to help me compare the results:
def nearlyequal( a, b, sigfig=5 ):
The purpose of this function is to determine if two floating-point numbers (a and b) are approximately equal. The function will return True if a==b (exact match) or if a and b have the same value when rounded to sigfig significant-figures when written in decimal.
Can anybody suggest a good implementation? I've written a mini unit-test. Unless you can see a bug in my tests then a good implementation should pass the following:
assert nearlyequal(1, 1, 5)
assert nearlyequal(1.0, 1.0, 5)
assert nearlyequal(1.0, 1.0, 5)
assert nearlyequal(-1e-9, 1e-9, 5)
assert nearlyequal(1e9, 1e9 + 1 , 5)
assert not nearlyequal( 1e4, 1e4 + 1, 5)
assert nearlyequal( 0.0, 1e-15, 5 )
assert not nearlyequal( 0.0, 1e-4, 6 )
Additional notes:
Values a and b might be of type int, float or numpy.float64. Values a and b will always be of the same type. It's vital that conversion does not introduce additional error into the function.
Let's keep this numerical, so functions that convert to strings or use non-mathematical tricks are not ideal. This program will be audited by a mathematician who will want to be able to prove that the function does what it is supposed to do.
Speed... I've got to compare a lot of numbers so the faster the better.
I've got numpy, scipy and the standard-library. Anything else will be hard for me to get, especially for such a small part of the project.
|
There is a function assert_approx_equal in numpy.testing (source here) which may be a good starting point.
def assert_approx_equal(actual,desired,significant=7,err_msg='',verbose=True):
"""
Raise an assertion if two items are not equal up to significant digits.
.. note:: It is recommended to use one of `assert_allclose`,
`assert_array_almost_equal_nulp` or `assert_array_max_ulp`
instead of this function for more consistent floating point
comparisons.
Given two numbers, check that they are approximately equal.
Approximately equal is defined as the number of significant digits
that agree.
    """
    # ... rest of the function omitted; see the numpy source linked above
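For reference, a minimal sketch in the same spirit that passes the tests above; it combines a relative (significant-figures) test with an absolute floor near zero. The 1e-8 floor is an assumption tuned to those tests, not part of numpy's implementation:
def nearlyequal(a, b, sigfig=5):
    # relative tolerance for agreement to sigfig significant figures,
    # plus an assumed absolute floor so values near zero compare equal
    tolerance = max(0.5 * 10 ** (1 - sigfig) * max(abs(a), abs(b)), 1e-8)
    return abs(a - b) <= tolerance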
|
Bayesian spam filtering library for Python
|
I am looking for a Python library which does Bayesian Spam Filtering. I looked at SpamBayes and OpenBayes, but both seem to be unmaintained (I might be wrong).
Can anyone suggest a good Python (or Clojure, Common Lisp, even Ruby) library which implements Bayesian Spam Filtering?
Thanks in advance.
Clarification: I am actually looking for a Bayesian Spam Classifier and not necessarily a spam filter. I just want to train it using some data and later tell me whether some given data is spam. Sorry for any confusion.
|
Try Reverend. It's a spam filtering module.
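A rough sketch of what using it looks like; the details are from memory of the Reverend API, so treat them as assumptions and check its docs:
from reverend.thomas import Bayes

guesser = Bayes()
guesser.train('spam', 'buy cheap pills now')            # label, sample text
guesser.train('ham', 'meeting agenda attached below')
print guesser.guess('cheap pills here')                 # list of (label, score) pairs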
|
How to display data using openlayers with OpenStreetMap in geodjango?
|
I've got geodjango running using openlayers and OpenStreetMaps with the admin app.
Now I want to write some views to display the data. Basically, I just want to add a list of points (seen in the admin) to the map.
Geodjango appears to use a special openlayers.js file to do its magic in the admin. Is there a good way to interface with this?
How can I write a view/template to display the geodjango data on a open street map window, as is seen in the admin?
At the moment, I'm digging into the openlayers.js file and api looking for an 'easy' solution. (I don't have js experience so this is taking some time.)
The current way I can see to do this is add the following as a template, and use django to add the code needed to display the points. (Based on the example here)
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Draw Feature Example</title>
<script src="http://www.openlayers.org/api/OpenLayers.js"></script>
<script type="text/javascript">
var map;
function init(){
map = new OpenLayers.Map('map');
var layer = new OpenLayers.Layer.WMS( "OpenLayers WMS",
"http://labs.metacarta.com/wms/vmap0", {layers: 'basic'} );
map.addLayer(layer);
/*
* Layer style
*/
// we want opaque external graphics and non-opaque internal graphics
var layer_style = OpenLayers.Util.extend({}, OpenLayers.Feature.Vector.style['default']);
layer_style.fillOpacity = 0.2;
layer_style.graphicOpacity = 1;
/*
* Blue style
*/
var style_blue = OpenLayers.Util.extend({}, layer_style);
style_blue.strokeColor = "blue";
style_blue.fillColor = "blue";
style_blue.graphicName = "star";
style_blue.pointRadius = 10;
style_blue.strokeWidth = 3;
style_blue.rotation = 45;
style_blue.strokeLinecap = "butt";
var vectorLayer = new OpenLayers.Layer.Vector("Simple Geometry", {style: layer_style});
// create a point feature
var point = new OpenLayers.Geometry.Point(-111.04, 45.68);
var pointFeature = new OpenLayers.Feature.Vector(point,null,style_blue);
// Add additional points/features here via django
map.addLayer(vectorLayer);
map.setCenter(new OpenLayers.LonLat(point.x, point.y), 5);
vectorLayer.addFeatures([pointFeature]);
}
</script>
</head>
<body onload="init()">
<div id="map" class="smallmap"></div>
</body>
</html>
Is this how it's done, or is there a better way?
|
Another solution is to create a form that utilizes the GeoDjango Admin widget.
To do this, I:
Setup a GeneratePolygonAdminClass:
class GeneratePolygonAdmin(admin.GeoModelAdmin):
list_filter=('polygon',)
list_display=('object', 'polygon')
Where the form is built:
geoAdmin=GeneratePolygonAdmin(ModelWithPolygonField, admin.site)
PolygonFormField=ModelWithPolygonField._meta.get_field('Polygon')
PolygonWidget=geoAdmin.get_map_widget(PolygonFormField)
Dict['Polygon']=forms.CharField(widget=PolygonWidget()) #In this case, I am creating a Dict to use for a dynamic form
Populating the widget of the form:
def SetupPolygonWidget(form, LayerName, MapFileName, DefaultPolygon=''):
form.setData({'Polygon':DefaultPolygon})
form.fields['Polygon'].widget.params['wms_layer']=LayerName
form.fields['Polygon'].widget.params['wms_url']='/cgi-bin/mapserv?MAP=' + MapFileName
form.fields['Polygon'].widget.params['default_lon']=-80.9
form.fields['Polygon'].widget.params['default_lat']=33.7
form.fields['Polygon'].widget.params['default_zoom']=11
form.fields['Polygon'].widget.params['wms_name']=YOURWMSLayerName
form.fields['Polygon'].widget.params['map_width']=800
form.fields['Polygon'].widget.params['map_height']=600
form.fields['Polygon'].widget.params['map_srid']=YOUR_SRID
form.fields['Polygon'].widget.params['modifiable']=True
form.fields['Polygon'].widget.params['map_options']={}
form.fields['Polygon'].widget.params['map_options']['buffer'] = 0
return form
Based on the code at:
http://code.djangoproject.com/browser/django/branches/gis/django/contrib/gis/admin/options.py?rev=7980
It looks like you can use the extra_js option to include OpenStreetMap (I have not tested this).
|
How do I validate the MX record for a domain in python?
|
I have a large number of email addresses to validate. Initially I parse them with a regexp to throw out the completely crazy ones. I'm left with the ones that look sensible but still might contain errors.
I want to find which addresses have valid domains, so given me@abcxyz.com I want to know if it's even possible to send emails to abcxyz.com .
I want to test that to see if it corresponds to a valid A or MX record - is there an easy way to do it using only Python standard library? I'd rather not add an additional dependency to my project just to support this feature.
|
There is no DNS interface in the standard library so you will either have to roll your own or use a third party library.
This is not a fast-changing concept though, so the external libraries are stable and well tested.
The one I've used successfully for the same task as your question is PyDNS.
A very rough sketch of my code is something like this:
import DNS, smtplib

DNS.DiscoverNameServers()
hostname = 'abcxyz.com'   # the domain part of the address being checked
mx_hosts = DNS.mxlookup(hostname)
# Just doing the mxlookup might be enough for you,
# but do something like this to test for an SMTP server
for mx in mx_hosts:
    smtp = smtplib.SMTP()
    # ...if this doesn't raise an exception it is a valid MX host...
    try:
        smtp.connect(mx[1])
    except smtplib.SMTPConnectError:
        continue  # try the next MX server in the list
Another library that might be better/faster than PyDNS is dnsmodule, although it looks like it hasn't had any activity since 2002, compared to PyDNS's last update in August 2008.
Edit: I would also like to point out that email addresses can't be easily parsed with a regexp. You are better off using the parseaddr() function in the standard library email.utils module (see my answer to this question for example).
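For example, parseaddr() gives you a clean address to take the domain from before doing the MX lookup; a quick sketch:
from email.utils import parseaddr

realname, addr = parseaddr('Me <me@abcxyz.com>')
hostname = addr.partition('@')[2]   # -> 'abcxyz.com'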
|
How to convert an integer to the shortest url-safe string in Python?
|
I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python.
The base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.
|
This answer is similar in spirit to Douglas Leeder's, with the following changes:
It doesn't use actual Base64, so there's no padding characters
Instead of converting the number first to a byte-string (base 256), it converts it directly to base 64, which has the advantage of letting you represent negative numbers using a sign character.
import string
ALPHABET = string.ascii_uppercase + string.ascii_lowercase + \
string.digits + '-_'
ALPHABET_REVERSE = dict((c, i) for (i, c) in enumerate(ALPHABET))
BASE = len(ALPHABET)
SIGN_CHARACTER = '$'
def num_encode(n):
if n < 0:
return SIGN_CHARACTER + num_encode(-n)
s = []
while True:
n, r = divmod(n, BASE)
s.append(ALPHABET[r])
if n == 0: break
return ''.join(reversed(s))
def num_decode(s):
if s[0] == SIGN_CHARACTER:
return -num_decode(s[1:])
n = 0
for c in s:
n = n * BASE + ALPHABET_REVERSE[c]
return n
>>> num_encode(0)
'A'
>>> num_encode(64)
'BA'
>>> num_encode(-(64**5-1))
'$_____'
A few side notes:
You could (marginally) increase the human-readability of the base-64 numbers by putting string.digits first in the alphabet (and making the sign character '-'); I chose the order that I did based on Python's urlsafe_b64encode.
If you're encoding a lot of negative numbers, you could increase the efficiency by using a sign bit or one's/two's complement instead of a sign character.
You should be able to easily adapt this code to different bases by changing the alphabet, either to restrict it to only alphanumeric characters or to add additional "URL-safe" characters.
I would recommend against using a representation other than base 10 in URIs in most cases; it adds complexity and makes debugging harder without significant savings compared to the overhead of HTTP, unless you're going for something TinyURL-esque.
|
Python Imaging Library save function syntax
|
Simple one I think but essentially I need to know what the syntax is for the save function on the PIL. The help is really vague and I can't find anything online. Any help'd be great, thanks :).
|
From the PIL Handbook:
im.save(outfile, options...)
im.save(outfile, format, options...)
Simplest case:
im.save('my_image.png')
or whatever. In this case, the type of the image will be determined from the extension. Is there a particular problem you're having? Or specific saving option that you'd like to use but aren't sure how to do so?
You may be able to find additional information in the documentation on each filetype. The PIL Handbook appendices list the different file types that are supported. In some cases, options are given for save. For example, on the JPEG file format page, we're told that save supports
quality
optimize, and
progressive
with notes about each option.
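For instance, a couple of those JPEG options in use (a sketch; assumes a JPEG-capable PIL install and an existing photo.png):
import Image  # PIL

im = Image.open('photo.png')
im.save('photo.jpg', quality=85, optimize=True)  # format inferred from extension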
|
What's Python good practice for importing and offering optional features?
|
I'm writing a piece of software over on github. It's basically a tray icon with some extra features. I want to provide a working piece of code without making the user install what are essentially dependencies for optional features, and I don't want to import things I'm not going to use, so I thought code like this would be a good solution:
---- IN LOADING FUNCTION ----
features = []
for path in sys.path:
if os.path.exists(os.path.join(path, 'pynotify')):
features.append('pynotify')
if os.path.exists(os.path.join(path, 'gnomekeyring.so')):
features.append('gnome-keyring')
#user dialog to ask for stuff
#notifications available, do you want them enabled?
dlg = ConfigDialog(features)
if not dlg.get_notifications():
features.remove('pynotify')
service_start(features ...)
---- SOMEWHERE ELSE ------
def service_start(features, other_config):
if 'pynotify' in features:
import pynotify
#use pynotify...
There are some issues however. If a user formats his machine and installs the newest version of his OS and redeploys this application, features suddenly disappear without warning. The solution is to present this on the configuration window:
if 'pynotify' in features:
#gtk checkbox
else:
#gtk label reading "Get pynotify and enjoy notification pop ups!"
But if this is say, a mac, how do I know I'm not sending the user on a wild goose chase looking for a dependency they can never fill?
The second problem is the:
if os.path.exists(os.path.join(path, 'gnomekeyring.so')):
issue. Can I be sure that the file is always called gnomekeyring.so across all the linux distros?
How do other people test these features? The problem with the basic
try:
import pynotify
except:
pynotify = disabled
is that the code is global, these might be littered around and even if the user doesn't want pynotify....it's loaded anyway.
So what do people think is the best way to solve this problem?
|
The try: method does not need to be global; it can be used in any scope, and so modules can be "lazy-loaded" at runtime. For example:
def foo():
    try:
        import external_module
    except ImportError:
        external_module = None  # keep the name defined so the check below works
    if external_module:
        external_module.some_whizzy_feature()
    else:
        print "You could be using a whizzy feature right now, if you had external_module."
When your script is run, no attempt will be made to load external_module. The first time foo() is called, external_module is (if available) loaded and inserted into the function's local scope. Subsequent calls to foo() reinsert external_module into its scope without needing to reload the module.
In general, it's best to let Python handle import logic; it's been doing it for a while. :-)
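If you want to build the feature list without importing anything up front (closer to your os.path.exists probing, but portable), here's a sketch using the standard imp module:
import imp

def module_exists(name):
    # probes sys.path for a top-level module without executing it
    try:
        imp.find_module(name)
        return True
    except ImportError:
        return False

features = [m for m in ('pynotify', 'gnomekeyring') if module_exists(m)]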
|
How to stop Tkinter Frame from shrinking to fit its contents?
|
This is the code that's giving me trouble.
f = Frame(root, width=1000, bg="blue")
f.pack(fill=X, expand=True)
l = Label(f, text="hi", width=10, bg="red", fg="white")
l.pack()
If I comment out the lines with the Label, the Frame displays with the right width. However, adding the Label seems to shrink the Frame down to the Label's size. Is there a way to prevent that from happening?
|
By default, tk frames shrink or grow to fit their contents, which is what you want 99% of the time. The term that describes this feature is "geometry propagation". There is a command to turn geometry propagation on or off.
Since you are using pack, the syntax would be:
f.pack_propagate(0)
or maybe root.pack_propagate(0), depending on which widget(s) you actually want to affect.
That being said, the vast majority of the time you should let tkinter compute the size. When you turn geometry propagation off your GUI won't respond well to changes in resolution, changes in fonts, etc. tkinter's geometry managers (pack, place and grid) are remarkably powerful. Learn to take advantage of that power.
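For completeness, here's the question's code with propagation turned off; note the explicit height, an addition of mine, since with propagation off the frame no longer sizes itself to the label:
f = Frame(root, width=1000, height=200, bg="blue")
f.pack_propagate(0)  # don't let children dictate the frame's size
f.pack(fill=X, expand=True)
l = Label(f, text="hi", width=10, bg="red", fg="white")
l.pack()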
|
How can I check the memory usage of objects in iPython?
|
I am using IPython to run my code. I wonder if there is any module or command which would allow me to check the memory usage of an object. For instance:
In [1]: a = range(10000)
In [2]: %memusage a
Out[2]: 1MB
Something like %memusage <object> and return the memory used by the object.
Duplicate
Find out how much memory is being used by an object in Python
|
Unfortunately this is not possible, but there are a number of ways of approximating the answer:
for very simple objects (e.g. ints, strings, floats, doubles) which are represented more or less as simple C-language types you can simply calculate the number of bytes as with John Mulder's solution.
For more complex objects a good approximation is to serialize the object to a string using cPickle.dumps. The length of the string is a good approximation of the amount of memory required to store an object.
There is one big snag with solution 2, which is that objects usually contain references to other objects. For example a dict contains string-keys and other objects as values. Those other objects might be shared. Since pickle always tries to do a complete serialization of the object it will always over-estimate the amount of memory required to store an object.
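A quick sketch of approach 2, usable straight from the IPython prompt (remember it over-estimates when sub-objects are shared):
import cPickle

def rough_size(obj):
    # the pickle length is a rough upper bound on the memory needed
    return len(cPickle.dumps(obj, -1))

a = range(10000)
print rough_size(a)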
|
Is there a way to change effective process name in Python?
|
Can I change effective process name of a Python script? I want to show a different name instead of the real name of the process when I get the system process list. In C I can set
strcpy(argv[0],"othername");
But in Python
argv[0] = "othername"
doesn't seem to work. When I get the process list (with ps ax on my Linux box) the real name doesn't change. I prefer a portable solution (or else one solution for POSIX and another for Windows environments), if it exists.
Thanks in advance
|
I've recently written a Python module to change the process title in a portable way: check https://github.com/dvarrazzo/py-setproctitle
It is a wrapper around the code used by PostgreSQL to perform the title change. It is currently tested against Linux and Mac OS X: Windows (with limited functionality) and BSD portings are on the way.
Edit: as of July 2010, the module works with BSD and with limited functionality on Windows, and has been ported to Python 3.x.
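Usage is a one-liner; a quick sketch:
import setproctitle

setproctitle.setproctitle('othername')
print setproctitle.getproctitle()   # -> 'othername'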
|
How to get console window width in python
|
Is there a way in python to programmatically determine the width of the console? I mean the number of characters that fits in one line without wrapping, not the pixel width of the window.
Edit
Looking for a solution that works on Linux
|
import os
rows, columns = os.popen('stty size', 'r').read().split()
This uses the 'stty size' command, which according to a thread on the Python mailing list is reasonably universal on Linux. It opens the 'stty size' command as a file, 'reads' from it, and uses a simple string split to separate the coordinates.
Unlike the os.environ["COLUMNS"] value (which I can't access in spite of using bash as my standard shell), the data will also be up to date, whereas I believe the os.environ["COLUMNS"] value would only be valid at the time the Python interpreter was launched (suppose the user resized the window since then).
|
Simple Prime Generator in Python
|
Could someone please tell me what I'm doing wrong with this code? It just prints count every time. I just want a very simple prime generator (nothing fancy). Thanks a lot. Lincoln.
import math
def main():
count = 3
one = 1
while one == 1:
for x in range(2, int(math.sqrt(count) + 1)):
if count % x == 0:
continue
if count % x != 0:
print count
count += 1
|
There are some problems:
Why do you print out count when it didn't divide by x? It doesn't mean it's prime, it means only that this particular x doesn't divide it
continue moves to the next loop iteration - but you really want to stop it using break
Here's your code with a few fixes, it prints out only primes:
import math

def main():
    count = 3

    while True:
        isprime = True

        for x in range(2, int(math.sqrt(count) + 1)):
            if count % x == 0:
                isprime = False
                break

        if isprime:
            print count

        count += 1
For much more efficient prime generation, see the Sieve of Erastothenes, as others have suggested. Here's a nice, optimized implementation with many comments:
# Sieve of Eratosthenes
# Code by David Eppstein, UC Irvine, 28 Feb 2002
# http://code.activestate.com/recipes/117119/

def gen_primes():
    """ Generate an infinite sequence of prime numbers.
    """
    # Maps composites to primes witnessing their compositeness.
    # This is memory efficient, as the sieve is not "run forward"
    # indefinitely, but only as long as required by the current
    # number being tested.
    #
    D = {}

    # The running integer that's checked for primeness
    q = 2

    while True:
        if q not in D:
            # q is a new prime.
            # Yield it and mark its first multiple that isn't
            # already marked in previous iterations
            #
            yield q
            D[q * q] = [q]
        else:
            # q is composite. D[q] is the list of primes that
            # divide it. Since we've reached q, we no longer
            # need it in the map, but we'll mark the next
            # multiples of its witnesses to prepare for larger
            # numbers
            #
            for p in D[q]:
                D.setdefault(p + q, []).append(p)
            del D[q]

        q += 1
Note that it returns a generator.
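Since gen_primes() yields an infinite sequence, slice off what you need; for example:
import itertools

print list(itertools.islice(gen_primes(), 10))
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]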
|
running a command as a super user from a python script
|
So I'm trying to get a process to be run as a super user from within a python script using subprocess. In the ipython shell something like
proc = subprocess.Popen('sudo apach2ctl restart',
shell=True, stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
works fine, but as soon as I stick it into a script I start getting: sudo: apach2ctl: command not found.
I would guess this is due to the way sudo handles environments on Ubuntu. (I've also tried sudo -E apche2ctl restart and sudo env path=$PATH apache2ctl restart, to no avail.)
So my question is basically, if I want to run apache2ctl restart as super user that prompts the user for the super user password when required, how should I go about doing this? I have no intention of storing passwords in the script.
Edit:
I've tried passing in the commands as both a string and tokenized into a list. In the Python interpreter, with a string I'll get the password prompt properly (it still doesn't work in a Python script, as in my original problem); a list just gives the help screen for sudo.
Edit 2:
So what I gather is that while Popen will work with some commands just as strings when shell=True, it takes
proc = subprocess.Popen(['sudo','/usr/sbin/apache2ctl','restart'])
without 'shell=True' to get sudo to work.
Thanks!
|
Try:
subprocess.call(['sudo', 'apach2ctl', 'restart'])
The subprocess needs to access the real stdin/out/err for it to be able to prompt you, and read in your password. If you set them up as pipes, you need to feed the password into that pipe yourself.
If you don't define them, then it grabs sys.stdout, etc...
|
is there a pythonic way to try something up to a maximum number of times?
|
I have a python script which is querying a MySQL server on a shared linux host. For some reason, queries to MySQL often return a "server has gone away" error:
_mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away')
If you try the query again immediately afterwards, it usually succeeds. So, I'd like to know if there's a sensible way in python to try to execute a query, and if it fails, to try again, up to a fixed number of tries. Probably I'd want it to try 5 times before giving up altogether.
Here's the kind of code I have:
conn = MySQLdb.connect(host, user, password, database)
cursor = conn.cursor()
try:
cursor.execute(query)
rows = cursor.fetchall()
for row in rows:
# do something with the data
except MySQLdb.Error, e:
print "MySQL Error %d: %s" % (e.args[0], e.args[1])
Clearly I could do it by having another attempt in the except clause, but that's incredibly ugly, and I have a feeling there must be a decent way to achieve this.
|
Building on Dana's answer, you might want to do this as a decorator:
def retry(howmany):
def tryIt(func):
def f():
attempts = 0
while attempts < howmany:
try:
return func()
except:
attempts += 1
return f
return tryIt
Then...
@retry(5)
def the_db_func():
# [...]
Enhanced version that uses the decorator module
import decorator, time
def retry(howmany, *exception_types, **kwargs):
timeout = kwargs.get('timeout', 0.0) # seconds
@decorator.decorator
def tryIt(func, *fargs, **fkwargs):
for _ in xrange(howmany):
try: return func(*fargs, **fkwargs)
except exception_types or Exception:
if timeout is not None: time.sleep(timeout)
return tryIt
Then...
@retry(5, MySQLdb.Error, timeout=0.5)
def the_db_func():
# [...]
To install the decorator module:
$ easy_install decorator
|
How can I process command line arguments in Python?
|
What would be an easy expression to process command line arguments if I'm expecting anything like 001 or 999 (let's limit expectations to 001...999 range for this time), and few other arguments passed, and would like to ignore any unexpected?
I understand if for example I need to find out if "debug" was passed among parameters it'll be something like that:
if 'debug' in argv[1:]:
print 'Will be running in debug mode.'
How to find out if 009 or 575 was passed?
All those are expected calls:
python script.py
python script.py 011
python script.py 256 debug
python script.py 391 xls
python script.py 999 debug pdf
At this point I don't care about calls like that:
python script.py 001 002 245 568
python script.py some unexpected argument
python script.py 0001
python script.py 02
...first one - because of more than one "numeric" argument; second - because of... well, unexpected arguments; third and fourth - because of non-3-digits arguments.
|
As others answered, optparse is the best option, but if you just want quick code try something like this:
import sys, re
first_re = re.compile(r'^\d{3}$')
if len(sys.argv) > 1:
if first_re.match(sys.argv[1]):
print "Primary argument is : ", sys.argv[1]
else:
raise ValueError("First argument should be ...")
args = sys.argv[2:]
else:
args = ()
# ... anywhere in code ...
if 'debug' in args:
print 'debug flag'
if 'xls' in args:
print 'xls flag'
EDIT: Here's an optparse example because so many people are answering optparse without really explaining why, or explaining what you have to change to make it work.
The primary reason to use optparse is that it gives you more flexibility for expansion later, and more flexibility on the command line. In other words, your options can appear in any order and usage messages are generated automatically. However, to make it work with optparse you need to change your specifications to put '-' or '--' in front of the optional arguments, and you need to allow all the arguments to be in any order.
So here's an example using optparse:
import sys, re, optparse
first_re = re.compile(r'^\d{3}$')
parser = optparse.OptionParser()
parser.set_defaults(debug=False,xls=False)
parser.add_option('--debug', action='store_true', dest='debug')
parser.add_option('--xls', action='store_true', dest='xls')
(options, args) = parser.parse_args()
if len(args) == 1:
if first_re.match(args[0]):
print "Primary argument is : ", args[0]
else:
raise ValueError("First argument should be ...")
elif len(args) > 1:
raise ValueError("Too many command line arguments")
if options.debug:
print 'debug flag'
if options.xls:
print 'xls flag'
The differences here with optparse and your spec is that now you can have command lines like:
python script.py --debug --xls 001
and you can easily add new options by calling parser.add_option()
|
How to check if there exists a process with a given pid?
|
Is there a way to check to see if a pid corresponds to a valid process? I'm getting a pid from a different source other than from os.getpid() and I need to check to see if a process with that pid doesn't exist on the machine.
I need it to be available in Unix and Windows. I'm also checking to see if the PID is NOT in use.
|
Sending signal 0 to a pid will raise an OSError exception if the pid is not running, and do nothing otherwise.
import os
def check_pid(pid):
""" Check For the existence of a unix pid. """
try:
os.kill(pid, 0)
except OSError:
return False
else:
return True
|
How do I create an empty array/matrix in NumPy?
|
I can't figure out how to use an array or matrix in the way that I would normally use a list. I want to create an empty array (or matrix) and then add one column (or row) to it at a time.
At the moment the only way I can find to do this is like:
mat = None
for col in columns:
if mat is None:
mat = col
else:
mat = hstack((mat, col))
Whereas if it were a list, I'd do something like this:
list = []
for item in data:
list.append(item)
Is there a way to use that kind of notation for NumPy arrays or matrices?
|
You have the wrong mental model for using NumPy efficiently. NumPy arrays are stored in contiguous blocks of memory. If you want to add rows or columns to an existing array, the entire array needs to be copied to a new block of memory, creating gaps for the new elements to be stored. This is very inefficient if done repeatedly to build an array.
In the case of adding rows, your best bet is to create an array that is as big as your data set will eventually be, and then add data to it row-by-row:
>>> import numpy
>>> a = numpy.zeros(shape=(5,2))
>>> a
array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.]])
>>> a[0] = [1,2]
>>> a[1] = [2,3]
>>> a
array([[ 1., 2.],
[ 2., 3.],
[ 0., 0.],
[ 0., 0.],
[ 0., 0.]])
|
Django: multiple models in one template using forms
|
I'm building a support ticket tracking app and have a few models I'd like to create from one page. Tickets belong to a Customer via a ForeignKey. Notes belong to Tickets via a ForeignKey as well. I'd like to have the option of selecting a Customer (that's a whole separate project) OR creating a new Customer, then creating a Ticket and finally creating a Note assigned to the new ticket.
Since I'm fairly new to Django, I tend to work iteratively, trying out new features each time. I've played with ModelForms but I want to hide some of the fields and do some complex validation. It seems like the level of control I'm looking for either requires formsets or doing everything by hand, complete with a tedious, hand-coded template page, which I'm trying to avoid.
Is there some lovely feature I'm missing? Does someone have a good reference or example for using formsets? I spent a whole weekend on the API docs for them and I'm still clueless. Is it a design issue if I break down and hand-code everything?
|
I just was in about the same situation a day ago, and here are my 2 cents:
1) I found arguably the shortest and most concise demonstration of multiple model entry in single form here: http://collingrady.wordpress.com/2008/02/18/editing-multiple-objects-in-django-with-newforms/ .
In a nutshell: make a form for each model, submit them both to the template in a single <form> using the prefix keyword argument, and have the view handle validation. If there is a dependency, just make sure you save the "parent" model before the dependent one, and use the parent's ID for the foreign key before committing the save of the "child" model. The link has the demo, and a minimal sketch follows below.
2) Maybe formsets can be beaten into doing this, but as far as I delved in, formsets are primarily for entering multiples of the same model, which may be optionally tied to another model/models by foreign keys. However, there seem to be no default option for entering more than one model's data and that's not what formset seems to be meant for.
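To make the pattern from point 1 concrete, here is a minimal sketch of the view side; Customer and Ticket are hypothetical models standing in for yours:
from django import forms
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response

class CustomerForm(forms.ModelForm):
    class Meta:
        model = Customer            # hypothetical model

class TicketForm(forms.ModelForm):
    class Meta:
        model = Ticket              # hypothetical model
        exclude = ('customer',)     # set in the view instead

def new_ticket(request):
    if request.method == 'POST':
        customer_form = CustomerForm(request.POST, prefix='customer')
        ticket_form = TicketForm(request.POST, prefix='ticket')
        if customer_form.is_valid() and ticket_form.is_valid():
            customer = customer_form.save()           # save the parent first
            ticket = ticket_form.save(commit=False)   # hold the child back
            ticket.customer = customer                # wire up the ForeignKey
            ticket.save()
            return HttpResponseRedirect('/tickets/')
    else:
        customer_form = CustomerForm(prefix='customer')
        ticket_form = TicketForm(prefix='ticket')
    return render_to_response('new_ticket.html',
                              {'customer_form': customer_form,
                               'ticket_form': ticket_form})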
|
How to keep track of thread progress in Python without freezing the PyQt GUI?
|
Questions:
What is the best practice for keeping track of a thread's progress without locking the GUI ("Not Responding")?
Generally, what are the best practices for threading as it applies to GUI development?
Question Background:
I have a PyQt GUI for Windows.
It is used to process sets of HTML documents.
It takes anywhere from three seconds to three hours to process a set of documents.
I want to be able to process multiple sets at the same time.
I don't want the GUI to lock.
I'm looking at the threading module to achieve this.
I am relatively new to threading.
The GUI has one progress bar.
I want it to display the progress of the selected thread.
Display results of the selected thread if it's finished.
I'm using Python 2.5.
My Idea: Have the threads emit a Qt signal when the progress is updated, which triggers a function that updates the progress bar. Also signal when finished processing so results can be displayed.
#NOTE: this is example code for my idea, you do not have
# to read this to answer the question(s).
import threading
from PyQt4 import QtCore, QtGui
import re
import copy
class ProcessingThread(threading.Thread, QtCore.QObject):
__pyqtSignals__ = ( "progressUpdated(str)",
"resultsReady(str)")
def __init__(self, docs):
self.docs = docs
self.progress = 0 #int between 0 and 100
self.results = []
threading.Thread.__init__(self)
def getResults(self):
return copy.deepcopy(self.results)
def run(self):
num_docs = len(self.docs) - 1
for i, doc in enumerate(self.docs):
processed_doc = self.processDoc(doc)
self.results.append(processed_doc)
new_progress = int((float(i)/num_docs)*100)
#emit signal only if progress has changed
if self.progress != new_progress:
self.emit(QtCore.SIGNAL("progressUpdated(str)"), self.getName())
self.progress = new_progress
if self.progress == 100:
self.emit(QtCore.SIGNAL("resultsReady(str)"), self.getName())
def processDoc(self, doc):
''' this is trivial for shortness' sake '''
return re.findall('<a [^>]*>.*?</a>', doc)
class GuiApp(QtGui.QMainWindow):
def __init__(self):
self.processing_threads = {} #{'thread_name': Thread(processing_thread)}
self.progress_object = {} #{'thread_name': int(thread_progress)}
self.results_object = {} #{'thread_name': []}
self.selected_thread = '' #'thread_name'
def processDocs(self, docs):
#create new thread
p_thread = ProcessingThread(docs)
thread_name = "example_thread_name"
p_thread.setName(thread_name)
p_thread.start()
#add thread to dict of threads
self.processing_threads[thread_name] = p_thread
#init progress_object for this thread
self.progress_object[thread_name] = p_thread.progress
#connect thread signals to GuiApp functions
QtCore.QObject.connect(p_thread, QtCore.SIGNAL('progressUpdated(str)'), self.updateProgressObject(thread_name))
QtCore.QObject.connect(p_thread, QtCore.SIGNAL('resultsReady(str)'), self.updateResultsObject(thread_name))
def updateProgressObject(self, thread_name):
#update progress_object for all threads
self.progress_object[thread_name] = self.processing_threads[thread_name].progress
#update progress bar for selected thread
if self.selected_thread == thread_name:
self.setProgressBar(self.progress_object[self.selected_thread])
def updateResultsObject(self, thread_name):
#update results_object for thread with results
self.results_object[thread_name] = self.processing_threads[thread_name].getResults()
#update results widget for selected thread
try:
self.setResultsWidget(self.results_object[thread_name])
except KeyError:
self.setResultsWidget(None)
Any commentary on this approach (e.g. drawbacks, pitfalls, praises, etc.) will be appreciated.
Resolution:
I ended up using the QThread class and associated signals and slots to communicate between threads. This is primarily because my program already uses Qt/PyQt4 for the GUI objects/widgets. This solution also required fewer changes to my existing code to implement.
Here is a link to an applicable Qt article that explains how Qt handles threads and signals, http://www.linuxjournal.com/article/9602. Excerpt below:
Fortunately, Qt permits signals and slots to be connected across threads, as long as the threads are running their own event loops. This is a much cleaner method of communication compared to sending and receiving events, because it avoids all the bookkeeping and intermediate QEvent-derived classes that become necessary in any nontrivial application. Communicating between threads now becomes a matter of connecting signals from one thread to the slots in another, and the mutexing and thread-safety issues of exchanging data between threads are handled by Qt.
Why is it necessary to run an event loop within each thread to which you want to connect signals? The reason has to do with the inter-thread communication mechanism used by Qt when connecting signals from one thread to the slot of another thread. When such a connection is made, it is referred to as a queued connection. When signals are emitted through a queued connection, the slot is invoked the next time the destination object's event loop is executed. If the slot had instead been invoked directly by a signal from another thread, that slot would execute in the same context as the calling thread. Normally, this is not what you want (and especially not what you want if you are using a database connection, as the database connection can be used only by the thread that created it). The queued connection properly dispatches the signal to the thread object and invokes its slot in its own context by piggy-backing on the event system. This is precisely what we want for inter-thread communication in which some of the threads are handling database connections. The Qt signal/slot mechanism is at root an implementation of the inter-thread event-passing scheme outlined above, but with a much cleaner and easier-to-use interface.
NOTE: eliben also has a good answer, and if I weren't using PyQt4, which handles thread-safety and mutexing, his solution would have been my choice.
|
If you want to use signals to indicate progress to the main thread then you should really be using PyQt's QThread class instead of the Thread class from Python's threading module.
A simple example which uses QThread, signals and slots can be found on the PyQt Wiki:
https://wiki.python.org/moin/PyQt/Threading,_Signals_and_Slots
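The shape of that example, adapted to the question; a sketch using old-style PyQt4 signals, with names of my own invention:
from PyQt4 import QtCore

class Worker(QtCore.QThread):
    def __init__(self, docs, parent=None):
        QtCore.QThread.__init__(self, parent)
        self.docs = docs

    def run(self):
        for i, doc in enumerate(self.docs):
            # ... process doc ...
            percent = int(100.0 * (i + 1) / len(self.docs))
            self.emit(QtCore.SIGNAL('progressUpdated(int)'), percent)
        self.emit(QtCore.SIGNAL('resultsReady()'))

# in the GUI thread -- the cross-thread connection is queued automatically:
# worker = Worker(docs)
# QtCore.QObject.connect(worker, QtCore.SIGNAL('progressUpdated(int)'),
#                        progress_bar.setValue)
# worker.start()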
|
Lazy choices in Django form
|
I have a Django my_forms.py like this:
class CarSearchForm(forms.Form):
# lots of fields like this
bodystyle = forms.ChoiceField(choices=bodystyle_choices())
Each choice is e.g. ("Saloon", "Saloon (15 cars)"). So the choices are computed by this function.
def bodystyle_choices():
return [(bodystyle.bodystyle_name, '%s (%s cars)' %
(bodystyle.bodystyle_name, bodystyle.car_set.count()))
for bodystyle in Bodystyle.objects.all()]
My problem is that the choices functions are getting executed every time I merely import my_forms.py. I think this is due to the way Django declares its fields: in the class, but not in a class method. That's fine, but my views.py imports my_forms.py, so the choices lookups are done on every request no matter which view is used.
I thought that maybe putting choices=bodystyle_choices with no brackets would work, but I get: 'function' object is not iterable
Obviously I can use caching and put the "import my_forms" just in the view functions required but that doesn't change the main point: my choices need to be lazy!
|
You can use the "lazy" function :)
from django.utils.functional import lazy
class CarSearchForm(forms.Form):
# lots of fields like this
bodystyle = forms.ChoiceField(choices=lazy(bodystyle_choices, tuple)())
A very nice util function!
|