Python's sys.path value
|
Where is Python's sys.path initialized from?
UPD: Python is adding some paths before referring to PYTHONPATH:
>>> import sys
>>> from pprint import pprint as p
>>> p(sys.path)
['',
'C:\\Python25\\lib\\site-packages\\setuptools-0.6c9-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\orbited-0.7.8-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\morbid-0.8.6.1-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\demjson-1.4-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\stomper-0.2.2-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\uuid-1.30-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\stompservice-0.1.0-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\cherrypy-3.0.1-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\pyorbited-0.2.2-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\flup-1.0.1-py2.5.egg',
'C:\\Python25\\lib\\site-packages\\wsgilog-0.1-py2.5.egg',
'c:\\testdir',
'C:\\Windows\\system32\\python25.zip',
'C:\\Python25\\DLLs',
'C:\\Python25\\lib',
'C:\\Python25\\lib\\plat-win',
'C:\\Python25\\lib\\lib-tk',
'C:\\Python25',
'C:\\Python25\\lib\\site-packages',
'C:\\Python25\\lib\\site-packages\\PIL',
'C:\\Python25\\lib\\site-packages\\win32',
'C:\\Python25\\lib\\site-packages\\win32\\lib',
'C:\\Python25\\lib\\site-packages\\Pythonwin']
My PYTHONPATH is:
PYTHONPATH=c:\testdir
I wonder where those paths before PYTHONPATH's ones come from?
|
"Initialized from the environment variable PYTHONPATH, plus an installation-dependent default"
-- http://docs.python.org/library/sys.html#sys.path
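Those site-packages and .egg entries are part of that installation-dependent default: the site module (helped along here by setuptools' easy-install.pth) adds them at interpreter startup, which is why they show up ahead of the PYTHONPATH entry. A quick way to see the split, assuming a stock CPython install, is to compare a run with site processing disabled (-S) against a normal run:
python -S -c "import sys; from pprint import pprint; pprint(sys.path)"
python -c "import sys; from pprint import pprint; pprint(sys.path)"
The first shows only the core defaults plus PYTHONPATH; the second adds everything site.py contributes.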
|
Python equivalent of PHP's memory_get_usage()?
|
I've already found the following question, but I was wondering if there was a quicker and dirtier way of grabbing an estimate of how much memory the python interpreter is currently using for my script that doesn't rely on external libraries.
I'm coming from PHP and used to use memory_get_usage() and memory_get_peak_usage() a lot for this purpose and I was hoping to find an equivalent.
|
A simple solution for Linux and other systems with /proc/self/status is the following code, which I use in a project of mine:
def memory_usage():
"""Memory usage of the current process in kilobytes."""
status = None
result = {'peak': 0, 'rss': 0}
try:
# This will only work on systems with a /proc file system
# (like Linux).
status = open('/proc/self/status')
for line in status:
parts = line.split()
key = parts[0][2:-1].lower()
if key in result:
result[key] = int(parts[1])
finally:
if status is not None:
status.close()
return result
It returns the current and peak resident memory size (which is probably what people mean when they talk about how much RAM an application is using). It is easy to extend it to grab other pieces of information from the /proc/self/status file.
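A quick usage sketch:
usage = memory_usage()
print 'peak: %d kB, current rss: %d kB' % (usage['peak'], usage['rss'])
Here usage['peak'] is taken from the VmPeak line and usage['rss'] from the VmRSS line of the file shown below.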
For the curious: the full output of cat /proc/self/status looks like this:
% cat /proc/self/status
Name: cat
State: R (running)
Tgid: 4145
Pid: 4145
PPid: 4103
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 32
Groups: 20 24 25 29 40 44 46 100 1000
VmPeak: 3580 kB
VmSize: 3580 kB
VmLck: 0 kB
VmHWM: 472 kB
VmRSS: 472 kB
VmData: 160 kB
VmStk: 84 kB
VmExe: 44 kB
VmLib: 1496 kB
VmPTE: 16 kB
Threads: 1
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000000
SigCgt: 0000000000000000
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffffffffffff
Cpus_allowed: 03
Cpus_allowed_list: 0-1
Mems_allowed: 1
Mems_allowed_list: 0
voluntary_ctxt_switches: 0
nonvoluntary_ctxt_switches: 0
|
How can I detect if a file is binary (non-text) in python?
|
How can I tell if a file is binary (non-text) in python? I am searching through a large set of files in python, and keep getting matches in binary files. This makes the output look incredibly messy.
I know I could use grep -I, but I am doing more with the data than what grep allows for.
In the past I would have just searched for characters greater than 0x7f, but utf8 and the like make that impossible on modern systems. Ideally the solution would be fast, but any solution will do.
|
Yet another method based on file(1) behavior:
>>> textchars = bytearray({7,8,9,10,12,13,27} | set(range(0x20, 0x100)) - {0x7f})
>>> is_binary_string = lambda bytes: bool(bytes.translate(None, textchars))
Example:
>>> is_binary_string(open('/usr/bin/python', 'rb').read(1024))
True
>>> is_binary_string(open('/usr/bin/dh_python3', 'rb').read(1024))
False
|
How should I verify a log message when testing Python code under nose?
|
I'm trying to write a simple unit test that will verify that, under a certain condition, a class in my application will log an error via the standard logging API. I can't work out what the cleanest way to test this situation is.
I know that nose already captures logging output through its logging plugin, but this seems to be intended as a reporting and debugging aid for failed tests.
The two ways to do this I can see are:
Mock out the logging module, either in a piecemeal way (mymodule.logging = mockloggingmodule) or with a proper mocking library.
Write or use an existing nose plugin to capture the output and verify it.
If I go for the former approach, I'd like to know what the cleanest way to reset the global state to what it was before I mocked out the logging module.
Looking forward to your hints and tips on this one...
|
Fortunately this is not something that you have to write yourself; the testfixtures package provides a context manager that captures all logging output that occurs in the body of the with statement. You can find the package here:
http://pypi.python.org/pypi/testfixtures
And here are its docs about how to test logging:
http://testfixtures.readthedocs.org/en/latest/logging.html
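For reference, a minimal sketch of what the testfixtures approach looks like, using its LogCapture helper (the logger name and message here are made up; adjust them to your code):
import logging
from testfixtures import LogCapture

with LogCapture() as captured:
    logging.getLogger('mymodule').error('something went wrong')

# check() asserts on (logger name, level, message) tuples
captured.check(('mymodule', 'ERROR', 'something went wrong'))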
|
Writing a list to a file with Python
|
Is this the cleanest way to write a list to a file, since writelines() doesn't insert newline characters?
file.writelines(["%s\n" % item for item in list])
It seems like there would be a standard way...
|
EDIT Adding info from Thomas' comment
Don't forget to open the file first
thefile = open('test.txt', 'w')
I'd use a loop:
for item in thelist:
thefile.write("%s\n" % item)
or:
for item in thelist:
print>>thefile, item
If you're keen on a single function call, at least remove the square brackets [] so that the strings to be printed get made one at a time (a genexp rather than a listcomp) -- no reason to take up all the memory required to materialize the whole list of strings.
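In other words, the single-call version with a generator expression instead of a list comprehension would look like this:
thefile.writelines("%s\n" % item for item in thelist)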
|
Python-like list comprehension in Java
|
Since Java doesn't allow passing methods as parameters, what trick do you use to implement Python-like list comprehension in Java?
I have a list (ArrayList) of Strings. I need to transform each element by using a function so that I get another list. I have several functions which take a String as input and return another String as output. How do I make a generic method which can be given the list and the function as parameters so that I can get a list back with each element processed. It is not possible in the literal sense, but what trick should I use ?
The other option is to write a new function for each smaller String-processing function which simply loops over the entire list, which is kinda not so cool.
|
Basically, you create a Function interface:
public interface Func<In, Out> {
public Out apply(In in);
}
and then pass in an anonymous subclass to your method.
Your method could either apply the function to each element in-place:
public static <T> void applyToListInPlace(List<T> list, Func<T, T> f) {
ListIterator<T> itr = list.listIterator();
while (itr.hasNext()) {
T output = f.apply(itr.next());
itr.set(output);
}
}
// ...
List<String> myList = ...;
applyToListInPlace(myList, new Func<String, String>() {
public String apply(String in) {
return in.toLowerCase();
}
});
or create a new List (basically creating a mapping from the input list to the output list):
public static <In, Out> List<Out> map(List<In> in, Func<In, Out> f) {
List<Out> out = new ArrayList<Out>(in.size());
for (In inObj : in) {
out.add(f.apply(inObj));
}
return out;
}
// ...
List<String> myList = ...;
List<String> lowerCased = map(myList, new Func<String, String>() {
public String apply(String in) {
return in.toLowerCase();
}
});
Which one is preferable depends on your use case. If your list is extremely large, the in-place solution may be the only viable one; if you wish to apply many different functions to the same original list to make many derivative lists, you will want the map version.
|
Python, how to parse strings to look like sys.argv
|
I would like to parse a string like this:
-o 1 --long "Some long string"
into this:
["-o", "1", "--long", 'Some long string']
or similar.
This is different than either getopt, or optparse, which start with sys.argv parsed input (like the output I have above). Is there a standard way to do this? Basically, this is "splitting" while keeping quoted strings together.
My best function so far:
import csv
from StringIO import StringIO
def split_quote(string,quotechar='"'):
'''
>>> split_quote('--blah "Some argument" here')
['--blah', 'Some argument', 'here']
>>> split_quote("--blah 'Some argument' here", quotechar="'")
['--blah', 'Some argument', 'here']
'''
s = StringIO(string)
C = csv.reader(s, delimiter=" ",quotechar=quotechar)
return list(C)[0]
|
I believe you want the shlex module.
>>> import shlex
>>> shlex.split('-o 1 --long "Some long string"')
['-o', '1', '--long', 'Some long string']
|
Getting the caller function name inside another function in Python?
|
If you have 2 functions like:
def A
def B
and A calls B, can you get who is calling B inside B, like:
def A () :
B ()
def B () :
this.caller.name
|
You can use the inspect module to get the calling stack. It returns a list of frame records. The fourth element in each record (index 3) is the caller's function name. What you want is this:
>>> import inspect
>>> def f():
... print inspect.stack()[1][3]
...
>>> def g():
... f()
...
>>> g()
g
Of course, it is a good idea to check that enough frame records exist before trying to access a particular index.
|
How do I include image files in Django templates?
|
I'm new to Django and I'm trying to learn it through a simple project I'm developing called 'dubliners' and an app called 'book'. The directory structure is like this:
dubliners/book/ [includes models.py, views.py, etc.]
dubliners/templates/book/
I have a JPG file that needs to be displayed in the header of each Web page. Where should I store the file? Which path should I use for the tag to display it using a template? I've tried various locations and paths, but nothing is working so far.
...
Thanks for the answer posted below. However, I've tried both relative and absolute paths to the image, and I still get a broken image icon displayed in the Web page. For example, if I have an image in my home directory and use this tag in my template:
<img src="/home/tony/london.jpg" />
The image doesn't display. If I save the Web page as a static HTML file, however, the images display, so the path is correct. Maybe the default Web server that comes with Django will display images only if they're on a particular path?
|
Try this,
settings.py
# typically, os.path.join(os.path.dirname(__file__), 'media')
MEDIA_ROOT = '<your_path>/media'
MEDIA_URL = '/media/'
urls.py
urlpatterns = patterns('',
(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT}),
)
.html
<img src="{{ MEDIA_URL }}<sub-dir-under-media-if-any>/<image-name.ext>" />
Caveat
Beware! Using Context() will yield an empty value for {{ MEDIA_URL }}. You must use RequestContext() instead.
I hope, this will help.
|
python factory functions compared to class
|
Just working through learning python and started to look at nested/factory functions (simple example):
def maker(N):
def action(X):
return X * N
return action
Are there any advantages of factory functions over creating a class? Performance? Memory? Cleanup?
|
What I like most about nested functions is that it is less verbose than classes. The equivalent class definition to your maker function is:
class clsmaker(object):
def __init__(self, N):
self.N = N
def __call__(self, X):
return X * self.N
That doesn't seem so bad until you start adding more arguments to the constructor. Then doing it the class way takes an extra line for each argument, while the function just gets the extra args.
It turns out that there is a speed advantage to the nested functions as well:
>>> T1 = timeit.Timer('maker(3)(4)', 'from __main__ import maker')
>>> T1.timeit()
1.2818338871002197
>>> T2 = timeit.Timer('clsmaker(3)(4)', 'from __main__ import clsmaker')
>>> T2.timeit()
2.2137160301208496
This may be due to there being fewer opcodes involved in the nested functions version:
>>> from dis import dis
>>> dis(clsmaker.__call__)
5 0 LOAD_FAST 1 (X)
3 LOAD_FAST 0 (self)
6 LOAD_ATTR 0 (N)
9 BINARY_MULTIPLY
10 RETURN_VALUE
>>> act = maker(3)
>>> dis(act)
3 0 LOAD_FAST 0 (X)
3 LOAD_DEREF 0 (N)
6 BINARY_MULTIPLY
7 RETURN_VALUE
|
Python OSError: [Errno 2]
|
I have the following code that is attempting to start each of the "commands" below in Linux. The module attempts to keep each of the 2 commands running if either should crash for whatever reason.
#!/usr/bin/env python
import subprocess
import time
commands = [ ["screen -dmS RealmD top"], ["screen -DmS RealmD top -d 5"] ]
programs = [ subprocess.Popen(c) for c in commands ]
while True:
for i in range(len(programs)):
if programs[i].returncode is None:
continue # still running
else:
# restart this one
programs[i]= subprocess.Popen(commands[i])
time.sleep(1.0)
Upon executing the code the following exception is thrown:
Traceback (most recent call last):
File "./marp.py", line 82, in <module>
programs = [ subprocess.Popen(c) for c in commands ]
File "/usr/lib/python2.6/subprocess.py", line 595, in __init__
errread, errwrite)
File "/usr/lib/python2.6/subprocess.py", line 1092, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I think I'm missing something obvious, can anyone see what's wrong with the code above?
|
Use ["screen", "-dmS", "RealmD", "top"] instead of ["screen -dmS RealmD top"].
Maybe also use the complete path to screen.
If the program still cannot be found you can also go through your shell with shell=True, but then you need to quote and escape your parameters etc. Make sure to read the information in the docs if you plan to do that.
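If you'd rather keep writing the commands as single strings, one option (a sketch, not the only way) is to let shlex build the argument lists for you:
import shlex
commands = [shlex.split("screen -dmS RealmD top"),
            shlex.split("screen -DmS RealmD top -d 5")]
# each entry is now a proper argument list, e.g. ['screen', '-dmS', 'RealmD', 'top']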
|
How to use variables in SQL statement in Python?
|
Ok so I'm not that experienced in Python.
I have the following Python code:
cursor.execute("INSERT INTO table VALUES var1, var2, var3,")
where var1 is an integer, var2 & var3 are strings.
How can I write the variable names without python including them as part of the query text?
|
cursor.execute("INSERT INTO table VALUES (%s, %s, %s)", (var1, var2, var3))
Note that the parameters are passed as a tuple.
The database API does proper escaping and quoting of variables. Be careful not to use the string formatting operator (%), because
it does not do any escaping or quoting.
it is prone to Uncontrolled string format attacks e.g. SQL injection.
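For contrast, this is the pattern to avoid (a deliberately broken sketch): interpolating the values yourself pastes them straight into the SQL text and bypasses the driver's escaping entirely.
# DON'T do this
cursor.execute("INSERT INTO table VALUES (%s, %s, %s)" % (var1, var2, var3))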
|
Saving a Numpy array as an image
|
I have a matrix in the form of a Numpy array. How would I write it to disk as an image? Any format works (png, jpeg, bmp...). One important constraint is that PIL is not present.
|
This uses PIL, but maybe some might find it useful:
import scipy.misc
scipy.misc.imsave('outfile.jpg', image_array)
EDIT: The current scipy version started to normalize all images so that min(data) becomes black and max(data) becomes white. This is unwanted if the data should be exact grey levels or exact RGB channels. The solution:
import scipy.misc
scipy.misc.toimage(image_array, cmin=0.0, cmax=...).save('outfile.jpg')
|
hasattr() vs try-except block to deal with non-existent attributes
|
if hasattr(obj, 'attribute'):
# do somthing
vs
try:
# access obj.attribute
except AttributeError, e:
# deal with AttributeError
Which should be preferred and why?
|
Any benches that illustrate difference in performance?
timeit is your friend
$ python -mtimeit -s 'class C(object): a = 4
c = C()' 'hasattr(c, "nonexistent")'
1000000 loops, best of 3: 1.87 usec per loop
$ python -mtimeit -s 'class C(object): a = 4
c = C()' 'hasattr(c, "a")'
1000000 loops, best of 3: 0.446 usec per loop
$ python -mtimeit -s 'class C(object): a = 4
c = C()' 'try:
c.a
except:
pass'
1000000 loops, best of 3: 0.247 usec per loop
$ python -mtimeit -s 'class C(object): a = 4
c = C()' 'try:
c.nonexistent
except:
pass'
100000 loops, best of 3: 3.13 usec per loop
$
        | positive (usec) | negative (usec)
hasattr |      0.446      |      1.87
try     |      0.247      |      3.13
|
Python's 'with' statement versus 'with .. as'
|
Having just pulled my hair out over a difference, I'd like to know what the difference really is in Python 2.5.
I had two blocks of code (dbao.getConnection() returns a MySQLdb connection).
conn = dbao.getConnection()
with conn:
# Do stuff
And
with dbao.getConnection() as conn:
# Do stuff
I thought these would have the same effect but apparently not as the conn object of the latter version was a Cursor. Where did the cursor come from and is there a way to combine the variable initialization and with statement somehow?
|
It may be a little confusing at first glance, but
with babby() as b:
...
is not equivalent to
b = babby()
with b:
...
To see why, here's how the context manager would be implemented:
class babby(object):
def __enter__(self):
return 'frigth'
def __exit__(self, type, value, tb):
pass
In the first case, the name b will be bound to whatever is returned from the __enter__ method of the context manager. This is often the context manager itself (for example for file objects), but it doesn't have to be; in this case it's the string 'frigth', and in your case it's the database cursor.
In the second case, b is the context manager object itself.
|
How do you extract a column from a multi-dimensional array?
|
Does anybody know how to extract a column from a multi-dimensional array in Python?
|
>>> import numpy as np
>>> A = np.array([[1,2,3,4],[5,6,7,8]])
>>> A
array([[1, 2, 3, 4],
[5, 6, 7, 8]])
>>> A[:,2] # returns the third column
array([3, 7])
See also: numpy.arange and reshape to allocate memory.
Example: allocating an array shaped as a 3x4 matrix:
nrows = 3
ncols = 4
my_array = np.arange(nrows*ncols, dtype='double')
my_array = my_array.reshape(nrows, ncols)
|
Chain-calling parent constructors in python
|
Consider this - a base class A, class B inheriting from A, class C inheriting from B. What is a generic way to call a parent class constructor in a constructor? If this still sounds too vague, here's some code.
class A(object):
def __init__(self):
print "Constructor A was called"
class B(A):
def __init__(self):
super(B,self).__init__()
print "Constructor B was called"
class C(B):
def __init__(self):
super(C,self).__init__()
print "Constructor C was called"
c = C()
This is how I do it now. But it still seems a bit too non-generic - you still must pass a correct type by hand.
Now, I've tried using self.__class__ as a first argument to super(), but, obviously it doesn't work - if you put it in the constructor for C - fair enough, B's constructor gets called. If you do the same in B, "self" still points to an instance of C so you end up calling B's constructor again (this ends in an infinite recursion).
There is no need to think about diamond inheritance for now, I am just interested in solving this specific problem.
|
The way you are doing it is indeed the recommended one (for Python 2.x).
The issue of whether the class is passed explicitly to super is a matter of style rather than functionality. Passing the class to super fits in with Python's philosophy of "explicit is better than implicit".
|
Reading a UTF8 CSV file with Python
|
I am trying to read a CSV file with accented characters with Python (only French and/or Spanish characters). Based on the Python 2.5 documentation for the csvreader (http://docs.python.org/library/csv.html), I came up with the following code to read the CSV file since the csvreader supports only ASCII.
def unicode_csv_reader(unicode_csv_data, dialect=csv.excel, **kwargs):
# csv.py doesn't do Unicode; encode temporarily as UTF-8:
csv_reader = csv.reader(utf_8_encoder(unicode_csv_data),
dialect=dialect, **kwargs)
for row in csv_reader:
# decode UTF-8 back to Unicode, cell by cell:
yield [unicode(cell, 'utf-8') for cell in row]
def utf_8_encoder(unicode_csv_data):
for line in unicode_csv_data:
yield line.encode('utf-8')
filename = 'output.csv'
reader = unicode_csv_reader(open(filename))
try:
products = []
for field1, field2, field3 in reader:
...
Below is an extract of the CSV file I am trying to read:
0665000FS10120684,SD1200IS,Appareil photo numérique PowerShot de 10 Mpx de Canon avec trépied (SD1200IS) - Bleu
0665000FS10120689,SD1200IS,Appareil photo numérique PowerShot de 10 Mpx de Canon avec trépied (SD1200IS) - Gris
0665000FS10120687,SD1200IS,Appareil photo numérique PowerShot de 10 Mpx de Canon avec trépied (SD1200IS) - Vert
...
Even though I try to encode/decode to UTF-8, I am still getting the following exception:
Traceback (most recent call last):
File ".\Test.py", line 53, in <module>
for field1, field2, field3 in reader:
File ".\Test.py", line 40, in unicode_csv_reader
for row in csv_reader:
File ".\Test.py", line 46, in utf_8_encoder
yield line.encode('utf-8', 'ignore')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 68: ordinal not in range(128)
How do I fix this?
|
The .encode method gets applied to a Unicode string to make a byte-string; but you're calling it on a byte-string instead... the wrong way 'round! Look at the codecs module in the standard library and codecs.open in particular for better general solutions for reading UTF-8 encoded text files. However, for the csv module in particular, you need to pass in utf-8 data, and that's what you're already getting, so your code can be much simpler:
import csv
def unicode_csv_reader(utf8_data, dialect=csv.excel, **kwargs):
csv_reader = csv.reader(utf8_data, dialect=dialect, **kwargs)
for row in csv_reader:
yield [unicode(cell, 'utf-8') for cell in row]
filename = 'da.csv'
reader = unicode_csv_reader(open(filename))
for field1, field2, field3 in reader:
print field1, field2, field3
PS: if it turns out that your input data is NOT in utf-8, but e.g. in ISO-8859-1, then you do need a "transcoding" (if you're keen on using utf-8 at the csv module level), of the form line.decode('whateverweirdcodec').encode('utf-8') -- but probably you can just use the name of your existing encoding in the yield line in my code above, instead of 'utf-8', as csv is actually going to be just fine with ISO-8859-* encoded bytestrings.
|
How to parse malformed HTML in python
|
I need to browse the DOM tree of a parsed HTML document.
I'm using uTidyLib before parsing the string with lxml
a = tidy.parseString(html_code, options)
dom = etree.fromstring(str(a))
Sometimes I get an error; it seems that tidylib is not able to repair malformed html.
How can I parse every HTML file without getting an error (parsing only some parts of files that cannot be repaired)?
|
Beautiful Soup does a good job with invalid/broken HTML
>>> from BeautifulSoup import BeautifulSoup
>>> soup = BeautifulSoup("<htm@)($*><body><table <tr><td>hi</tr></td></body><html")
>>> print soup.prettify()
<htm>
<body>
<table>
<tr>
<td>
hi
</td>
</tr>
</table>
</body>
</htm>
|
How to remove all characters after a specific character in python?
|
I have a string. How do I remove all text after a certain character? (In this case ...)
The text after the ... will change, so that's why I want to remove all characters after a certain one.
|
Split on your separator at most once, and take the first piece:
sep = '...'
rest = text.split(sep, 1)[0]
You didn't say what should happen if the separator isn't present. Both this and Alex's solution will return the entire string in that case.
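A closely related sketch uses str.partition, which likewise returns the whole string when the separator is absent:
sep = '...'
rest = text.partition(sep)[0]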
|
Python strftime - date without leading 0?
|
When using Python strftime, is there a way to remove the first 0 of the date if it's before the 10th, ie. so 01 is 1? Can't find a %thingy for that?
Thanks!
|
Actually I had the same problem and I realized that, if you add a hyphen between the % and the letter, you can remove the leading zero.
For example %Y/%-m/%-d.
Only works on Unix (Linux, OS X). Doesn't work in Windows (including Cygwin).
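A quick sketch of the difference (on a Unix-like platform, since the - modifier is a glibc/BSD extension):
import datetime
d = datetime.date(2009, 5, 7)
print d.strftime('%d/%m/%Y')    # 07/05/2009
print d.strftime('%-d/%-m/%Y')  # 7/5/2009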
|
Python and Intellisense
|
Is there an equivalent to 'intellisense' for Python?
Perhaps I shouldn't admit it, but I find having Intellisense really speeds up the 'discovery phase' of learning a new language. For instance, switching from VB.net to C# was a breeze due to snippets and Intellisense helping me along.
|
This blog entry explains how to set Vim up as a Python IDE; it covers Intellisense-like functionality (omni-completion), which is standard in Vim 7.
There are a number of other very useful plugins for python development in Vim, such as Pyflakes, which checks code on the fly, and Python_fn.vim, which provides functionality for manipulating python indentation & code blocks.
|
Resources concerning Python scripting in Vim
|
I'm trying to learn a little about python scripting in GVim, but I'm having trouble with starting. Elementary things, really.
Are there any resources / tutorials concerning Python scripting in Vim out there? Simple examples which show the basic principles would also be welcome.
Here is all I know of in terms of such docs:
A nice presentation by Sean Reifschneider
The "official" VIM Python Interface reference
some shorter notes
Extending Vim With Python
|
Why does sys.exit() not exit when called inside a thread in Python?
|
This could be a stupid question, but I'm testing out some of my assumptions about Python and I'm confused as to why the following code snippet would not exit when called in the thread, but would exit when called in the main thread.
import sys, time
from threading import Thread
def testexit():
time.sleep(5)
sys.exit()
print "post thread exit"
t = Thread(target = testexit)
t.start()
t.join()
print "pre main exit, post thread exit"
sys.exit()
print "post main exit"
The docs for sys.exit() state that the call should exit from Python. I can see from the output of this program that "post thread exit" is never printed, but the main thread just keeps on going even after the thread calls exit.
Is a separate instance of the interpreter being created for each thread, and the call to exit() is just exiting that separate instance? If so, how does the threading implementation manage access to shared resources? What if I did want to exit the program from the thread (not that I actually want to, but just so I understand)?
|
sys.exit() raises the SystemExit exception, as does thread.exit(). So, when sys.exit() raises that exception inside that thread, it has the same effect as calling thread.exit(), which is why only the thread exits.
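If you really do need to take the whole process down from inside a thread, one blunt option (a sketch; note that it skips finally blocks, atexit handlers and other cleanup) is os._exit:
import os, time

def testexit():
    time.sleep(5)
    os._exit(1)  # terminates the whole process immediately, not just this thread
A gentler pattern is to have the thread signal the main thread (for example via a flag or a Queue) and let the main thread call sys.exit() itself.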
|
A neat way of extending a class attribute in subclasses
|
Let's say I have the following class
class Parent(object):
Options = {
'option1': 'value1',
'option2': 'value2'
}
And a subclass called Child
class Child(Parent):
Options = Parent.Options.copy()
Options.update({
'option2': 'value2',
'option3': 'value3'
})
I want to be able to override or add options in the child class. The solution I'm using works. But I'm sure there is a better way of doing it.
EDIT
I don't want to add options as class attributes because I have other class attributes that aren't options and I prefer to keep all options in one place. This is just a simple example, the actual code is more complicated than that.
|
Semantically equivalent to your code but arguably neater:
class Child(Parent):
Options = dict(Parent.Options,
option2='value2',
option3='value3')
Remember, "life is better without braces", and by calling dict explicitly you can often avoid braces (and extra quotes around keys that are constant identifier-like strings).
See http://docs.python.org/library/stdtypes.html#dict for more details -- the key bit is "If a key is specified both in the positional argument and as a keyword argument, the value associated with the keyword is retained", i.e. keyword args override key-value associations in the positional arg, just like the update method lets you override them).
|
Generating file to download with Django
|
Is it possible to make a zip archive and offer it to download, but still not save a file to the hard drive?
|
To trigger a download you need to set Content-Disposition header:
from django.http import HttpResponse
from wsgiref.util import FileWrapper
# generate the file; myfile must be a file-like object (e.g. a StringIO), rewound to the start
myfile.seek(0)
response = HttpResponse(FileWrapper(myfile), content_type='application/zip')
response['Content-Disposition'] = 'attachment; filename=myfile.zip'
return response
If you don't want the file on disk you need to use StringIO
import cStringIO as StringIO
myfile = StringIO.StringIO()
while not_finished:
# generate chunk
myfile.write(chunk)
Optionally you can set Content-Length header as well:
response['Content-Length'] = myfile.tell()
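Putting the pieces together, a minimal sketch of an in-memory zip download view (the view name and the single written entry are made up; adapt them to however you generate your data):
import zipfile
import cStringIO as StringIO
from django.http import HttpResponse

def download_zip(request):
    # build the archive entirely in memory
    buf = StringIO.StringIO()
    archive = zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED)
    archive.writestr('hello.txt', 'Hello, world!')
    archive.close()
    response = HttpResponse(buf.getvalue(), content_type='application/zip')
    response['Content-Disposition'] = 'attachment; filename=myfile.zip'
    response['Content-Length'] = buf.tell()
    return response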
|
How to write binary data in stdout in python 3?
|
In python 2.x I could do this:
import sys, array
a = array.array('B', range(100))
a.tofile(sys.stdout)
Now however, I get a TypeError: can't write bytes to text stream. Is there some secret encoding that I should use?
|
A better way:
import sys
sys.stdout.buffer.write(b"some binary data")
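Applied to the original array example, a sketch for Python 3:
import sys, array
a = array.array('B', range(100))
a.tofile(sys.stdout.buffer)  # write the raw bytes, bypassing the text layer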
|
Why is the compiler package discontinued in Python 3?
|
I was just pleasantly surprised to come across the documentation of Python's compiler package, but noticed that it's gone in Python 3.0, without any clear replacement or explanation.
I can't seem to find any discussion on python-dev about how this decision was made - does anyone have any insight into this decision?
|
I believe the functionality is now built in:
compile
ast
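A tiny sketch of those built-ins working together: ast gives you the parse tree, and compile turns the tree (or plain source) into a code object.
import ast

tree = ast.parse("x = 1 + 2")
print(ast.dump(tree))                   # inspect the AST
code = compile(tree, '<string>', 'exec')
exec(code)                              # run the compiled code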
|
Resize fields in Django Admin
|
Django tends to fill up horizontal space when adding or editing entries in the admin, but in some cases that is a real waste of space, e.g. when editing a date field that is 8 characters wide, or a CharField that is only 6 or 8 characters wide, and the edit box still stretches to 15 or 20 characters.
How can I tell the admin how wide a textbox should be, or the height of a TextField edit box?
|
You should use ModelAdmin.formfield_overrides.
It is quite easy - in admin.py, define:
class YourModelAdmin(admin.ModelAdmin):
formfield_overrides = {
models.CharField: {'widget': TextInput(attrs={'size':'20'})},
models.TextField: {'widget': Textarea(attrs={'rows':4, 'cols':40})},
}
admin.site.register(YourModel, YourModelAdmin)
Don't forget that you should import appropriate classes -- in this case:
from django.forms import TextInput, Textarea
from django.db import models
|
Send an xmpp message using a python library
|
How can I send an XMPP message using one of the following Python libraries: wokkel, xmpppy, or jabber.py ?
I think I am aware of the pseudo-code, but so far have not been able to get one running correctly. This is what I have tried so far:
Call some API and pass the servername and port number to connect to that server.
Call some API and pass the username, password to construct a JID object.
Authenticate with that JID.
Construct a Message object and call some API and pass that message obj in the argument.
Call some send API.
It seems easy enough in concept, but the devil is somewhere in the details. Please show a sample snippet if that's possible.
|
This is the simplest possible XMPP client. It will send a 'hello :)' message. I'm using xmpppy in the example and connecting to the Google Talk server. I think the example is self-explanatory:
import xmpp
username = 'username'
passwd = 'password'
to='name@example.com'
msg='hello :)'
client = xmpp.Client('gmail.com')
client.connect(server=('talk.google.com',5223))
client.auth(username, passwd, 'botty')
client.sendInitPresence()
message = xmpp.Message(to, msg)
message.setAttr('type', 'chat')
client.send(message)
|
gnuplot vs Matplotlib
|
I've started on a project graphing Tomcat logs using gnuplot-py, specifically correlating particular requests with memory allocation and garbage collection. What is the collective wisdom on gnuplot-py vs Matplotlib for Python graphing? Are there better graphing libraries out there I haven't heard of?
My general considerations are:
While gnuplot has large amounts of documentation, gnuplot-py doesn't. How good are the documentation and community for Matplotlib?
Are there things which gnuplot can do, but gnuplot-py can't?
Does Matplotlib have better Python support?
Are there are big show stopping bugs in either? Annoyances?
Currently gnuplot is graphing 100,000's of points, I'm planning on scaling this up to millions. Should I expect problems? How well does Matplotlib handle this?
Ease of use, turnaround time for gnuplot vs Matplotlib?
How easy would it be to port existing gnuplot-py code to Matplotlib?
How would you approach this task?
|
you can check the documentation yourself. I find it quite comprehensive.
I have very little experience with gnuplot-py, so I can not say.
Matplotlib is written in and designed specifically for Python, so it fits very nicely with Python idioms and such.
Matplotlib is a mature project. NASA uses it for some stuff.
I've plotted tens of millions of points in Matplotlib, and it still looked beautiful and responded quickly.
beyond the object-oriented way of using Matplotlib is the pylab interface, which makes plotting as easy as it is in MATLAB -- that is, very easy (see the short sketch at the end of this answer).
as for porting, I have no idea.
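A minimal pylab sketch (assuming matplotlib is installed):
from pylab import plot, show
plot([1, 2, 3, 4], [1, 4, 9, 16])  # x values, y values
show()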
|
detecting idle time using python
|
How do I detect if the system is idle on Windows using Python (i.e. no keyboard or mouse activity)?
This has already been asked before, but there doesn't seem to be a GetLastInputInfo in the pywin32 module.
|
from ctypes import Structure, windll, c_uint, sizeof, byref
class LASTINPUTINFO(Structure):
_fields_ = [
('cbSize', c_uint),
('dwTime', c_uint),
]
def get_idle_duration():
lastInputInfo = LASTINPUTINFO()
lastInputInfo.cbSize = sizeof(lastInputInfo)
windll.user32.GetLastInputInfo(byref(lastInputInfo))
millis = windll.kernel32.GetTickCount() - lastInputInfo.dwTime
return millis / 1000.0
Call get_idle_duration() to get idle time in seconds.
|
Detect if a NumPy array contains at least one non-numeric value?
|
I need to write a function which will detect if the input contains at least one value which is non-numeric. If a non-numeric value is found I will raise an error (because the calculation should only return a numeric value). The number of dimensions of the input array is not known in advance - the function should give the correct value regardless of ndim. As an extra complication the input could be a single float or numpy.float64 or even something oddball like a zero-dimensional array.
The obvious way to solve this is to write a recursive function which iterates over every iterable object in the array until it finds a non-iterable. It will apply the numpy.isnan() function over every non-iterable object. If at least one non-numeric value is found then the function will return False immediately. Otherwise, if all the values in the iterable are numeric, it will eventually return True.
That works just fine, but it's pretty slow and I expect that NumPy has a much better way to do it. What is an alternative that is faster and more numpyish?
Here's my mockup:
def contains_nan( myarray ):
"""
@param myarray : An n-dimensional array or a single float
@type myarray : numpy.ndarray, numpy.array, float
@returns: bool
Returns true if myarray is numeric or only contains numeric values.
Returns false if at least one non-numeric value exists
Not-A-Number is given by the numpy.isnan() function.
"""
return True
|
This should be faster than iterating and will work regardless of shape.
numpy.isnan(myarray).any()
Edit: 30x faster:
import timeit
s = 'import numpy;a = numpy.arange(10000.).reshape((100,100));a[10,10]=numpy.nan'
ms = [
'numpy.isnan(a).any()',
'any(numpy.isnan(x) for x in a.flatten())']
for m in ms:
print " %.2f s" % timeit.Timer(m, s).timeit(1000), m
Results:
0.11 s numpy.isnan(a).any()
3.75 s any(numpy.isnan(x) for x in a.flatten())
Bonus: it works fine for non-array NumPy types:
>>> a = numpy.float64(42.)
>>> numpy.isnan(a).any()
False
>>> a = numpy.float64(numpy.nan)
>>> numpy.isnan(a).any()
True
|
Using subprocess to run Python script on Windows
|
Is there a simple way to run a Python script on Windows/Linux/OS X?
On the latter two, subprocess.Popen("/the/script.py") works, but on Windows I get the following error:
Traceback (most recent call last):
File "test_functional.py", line 91, in test_functional
log = tvnamerifiy(tmp)
File "test_functional.py", line 49, in tvnamerifiy
stdout = PIPE
File "C:\Python26\lib\subprocess.py", line 595, in __init__
errread, errwrite)
File "C:\Python26\lib\subprocess.py", line 804, in _execute_child
startupinfo)
WindowsError: [Error 193] %1 is not a valid Win32 application
monkut's comment: The use case isn't clear. Why use subprocess to run a python script? Is there something preventing you from importing the script and calling the necessary function?
I was writing a quick script to test the overall functionality of a Python-command-line tool (to test it on various platforms). Basically it had to create a bunch of files in a temp folder, run the script on this and check the files were renamed correctly.
I could have imported the script and called the function, but since it relies on sys.argv and uses sys.exit(), I would have needed to do something like..
import sys
import tvnamer
sys.argv.extend(["-b", "/the/folder"])
try:
tvnamer.main()
except BaseException, errormsg:
print type(errormsg)
Also, I wanted to capture the stdout and stderr for debugging incase something went wrong.
Of course a better way would be to write the script in a more unit-testable way, but the script is basically "done" and I'm doing a final batch of testing before doing a "1.0" release (after which I'm going to do a rewrite/restructure, which will be far tidier and more testable)
Basically, it was much easier to simply run the script as a process, after finding the sys.executable variable. I would have written it as a shell-script, but that wouldn't have been cross-platform. The final script can be found here
|
Just found sys.executable - the full path to the current Python executable, which can be used to run the script (instead of relying on the shebang, which obviously doesn't work on Windows)
import sys
import subprocess
theproc = subprocess.Popen([sys.executable, "myscript.py"])
theproc.communicate()
|
Django forms, inheritance and order of form fields
|
I'm using Django forms in my website and would like to control the order of the fields.
Here's how I define my forms:
class edit_form(forms.Form):
summary = forms.CharField()
description = forms.CharField(widget=forms.TextArea)
class create_form(edit_form):
name = forms.CharField()
The name is immutable and should only be listed when the entity is created. I use inheritance for consistency and DRY principles. What happens, which is not erroneous and in fact totally expected, is that the name field is listed last in the view/html, but I'd like the name field to be above summary and description. I do realize that I could easily fix it by copying summary and description into create_form and lose the inheritance, but I'd like to know if this is possible.
Why? Imagine you've got 100 fields in edit_form and have to add 10 fields on the top in create_form - copying and maintaining the two forms wouldn't look so sexy then. (This is not my case, I'm just making up an example)
So, how can I override this behavior?
Edit:
Apparently there's no proper way to do this without going through nasty hacks (fiddling with the .fields attribute). The .fields attribute is a SortedDict (one of Django's internal data structures) which doesn't provide any way to reorder key:value pairs. It does however provide a way to insert items at a given index, but that would move the items from the class members into the constructor. This method would work, but make the code less readable. The only other way I see fit is to modify the framework itself, which is less-than-optimal in most situations.
In short the code would become something like this:
class edit_form(forms.Form):
summary = forms.CharField()
description = forms.CharField(widget=forms.TextArea)
class create_form(edit_form):
def __init__(self,*args,**kwargs):
forms.Form.__init__(self,*args,**kwargs)
self.fields.insert(0,'name',forms.CharField())
That shut me up :)
|
I had this same problem and I found another technique for reordering fields in the Django CookBook:
class EditForm(forms.Form):
summary = forms.CharField()
description = forms.CharField(widget=forms.TextArea)
class CreateForm(EditForm):
name = forms.CharField()
def __init__(self, *args, **kwargs):
super(CreateForm, self).__init__(*args, **kwargs)
self.fields.keyOrder = ['name', 'summary', 'description']
|
django error 'too many values to unpack'
|
I'm learning Django by building a simple recipes app. I have a 1-table model using the 'choices' field option for recipe categories rather than using a 2nd 'categories' table and a foreign key relationship. So I created the db table via syncdb and then loaded the table with test data. When I go to admin and click on the 'Recipes' link in an attempt to view recipes, I get the following error:
Template error
In template /var/lib/python-support/python2.6/django/contrib/admin/templates/admin/change_list.html, error at line 34
Caught an exception while rendering: too many values to unpack
If anyone can shed light on this cryptic error that would be great. Db is Sqlite. Django version is 1.0. The model is listed below:
from django.db import models
class Recipe(models.Model):
CATEGORY_CHOICES = (
(1, u'Appetizer'),
(2, u'Bread'),
(3, u'Dessert'),
(4, u'Drinks'),
(5, u'Main Course'),
(6, u'Salad'),
(7, u'Side Dish'),
(8, u'Soup'),
(9, u'Sauce/Marinade'),
(10, u'Other'),
)
name = models.CharField(max_length=255)
submitter = models.CharField(max_length=40)
date = models.DateTimeField()
category = models.SmallIntegerField(choices=CATEGORY_CHOICES)
ingredients = models.TextField()
directions = models.TextField()
comments = models.TextField(null=True, blank=True)
|
Edit: Updated in light of kibibu's correction.
I have encountered what I believe is this same error, producing the message:
Caught ValueError while rendering: too many values to unpack
My form class was as follows:
class CalcForm(forms.Form):
item = forms.ChoiceField(choices=(('17815', '17816')))
Note that my choices value here is a tuple. Django's official documentation reads as follows for the choices arg:
An iterable (e.g., a list or tuple) of 2-tuples to use as choices for
this field. This argument accepts the same formats as the choices
argument to a model field.
src: https://docs.djangoproject.com/en/1.3/ref/forms/fields/#django.forms.ChoiceField.choices
This problem was solved by my observing the documentation and using a list of tuples:
class CalcForm(forms.Form):
item = forms.ChoiceField(choices=[('17815', '17816')])
Do note that while the docs state any iterable of the correct form can be used, a tuple of 2-tuples did not work:
item = forms.ChoiceField(choices=(('17815', '17816'), ('123', '456')))
This produced the same error as before.
Lesson: bugs happen.
|
What is the underlying data structure for Python lists?
|
What is the typical underlying data structure used to implement Python's built-in list data type?
|
List objects are implemented as
arrays. They are optimized for fast
fixed-length operations and incur O(n)
memory movement costs for pop(0) and
insert(0, v) operations which change
both the size and position of the
underlying data representation.
See also:
http://docs.python.org/library/collections.html#collections.deque
Btw, I find it interesting that the Python tutorial on data structures recommends using pop(0) to simulate a queue but does not mention O(n) or the deque option.
http://docs.python.org/tutorial/datastructures.html#using-lists-as-queues
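For the queue case specifically, a small sketch of the deque alternative:
from collections import deque

queue = deque(['a', 'b', 'c'])
queue.append('d')        # enqueue on the right
first = queue.popleft()  # dequeue from the left in O(1), unlike list.pop(0)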
|
Python: Looping through all but the last item of a list
|
I would like to loop through a list checking each item against the one following it.
Is there a way I can loop through all but the last item using for x in y? I would prefer to do it without using indexes if I can.
Note
freespace answered my actual question, which is why I accepted the answer, but SilentGhost answered the question I should have asked.
Apologies for the confusion.
|
for x in y[:-1]
If y is a generator, then the above will not work.
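Since the stated goal is comparing each item with the one that follows it, a related sketch pairs the list with a shifted copy of itself (like the slice above, this assumes y is a list rather than a generator):
for current, following in zip(y, y[1:]):
    # compare current with following here
    ...
This also naturally stops before the last item, which has nothing after it.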
|
How does Python OOP compare to PHP OOP?
|
I'm basically wondering if Python has any OOP shortcomings like PHP does. PHP has been developing their OOP practices for the last few versions. It's getting better in PHP but it's still not perfect. I'm new to Python and I'm just wondering if Python's OOP support is better or just comparable.
If there are some issues in Python OOP which don't follow proper OOP practices I would definitely like to know those. PHP for instance, doesn't allow for multiple inheritance as far as I'm aware.
Thanks Everyone!
Edit:
How about support for public and private members? Or support for variable types? I think these are important when building OOP software.
|
I would say that Python's OOP support is much better given the fact that it was introduced into the language in its infancy as opposed to PHP which bolted OOP onto an existing procedural model.
|
Python equivalent of maplist?
|
What's the best Python equivalent of Common Lisp's maplist function? From the maplist documentation:
maplist is like mapcar except that
function is applied to successive
sublists of the lists. function is
first applied to the lists themselves,
and then to the cdr of each list, and
then to the cdr of the cdr of each
list, and so on.
Example (pseudo-code, not tested):
>>> def p(x): return x
>>> maplist(p, [1,2,3])
[[1, 2, 3], [2, 3], [3]]
Note: the arguments passed to p in the example above would be the lists [1, 2, 3], [2, 3], [3]; i.e., p is not applied to the elements of those lists. E.g.:
>>> maplist(lambda l: list(reversed(l)), [1,2,3])
[[3, 2, 1], [3, 2], [3]]
|
You can write a little function for that
def maplist(func, values):
return [map(func, values[i:]) for i in xrange(len(values))]
>>> maplist(lambda a: a* 2, [1,2,3])
[[2, 4, 6], [4, 6], [6]]
[Edit]
if you want to apply the function on the sublists you can change the function to this:
def maplist(func, values):
return [func(values[i:]) for i in xrange(len(values))]
>>> maplist(lambda l: list(reversed(l)), [1,2,3])
[[3, 2, 1], [3, 2], [3]]
|
Relative paths in Python
|
I'm building a simple helper script for work that will copy a couple of template files in our code base to the current directory. I don't, however, have the absolute path to the directory where the templates are stored. I do have a relative path from the script, but when I call the script it treats that as a path relative to the current working directory. Is there a way to specify that this relative path is from the location of the script instead?
|
In the file that has the script, you want to do something like this:
import os
dir = os.path.dirname(__file__)
filename = os.path.join(dir, 'relative/path/to/file/you/want')  # no leading slash, or join would discard dir
This will give you the absolute path to the file you're looking for. Note that if you're using setuptools, you should probably use its package resources API instead.
UPDATE: I'm responding to a comment here so I can paste a code sample. :-)
Am I correct in thinking that __file__ is not always available (e.g. when you run the file directly rather than importing it)?
I'm assuming you mean the __main__ script when you mention running the file directly. If so, that doesn't appear to be the case on my system (python 2.5.1 on OS X 10.5.7):
#foo.py
import os
print os.getcwd()
print __file__
#in the interactive interpreter
>>> import foo
/Users/jason
foo.py
#and finally, at the shell:
~ % python foo.py
/Users/jason
foo.py
However, I do know that there are some quirks with __file__ on C extensions. For example, I can do this on my Mac:
>>> import collections #note that collections is a C extension in Python 2.5
>>> collections.__file__
'/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/lib-
dynload/collections.so'
However, this raises an exception on my Windows machine.
|
Are asynchronous Django model queries possible?
|
I'm new to Django, but the application that I have in mind might end up having URLs that look like this:
http://mysite/compare/id_1/id_2
Where "id_1" and "id_2" are identifiers of two distinct Model objects. In the handler for "compare" I'd like to asynchronously, and in parallel, query and retrieve objects id_1 and id_2.
Is there any way to do this using a standard Django syntax? I'm hoping for pseudocode that ends up looking something like this:
import django.async
# Issue the model query, but set it up asynchronously.
# The next 2 lines don't actually touch my database
o1 = Object(id=id_1).async_fetch()
o2 = Object(id=id_2).async_fetch()
# Now that I know what I want to query, fire off a fetch to do them all
# in parallel, and wait for all queries to finish before proceeding.
async.Execute((o2,o2))
# Now the code can use data from o1 and o2 below...
|
There aren't strictly asynchronous operations as you've described, but I think you can achieve the same effect by using django's in_bulk query operator, which takes a list of ids to query.
Something like this for the urls.py:
urlpatterns = patterns('',
(r'^compare/(\d+)/(\d+)/$', 'my.compareview'),
)
And this for the view:
def compareview(request, id1, id2):
# in_bulk returns a dict: { obj_id1: <MyModel instance>,
# obj_id2: <MyModel instance> }
# the SQL pulls all at once, rather than sequentially... arguably
# better than async as it pulls in one DB hit, rather than two
# happening at the same time
comparables = MyModel.objects.in_bulk([id1, id2])
o1, o2 = (comparables.get(id1), comparables.get(id2))
|
My python program executes faster than my java version of the same program. What gives?
|
Update: 2009-05-29
Thanks for all the suggestions and advice. I used your suggestions to make my production code execute 2.5 times faster on average than my best result a couple of days ago. In the end I was able to make the java code the fastest.
Lessons:
My example code below shows the insertion of primitive ints but the production code is actually storing strings (my bad). When I corrected that the python execution time went from 2.8 seconds to 9.6. So right off the bat, the java was actually faster when storing objects.
But it doesn't stop there. I had been executing the java program as follows:
java -Xmx1024m SpeedTest
But if you set the initial heap size as follows you get a huge improvement:
java -Xms1024m -Xmx1024m SpeedTest
This simple change reduced the execution time by more than 50%. So the final result for my SpeedTest is python 9.6 seconds. Java 6.5 seconds.
Original Question:
I had the following python code:
import time
import sys
def main(args):
iterations = 10000000
counts = set()
startTime = time.time();
for i in range(0, iterations):
counts.add(i)
totalTime = time.time() - startTime
print 'total time =',totalTime
print len(counts)
if __name__ == "__main__":
main(sys.argv)
And it executed in about 3.3 seconds on my machine but I wanted to make it faster so I decided to program it in java. I assumed that because java is compiled and is generally considered to be faster than python I would see some big paybacks.
Here is the java code:
import java.util.*;
class SpeedTest
{
public static void main(String[] args)
{
long startTime;
long totalTime;
int iterations = 10000000;
HashSet counts = new HashSet((2*iterations), 0.75f);
startTime = System.currentTimeMillis();
for(int i=0; i<iterations; i++)
{
counts.add(i);
}
totalTime = System.currentTimeMillis() - startTime;
System.out.println("TOTAL TIME = "+( totalTime/1000f) );
System.out.println(counts.size());
}
}
So this java code does basically the same thing as the python code. But it executed in 8.3 seconds instead of 3.3.
I have extracted this simple example from a real-world example to simplify things. The critical element is that I have (set or hashSet) that ends up with a lot of members much like the example.
Here are my questions:
How come my python implementation is faster than my java implementation?
Is there a better data structure to use than the hashSet (java) to hold a unique collection?
What would make the python implementation faster?
What would make the java implementation faster?
UPDATE:
Thanks to all who have contributed so far. Please allow me to add some details.
I have not included my production code because it is quite complex and would generate a lot of distraction. The case I present above is the most simplified possible. By that I mean that the java put call seems to be much slower than the python set's add().
The java implementation of the production code is also about 2.5 - 3 times slower than the python version -- just like the above.
I am not concerned about vm warmup or startup overhead. I just want to compare the code from my startTime to my totalTime. Please do not concern yourselves with other matters.
I initialized the hashset with more than enough buckets so that it should never have to rehash. (I will always know ahead of time how many elements the collection will ultimately contain.) I suppose one could argue that I should have initialized it to iterations/0.75. But if you try it you will see that execution time is not significantly impacted.
I set Xmx1024m for those that were curious (my machine has 4GB of ram).
I am using java version: Java(TM) SE Runtime Environment (build 1.6.0_13-b03).
In the production version I am storing a string (2-15 chars) in the hashSet, so I cannot use primitives, although that is an interesting case.
I have run the code many, many times. I have very high confidence that the python code is between 2.5 and 3 times faster than the java code.
|
You're not really testing Java vs. Python, you're testing java.util.HashSet using autoboxed Integers vs. Python's native set and integer handling.
Apparently, the Python side in this particular microbenchmark is indeed faster.
I tried replacing HashSet with TIntHashSet from GNU trove and achieved a speedup factor between 3 and 4, bringing Java slightly ahead of Python.
The real question is whether your example code is really as representative of your application code as you think. Have you run a profiler and determined that most of the CPU time is spent in putting a huge number of ints into a HashSet? If not, the example is irrelevant. Even if the only difference is that your production code stores other objects than ints, their creation and the computation of their hashcode could easily dominate the set insertion (and totally destroy Python's advantage in handling ints specially), making this whole question pointless.
|
Case insensitive replace
|
What's the easiest way to do a case-insensitive string replacement in Python?
|
The string type doesn't support this. You're probably best off using the regular expression sub method with the re.IGNORECASE option.
>>> import re
>>> insensitive_hippo = re.compile(re.escape('hippo'), re.IGNORECASE)
>>> insensitive_hippo.sub('giraffe', 'I want a hIPpo for my birthday')
'I want a giraffe for my birthday'
|
Can a variable number of arguments be passed to a function?
|
In a similar way to using varargs in C or C++:
fn(a, b)
fn(a, b, c, d, ...)
|
Yes.
This is simple and works if you disregard keyword arguments:
def manyArgs(*arg):
print "I was called with", len(arg), "arguments:", arg
>>> manyArgs(1)
I was called with 1 arguments: (1,)
>>> manyArgs(1, 2,3)
I was called with 3 arguments: (1, 2, 3)
As you can see, Python will give you a single tuple with all the arguments.
For keyword arguments you need to accept those as a separate actual argument, as shown in Skurmedel's answer.
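A minimal sketch of accepting arbitrary keyword arguments alongside the positional ones:
def many_args(*args, **kwargs):
    print "positional:", args
    print "keyword:", kwargs

>>> many_args(1, 2, x=3)
positional: (1, 2)
keyword: {'x': 3}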
|
How to find a thread id in Python
|
I have a multi-threading Python program, and a utility function, writeLog(message), that writes out a timestamp followed by the message. Unfortunately, the resultant log file gives no indication of which thread is generating which message.
I would like writeLog() to be able to add something to the message to identify which thread is calling it. Obviously I could just make the threads pass this information in, but that would be a lot more work. Is there some thread equivalent of os.getpid() that I could use?
|
thread.get_ident() works, though thread is deprecated, or threading.current_thread() (or threading.currentThread() for Python < 2.6).
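For example, a sketch of how writeLog might tag each line (the exact format is made up):
import threading, time

def writeLog(message):
    name = threading.current_thread().name  # threading.currentThread().getName() on Python < 2.6
    print "%s [%s] %s" % (time.strftime('%Y-%m-%d %H:%M:%S'), name, message)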
|
When to use While or the For in python
|
I am currently having trouble deciding when I should use a while loop or a for loop in Python. It looks like people prefer the for loop (fewer lines of code?). Is there any specific situation in which I should use one or the other? Is it a matter of personal preference? The code I have read so far made me think there are big differences between them.
|
Yes, there is a huge difference between while and for.
The for statement iterates through a collection or iterable object or generator function.
The while statement simply loops until a condition is False.
It isn't preference. It's a question of what your data structures are.
Often, we represent the values we want to process as a range (an actual list), or xrange (which generates the values). This gives us a data structure tailor-made for the for statement.
Generally, however, we have a ready-made collection: a set, tuple, list, map or even a string is already an iterable collection, so we simply use a for loop.
In a few cases, we might want some functional-programming processing done for us, in which case we can apply that transformation as part of iteration. The sorted and enumerate functions apply a transformation on an iterable that fits naturally with the for statement.
If you don't have a tidy data structure to iterate through, or you don't have a generator function that drives your processing, you must use while.
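A tiny illustration of the two cases (the data here is made up):
words = ["apple", "pear", "banana"]
# for: we already have an iterable, so the data drives the loop
for word in words:
    print word
# while: we only have a condition, so we loop until it becomes False
total = 0
while total < 10:
    total += 3
print total   # 12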
|
Sending Multipart html emails which contain embedded images
|
I've been playing around with the email module in python but I want to be able to know how to embed images which are included in the html.
So for example if the body is something like
<img src="../path/image.png"></img>
I would like to embed image.png into the email, and the src attribute should be replaced with content-id. Does anybody know how to do this?
|
Here is an example I found.
Recipe 473810: Send an HTML email with embedded image and plain text alternate:
HTML is the method of choice for those wishing to send emails with rich text, layout and graphics. Often it is desirable to embed the graphics within the message so recipients can display the message directly, without further downloads.
Some mail agents don't support HTML or their users prefer to receive plain text messages. Senders of HTML messages should include a plain text message as an alternate for these users.
This recipe sends a short HTML message with a single embedded image and an alternate plain text message.
# Send an HTML email with an embedded image and a plain text message for
# email clients that don't want to display the HTML.
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
from email.MIMEImage import MIMEImage
# Define these once; use them twice!
strFrom = 'from@example.com'
strTo = 'to@example.com'
# Create the root message and fill in the from, to, and subject headers
msgRoot = MIMEMultipart('related')
msgRoot['Subject'] = 'test message'
msgRoot['From'] = strFrom
msgRoot['To'] = strTo
msgRoot.preamble = 'This is a multi-part message in MIME format.'
# Encapsulate the plain and HTML versions of the message body in an
# 'alternative' part, so message agents can decide which they want to display.
msgAlternative = MIMEMultipart('alternative')
msgRoot.attach(msgAlternative)
msgText = MIMEText('This is the alternative plain text message.')
msgAlternative.attach(msgText)
# We reference the image in the IMG SRC attribute by the ID we give it below
msgText = MIMEText('<b>Some <i>HTML</i> text</b> and an image.<br><img src="cid:image1"><br>Nifty!', 'html')
msgAlternative.attach(msgText)
# This example assumes the image is in the current directory
fp = open('test.jpg', 'rb')
msgImage = MIMEImage(fp.read())
fp.close()
# Define the image's ID as referenced above
msgImage.add_header('Content-ID', '<image1>')
msgRoot.attach(msgImage)
# Send the email (this example assumes SMTP authentication is required)
import smtplib
smtp = smtplib.SMTP()
smtp.connect('smtp.example.com')
smtp.login('exampleuser', 'examplepass')
smtp.sendmail(strFrom, strTo, msgRoot.as_string())
smtp.quit()
|
Retrieving all Cookies in Python
|
How do I read back all of the cookies in Python without knowing their names?
|
Not sure if this is what you are looking for, but here is a simple example where you put cookies in a cookiejar and read them back:
from urllib2 import Request, build_opener, HTTPCookieProcessor, HTTPHandler
import cookielib
#Create a CookieJar object to hold the cookies
cj = cookielib.CookieJar()
#Create an opener to open pages using the http protocol and to process cookies.
opener = build_opener(HTTPCookieProcessor(cj), HTTPHandler())
#create a request object to be used to get the page.
req = Request("http://www.about.com")
f = opener.open(req)
#see the first few lines of the page
html = f.read()
print html[:50]
#Check out the cookies
print "the cookies are: "
for cookie in cj:
print cookie
|
Learning parser in python
|
I recall I have read about a parser which you just have to feed some sample lines, for it to know how to parse some text.
It just determines the difference between two lines to work out which parts are variable. I thought it was written in Python, but I'm not sure. Does anyone know what library that was?
|
You probably mean TemplateMaker. I haven't tried it yet, but it builds on well-researched longest-common-substring algorithms and thus should work reasonably well... If you are interested in different (more complex) approaches, you can easily find a lot of material on Google Scholar using the query "wrapper induction" or "template induction".
|
Syntax Highlight for Mako in Eclipse or TextMate?
|
Does anyone know of a syntax highlight for Mako templates for Eclipse or for TextMate?
I know that there is a .mako syntax highlighter for the default text editor in Ubuntu.
|
I just did some googlin'. There is a Mako bundle.
I installed it under ~/Library/Application Support/TextMate/Bundles/ like so:
cd ~/Library/Application\ Support/TextMate/Bundles/
svn co http://svn.makotemplates.org/contrib/textmate/Mako.tmbundle
In TextMate, I did Bundles | Bundle Editor | Reload Bundles, and Mako showed up in the menu.
It adds new HTML language variant: HTML (Mako), snippets and stuff like that.
Hope this helps.
|
Check if input is a list/tuple of strings or a single string
|
I've a method that I want to be able to accept either a single string (a path, but not necessarily one that exists on the machine running the code) or a list/tuple of strings.
Given that strings act as lists of characters, how do I tell which the method has received?
I'd like to be able to accept either standard or unicode strings for a single entry, and either lists or tuples for multiple, so isinstance doesn't seem to be the answer unless I'm missing a clever trick with it (like taking advantage of common ancestor classes?).
Python version is 2.5
|
You can check if a variable is a string or unicode string with
isinstance(some_object, basestring)
This will return True for both strings and unicode strings
Edit:
You could do something like this:
if isinstance(some_object, basestring):
...
elif all(isinstance(item, basestring) for item in some_object): # check iterable for stringness of all items. Will raise TypeError if some_object is not iterable
...
else:
raise TypeError # or something along that line
Stringness is probably not a word, but I hope you get the idea
|
Keeping a session in python while making HTTP requests
|
I need to write a python script that makes multiple HTTP requests to the same site. Unless I'm wrong (and I may very well be) urllib reauthenticates for every request. For reasons I won't go into I need to be able to authenticate once and then use that session for the rest of my requests.
I'm using python 2.3.4
|
Use Requests library. From http://docs.python-requests.org/en/latest/user/advanced/#session-objects :
The Session object allows you to persist certain parameters across
requests. It also persists cookies across all requests made from the
Session instance.
s = requests.session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get("http://httpbin.org/cookies")
print r.text
# '{"cookies": {"sessioncookie": "123456789"}}'
|
Best way to retrieve variable values from a text file - Python - Json
|
Referring on this question, I have a similar -but not the same- problem..
On my way, I'll have some text file, structured like:
var_a: 'home'
var_b: 'car'
var_c: 15.5
And I need that python read the file and then create a variable named var_a with value 'home', and so on.
Example:
#python stuff over here
getVarFromFile(filename) #this is the function that im looking for
print var_b
#output: car, as string
print var_c
#output 15.5, as number.
Is this possible? I mean, can it even keep the variable's type?
Note that I have full freedom over the text file structure; I can use whatever format I like if the one I proposed isn't the best.
EDIT: the ConfigParser can be a solution, but I don't like it so much, because in my script I'll have then to refer to the variables in the file with
config.get("set", "var_name")
But what I'd love is to refer to the variable directly, as if I had declared it in the Python script...
There is a way to import the file as a python dictionary?
Oh, last thing, keep in mind that I don't know exactly how many variables would I have in the text file.
Edit 2: I'm very interested in stephan's JSON solution, because that way the text file could be read easily by other languages as well (PHP, then via AJAX JavaScript, for example), but I'm failing somewhere while applying that solution:
#for the example, i dont load the file but create a var with the supposed file content
file_content = "'var_a': 4, 'var_b': 'a string'"
mydict = dict(file_content)
#Error: ValueError: dictionary update sequence element #0 has length 1; 2 is required
file_content_2 = "{'var_a': 4, 'var_b': 'a string'}"
mydict_2 = dict(json.dump(file_content_2, True))
#Error:
#Traceback (most recent call last):
#File "<pyshell#5>", line 1, in <module>
#mydict_2 = dict(json.dump(file_content_2, True))
#File "C:\Python26\lib\json\__init__.py", line 181, in dump
#fp.write(chunk)
#AttributeError: 'bool' object has no attribute 'write'
In what kind of issues can I fall with the JSON format?
And, how can I read a JSON array in a text file, and transform it in a python dict?
P.S: I don't like the solution using .py files; I'll prefer .txt, .inc, .whatever is not restrictive to one language.
|
But what I'd love is to refer to the variable directly, as if I had declared it in the Python script...
Assuming you're happy to change your syntax slightly, just use python and import the "config" module.
# myconfig.py:
var_a = 'home'
var_b = 'car'
var_c = 15.5
Then do
from myconfig import *
And you can reference them by name in your current context.
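If you do want the JSON route from your second edit, a minimal sketch (the file name and keys are illustrative) is to make the file valid JSON (double quotes, braces) and read it with json.load rather than json.dump:
import json

# vars.txt contains: {"var_a": "home", "var_b": "car", "var_c": 15.5}
with open("vars.txt") as f:
    variables = json.load(f)   # a dict; value types (string, number) are preserved

print variables["var_c"]       # 15.5, as a number
# to refer to the names directly rather than through the dict (use with care):
globals().update(variables)
print var_c                    # 15.5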
|
How can I remove the top and right axis in matplotlib?
|
Instead of the default "boxed" axis style I want to have only the left and bottom axis, i.e.:
+------+ |
| | |
| | ---> |
| | |
+------+ +-------
This should be easy, but I can't find the necessary options in the docs.
|
Alternatively, this
def simpleaxis(ax):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
seems to achieve the same effect on an axis without losing rotated label support.
(Matplotlib 1.0.1; solution inspired by this).
|
python queue & multiprocessing queue: how they behave?
|
This sample code works (I can write something in the file):
from multiprocessing import Process, Queue
queue = Queue()
def _printer(self, queue):
queue.put("hello world!!")
def _cmdDisp(self, queue):
f = file("Cmd.log", "w")
print >> f, queue.get()
f.close()
whereas this sample does not work (error message: 'module' object is not callable):
import Queue
queue = Queue()
def _printer(self, queue):
queue.put("hello world!!")
def _cmdDisp(self, queue):
f = file("Cmd.log", "w")
print >> f, queue.get()
f.close()
and this third sample does not work either (nothing gets written to the file):
import Queue
queue = Queue.Queue()
def _printer(self, queue):
queue.put("hello world!!")
def _cmdDisp(self, queue):
f = file("Cmd.log", "w")
print >> f, queue.get()
f.close()
Can someone explain the differences, and what the right way to do this is?
|
For your second example, you already gave the explanation yourself---Queue is a module, which cannot be called.
For the third example: I assume that you use Queue.Queue together with multiprocessing. A Queue.Queue will not be shared between processes. If the Queue.Queue is declared before the processes then each process will receive a copy of it which is then independent of every other process. Items placed in the Queue.Queue by the parent before starting the children will be available to each child. Items placed in the Queue.Queue by the parent after starting the child will only be available to the parent. Queue.Queue is made for data interchange between different threads inside the same process (using the threading module). The multiprocessing queues are for data interchange between different Python processes. While the API looks similar (it's designed to be that way), the underlying mechanisms are fundamentally different.
multiprocessing queues exchange data by pickling (serializing) objects and sending them through pipes.
Queue.Queue uses a data structure that is shared between threads and locks/mutexes for correct behaviour.
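To make the difference concrete, here is a minimal runnable sketch of handing a multiprocessing.Queue to a child process (names are illustrative):
from multiprocessing import Process, Queue

def printer(queue):
    queue.put("hello world!!")

if __name__ == "__main__":
    queue = Queue()
    p = Process(target=printer, args=(queue,))
    p.start()
    print queue.get()   # blocks until the child process has put something
    p.join()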
|
Jump into a Python Interactive Session mid-program?
|
Hey I was wondering... I am using the pydev with eclipse and I'm really enjoying the powerful debugging features, but I was wondering:
Is it possible to set a breakpoint in eclipse and jump into the interactive python interpreter during execution?
I think that would be pretty handy ;)
edit: I want to emphasize that my goal is not to jump into a debugger. pydev/eclipse have a great debugger, and I can just look at the traceback and set break points.
What I want is to execute a script and jump into an interactive python interpreter during execution so I can do things like...
poke around
check the values of things
manipulate variables
figure out some code before I add it to the app
I know you can do this all with a debugger, but I can do it faster in the interactive interpreter because I can try something, see that it didn't work, and try something else without having to get the app back to the point of executing that code again.
|
So roughly a year on from the OP's question, PyDev has this capability built in. I am not sure when this feature was introduced, but all I know is I've spent the last ~2hrs Googling... configuring iPython and whatever (which was looking like it would have done the job), but only to realise Eclipse/PyDev has what I want ootb.
As soon as you hit a breakpoint in debug mode, the console is right there ready and waiting!
I only didn't notice this as there is no prompt or blinking cursor; I had wrongly assumed it was a standard, output-only, console... but it's not. It even has code-completion.
Great stuff, see http://pydev.org/manual_adv_debug_console.html for more details.
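As an aside, if you ever want the same thing outside of PyDev, the standard library's code module can drop you into an interactive session at an arbitrary point. A minimal sketch (the function is made up for illustration):
import code

def compute(x):
    y = x * 2
    # opens an interactive prompt with x and y in scope; Ctrl-D (Ctrl-Z on Windows) resumes
    code.interact(local=locals())
    return y

compute(21)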
|
Why does defining __getitem__ on a class make it iterable in python?
|
Why does defining __getitem__ on a class make it iterable?
For instance if I write:
class b:
def __getitem__(self, k):
return k
cb = b()
for k in cb:
print k
I get the output:
0
1
2
3
4
5
6
7
8
...
I would really expect to see an error returned from "for k in cb:"
|
Iteration's support for __getitem__ can be seen as a "legacy feature" which allowed smoother transition when PEP234 introduced iterability as a primary concept. It only applies to classes without __iter__ whose __getitem__ accepts integers 0, 1, &c, and raises IndexError once the index gets too high (if ever), typically "sequence" classes coded before __iter__ appeared (though nothing stops you from coding new classes this way too).
Personally, I would rather not rely on this in new code, though it's not deprecated nor is it going away (works fine in Python 3 too), so this is just a matter of style and taste ("explicit is better than implicit" so I'd rather explicitly support iterability rather than rely on __getitem__ supporting it implicitly for me -- but, not a biggie).
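For illustration, a small sketch of a __getitem__-only class that terminates properly by raising IndexError:
class Squares:
    def __getitem__(self, k):
        if k >= 5:
            raise IndexError   # signals the for loop to stop
        return k * k

for value in Squares():
    print value   # 0, 1, 4, 9, 16 -- then iteration stops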
|
Parsing HTTP User-Agent string
|
What is the best method to parse a User-Agent string in Python to reliably detect
Browser
Browser version
OS
Or perhaps any helper library that does it
|
Answering my own question ;)
Finally I decided to go by suggestion#1 i.e. write your own. And I am happy with the outcome. Please feel free to use/modify/send me patch etc.
It's here -> http://pypi.python.org/pypi/httpagentparser
|
how to override the verbose name of a superclass model field in django
|
Let's say that I have a model Foo that inherits from SuperFoo:
class SuperFoo(models.Model):
name = models.CharField('name of SuperFoo instance', max_length=50)
...
class Foo(SuperFoo):
... # do something that changes verbose_name of name field of SuperFoo
In class Foo, I'd like to override the verbose_name of the name field of SuperFoo. Can I? If not, is the best option setting a label inside the model form definition to get it displayed in a template?
|
A simple hack I have used is:
class SuperFoo(models.Model):
name = models.CharField('name of SuperFoo instance', max_length=50)
...
class Foo(SuperFoo):
... # do something that changes verbose_name of name field of SuperFoo
Foo._meta.get_field('name').verbose_name = 'Whatever'
|
Python Class Inheritance issue
|
I'm playing with Python Class inheritance and ran into a problem where the inherited __init__ is not being executed if called from the sub-class (code below) the result I get from Active Python is:
>>> start
Tom Sneed
Sue Ann
Traceback (most recent call last):
File "C:\Python26\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", line 312, <br>in RunScript
exec codeObject in __main__.__dict__
File "C:\temp\classtest.py", line 22, in <module>
print y.get_emp()
File "C:\temp\classtest.py", line 16, in get_emp
return self.FirstName + ' ' + 'abc'
AttributeError: Employee instance has no attribute 'FirstName'
Here's the code
class Person():
AnotherName = 'Sue Ann'
def __init__(self):
self.FirstName = 'Tom'
self.LastName = 'Sneed'
def get_name(self):
return self.FirstName + ' ' + self.LastName
class Employee(Person):
def __init__(self):
self.empnum = 'abc123'
def get_emp(self):
print self.AnotherName
return self.FirstName + ' ' + 'abc'
x = Person()
y = Employee()
print 'start'
print x.get_name()
print y.get_emp()
|
Three things:
You need to explicitly call the constructor. It isn't called for you automatically like in C++
Use a new-style class inherited from object
With a new-style class, use the super() method available
This will look like:
class Person(object):
AnotherName = 'Sue Ann'
def __init__(self):
super(Person, self).__init__()
self.FirstName = 'Tom'
self.LastName = 'Sneed'
def get_name(self):
return self.FirstName + ' ' + self.LastName
class Employee(Person):
def __init__(self):
super(Employee, self).__init__()
self.empnum = 'abc123'
def get_emp(self):
print self.AnotherName
return self.FirstName + ' ' + 'abc'
Using super is recommended as it will also deal correctly with calling constructors only once in multiple inheritance cases (as long as each class in the inheritance graph also uses super). It's also one less place you need to change code if/when you change what a class is inherited from (for example, you factor out a base-class and change the derivation and don't need to worry about your classes calling the wrong parent constructors). Also on the MI front, you only need one super call to correctly call all the base-class constructors.
|
How do I access the child classes of an object in django without knowing the name of the child class?
|
In Django, when you have a parent class and multiple child classes that inherit from it you would normally access a child through parentclass.childclass1_set or parentclass.childclass2_set, but what if I don't know the name of the specific child class I want?
Is there a way to get the related objects in the parent->child direction without knowing the child class name?
|
(Update: For Django 1.2 and newer, which can follow select_related queries across reverse OneToOneField relations (and thus down inheritance hierarchies), there's a better technique available which doesn't require the added real_type field on the parent model. It's available as InheritanceManager in the django-model-utils project.)
The usual way to do this is to add a ForeignKey to ContentType on the Parent model which stores the content type of the proper "leaf" class. Without this, you may have to do quite a number of queries on child tables to find the instance, depending how large your inheritance tree is. Here's how I did it in one project:
from django.contrib.contenttypes.models import ContentType
from django.db import models
class InheritanceCastModel(models.Model):
"""
An abstract base class that provides a ``real_type`` FK to ContentType.
For use in trees of inherited models, to be able to downcast
parent instances to their child types.
"""
real_type = models.ForeignKey(ContentType, editable=False)
def save(self, *args, **kwargs):
if self._state.adding:
self.real_type = self._get_real_type()
super(InheritanceCastModel, self).save(*args, **kwargs)
def _get_real_type(self):
return ContentType.objects.get_for_model(type(self))
def cast(self):
return self.real_type.get_object_for_this_type(pk=self.pk)
class Meta:
abstract = True
This is implemented as an abstract base class to make it reusable; you could also put these methods and the FK directly onto the parent class in your particular inheritance hierarchy.
This solution won't work if you aren't able to modify the parent model. In that case you're pretty much stuck checking all the subclasses manually.
|
Convert a number range to another range, maintaining ratio
|
I'm trying to convert one range of numbers to another, maintaining ratio. Maths is not my strong point.
I have an image file where point values may range from -16000.00 to 16000.00 though the typical range may be much less. What I want to do is compress these values into the integer range 0-100, where 0 is the value of the smallest point, and 100 is the value of the largest. All points in between should keep a relative ratio even though some precision is being lost I'd like to do this in python but even a general algorithm should suffice. I'd prefer an algorithm where the min/max or either range can be adjusted (ie, the second range could be -50 to 800 instead of 0 to 100).
|
NewValue = (((OldValue - OldMin) * (NewMax - NewMin)) / (OldMax - OldMin)) + NewMin
Or a little more readable:
OldRange = (OldMax - OldMin)
NewRange = (NewMax - NewMin)
NewValue = (((OldValue - OldMin) * NewRange) / OldRange) + NewMin
Or if you want to protect for the case where the old range is 0 (OldMin = OldMax):
OldRange = (OldMax - OldMin)
if (OldRange == 0)
NewValue = NewMin
else
{
NewRange = (NewMax - NewMin)
NewValue = (((OldValue - OldMin) * NewRange) / OldRange) + NewMin
}
Note that in this case we're forced to pick one of the possible new range values arbitrarily. Depending on context, sensible choices could be: NewMin (see sample), NewMax or (NewMin + NewMax) / 2
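Since the question asks for Python, here is a direct translation of the formula above (the function and argument names are made up):
def remap(value, old_min, old_max, new_min, new_max):
    old_range = old_max - old_min
    if old_range == 0:
        return new_min   # arbitrary choice, as discussed above
    new_range = new_max - new_min
    return (((value - old_min) * new_range) / old_range) + new_min

print remap(8000.0, -16000.0, 16000.0, 0, 100)   # 75.0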
|
Why does assigning to my global variables not work in Python?
|
I'm having terrible trouble trying to understand python scoping rules.
With the following script:
a = 7
def printA():
print "Value of a is %d" % (a)
def setA(value):
a = value
print "Inside setA, a is now %d" %(a)
print "Before setA"
printA()
setA(42)
print "After setA"
printA()
Gives the unexpected (to me) output of:
Before setA
Value of a is 7
Inside setA, a is now 42
After setA
Value of a is 7
Where I would expect the last printing of the value of a to be 42, not 7. What am I missing about Python's scope rules for the scoping of global variables?
|
Global variables are special. If you try to assign to a variable a = value inside of a function, it creates a new local variable inside the function, even if there is a global variable with the same name. To instead access the global variable, add a global statement inside the function:
a = 7
def setA(value):
global a # declare a to be a global
a = value # this sets the global value of a
See also Naming and binding for a detailed explanation of Python's naming and binding rules.
|
Equivalent for Python's lambda functions in Java?
|
Can someone please tell me if there is an equivalent for Python's lambda functions in Java?
|
Unfortunately, there are no lambdas in Java. However, you can get almost the same effect (in a really ugly way) with anonymous classes:
interface MyLambda {
void theFunc(); // here we define the interface for the function
}
public class Something {
static void execute(MyLambda l) {
l.theFunc(); // this class just wants to use the lambda for something
}
}
public class Test {
public static void main(String[] args) {
Something.execute(new MyLambda() { // here we create an anonymous class
public void theFunc() { // implementing MyLambda
System.out.println("Hello world!");
}
});
}
}
Obviously these would have to be in separate files :(
|
Getting the last element of a list in Python
|
In Python, how do you get the last element of a list?
|
some_list[-1] is the shortest and most Pythonic.
In fact, you can do much more with this syntax. The some_list[-n] syntax gets the nth-to-last element. So some_list[-1] gets the last element, some_list[-2] gets the second to last, etc, all the way down to some_list[-len(some_list)], which gives you the first element.
You can also set list elements in this way. For instance:
>>> some_list = [1, 2, 3]
>>> some_list[-1] = 5 # Set the last element
>>> some_list[-2] = 3 # Set the second to last element
>>> some_list
[1, 3, 5]
|
How to run one last function before getting killed in Python?
|
Is there any way to run one last command before a running Python script is stopped, whether by being killed by some other script, a keyboard interrupt, etc.?
Thanks for your help!
|
import time
try:
time.sleep(10)
finally:
print "clean up"
clean up
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
KeyboardInterrupt
If you need to catch other OS level interrupts, look at the signal module:
http://docs.python.org/library/signal.html
Signal Example
from signal import *
import sys, time
def clean(*args):
print "clean me"
sys.exit(0)
for sig in (SIGABRT, SIGBREAK, SIGILL, SIGINT, SIGSEGV, SIGTERM):
signal(sig, clean)
time.sleep(10)
|
assertEquals vs. assertEqual in python
|
Is there a difference between assertEquals and assertEqual in the python unittest.TestCase?
And if there is not, why are there two functions? Only for convenience?
|
Good question!
Actually, in Python 2.6, both assertEqual and assertEquals are convenience aliases to failUnlessEqual. The source declares them thus:
# Synonyms for assertion methods
assertEqual = assertEquals = failUnlessEqual
In Python 3, to your point, failUnlessEqual is explicitly deprecated. assertEquals carries this comment :-)
# Synonyms for assertion methods
# The plurals are undocumented. Keep them that way to discourage use.
# Do not add more. Do not remove.
# Going through a deprecation cycle on these would annoy many people.
So, the upshot appears to be that you should use whatever you like for Python 2.x, but tend toward assertEqual for Python 3.
|
Reverse a string in Python
|
There is no built in reverse function in Python's str object. What is the best way of implementing this?
If supplying a very concise answer, please elaborate on its efficiency. Is the str converted to a different object, etc.?
|
How about:
>>> 'hello world'[::-1]
'dlrow olleh'
This is extended slice syntax. It works by doing [begin:end:step] - by leaving begin and end off and specifying a step of -1, it reverses a string.
|
Is there a Python equivalent to the PHP function htmlspecialchars()?
|
Is there a similar or equivalent function in Python to the PHP function htmlspecialchars()? The closest thing I've found so far is htmlentitydefs.entitydefs().
|
Closest thing I know about is cgi.escape.
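A quick sketch of it in use; note that cgi.escape only escapes quotes when quote=True is passed:
>>> import cgi
>>> cgi.escape('<a href="test">A & B</a>', quote=True)
'&lt;a href=&quot;test&quot;&gt;A &amp; B&lt;/a&gt;'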
|
Building a minimal plugin architecture in Python
|
I have an application, written in Python, which is used by a fairly technical audience (scientists).
I'm looking for a good way to make the application extensible by the users, i.e. a scripting/plugin architecture.
I am looking for something extremely lightweight. Most scripts, or plugins, are not going to be developed and distributed by a third-party and installed, but are going to be something whipped up by a user in a few minutes to automate a repeating task, add support for a file format, etc. So plugins should have the absolute minimum boilerplate code, and require no 'installation' other than copying to a folder (so something like setuptools entry points, or the Zope plugin architecture seems like too much.)
Are there any systems like this already out there, or any projects that implement a similar scheme that I should look at for ideas / inspiration?
|
Mine is, basically, a directory called "plugins" which the main app can poll and then use imp.load_module to pick up files, look for a well-known entry point possibly with module-level config params, and go from there. I use file-monitoring stuff for a certain amount of dynamism in which plugins are active, but that's a nice-to-have.
Of course, any requirement that comes along saying "I don't need [big, complicated thing] X; I just want something lightweight" runs the risk of re-implementing X one discovered requirement at a time. But that's not to say you can't have some fun doing it anyway :)
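For what it's worth, a stripped-down sketch of that approach (the directory name, the "register" entry point, and using imp.load_source instead of imp.load_module are all illustrative choices; error handling is omitted):
import imp, os

PLUGIN_DIR = "plugins"

def load_plugins():
    plugins = []
    for name in os.listdir(PLUGIN_DIR):
        if not name.endswith(".py"):
            continue
        module = imp.load_source(name[:-3], os.path.join(PLUGIN_DIR, name))
        # the well-known entry point: a module-level callable named "register"
        if hasattr(module, "register"):
            plugins.append(module)
    return plugins

for plugin in load_plugins():
    plugin.register()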
|
Python: defining my own operators?
|
I would like to define my own operator. Does python support such a thing?
|
While technically you cannot define new operators in Python, this clever hack works around this limitation. It allows you to define infix operators like this:
# simple multiplication
x=Infix(lambda x,y: x*y)
print 2 |x| 4
# => 8
# class checking
isa=Infix(lambda x,y: x.__class__==y.__class__)
print [1,2,3] |isa| []
print [1,2,3] <<isa>> []
# => True
|
retrieving a variable's name in python at runtime?
|
Is there a way to know, at run-time, a variable's name (from the code)? Or are variable names forgotten during compilation (to byte-code or not)?
e.g.
>>> vari = 15
>>> print vari.~~name~~()
'vari'
note: i'm talking about plain data-type variables (int, str, list...)
|
Variable names don't get forgotten, you can access variables (and look which variables you have) by introspection, e.g.
>>> i = 1
>>> locals()["i"]
1
However, because there are no pointers in Python, there's no way to reference a variable without actually writing its name. So if you wanted to print a variable name and its value, you could go via locals() or a similar function. ([i] becomes [1] and there's no way to retrieve the information that the 1 actually came from i.)
|
Generic many-to-many relationships
|
I'm trying to create a messaging system where a message's sender and recipients can be generic entities. This seems fine for the sender, where there is only object to reference (GenericForeignKey) but I can't figure out how to go about this for the recipients (GenericManyToManyKey ??)
Below is a simplified example. PersonClient and CompanyClient inherit attributes from Client but have their own specific details. The last line is the sticking point. How do you allow message recipients to be a set of CompanyClients and PersonClients
class Client(models.Model):
city = models.CharField(max_length=16)
class Meta:
abstract = True
class PersonClient(Client):
first_name = models.CharField(max_length=16)
last_name = models.CharField(max_length=16)
gender = models.CharField(max_length=1)
class CompanyClient(Client):
name = models.CharField(max_length=32)
tax_no = PositiveIntegerField()
class Message(models.Model):
msg_body = models.CharField(max_length=1024)
sender = models.ForeignKey(ContentType)
recipients = models.ManyToManyField(ContentType)
|
You can implement this using generic relationships by manually creating the junction table between message and recipient:
from django.db import models
from django.contrib.contenttypes import generic
from django.contrib.contenttypes.models import ContentType
class Client(models.Model):
city = models.CharField(max_length=16)
# These aren't required, but they'll allow you do cool stuff
# like "person.sent_messages.all()" to get all messages sent
# by that person, and "person.received_messages.all()" to
# get all messages sent to that person.
# Well...sort of, since "received_messages.all()" will return
# a queryset of "MessageRecipient" instances.
sent_messages = generic.GenericRelation('Message',
content_type_field='sender_content_type',
object_id_field='sender_id'
)
received_messages = generic.GenericRelation('MessageRecipient',
content_type_field='recipient_content_type',
object_id_field='recipient_id'
)
class Meta:
abstract = True
class PersonClient(Client):
first_name = models.CharField(max_length=16)
last_name = models.CharField(max_length=16)
gender = models.CharField(max_length=1)
def __unicode__(self):
return u'%s %s' % (self.last_name, self.first_name)
class CompanyClient(Client):
name = models.CharField(max_length=32)
tax_no = models.PositiveIntegerField()
def __unicode__(self):
return self.name
class Message(models.Model):
sender_content_type = models.ForeignKey(ContentType)
sender_id = models.PositiveIntegerField()
sender = generic.GenericForeignKey('sender_content_type', 'sender_id')
msg_body = models.CharField(max_length=1024)
def __unicode__(self):
return u'%s...' % self.msg_body[:25]
class MessageRecipient(models.Model):
message = models.ForeignKey(Message)
recipient_content_type = models.ForeignKey(ContentType)
recipient_id = models.PositiveIntegerField()
recipient = generic.GenericForeignKey('recipient_content_type', 'recipient_id')
def __unicode__(self):
return u'%s sent to %s' % (self.message, self.recipient)
You'd use the above models like so:
>>> person1 = PersonClient.objects.create(first_name='Person', last_name='One', gender='M')
>>> person2 = PersonClient.objects.create(first_name='Person', last_name='Two', gender='F')
>>> company = CompanyClient.objects.create(name='FastCompany', tax_no='4220')
>>> company_ct = ContentType.objects.get_for_model(CompanyClient)
>>> person_ct = ContentType.objects.get_for_model(person1) # works for instances too.
# now we create a message:
>>> msg = Message.objects.create(sender_content_type=person_ct, sender_id=person1.pk, msg_body='Hey, did any of you move my cheese?')
# and send it to a coupla recipients:
>>> MessageRecipient.objects.create(message=msg, recipient_content_type=person_ct, recipient_id=person2.pk)
>>> MessageRecipient.objects.create(message=msg, recipient_content_type=company_ct, recipient_id=company.pk)
>>> MessageRecipient.objects.count()
2
As you can see, this is a far more verbose (complicated?) solution. I'd probably keep it simple and go with Prariedogg's solution above.
|
Write to UTF-8 file in Python
|
I'm really confused with the codecs.open function. When I do:
file = codecs.open("temp", "w", "utf-8")
file.write(codecs.BOM_UTF8)
file.close()
It gives me the error
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position
0: ordinal not in range(128)
If I do:
file = open("temp", "w")
file.write(codecs.BOM_UTF8)
file.close()
It works fine.
Question is why does the first method fail? And how do I insert the bom?
If the second method is the correct way of doing it, what the point of using codecs.open(filename, "w", "utf-8")?
|
Disclaimer: I'm not a Python programmer.
I believe the problem is that codecs.BOM_UTF8 is a byte string, not a Unicode string. I suspect the file handler is trying to guess what you really mean based on "I'm meant to be writing Unicode as UTF-8-encoded text, but you've given me a byte string!"
Try writing the Unicode string for the byte order mark (i.e. Unicode U+FEFF) directly, so that the file just encodes that as UTF-8:
import codecs
file = codecs.open("lol", "w", "utf-8")
file.write(u'\ufeff')
file.close()
(That seems to give the right answer - a file with bytes EF BB BF.)
EDIT: S. Lott's suggestion of using "utf-8-sig" as the encoding is a better one than explicitly writing the BOM yourself, but I'll leave this answer here as it explains what was going wrong before.
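For completeness, a sketch of the utf-8-sig approach mentioned above, which writes the BOM for you:
import codecs
f = codecs.open("temp", "w", "utf-8-sig")
f.write(u"some text")
f.close()
# the file now starts with EF BB BF, followed by the UTF-8 encoded text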
|
How do I find out if a numpy array contains integers?
|
I know there is a simple solution to this but can't seem to find it at the moment.
Given a numpy array, I need to know if the array contains integers.
Checking the dtype per-se is not enough, as there are multiple int dtypes (int8, int16, int32, int64 ...).
|
Found it in the numpy book! Page 23:
The other types in the hierarchy define particular categories of types. These categories can be useful for testing whether or not the object returned by self.dtype.type is of a particular class (using issubclass).
>>> import numpy as n
>>> issubclass(n.dtype('int8').type, n.integer)
True
>>> issubclass(n.dtype('int16').type, n.integer)
True
|
Using easy_install inside a python script?
|
easy_install python extension allows to install python eggs from console like:
easy_install py2app
But is it possible to access easy_install functionality inside a python script? I means, without calling os.system( "easy_install py2app" ) but instead importing easy_install as a python module and using it's native methods?
|
When I look at the setuptools source, it looks like you can try the following.
from setuptools.command import easy_install
easy_install.main( ["-U","py2app"] )
|
Python recursion and return statements
|
I'm fairly new to Python and recursive functions as a whole, so pardon my ignorance.
I am trying to implement a binary search tree in Python and have the following insert method (taken out of a class):
def insert(self, key, root=None):
'''Inserts a node in the tree'''
if root == None:
root = self.root
if root.key == None:
self._update(root, key)
return 0
else:
tmp = root
if key > tmp.key: # we work with the right subtree
self.insert(key, root=tmp.right)
elif key < tmp.key: # we work with the left subtree
self.insert(key, root=tmp.left)
else: # key already exists
return 0
I'm not sure if this is legible, but it traverses the tree until it gets to a None value and updates the node with the key to insert.
Now, the method works nicely and correctly creates a BST from scratch. But there's a problem with the return statements, as it only returns 0 if there is no recursion performed.
>>> bst.insert(10)
0
>>> bst.insert(15)
>>> bst.root.right.key
15
>>>
"Inserting" the root key again returns 0 (from line 15) the way it should.
>>> bst.insert(10)
0
I can't figure out why this happens. If I put a print statement in line 6, it executes correctly, yet it just won't return anything past the first insertion. Why is this? (I'm pretty sure I'm missing some basic information regarding Python and recursion)
Thanks for your help,
Ivan
P.S.: I've read that recursion is not the best way to implement a BST, so I'll look into other solutions, but I'd like to know the answer to this before moving on.
|
On your recursive lines, you do not return anything. If you want it to return 0, you should replace them with lines like:
return self.insert(key, root=tmp.left)
instead of just
self.insert(key, root=tmp.left)
|
How do you specify a default for a Django ForeignKey Model or AdminModel field?
|
How can I set a default value on a ForeignKey field in a django Model or AdminModel?
Something like this (but of course this doesn't work)...
created_by = models.ForeignKey(User, default=request.user)
I know I can 'trick' it in the view, but in terms of the AdminModel it doesn't seem possible.
|
class Foo(models.Model):
a = models.CharField(max_length=42)
class Bar(models.Model):
b = models.CharField(max_length=42)
a = models.ForeignKey(Foo, default=lambda: Foo.objects.get(id=1) )
|
Bad Practice to run code in constructor thats likely to fail?
|
my question is rather a design question.
In Python, if code in your "constructor" fails, the object ends up not being defined. Thus:
someInstance = MyClass("test123") #lets say that constructor throws an exception
someInstance.doSomething() # will fail, name someInstance not defined.
I do have a situation though, where a lot of code copying would occur if I remove the error-prone code from my constructor. Basically my constructor fills a few attributes (via IO, where a lot can go wrong) that can be accessed with various getters. If I remove the code from the constructor, I'd have 10 getters with copy-paste code, something like:
is attribute really set?
do some IO actions to fill the attribute
return the contents of the variable in question
I dislike that, because all my getters would contain a lot of code. Instead of that I perform my IO operations in a central location, the constructor, and fill all my attributes.
What's a proper way of doing this?
|
There is a difference between a constructor in C++ and an __init__ method
in Python. In C++, the task of a constructor is to construct an object. If it fails,
no destructor is called. Therefore if any resources were acquired before an
exception was thrown, the cleanup should be done before exiting the constructor.
Thus, some prefer two-phase construction with most of the construction done
outside the constructor (ugh).
Python has a much cleaner two-phase construction (construct, then
initialize). However, many people confuse an __init__ method (initializer)
with a constructor. The actual constructor in Python is called __new__.
Unlike in C++, it does not take an instance, but
returns one. The task of __init__ is to initialize the created instance.
If an exception is raised in __init__, the destructor __del__ (if any)
will be called as expected, because the object was already created (even though it was not properly initialized) by the time __init__ was called.
Answering your question:
In Python, if code in your "constructor" fails, the object ends up not being defined.
That's not precisely true. If __init__ raises an exception, the object is
created but not initialized properly (e.g., some attributes are not
assigned). But at the time that it's raised, you probably don't have any references to
this object, so the fact that the attributes are not assigned doesn't matter. Only the destructor (if any) needs to check whether the attributes actually exist.
What's a proper way of doing this?
In Python, initialize objects in __init__ and don't worry about exceptions.
In C++, use RAII.
Update [about resource management]:
In garbage collected languages, if you are dealing with resources, especially limited ones such as database connections, it's better not to release them in the destructor.
This is because objects are destroyed in a non-deterministic way, and if you happen
to have a loop of references (which is not always easy to tell), and at least one of the objects in the loop has a destructor defined, they will never be destroyed.
Garbage collected languages have other means of dealing with resources. In Python, it's a with statement.
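As a small illustration of that last point, a sketch using the standard library (sqlite3 is just a stand-in for any limited resource):
from contextlib import closing
import sqlite3

# the connection is released deterministically when the block exits,
# whether or not an exception is raised -- no reliance on __del__
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")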
|
Scope of python lambda functions and their parameters
|
I need a callback function that is almost exactly the same for a series of gui events. The function will behave slightly differently depending on which event has called it. Seems like a simple case to me, but I cannot figure out this weird behavior of lambda functions.
So I have the following simplified code below:
def callback(msg):
print msg
#creating a list of function handles with an iterator
funcList=[]
for m in ('do', 're', 'mi'):
funcList.append(lambda: callback(m))
for f in funcList:
f()
#create one at a time
funcList=[]
funcList.append(lambda: callback('do'))
funcList.append(lambda: callback('re'))
funcList.append(lambda: callback('mi'))
for f in funcList:
f()
The output of this code is:
mi
mi
mi
do
re
mi
I expected:
do
re
mi
do
re
mi
Why has using an iterator messed things up?
I've tried using a deepcopy:
import copy
funcList=[]
for m in ('do', 're', 'mi'):
funcList.append(lambda: callback(copy.deepcopy(m)))
for f in funcList:
f()
But this has the same problem. Thanks for helping me understand python better! I'm sure this will make sense once you help me figure it out! :)
|
When a lambda is created, it doesn't make a copy of the variables in the enclosing scope that it uses. It maintains a reference to the environment so that it can look up the value of the variable later. There is just one m. It gets assigned to every time through the loop. After the loop, the variable m has value 'mi'. So when you actually run the function you created later, it will look up the value of m in the environment that created it, which will by then have value 'mi'.
One common and idiomatic solution to this problem is to capture the value of m at the time that the lambda is created by using it as the default argument of an optional parameter. You usually use a parameter of the same name so you don't have to change the body of the code:
for m in ('do', 're', 'mi'):
funcList.append(lambda m=m: callback(m))
|
Total memory used by Python process?
|
Is there a way for a Python program to determine how much memory it's currently using? I've seen discussions about memory usage for a single object, but what I need is total memory usage for the process, so that I can determine when it's necessary to start discarding cached data.
|
For Unixes (Linux, Mac OS X, Solaris) you could also use the getrusage() function from the standard library module resource. The resulting object has the attribute ru_maxrss, which gives peak memory usage for the calling process:
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656 # peak memory usage (bytes on OS X, kilobytes on Linux)
The Python docs aren't clear on what the units are exactly, but the Mac OS X man page for getrusage(2) describes the units as bytes. The Linux man page isn't clear, but it seems to be equivalent to the information from /proc/self/status, which is in kilobytes.
The getrusage() function can also be given resource.RUSAGE_CHILDREN to get the usage for child processes, and (on some systems) resource.RUSAGE_BOTH for total (self and child) process usage.
resource is a standard library module.
If you only care about Linux, you can just check the /proc/self/status file as described in a similar question.
|
print vs stderr
|
Are there any specific advantages or disadvantages to either print or stderr?
|
print can print on any file-like object, including sys.stderr.
print >> sys.stderr, 'Text'
The advantages of using sys.stderr for errors instead of sys.stdout are:
If the user redirected stdout to a file, she still sees errors on the screen.
It's not buffered, so if sys.stderr is redirected to a log file there is less chance that the program may crash before the error has been logged.
This answer written with Python 2 in mind.
For Python 3, use print('Text', file=sys.stderr) instead.
|
PyQt sending parameter to slot when connecting to a signal
|
I have a taskbar menu that when clicked is connected to a slot that gets the trigger event. Now the problem is that I want to know which menu item was clicked, but I don't know how to send that information to the function connected to. Here is the used to connect the action to the function:
QtCore.QObject.connect(menuAction, 'triggered()', menuClickedFunc)
I know that some events return a value, but triggered() doesn't. So how do I make this happen? Do I have to make my own signal?
|
Use a lambda
Here's an example from the PyQt book:
self.connect(button3, SIGNAL("clicked()"),
lambda who="Three": self.anyButton(who))
By the way, you can also use functools.partial, but I find the lambda method simpler and clearer.
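For comparison, a sketch of what the functools.partial version of the same connection would look like (mirroring the book snippet above, so not standalone):
from functools import partial
self.connect(button3, SIGNAL("clicked()"), partial(self.anyButton, "Three"))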
|
Regular expression syntax for "match nothing"?
|
I have a python template engine that heavily uses regexp. It uses concatenation like:
re.compile( regexp1 + "|" + regexp2 + "*|" + regexp3 + "+" )
I can modify individual substrings (regexp1, regexp2 etc).
Is there any small and light expression that matches nothing, which I can use inside a template where I don't want any matches? Unfortunately, sometimes '+' or '*' is appended to the regexp atom so I can't use empty string - a "nothing to repeat" error will rise.
|
This shouldn't match anything:
re.compile('$^')
So if you replace regexp1, regexp2 and regexp3 with '$^' it will be impossible to find a match. Unless you are using the multi line mode.
After some tests I found a better solution
re.compile('a^')
It is impossible to match and will fail earlier than the previous solution. You can replace a with any other character and it will always be impossible to match
|
Any python OpenID server available?
|
I'd like to host my own OpenID provider. Is there anything available in Python?
|
You are weak with the Google.
(Edit: That's a link to OpenID-Enabled.com. There are also PHP and Ruby versions available there.)
|
Operation on every pair of element in a list
|
Using Python, I'd like to compare every possible pair in a list.
Suppose I have
my_list = [1,2,3,4]
I'd like to do an operation (let's call it foo) on every combination of 2 elements from the list.
The final result should be the same as
foo(1,1)
foo(1,2)
...
foo(4,3)
foo(4,4)
My first thought was to iterate twice through the list manually, but that doesn't seem very pythonic.
|
Check out product() in the itertools module. It does exactly what you describe.
import itertools
my_list = [1,2,3,4]
for pair in itertools.product(my_list, repeat=2):
foo(*pair)
This is equivalent to:
my_list = [1,2,3,4]
for x in my_list:
for y in my_list:
foo(x, y)
Edit: There are two very similar functions as well, permutations() and combinations(). To illustrate how they differ:
product() generates every possible pairing of elements, including all duplicates:
1,1 1,2 1,3 1,4
2,1 2,2 2,3 2,4
3,1 3,2 3,3 3,4
4,1 4,2 4,3 4,4
permutations() generates all unique orderings of each unique pair of elements, eliminating the x,x duplicates:
. 1,2 1,3 1,4
2,1 . 2,3 2,4
3,1 3,2 . 3,4
4,1 4,2 4,3 .
Finally, combinations() only generates each unique pair of elements, in lexicographic order:
. 1,2 1,3 1,4
. . 2,3 2,4
. . . 3,4
. . . .
All three of these functions were introduced in Python 2.6.
|
String manipulation in Cython
|
I have code that does some very CPU-intensive string manipulations and I was looking for ways to improve performance.
(EDIT: I'm doing stuff like finding longest common substring, running lots of regular expressions which might be better expressed as state machines in c, stripping comments from HTML, stuff like that.)
I am currently looking into porting some of the code to Cython after hearing many good things about it. However, it seems that the main focus of Cython is numerical calculations and working with strings is barely documented.
Unicode might also be a big issue.
My questions are:
Should I even bother with Cython for string stuff? Does anyone have experience with this type of processing in cython and can share?
Am I missing something in the Cython docs? Does anyone know of a tutorial/reference/documentation about working with strings in Cython?
|
I voted up the 'profile it' answer, but wanted to add this: where possible the best optimisation you can make is to use Python standard libraries or built-in functions to perform the tasks you want. These are typically implemented in C and will provide performance broadly equivalent to any extension, including extensions written in Cython. If your algorithms are performing character by character loops in Python then those should be the first things to go, if possible.
But if you have algorithms that can't be reworked in terms of built-ins or other existing standard libraries, Cython seems like a reasonable approach. It just compiles pseudo-Python down to native code and is as suited to string operations as any other operation, really. But I'm not convinced you will see a great benefit from using Cython if you just hand it idiomatic Python code. The maximum benefit will come if you are able to rewrite some or all of each algorithm in C so that low-level operations are not constantly translating variables across the Python/C barrier.
Finally, Unicode - you've implied it might be 'a big issue' but haven't specified how you're using it. Cython will presumably produce C code that calls the relevant Python APIs that handle Unicode so the functionality is unlikely to be limited. However handling Unicode strings in C is non-trivial and may mean that the idea of rewriting some of your algorithms in C for better performance isn't worth the effort. A lot of classic string algorithms simply won't work on many Unicode encodings, which aren't 'strings' in the traditional sense of having 1 unit of storage per character.
|
Efficient way of creating recursive paths Python
|
Hi
I need a simple function to create a path in Python where the parent may or may not exist.
From the Python documentation, os.makedirs will fail if one of the parents exists.
I have written the method below, which works by making as many subdirectories as necessary.
Does this look efficient?
def create_path(path):
import os, os.path as os_path
paths_to_create = []
while not os_path.lexists(path):
paths_to_create.insert(0, path)
head,tail = os_path.split(path)
if len(tail.strip())==0: # Just incase path ends with a / or \
path = head
head,tail = os_path.split(path)
path = head
for path in paths_to_create:
os.mkdir(path)
Regards
Mark
|
"From python documentation os.makedirs will fail if one of the parents exists."
No, os.makedirs will fail if the directory itself already exists. It won't fail if just any of the parent directories already exists.
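Given that, the whole helper can shrink to a sketch like this, which swallows only the "already exists" error and re-raises anything else:
import errno, os

def create_path(path):
    try:
        os.makedirs(path)
    except OSError, exc:   # Python 2 syntax, matching the code in the question
        if exc.errno != errno.EEXIST:
            raise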
|
Best practice for Python Assert
|
Is there a performance or code maintenance issue with using assert as part of the standard code instead of using it just for debugging purposes?
Is
assert x >= 0, 'x is less than zero'
better or worse than
if x < 0:
raise Exception, 'x is less than zero'
Also, is there any way to set a business rule like "if x < 0, raise an error" that is always checked without try/except/finally -- so that if at any time throughout the code x becomes less than 0, an error is raised? Something like setting assert x >= 0 at the start of a function and having an exception raised anywhere within the function where x becomes less than 0.
|
Asserts should be used to test conditions that should never happen. The purpose is to crash early in the case of a corrupt program state.
Exceptions should be used for errors that can conceivably happen, and you should almost always create your own Exception classes.
For example, if you're writing a function to read from a configuration file into a dict, improper formatting in the file should raise a ConfigurationSyntaxError, while you can assert that you're not about to return None.
In your example, if x is a value set via a user interface or from an external source, an exception is best.
If x is only set by your own code in the same program, go with an assertion.
|
How to check for NaN in python?
|
float('nan') results in a thingy simply called nan. But how do I check for it? Should be very easy, but i cannot find it.
|
math.isnan()
Checks if the float x is a NaN (not a number). NaNs are part of the IEEE 754 standards. Operations like, but not limited to, inf * 0 and inf / inf, or any operation involving a NaN, e.g. nan * 1, return a NaN.
New in version 2.6.
>>> import math
>>> x=float('nan')
>>> math.isnan(x)
True
>>>
|
Numpy: Should I use newaxis or None?
|
In numpy one can use the 'newaxis' object in the slicing syntax to create an axis of length one, e.g.:
import numpy as np
print np.zeros((3,5))[:,np.newaxis,:].shape
# shape will be (3,1,5)
The documentation states that one can also use None instead of newaxis, the effect is exactly the same.
Is there any reason to choose one over the other? Is there any general preference or style guide? My impression is that newaxis is more popular, probably because it is more explicit. So is there any reason why None is allowed?
|
None is allowed because numpy.newaxis is merely an alias for None.
In [1]: import numpy
In [2]: numpy.newaxis is None
Out[2]: True
The authors probably chose it because they needed a convenient constant, and None was available.
As for why you should prefer newaxis over None: mainly it's because it's more explicit, and partly because someday the numpy authors might change it to something other than None. (They're not planning to, and probably won't, but there's no good reason to prefer None.)
|
Equivalent Javascript Functions for Python's urllib.quote() and urllib.unquote()
|
Are there any equivalent Javascript functions for Python's urllib.quote() and urllib.unquote()?
The closest I've come across are escape(), encodeURI(), and encodeURIComponent() (and their corresponding un-encoding functions), but they don't encode/decode the same set of special characters as far as I can tell.
Thanks,
Cameron
|
For the record:
JavaScript | Python
-----------------------------------
encodeURI(str) | urllib.quote(str, safe='~@#$&()*!+=:;,.?/\'');
-----------------------------------
encodeURIComponent(str) | urllib.quote(str, safe='~()*!.\'')
|
Using Python's list index() method on a list of tuples or objects?
|
Python's list type has an index() method that takes one parameter and returns the index of the first item in the list matching the parameter. For instance:
>>> some_list = ["apple", "pear", "banana", "grape"]
>>> some_list.index("pear")
1
>>> some_list.index("grape")
3
Is there a graceful (idiomatic) way to extend this to lists of complex objects, like tuples? Ideally, I'd like to be able to do something like this:
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> tuple_list.getIndexOfTuple(1, 7)
1
>>> tuple_list.getIndexOfTuple(0, "kumquat")
2
getIndexOfTuple() is just a hypothetical method that accepts a sub-index and a value, and then returns the index of the list item with the given value at that sub-index. I hope that makes sense.
Is there some way to achieve that general result, using list comprehensions or lambdas or something "in-line" like that? I think I could write my own class and method, but I don't want to reinvent the wheel if Python already has a way to do it.
|
How about this?
>>> tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]
>>> [x for x, y in enumerate(tuple_list) if y[1] == 7]
[1]
>>> [x for x, y in enumerate(tuple_list) if y[0] == 'kumquat']
[2]
As pointed out in the comments, this would get all matches. To just get the first one, you can do:
>>> [y[0] for y in tuple_list].index('kumquat')
2
There is a good discussion in the comments about the speed difference between all the solutions posted. I may be a little biased, but I would personally stick to a one-liner, since the speed we're talking about is pretty insignificant compared with creating functions and importing modules for this problem. However, if you are planning on doing this to a very large number of elements, you might want to look at the other answers provided, as they are faster than what I provided.
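If you only need the first matching index and want to stop scanning as soon as it is found, a generator expression with next() (Python 2.6+) is a common idiom; a sketch using a hypothetical helper name:
tuple_list = [("pineapple", 5), ("cherry", 7), ("kumquat", 3), ("plum", 11)]

def get_index_of_tuple(lst, sub_index, value):
    # Returns the index of the first item whose element at sub_index equals value;
    # raises StopIteration if there is no match (or pass a default to next()).
    return next(i for i, item in enumerate(lst) if item[sub_index] == value)

print get_index_of_tuple(tuple_list, 1, 7)          # 1
print get_index_of_tuple(tuple_list, 0, "kumquat")  # 2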
|
How to execute a process remotely using python
|
I want to connect to and execute a process on a remote server using Python. I want to be able to get the return code and stderr (if any) of the process. Has anyone ever done anything like this before? I have done it with ssh, but I want to do it from a Python script.
Cheers.
|
Use the ssh module called paramiko which was created for this purpose instead of using subprocess. Here's an example below:
from paramiko import SSHClient
client = SSHClient()
client.load_system_host_keys()
client.connect("hostname", username="user")
stdin, stdout, stderr = client.exec_command('program')
print "stderr: ", stderr.readlines()
print "pwd: ", stdout.readlines()
UPDATE: The example used to use the ssh module, but that is now deprecated and paramiko is the up-to-date module that provides ssh functionality in python.
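Since the question also asks for the return code: paramiko exposes the command's exit status on the channel behind the stdout file object. A small sketch building on the example above:
stdin, stdout, stderr = client.exec_command('program')
exit_status = stdout.channel.recv_exit_status()  # blocks until the remote command finishes
print "exit status:", exit_status
print "stderr:", stderr.readlines()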
|
using pyodbc on linux to insert unicode or utf-8 chars in a nvarchar mssql field
|
I am using Ubuntu 9.04
I have installed the following package versions:
unixodbc and unixodbc-dev: 2.2.11-16build3
tdsodbc: 0.82-4
libsybdb5: 0.82-4
freetds-common and freetds-dev: 0.82-4
I have configured /etc/unixodbc.ini like this:
[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 2
I have configured /etc/freetds/freetds.conf like this:
[global]
tds version = 8.0
client charset = UTF-8
I have grabbed pyodbc revision 31e2fae4adbf1b2af1726e5668a3414cf46b454f from http://github.com/mkleehammer/pyodbc and installed it using "python setup.py install"
I have a Windows machine with Microsoft SQL Server 2000 installed on my local network, up and listening on the local IP address 10.32.42.69. I have an empty database created with the name "Common". I have the user "sa" with password "secret" with full privileges.
I am using the following python code to setup the connection:
import pyodbc
odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()
cur.execute('''
CREATE TABLE testing (
id INTEGER NOT NULL IDENTITY(1,1),
name NVARCHAR(200) NULL,
PRIMARY KEY (id)
)
''')
con.commit()
Everything WORKS up to this point. I have used SQLServer's Enterprise Manager on the server and the new table is there.
Now I want to insert some data on the table.
cur = con.cursor()
cur.execute('INSERT INTO testing (name) VALUES (?)', (u'something',))
That fails!! Here's the error I get:
pyodbc.Error: ('HY004', '[HY004] [FreeTDS][SQL Server]Invalid data type
(0) (SQLBindParameter)'
Since my client is configured to use UTF-8, I thought I could solve this by encoding the data to UTF-8. That works, but then I get back strange data:
cur = con.cursor()
cur.execute('DELETE FROM testing')
cur.execute('INSERT INTO testing (name) VALUES (?)', (u'somé string'.encode('utf-8'),))
con.commit()
# fetching data back
cur = con.cursor()
cur.execute('SELECT name FROM testing')
data = cur.fetchone()
print type(data[0]), data[0]
That gives no error, but the data returned is not the same data sent! I get:
<type 'unicode'> somÃ© string
That is, pyodbc won't accept a unicode object directly, but it returns unicode objects back to me! And the encoding is being mixed up!
Now for the question:
I want code to insert unicode data in a NVARCHAR and/or NTEXT field. When I query back, I want the same data I inserted back.
That can be by configuring the system differently, or by using a wrapper function able to convert the data correctly to/from unicode when inserting or retrieving
That's not asking much, is it?
|
I can remember having this kind of stupid problem when using ODBC drivers, even though at that time it was a Java + Oracle combination.
The core thing is that the ODBC driver apparently encodes the query string when sending it to the DB. Even if the field is Unicode, and even if you provide Unicode, in some cases it does not seem to matter.
You need to ensure that what is sent by the driver has the same encoding as your database (not only the server, but also the database). Otherwise, of course, you get funky characters because either the client or the server is mixing things up when encoding or decoding. Do you have any idea of the charset (code page, as MS likes to say) that your server is using as a default for decoding data?
Collation has nothing to do with this problem :)
See that MS page for example. For Unicode fields, collation is used only to define the sort order in the column, not to specify how the data is stored.
If you store your data as Unicode, there is a unique way to represent it; that's the purpose of Unicode: no need to define a charset that is compatible with all the languages that you are going to use :)
The question here is "what happens when I give data to the server that is not Unicode?". For example:
When I send an UTF-8 string to the server, how does it understand it?
When I send an UTF-16 string to the server, how does it understand it?
When I send a Latin1 string to the server, how does it understand it?
From the server's perspective, all three of these strings are only a stream of bytes. The server cannot guess the encoding in which you encoded them. This means you will get into trouble if your ODBC client ends up sending bytestrings (encoded strings) to the server instead of sending Unicode data: if you do so, the server will use a predefined encoding (that was my question: which encoding will the server use? Since it is not guessing, it must be a parameter value), and if the string had been encoded using a different encoding, the data will get corrupted.
It's exactly similar as doing in Python:
uni = u'Hey my name is André'
in_utf8 = uni.encode('utf-8')
# send the utf-8 data to server
# send(in_utf8)
# on server side
# server receives it. But server is Japanese.
# So the server treats the data with the National charset, shift-jis:
some_string = in_utf8 # some_string = receive()
decoded = some_string.decode('sjis')
Just try it. It's fun. The decoded string is supposed to be "Hey my name is André", but comes out as "Hey my name is Andrﾃｩ". The é gets replaced by the Japanese half-width katakana ﾃｩ.
Hence my suggestion: you need to ensure that pyodbc is able to send the data directly as Unicode. If pyodbc fails to do this, you will get unexpected results.
And I described the problem in the client-to-server direction. But the same sort of issue can arise when communicating back from the server to the client. If the client cannot understand Unicode data, you'll likely get into trouble.
FreeTDS handles Unicode for you.
Actually, FreeTDS takes care of things for you and translates all the data to UCS2 unicode. (Source).
Server <--> FreeTDS : UCS2 data
FreeTDS <--> pyodbc : encoded strings, encoded in UTF-8 (from /etc/freetds/freetds.conf)
So I would expect your application to work correctly if you pass UTF-8 data to pyodbc. In fact, as this django-pyodbc ticket states, django-pyodbc communicates in UTF-8 with pyodbc, so you should be fine.
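If you do end up passing encoded strings to pyodbc yourself, it helps to keep the encoding and decoding in one place. A minimal sketch, assuming UTF-8 on the FreeTDS side as configured above; the UnicodeCursor class is hypothetical, not part of pyodbc:
class UnicodeCursor(object):
    """Wraps a pyodbc cursor, encoding unicode parameters to UTF-8 on the
    way in and decoding returned bytestrings back to unicode on the way out."""

    def __init__(self, cursor, encoding='utf-8'):
        self._cursor = cursor
        self._encoding = encoding

    def execute(self, sql, params=()):
        encoded = tuple(
            p.encode(self._encoding) if isinstance(p, unicode) else p
            for p in params)
        return self._cursor.execute(sql, encoded)

    def fetchone(self):
        row = self._cursor.fetchone()
        if row is None:
            return None
        return tuple(
            v.decode(self._encoding) if isinstance(v, str) else v
            for v in row)

cur = UnicodeCursor(con.cursor())
cur.execute('INSERT INTO testing (name) VALUES (?)', (u'somé string',))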
FreeTDS 0.82
However, cramm0 says that FreeTDS 0.82 is not completely bugfree, and that there are significant differences between 0.82 and the official patched 0.82 version that can be found here. You should probably try using the patched FreeTDS
Edited: removed old data, which had nothing to do with FreeTDS but was only relevant to Easysoft commercial odbc driver. Sorry.
|
Strip all non-numeric characters (except for ".") from a string in Python
|
I've got a pretty good working snippet of code, but I was wondering if anyone has any better suggestions on how to do this:
val = ''.join([c for c in val if c in '1234567890.'])
What would you do?
|
>>> import re
>>> non_decimal = re.compile(r'[^\d.]+')
>>> non_decimal.sub('', '12.34fe4e')
'12.344'
|
How to save a Python interactive session?
|
I find myself frequently using Python's interpreter to work with databases, files, etc -- basically a lot of manual formatting of semi-structured data. I don't properly save and clean up the useful bits as often as I would like. Is there a way to save my input into the shell (db connections, variable assignments, little for loops and bits of logic) -- some history of the interactive session? If I use something like script I get too much stdout noise. I don't really need to pickle all the objects -- though if there is a solution that does that, it would be OK. Ideally I would just be left with a script that ran as the one I created interactively, and I could just delete the bits I didn't need. Is there a package that does this, or a DIY approach?
UPDATE: I am really amazed at the quality and usefulness of these packages. For those with a similar itch:
IPython -- should have been using this for ages, kind of what I had in mind
reinteract -- very impressive, I want to learn more about visualization and this seems like it will shine there. Sort of a gtk/gnome desktop app that renders graphs inline. Imagine a hybrid shell + graphing calculator + mini eclipse. Source distribution here: http://www.reinteract.org/trac/wiki/GettingIt . Built fine on Ubuntu, integrates into gnome desktop, Windows and Mac installers too.
bpython -- extremely cool, lots of nice features, autocomplete(!), rewind, one keystroke save to file, indentation, well done. Python source distribution, pulled a couple of dependencies from sourceforge.
I am converted, these really fill a need between interpreter and editor.
|
IPython is extremely useful if you like using interactive sessions. For example, for your use case there is the %save magic command: you just input %save my_useful_session 10-20 23 to save input lines 10 to 20 and line 23 to my_useful_session.py. (To help with this, every line is prefixed by its number.)
Look at the videos on the documentation page to get a quick overview of the features.
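An illustrative session might look like this (the line numbers are IPython's own input counters, and the filename is just an example):
In [1]: import sqlite3
In [2]: conn = sqlite3.connect('work.db')
# ... lines 3 to 22: exploratory work ...
In [23]: conn.close()
In [24]: %save my_useful_session 1-2 23
This writes inputs 1, 2, and 23 to my_useful_session.py, which you can then trim down in an editor.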
|
Preventing file handle inheritance in multiprocessing lib
|
Using multiprocessing on Windows, it appears that any open file handles are inherited by spawned processes. This has the unpleasant side effect of locking them.
I'm interested in either:
1) Preventing the inheritance
2) A way to release the file from the spawned process
Consider the following code which works fine on OSX, but crashes on windows at os.rename
from multiprocessing import Process
import os

kFileA = "a.txt"
kFileB = "b.txt"

def emptyProcess():
    while 1:
        pass

def main():
    # Open a file and write a message
    testFile = open(kFileA, 'a')
    testFile.write("Message One\n")

    # Spawn a process
    p = Process(target=emptyProcess)
    p.start()

    # Close the file
    testFile.close()

    # This will crash
    # WindowsError: [Error 32] The process cannot access the file
    # because it is being used by another process
    os.rename(kFileA, kFileB)

    testFile = open(kFileA, 'a')
    testFile.write("Message Two\n")
    testFile.close()

    p.terminate()

if __name__ == "__main__":
    main()
|
The fileno() method returns the file number as assigned by the runtime library. Given the file number, you can then call msvcrt.get_osfhandle() to get the Win32 file handle. Use this handle in the call to SetHandleInformation. So something like the following may work:
win32api.SetHandleInformation(
msvcrt.get_osfhandle(testFile.fileno()),
win32api.HANDLE_FLAG_INHERIT,
0)
I'm not certain of the exact usage of the win32api module, but this should help bridge the gap between a Python file object and a Win32 handle.
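A hedged sketch of how that could slot into the question's main() before the child process is spawned (the HANDLE_FLAG_INHERIT value below is the documented Win32 constant, spelled out in case win32api does not expose it by name):
import msvcrt
import win32api

HANDLE_FLAG_INHERIT = 0x00000001  # documented Win32 constant (also available via win32con)

testFile = open(kFileA, 'a')
testFile.write("Message One\n")

# Mark the underlying Win32 handle as non-inheritable before spawning the child,
# so the child process does not keep the file locked.
win32api.SetHandleInformation(
    msvcrt.get_osfhandle(testFile.fileno()),
    HANDLE_FLAG_INHERIT,
    0)

p = Process(target=emptyProcess)
p.start()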
|
Python: split a list based on a condition?
|
What's the best way, both aesthetically and from a performance perspective, to split a list of items into multiple lists based on a conditional? The equivalent of:
good = [x for x in mylist if x in goodvals]
bad = [x for x in mylist if x not in goodvals]
Is there a more elegant way to do this?
Update: here's the actual use case, to better explain what I'm trying to do:
# files looks like: [ ('file1.jpg', 33L, '.jpg'), ('file2.avi', 999L, '.avi'), ... ]
IMAGE_TYPES = ('.jpg','.jpeg','.gif','.bmp','.png')
images = [f for f in files if f[2].lower() in IMAGE_TYPES]
anims = [f for f in files if f[2].lower() not in IMAGE_TYPES]
|
good, bad = [], []
for x in mylist:
    (bad, good)[x in goodvals].append(x)
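If the indexing trick above feels too clever, a plain single-pass loop over the data reads more clearly; an illustrative sketch using the names from the question's update:
IMAGE_TYPES = ('.jpg', '.jpeg', '.gif', '.bmp', '.png')

images, anims = [], []
for f in files:
    if f[2].lower() in IMAGE_TYPES:
        images.append(f)
    else:
        anims.append(f)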
|