| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Keyboard Interrupts with python's multiprocessing Pool | 1,408,356 | <p>How can I handle KeyboardInterrupt events with python's multiprocessing Pools? Here is a simple example:</p>
<pre><code>from multiprocessing import Pool
from time import sleep
import sys
def slowly_square(i):
sleep(1)
return i*i
def go():
pool = Pool(8)
try:
results = pool.map(slowly_square, range(40))
except KeyboardInterrupt:
# **** THIS PART NEVER EXECUTES. ****
pool.terminate()
print "You cancelled the program!"
sys.exit(1)
print "\nFinally, here are the results: ", results
if __name__ == "__main__":
go()
</code></pre>
<p>When running the code above, the <code>KeyboardInterrupt</code> gets raised when I press <code>^C</code>, but the process simply hangs at that point and I have to kill it externally.</p>
<p>I want to be able to press <code>^C</code> at any time and cause all of the processes to exit gracefully.</p>
| 78 | 2009-09-10T23:59:35Z | 1,408,476 | <p>This is a Python bug. When waiting for a condition in threading.Condition.wait(), KeyboardInterrupt is never delivered. Repro:</p>
<pre><code>import threading
cond = threading.Condition(threading.Lock())
cond.acquire()
cond.wait(None)
print "done"
</code></pre>
<p>The KeyboardInterrupt exception won't be delivered until wait() returns, and it never returns, so the interrupt never happens. KeyboardInterrupt should almost certainly interrupt a condition wait.</p>
<p>Note that this doesn't happen if a timeout is specified; cond.wait(1) will receive the interrupt immediately. So, a workaround is to specify a timeout. To do that, replace</p>
<pre><code> results = pool.map(slowly_square, range(40))
</code></pre>
<p>with</p>
<pre><code> results = pool.map_async(slowly_square, range(40)).get(9999999)
</code></pre>
<p>or similar.</p>
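<p>A runnable sketch of this workaround as a small helper (the helper name and pool size are illustrative):</p>

```python
from multiprocessing import Pool

def slowly_square(i):
    # Stand-in for real work; kept fast so the example finishes quickly.
    return i * i

def interruptible_map(func, iterable, processes=4, timeout=9999999):
    """map() via map_async().get(timeout) so Ctrl-C reaches the parent."""
    pool = Pool(processes)
    try:
        # A timed get() keeps the parent in an interruptible wait
        # instead of the uninterruptible bare Condition.wait().
        results = pool.map_async(func, iterable).get(timeout)
    except KeyboardInterrupt:
        pool.terminate()
        pool.join()
        raise
    pool.close()
    pool.join()
    return results
```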
| 90 | 2009-09-11T00:45:17Z | [
"python",
"multiprocessing",
"pool",
"keyboardinterrupt"
] |
Keyboard Interrupts with python's multiprocessing Pool | 1,408,356 | <p>How can I handle KeyboardInterrupt events with python's multiprocessing Pools? Here is a simple example:</p>
<pre><code>from multiprocessing import Pool
from time import sleep
import sys
def slowly_square(i):
sleep(1)
return i*i
def go():
pool = Pool(8)
try:
results = pool.map(slowly_square, range(40))
except KeyboardInterrupt:
# **** THIS PART NEVER EXECUTES. ****
pool.terminate()
print "You cancelled the program!"
sys.exit(1)
print "\nFinally, here are the results: ", results
if __name__ == "__main__":
go()
</code></pre>
<p>When running the code above, the <code>KeyboardInterrupt</code> gets raised when I press <code>^C</code>, but the process simply hangs at that point and I have to kill it externally.</p>
<p>I want to be able to press <code>^C</code> at any time and cause all of the processes to exit gracefully.</p>
| 78 | 2009-09-10T23:59:35Z | 2,561,809 | <p>For some reason, only exceptions that inherit from the base <code>Exception</code> class are handled normally. As a workaround, you may re-raise your <code>KeyboardInterrupt</code> as an <code>Exception</code> instance:</p>
<pre><code>from multiprocessing import Pool
import time
class KeyboardInterruptError(Exception): pass
def f(x):
try:
time.sleep(x)
return x
except KeyboardInterrupt:
raise KeyboardInterruptError()
def main():
p = Pool(processes=4)
try:
print 'starting the pool map'
print p.map(f, range(10))
p.close()
print 'pool map complete'
except KeyboardInterrupt:
print 'got ^C while pool mapping, terminating the pool'
p.terminate()
print 'pool is terminated'
except Exception, e:
print 'got exception: %r, terminating the pool' % (e,)
p.terminate()
print 'pool is terminated'
finally:
print 'joining pool processes'
p.join()
print 'join complete'
print 'the end'
if __name__ == '__main__':
main()
</code></pre>
<p>Normally you would get the following output:</p>
<pre><code>starting the pool map
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
pool map complete
joining pool processes
join complete
the end
</code></pre>
<p>So if you hit <code>^C</code>, you will get:</p>
<pre><code>starting the pool map
got ^C while pool mapping, terminating the pool
pool is terminated
joining pool processes
join complete
the end
</code></pre>
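<p>The same re-raise trick can be factored into a decorator, so each worker function doesn't need its own try/except (a sketch; the names are illustrative):</p>

```python
import functools

class KeyboardInterruptError(Exception):
    """Wraps KeyboardInterrupt in a plain Exception subclass."""

def reraise_interrupt(func):
    """Decorator: convert KeyboardInterrupt into KeyboardInterruptError."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except KeyboardInterrupt:
            raise KeyboardInterruptError()
    return wrapper

@reraise_interrupt
def worker(x):
    if x < 0:
        raise KeyboardInterrupt  # simulate Ctrl-C arriving inside the worker
    return x * 2
```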
| 13 | 2010-04-01T16:06:35Z | [
"python",
"multiprocessing",
"pool",
"keyboardinterrupt"
] |
Keyboard Interrupts with python's multiprocessing Pool | 1,408,356 | <p>How can I handle KeyboardInterrupt events with python's multiprocessing Pools? Here is a simple example:</p>
<pre><code>from multiprocessing import Pool
from time import sleep
import sys
def slowly_square(i):
sleep(1)
return i*i
def go():
pool = Pool(8)
try:
results = pool.map(slowly_square, range(40))
except KeyboardInterrupt:
# **** THIS PART NEVER EXECUTES. ****
pool.terminate()
print "You cancelled the program!"
sys.exit(1)
print "\nFinally, here are the results: ", results
if __name__ == "__main__":
go()
</code></pre>
<p>When running the code above, the <code>KeyboardInterrupt</code> gets raised when I press <code>^C</code>, but the process simply hangs at that point and I have to kill it externally.</p>
<p>I want to be able to press <code>^C</code> at any time and cause all of the processes to exit gracefully.</p>
| 78 | 2009-09-10T23:59:35Z | 3,577,628 | <p>I've found that, for the time being, the best solution is not to use the multiprocessing.pool feature, but rather to roll your own pool functionality. I've provided an example demonstrating the error with apply_async, as well as an example showing how to avoid using the pool functionality altogether.</p>
<p><a href="http://www.bryceboe.com/2010/08/26/python-multiprocessing-and-keyboardinterrupt/" rel="nofollow">http://www.bryceboe.com/2010/08/26/python-multiprocessing-and-keyboardinterrupt/</a></p>
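<p>For reference, a minimal hand-rolled stand-in for <code>Pool.map</code> built from <code>Process</code> and <code>Queue</code> might look like this (a sketch of the idea only, not the code from the linked post):</p>

```python
import multiprocessing

def _worker(inbox, outbox, func):
    # Each worker pulls (index, item) pairs until it sees the None sentinel.
    for index, item in iter(inbox.get, None):
        outbox.put((index, func(item)))

def square(x):
    return x * x

def simple_pool_map(func, iterable, processes=2):
    """Order-preserving map over worker processes, without multiprocessing.Pool."""
    items = list(iterable)
    inbox = multiprocessing.Queue()
    outbox = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=_worker,
                                       args=(inbox, outbox, func))
               for _ in range(processes)]
    for w in workers:
        w.start()
    for pair in enumerate(items):
        inbox.put(pair)
    for _ in workers:
        inbox.put(None)  # one shutdown sentinel per worker
    results = [None] * len(items)
    for _ in items:
        index, value = outbox.get()
        results[index] = value
    for w in workers:
        w.join()
    return results
```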
| 3 | 2010-08-26T17:16:12Z | [
"python",
"multiprocessing",
"pool",
"keyboardinterrupt"
] |
Keyboard Interrupts with python's multiprocessing Pool | 1,408,356 | <p>How can I handle KeyboardInterrupt events with python's multiprocessing Pools? Here is a simple example:</p>
<pre><code>from multiprocessing import Pool
from time import sleep
import sys
def slowly_square(i):
sleep(1)
return i*i
def go():
pool = Pool(8)
try:
results = pool.map(slowly_square, range(40))
except KeyboardInterrupt:
# **** THIS PART NEVER EXECUTES. ****
pool.terminate()
print "You cancelled the program!"
sys.exit(1)
print "\nFinally, here are the results: ", results
if __name__ == "__main__":
go()
</code></pre>
<p>When running the code above, the <code>KeyboardInterrupt</code> gets raised when I press <code>^C</code>, but the process simply hangs at that point and I have to kill it externally.</p>
<p>I want to be able to press <code>^C</code> at any time and cause all of the processes to exit gracefully.</p>
| 78 | 2009-09-10T23:59:35Z | 6,191,991 | <p>From what I have recently found, the best solution is to set up the worker processes to ignore SIGINT altogether, and confine all the cleanup code to the parent process. This fixes the problem for both idle and busy worker processes, and requires no error handling code in your child processes.</p>
<pre><code>import signal
...
def init_worker():
signal.signal(signal.SIGINT, signal.SIG_IGN)
...
def main():
pool = multiprocessing.Pool(size, init_worker)
...
except KeyboardInterrupt:
pool.terminate()
pool.join()
</code></pre>
<p>Explanation and full example code can be found at <a href="http://noswap.com/blog/python-multiprocessing-keyboardinterrupt/">http://noswap.com/blog/python-multiprocessing-keyboardinterrupt/</a> and <a href="http://github.com/jreese/multiprocessing-keyboardinterrupt">http://github.com/jreese/multiprocessing-keyboardinterrupt</a> respectively.</p>
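<p>Filled out into a complete, runnable sketch (the worker function and pool size are illustrative):</p>

```python
import multiprocessing
import signal

def init_worker():
    # Workers ignore SIGINT; only the parent reacts to Ctrl-C and cleans up.
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def square(x):
    return x * x

def run(n=10, processes=4):
    pool = multiprocessing.Pool(processes, init_worker)
    try:
        # Timed get() so the parent's wait stays interruptible.
        results = pool.map_async(square, range(n)).get(9999999)
    except KeyboardInterrupt:
        pool.terminate()
        pool.join()
        raise
    pool.close()
    pool.join()
    return results
```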
| 30 | 2011-05-31T18:39:47Z | [
"python",
"multiprocessing",
"pool",
"keyboardinterrupt"
] |
Keyboard Interrupts with python's multiprocessing Pool | 1,408,356 | <p>How can I handle KeyboardInterrupt events with python's multiprocessing Pools? Here is a simple example:</p>
<pre><code>from multiprocessing import Pool
from time import sleep
import sys
def slowly_square(i):
sleep(1)
return i*i
def go():
pool = Pool(8)
try:
results = pool.map(slowly_square, range(40))
except KeyboardInterrupt:
# **** THIS PART NEVER EXECUTES. ****
pool.terminate()
print "You cancelled the program!"
sys.exit(1)
print "\nFinally, here are the results: ", results
if __name__ == "__main__":
go()
</code></pre>
<p>When running the code above, the <code>KeyboardInterrupt</code> gets raised when I press <code>^C</code>, but the process simply hangs at that point and I have to kill it externally.</p>
<p>I want to be able to press <code>^C</code> at any time and cause all of the processes to exit gracefully.</p>
| 78 | 2009-09-10T23:59:35Z | 13,159,399 | <p>Usually this simple structure works for <kbd>Ctrl</kbd>-<kbd>C</kbd> on a Pool:</p>
<pre><code>import signal

def signal_handle(_signal, frame):
print "Stopping the Jobs."
signal.signal(signal.SIGINT, signal_handle)
</code></pre>
<p>As was stated in a few similar posts:</p>
<p><a href="http://stackoverflow.com/questions/4205317/capture-keyboardinterrupt-in-python-without-try-except">Capture keyboardinterrupt in Python without try-except</a></p>
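<p>The handler can be exercised without an actual keypress by sending SIGINT to the current process (a self-contained sketch; the handler records the delivery instead of printing so the effect is observable):</p>

```python
import os
import signal

caught = []

def signal_handle(_signal, frame):
    # Record the signal number instead of printing a message.
    caught.append(_signal)

signal.signal(signal.SIGINT, signal_handle)
os.kill(os.getpid(), signal.SIGINT)  # same delivery path as Ctrl-C
```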
| 6 | 2012-10-31T13:44:37Z | [
"python",
"multiprocessing",
"pool",
"keyboardinterrupt"
] |
Keyboard Interrupts with python's multiprocessing Pool | 1,408,356 | <p>How can I handle KeyboardInterrupt events with python's multiprocessing Pools? Here is a simple example:</p>
<pre><code>from multiprocessing import Pool
from time import sleep
import sys
def slowly_square(i):
sleep(1)
return i*i
def go():
pool = Pool(8)
try:
results = pool.map(slowly_square, range(40))
except KeyboardInterrupt:
# **** THIS PART NEVER EXECUTES. ****
pool.terminate()
print "You cancelled the program!"
sys.exit(1)
print "\nFinally, here are the results: ", results
if __name__ == "__main__":
go()
</code></pre>
<p>When running the code above, the <code>KeyboardInterrupt</code> gets raised when I press <code>^C</code>, but the process simply hangs at that point and I have to kill it externally.</p>
<p>I want to be able to press <code>^C</code> at any time and cause all of the processes to exit gracefully.</p>
| 78 | 2009-09-10T23:59:35Z | 23,682,499 | <p>It seems there are two issues that make exception handling with multiprocessing annoying. The first (noted by Glenn) is that you need to use <code>map_async</code> with a timeout instead of <code>map</code> in order to get an immediate response (i.e., don't finish processing the entire list). The second (noted by Andrey) is that multiprocessing doesn't catch exceptions that don't inherit from <code>Exception</code> (e.g., <code>SystemExit</code>). So here's my solution that deals with both of these:</p>
<pre><code>import sys
import functools
import traceback
import multiprocessing
def _poolFunctionWrapper(function, arg):
"""Run function under the pool
Wrapper around function to catch exceptions that don't inherit from
Exception (which aren't caught by multiprocessing, so that you end
up hitting the timeout).
"""
try:
return function(arg)
except:
cls, exc, tb = sys.exc_info()
if issubclass(cls, Exception):
raise # No worries
# Need to wrap the exception with something multiprocessing will recognise
print "Unhandled exception %s (%s):\n%s" % (cls.__name__, exc, traceback.format_exc())
raise Exception("Unhandled exception: %s (%s)" % (cls.__name__, exc))
def _runPool(pool, timeout, function, iterable):
"""Run the pool
Wrapper around pool.map_async, to handle timeout. This is required so as to
trigger an immediate interrupt on the KeyboardInterrupt (Ctrl-C); see
http://stackoverflow.com/questions/1408356/keyboard-interrupts-with-pythons-multiprocessing-pool
Further wraps the function in _poolFunctionWrapper to catch exceptions
that don't inherit from Exception.
"""
return pool.map_async(functools.partial(_poolFunctionWrapper, function), iterable).get(timeout)
def myMap(function, iterable, numProcesses=1, timeout=9999):
"""Run the function on the iterable, optionally with multiprocessing"""
if numProcesses > 1:
pool = multiprocessing.Pool(processes=numProcesses, maxtasksperchild=1)
mapFunc = functools.partial(_runPool, pool, timeout)
else:
pool = None
mapFunc = map
results = mapFunc(function, iterable)
if pool is not None:
pool.close()
pool.join()
return results
</code></pre>
| 3 | 2014-05-15T15:23:24Z | [
"python",
"multiprocessing",
"pool",
"keyboardinterrupt"
] |
How can you profile a parallelized Python script? | 1,408,393 | <p>Suppose I have a python script called <code>my_parallel_script.py</code> that involves using <code>multiprocessing</code> to parallelize several things and I run it with the following command:</p>
<pre><code>python -m cProfile my_parallel_script.py
</code></pre>
<p>This generates profiling output for the <strong>parent process</strong> only. Calls made in child processes are not recorded at all. Is it possible to profile the child processes as well?</p>
<p>If the only option is to modify the source, what would be the simplest way to do this?</p>
| 9 | 2009-09-11T00:15:21Z | 1,408,533 | <p>cProfile only works within a single process, so you will not automatically get the child processes profiled.</p>
<p>I would recommend that you tweak the child process code so that you can invoke it separately as a single process. Then run it under the profiler. You probably don't need to run your system multi-process while profiling, and it will simplify the job to have only a single child running.</p>
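<p>For example, the child's work function can be invoked directly under cProfile in a single process (a sketch; the worker body is illustrative):</p>

```python
import cProfile
import io
import pstats

def worker(n):
    # Stand-in for the code that normally runs in a child process.
    return sum(i * i for i in range(n))

def profile_worker(n):
    """Run worker(n) under cProfile and return (result, text report)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = worker(n)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats()
    return result, buf.getvalue()
```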
| 8 | 2009-09-11T01:11:09Z | [
"python",
"profile",
"parallel-processing",
"multiprocessing"
] |
Improving a Python Script that Updates Apache DocumentRoot | 1,408,424 | <p>I'm tired of going through all the steps it takes (for me) to change the DocumentRoot in Apache. I'm trying to facilitate the process with the following Python script...</p>
<pre><code>#!/usr/bin/python
import sys, re
if len(sys.argv) == 2:
f = open('/tmp/apachecdr', 'w')
f.write(open('/etc/apache2/httpd.conf').read())
f = open('/tmp/apachecdr', 'r')
r = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
f.read())
f = open('/etc/apache2/httpd.conf', 'w')
f.write(r)
else:
print "Please supply the new DocumentRoot path."
</code></pre>
<p>I've saved this as /usr/bin/apachecdr so that I could simply open a shell and "sudo apachecdr /new/documentroot/path" and then restart with apachectl. My question is how would you write this? </p>
<p>It's my first time posting on Stack Overflow (and I'm new to Python) so please let me know if this is not specific enough of a question.</p>
| 0 | 2009-09-11T00:26:09Z | 1,408,547 | <p>You're doing a lot of file work for not much benefit. Why do you write /tmp/apachecdr and then immediately read it again? If you are copying httpd.conf to the tmp directory as a backup, the shutil module provides functions to copy files:</p>
<pre><code>#!/usr/bin/python
import shutil, sys, re
HTTPD_CONF = '/etc/apache2/httpd.conf'
BACKUP = '/tmp/apachecdr'
if len(sys.argv) == 2:
shutil.copyfile(BACKUP, HTTPD_CONF)
conf = open(HTTPD_CONF).read()
new_conf = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
conf)
open(HTTPD_CONF, 'w').write(new_conf)
else:
print "Please supply the new DocumentRoot path."
</code></pre>
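<p>The substitution itself is easy to check in isolation (the paths below are made up for the example):</p>

```python
import re

# Hypothetical config text standing in for httpd.conf.
conf = '''ServerName example.com
DocumentRoot "/var/www/old"
'''

# Same pattern as above; the replacement path is illustrative.
new_conf = re.sub(r'DocumentRoot "(.*?)"',
                  'DocumentRoot "/srv/www/new"', conf)
```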
| 0 | 2009-09-11T01:16:56Z | [
"python",
"apache",
"document",
"root"
] |
Improving a Python Script that Updates Apache DocumentRoot | 1,408,424 | <p>I'm tired of going through all the steps it takes (for me) to change the DocumentRoot in Apache. I'm trying to facilitate the process with the following Python script...</p>
<pre><code>#!/usr/bin/python
import sys, re
if len(sys.argv) == 2:
f = open('/tmp/apachecdr', 'w')
f.write(open('/etc/apache2/httpd.conf').read())
f = open('/tmp/apachecdr', 'r')
r = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
f.read())
f = open('/etc/apache2/httpd.conf', 'w')
f.write(r)
else:
print "Please supply the new DocumentRoot path."
</code></pre>
<p>I've saved this as /usr/bin/apachecdr so that I could simply open a shell and "sudo apachecdr /new/documentroot/path" and then restart with apachectl. My question is how would you write this? </p>
<p>It's my first time posting on Stack Overflow (and I'm new to Python) so please let me know if this is not specific enough of a question.</p>
| 0 | 2009-09-11T00:26:09Z | 1,408,570 | <p>In general Python programmers prefer explicit over implicit (see <a href="http://www.python.org/dev/peps/pep-0020/" rel="nofollow">http://www.python.org/dev/peps/pep-0020/</a>).
With that in mind you may want to rewrite this line</p>
<pre><code>f.write(open('/etc/apache2/httpd.conf').read())
</code></pre>
<p>To:</p>
<pre><code>infile = open('/etc/apache2/httpd.conf')
f.write(infile.read())
infile.close()
</code></pre>
<p>Another suggestion would be to add some error checking. For example checking that the file /etc/apache2/httpd.conf exists.</p>
<p>Normally one would use a try/except around the open line, but if you want to use the combined read and the write command you could use a os.stat for checking.</p>
<p>Example</p>
<pre><code>import os, sys
try:
    os.stat('/etc/apache2/httpd.conf')
except OSError:
    print "/etc/apache2/httpd.conf does not exist"
    sys.exit(1) # exit the program
</code></pre>
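<p>The try/except-around-open variant can be wrapped in a small helper (a sketch; the helper name is made up):</p>

```python
def check_readable(path):
    """Return True when path exists and can be opened for reading."""
    try:
        with open(path):
            pass
        return True
    except IOError:  # alias of OSError in modern Python
        return False
```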
| 0 | 2009-09-11T01:28:07Z | [
"python",
"apache",
"document",
"root"
] |
Improving a Python Script that Updates Apache DocumentRoot | 1,408,424 | <p>I'm tired of going through all the steps it takes (for me) to change the DocumentRoot in Apache. I'm trying to facilitate the process with the following Python script...</p>
<pre><code>#!/usr/bin/python
import sys, re
if len(sys.argv) == 2:
f = open('/tmp/apachecdr', 'w')
f.write(open('/etc/apache2/httpd.conf').read())
f = open('/tmp/apachecdr', 'r')
r = re.sub('DocumentRoot "(.*?)"',
'DocumentRoot "' + sys.argv[1] + '"',
f.read())
f = open('/etc/apache2/httpd.conf', 'w')
f.write(r)
else:
print "Please supply the new DocumentRoot path."
</code></pre>
<p>I've saved this as /usr/bin/apachecdr so that I could simply open a shell and "sudo apachecdr /new/documentroot/path" and then restart with apachectl. My question is how would you write this? </p>
<p>It's my first time posting on Stack Overflow (and I'm new to Python) so please let me know if this is not specific enough of a question.</p>
| 0 | 2009-09-11T00:26:09Z | 1,413,880 | <p>Thank you both for your answers. Ned, I was hoping you'd say something like that. The reason I was doing it like that is simply because I didn't know better. Thanks for providing your code, it was exactly what I was looking for. </p>
<p>Btw, the error is happening because "BACKUP" and "HTTPD_CONF" are reversed on line 8. In other words, switch "shutil.copyfile(BACKUP, HTTPD_CONF)" to "shutil.copyfile(HTTPD_CONF, BACKUP)" on line 8 of Ned's script.</p>
<p>Thanks again guys! This was a great first experience posting on this site.</p>
| 0 | 2009-09-12T00:44:51Z | [
"python",
"apache",
"document",
"root"
] |
Track process status with Python | 1,408,627 | <p>I want to start a number of subprocesses in my Python script and then track when they complete or crash.</p>
<p>subprocess.Popen.poll() seems to return None when the process is still running, 0 on success, and non-zero on failure. Can that be expected on all OS's?
Unfortunately the standard library documentation is lacking for these methods...</p>
<p>Is the subprocess module the most suitable to achieve this goal?</p>
<p>thanks</p>
| 3 | 2009-09-11T01:47:22Z | 1,408,646 | <p>Yes to all. </p>
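<p>A quick demonstration of those return values, using Python children in place of arbitrary subprocesses:</p>

```python
import subprocess
import sys

# A child that sleeps briefly, then exits cleanly.
child = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(0.3)'])
status_while_running = child.poll()    # None: still running
child.wait()
status_after_success = child.poll()    # 0: clean exit

# A child that exits with a non-zero status, as a crash would.
failing = subprocess.Popen([sys.executable, '-c', 'import sys; sys.exit(3)'])
failing.wait()
status_after_failure = failing.poll()  # non-zero: failure
```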
| 1 | 2009-09-11T01:58:27Z | [
"python",
"process",
"crash",
"subprocess"
] |
Track process status with Python | 1,408,627 | <p>I want to start a number of subprocesses in my Python script and then track when they complete or crash.</p>
<p>subprocess.Popen.poll() seems to return None when the process is still running, 0 on success, and non-zero on failure. Can that be expected on all OS's?
Unfortunately the standard library documentation is lacking for these methods...</p>
<p>Is the subprocess module the most suitable to achieve this goal?</p>
<p>thanks</p>
| 3 | 2009-09-11T01:47:22Z | 1,408,668 | <p>This may not be a very good answer to your question, but just in case you are at risk of reinventing the wheel, take a look at <a href="http://supervisord.org/" rel="nofollow">Supervisor</a> </p>
<blockquote>
<p><em>Supervisor is a client/server system that allows its users to monitor and
control a number of processes on
UNIX-like operating systems.</em></p>
</blockquote>
<p>And it's all written in Python, so if you feel like tinkering with it, you can dig right in!</p>
| 4 | 2009-09-11T02:08:19Z | [
"python",
"process",
"crash",
"subprocess"
] |
Getting another program's output as input on the fly | 1,408,678 | <p>I've two programs I'm using in this way:</p>
<pre><code>$ c_program | python_program.py
</code></pre>
<p>c_program prints something using <code>printf()</code> and python_program.py reads using <code>sys.stdin.readline()</code> </p>
<p>I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.</p>
<p>How can I solve this? </p>
| 9 | 2009-09-11T02:12:28Z | 1,408,710 | <p>All the Unix shells (that I know of) implement shell pipelines via something other than a pty
(typically, they use Unix pipes!-); therefore, the C/C++ runtime library in <code>cpp_program</code> will KNOW its output is NOT a terminal, and therefore it WILL buffer the output (in chunks of a few KB at a time). Unless you write your own shell (or semiquasimaybeshelloid) that implements pipelines via ptys, I believe there is no way to do what you require using pipeline notation.</p>
<p>The "shelloid" thing in question might be written in Python (or in C, or Tcl, or...), using the <code>pty</code> module of the standard library or higher-level abstraction based on it such as <a href="http://pexpect.sourceforge.net/pexpect.html" rel="nofollow">pexpect</a>, and the fact that the two programs to be connected via a "pty-based pipeline" are written in C++ and Python is pretty irrelevant. The key idea is to trick the program to the left of the pipe into believing its stdout is a terminal (that's why a pty must be at the root of the trick) to fool its runtime library into NOT buffering output. Once you have written such a shelloid, you'd call it with some syntax such as:</p>
<p>$ shelloid 'cpp_program | python_program.py'</p>
<p>Of course it would be easier to provide a "point solution" by writing <code>python_program</code> in the knowledge that it must spawn <code>cpp_program</code> as a sub-process AND trick it into believing its stdout is a terminal (i.e., <code>python_program</code> would then directly use <code>pexpect</code>, for example). But if you have a million of such situations where you want to defeat the normal buffering performed by the system-provided C runtime library, or many cases in which you want to reuse existing filters, etc, writing <code>shelloid</code> might actually be preferable.</p>
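<p>A bare-bones version of that trick, using only the standard library's <code>pty</code> and <code>os</code> modules, might look like this (a sketch of the idea, not a full shelloid):</p>

```python
import os
import pty

def run_under_pty(argv):
    """Run argv with its stdout attached to a pty and collect the output.

    Because the child sees a terminal, its C library line-buffers stdout
    instead of block-buffering it.
    """
    master, slave = pty.openpty()
    pid = os.fork()
    if pid == 0:  # child: wire stdout to the pty slave, then exec
        os.close(master)
        os.dup2(slave, 1)
        os.close(slave)
        os.execvp(argv[0], argv)
    os.close(slave)  # parent: read everything from the master side
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:  # Linux ptys raise EIO at end-of-file
            break
        if not data:
            break
        chunks.append(data)
    os.close(master)
    os.waitpid(pid, 0)
    return b''.join(chunks)
```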
| 1 | 2009-09-11T02:35:11Z | [
"python",
"c",
"linux",
"bash",
"stdio"
] |
Getting another program's output as input on the fly | 1,408,678 | <p>I've two programs I'm using in this way:</p>
<pre><code>$ c_program | python_program.py
</code></pre>
<p>c_program prints something using <code>printf()</code> and python_program.py reads using <code>sys.stdin.readline()</code> </p>
<p>I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.</p>
<p>How can I solve this? </p>
| 9 | 2009-09-11T02:12:28Z | 1,408,717 | <p>You may want to try <code>flush</code>ing the stdout stream in the cpp program.</p>
| 1 | 2009-09-11T02:40:18Z | [
"python",
"c",
"linux",
"bash",
"stdio"
] |
Getting another program's output as input on the fly | 1,408,678 | <p>I've two programs I'm using in this way:</p>
<pre><code>$ c_program | python_program.py
</code></pre>
<p>c_program prints something using <code>printf()</code> and python_program.py reads using <code>sys.stdin.readline()</code> </p>
<p>I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.</p>
<p>How can I solve this? </p>
| 9 | 2009-09-11T02:12:28Z | 1,408,730 | <p>What you need is for your C program to call fflush(stdout) after every line. For example, with the GNU grep tool, you can invoke the option '--line-buffered', which causes this behavior. See <strong><a href="http://www.thinkage.ca/english/gcos/expl/nsc/lib/fflush.html">fflush</a></strong>.</p>
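<p>Seen from the consuming side: once the producer flushes, the reader gets each line as it is written. In this sketch a Python child started with <code>-u</code> (unbuffered) stands in for the C program, playing the role of <code>fflush(stdout)</code> after every line:</p>

```python
import subprocess
import sys

# Unbuffered Python child as a stand-in for a C program that flushes.
child = subprocess.Popen(
    [sys.executable, '-u', '-c', 'for i in range(3): print(i)'],
    stdout=subprocess.PIPE)

lines = [line.strip() for line in child.stdout]  # arrives line by line
child.wait()
```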
| 8 | 2009-09-11T02:43:34Z | [
"python",
"c",
"linux",
"bash",
"stdio"
] |
Getting another program's output as input on the fly | 1,408,678 | <p>I've two programs I'm using in this way:</p>
<pre><code>$ c_program | python_program.py
</code></pre>
<p>c_program prints something using <code>printf()</code> and python_program.py reads using <code>sys.stdin.readline()</code> </p>
<p>I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.</p>
<p>How can I solve this? </p>
| 9 | 2009-09-11T02:12:28Z | 1,408,768 | <p>OK, this may sound stupid, but it might work:</p>
<p>output your program to a file</p>
<pre><code>$ c_program >> ./out.log
</code></pre>
<p>develop a python program that reads from the tail command</p>
<pre><code>import os
tailoutput = os.popen("tail -n 0 -f ./out.log")
try:
while 1:
line = tailoutput.readline()
if len(line) == 0:
break
#do the rest of your things here
print line
except KeyboardInterrupt:
print "Quitting \n"
</code></pre>
| -1 | 2009-09-11T02:58:54Z | [
"python",
"c",
"linux",
"bash",
"stdio"
] |
Getting another program's output as input on the fly | 1,408,678 | <p>I've two programs I'm using in this way:</p>
<pre><code>$ c_program | python_program.py
</code></pre>
<p>c_program prints something using <code>printf()</code> and python_program.py reads using <code>sys.stdin.readline()</code> </p>
<p>I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.</p>
<p>How can I solve this? </p>
| 9 | 2009-09-11T02:12:28Z | 1,408,829 | <p>Just set stdout to be line buffered at the beginning of your C program (before performing any output), like this:</p>
<pre><code>#include <stdio.h>
setvbuf(stdout, NULL, _IOLBF, 0);
</code></pre>
<p>or</p>
<pre><code>#include <stdio.h>
setlinebuf(stdout);
</code></pre>
<p>Either one will work on Linux, but <code>setvbuf</code> is part of the C standard so it will work on more systems.</p>
<p>By default stdout will be block buffered for a pipe or file, or line buffered for a terminal. Since stdout is a pipe in this case, the default will be block buffered. If it is block buffered then the buffer will be flushed when it is full, or when you call <code>fflush(stdout)</code>. If it is line buffered then it will be flushed automatically after each line.</p>
| 17 | 2009-09-11T03:27:43Z | [
"python",
"c",
"linux",
"bash",
"stdio"
] |
Getting another program's output as input on the fly | 1,408,678 | <p>I've two programs I'm using in this way:</p>
<pre><code>$ c_program | python_program.py
</code></pre>
<p>c_program prints something using <code>printf()</code> and python_program.py reads using <code>sys.stdin.readline()</code> </p>
<p>I'd like to make the python_program.py process c_program's output as it prints, immediately, so that it can print its own current output. Unfortunately python_program.py gets its input only after c_program ends.</p>
<p>How can I solve this? </p>
| 9 | 2009-09-11T02:12:28Z | 1,409,028 | <p>If you can modify your C program, you've already received your <a href="http://stackoverflow.com/questions/1408678/getting-another-programs-output-as-input-on-the-fly/1408730#1408730">answer</a>, but I thought I'd include a solution for those that can't/won't modify their code.</p>
<p><a href="http://expect.sourceforge.net/" rel="nofollow">expect</a> has an example script called <a href="http://expect.sourceforge.net/example/unbuffer.man.html" rel="nofollow">unbuffer</a> that will do the trick.</p>
| 6 | 2009-09-11T04:51:18Z | [
"python",
"c",
"linux",
"bash",
"stdio"
] |
Trimming Python Runtime | 1,408,726 | <p>We've got a (Windows) application, with which we distribute an entire Python installation (including several 3rd-party modules that we use), so we have consistency and so we don't need to install everything separately. This works pretty well, but the application is pretty huge.</p>
<p>Obviously, we don't use everything available in the runtime. I'd like to trim down the runtime to only include what we really need.</p>
<p>I plan on trying out py2exe, but I'd like to try and find another solution that will just help me remove the unneeded parts of the Python runtime.</p>
| 6 | 2009-09-11T02:42:49Z | 1,408,735 | <p>One trick I've learned while trimming down .py files to ship: Delete all the .pyc files in the standard library, then run your application throughly (that is, enough to be sure all the Python modules it needs will be loaded). If you examine the standard library directories, there will be .pyc files for all the modules that were actually used. .py files without .pyc are ones that you don't need.</p>
| 6 | 2009-09-11T02:46:39Z | [
"python"
] |
Trimming Python Runtime | 1,408,726 | <p>We've got a (Windows) application, with which we distribute an entire Python installation (including several 3rd-party modules that we use), so we have consistency and so we don't need to install everything separately. This works pretty well, but the application is pretty huge.</p>
<p>Obviously, we don't use everything available in the runtime. I'd like to trim down the runtime to only include what we really need.</p>
<p>I plan on trying out py2exe, but I'd like to try and find another solution that will just help me remove the unneeded parts of the Python runtime.</p>
| 6 | 2009-09-11T02:42:49Z | 1,408,742 | <p>Both <a href="http://www.py2exe.org/" rel="nofollow">py2exe</a> and <a href="http://www.pyinstaller.org/" rel="nofollow">pyinstaller</a> (<strong>NOTE</strong>: for the latter use the SVN version, the released one is VERY long in the tooth;-) do their "trimming" via <a href="http://www.python.org/doc/2.6.2/library/modulefinder.html" rel="nofollow">modulefinder</a>, the standard library module for finding all modules used by a given Python script; you can of course use the latter yourself to identify all needed modules, if you don't trust pyinstaller or py2exe to do it properly and automatically on your behalf.</p>
| 5 | 2009-09-11T02:48:54Z | [
"python"
] |
Trimming Python Runtime | 1,408,726 | <p>We've got a (Windows) application, with which we distribute an entire Python installation (including several 3rd-party modules that we use), so we have consistency and so we don't need to install everything separately. This works pretty well, but the application is pretty huge.</p>
<p>Obviously, we don't use everything available in the runtime. I'd like to trim down the runtime to only include what we really need.</p>
<p>I plan on trying out py2exe, but I'd like to try and find another solution that will just help me remove the unneeded parts of the Python runtime.</p>
| 6 | 2009-09-11T02:42:49Z | 1,408,815 | <p>This <a href="http://www.py2exe.org/index.cgi/BetterCompression" rel="nofollow">py2exe page on compression</a> suggests using <a href="http://upx.sourceforge.net/" rel="nofollow">UPX</a> to compress any DLLs or .pyd files (which are actually just DLLs, still). Obviously this doesn't help in trimming out unneeded <em>modules</em>, but it can/will trim down the size of your distribution, if that's a large concern.</p>
| 1 | 2009-09-11T03:19:42Z | [
"python"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,408,842 | <p>One possibility:</p>
<pre><code>def f(**kw):
    acceptable_names = set(('a', 'b', 'c'))
if not (set(kw) <= acceptable_names):
raise WhateverYouWantException(whatever)
...proceed...
</code></pre>
<p>IOW, it's very easy to check that the passed-in names are within the acceptable set and otherwise raise whatever you'd want Python to raise (TypeError, I guess;-). Pretty easy to turn into a decorator, btw.</p>
<p>Another possibility:</p>
<pre><code>_sentinel = object()
def f(a=_sentinel, b=_sentinel, c=_sentinel):
...proceed with checks `is _sentinel`...
</code></pre>
<p>by making a unique object <code>_sentinel</code> you remove the risk that the caller might be accidentally passing <code>None</code> (or other non-unique default values the caller could possibly pass). This is all <code>object()</code> is good for, btw: an extremely-lightweight, unique sentinel that cannot possibly be accidentally confused with any other object (when you check with the <code>is</code> operator).</p>
<p>Either solution is preferable for slightly different problems.</p>
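A runnable sketch of the sentinel approach (the body of <code>func</code> here is illustrative — it simply reports which arguments were actually passed):

```python
_sentinel = object()

def func(a=_sentinel, b=_sentinel, c=_sentinel):
    # Keep only the arguments the caller actually supplied; the `is` check
    # means even an explicit func(a=None) is detected as "passed".
    return {name: val
            for name, val in (("a", a), ("b", b), ("c", c))
            if val is not _sentinel}

print(func(b=2))       # {'b': 2}
print(func(a=None))    # {'a': None}
```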
| 5 | 2009-09-11T03:31:08Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,408,851 | <p>How about using a decorator to validate the incoming kwargs?</p>
<pre><code>def validate_kwargs(*keys):
def entangle(f):
def inner(*args, **kwargs):
for key in kwargs:
if not key in keys:
raise ValueError("Received bad kwarg: '%s', expected: %s" % (key, keys))
return f(*args, **kwargs)
return inner
return entangle
###
@validate_kwargs('a', 'b', 'c')
def func(**kwargs):
for arg,val in kwargs.items():
print arg, "->", val
func(b=2)
print '----'
func(a=3, c=5)
print '----'
func(d='not gonna work')
</code></pre>
<p>Gives this output:</p>
<pre><code>b -> 2
----
a -> 3
c -> 5
----
Traceback (most recent call last):
File "kwargs.py", line 20, in <module>
func(d='not gonna work')
File "kwargs.py", line 6, in inner
raise ValueError("Received bad kwarg: '%s', expected: %s" % (key, keys))
ValueError: Received bad kwarg: 'd', expected: ('a', 'b', 'c')
</code></pre>
| 2 | 2009-09-11T03:34:33Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,408,854 | <p>There's probably better ways to do this, but here's my take:</p>
<pre><code>def CompareArgs(argdict, **kwargs):
if not set(argdict.keys()) <= set(kwargs.keys()):
# not <= may seem weird, but comparing sets sometimes gives weird results.
# set1 <= set2 means that all items in set 1 are present in set 2
raise ValueError("invalid args")
def foo(**kwargs):
# we declare foo's "standard" args to be a, b, c
CompareArgs(kwargs, a=None, b=None, c=None)
print "Inside foo"
if __name__ == "__main__":
foo(a=1)
foo(a=1, b=3)
foo(a=1, b=3, c=5)
foo(c=10)
foo(bar=6)
</code></pre>
<p>and its output:</p>
<pre>
Inside foo
Inside foo
Inside foo
Inside foo
Traceback (most recent call last):
File "a.py", line 18, in
foo(bar=6)
File "a.py", line 9, in foo
CompareArgs(kwargs, a=None, b=None, c=None)
File "a.py", line 5, in CompareArgs
raise ValueError("invalid args")
ValueError: invalid args
</pre>
<p>This could probably be converted to a decorator, but my decorators need work. Left as an exercise to the reader :P</p>
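One way the exercise might look as a decorator (the helper name <code>restrict_kwargs</code> is made up for this sketch; it uses the same subset test as <code>CompareArgs</code>):

```python
import functools

def restrict_kwargs(*names):
    # Hypothetical decorator: reject any keyword argument outside `names`.
    allowed = set(names)
    def decorator(f):
        @functools.wraps(f)
        def wrapper(**kwargs):
            if not set(kwargs) <= allowed:  # passed kwargs must be a subset
                raise ValueError("invalid args")
            return f(**kwargs)
        return wrapper
    return decorator

@restrict_kwargs("a", "b", "c")
def foo(**kwargs):
    return kwargs

print(foo(a=1, b=3))
```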
| 0 | 2009-09-11T03:36:13Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,408,856 | <p>Perhaps raise an error if they pass any *args?</p>
<pre><code>def func(*args, **kwargs):
if args:
raise TypeError("no positional args allowed")
arg1 = kwargs.pop("arg1", "default")
if kwargs:
raise TypeError("unknown args " + str(kwargs.keys()))
</code></pre>
<p>It'd be simple to factor it into taking a list of varnames or a generic parsing function to use. It wouldn't be too hard to make this into a decorator (python 3.1), too:</p>
<pre><code>def OnlyKwargs(func):
allowed = func.__code__.co_varnames
def wrap(*args, **kwargs):
assert not args
# or whatever logic you need wrt required args
assert sorted(allowed) == sorted(kwargs)
        return func(**kwargs)
    return wrap
</code></pre>
<p>Note: i'm not sure how well this would work around already wrapped functions or functions that have <code>*args</code> or <code>**kwargs</code> already.</p>
| 0 | 2009-09-11T03:37:09Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,408,860 | <p>Here is the easiest and simplest way:</p>
<pre><code>def func(a=None, b=None, c=None):
args = locals().copy()
print args
func(2, "egg")
</code></pre>
<p>This gives the output: <code>{'a': 2, 'c': None, 'b': 'egg'}</code>.
The reason <code>args</code> should be a copy of the <code>locals</code> dictionary is that dictionaries are mutable, so if you created any local variables in this function <code>args</code> would contain all of the local variables and their values, not just the arguments.</p>
<p>More documentation on the built-in <code>locals</code> function <a href="http://docs.python.org/library/functions.html#locals">here</a>.</p>
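A small sketch of why the copy matters — a local created after the snapshot stays out of the result (the variable name <code>leftover</code> is illustrative):

```python
def func(a=None, b=None, c=None):
    args = locals().copy()  # snapshot taken while only the arguments exist
    leftover = "a later local that should not appear among the arguments"
    return args

result = func(2, "egg")
print("leftover" in result)  # the later local is absent from the snapshot
```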
| 9 | 2009-09-11T03:37:44Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,409,284 | <p>I was inspired by lost-theory's decorator goodness, and after playing about with it for a bit came up with this:</p>
<pre><code>def actual_kwargs():
"""
Decorator that provides the wrapped function with an attribute 'actual_kwargs'
containing just those keyword arguments actually passed in to the function.
"""
def decorator(function):
def inner(*args, **kwargs):
inner.actual_kwargs = kwargs
return function(*args, **kwargs)
return inner
return decorator
if __name__ == "__main__":
@actual_kwargs()
def func(msg, a=None, b=False, c='', d=0):
print msg
for arg, val in sorted(func.actual_kwargs.iteritems()):
print ' %s: %s' % (arg, val)
func("I'm only passing a", a='a')
func("Here's b and c", b=True, c='c')
func("All defaults", a=None, b=False, c='', d=0)
func("Nothin'")
try:
func("Invalid kwarg", e="bogon")
except TypeError, err:
print 'Invalid kwarg\n %s' % err
</code></pre>
<p>Which prints this:</p>
<pre>
I'm only passing a
a: a
Here's b and c
b: True
c: c
All defaults
a: None
b: False
c:
d: 0
Nothin'
Invalid kwarg
func() got an unexpected keyword argument 'e'
</pre>
<p>I'm happy with this. A more flexible approach is to pass the name of the attribute you want to use to the decorator, instead of hard-coding it to 'actual_kwargs', but this is the simplest approach that illustrates the solution.</p>
<p>Mmm, Python is tasty.</p>
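The "more flexible approach" mentioned above might look like this (the decorator name <code>record_kwargs</code> and attribute name <code>passed</code> are made up for the sketch):

```python
import functools

def record_kwargs(attr_name="actual_kwargs"):
    # Hypothetical variant: the attribute holding the passed-in kwargs is
    # configurable instead of hard-coded to 'actual_kwargs'.
    def decorator(function):
        @functools.wraps(function)
        def inner(*args, **kwargs):
            setattr(inner, attr_name, kwargs)
            return function(*args, **kwargs)
        return inner
    return decorator

@record_kwargs("passed")
def func(a=None, b=False):
    # After decoration the module-level name `func` is the wrapper,
    # so the recorded kwargs are reachable as func.passed.
    return func.passed

print(func(a=1))
```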
| 22 | 2009-09-11T06:25:43Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 1,409,376 | <p>Magic is not the answer:</p>
<pre><code>def funky(a=None, b=None, c=None):
for name, value in [('a', a), ('b', b), ('c', c)]:
print name, value
</code></pre>
| 0 | 2009-09-11T06:53:28Z | [
"python",
"arguments",
"keyword"
] |
Getting the keyword arguments actually passed to a Python method | 1,408,818 | <p>I'm dreaming of a Python method with explicit keyword args:</p>
<pre><code>def func(a=None, b=None, c=None):
for arg, val in magic_arg_dict.items(): # Where do I get the magic?
print '%s: %s' % (arg, val)
</code></pre>
<p>I want to get a dictionary of only those arguments the caller actually passed into the method, just like <code>**kwargs</code>, but I don't want the caller to be able to pass any old random args, unlike <code>**kwargs</code>.</p>
<pre><code>>>> func(b=2)
b: 2
>>> func(a=3, c=5)
a: 3
c: 5
</code></pre>
<p>So: is there such an incantation? In my case, I happen to be able to compare each argument against its default to find the ones that are different, but this is kind of inelegant and gets tedious when you have nine arguments. For bonus points, provide an incantation that can tell me even when the caller passes in a keyword argument assigned its default value:</p>
<pre><code>>>> func(a=None)
a: None
</code></pre>
<p>Tricksy!</p>
<p><b>Edit:</b> The (lexical) function signature has to remain intact. It's part of a public API, and the primary worth of the explicit keyword args lies in their documentary value. Just to make things interesting. :)</p>
| 16 | 2009-09-11T03:21:12Z | 10,504,589 | <p>This is easiest accomplished with a single instance of a sentry object:</p>
<pre><code># Top of module, does not need to be exposed in __all__
missing = {}
# Function prototype
def myFunc(a = missing, b = missing, c = missing):
    if a is not missing:
        # User specified argument a
        pass
    if b is missing:
        # User did not specify argument b
        pass
</code></pre>
<p>The nice thing about this approach is that, since we're using the "is" operator, the caller can pass an empty dict as the argument value, and we'll still pick up that they did not mean to pass it. We also avoid nasty decorators this way, and keep our code a little cleaner.</p>
| 1 | 2012-05-08T18:46:38Z | [
"python",
"arguments",
"keyword"
] |
django templates: include and extends | 1,408,925 | <p>I would like to provide the same content inside 2 different base files. </p>
<p>So I'm trying to do this:</p>
<p>page1.html:</p>
<pre><code>{% extends "base1.html" %}
{% include "commondata.html" %}
</code></pre>
<p>page2.html:</p>
<pre><code>{% extends "base2.html" %}
{% include "commondata.html" %}
</code></pre>
<p>The problem is that I can't seem to use both extends and include. Is there some way to do that? And if not, how can I accomplish the above?</p>
<p>commondata.html overrides a block that is specified in both base1.html and base2.html</p>
<p>The purpose of this is to provide the same page in both pdf and html format, where the formatting is slightly different. The above question though simplifies what I'm trying to do so if I can get an answer to that it will solve my problem.</p>
| 54 | 2009-09-11T04:08:38Z | 1,408,937 | <p>When you use the extends template tag, you're saying that the current template extends another -- that it is a child template, dependent on a parent template. Django will look at your child template and use its content to populate the parent.</p>
<p>Everything that you want to use in a child template should be within blocks, which Django uses to populate the parent. If you want to use an include statement in that child template, you have to put it within a block, for Django to make sense of it. Otherwise it just doesn't make sense and Django doesn't know what to do with it.</p>
<p>The Django documentation has a few really good examples of using blocks to replace blocks in the parent template.</p>
<p><a href="https://docs.djangoproject.com/en/dev/ref/templates/language/#template-inheritance">https://docs.djangoproject.com/en/dev/ref/templates/language/#template-inheritance</a></p>
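For example (the block and file names here are assumptions — <code>base1.html</code> would need to define a matching <code>{% block content %}</code>), the include has to sit inside a block:

```django
{# page1.html #}
{% extends "base1.html" %}

{% block content %}
    {% include "commondata.html" %}
{% endblock %}
```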
| 69 | 2009-09-11T04:13:46Z | [
"python",
"django",
"django-templates"
] |
django templates: include and extends | 1,408,925 | <p>I would like to provide the same content inside 2 different base files. </p>
<p>So I'm trying to do this:</p>
<p>page1.html:</p>
<pre><code>{% extends "base1.html" %}
{% include "commondata.html" %}
</code></pre>
<p>page2.html:</p>
<pre><code>{% extends "base2.html" %}
{% include "commondata.html" %}
</code></pre>
<p>The problem is that I can't seem to use both extends and include. Is there some way to do that? And if not, how can I accomplish the above?</p>
<p>commondata.html overrides a block that is specified in both base1.html and base2.html</p>
<p>The purpose of this is to provide the same page in both pdf and html format, where the formatting is slightly different. The above question though simplifies what I'm trying to do so if I can get an answer to that it will solve my problem.</p>
| 54 | 2009-09-11T04:08:38Z | 1,409,113 | <p>More info about why it wasn't working for me in case it helps future people:</p>
<p>The reason why it wasn't working is that {% include %} in django doesn't like special characters like fancy apostrophe. The template data I was trying to include was pasted from word. I had to manually remove all of these special characters and then it included successfully.</p>
| 7 | 2009-09-11T05:21:24Z | [
"python",
"django",
"django-templates"
] |
django templates: include and extends | 1,408,925 | <p>I would like to provide the same content inside 2 different base files. </p>
<p>So I'm trying to do this:</p>
<p>page1.html:</p>
<pre><code>{% extends "base1.html" %}
{% include "commondata.html" %}
</code></pre>
<p>page2.html:</p>
<pre><code>{% extends "base2.html" %}
{% include "commondata.html" %}
</code></pre>
<p>The problem is that I can't seem to use both extends and include. Is there some way to do that? And if not, how can I accomplish the above?</p>
<p>commondata.html overrides a block that is specified in both base1.html and base2.html</p>
<p>The purpose of this is to provide the same page in both pdf and html format, where the formatting is slightly different. The above question though simplifies what I'm trying to do so if I can get an answer to that it will solve my problem.</p>
| 54 | 2009-09-11T04:08:38Z | 13,298,495 | <p>From Django docs:</p>
<blockquote>
<p>The include tag should be considered as an implementation of "render this subtemplate and include the HTML", not as "parse this subtemplate and include its contents as if it were part of the parent". This means that there is no shared state between included templates -- each include is a completely independent rendering process.</p>
</blockquote>
<p>So Django doesn't grab any blocks from your commondata.html and it doesn't know what to do with rendered html outside blocks.</p>
| 53 | 2012-11-08T21:55:20Z | [
"python",
"django",
"django-templates"
] |
django templates: include and extends | 1,408,925 | <p>I would like to provide the same content inside 2 different base files. </p>
<p>So I'm trying to do this:</p>
<p>page1.html:</p>
<pre><code>{% extends "base1.html" %}
{% include "commondata.html" %}
</code></pre>
<p>page2.html:</p>
<pre><code>{% extends "base2.html" %}
{% include "commondata.html" %}
</code></pre>
<p>The problem is that I can't seem to use both extends and include. Is there some way to do that? And if not, how can I accomplish the above?</p>
<p>commondata.html overrides a block that is specified in both base1.html and base2.html</p>
<p>The purpose of this is to provide the same page in both pdf and html format, where the formatting is slightly different. The above question though simplifies what I'm trying to do so if I can get an answer to that it will solve my problem.</p>
| 54 | 2009-09-11T04:08:38Z | 13,896,420 | <p>Added for reference to future people who find this via google: You might want to look at the {% overextend %} tag provided by the mezzanine library for cases like this. </p>
| 1 | 2012-12-15T21:34:41Z | [
"python",
"django",
"django-templates"
] |
django templates: include and extends | 1,408,925 | <p>I would like to provide the same content inside 2 different base files. </p>
<p>So I'm trying to do this:</p>
<p>page1.html:</p>
<pre><code>{% extends "base1.html" %}
{% include "commondata.html" %}
</code></pre>
<p>page2.html:</p>
<pre><code>{% extends "base2.html" %}
{% include "commondata.html" %}
</code></pre>
<p>The problem is that I can't seem to use both extends and include. Is there some way to do that? And if not, how can I accomplish the above?</p>
<p>commondata.html overrides a block that is specified in both base1.html and base2.html</p>
<p>The purpose of this is to provide the same page in both pdf and html format, where the formatting is slightly different. The above question though simplifies what I'm trying to do so if I can get an answer to that it will solve my problem.</p>
| 54 | 2009-09-11T04:08:38Z | 16,111,696 | <p>You can't pull in blocks from an included file into a child template to override the parent template's blocks. However, you can specify a parent in a variable and have the base template specified in the context.</p>
<p>From the <a href="https://docs.djangoproject.com/en/dev/ref/templates/builtins/#extends" rel="nofollow">documentation</a>:</p>
<blockquote>
<p>{% extends variable %} uses the value of variable. If the variable evaluates to a string, Django will use that string as the name of the parent template. If the variable evaluates to a Template object, Django will use that object as the parent template.</p>
</blockquote>
<p>Instead of separate "page1.html" and "page2.html", put <code>{% extends base_template %}</code> at the top of "commondata.html". And then in your view, define <code>base_template</code> to be either "base1.html" or "base2.html".</p>
| 2 | 2013-04-19T18:57:19Z | [
"python",
"django",
"django-templates"
] |
django templates: include and extends | 1,408,925 | <p>I would like to provide the same content inside 2 different base files. </p>
<p>So I'm trying to do this:</p>
<p>page1.html:</p>
<pre><code>{% extends "base1.html" %}
{% include "commondata.html" %}
</code></pre>
<p>page2.html:</p>
<pre><code>{% extends "base2.html" %}
{% include "commondata.html" %}
</code></pre>
<p>The problem is that I can't seem to use both extends and include. Is there some way to do that? And if not, how can I accomplish the above?</p>
<p>commondata.html overrides a block that is specified in both base1.html and base2.html</p>
<p>The purpose of this is to provide the same page in both pdf and html format, where the formatting is slightly different. The above question though simplifies what I'm trying to do so if I can get an answer to that it will solve my problem.</p>
| 54 | 2009-09-11T04:08:38Z | 23,626,151 | <p>In my opinion, the right (best) answer to this question is the one from <strong>podshumok</strong>, as it explains why the behaviour of include when used along with inheritance.</p>
<p>However, I was somewhat surprised that nobody mentioned the <strong>ssi</strong> tag provided by the Django templating system, which is specifically designed for <em>inline</em> including an external piece of <em>text</em>. Here, <em>inline</em> means the external text will not be interpreted, parsed or interpolated, but simply "copied" inside the calling template.</p>
<p>Please, refer to the documentation for further details (be sure to check your appropriate version of Django in the selector at the lower right part of the page).</p>
<p><a href="https://docs.djangoproject.com/en/dev/ref/templates/builtins/#ssi" rel="nofollow">https://docs.djangoproject.com/en/dev/ref/templates/builtins/#ssi</a></p>
<p>From the documentation:</p>
<blockquote>
<pre><code>ssi
Outputs the contents of a given file into the page.
Like a simple include tag, {% ssi %} includes the contents of another file
-- which must be specified using an absolute path -- in the current page
</code></pre>
</blockquote>
<p>Beware also of the security implications of this technique, and of the required ALLOWED_INCLUDE_ROOTS setting, which must be added to your settings file.</p>
<hr>
<p><strong>Edit (10/Dec/2015)</strong>: As pointed out in the comments, <em>ssi</em> is deprecated since version 1.8. According to the documentation: </p>
<blockquote>
<p>This tag has been deprecated and will be removed in Django 1.10. Use the include tag instead.</p>
</blockquote>
| 0 | 2014-05-13T08:24:55Z | [
"python",
"django",
"django-templates"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 1,410,442 | <p>If you're using JQuery this should do the trick:</p>
<p><strong>Your form</strong></p>
<pre><code>TO_HIDE_ATTRS = {'class': 'hidden'}
class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.TextInput(attrs=TO_HIDE_ATTRS))
</code></pre>
<p><strong>Javascript code to add to your template</strong></p>
<pre><code>$(document).ready(function(){
$('tr:has(.hidden)').hide();
});
</code></pre>
<p>That works if you're rendering your form as a table. If you want to make it work with any kind of form rendering you can do as follows:</p>
<pre><code>$(document).ready(function(){
$('{{ form_field_container }}:has(.hidden)').hide();
});
</code></pre>
<p>And add <code>form_field_container</code> to your template context. An example:</p>
<p>If you render your form like this:</p>
<pre><code> <form>
<span>{{ field.label_tag }} {{ field }}</span>
</form>
</code></pre>
<p>Your context must include:</p>
<pre><code>'form_field_container': 'span'
</code></pre>
<p>You get the idea...</p>
| -16 | 2009-09-11T11:55:51Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 1,627,875 | <p>I think it's simpler to achieve the ":" label omission for HiddenInput widget by modifying <code>class AdminField(object)</code> in <code>contrib/admin/helpers.py</code> from :</p>
<pre><code> if self.is_checkbox:
classes.append(u'vCheckboxLabel')
contents = force_unicode(escape(self.field.label))
else:
contents = force_unicode(escape(self.field.label)) + u':'
</code></pre>
<p>to : </p>
<pre><code> if self.is_checkbox:
classes.append(u'vCheckboxLabel')
contents = force_unicode(escape(self.field.label))
else:
contents = force_unicode(escape(self.field.label))
#MODIFIED 26/10/2009
if self.field.label <> '':
contents += u':'
# END MODIFY
</code></pre>
| 3 | 2009-10-26T23:16:37Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 4,246,248 | <p>I can't believe several people have suggested using jQuery for this...</p>
<p>Is it a case of: when the only tool you know is a hammer everything looks like a nail?</p>
<p>Come on, if you're going to do it from the client-side (instead of fixing the source of the problem in the back-end code) surely the right place to do it would be in CSS?</p>
<p>If you're in the admin site then it's a bit harder but if it's a regular page then it's easy to just omit the whole label from the form template, <a href="http://docs.djangoproject.com/en/dev/topics/forms/#customizing-the-form-template" rel="nofollow">for example</a></p>
<p>If you're in the admin site then you could still override the as_table, as_ul, as_p methods of BaseForm (see django/forms/forms.py) in your GalleryAdminForm class to omit the label of any field where the label is blank (or == ':' as the value may be at this stage of rendering)</p>
<p>(...looking at lines 160-170 of <a href="http://code.djangoproject.com/browser/django/trunk/django/forms/forms.py" rel="nofollow">forms.py</a> it seems like Django 1.2 should properly omit the ':' if the label is blank so I guess you're on an older version?)</p>
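The override described here amounts to one rule: emit the label markup only when the label is non-blank. A standalone sketch of that rule (plain Python with made-up markup, not Django's actual as_p implementation):

```python
def render_row(name, label, widget_html):
    # Emit the <label> (and trailing colon) only when the label is
    # non-blank; a hidden field declared with label='' renders bare.
    if label:
        label_html = '<label for="id_%s">%s:</label> ' % (name, label)
    else:
        label_html = ''
    return '<p>%s%s</p>' % (label_html, widget_html)

print(render_row('order', '', '<input type="hidden">'))
# <p><input type="hidden"></p>
```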
| 33 | 2010-11-22T14:12:59Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 6,498,935 | <p>Check the answer at <a href="http://stackoverflow.com/questions/4999005/create-a-hidden-field-in-django-admin/6498907#6498907">Create a hidden field in django-admin</a>, it can be done without JavaScript by overriding <code>admin/includes/fieldset.html</code> From there, you can inject a CSS class, and do the rest.</p>
| 3 | 2011-06-27T21:03:42Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 12,171,430 | <p>Try:</p>
<pre><code>{% for field in form.visible_fields %}
</code></pre>
| 13 | 2012-08-29T05:29:49Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 12,588,805 | <p>Oraculum has got it right. You shouldn't be cleaning this up on the client side. If it is clutter, then you shouldn't be sending it to the client at all. Building on Oraculum's answer, you should use a custom form template, because you probably still want the hidden values in the form.</p>
<pre><code>{% for field in form.visible_fields %}
<div>
{{ field.errors }}
<span class="filter-label">{{ field.label_tag }}</span><br>
{{ field }}
</div>
{% endfor %}
{% for field in form.hidden_fields %}
<div style="display:none;">{{ field }}</div>
{% endfor %}
</code></pre>
<p>Using a custom form template to control hidden fields is cleaner because it doesn't send extraneous info to the client.</p>
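For context, the visible_fields/hidden_fields calls used in this template simply partition the form's bound fields by whether their widget is hidden; a minimal plain-Python illustration of that split (FakeField is a stand-in, not a Django class):

```python
class FakeField(object):
    def __init__(self, name, is_hidden=False):
        self.name = name
        self.is_hidden = is_hidden  # Django derives this from the widget

def visible_fields(fields):
    return [f for f in fields if not f.is_hidden]

def hidden_fields(fields):
    return [f for f in fields if f.is_hidden]

fields = [FakeField('title'), FakeField('order', is_hidden=True)]
print([f.name for f in visible_fields(fields)])  # ['title']
print([f.name for f in hidden_fields(fields)])   # ['order']
```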
| 30 | 2012-09-25T18:17:05Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 18,235,005 | <p>In theory, you should be able to pass <code>label_suffix</code> into the form constructor. However, the Django admin ignores this.</p>
<p>You've been bitten by two bugs in Django: <a href="https://code.djangoproject.com/ticket/18134" rel="nofollow">#18134 'BoundField.label_tag should include form.label_suffix'</a> (fixed in trunk, should be in 1.6) and to a lesser extent <a href="https://code.djangoproject.com/ticket/11277" rel="nofollow">#11277 Hidden fields in Inlines are displayed as empty rows</a>.</p>
<p>Currently, the best solution is to override the admin fieldset template. Use a <code>HiddenInput</code> for your widget, then override the admin fieldset template (<a href="https://docs.djangoproject.com/en/dev/ref/contrib/admin/#set-up-your-projects-admin-template-directories" rel="nofollow">documented here</a>). Just create a <code>templates/admin/includes/fieldset.html</code> with the following contents:</p>
<pre><code><fieldset class="module aligned {{ fieldset.classes }}">
{% if fieldset.name %}<h2>{{ fieldset.name }}</h2>{% endif %}
{% if fieldset.description %}
<div class="description">{{ fieldset.description|safe }}</div>
{% endif %}
{% for line in fieldset %}
<div class="form-row{% if line.fields|length_is:'1' and line.errors %} errors{% endif %}{% for field in line %}{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% endfor %}">
{% if line.fields|length_is:'1' %}{{ line.errors }}{% endif %}
{% for field in line %}
<div{% if not line.fields|length_is:'1' %} class="field-box{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% if not field.is_readonly and field.errors %} errors{% endif %}"{% endif %}>
{% if not line.fields|length_is:'1' and not field.is_readonly %}{{ field.errors }}{% endif %}
{% if field.is_checkbox %}
{{ field.field }}{{ field.label_tag }}
{% else %}
{# only show the label for visible fields #}
{% if not field.field.is_hidden %}
{{ field.label_tag }}
{% endif %}
{% if field.is_readonly %}
<p>{{ field.contents }}</p>
{% else %}
{{ field.field }}
{% endif %}
{% endif %}
{% if field.field.help_text %}
<p class="help">{{ field.field.help_text|safe }}</p>
{% endif %}
</div>
{% endfor %}
</div>
{% endfor %}
</fieldset>
</code></pre>
| 2 | 2013-08-14T14:50:16Z | [
"python",
"django",
"django-admin"
] |
How do I hide the field label for a HiddenInput widget in Django Admin? | 1,408,940 | <p>I've got a bit of Django form code that looks like this:</p>
<pre><code>class GalleryAdminForm(forms.ModelForm):
auto_id=False
order = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>And that makes the form field go away, but it leaves the label "Order" in the Django admin page. If I use:</p>
<pre><code>order = forms.CharField(widget=forms.HiddenInput(), label='')
</code></pre>
<p>I'm still left with the ":" between where the field and label used to be.</p>
<p>How do I hide the whole thing?!</p>
| 16 | 2009-09-11T04:15:15Z | 23,968,360 | <p>Based upon the solution by Wilfried Hughes, I've changed fieldset.html with a few small improvements.</p>
<p>The code snippet below not only hides the input element: if the fieldset contains only a single element whose input type is set to hidden, it also hides the surrounding div elements, so the hidden field wastes no space in the form.</p>
<pre><code><fieldset class="module aligned {{ fieldset.classes }}">
{% if fieldset.name %}<h2>{{ fieldset.name }}</h2>{% endif %}
{% if fieldset.description %}
<div class="description">{{ fieldset.description|safe }}</div>
{% endif %}
{% for line in fieldset %}
<div class="form-row{% if line.fields|length_is:'1' and line.errors %} errors{% endif %}{% for field in line %}{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% endfor %}"{% if line.fields|length_is:'1' %}{% for field in line %}{% if field.field.is_hidden %} style="display: none"{% endif %}{% endfor %}{% endif %}>
{% if line.fields|length_is:'1' %}{{ line.errors }}{% endif %}
{% for field in line %}
<div{% if not line.fields|length_is:'1' %} class="field-box{% if field.field.name %} field-{{ field.field.name }}{% endif %}{% if not field.is_readonly and field.errors %} errors{% endif %}"{% endif %}{% if field.field.is_hidden %} style="display: none"{% endif %}>
{% if not line.fields|length_is:'1' and not field.is_readonly %}{{ field.errors }}{% endif %}
{% if field.is_checkbox %}
{{ field.field }}{{ field.label_tag }}
{% else %}
{# only show the label for visible fields #}
{% if not field.field.is_hidden %}
{{ field.label_tag }}
{% endif %}
{% if field.is_readonly %}
<p>{{ field.contents }}</p>
{% else %}
{{ field.field }}
{% endif %}
{% endif %}
{% if field.field.help_text %}
<p class="help">{{ field.field.help_text|safe }}</p>
{% endif %}
</div>
{% endfor %}
</div>
{% endfor %}
</fieldset>
</code></pre>
| 2 | 2014-05-31T09:16:48Z | [
"python",
"django",
"django-admin"
] |
Auto-generate form fields for a Form in django | 1,409,192 | <p>I have some models and I want to generate a multi-selection form from this data.
So the form would contain an entry for each category and the choices would be the skills in that category.</p>
<p><strong>models.py</strong></p>
<pre><code>class SkillCategory(models.Model):
name = models.CharField(max_length=50)
class Skill(models.Model):
name = models.CharField(max_length=50)
category = models.ForeignKey(SkillCategory)
</code></pre>
<p>Is there a way to auto-generate the form fields?
I know I can manually add a 'SkillCategory' entry in the form for each SkillCategory, but the reason to have it as a model is so skills and skillcategories can be edited freely.</p>
<p>I want to do something like this:
(I tried this, but didn't get it to work, don't remember the exact error...)</p>
<p><strong>forms.py</strong></p>
<pre><code>class SkillSelectionForm(forms.Form):
def __init__(*args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
for c in SkillCategory.objects.all():
category_skills = [(pk, s.name) for s in c.skill_set.all()]
setattr(self, c.name, forms.MultipleChoiceField(choices=category_skills, widget=forms.CheckboxSelectMultiple))
</code></pre>
<p><hr /></p>
<p><strong>SOLUTION</strong> </p>
<p>This creates a form field entry using the SkillCategory.name and assigns choices as those in Skill. field_name/display_name are used to avoid issues with non-ascii <em>category names</em>.</p>
<p><strong>forms.py</strong></p>
<pre><code>def get_categorized_skills():
skills = {}
for s in Skill.objects.values('pk', 'name', 'category__name').order_by('category__name'):
if s['category__name'] not in skills.keys():
skills[s['category__name']] = []
skills[s['category__name']].append((s['pk'], s['name']))
return skills
class SkillSelectionForm(forms.Form):
def __init__(self, *args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
skills = get_categorized_skills()
for idx, cat in enumerate(skills.keys()):
field_name = u'category-{0}'.format(idx)
display_name = cat
self.fields[field_name] = forms.MultipleChoiceField(choices=skills[cat], widget=forms.CheckboxSelectMultiple, label=display_name)
</code></pre>
| 7 | 2009-09-11T05:51:26Z | 1,409,325 | <p>What you want is a Formset. This will give you a set of rows, each of which maps to a specific Skill.</p>
<p>See the <a href="http://docs.djangoproject.com/en/dev/topics/forms/formsets/" rel="nofollow">Formset documentation</a> and the page specifically on generating <a href="http://docs.djangoproject.com/en/dev/topics/forms/modelforms/#id1" rel="nofollow">formsets for models</a>. </p>
| 1 | 2009-09-11T06:39:16Z | [
"python",
"django",
"django-forms"
] |
Auto-generate form fields for a Form in django | 1,409,192 | <p>I have some models and I want to generate a multi-selection form from this data.
So the form would contain an entry for each category and the choices would be the skills in that category.</p>
<p><strong>models.py</strong></p>
<pre><code>class SkillCategory(models.Model):
name = models.CharField(max_length=50)
class Skill(models.Model):
name = models.CharField(max_length=50)
category = models.ForeignKey(SkillCategory)
</code></pre>
<p>Is there a way to auto-generate the form fields?
I know I can manually add a 'SkillCategory' entry in the form for each SkillCategory, but the reason to have it as a model is so skills and skillcategories can be edited freely.</p>
<p>I want to do something like this:
(I tried this, but didn't get it to work, don't remember the exact error...)</p>
<p><strong>forms.py</strong></p>
<pre><code>class SkillSelectionForm(forms.Form):
def __init__(*args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
for c in SkillCategory.objects.all():
category_skills = [(pk, s.name) for s in c.skill_set.all()]
setattr(self, c.name, forms.MultipleChoiceField(choices=category_skills, widget=forms.CheckboxSelectMultiple))
</code></pre>
<p><hr /></p>
<p><strong>SOLUTION</strong> </p>
<p>This creates a form field entry using the SkillCategory.name and assigns choices as those in Skill. field_name/display_name are used to avoid issues with non-ascii <em>category names</em>.</p>
<p><strong>forms.py</strong></p>
<pre><code>def get_categorized_skills():
skills = {}
for s in Skill.objects.values('pk', 'name', 'category__name').order_by('category__name'):
if s['category__name'] not in skills.keys():
skills[s['category__name']] = []
skills[s['category__name']].append((s['pk'], s['name']))
return skills
class SkillSelectionForm(forms.Form):
def __init__(self, *args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
skills = get_categorized_skills()
for idx, cat in enumerate(skills.keys()):
field_name = u'category-{0}'.format(idx)
display_name = cat
self.fields[field_name] = forms.MultipleChoiceField(choices=skills[cat], widget=forms.CheckboxSelectMultiple, label=display_name)
</code></pre>
| 7 | 2009-09-11T05:51:26Z | 1,409,609 | <p>Okay so you can't set fields like that on forms.Form, for reasons which will become apparent when you see <a href="http://code.djangoproject.com/browser/django/trunk/django/forms/forms.py#L53" rel="nofollow">DeclarativeFieldsMetaclass</a>, the metaclass of forms.Form (but not of forms.BaseForm). A solution which may be overkill in your case but an example of how dynamic form construction can be done, is something like this:</p>
<pre><code>from django.utils.datastructures import SortedDict

# base_fields must map field names to field objects (in order)
base_fields = SortedDict([
    (c.name, forms.MultipleChoiceField(choices=[
        (s.pk, s.name) for s in c.skill_set.all()
    ]))
    for c in SkillCategory.objects.all()
])
SkillSelectionForm = type('SkillSelectionForm', (forms.BaseForm,), {'base_fields': base_fields})
</code></pre>
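The type() pattern can be tried in isolation, without Django; the essential point is that base_fields is a mapping from field names to field objects assembled at class-creation time. A hedged, Django-free sketch (Field and the sample data are stand-ins):

```python
class Field(object):  # stand-in for forms.MultipleChoiceField
    def __init__(self, choices):
        self.choices = choices

def build_form_class(category_choices):
    # category_choices: {category_name: [(pk, skill_name), ...]},
    # e.g. built from the database at startup.
    base_fields = dict(
        (name, Field(choices)) for name, choices in category_choices.items()
    )
    return type('SkillSelectionForm', (object,), {'base_fields': base_fields})

FormClass = build_form_class({'languages': [(1, 'Python'), (2, 'SQL')]})
print(sorted(FormClass.base_fields))                # ['languages']
print(FormClass.base_fields['languages'].choices)   # [(1, 'Python'), (2, 'SQL')]
```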
| 2 | 2009-09-11T08:04:34Z | [
"python",
"django",
"django-forms"
] |
Auto-generate form fields for a Form in django | 1,409,192 | <p>I have some models and I want to generate a multi-selection form from this data.
So the form would contain an entry for each category and the choices would be the skills in that category.</p>
<p><strong>models.py</strong></p>
<pre><code>class SkillCategory(models.Model):
name = models.CharField(max_length=50)
class Skill(models.Model):
name = models.CharField(max_length=50)
category = models.ForeignKey(SkillCategory)
</code></pre>
<p>Is there a way to auto-generate the form fields?
I know I can manually add a 'SkillCategory' entry in the form for each SkillCategory, but the reason to have it as a model is so skills and skillcategories can be edited freely.</p>
<p>I want to do something like this:
(I tried this, but didn't get it to work, don't remember the exact error...)</p>
<p><strong>forms.py</strong></p>
<pre><code>class SkillSelectionForm(forms.Form):
def __init__(*args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
for c in SkillCategory.objects.all():
category_skills = [(pk, s.name) for s in c.skill_set.all()]
setattr(self, c.name, forms.MultipleChoiceField(choices=category_skills, widget=forms.CheckboxSelectMultiple))
</code></pre>
<p><hr /></p>
<p><strong>SOLUTION</strong> </p>
<p>This creates a form field entry using the SkillCategory.name and assigns choices as those in Skill. field_name/display_name are used to avoid issues with non-ascii <em>category names</em>.</p>
<p><strong>forms.py</strong></p>
<pre><code>def get_categorized_skills():
skills = {}
for s in Skill.objects.values('pk', 'name', 'category__name').order_by('category__name'):
if s['category__name'] not in skills.keys():
skills[s['category__name']] = []
skills[s['category__name']].append((s['pk'], s['name']))
return skills
class SkillSelectionForm(forms.Form):
def __init__(self, *args, **kwargs):
super(SkillSelectionForm, self).__init__(*args, **kwargs)
skills = get_categorized_skills()
for idx, cat in enumerate(skills.keys()):
field_name = u'category-{0}'.format(idx)
display_name = cat
self.fields[field_name] = forms.MultipleChoiceField(choices=skills[cat], widget=forms.CheckboxSelectMultiple, label=display_name)
</code></pre>
| 7 | 2009-09-11T05:51:26Z | 1,410,800 | <p>Take a look at creating dynamic forms in Django, from <a href="http://www.b-list.org/weblog/2008/nov/09/dynamic-forms/" rel="nofollow">b-list.org</a> and <a href="http://uswaretech.com/blog/2008/10/dynamic-forms-with-django/" rel="nofollow">uswaretech.com</a>. I've had success using these examples to dynamically create form content from models.</p>
| 1 | 2009-09-11T13:15:57Z | [
"python",
"django",
"django-forms"
] |
Set function signature in Python | 1,409,295 | <p>Suppose I have a generic function f. I want to <strong>programmatically</strong> create a function f2 that behaves the same as f, but has a customised signature.</p>
<p><strong>More detail</strong></p>
<p>Given a list l and a dictionary d I want to be able to:</p>
<ul>
<li>Set the non-keyword arguments of f2 to the strings in l</li>
<li>Set the keyword arguments of f2 to the keys in d and the default values to the values of d</li>
</ul>
<p>ie. Suppose we have</p>
<pre><code>l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
</code></pre>
<p>Then I would want a function with signature:</p>
<pre><code>def f2(x, y, opt=None):
#My code
</code></pre>
<p><strong>A specific use case</strong></p>
<p>This is just a simplified version of my specific use case. I am giving this as an example only.</p>
<p>My actual use case (simplified) is as follows. We have a generic initiation function:</p>
<pre><code>def generic_init(self,*args,**kwargs):
"""Function to initiate a generic object"""
for name, arg in zip(self.__init_args__,args):
setattr(self, name, arg)
for name, default in self.__init_kw_args__.items():
if name in kwargs:
setattr(self, name, kwargs[name])
else:
setattr(self, name, default)
</code></pre>
<p>We want to use this function in a number of classes. In particular, we want to create a function <strong>init</strong> that behaves like generic_init, but has the signature defined by some class variables at <strong>creation time</strong>:</p>
<pre><code>class my_class:
__init_args__=["x", "y"]
__kw_init_args__={"my_opt": None}
__init__=create_initiation_function(my_class, generic_init)
setattr(myclass, "__init__", __init__)
</code></pre>
<p>We want create_initiation_function to create a new function with the signature defined using <strong>init_args</strong> and <strong>kw_init_args</strong>. Is it possible to write create_initiation_function?</p>
<p>Please note:</p>
<ul>
<li>If I just wanted to improve the help, I could set <strong>doc</strong>.</li>
<li>We want to set the function signature on creation. After that, it doesn't need to be changed.</li>
<li>Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init</li>
<li>We want to define create_initiation_function. We don't want to manually specify the new function!</li>
</ul>
<p>Related</p>
<ul>
<li><a href="http://stackoverflow.com/questions/147816/preserving-signatures-of-decorated-functions">Preserving signatures of decorated functions</a>: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value</li>
</ul>
| 15 | 2009-09-11T06:29:05Z | 1,409,336 | <p>Edit 1: Answering new question:</p>
<p>You ask how you can create a function with this signature:</p>
<pre><code>def fun(a, b, opt=None):
pass
</code></pre>
<p>The correct way to do that in Python is thus:</p>
<pre><code>def fun(a, b, opt=None):
pass
</code></pre>
<p>Edit 2: Answering explanation:</p>
<p>"Suppose I have a generic function f. I want to programmatically create a function f2 that behaves the same as f, but has a customised signature."</p>
<pre><code>def f(*args, **kw):
pass
</code></pre>
<p>OK, then f2 looks like so:</p>
<pre><code>def f2(a, b, opt=None):
f(a, b, opt=opt)
</code></pre>
<p>Again, the answer to your question is so trivial, that you obviously want to know something different that what you are asking. You really do need to stop asking abstract questions, and explain your concrete problem.</p>
| -4 | 2009-09-11T06:42:08Z | [
"python",
"function"
] |
Set function signature in Python | 1,409,295 | <p>Suppose I have a generic function f. I want to <strong>programmatically</strong> create a function f2 that behaves the same as f, but has a customised signature.</p>
<p><strong>More detail</strong></p>
<p>Given a list l and a dictionary d I want to be able to:</p>
<ul>
<li>Set the non-keyword arguments of f2 to the strings in l</li>
<li>Set the keyword arguments of f2 to the keys in d and the default values to the values of d</li>
</ul>
<p>ie. Suppose we have</p>
<pre><code>l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
</code></pre>
<p>Then I would want a function with signature:</p>
<pre><code>def f2(x, y, opt=None):
#My code
</code></pre>
<p><strong>A specific use case</strong></p>
<p>This is just a simplified version of my specific use case. I am giving this as an example only.</p>
<p>My actual use case (simplified) is as follows. We have a generic initiation function:</p>
<pre><code>def generic_init(self,*args,**kwargs):
"""Function to initiate a generic object"""
for name, arg in zip(self.__init_args__,args):
setattr(self, name, arg)
for name, default in self.__init_kw_args__.items():
if name in kwargs:
setattr(self, name, kwargs[name])
else:
setattr(self, name, default)
</code></pre>
<p>We want to use this function in a number of classes. In particular, we want to create a function <strong>init</strong> that behaves like generic_init, but has the signature defined by some class variables at <strong>creation time</strong>:</p>
<pre><code>class my_class:
__init_args__=["x", "y"]
__kw_init_args__={"my_opt": None}
__init__=create_initiation_function(my_class, generic_init)
setattr(myclass, "__init__", __init__)
</code></pre>
<p>We want create_initiation_function to create a new function with the signature defined using <strong>init_args</strong> and <strong>kw_init_args</strong>. Is it possible to write create_initiation_function?</p>
<p>Please note:</p>
<ul>
<li>If I just wanted to improve the help, I could set <strong>doc</strong>.</li>
<li>We want to set the function signature on creation. After that, it doesn't need to be changed.</li>
<li>Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init</li>
<li>We want to define create_initiation_function. We don't want to manually specify the new function!</li>
</ul>
<p>Related</p>
<ul>
<li><a href="http://stackoverflow.com/questions/147816/preserving-signatures-of-decorated-functions">Preserving signatures of decorated functions</a>: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value</li>
</ul>
| 15 | 2009-09-11T06:29:05Z | 1,409,496 | <p>For your usecase, having a docstring in the class/function should work -- that will show up in help() okay, and can be set programmatically (func.__doc__ = "stuff").</p>
<p>I can't see any way of setting the actual signature. I would have thought the <a href="http://docs.python.org/library/functools.html">functools module</a> would have done it if it was doable, but it doesn't, at least in py2.5 and py2.6.</p>
<p>You can also raise a TypeError exception if you get bad input.</p>
<p>Hmm, if you don't mind being truly vile, you can use compile()/eval() to do it. If your desired signature is specified by arglist=["foo","bar","baz"], and your actual function is f(*args, **kwargs), you can manage:</p>
<pre><code>argstr = ", ".join(arglist)
fakefunc = "def func(%s):\n return real_func(%s)\n" % (argstr, argstr)
fakefunc_code = compile(fakefunc, "fakesource", "exec")
fakeglobals = {}
eval(fakefunc_code, {"real_func": f}, fakeglobals)
f_with_good_sig = fakeglobals["func"]
help(f) # f(*args, **kwargs)
help(f_with_good_sig) # func(foo, bar, baz)
</code></pre>
<p>Changing the docstring and func_name should get you a complete solution. But, uh, eww...</p>
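Carrying this through to the question's use case, here is one possible create_initiation_function built on the same compile/eval trick (a sketch using the attribute names from the question, not a standard library facility):

```python
def generic_init(self, *args, **kwargs):
    # Same behaviour as the question's generic initiator.
    for name, arg in zip(self.__init_args__, args):
        setattr(self, name, arg)
    for name, default in self.__init_kw_args__.items():
        setattr(self, name, kwargs.get(name, default))

def create_initiation_function(cls, real_init):
    pos = list(cls.__init_args__)
    kw = cls.__init_kw_args__
    # Build a source string whose signature lists the positional names
    # first, then keyword arguments defaulting to the stored values.
    sig = ", ".join(["self"] + pos + ["%s=_defaults[%r]" % (k, k) for k in kw])
    call = ", ".join(["self"] + pos + ["%s=%s" % (k, k) for k in kw])
    src = "def __init__(%s):\n    _real_init(%s)\n" % (sig, call)
    namespace = {"_real_init": real_init, "_defaults": kw}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["__init__"]

class MyClass(object):
    __init_args__ = ["x", "y"]
    __init_kw_args__ = {"opt": None}

MyClass.__init__ = create_initiation_function(MyClass, generic_init)

obj = MyClass(1, 2, opt="z")
print(obj.x, obj.y, obj.opt)  # 1 2 z
```

help(MyClass.__init__) then reports the signature (self, x, y, opt=None) rather than (*args, **kwargs).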
| 10 | 2009-09-11T07:35:49Z | [
"python",
"function"
] |
Set function signature in Python | 1,409,295 | <p>Suppose I have a generic function f. I want to <strong>programmatically</strong> create a function f2 that behaves the same as f, but has a customised signature.</p>
<p><strong>More detail</strong></p>
<p>Given a list l and a dictionary d I want to be able to:</p>
<ul>
<li>Set the non-keyword arguments of f2 to the strings in l</li>
<li>Set the keyword arguments of f2 to the keys in d and the default values to the values of d</li>
</ul>
<p>ie. Suppose we have</p>
<pre><code>l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
</code></pre>
<p>Then I would want a function with signature:</p>
<pre><code>def f2(x, y, opt=None):
#My code
</code></pre>
<p><strong>A specific use case</strong></p>
<p>This is just a simplified version of my specific use case. I am giving this as an example only.</p>
<p>My actual use case (simplified) is as follows. We have a generic initiation function:</p>
<pre><code>def generic_init(self,*args,**kwargs):
"""Function to initiate a generic object"""
for name, arg in zip(self.__init_args__,args):
setattr(self, name, arg)
for name, default in self.__init_kw_args__.items():
if name in kwargs:
setattr(self, name, kwargs[name])
else:
setattr(self, name, default)
</code></pre>
<p>We want to use this function in a number of classes. In particular, we want to create a function <strong>init</strong> that behaves like generic_init, but has the signature defined by some class variables at <strong>creation time</strong>:</p>
<pre><code>class my_class:
__init_args__=["x", "y"]
__kw_init_args__={"my_opt": None}
__init__=create_initiation_function(my_class, generic_init)
setattr(myclass, "__init__", __init__)
</code></pre>
<p>We want create_initiation_function to create a new function with the signature defined using <strong>init_args</strong> and <strong>kw_init_args</strong>. Is it possible to write create_initiation_function?</p>
<p>Please note:</p>
<ul>
<li>If I just wanted to improve the help, I could set <strong>doc</strong>.</li>
<li>We want to set the function signature on creation. After that, it doesn't need to be changed.</li>
<li>Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init</li>
<li>We want to define create_initiation_function. We don't want to manually specify the new function!</li>
</ul>
<p>Related</p>
<ul>
<li><a href="http://stackoverflow.com/questions/147816/preserving-signatures-of-decorated-functions">Preserving signatures of decorated functions</a>: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value</li>
</ul>
| 15 | 2009-09-11T06:29:05Z | 1,409,895 | <p>You can't do this with live code.</p>
<p>That is, you seem to be wanting to take an actual, live function that looks like this:</p>
<pre><code>def f(*args, **kwargs):
print args[0]
</code></pre>
<p>and change it to one like this:</p>
<pre><code>def f(a):
print a
</code></pre>
<p>The reason this can't be done--at least without modifying actual Python bytecode--is because these compile differently.</p>
<p>The former results in a function that receives two parameters: a list and a dict, and the code you're writing operates on that list and dict. The second results in a function that receives one parameter, and which is accessed as a local variable directly. If you changed the function "signature", so to speak, it'd result in a function like this:</p>
<pre><code>def f(a):
print a[0]
</code></pre>
<p>which obviously wouldn't work.</p>
<p>If you want more detail (though it doesn't really help you), a function that takes *args or **kwargs has one or two bits set in <code>f.func_code.co_flags</code>; you can examine this yourself. The function that takes a regular parameter has <code>f.func_code.co_argcount</code> set to 1; the *args version is 0. This is what Python uses to figure out how to set up the function's stack frame when it's called, to check parameters, etc.</p>
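Those flag bits and co_argcount can be inspected directly; the CO_VARARGS/CO_VARKEYWORDS constants are exposed by the inspect module (shown here with Python 3's __code__ attribute, which is func_code in Python 2):

```python
import inspect

def varargs_func(*args, **kwargs):
    pass

def plain_func(a):
    pass

code = varargs_func.__code__  # func_code in Python 2
print(bool(code.co_flags & inspect.CO_VARARGS))      # True
print(bool(code.co_flags & inspect.CO_VARKEYWORDS))  # True
print(code.co_argcount, plain_func.__code__.co_argcount)  # 0 1
```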
<p>If you want to play around with modifying the function directly--if only to convince yourself that it won't work--see <a href="http://stackoverflow.com/questions/1360721/">this answer</a> for how to create a code object and live function from an existing one to modify bits of it. (This stuff is documented somewhere, but I can't find it; it's nowhere in the types module docs...)</p>
<p>That said, you <em>can</em> dynamically change the docstring of a function. Just assign to <code>func.__doc__</code>. Be sure to only do this at load time (from the global context or--most likely--a decorator); if you do it later on, tools that load the module to examine docstrings will never see it.</p>
| 0 | 2009-09-11T09:16:59Z | [
"python",
"function"
] |
Set function signature in Python | 1,409,295 | <p>Suppose I have a generic function f. I want to <strong>programmatically</strong> create a function f2 that behaves the same as f, but has a customised signature.</p>
<p><strong>More detail</strong></p>
<p>Given a list l and a dictionary d I want to be able to:</p>
<ul>
<li>Set the non-keyword arguments of f2 to the strings in l</li>
<li>Set the keyword arguments of f2 to the keys in d and the default values to the values of d</li>
</ul>
<p>ie. Suppose we have</p>
<pre><code>l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
</code></pre>
<p>Then I would want a function with signature:</p>
<pre><code>def f2(x, y, opt=None):
#My code
</code></pre>
<p><strong>A specific use case</strong></p>
<p>This is just a simplified version of my specific use case. I am giving this as an example only.</p>
<p>My actual use case (simplified) is as follows. We have a generic initiation function:</p>
<pre><code>def generic_init(self,*args,**kwargs):
"""Function to initiate a generic object"""
for name, arg in zip(self.__init_args__,args):
setattr(self, name, arg)
for name, default in self.__init_kw_args__.items():
if name in kwargs:
setattr(self, name, kwargs[name])
else:
setattr(self, name, default)
</code></pre>
<p>We want to use this function in a number of classes. In particular, we want to create a function <strong>init</strong> that behaves like generic_init, but has the signature defined by some class variables at <strong>creation time</strong>:</p>
<pre><code>class my_class:
__init_args__=["x", "y"]
__kw_init_args__={"my_opt": None}
__init__=create_initiation_function(my_class, generic_init)
setattr(myclass, "__init__", __init__)
</code></pre>
<p>We want create_initiation_function to create a new function with the signature defined using <code>__init_args__</code> and <code>__kw_init_args__</code>. Is it possible to write create_initiation_function?</p>
<p>Please note:</p>
<ul>
<li>If I just wanted to improve the help, I could set <code>__doc__</code>.</li>
<li>We want to set the function signature on creation. After that, it doesn't need to be changed.</li>
<li>Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init</li>
<li>We want to define create_initiation_function. We don't want to manually specify the new function!</li>
</ul>
<p>Related</p>
<ul>
<li><a href="http://stackoverflow.com/questions/147816/preserving-signatures-of-decorated-functions">Preserving signatures of decorated functions</a>: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value</li>
</ul>
| 15 | 2009-09-11T06:29:05Z | 1,410,151 | <p>"We want create_initiation_function to change the signature"</p>
<p>Please don't do this.</p>
<p>"We want to use this function in a number of classes"</p>
<p>Please use ordinary inheritance. </p>
<p>There's no value in having the signature "changed" at run time.</p>
<p>You're creating a maintenance nightmare. No one else will ever bother to figure out what you're doing. They'll simply rip it out and replace it with inheritance.</p>
<p>Do this instead. It's simple and obvious and makes your generic init available in all subclasses in an obvious, simple, Pythonic way.</p>
<pre><code>class Super( object ):
    def __init__( self, *args, **kwargs ):
        # the generic __init__ that we want every subclass to use
        pass

class SomeSubClass( Super ):
    def __init__( self, this, that, **kwdefaults ):
        super( SomeSubClass, self ).__init__( this, that, **kwdefaults )

class AnotherSubClass( Super ):
    def __init__( self, x, y, **kwdefaults ):
        super( AnotherSubClass, self ).__init__( x, y, **kwdefaults )
</code></pre>
| 2 | 2009-09-11T10:23:29Z | [
"python",
"function"
] |
Set function signature in Python | 1,409,295 | <p>Suppose I have a generic function f. I want to <strong>programmatically</strong> create a function f2 that behaves the same as f, but has a customised signature.</p>
<p><strong>More detail</strong></p>
<p>Given a list l and a dictionary d I want to be able to:</p>
<ul>
<li>Set the non-keyword arguments of f2 to the strings in l</li>
<li>Set the keyword arguments of f2 to the keys in d and the default values to the values of d</li>
</ul>
<p>i.e. Suppose we have</p>
<pre><code>l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
</code></pre>
<p>Then I would want a function with signature:</p>
<pre><code>def f2(x, y, opt=None):
#My code
</code></pre>
<p><strong>A specific use case</strong></p>
<p>This is just a simplified version of my specific use case. I am giving this as an example only.</p>
<p>My actual use case (simplified) is as follows. We have a generic initiation function:</p>
<pre><code>def generic_init(self,*args,**kwargs):
"""Function to initiate a generic object"""
for name, arg in zip(self.__init_args__,args):
setattr(self, name, arg)
for name, default in self.__init_kw_args__.items():
if name in kwargs:
setattr(self, name, kwargs[name])
else:
setattr(self, name, default)
</code></pre>
<p>We want to use this function in a number of classes. In particular, we want to create a function <code>__init__</code> that behaves like generic_init, but has the signature defined by some class variables at <strong>creation time</strong>:</p>
<pre><code>class my_class:
__init_args__=["x", "y"]
__kw_init_args__={"my_opt": None}
__init__=create_initiation_function(my_class, generic_init)
setattr(myclass, "__init__", __init__)
</code></pre>
<p>We want create_initiation_function to create a new function with the signature defined using <code>__init_args__</code> and <code>__kw_init_args__</code>. Is it possible to write create_initiation_function?</p>
<p>Please note:</p>
<ul>
<li>If I just wanted to improve the help, I could set <code>__doc__</code>.</li>
<li>We want to set the function signature on creation. After that, it doesn't need to be changed.</li>
<li>Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init</li>
<li>We want to define create_initiation_function. We don't want to manually specify the new function!</li>
</ul>
<p>Related</p>
<ul>
<li><a href="http://stackoverflow.com/questions/147816/preserving-signatures-of-decorated-functions">Preserving signatures of decorated functions</a>: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value</li>
</ul>
| 15 | 2009-09-11T06:29:05Z | 1,410,320 | <p>Maybe I didn't understand the problem well, but if it's about keeping the same behavior while changing the function signature, then you can do something like:</p>
<pre><code># define a function
def my_func(name, age) :
print "I am %s and I am %s" % (name, age)
# label the function with a backup name
save_func = my_func
# rewrite the function with a different signature
def my_func(age, name) :
# use the backup name to use the old function and keep the old behavior
save_func(name, age)
# you can use the new signature
my_func(35, "Bob")
</code></pre>
<p>This outputs :</p>
<pre><code>I am Bob and I am 35
</code></pre>
| 0 | 2009-09-11T11:18:34Z | [
"python",
"function"
] |
Set function signature in Python | 1,409,295 | <p>Suppose I have a generic function f. I want to <strong>programmatically</strong> create a function f2 that behaves the same as f, but has a customised signature.</p>
<p><strong>More detail</strong></p>
<p>Given a list l and a dictionary d I want to be able to:</p>
<ul>
<li>Set the non-keyword arguments of f2 to the strings in l</li>
<li>Set the keyword arguments of f2 to the keys in d and the default values to the values of d</li>
</ul>
<p>i.e. Suppose we have</p>
<pre><code>l=["x", "y"]
d={"opt":None}
def f(*args, **kwargs):
#My code
</code></pre>
<p>Then I would want a function with signature:</p>
<pre><code>def f2(x, y, opt=None):
#My code
</code></pre>
<p><strong>A specific use case</strong></p>
<p>This is just a simplified version of my specific use case. I am giving this as an example only.</p>
<p>My actual use case (simplified) is as follows. We have a generic initiation function:</p>
<pre><code>def generic_init(self,*args,**kwargs):
"""Function to initiate a generic object"""
for name, arg in zip(self.__init_args__,args):
setattr(self, name, arg)
for name, default in self.__init_kw_args__.items():
if name in kwargs:
setattr(self, name, kwargs[name])
else:
setattr(self, name, default)
</code></pre>
<p>We want to use this function in a number of classes. In particular, we want to create a function <code>__init__</code> that behaves like generic_init, but has the signature defined by some class variables at <strong>creation time</strong>:</p>
<pre><code>class my_class:
__init_args__=["x", "y"]
__kw_init_args__={"my_opt": None}
__init__=create_initiation_function(my_class, generic_init)
setattr(myclass, "__init__", __init__)
</code></pre>
<p>We want create_initiation_function to create a new function with the signature defined using <code>__init_args__</code> and <code>__kw_init_args__</code>. Is it possible to write create_initiation_function?</p>
<p>Please note:</p>
<ul>
<li>If I just wanted to improve the help, I could set <code>__doc__</code>.</li>
<li>We want to set the function signature on creation. After that, it doesn't need to be changed.</li>
<li>Instead of creating a function like generic_init, but with a different signature we could create a new function with the desired signature that just calls generic_init</li>
<li>We want to define create_initiation_function. We don't want to manually specify the new function!</li>
</ul>
<p>Related</p>
<ul>
<li><a href="http://stackoverflow.com/questions/147816/preserving-signatures-of-decorated-functions">Preserving signatures of decorated functions</a>: This is how to preserve a signature when decorating a function. We need to be able to set the signature to an arbitrary value</li>
</ul>
| 15 | 2009-09-11T06:29:05Z | 33,112,180 | <p>From <a href="https://www.python.org/dev/peps/pep-0362/#visualizing-callable-objects-signature" rel="nofollow">PEP-0362</a>, there actually does appear to be a way to set the signature in py3.3+, using the <code>fn.__signature__</code> attribute:</p>
<pre><code>from functools import wraps
from inspect import signature

def shared_vars(*shared_args):
    """Decorator factory that defines shared variables that are
    passed to every invocation of the function"""
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            full_args = shared_args + args
            return f(*full_args, **kwargs)
        # Override signature
        sig = signature(f)
        sig = sig.replace(parameters=tuple(sig.parameters.values())[1:])
        wrapper.__signature__ = sig
        return wrapper
    return decorator
</code></pre>
<p>Then:</p>
<pre><code>>>> @shared_vars({"myvar": "myval"})
... def example(_state, a, b, c):
...     return _state, a, b, c
>>> example(1, 2, 3)
({'myvar': 'myval'}, 1, 2, 3)
>>> str(signature(example))
'(a, b, c)'
</code></pre>
<p>Note: the PEP is not exactly right; Signature.replace moved the params from a positional arg to a kw-only arg.</p>
| 3 | 2015-10-13T20:38:16Z | [
"python",
"function"
] |
Python App Engine uploaded image content-type | 1,409,377 | <p>I know that I can accept image uploads by having a form that POSTs to App Engine like so:</p>
<pre><code><form action="/upload_received" enctype="multipart/form-data" method="post">
<div><input type="file" name="img"/></div>
<div><input type="submit" value="Upload Image"></div>
</form>
</code></pre>
<p>Then in the Python code I can do something like</p>
<pre><code>image = self.request.get("img")
</code></pre>
<p>But how can I figure out what the content-type of this image should be when later showing it to the user? It seems the most robust way would be to figure this out from the image data itself, but how to get that easily? I did not see anything suitable in the google.appengine.api images package.</p>
<p>Should I just look for the magic image headers in my own code, or is there already a method for that somewhere?</p>
<p>Edit:</p>
<p>Here's the simplistic solution I ended up using, seems to work well enough for my purposes and avoids having to store the image type as a separate field in the data store:</p>
<pre><code># Given an image, returns the mime type or None if could not detect.
def detect_mime_from_image_data(self, image):
if image[1:4] == 'PNG': return 'image/png'
if image[0:3] == 'GIF': return 'image/gif'
if image[6:10] == 'JFIF': return 'image/jpeg'
return None
</code></pre>
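<p>For what it's worth, the offsets can be exercised standalone (the <code>self</code> parameter is dropped here, and the sample headers are hand-built); note that the JFIF test will miss Exif-flavoured JPEGs, whose marker differs:</p>

```python
def detect_mime_from_image_data(image):
    # PNG: "PNG" at offset 1; GIF: "GIF" at offset 0;
    # JPEG (JFIF flavour): "JFIF" at offset 6.
    if image[1:4] == 'PNG': return 'image/png'
    if image[0:3] == 'GIF': return 'image/gif'
    if image[6:10] == 'JFIF': return 'image/jpeg'
    return None

print(detect_mime_from_image_data('\x89PNG\r\n\x1a\n...'))            # image/png
print(detect_mime_from_image_data('GIF89a...'))                        # image/gif
print(detect_mime_from_image_data('\xff\xd8\xff\xe0\x00\x10JFIF...'))  # image/jpeg
```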
| 4 | 2009-09-11T06:53:30Z | 1,409,416 | <p>Based on my research, browsers, except for Internet Explorer (version 6 at least), determine the file mime type by using the file's extension. Given that you want image mime types, you could use a simple Python dictionary in order to achieve this.</p>
<p>Unfortunately I don't know of any method in Python that tries to guess the image type by reading some magic bytes (the way <a href="http://www.php.net/manual/en/book.fileinfo.php" rel="nofollow">fileinfo</a> does in PHP). Maybe you could apply the EAFP (Easier to Ask Forgiveness than Permission) principle with the Google appengine image API.</p>
<p>Yes, it appears that the image API does not tell you the type of image you've loaded. What I'd do in this case is to build that Python dictionary to map file extensions to image mime types and then try to load the image while expecting a <code>NotImageError()</code> exception. If everything goes well, then I assume the mime type was OK.</p>
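<p>A minimal sketch of that extension dictionary (the names are invented; in real App Engine code the load-and-catch-<code>NotImageError</code> step would use the images API, which is omitted here):</p>

```python
import os

# Illustrative mapping from file extension to image mime type.
EXT_TO_MIME = {
    '.png': 'image/png',
    '.gif': 'image/gif',
    '.jpg': 'image/jpeg',
    '.jpeg': 'image/jpeg',
}

def guess_image_mime(filename):
    ext = os.path.splitext(filename)[1].lower()
    return EXT_TO_MIME.get(ext)  # None when we cannot tell

print(guess_image_mime('photo.JPG'))    # image/jpeg
print(guess_image_mime('archive.zip'))  # None
```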
| 0 | 2009-09-11T07:09:31Z | [
"python",
"image",
"google-app-engine",
"upload",
"multipart"
] |
Python App Engine uploaded image content-type | 1,409,377 | <p>I know that I can accept image uploads by having a form that POSTs to App Engine like so:</p>
<pre><code><form action="/upload_received" enctype="multipart/form-data" method="post">
<div><input type="file" name="img"/></div>
<div><input type="submit" value="Upload Image"></div>
</form>
</code></pre>
<p>Then in the Python code I can do something like</p>
<pre><code>image = self.request.get("img")
</code></pre>
<p>But how can I figure out what the content-type of this image should be when later showing it to the user? It seems the most robust way would be to figure this out from the image data itself, but how to get that easily? I did not see anything suitable in the google.appengine.api images package.</p>
<p>Should I just look for the magic image headers in my own code, or is there already a method for that somewhere?</p>
<p>Edit:</p>
<p>Here's the simplistic solution I ended up using, seems to work well enough for my purposes and avoids having to store the image type as a separate field in the data store:</p>
<pre><code># Given an image, returns the mime type or None if could not detect.
def detect_mime_from_image_data(self, image):
if image[1:4] == 'PNG': return 'image/png'
if image[0:3] == 'GIF': return 'image/gif'
if image[6:10] == 'JFIF': return 'image/jpeg'
return None
</code></pre>
| 4 | 2009-09-11T06:53:30Z | 1,409,468 | <p>Instead of using self.request.get(fieldname), use self.request.POST[fieldname]. This returns a cgi.FieldStorage object (see the Python library docs for details), which has 'filename', 'type' and 'value' attributes.</p>
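<p><code>cgi.FieldStorage</code> does the multipart parsing for you; purely to illustrate where its <code>filename</code> and <code>type</code> attributes come from, here is the shape of such a body parsed with the stdlib email parser (the boundary and file names are made up):</p>

```python
from email.parser import Parser

# A minimal multipart/form-data body, as a browser would POST it.
body = (
    'Content-Type: multipart/form-data; boundary=XX\n'
    '\n'
    '--XX\n'
    'Content-Disposition: form-data; name="img"; filename="cat.png"\n'
    'Content-Type: image/png\n'
    '\n'
    'PNGDATA\n'
    '--XX--\n'
)

msg = Parser().parsestr(body)
part = msg.get_payload()[0]        # the uploaded "img" field
print(part.get_filename())         # cat.png
print(part.get_content_type())     # image/png
```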
| 6 | 2009-09-11T07:27:44Z | [
"python",
"image",
"google-app-engine",
"upload",
"multipart"
] |
Python App Engine uploaded image content-type | 1,409,377 | <p>I know that I can accept image uploads by having a form that POSTs to App Engine like so:</p>
<pre><code><form action="/upload_received" enctype="multipart/form-data" method="post">
<div><input type="file" name="img"/></div>
<div><input type="submit" value="Upload Image"></div>
</form>
</code></pre>
<p>Then in the Python code I can do something like</p>
<pre><code>image = self.request.get("img")
</code></pre>
<p>But how can I figure out what the content-type of this image should be when later showing it to the user? It seems the most robust way would be to figure this out from the image data itself, but how to get that easily? I did not see anything suitable in the google.appengine.api images package.</p>
<p>Should I just look for the magic image headers in my own code, or is there already a method for that somewhere?</p>
<p>Edit:</p>
<p>Here's the simplistic solution I ended up using, seems to work well enough for my purposes and avoids having to store the image type as a separate field in the data store:</p>
<pre><code># Given an image, returns the mime type or None if could not detect.
def detect_mime_from_image_data(self, image):
if image[1:4] == 'PNG': return 'image/png'
if image[0:3] == 'GIF': return 'image/gif'
if image[6:10] == 'JFIF': return 'image/jpeg'
return None
</code></pre>
| 4 | 2009-09-11T06:53:30Z | 1,495,810 | <p>Try the python mimetypes module, it will guess the content type and encoding for you,</p>
<p>e.g.</p>
<pre><code>>>> import mimetypes
>>> mimetypes.guess_type("/home/sean/desktop/comedy/30seconds.mp4")
('video/mp4', None)
</code></pre>
| 2 | 2009-09-30T02:14:28Z | [
"python",
"image",
"google-app-engine",
"upload",
"multipart"
] |
Python code readability | 1,409,821 | <p>I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class <em>Host</em>:</p>
<pre><code>class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
</code></pre>
<p>I don't understand from this definition what <em>network_interface</em> should be. Is it a <em>string</em>, like "<em>eth0</em>", or is it an instance of a class <em>NetworkInterface</em>? The only way I can think of to solve this is documenting the code with a "<em>docstring</em>". Something like this:</p>
<pre><code>class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
</code></pre>
<p>Or maybe there are naming conventions for things like that?</p>
| 11 | 2009-09-11T08:58:26Z | 1,409,872 | <p>The docstring conventions are at <a href="http://ftp.python.org/dev/peps/pep-0257/" rel="nofollow">PEP 257</a>. </p>
<p>The example there follows this format for specifying arguments, you can add the types if they matter:</p>
<pre><code>def complex(real=0.0, imag=0.0):
"""Form a complex number.
Keyword arguments:
real -- the real part (default 0.0)
imag -- the imaginary part (default 0.0)
"""
if imag == 0.0 and real == 0.0: return complex_zero
...
</code></pre>
<p>There was also a rejected PEP for docstrings for attributes ( rather than constructor arguments ). </p>
| 10 | 2009-09-11T09:12:21Z | [
"python"
] |
Python code readability | 1,409,821 | <p>I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class <em>Host</em>:</p>
<pre><code>class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
</code></pre>
<p>I don't understand from this definition what <em>network_interface</em> should be. Is it a <em>string</em>, like "<em>eth0</em>", or is it an instance of a class <em>NetworkInterface</em>? The only way I can think of to solve this is documenting the code with a "<em>docstring</em>". Something like this:</p>
<pre><code>class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
</code></pre>
<p>Or maybe there are naming conventions for things like that?</p>
| 11 | 2009-09-11T08:58:26Z | 1,409,885 | <p>The most pythonic solution is to document with examples. If possible, state what operations an object must support to be acceptable, rather than a specific type.</p>
<pre><code>class Host(object):
    def __init__(self, name, network_interface):
        """Initialise host with given name and network_interface.

        network_interface -- must support the same operations as NetworkInterface

        >>> network_interface = NetworkInterface()
        >>> host = Host("my_host", network_interface)
        """
        ...
</code></pre>
<p>At this point, hook your source up to <a href="http://docs.python.org/library/doctest.html" rel="nofollow">doctest</a> to make sure your doc examples continue to work in future.</p>
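<p>Hooking the module up to doctest, as suggested, can be as small as this sketch (the <code>double</code> function is a stand-in):</p>

```python
import doctest
import sys

def double(x):
    """Return twice x.

    >>> double(21)
    42
    """
    return 2 * x

# Runs every >>> example found in this module's docstrings.
results = doctest.testmod(sys.modules[__name__], verbose=False)
print(results.failed)  # 0 when all examples pass
```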
| 8 | 2009-09-11T09:14:46Z | [
"python"
] |
Python code readability | 1,409,821 | <p>I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class <em>Host</em>:</p>
<pre><code>class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
</code></pre>
<p>I don't understand from this definition what <em>network_interface</em> should be. Is it a <em>string</em>, like "<em>eth0</em>", or is it an instance of a class <em>NetworkInterface</em>? The only way I can think of to solve this is documenting the code with a "<em>docstring</em>". Something like this:</p>
<pre><code>class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
</code></pre>
<p>Or maybe there are naming conventions for things like that?</p>
| 11 | 2009-09-11T08:58:26Z | 1,409,916 | <p>Personally I found it very useful to use <a href="http://www.logilab.org/857" rel="nofollow">pylint</a> to validate my code.</p>
<p>If you follow pylint's suggestions, your code almost automatically becomes more readable:
you will improve your Python writing skills and respect naming conventions. You can also define your own naming conventions, and so on. It's very useful, especially for a Python beginner.</p>
<p>I suggest you use it.</p>
| 4 | 2009-09-11T09:21:32Z | [
"python"
] |
Python code readability | 1,409,821 | <p>I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class <em>Host</em>:</p>
<pre><code>class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
</code></pre>
<p>I don't understand from this definition what <em>network_interface</em> should be. Is it a <em>string</em>, like "<em>eth0</em>", or is it an instance of a class <em>NetworkInterface</em>? The only way I can think of to solve this is documenting the code with a "<em>docstring</em>". Something like this:</p>
<pre><code>class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
</code></pre>
<p>Or maybe there are naming conventions for things like that?</p>
| 11 | 2009-09-11T08:58:26Z | 1,410,204 | <p>Python, though not as overtly typed as C or Java, is still typed and will throw exceptions if you're doing things with types that simply do not play nice together.</p>
<p>To that end, if you're concerned about your code being used correctly, maintained correctly, etc. simply use docstrings, comments, or even more explicit variable names to indicate what the type should be.</p>
<p>Even better yet, include code that will allow it to handle whichever type it may be passed as long as it yields a usable result.</p>
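<p>A sketch of that last idea for the question's Host class (the <code>NetworkInterface</code> stand-in here is invented for illustration):</p>

```python
class NetworkInterface(object):
    # Stand-in for the NetworkInterface class from the question.
    def __init__(self, name):
        self.name = name

class Host(object):
    def __init__(self, name, network_interface):
        self.name = name
        # Accept either an interface object or a plain name like "eth0".
        if isinstance(network_interface, str):
            network_interface = NetworkInterface(network_interface)
        self.network_interface = network_interface

print(Host("box", "eth0").network_interface.name)                    # eth0
print(Host("box", NetworkInterface("eth1")).network_interface.name)  # eth1
```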
| 2 | 2009-09-11T10:41:55Z | [
"python"
] |
Python code readability | 1,409,821 | <p>I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class <em>Host</em>:</p>
<pre><code>class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
</code></pre>
<p>I don't understand from this definition what <em>network_interface</em> should be. Is it a <em>string</em>, like "<em>eth0</em>", or is it an instance of a class <em>NetworkInterface</em>? The only way I can think of to solve this is documenting the code with a "<em>docstring</em>". Something like this:</p>
<pre><code>class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
</code></pre>
<p>Or maybe there are naming conventions for things like that?</p>
| 11 | 2009-09-11T08:58:26Z | 1,410,208 | <p>Using dynamic languages will teach you something about static languages: all the help you got from the static language that you now miss in the dynamic language, it wasn't all that helpful.</p>
<p>To use your example, in a static language, you'd know that the parameter was a string, and in Python you don't. So in Python you write a docstring. And while you're writing it, you realize you had more to say about it than, "it's a string". You need to say what data is in the string, and what format it should have, and what the default is, and something about error conditions. </p>
<p>And then you realize you should have written all that down for your static language as well. Sure, Java would force you know that it was a string, but there's all these other details that need to be specified, and you have to manually do that work in any language.</p>
| 18 | 2009-09-11T10:43:07Z | [
"python"
] |
Python code readability | 1,409,821 | <p>I have programming experience with statically typed languages. Now, writing code in Python, I feel difficulties with its readability. Let's say I have a class <em>Host</em>:</p>
<pre><code>class Host(object):
def __init__(self, name, network_interface):
self.name = name
self.network_interface = network_interface
</code></pre>
<p>I don't understand from this definition what <em>network_interface</em> should be. Is it a <em>string</em>, like "<em>eth0</em>", or is it an instance of a class <em>NetworkInterface</em>? The only way I can think of to solve this is documenting the code with a "<em>docstring</em>". Something like this:</p>
<pre><code>class Host(object):
''' Attributes:
@name: a string
@network_interface: an instance of class NetworkInterface'''
</code></pre>
<p>Or maybe there are naming conventions for things like that?</p>
| 11 | 2009-09-11T08:58:26Z | 1,414,678 | <p>One benefit of static typing is that types are a form of documentation. When programming in Python, you can document more flexibly and fluently. Of course in your example you want to say that network_interface should implement NetworkInterface, but in many cases the type is obvious from the context, variable name, or by convention, and in these cases by omitting the obvious you can produce more readable code. Common is to describe the meaning of a parameter and implicitly giving the type.</p>
<p>For example:</p>
<pre><code>def Bar(foo, count):
"""Bar the foo the given number of times."""
...
</code></pre>
<p>This describes the function tersely and precisely. What foo and bar mean will be obvious from context, and that count is a (positive) integer is implicit.</p>
<p>For your example, I'd just mention the type in the document string:</p>
<pre><code>"""Create a named host on the given NetworkInterface."""
</code></pre>
<p>This is shorter, more readable, and contains <em>more</em> information than a listing of the types.</p>
| 1 | 2009-09-12T08:55:56Z | [
"python"
] |
Borg pattern problem when used in two different modules | 1,409,917 | <p>I am using the Borg pattern with mutual inclusion of modules. See the example code (not the real code but it shows the problem) below. In this case, I have two different Borgs because the class names (and I guess the class) are seen as different by the interpreter. </p>
<p>Is there a way to use the Borg in that case without reworking the module architecture?</p>
<p>Module borg.py</p>
<pre>
import borg2
class Borg:
_we_are_one = {}
def __init__(self):
self.__dict__ = Borg._we_are_one
try:
self.name
except AttributeError:
self.name = "?"
print self.__class__, id(self.__dict__)
def fct_ab():
a = Borg()
a.name = "Bjorn"
b = Borg()
print b.name
if __name__ == "__main__":
fct_ab()
borg2.fct_c()
</pre>
<p>Module borg2.py</p>
<pre>
import borg
def fct_c():
c = borg.Borg()
print c.name
</pre>
<p>The result is</p>
<pre>
__main__.Borg 40106720
__main__.Borg 40106720
Bjorn
borg.Borg 40106288
?
</pre>
<p>EDIT:
In order to clarify my problem:
Why does Python consider <code>__main__.Borg</code> and <code>borg.Borg</code> as two different classes?</p>
| 0 | 2009-09-11T09:21:49Z | 1,409,990 | <p>It's not the class names that are the problem. I'm not entirely sure why Python sees the Borg class and the borg.Borg class as different; perhaps it's because you run this from <code>__main__</code>. I think Python does not realize that <code>__main__</code> and borg are the same module.</p>
<p>The solution is easy. Change fct_ab to:</p>
<pre><code>def fct_ab():
import borg
a = borg.Borg()
a.name = "Bjorn"
b = borg.Borg()
print b.name
</code></pre>
<p>This solves the problem.</p>
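<p>The shared-state behaviour itself is easy to check in one module, independent of the import issue (a trimmed-down Borg):</p>

```python
class Borg(object):
    _we_are_one = {}

    def __init__(self):
        # Every instance shares a single attribute dictionary.
        self.__dict__ = Borg._we_are_one

a = Borg()
a.name = "Bjorn"
b = Borg()
print(b.name)                    # Bjorn
print(a.__dict__ is b.__dict__)  # True
```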
| 1 | 2009-09-11T09:40:17Z | [
"python"
] |
Borg pattern problem when used in two different modules | 1,409,917 | <p>I am using the Borg pattern with mutual inclusion of modules. See the example code (not the real code but it shows the problem) below. In this case, I have two different Borgs because the class names (and I guess the class) are seen as different by the interpreter. </p>
<p>Is there a way to use the Borg in that case without reworking the module architecture?</p>
<p>Module borg.py</p>
<pre>
import borg2
class Borg:
_we_are_one = {}
def __init__(self):
self.__dict__ = Borg._we_are_one
try:
self.name
except AttributeError:
self.name = "?"
print self.__class__, id(self.__dict__)
def fct_ab():
a = Borg()
a.name = "Bjorn"
b = Borg()
print b.name
if __name__ == "__main__":
fct_ab()
borg2.fct_c()
</pre>
<p>Module borg2.py</p>
<pre>
import borg
def fct_c():
c = borg.Borg()
print c.name
</pre>
<p>The result is</p>
<pre>
__main__.Borg 40106720
__main__.Borg 40106720
Bjorn
borg.Borg 40106288
?
</pre>
<p>EDIT:
In order to clarify my problem:
Why does Python consider <code>__main__.Borg</code> and <code>borg.Borg</code> as two different classes?</p>
| 0 | 2009-09-11T09:21:49Z | 1,409,995 | <p>The problem only occurs in your main function. Move that code
to its own file and everything is as you'd expect. This code</p>
<pre><code>import borg
import borg2
if __name__ == "__main__":
borg.fct_ab()
borg2.fct_c()
</code></pre>
<p>delivers this output:</p>
<pre><code>borg.Borg 10438672
borg.Borg 10438672
Bjorn
borg.Borg 10438672
Bjorn
</code></pre>
| 3 | 2009-09-11T09:41:02Z | [
"python"
] |
Borg pattern problem when used in two different modules | 1,409,917 | <p>I am using the Borg pattern with mutual inclusion of modules. See the example code (not the real code but it shows the problem) below. In this case, I have two different Borgs because the class names (and I guess the class) are seen as different by the interpreter. </p>
<p>Is there a way to use the Borg in that case without reworking the module architecture?</p>
<p>Module borg.py</p>
<pre>
import borg2
class Borg:
_we_are_one = {}
def __init__(self):
self.__dict__ = Borg._we_are_one
try:
self.name
except AttributeError:
self.name = "?"
print self.__class__, id(self.__dict__)
def fct_ab():
a = Borg()
a.name = "Bjorn"
b = Borg()
print b.name
if __name__ == "__main__":
fct_ab()
borg2.fct_c()
</pre>
<p>Module borg2.py</p>
<pre>
import borg
def fct_c():
c = borg.Borg()
print c.name
</pre>
<p>The result is</p>
<pre>
__main__.Borg 40106720
__main__.Borg 40106720
Bjorn
borg.Borg 40106288
?
</pre>
<p>EDIT:
In order to clarify my problem:
Why does Python consider <code>__main__.Borg</code> and <code>borg.Borg</code> as two different classes?</p>
| 0 | 2009-09-11T09:21:49Z | 1,410,391 | <p>I've fixed the issue in my real application by fixing an error in the import.</p>
<p>In fact, I have two different modules using the same 3rd module. </p>
<p>The 1st one was importing mypackage.mymodule while the 2nd one was importing mymodule. mypackage is installed as a python egg and the code I was working on is in my development folder.</p>
<p>So both codes were importing different modules and I guess that it is normal to have two different classes in this case.</p>
<p>Regarding the example code I've used, the problem comes from the current module receiving <code>__main__</code> as its name. I've tried to rename it by doing <code>__name__ = 'borg'</code>. It works, but it breaks the <code>if __name__ == "__main__"</code> condition. As a conclusion, I would say that mutual inclusion must be avoided and is in most cases not necessary.</p>
<p>Thanks all for your help.</p>
| 0 | 2009-09-11T11:36:42Z | [
"python"
] |
Borg pattern problem when used in two different modules | 1,409,917 | <p>I am using the Borg pattern with mutual inclusion of modules. See the example code (not the real code but it shows the problem) below. In this case, I have two different Borgs because the class names (and I guess the class) are seen as different by the interpreter. </p>
<p>Is there a way to use the Borg in that case without reworking the module architecture?</p>
<p>Module borg.py</p>
<pre>
import borg2
class Borg:
_we_are_one = {}
def __init__(self):
self.__dict__ = Borg._we_are_one
try:
self.name
except AttributeError:
self.name = "?"
print self.__class__, id(self.__dict__)
def fct_ab():
a = Borg()
a.name = "Bjorn"
b = Borg()
print b.name
if __name__ == "__main__":
fct_ab()
borg2.fct_c()
</pre>
<p>Module borg2.py</p>
<pre>
import borg
def fct_c():
c = borg.Borg()
print c.name
</pre>
<p>The result is</p>
<pre>
__main__.Borg 40106720
__main__.Borg 40106720
Bjorn
borg.Borg 40106288
?
</pre>
<p>EDIT:
In order to clarify my problem:
Why does Python consider <code>__main__.Borg</code> and <code>borg.Borg</code> as two different classes?</p>
| 0 | 2009-09-11T09:21:49Z | 1,996,502 | <p>After a long day of struggling with Singletons and Borg, my conclusion is the following:</p>
<p>It seems that a Python module imported multiple times using different 'import paths' is actually imported multiple times. If that module contains a singleton, you get multiple instances.</p>
<p>Example:</p>
<pre><code>myproject/
module_A
some_folder/
module_B
module_C
</code></pre>
<p>If module_A imports module_C using <code>from myproject.some_folder import module_C</code> and module_B imports the same module_C using <code>import module_C</code>, the module is actually imported twice (at least according to my observations). Usually, this doesn't matter, but for singletons or borg, you actually get 2 instances of what should be unique. (That's 2 sets of borgs sharing 2 different internal states).</p>
<p>Solution: Give yourself an import statement convention and stick to it: I import all modules starting from a common root folder, even if the module file is located parallel to the one I am working on, so in the example above, both module_A and module_B import module_C using <code>from myproject.some_folder import module_C</code>.</p>
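A quick way to check whether a process has fallen into this trap (a hedged diagnostic sketch, not part of the original answer; the function name is made up) is to scan <code>sys.modules</code> for two module names whose <code>__file__</code> points at the same file:

```python
import os
import sys

def find_double_imports():
    """Map each module file path to the list of names it was imported under."""
    by_path = {}
    for name, mod in list(sys.modules.items()):
        path = getattr(mod, "__file__", None)
        if path:
            by_path.setdefault(os.path.realpath(path), []).append(name)
    # Any file listed under more than one name was executed more than once
    # (note: some stdlib aliases such as os.path/posixpath are expected).
    return {p: names for p, names in by_path.items() if len(names) > 1}

print(find_double_imports())
```

If one of your own modules shows up here under two names, its singletons/borgs exist twice.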
| 3 | 2010-01-03T22:03:24Z | [
"python"
] |
Borg pattern problem when used in two different modules | 1,409,917 | <p>I am using the Borg pattern with mutual inclusion of modules. See the example code (not the real code but it shows the problem) below. In this case, I have two different Borgs because the class names (and I guess the class) are seen as different by the interpreter. </p>
<p>Is there a way to use the Borg in that case without reworking the module architecture?</p>
<p>Module borg.py</p>
<pre>
import borg2
class Borg:
_we_are_one = {}
def __init__(self):
self.__dict__ = Borg._we_are_one
try:
self.name
except AttributeError:
self.name = "?"
print self.__class__, id(self.__dict__)
def fct_ab():
a = Borg()
a.name = "Bjorn"
b = Borg()
print b.name
if __name__ == "__main__":
fct_ab()
borg2.fct_c()
</pre>
<p>Module borg2.py</p>
<pre>
import borg
def fct_c():
c = borg.Borg()
print c.name
</pre>
<p>The result is</p>
<pre>
__main__.Borg 40106720
__main__.Borg 40106720
Bjorn
borg.Borg 40106288
?
</pre>
<p>EDIT:
In order to clarify my problem:
Why does Python consider <code>__main__.Borg</code> and <code>borg.Borg</code> as two different classes?</p>
| 0 | 2009-09-11T09:21:49Z | 38,994,959 | <p>The solution --- as has already been mentioned --- is to avoid a recursive <code>import</code> of the main module, but <code>borg.py</code> is <em>not</em> being "imported twice". The problem is that importing it <em>at all</em> while it is already executing causes you define the <code>Borg</code> class twice, in two different namespaces.</p>
<p>To demonstrate, I added a few lines to the top of both <code>borg.py</code> and <code>borg2.py</code>, and inserted my <code>print_module</code> function before and after most points of interest:</p>
<pre><code>#!/usr/bin/env python2
from __future__ import print_function
def print_module(*args, **kwargs):
print(__name__ + ': ', end='')
print(*args, **kwargs)
return
print_module('Importing module borg2...')
import borg2
print_module('Module borg2 imported.')
print_module('Defining class Borg...')
class Borg:
...
# etc.
</code></pre>
<p>The output is:</p>
<pre class="lang-none prettyprint-override"><code>__main__: Importing module borg2...
borg2: Importing module borg...
borg: Importing module borg2...
borg: Module borg2 imported.
borg: Defining class Borg...
borg: id(_we_are_one) = 17350480
borg: Class Borg defined.
borg: id(Borg) = 139879572980464
borg: End of borg.py.
borg2: Module borg imported.
borg2: End of borg2.py.
__main__: Module borg2 imported.
__main__: Defining class Borg...
__main__: id(_we_are_one) = 17351632
__main__: Class Borg defined.
__main__: id(Borg) = 139879572981136
__main__: Borg 17351632
__main__: Borg 17351632
__main__: Bjorn
borg: Borg 17350480
borg2: ?
__main__: End of borg.py.
</code></pre>
<p>The first thing <code>borg.py</code> does (not counting the bits I added) is import <code>borg2</code> into the <code>__main__</code> namespace. This happens before the <code>Borg</code> class is defined anywhere.</p>
<p>The first thing <code>borg2</code> does is import <code>borg</code>, which again attempts to import <code>borg2</code>... and Python refuses to do so. (Note that <em>nothing</em> happens between lines 3 and 4.) <code>borg</code> finally defines the <code>Borg</code> class and the <code>fct_ab</code> function in the <code>borg</code> namespace, and exits.</p>
<p><code>borg2</code> then defines <code>fct_c</code> and exits ("borg2: End of borg2.py."). All the <code>import</code> statements are done.</p>
<p>Now, <code>borg.py</code> <em>finally</em> gets to execute "for real". Yes, it already ran once when it was imported, but this is still the "first" time through the <code>borg.py</code> file. The <code>Borg</code> class gets defined again, this time in the <code>__main__</code> namespace, and both the class and its dictionary have new IDs.</p>
<p><code>borg.py</code> was not "imported twice". It was executed once from the command line, and it was executed once when it was imported. Since these happened in two different namespaces, the "second" definition of <code>Borg</code> did not replace the first, and the two functions modified two different classes, which just happened to be created from the same code.</p>
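The double-definition can be reproduced without any circular import at all. Here is a small Python 3 sketch (not part of the original answer; the file and module names are made up) that executes one file under two module names, just as running <code>borg.py</code> as a script and importing it does:

```python
import importlib.util
import os
import tempfile

# A throwaway module containing a Borg-style class.
src = "class Borg:\n    _we_are_one = {}\n"
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "borgdemo.py")
with open(path, "w") as f:
    f.write(src)

def load_as(name):
    """Execute the same file under the given module name."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

mod_a = load_as("borgdemo")      # stands in for 'import borg'
mod_b = load_as("borgdemo_main") # stands in for running borg.py directly

print(mod_a.Borg is mod_b.Borg)                          # False: two class objects
print(mod_a.Borg._we_are_one is mod_b.Borg._we_are_one)  # False: two shared dicts
```

Same source code, two executions, two distinct classes with two distinct "shared" dictionaries — which is exactly why the borgs stop being one.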
| 0 | 2016-08-17T11:08:59Z | [
"python"
] |
## in python using notepad++ syntax coloring | 1,410,063 | <p>In my editor (notepad++) in Python script edit mode, a line</p>
<pre>
## is this a special comment or what?
</pre>
<p>Turns a different color (yellow) than a normal #comment.</p>
<p>What's special about a ##comment vs a #comment?</p>
| 2 | 2009-09-11T09:54:47Z | 1,410,088 | <p>From the Python point of view, there's no difference. However, Notepad++'s highlighter considers the ## sequence as a STRINGEOL, which is why it colours it this way. See this <a href="http://sourceforge.net/forum/message.php?msg%5Fid=4340178">thread</a>.</p>
| 5 | 2009-09-11T10:02:43Z | [
"python",
"comments"
] |
## in python using notepad++ syntax coloring | 1,410,063 | <p>In my editor (notepad++) in Python script edit mode, a line</p>
<pre>
## is this a special comment or what?
</pre>
<p>Turns a different color (yellow) than a normal #comment.</p>
<p>What's special about a ##comment vs a #comment?</p>
| 2 | 2009-09-11T09:54:47Z | 1,410,104 | <p>Also, in a <a href="http://mail.python.org/pipermail/doc-sig/2006-March/003558.html" rel="nofollow">different situations</a>:</p>
<blockquote>
<p>Comment whose first line is a double hash:</p>
<blockquote>
<p>This is used by doxygen and Fredrik Lundh's PythonDoc. In doxygen,
if there's text on the line with the double hash, it is treated as
a summary string. I dislike this convention because it seems too
likely to result in false positives. E.g., if you comment-out a
region with a comment in it, you get a double-hash.</p>
</blockquote>
</blockquote>
| 0 | 2009-09-11T10:08:02Z | [
"python",
"comments"
] |
## in python using notepad++ syntax coloring | 1,410,063 | <p>In my editor (notepad++) in Python script edit mode, a line</p>
<pre>
## is this a special comment or what?
</pre>
<p>Turns a different color (yellow) than a normal #comment.</p>
<p>What's special about a ##comment vs a #comment?</p>
| 2 | 2009-09-11T09:54:47Z | 7,221,958 | <p>I thought the difference had something to do with usage:</p>
<pre><code>#this is a code block header
</code></pre>
<p>vs.</p>
<pre><code>##this is a comment
</code></pre>
<p>I know Python doesn't care one way or the other, but I thought it was just convention to do it that way.</p>
| 2 | 2011-08-28T15:48:02Z | [
"python",
"comments"
] |
Inverse Dict in Python | 1,410,087 | <p>I am trying to create a new dict using a list of values of an existing dict as individual keys.</p>
<p>So for example:</p>
<pre><code>dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
</code></pre>
<p>and I would like to obtain:</p>
<pre><code>dict2 = dict({1:['a','b','c'], 2:['a','b','c'], 3:['a','b'], 4:['b']})
</code></pre>
<p>So far, I've not been able to do this in a very clean way. Any suggestions?</p>
| 2 | 2009-09-11T10:02:32Z | 1,410,118 | <p>If you are using Python 2.5 or above, use the <a href="http://docs.python.org/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict</code> class from the <code>collections</code></a> module; a <code>defaultdict</code> automatically creates values on the first access to a missing key, so you can use that here to create the lists for <code>dict2</code>, like this:</p>
<pre><code>from collections import defaultdict
dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
dict2 = defaultdict(list)
for key, values in dict1.items():
for value in values:
# The list for dict2[value] is created automatically
dict2[value].append(key)
</code></pre>
<p>Note that the lists in dict2 will not be in any particular order, as dictionaries do not order their key-value pairs.</p>
<p>If you want an ordinary dict out at the end that will raise a <code>KeyError</code> for missing keys, just use <code>dict2 = dict(dict2)</code> after the above.</p>
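As a quick sanity check of the inversion (a sketch using the question's data, not part of the original answer), sorting each value list makes the comparison order-independent:

```python
from collections import defaultdict

dict1 = {'a': [1, 2, 3], 'b': [1, 2, 3, 4], 'c': [1, 2]}
dict2 = defaultdict(list)
for key, values in dict1.items():
    for value in values:
        dict2[value].append(key)  # list created automatically on first access

# Sort each list so the result does not depend on iteration order.
result = {k: sorted(v) for k, v in dict2.items()}
print(result)  # {1: ['a', 'b', 'c'], 2: ['a', 'b', 'c'], 3: ['a', 'b'], 4: ['b']}
```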
| 8 | 2009-09-11T10:12:23Z | [
"python",
"data-structures",
"dictionary"
] |
Inverse Dict in Python | 1,410,087 | <p>I am trying to create a new dict using a list of values of an existing dict as individual keys.</p>
<p>So for example:</p>
<pre><code>dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
</code></pre>
<p>and I would like to obtain:</p>
<pre><code>dict2 = dict({1:['a','b','c'], 2:['a','b','c'], 3:['a','b'], 4:['b']})
</code></pre>
<p>So far, I've not been able to do this in a very clean way. Any suggestions?</p>
| 2 | 2009-09-11T10:02:32Z | 1,410,186 | <p>Notice that you don't need the <code>dict</code> in your examples: the <code>{}</code> syntax gives you a dict:</p>
<pre><code>dict1 = {'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]}
</code></pre>
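A tiny illustration of the point (a sketch, not part of the original answer): wrapping a literal in <code>dict()</code> builds an equal but separate dictionary, so the call adds nothing.

```python
d1 = dict({'a': [1, 2, 3]})  # the dict() call is redundant...
d2 = {'a': [1, 2, 3]}        # ...the literal alone builds the same mapping

print(d1 == d2)  # True: same contents
print(d1 is d2)  # False: dict() made a (shallow) copy
```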
| 4 | 2009-09-11T10:35:46Z | [
"python",
"data-structures",
"dictionary"
] |
Inverse Dict in Python | 1,410,087 | <p>I am trying to create a new dict using a list of values of an existing dict as individual keys.</p>
<p>So for example:</p>
<pre><code>dict1 = dict({'a':[1,2,3], 'b':[1,2,3,4], 'c':[1,2]})
</code></pre>
<p>and I would like to obtain:</p>
<pre><code>dict2 = dict({1:['a','b','c'], 2:['a','b','c'], 3:['a','b'], 4:['b']})
</code></pre>
<p>So far, I've not been able to do this in a very clean way. Any suggestions?</p>
| 2 | 2009-09-11T10:02:32Z | 1,410,194 | <p>Other way:</p>
<pre><code>dict2={}
[[ (dict2.setdefault(i,[]) or 1) and (dict2[i].append(x)) for i in y ] for (x,y) in dict1.items()]
</code></pre>
| -3 | 2009-09-11T10:39:25Z | [
"python",
"data-structures",
"dictionary"
] |
Returning a c++ array (pointer) from boost python | 1,410,272 | <p>I'm currently writing python bindings for a c++ library I'm working on. The library reads some binary file format and reading speed is very important. While optimizing the library for speed, I noticed that std::vector (used in the instances I'm reading) was eating up a lot of processing time, so I replaced those with simple arrays allocated with new[] (whether this was a good/wise thing to do is another question probably).</p>
<p>Now I'm stuck with the problem of how to give python access to these arrays. There seems to be no solution built into boost::python (I haven't been able to find one at least).</p>
<p>Example code to illustrate the situation:</p>
<pre><code>// Instance.cpp
class Instance
{
int * data;
int dataLength;
Instance ()
{
data = new int[10];
dataLength = 10;
}
};
// Class pythonBindings.cpp
BOOST_PYTHON_MODULE(db)
{
class_<Instance>("Instance", init<>())
.add_property("data", ........)
;
}
</code></pre>
<p>I guess I could use a wrapper function that constructs a boost::python::list out of the arrays whenever python wants to access them. Since I am quite new to boost::python, I figured I should ask if there are any nice, standard or built-in solutions to this problem before I start hacking away though.</p>
<p>So, how would you recommend wrapping <code>Instance</code>'s <code>data</code> array using boost::python?</p>
| 1 | 2009-09-11T11:03:36Z | 1,887,396 | <p>I will recomend a wrap data and dataLength with proxy class and returns from Instance this proxy. In our project we use this way to export data from our app to python.</p>
<p>If you want I can give you few links to our implementation and explain how it works.</p>
| 1 | 2009-12-11T11:18:26Z | [
"python",
"boost",
"binding",
"boost-python"
] |
Returning a c++ array (pointer) from boost python | 1,410,272 | <p>I'm currently writing python bindings for a c++ library I'm working on. The library reads some binary file format and reading speed is very important. While optimizing the library for speed, I noticed that std::vector (used in the instances I'm reading) was eating up a lot of processing time, so I replaced those with simple arrays allocated with new[] (whether this was a good/wise thing to do is another question probably).</p>
<p>Now I'm stuck with the problem of how to give python access to these arrays. There seems to be no solution built into boost::python (I haven't been able to find one at least).</p>
<p>Example code to illustrate the situation:</p>
<pre><code>// Instance.cpp
class Instance
{
int * data;
int dataLength;
Instance ()
{
data = new int[10];
dataLength = 10;
}
};
// Class pythonBindings.cpp
BOOST_PYTHON_MODULE(db)
{
class_<Instance>("Instance", init<>())
.add_property("data", ........)
;
}
</code></pre>
<p>I guess I could use a wrapper function that constructs a boost::python::list out of the arrays whenever python wants to access them. Since I am quite new to boost::python, I figured I should ask if there are any nice, standard or built-in solutions to this problem before I start hacking away though.</p>
<p>So, how would you recommend wrapping <code>Instance</code>'s <code>data</code> array using boost::python?</p>
| 1 | 2009-09-11T11:03:36Z | 2,119,573 | <p>If you change your class to work with <code>std::vector</code> instances, take a look at the vector indexing suite (<a href="http://www.boost.org/doc/libs/1_41_0/libs/python/doc/v2/indexing.html" rel="nofollow">http://www.boost.org/doc/libs/1_41_0/libs/python/doc/v2/indexing.html</a>), which allows you to expose vectors to python with a native list interface, without creating copies from/to python.</p>
| 4 | 2010-01-22T18:37:57Z | [
"python",
"boost",
"binding",
"boost-python"
] |
How to implement Unicode string matching by folding in python | 1,410,308 | <p>I have an application implementing incremental search. I have a catalog of unicode strings to be matched and match them to a given "key" string; a catalog string is a "hit" if it contains all of the characters in the key, in order, and it is ranked better if the key characters cluster in the catalog string.</p>
<p>Anyway, this works fine and matches unicode exactly, so that "öst" will match "<strong>Öst</strong>blocket" or "r<strong>öst</strong>" or "r<strong>ö</strong>d <strong>st</strong>en".</p>
<p>Anyway, now I want to implement folding, since there are some cases where it is not useful to distinguish between a catalog character such as "á" or "é" and the key character "a" or "e".</p>
<p>For example: "Ole" should match "Olé"</p>
<p>How do I best implement this unicode-folding matcher in Python? Efficiency is important since I have to match thousands of catalog strings to the short, given key.</p>
<p>It does not have to turn it into ascii; in fact, the algorithm's output string could be unicode. Leaving a character in is better than stripping it.</p>
<p><hr /></p>
<p>I don't know which answer to accept, since I use a bit of both. Taking the NKFD decomposition and removing combining marks goes almost the way to finish, I only add some custom transliterations to that. Here is the module as it looks now: (Warning, contains unicode chars inline, since it is much nicer to edit that way.)</p>
<pre><code># -*- encoding: UTF-8 -*-
import unicodedata
from unicodedata import normalize, category
def _folditems():
_folding_table = {
# general non-decomposing characters
# FIXME: This is not complete
u"ł" : u"l",
u"œ" : u"oe",
u"ð" : u"d",
u"þ" : u"th",
u"ß" : u"ss",
# germano-scandinavic canonical transliterations
u"ü" : u"ue",
u"å" : u"aa",
u"ä" : u"ae",
u"æ" : u"ae",
u"ö" : u"oe",
u"ø" : u"oe",
}
for c, rep in _folding_table.iteritems():
yield (ord(c.upper()), rep.title())
yield (ord(c), rep)
folding_table = dict(_folditems())
def tofolded(ustr):
u"""Fold @ustr
Return a unicode str where composed characters are replaced by
their base, and extended latin characters are replaced by
similar basic latin characters.
>>> tofolded(u"Wyłącz")
u'Wylacz'
>>> tofolded(u"naïveté")
u'naivete'
Characters from other scripts are not transliterated.
>>> tofolded(u"Ἑλλάς") == u'Ελλας'
True
(These doctests pass, but should they fail, they fail hard)
"""
srcstr = normalize("NFKD", ustr.translate(folding_table))
return u"".join(c for c in srcstr if category(c) != 'Mn')
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
<p>(And, for the actual matching if that interests anyone: I construct folded strings for all my catalog beforehand, and put the folded versions into the already-available catalog object alias property.)</p>
| 9 | 2009-09-11T11:15:48Z | 1,410,356 | <p>Have a look at this: <a href="ftp://alan.smcvt.edu/hefferon/unicode2ascii.py" rel="nofollow">ftp://alan.smcvt.edu/hefferon/unicode2ascii.py</a></p>
<p>Probably not complete, but might get you started.</p>
| 1 | 2009-09-11T11:28:33Z | [
"python",
"unicode"
] |
How to implement Unicode string matching by folding in python | 1,410,308 | <p>I have an application implementing incremental search. I have a catalog of unicode strings to be matched and match them to a given "key" string; a catalog string is a "hit" if it contains all of the characters in the key, in order, and it is ranked better if the key characters cluster in the catalog string.</p>
<p>Anyway, this works fine and matches unicode exactly, so that "öst" will match "<strong>Öst</strong>blocket" or "r<strong>öst</strong>" or "r<strong>ö</strong>d <strong>st</strong>en".</p>
<p>Anyway, now I want to implement folding, since there are some cases where it is not useful to distinguish between a catalog character such as "á" or "é" and the key character "a" or "e".</p>
<p>For example: "Ole" should match "Olé"</p>
<p>How do I best implement this unicode-folding matcher in Python? Efficiency is important since I have to match thousands of catalog strings to the short, given key.</p>
<p>It does not have to turn it into ascii; in fact, the algorithm's output string could be unicode. Leaving a character in is better than stripping it.</p>
<p><hr /></p>
<p>I don't know which answer to accept, since I use a bit of both. Taking the NKFD decomposition and removing combining marks goes almost the way to finish, I only add some custom transliterations to that. Here is the module as it looks now: (Warning, contains unicode chars inline, since it is much nicer to edit that way.)</p>
<pre><code># -*- encoding: UTF-8 -*-
import unicodedata
from unicodedata import normalize, category
def _folditems():
_folding_table = {
# general non-decomposing characters
# FIXME: This is not complete
u"ł" : u"l",
u"œ" : u"oe",
u"ð" : u"d",
u"þ" : u"th",
u"ß" : u"ss",
# germano-scandinavic canonical transliterations
u"ü" : u"ue",
u"å" : u"aa",
u"ä" : u"ae",
u"æ" : u"ae",
u"ö" : u"oe",
u"ø" : u"oe",
}
for c, rep in _folding_table.iteritems():
yield (ord(c.upper()), rep.title())
yield (ord(c), rep)
folding_table = dict(_folditems())
def tofolded(ustr):
u"""Fold @ustr
Return a unicode str where composed characters are replaced by
their base, and extended latin characters are replaced by
similar basic latin characters.
>>> tofolded(u"Wyłącz")
u'Wylacz'
>>> tofolded(u"naïveté")
u'naivete'
Characters from other scripts are not transliterated.
>>> tofolded(u"Ἑλλάς") == u'Ελλας'
True
(These doctests pass, but should they fail, they fail hard)
"""
srcstr = normalize("NFKD", ustr.translate(folding_table))
return u"".join(c for c in srcstr if category(c) != 'Mn')
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
<p>(And, for the actual matching if that interests anyone: I construct folded strings for all my catalog beforehand, and put the folded versions into the already-available catalog object alias property.)</p>
| 9 | 2009-09-11T11:15:48Z | 1,410,365 | <p>You can use <a href="http://stackoverflow.com/questions/517923/what-is-the-best-way-to-remove-accents-in-a-python-unicode-string/518232#518232">this</a> <code>strip_accents</code> function to remove the accents:</p>
<pre><code>def strip_accents(s):
return ''.join((c for c in unicodedata.normalize('NFD', unicode(s)) if unicodedata.category(c) != 'Mn'))
>>> strip_accents(u'Östblocket')
'Ostblocket'
</code></pre>
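One caveat worth noting (a sketch restating the function above, not part of the original answer): NFD only helps for characters that decompose into base + combining mark. Characters such as 'ø' and 'ß' have no decomposition and pass through unchanged, which is why the question's module adds a manual transliteration table on top.

```python
import unicodedata

def strip_accents(s):
    # Decompose, then drop combining marks (category 'Mn').
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')

print(strip_accents(u'Östblocket'))  # Ostblocket
print(strip_accents(u'øß'))          # øß (unchanged: no decomposition exists)
```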
| 5 | 2009-09-11T11:31:30Z | [
"python",
"unicode"
] |
How to implement Unicode string matching by folding in python | 1,410,308 | <p>I have an application implementing incremental search. I have a catalog of unicode strings to be matched and match them to a given "key" string; a catalog string is a "hit" if it contains all of the characters in the key, in order, and it is ranked better if the key characters cluster in the catalog string.</p>
<p>Anyway, this works fine and matches unicode exactly, so that "öst" will match "<strong>Öst</strong>blocket" or "r<strong>öst</strong>" or "r<strong>ö</strong>d <strong>st</strong>en".</p>
<p>Anyway, now I want to implement folding, since there are some cases where it is not useful to distinguish between a catalog character such as "á" or "é" and the key character "a" or "e".</p>
<p>For example: "Ole" should match "Olé"</p>
<p>How do I best implement this unicode-folding matcher in Python? Efficiency is important since I have to match thousands of catalog strings to the short, given key.</p>
<p>It does not have to turn it into ascii; in fact, the algorithm's output string could be unicode. Leaving a character in is better than stripping it.</p>
<p><hr /></p>
<p>I don't know which answer to accept, since I use a bit of both. Taking the NKFD decomposition and removing combining marks goes almost the way to finish, I only add some custom transliterations to that. Here is the module as it looks now: (Warning, contains unicode chars inline, since it is much nicer to edit that way.)</p>
<pre><code># -*- encoding: UTF-8 -*-
import unicodedata
from unicodedata import normalize, category
def _folditems():
_folding_table = {
# general non-decomposing characters
# FIXME: This is not complete
u"ł" : u"l",
u"œ" : u"oe",
u"ð" : u"d",
u"þ" : u"th",
u"ß" : u"ss",
# germano-scandinavic canonical transliterations
u"ü" : u"ue",
u"å" : u"aa",
u"ä" : u"ae",
u"æ" : u"ae",
u"ö" : u"oe",
u"ø" : u"oe",
}
for c, rep in _folding_table.iteritems():
yield (ord(c.upper()), rep.title())
yield (ord(c), rep)
folding_table = dict(_folditems())
def tofolded(ustr):
u"""Fold @ustr
Return a unicode str where composed characters are replaced by
their base, and extended latin characters are replaced by
similar basic latin characters.
>>> tofolded(u"Wyłącz")
u'Wylacz'
>>> tofolded(u"naïveté")
u'naivete'
Characters from other scripts are not transliterated.
>>> tofolded(u"Ἑλλάς") == u'Ελλας'
True
(These doctests pass, but should they fail, they fail hard)
"""
srcstr = normalize("NFKD", ustr.translate(folding_table))
return u"".join(c for c in srcstr if category(c) != 'Mn')
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
<p>(And, for the actual matching if that interests anyone: I construct folded strings for all my catalog beforehand, and put the folded versions into the already-available catalog object alias property.)</p>
| 9 | 2009-09-11T11:15:48Z | 2,327,977 | <p>A general purpose solution (especially for search normalization and generating slugs) is the unidecode module:</p>
<p><a href="http://pypi.python.org/pypi/Unidecode" rel="nofollow">http://pypi.python.org/pypi/Unidecode</a></p>
<p>It's a port of the Text::Unidecode module for Perl. It's not complete, but it translates all Latin-derived characters I could find, transliterates Cyrillic, Chinese, etc to Latin and even handles full-width characters correctly.</p>
<p>It's probably a good idea to simply strip all characters you don't want to have in the final output or replace them with a filler (e.g. <code>"äßœ$"</code> will be unidecoded to <code>"assoe$"</code>, so you might want to strip the non-alphanumerics). For characters it will transliterate but shouldn't (say, <code>§</code>=><code>SS</code> and <code>€</code>=><code>EU</code>) you need to clean up the input:</p>
<pre><code>input_str = u'äßœ$'
input_str = u''.join([ch if ch.isalnum() else u'-' for ch in input_str])
input_str = str(unidecode(input_str)).lower()
</code></pre>
<p>This would replace all non-alphanumeric characters with a dummy replacement and then transliterate the string and turn it into lowercase.</p>
| 1 | 2010-02-24T17:18:10Z | [
"python",
"unicode"
] |
How to implement Unicode string matching by folding in python | 1,410,308 | <p>I have an application implementing incremental search. I have a catalog of unicode strings to be matched and match them to a given "key" string; a catalog string is a "hit" if it contains all of the characters in the key, in order, and it is ranked better if the key characters cluster in the catalog string.</p>
<p>Anyway, this works fine and matches unicode exactly, so that "öst" will match "<strong>Öst</strong>blocket" or "r<strong>öst</strong>" or "r<strong>ö</strong>d <strong>st</strong>en".</p>
<p>Anyway, now I want to implement folding, since there are some cases where it is not useful to distinguish between a catalog character such as "á" or "é" and the key character "a" or "e".</p>
<p>For example: "Ole" should match "Olé"</p>
<p>How do I best implement this unicode-folding matcher in Python? Efficiency is important since I have to match thousands of catalog strings to the short, given key.</p>
<p>It does not have to turn it into ascii; in fact, the algorithm's output string could be unicode. Leaving a character in is better than stripping it.</p>
<p><hr /></p>
<p>I don't know which answer to accept, since I use a bit of both. Taking the NKFD decomposition and removing combining marks goes almost the way to finish, I only add some custom transliterations to that. Here is the module as it looks now: (Warning, contains unicode chars inline, since it is much nicer to edit that way.)</p>
<pre><code># -*- encoding: UTF-8 -*-
import unicodedata
from unicodedata import normalize, category
def _folditems():
_folding_table = {
# general non-decomposing characters
# FIXME: This is not complete
u"ł" : u"l",
u"œ" : u"oe",
u"ð" : u"d",
u"þ" : u"th",
u"ß" : u"ss",
# germano-scandinavic canonical transliterations
u"ü" : u"ue",
u"å" : u"aa",
u"ä" : u"ae",
u"æ" : u"ae",
u"ö" : u"oe",
u"ø" : u"oe",
}
for c, rep in _folding_table.iteritems():
yield (ord(c.upper()), rep.title())
yield (ord(c), rep)
folding_table = dict(_folditems())
def tofolded(ustr):
u"""Fold @ustr
Return a unicode str where composed characters are replaced by
their base, and extended latin characters are replaced by
similar basic latin characters.
>>> tofolded(u"Wyłącz")
u'Wylacz'
>>> tofolded(u"naïveté")
u'naivete'
Characters from other scripts are not transliterated.
>>> tofolded(u"Ἑλλάς") == u'Ελλας'
True
(These doctests pass, but should they fail, they fail hard)
"""
srcstr = normalize("NFKD", ustr.translate(folding_table))
return u"".join(c for c in srcstr if category(c) != 'Mn')
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
<p>(And, for the actual matching if that interests anyone: I construct folded strings for all my catalog beforehand, and put the folded versions into the already-available catalog object alias property.)</p>
| 9 | 2009-09-11T11:15:48Z | 2,759,500 | <p>What about this one:</p>
<pre><code>normalize('NFKD', unicode_string).encode('ASCII', 'ignore').lower()
</code></pre>
<p>Taken from here (Spanish) <a href="http://python.org.ar/pyar/Recetario/NormalizarCaracteresUnicode" rel="nofollow">http://python.org.ar/pyar/Recetario/NormalizarCaracteresUnicode</a></p>
| 1 | 2010-05-03T16:20:21Z | [
"python",
"unicode"
] |
How to implement Unicode string matching by folding in python | 1,410,308 | <p>I have an application implementing incremental search. I have a catalog of unicode strings to be matched and match them to a given "key" string; a catalog string is a "hit" if it contains all of the characters in the key, in order, and it is ranked better if the key characters cluster in the catalog string.</p>
<p>Anyway, this works fine and matches unicode exactly, so that "öst" will match "<strong>Öst</strong>blocket" or "r<strong>öst</strong>" or "r<strong>ö</strong>d <strong>st</strong>en".</p>
<p>Anyway, now I want to implement folding, since there are some cases where it is not useful to distinguish between a catalog character such as "á" or "é" and the key character "a" or "e".</p>
<p>For example: "Ole" should match "Olé"</p>
<p>How do I best implement this unicode-folding matcher in Python? Efficiency is important since I have to match thousands of catalog strings to the short, given key.</p>
<p>It does not have to turn it into ascii; in fact, the algorithm's output string could be unicode. Leaving a character in is better than stripping it.</p>
<p><hr /></p>
<p>I don't know which answer to accept, since I use a bit of both. Taking the NKFD decomposition and removing combining marks goes almost the way to finish, I only add some custom transliterations to that. Here is the module as it looks now: (Warning, contains unicode chars inline, since it is much nicer to edit that way.)</p>
<pre><code># -*- encoding: UTF-8 -*-
import unicodedata
from unicodedata import normalize, category
def _folditems():
_folding_table = {
# general non-decomposing characters
# FIXME: This is not complete
u"ł" : u"l",
u"œ" : u"oe",
u"ð" : u"d",
u"þ" : u"th",
u"ß" : u"ss",
# germano-scandinavic canonical transliterations
u"ü" : u"ue",
u"å" : u"aa",
u"ä" : u"ae",
u"æ" : u"ae",
u"ö" : u"oe",
u"ø" : u"oe",
}
for c, rep in _folding_table.iteritems():
yield (ord(c.upper()), rep.title())
yield (ord(c), rep)
folding_table = dict(_folditems())
def tofolded(ustr):
u"""Fold @ustr
Return a unicode str where composed characters are replaced by
their base, and extended latin characters are replaced by
similar basic latin characters.
>>> tofolded(u"Wyłącz")
u'Wylacz'
>>> tofolded(u"naïveté")
u'naivete'
Characters from other scripts are not transliterated.
>>> tofolded(u"Ἑλλάς") == u'Ελλας'
True
(These doctests pass, but should they fail, they fail hard)
"""
srcstr = normalize("NFKD", ustr.translate(folding_table))
return u"".join(c for c in srcstr if category(c) != 'Mn')
if __name__ == '__main__':
import doctest
doctest.testmod()
</code></pre>
<p>(And, for the actual matching if that interests anyone: I construct folded strings for all my catalog beforehand, and put the folded versions into the already-available catalog object alias property.)</p>
| 9 | 2009-09-11T11:15:48Z | 5,203,502 | <blockquote>
<p>For my application, I already addressed this in a different comment: I want to have a <em>unicode</em> result and <em>leave unhandled characters</em> untouched.</p>
</blockquote>
<p>In that case, the correct way to do this is to create a UCA collator object with its strength set to compare at primary strength only, which thereby completely disregards diacritics.</p>
<p>I show how to do this using Perl in <a href="http://stackoverflow.com/questions/5157141/how-do-you-match-accented-and-tilde-characters-in-a-perl-regular-expression-rege/5163247#5163247">this answer</a>. The first collator object is at the strength you need, while the second one considers accents for tie-breaking.</p>
<p>You will note that no strings have been harmed in the making of these comparisons: the original data is untouched.</p>
| 4 | 2011-03-05T11:24:27Z | [
"python",
"unicode"
] |
What are the pros and cons of PyRo and RPyC python libs? | 1,410,328 | <p>I am looking for a remote procedure call engine for Python and I've found that <a href="http://pythonhosted.org/Pyro4/" rel="nofollow">PyRo (Python Remote Object)</a> and <a href="http://rpyc.wikidot.com/" rel="nofollow">RPyC (Remote Python Call) </a> are both the kind of thing I am searching for.</p>
<p>However, I am curious to know how they compare to each other and what their pros and cons are.</p>
| 13 | 2009-09-11T11:20:11Z | 1,411,448 | <p>I personally find them roughly equivalent, but RPyC's author (<a href="http://rpyc.wikispaces.com/about">here</a>) claims more simplicity (and maybe for somebody not all that used to distributed computing he's got a point; I may be too used to it to make a good judge;-). Quoting him...:</p>
<blockquote>
<p>although PYRO has a long list of
considerable projects in its resumè, I
find setting up a server too
complicated, if you take into account
the amount of code needed, registering
objects, running name servers, etc.
Not to mention the number of different
concepts you have to consider (events,
rebinding, with or without name
servers, proxy vs. attribute-proxy,
names have to be unique, etc.). And
it's limited (remote objects must be
picklable so you can't work with
remote files, etc.). All in all, PYRO
has too many special cases and is
generally too complicated (yes, I
consider this complicated). So of
course I'm not an independent reviewer
-- but judge for yourself. Isn't RPyC simpler and cleaner?</p>
</blockquote>
<p>On the other side of the coin, PyRO does try to provide some security (which RPyC's author claim is too weak anyway, and underlies many of PyRO's claimed complications).</p>
<p>A more independent voice, David Mertz, offers <a href="http://www.ibm.com/developerworks/linux/library/l-rpyc/">here</a> a good explanation of RPyC (PyRO has been around much longer and David points to previous articles covering it). The "classic mode" is the totally generally and simple and zero-security part, "essentially identical to Pyro (without Pyro's optional security framework)"; the "services mode" is more secure (everything not explicitly permitted is by default forbidden) and, David says, " the service mode is essentially RPC (for example, XML_RPC), modulo some details on calling conventions and implementation". Seems a fair assessment to me.</p>
<p>BTW, I'm not particularly fond of single-language RPC systems -- even if Python covers 99% of my needs (and it's not quite THAT high;-), I love the fact that I can use any language for the remaining 1%... I don't want to give that up at the RPC layer!-) I'd rather do e.g. <a href="http://json-rpc.org/">JSON-RPC</a> via <a href="http://json-rpc.org/wiki/python-json-rpc">this</a> module, or the like...!-).</p>
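For a feel of how lightweight such cross-language RPC can be, here is a minimal request/response round trip using only the standard library's XML-RPC modules (the host/port are local placeholders for this demo):

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Serve one function on an ephemeral local port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda x, y: x + y, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any XML-RPC client, in any language, can now call "add".
result = ServerProxy("http://127.0.0.1:%d" % port).add(2, 3)
print(result)  # 5

server.shutdown()
server.server_close()
```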
| 17 | 2009-09-11T15:02:58Z | [
"python",
"distributed",
"pyro",
"rpyc"
] |
What are the pros and cons of PyRo and RPyC python libs? | 1,410,328 | <p>I am looking for a remote procedure call engine for Python and I've found that <a href="http://pythonhosted.org/Pyro4/" rel="nofollow">PyRo (Python Remote Object)</a> and <a href="http://rpyc.wikidot.com/" rel="nofollow">RPyC (Remote Python Call) </a> are both the kind of thing I am searching for.</p>
<p>However, I am curious to know how they compare to each other and what their pros and cons are.</p>
| 13 | 2009-09-11T11:20:11Z | 13,998,998 | <p>YMMV, but here are my results from evaluating RPyC, Pyro4 and ZeroRPC for use on an upcoming project. Note that these are not in-depth tests, nor is this intended to be an in-depth review, just my notes on how well each works for the needs of my upcoming project.</p>
<p>ZeroRPC:</p>
<ul>
<li>quite a few dependencies</li>
<li>very young project (main support from dotCloud)</li>
<li>very little documentation</li>
<li>can't access remote object's attributes, just methods</li>
<li>Due to lack of attribute access, IPython tab completion does not work on remote objects</li>
</ul>
<p>Pyro4:</p>
<ul>
<li>Python3 support</li>
<li>Nice, plentiful documentation</li>
<li>mature project</li>
<li>No attribute access/IPython tab completion</li>
</ul>
<p>Pyro3:</p>
<ul>
<li>support for attribute access (claimed in docs; have not verified)</li>
<li>No Python3 support</li>
</ul>
<p>RPyC:</p>
<ul>
<li>attribute access, IPython tab completion on remote objects</li>
<li>Python3 support (claimed in docs; not yet verified)</li>
<li>spotty documentation</li>
</ul>
<p>FWIW:</p>
<p>I tend to like RPyC (maybe because it was my first? ;-), but its documentation is sparse. It was my first exposure to an RPC, and it took me a long time to "grok" how to get things working. The author (Tomer) is very helpful and does respond to Qs on the Google RPyC list.</p>
<p>If you're new to RPC, I would suggest starting with Pyro and take advantage of its solid documentation to learn the ropes. Move on to RPyC, ZeroRPC, etc. as your needs require.</p>
| 5 | 2012-12-22T00:41:12Z | [
"python",
"distributed",
"pyro",
"rpyc"
] |
Python image mirroring | 1,410,406 | <p>I've been mirroring a picture along the horizontal and vertical axes. Now I'm going to do the diagonal.</p>
<p>I had done the hori and verti with two for loops, which in the hori scenario loop through all the pixels in the height and only half of the pixels in the width. Then it gets the color of the pixel and sets the same color to the pixel on the other side, going from <code>getWidth(pic)</code> to the center.</p>
<p>Then I have my mirror in the middle of the pic. How to do the diagonal way?</p>
<p>Edit:</p>
<pre><code>img_src = makePicture(pickAFile())
W = getWidth(img_src)
H = getHeight(img_src)
for x in range(W):
for y in range(H):
p = getPixel(img_src, x, y)
colorInSrc = getColor( getPixel(img_src, x, y) )
destPixel = getPixel(img_src, H-y-1, W-x-1)
setColor(destPixel, colorInSrc)
</code></pre>
| 0 | 2009-09-11T11:42:31Z | 1,410,438 | <p>It's not really a Python question, is it?</p>
<p>The easiest solution would be to first mirror horizontal and then vertical.
Another one would be to switch pixel rows with columns.</p>
<p>Or to do your algorithm but switch the pixels from left-top to bottom-right...</p>
| 1 | 2009-09-11T11:55:17Z | [
"python",
"image-manipulation"
] |
Python image mirroring | 1,410,406 | <p>I've been mirroring a picture along the horizontal and vertical axes. Now I'm going to do the diagonal.</p>
<p>I had done the hori and verti with two for loops, which in the hori scenario loop through all the pixels in the height and only half of the pixels in the width. Then it gets the color of the pixel and sets the same color to the pixel on the other side, going from <code>getWidth(pic)</code> to the center.</p>
<p>Then I have my mirror in the middle of the pic. How to do the diagonal way?</p>
<p>Edit:</p>
<pre><code>img_src = makePicture(pickAFile())
W = getWidth(img_src)
H = getHeight(img_src)
for x in range(W):
for y in range(H):
p = getPixel(img_src, x, y)
colorInSrc = getColor( getPixel(img_src, x, y) )
destPixel = getPixel(img_src, H-y-1, W-x-1)
setColor(destPixel, colorInSrc)
</code></pre>
| 0 | 2009-09-11T11:42:31Z | 1,410,439 | <p>If I understood correctly what you need is to "flip" the image by a diagonal. Since there are two of them I'll presume that you mean the one that goes from left bottom to right top.</p>
<p>In order to flip by this diagonal you need to transform each row from the source in columns in the destination. The left part of the rows will become the bottom part of the new columns. Also the topmost row will become the rightmost column. You will need to do this pixel by pixel on the whole image. Also keep in mind that the width and height of the image will be swapped.</p>
<p><strong>Edit</strong>: A small example. Say you start with an image 5 pixels wide and 3 pixels high (5x3). You will need to create a new blank image 3 pixels wide and 5 pixels high.</p>
<p>If you start pixel numbering from left top corner with (0,0), then this pixel will end up at (2,4) in the new image, pixel (1,0) will end at (2,3) and so on.</p>
<p>If your original width and height are W and H then you should use something like this:</p>
<pre><code>for x in xrange(W):
    for y in xrange(H):
        p = img_src.getpixel(x, y)
        img_dest.setpixel(H-y-1, W-x-1, p)
</code></pre>
<p>This should work, but is not tested.</p>
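The index mapping can be sanity-checked without any imaging library, using a nested list as a stand-in image (`W`, `H` and the (0,0) → (2,4) example come from the answer above):

```python
W, H = 5, 3                       # source image: 5 wide, 3 high
src = [[(x, y) for x in range(W)] for y in range(H)]   # src[y][x]

# Destination is H wide and W high; source (x, y) lands at (H-y-1, W-x-1).
dest = [[None] * H for _ in range(W)]                  # dest[y][x]
for x in range(W):
    for y in range(H):
        dest[W - x - 1][H - y - 1] = src[y][x]

print(dest[4][2])  # (0, 0): the top-left source pixel ends up at destination (2, 4)
```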
| 1 | 2009-09-11T11:55:29Z | [
"python",
"image-manipulation"
] |
Python image mirroring | 1,410,406 | <p>I've been mirroring a picture along the horizontal and vertical axes. Now I'm going to do the diagonal.</p>
<p>I had done the hori and verti with two for loops, which in the hori scenario loop through all the pixels in the height and only half of the pixels in the width. Then it gets the color of the pixel and sets the same color to the pixel on the other side, going from <code>getWidth(pic)</code> to the center.</p>
<p>Then I have my mirror in the middle of the pic. How to do the diagonal way?</p>
<p>Edit:</p>
<pre><code>img_src = makePicture(pickAFile())
W = getWidth(img_src)
H = getHeight(img_src)
for x in range(W):
for y in range(H):
p = getPixel(img_src, x, y)
colorInSrc = getColor( getPixel(img_src, x, y) )
destPixel = getPixel(img_src, H-y-1, W-x-1)
setColor(destPixel, colorInSrc)
</code></pre>
| 0 | 2009-09-11T11:42:31Z | 1,410,787 | <p>Using PIL (the Python Imaging Library) this is a relatively straightforward task. Notice however, that the output image is square -- thus not the same size as the original image.</p>
<p>Here is the code:</p>
<pre><code>from PIL import Image, ImageDraw
# load the image, create the mirrored image, and the result placeholder
img = Image.open('img.png')
mirror = img.transpose(Image.FLIP_LEFT_RIGHT).transpose(Image.ROTATE_90)
sz = max(img.size + mirror.size)
result = Image.new(img.mode, (sz,sz))
result.paste(img, (0,0)+img.size)
# now paste the mirrored image, but with a triangular binary mask
mask = Image.new('1', mirror.size)
draw = ImageDraw.Draw(mask)
draw.polygon([0,0,0,sz,sz,sz], outline='white', fill='white')
result.paste(mirror, (0,0)+mirror.size, mask)
# clean up and save the result
del mirror, mask, draw
result.save('result.png')
</code></pre>
| 2 | 2009-09-11T13:12:44Z | [
"python",
"image-manipulation"
] |
Python image mirroring | 1,410,406 | <p>I've been mirroring a picture along the horizontal and vertical axes. Now I'm going to do the diagonal.</p>
<p>I had done the hori and verti with two for loops, which in the hori scenario loop through all the pixels in the height and only half of the pixels in the width. Then it gets the color of the pixel and sets the same color to the pixel on the other side, going from <code>getWidth(pic)</code> to the center.</p>
<p>Then I have my mirror in the middle of the pic. How to do the diagonal way?</p>
<p>Edit:</p>
<pre><code>img_src = makePicture(pickAFile())
W = getWidth(img_src)
H = getHeight(img_src)
for x in range(W):
for y in range(H):
p = getPixel(img_src, x, y)
colorInSrc = getColor( getPixel(img_src, x, y) )
destPixel = getPixel(img_src, H-y-1, W-x-1)
setColor(destPixel, colorInSrc)
</code></pre>
| 0 | 2009-09-11T11:42:31Z | 12,682,853 | <p>Here's how to mirror diagonally in JES; It only works for a square image though:</p>
<pre><code>def mirrorDiagonal(picture):
    for sourceX in range(0, getWidth(picture)):
        for sourceY in range(0, getHeight(picture)):
            pex = getPixel(picture, sourceY, sourceX)
            pix = getPixel(picture, sourceX, sourceY)
            color = getColor(pix)
            setColor(pex, color)
    show(picture)
</code></pre>
| 0 | 2012-10-02T00:13:16Z | [
"python",
"image-manipulation"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,410,452 | <ol>
<li><p>use editor / IDE that supports code highlighting. E.g., Notepad++ has word-highlighting feature that I find very useful.</p></li>
<li><p>use unit tests</p></li>
</ol>
<p>stupid errors will be weeded out first, so I wouldn't worry to much about this type of errors. it's "smart" error you should be afraid of.</p>
| 3 | 2009-09-11T11:58:52Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,410,455 | <p><a href="http://eclipse.org/" rel="nofollow">Eclipse</a> has a good <a href="http://pydev.org/" rel="nofollow">python plugin</a> for doing the syntax highlighting and debugging.</p>
| 0 | 2009-09-11T12:00:09Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,410,456 | <p>Looks like <a href="http://pychecker.sourceforge.net/" rel="nofollow">PyChecker</a> or <a href="http://www.logilab.org/857" rel="nofollow">pylint</a> are what you're looking for</p>
| 7 | 2009-09-11T12:00:12Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,410,460 | <ol>
<li><p>Use tools such as <a href="http://www.logilab.org/857" rel="nofollow">pylint</a> or <a href="http://pychecker.sourceforge.net/" rel="nofollow">PyChecker</a>.</p></li>
<li><p>Write unit tests.</p></li>
</ol>
| 3 | 2009-09-11T12:00:38Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,410,696 | <p>Pylint is almost doing what you are looking for. </p>
<p>You can also force the compilation of your python files. That will show some basic syntax error (it doesn't have all the capability of a c++ compiler)</p>
<p>I've read <a href="http://www.ibm.com/developerworks/opensource/library/os-ecant/" rel="nofollow">this article</a> and decided to make an automated build system with pyDev and ant. It does the compilation of the python files and is running the unit tests. Next step is to integrate pylint to that process.</p>
<p>I hope it helps </p>
| 0 | 2009-09-11T12:55:36Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,410,933 | <p>Unit test. <a href="http://docs.python.org/library/unittest.html" rel="nofollow">http://docs.python.org/library/unittest.html</a></p>
<p>If your tests are written at a reasonable level of granularity, it can be as fast to unit test as it is to run lint or a compiler.</p>
| 3 | 2009-09-11T13:40:06Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 1,411,292 | <ul>
<li><strong>Static analysis</strong> (as from the IDE, or from tools like pyLint and pyChecker) is a very quick and effective way to check simple errors, and enforce a common style. </li>
<li><strong>Unit tests</strong> are a great way to ensure the code stands for its contract. </li>
<li><strong>Code reviews</strong> and pair programming are one of the best ways to find errors of all sorts, and to spread knowledge in a team. </li>
</ul>
<p>All of the options require some time, to setup and to execute. However, the gains are tremendous, and far higher than the investment.</p>
| 2 | 2009-09-11T14:39:24Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 9,675,780 | <p>As with other languages, you should use assertions liberally throughout your code. Use assertions when you must rely on the predicate to be true for the program to run, not as exception/error handling. An assertion should be used to check for irrecoverable errors and force the program to crash. <a href="http://stackoverflow.com/questions/944592/best-practice-for-python-assert">More on assertions (and python error checking in general)</a></p>
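A small illustration of that style (the `withdraw` function and its invariant are hypothetical, shown only to demonstrate assertion placement):

```python
def withdraw(balance, amount):
    # Invariant the program relies on: callers never overdraw.
    # If this fires, program state is already wrong -- crash loudly.
    assert amount <= balance, "overdraw: %r > %r" % (amount, balance)
    return balance - amount

print(withdraw(100, 30))  # 70
```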
| 0 | 2012-03-12T22:36:41Z | [
"python",
"compiler-construction",
"correctness"
] |
Checking Python code correctness | 1,410,444 | <p>In C++ I have a compiler that tells me if something is wrong with my code after refactoring. How can I make sure that Python code is at least correct after changes? There may be some stupid error like a wrong function name etc. that is pretty easy to find at compile time.</p>
<p>Thanks</p>
| 5 | 2009-09-11T11:56:30Z | 29,248,370 | <p>You may need this:</p>
<pre><code>python -m py_compile script.py
</code></pre>
| 0 | 2015-03-25T05:51:12Z | [
"python",
"compiler-construction",
"correctness"
] |
copy.deepcopy vs pickle | 1,410,615 | <p>I have a tree structure of widgets e.g. collection contains models and model contains widgets. I want to copy whole collection, <code>copy.deepcopy</code> is faster in comparison to 'pickle and de-pickle'ing the object but cPickle as being written in C is much faster, so</p>
<ol>
<li>Why shouldn't I(we) always be using cPickle instead of deepcopy? </li>
<li>Is there any other copy alternative? Because pickle is slower than deepcopy but cPickle is faster, maybe a C implementation of deepcopy would be the winner</li>
</ol>
<p><strong>Sample test code:</strong></p>
<pre><code>import copy
import pickle
import cPickle
class A(object): pass
d = {}
for i in range(1000):
d[i] = A()
def copy1():
return copy.deepcopy(d)
def copy2():
return pickle.loads(pickle.dumps(d, -1))
def copy3():
return cPickle.loads(cPickle.dumps(d, -1))
</code></pre>
<p><strong>Timings:</strong></p>
<pre><code>>python -m timeit -s "import c" "c.copy1()"
10 loops, best of 3: 46.3 msec per loop
>python -m timeit -s "import c" "c.copy2()"
10 loops, best of 3: 93.3 msec per loop
>python -m timeit -s "import c" "c.copy3()"
100 loops, best of 3: 17.1 msec per loop
</code></pre>
| 24 | 2009-09-11T12:39:43Z | 1,410,813 | <p>You should be using deepcopy because it makes your code more readable. Using a serialization mechanism to copy objects in memory is at the very least confusing to another developer reading your code. Using deepcopy also means you get to reap the benefits of future optimizations in deepcopy.</p>
<p>First rule of optimization: don't.</p>
| 6 | 2009-09-11T13:18:11Z | [
"python",
"pickle",
"deep-copy"
] |
copy.deepcopy vs pickle | 1,410,615 | <p>I have a tree structure of widgets e.g. collection contains models and model contains widgets. I want to copy whole collection, <code>copy.deepcopy</code> is faster in comparison to 'pickle and de-pickle'ing the object but cPickle as being written in C is much faster, so</p>
<ol>
<li>Why shouldn't I(we) always be using cPickle instead of deepcopy? </li>
<li>Is there any other copy alternative? Because pickle is slower than deepcopy but cPickle is faster, maybe a C implementation of deepcopy would be the winner</li>
</ol>
<p><strong>Sample test code:</strong></p>
<pre><code>import copy
import pickle
import cPickle
class A(object): pass
d = {}
for i in range(1000):
d[i] = A()
def copy1():
return copy.deepcopy(d)
def copy2():
return pickle.loads(pickle.dumps(d, -1))
def copy3():
return cPickle.loads(cPickle.dumps(d, -1))
</code></pre>
<p><strong>Timings:</strong></p>
<pre><code>>python -m timeit -s "import c" "c.copy1()"
10 loops, best of 3: 46.3 msec per loop
>python -m timeit -s "import c" "c.copy2()"
10 loops, best of 3: 93.3 msec per loop
>python -m timeit -s "import c" "c.copy3()"
100 loops, best of 3: 17.1 msec per loop
</code></pre>
| 24 | 2009-09-11T12:39:43Z | 1,411,229 | <p>Problem is, pickle+unpickle can be faster (in the C implementation) because it's <em>less general</em> than deepcopy: many objects can be deepcopied but not pickled. Suppose for example that your class <code>A</code> were changed to...:</p>
<pre><code>class A(object):
    class B(object): pass
    def __init__(self): self.b = self.B()
</code></pre>
<p>now, <code>copy1</code> still works fine (A's complexity slows it downs but absolutely doesn't stop it); <code>copy2</code> and <code>copy3</code> break, the end of the stack trace says...:</p>
<pre><code> File "./c.py", line 20, in copy3
return cPickle.loads(cPickle.dumps(d, -1))
PicklingError: Can't pickle <class 'c.B'>: attribute lookup c.B failed
</code></pre>
<p>I.e., pickling always assumes that classes and functions are top-level entities in their modules, and so pickles them "by name" -- deepcopying makes absolutely no such assumptions.</p>
<p>So if you have a situation where speed of "somewhat deep-copying" is absolutely crucial, every millisecond matters, AND you want to take advantage of special limitations that you KNOW apply to the objects you're duplicating, such as those that make pickling applicable, or ones favoring other forms yet of serializations and other shortcuts, by all means go ahead - but if you do you MUST be aware that you're constraining your system to live by those limitations forevermore, and document that design decision very clearly and explicitly for the benefit of future maintainers.</p>
<p>For the NORMAL case, where you want generality, use <code>deepcopy</code>!-)</p>
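A self-contained illustration of that difference (a hypothetical class holding a per-instance lambda, which pickle must serialize "by name" and cannot, while deepcopy handles it fine):

```python
import copy
import pickle

class Widget(object):
    def __init__(self):
        # A local function: deepcopy treats it atomically, but pickle
        # would have to look it up by name at module top level -- and fails.
        self.callback = lambda v: v * 2

w = Widget()
dup = copy.deepcopy(w)   # works fine
print(dup.callback(21))  # 42

try:
    pickle.dumps(w)
except Exception as exc:  # PicklingError or AttributeError, depending on version
    print("pickling failed: %s" % type(exc).__name__)
```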
| 27 | 2009-09-11T14:27:53Z | [
"python",
"pickle",
"deep-copy"
] |
copy.deepcopy vs pickle | 1,410,615 | <p>I have a tree structure of widgets e.g. collection contains models and model contains widgets. I want to copy whole collection, <code>copy.deepcopy</code> is faster in comparison to 'pickle and de-pickle'ing the object but cPickle as being written in C is much faster, so</p>
<ol>
<li>Why shouldn't I(we) always be using cPickle instead of deepcopy? </li>
<li>Is there any other copy alternative? Because pickle is slower than deepcopy but cPickle is faster, maybe a C implementation of deepcopy would be the winner</li>
</ol>
<p><strong>Sample test code:</strong></p>
<pre><code>import copy
import pickle
import cPickle
class A(object): pass
d = {}
for i in range(1000):
d[i] = A()
def copy1():
return copy.deepcopy(d)
def copy2():
return pickle.loads(pickle.dumps(d, -1))
def copy3():
return cPickle.loads(cPickle.dumps(d, -1))
</code></pre>
<p><strong>Timings:</strong></p>
<pre><code>>python -m timeit -s "import c" "c.copy1()"
10 loops, best of 3: 46.3 msec per loop
>python -m timeit -s "import c" "c.copy2()"
10 loops, best of 3: 93.3 msec per loop
>python -m timeit -s "import c" "c.copy3()"
100 loops, best of 3: 17.1 msec per loop
</code></pre>
| 24 | 2009-09-11T12:39:43Z | 1,411,305 | <p>Even faster would be to avoid the copy in the first place. You mention that you are doing rendering. Why does it need to copy objects?</p>
| 1 | 2009-09-11T14:41:01Z | [
"python",
"pickle",
"deep-copy"
] |
copy.deepcopy vs pickle | 1,410,615 | <p>I have a tree structure of widgets e.g. collection contains models and model contains widgets. I want to copy whole collection, <code>copy.deepcopy</code> is faster in comparison to 'pickle and de-pickle'ing the object but cPickle as being written in C is much faster, so</p>
<ol>
<li>Why shouldn't I(we) always be using cPickle instead of deepcopy? </li>
<li>Is there any other copy alternative? Because pickle is slower than deepcopy but cPickle is faster, maybe a C implementation of deepcopy would be the winner</li>
</ol>
<p><strong>Sample test code:</strong></p>
<pre><code>import copy
import pickle
import cPickle
class A(object): pass
d = {}
for i in range(1000):
d[i] = A()
def copy1():
return copy.deepcopy(d)
def copy2():
return pickle.loads(pickle.dumps(d, -1))
def copy3():
return cPickle.loads(cPickle.dumps(d, -1))
</code></pre>
<p><strong>Timings:</strong></p>
<pre><code>>python -m timeit -s "import c" "c.copy1()"
10 loops, best of 3: 46.3 msec per loop
>python -m timeit -s "import c" "c.copy2()"
10 loops, best of 3: 93.3 msec per loop
>python -m timeit -s "import c" "c.copy3()"
100 loops, best of 3: 17.1 msec per loop
</code></pre>
| 24 | 2009-09-11T12:39:43Z | 19,065,623 | <p>Short and somewhat late:</p>
<ul>
<li>If you have to cPickle an object anyway, you might as well use the cPickle method to deepcopy (but document)</li>
</ul>
<p>e.g. You might consider:</p>
<pre><code>import cPickle
from copy import deepcopy

def mydeepcopy(obj):
    try:
        return cPickle.loads(cPickle.dumps(obj, -1))
    except (cPickle.PicklingError, TypeError):
        return deepcopy(obj)
</code></pre>
| 0 | 2013-09-28T09:54:24Z | [
"python",
"pickle",
"deep-copy"
] |
copy.deepcopy vs pickle | 1,410,615 | <p>I have a tree structure of widgets e.g. collection contains models and model contains widgets. I want to copy whole collection, <code>copy.deepcopy</code> is faster in comparison to 'pickle and de-pickle'ing the object but cPickle as being written in C is much faster, so</p>
<ol>
<li>Why shouldn't I(we) always be using cPickle instead of deepcopy? </li>
<li>Is there any other copy alternative? Because pickle is slower than deepcopy but cPickle is faster, maybe a C implementation of deepcopy would be the winner</li>
</ol>
<p><strong>Sample test code:</strong></p>
<pre><code>import copy
import pickle
import cPickle
class A(object): pass
d = {}
for i in range(1000):
d[i] = A()
def copy1():
return copy.deepcopy(d)
def copy2():
return pickle.loads(pickle.dumps(d, -1))
def copy3():
return cPickle.loads(cPickle.dumps(d, -1))
</code></pre>
<p><strong>Timings:</strong></p>
<pre><code>>python -m timeit -s "import c" "c.copy1()"
10 loops, best of 3: 46.3 msec per loop
>python -m timeit -s "import c" "c.copy2()"
10 loops, best of 3: 93.3 msec per loop
>python -m timeit -s "import c" "c.copy3()"
100 loops, best of 3: 17.1 msec per loop
</code></pre>
| 24 | 2009-09-11T12:39:43Z | 22,619,902 | <p>It is <em>not</em> always the case that cPickle is faster than deepcopy(). While cPickle is probably always faster than pickle, whether it is faster than deepcopy depends on</p>
<ul>
<li>the size and nesting level of the structures to be copied,</li>
<li>the type of contained objects, and</li>
<li>the size of the pickled string representation.</li>
</ul>
<p>If something can be pickled, it can obviously be deepcopied, but the opposite is not the case: <strong>In order to pickle something, it needs to be fully serialized</strong>; this is not the case for deepcopying. In particular, you can implement <code>__deepcopy__</code> very efficiently by copying a structure in memory (think of extension types), without being able to save everything to disk. (Think of suspend-to-RAM vs. suspend-to-disk.)</p>
<p>A well-known extension type that fulfills the conditions above may be <code>ndarray</code>, and indeed, it serves as a good counterexample to your observation: With <code>d = numpy.arange(100000000)</code>, your code gives different runtimes:</p>
<pre><code>In [1]: import copy, pickle, cPickle, numpy
In [2]: d = numpy.arange(100000000)
In [3]: %timeit pickle.loads(pickle.dumps(d, -1))
1 loops, best of 3: 2.95 s per loop
In [4]: %timeit cPickle.loads(cPickle.dumps(d, -1))
1 loops, best of 3: 2.37 s per loop
In [5]: %timeit copy.deepcopy(d)
1 loops, best of 3: 459 ms per loop
</code></pre>
<p>If <code>__deepcopy__</code> is not implemented, <code>copy</code> and <code>pickle</code> share common infrastructure (cf. <code>copy_reg</code> module, discussed in <a href="http://stackoverflow.com/questions/22388854/relationship-between-pickle-and-deepcopy/22618214#22618214">Relationship between pickle and deepcopy</a>).</p>
| 0 | 2014-03-24T20:27:53Z | [
"python",
"pickle",
"deep-copy"
] |
Is there a way to reopen a socket? | 1,410,723 | <p>I create many "short-term" sockets in some code that look like that :</p>
<pre><code>nb = 1000
for i in range(nb):
    sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sck.connect((adr, prt))
    sck.send('question %i' % i)
    sck.shutdown(socket.SHUT_WR)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
    sck.close()
</code></pre>
<p>This works fine, as long as <em>nb</em> is "small" enough. </p>
<p>As <em>nb</em> might be quite large though, I'd like to do something like this</p>
<pre><code>sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect((adr, prt))
for i in range(nb):
    reopen(sck) # ? ? ?
    sck.send('question %i' % i)
    sck.shutdown(socket.SHUT_WR)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
sck.close()
</code></pre>
<p>So the question is :<br />
Is there any way to "reuse" a socket that has been shutdown ?</p>
| 6 | 2009-09-11T13:00:14Z | 1,410,764 | <p>No, this is a limitation of the underlying C sockets (and the TCP/IP protocol, for that matter). My question to you is: why are you shutting them down when you can architect your application to use them?</p>
<p>The problem with many short-term sockets is that shutting them down puts them in a state where they cannot be used for a while (basically, twice the packet lifetime, to ensure any packets in the network either arrive and are discarded, or get discarded by the network itself). Basically what happens is that, in the 4-tuple that needs to be unique (source ip, source port, destination ip, destination port), the first one and last two tend to always be the same so, when you run out of source ports, you're hosed.</p>
<p>We've struck this problem in software before where it only became evident when we ran on faster machines (since we could use many more sessions).</p>
<p>Why don't you just open up the socket and continue to use it? It looks like your protocol is a simple request/response one, which should be easily do-able with that approach.</p>
<p>Something like:</p>
<pre><code>sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sck.connect((adr, prt))
for i in range(nb):
    sck.send('question %i' % i)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
sck.close()
</code></pre>
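Here is a runnable version of that persistent request/response pattern, with a throwaway local echo thread standing in for the real server (`adr`/`prt` above are placeholders; this sketch binds an ephemeral local port instead):

```python
import socket
import threading

def serve(listener):
    # Accept one client and answer each request on the same connection.
    conn, _ = listener.accept()
    while True:
        data = conn.recv(4096)
        if not data:
            break
        conn.sendall(b"answer to " + data)
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# One socket, reused for every request/response round trip.
sck = socket.create_connection(listener.getsockname())
replies = []
for i in range(3):
    sck.sendall(b"question %d" % i)   # lock-step: wait for each reply
    replies.append(sck.recv(4096))
sck.close()
print(replies[0])  # b'answer to question 0'
```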
<p><em>Update:</em></p>
<p>One possibility (and we've done this before) if you're running out of connection due to this continual open/close, is to detect the problem and throttle it. Consider the following code (the stuff I've added is more pseudo-code than Python since I haven't touched Python for quite a while):</p>
<pre><code>for i in range(nb):
    sck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sck.connect((adr, prt))
    while sck.error() == NO_SOCKETS_AVAIL:
        sleep 250 milliseconds
        sck.connect((adr, prt))
    sck.send('question %i' % i)
    sck.shutdown(socket.SHUT_WR)
    answer = sck.recv(4096)
    print 'answer %i : %s' % (i, answer)
    sck.close()
</code></pre>
<p>Basically, it lets you run at full speed while there are plenty of resources but slows down when you strike your problem area. This is actually what we did to our product to "fix" the problem of failing when resources got low. We would have re-architected it except for the fact it was a legacy product approaching end of life and we were basically in the fix-at-minimal-cost mode for service.</p>
| 14 | 2009-09-11T13:08:37Z | [
"python",
"sockets"
] |