title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
Python subprocess.Popen erroring with OSError: [Errno 12] Cannot allocate memory after period of time | 1,216,794 | <p><strong>Note</strong>: This question has been re-asked with a summary of all debugging attempts <a href="http://stackoverflow.com/questions/1367373/python-subprocess-popen-oserror-errno-12-cannot-allocate-memory">here</a>.</p>
<p><hr /></p>
<p>I have a Python script that is running as a background process executing every 60 seconds. Part of that is a call to <a href="http://docs.python.org/library/subprocess.html">subprocess.Popen</a> to get the output of <a href="http://linuxcommand.org/man%5Fpages/ps1.html">ps</a>. </p>
<pre><code>ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE).communicate()[0]
</code></pre>
<p>After running for a few days, the call is erroring with:</p>
<pre>
File "/home/admin/sd-agent/checks.py", line 436, in getProcesses
File "/usr/lib/python2.4/subprocess.py", line 533, in __init__
File "/usr/lib/python2.4/subprocess.py", line 835, in _get_handles
OSError: [Errno 12] Cannot allocate memory
</pre>
<p>However the output of <a href="http://www.linuxcommand.org/man%5Fpages/free1.html">free</a> on the server is:</p>
<pre>
$ free -m
             total       used       free     shared    buffers     cached
Mem:           894        345        549          0          0          0
-/+ buffers/cache:        345        549
Swap:            0          0          0
</pre>
<p>I have searched around for the problem and found <a href="http://www.zenoss.com/community/wiki/common-error-messages-and-solutions/oserror-errno-12-cannot-allocate-memory-in-popen2.py/">this article</a> which says:</p>
<p>Solution is to add more swap space to your server. When the kernel is forking to start the modeler or discovery process, it first ensures there's enough space available on the swap to store the new process if needed.</p>
<p>I note that there is no available swap from the free output above. Is this likely to be the problem and/or what other solutions might there be?</p>
<p><strong>Update 13th Aug 09</strong> The code above is called every 60 seconds as part of a series of monitoring functions. The process is daemonized and the check is scheduled using <a href="http://docs.python.org/library/sched.html">sched</a>. The specific code for the above function is:</p>
<pre><code>def getProcesses(self):
self.checksLogger.debug('getProcesses: start')
# Memory logging (case 27152)
if self.agentConfig['debugMode'] and sys.platform == 'linux2':
mem = subprocess.Popen(['free', '-m'], stdout=subprocess.PIPE).communicate()[0]
self.checksLogger.debug('getProcesses: memory before Popen - ' + str(mem))
# Get output from ps
try:
self.checksLogger.debug('getProcesses: attempting Popen')
ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE).communicate()[0]
except Exception, e:
import traceback
self.checksLogger.error('getProcesses: exception = ' + traceback.format_exc())
return False
self.checksLogger.debug('getProcesses: Popen success, parsing')
# Memory logging (case 27152)
if self.agentConfig['debugMode'] and sys.platform == 'linux2':
mem = subprocess.Popen(['free', '-m'], stdout=subprocess.PIPE).communicate()[0]
self.checksLogger.debug('getProcesses: memory after Popen - ' + str(mem))
# Split out each process
processLines = ps.split('\n')
del processLines[0] # Removes the headers
processLines.pop() # Removes a trailing empty line
processes = []
self.checksLogger.debug('getProcesses: Popen success, parsing, looping')
for line in processLines:
line = line.split(None, 10)
processes.append(line)
self.checksLogger.debug('getProcesses: completed, returning')
return processes
</code></pre>
<p>This is part of a bigger class called checks which is initialised once when the daemon is started.</p>
<p>The entire checks class can be found at <a href="http://github.com/dmytton/sd-agent/blob/82f5ff9203e54d2adeee8cfed704d09e3f00e8eb/checks.py">http://github.com/dmytton/sd-agent/blob/82f5ff9203e54d2adeee8cfed704d09e3f00e8eb/checks.py</a> with the getProcesses function defined from line 442. This is called by doChecks() starting at line 520.</p>
| 8 | 2009-08-01T15:14:30Z | 1,270,442 | <p>You've perhaps got a memory leak bounded by some <a href="http://www.opengroup.org/onlinepubs/9699919799/functions/getrlimit.html" rel="nofollow">resource limit</a> (<code>RLIMIT_DATA</code>, <code>RLIMIT_AS</code>?) inherited by your python script. Check your <em>ulimit(1)</em>s before you run your script, and profile the script's memory usage, as others have suggested.</p>
<p><strong>What do you do with the variable <code>ps</code> after the code snippet you show us?</strong> Do you keep a reference to it, never to be freed? Quoting the <a href="http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow"><code>subprocess</code> module docs</a>:</p>
<blockquote>
<p><strong>Note:</strong> The data read is buffered in memory, so do not use this
method if the data size is large or unlimited.</p>
</blockquote>
<p>... and <em>ps aux</em> can be verbose on a busy system...</p>
<p><strong>Update</strong></p>
<p>You can check rlimits from within your Python script using the <a href="http://docs.python.org/library/resource.html#resource.getrlimit" rel="nofollow">resource</a> module:</p>
<pre><code>import resource
print resource.getrlimit(resource.RLIMIT_DATA) # => (soft_lim, hard_lim)
print resource.getrlimit(resource.RLIMIT_AS)
</code></pre>
<p>If these return "unlimited" -- <code>(-1, -1)</code> -- then my hypothesis is incorrect and you may move on!</p>
<p>See also <a href="http://docs.python.org/library/resource.html#resource.getrusage" rel="nofollow"><code>resource.getrusage</code></a>, esp. the <code>ru_??rss</code> fields, which can help you to instrument for memory consumption from within the Python script, without shelling out to an external program.</p>
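<p>For instance, a minimal self-instrumentation sketch along those lines (note the units of <code>ru_maxrss</code> vary by platform: kilobytes on Linux, bytes on macOS):</p>

```python
import resource

# Peak resident set size of this process so far; logging this once per
# 60-second cycle would reveal a slow leak without shelling out to `free`.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(usage.ru_maxrss)
```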
| 6 | 2009-08-13T07:02:04Z | [
"python",
"linux",
"memory"
] |
| 8 | 2009-08-01T15:14:30Z | 1,270,848 | <p>You need to wait for (reap) your child processes:</p>
<pre><code>ps = subprocess.Popen(["sleep", "1000"])
os.waitpid(ps.pid, 0)  # reap the child so the kernel can release its resources
</code></pre>
<p>to free their resources; otherwise each finished child lingers as a zombie process.</p>
<p>Note: this does not work on Windows.</p>
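<p>A more portable sketch of the same idea uses <code>Popen.wait()</code> (or <code>communicate()</code>, which reads the output <em>and</em> reaps the child); <code>sys.executable</code> is used here only as a stand-in child process:</p>

```python
import subprocess
import sys

# communicate() reads the child's output and then waits on it, so no
# zombie process is left holding kernel resources.
p = subprocess.Popen([sys.executable, "-c", "print('done')"],
                     stdout=subprocess.PIPE)
out = p.communicate()[0]
```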
| 0 | 2009-08-13T08:50:18Z | [
"python",
"linux",
"memory"
] |
| 8 | 2009-08-01T15:14:30Z | 1,278,961 | <p>If you're running a background process, chances are that you've redirected your process's stdin/stdout/stderr.</p>
<p>In that case, append the option <code>close_fds=True</code> to your Popen call, which will prevent the child process from inheriting your redirected descriptors. This may be the limit you're bumping into.</p>
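<p>A sketch of what that call might look like (<code>sys.executable</code> stands in for the agent's real <code>['ps', 'aux']</code> command so the snippet runs anywhere):</p>

```python
import subprocess
import sys

# close_fds=True closes every descriptor above stdin/stdout/stderr in the
# child, so the daemon's redirected logs and pipes are not inherited by
# each spawned process.
cmd = [sys.executable, '-c', "print('ok')"]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, close_fds=True)
output = p.communicate()[0]
```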
| 2 | 2009-08-14T16:54:35Z | [
"python",
"linux",
"memory"
] |
| 8 | 2009-08-01T15:14:30Z | 1,297,785 | <p>When you use Popen you need to pass in <code>close_fds=True</code> if you want it to close extra file descriptors.</p>
<p>Creating a new pipe, which occurs in the <code>_get_handles</code> function from the backtrace, creates 2 file descriptors, but your current code never closes them, so you're eventually hitting your system's max fd limit.</p>
<p>Not sure why the error you're getting indicates an out-of-memory condition: it should be a file descriptor error, as the return value of <code>pipe()</code> has an error code for this problem.</p>
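<p>One way to test this hypothesis is to log the process's open-descriptor count each cycle and watch whether it climbs toward the limit. A Linux-only sketch (it reads <code>/proc</code>):</p>

```python
import os
import resource

# Soft/hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Count currently-open descriptors (Linux-only: /proc/self/fd).
# If this number grows by 2 every 60 seconds, pipes are leaking.
open_fds = len(os.listdir('/proc/self/fd'))
print(open_fds, soft)
```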
| 4 | 2009-08-19T04:18:56Z | [
"python",
"linux",
"memory"
] |
| 8 | 2009-08-01T15:14:30Z | 9,530,431 | <p>Virtual Memory matters!!!</p>
<p>I encountered the same issue before I added swap to my OS. The usual rule of thumb for virtual memory is: SwapSize + 50% * PhysicalMemorySize. I finally got this resolved by adding either more physical memory or a swap disk. <code>close_fds</code> didn't work in my case.</p>
| 0 | 2012-03-02T08:56:20Z | [
"python",
"linux",
"memory"
] |
google appengine/python: Can I depend on Task queue retry on failure to keep insertions to a minimum? | 1,216,947 | <p>Say my app has a page on which people can add comments, and that after each comment is added a task queue worker is enqueued. So if 100 comments are added, 100 task queue insertions are made.</p>
<p>(note: the above is a hypothetical example to illustrate my question)</p>
<p>Say I want to ensure that the number of insertions is kept to a minimum (so I don't run into the 10k insertion limit).</p>
<p>Could I do something like the following?</p>
<p>a) As each comment is added, call taskqueue.add(name="stickytask", url="/blah"). Since this is a named task, it will not be re-inserted if a task of the same name already exists.</p>
<p>b) The /blah worker URL reads the newly added comments and processes the first one. If more comments remain to be processed, it returns a status code other than 200. This ensures the task is retried, and on the next try it processes the next comment, and so on.</p>
<p>So all 100 comments are processed with one or a few task queue insertions. (Note: if there is a lull in activity where no new comments are added and all comments are processed, then the next added comment will result in a new task queue insertion.)</p>
<p>However, the docs (see snippet below) note that "the system will back off gradually". Does this mean that each non-200 HTTP status code returned adds a delay before the next retry?</p>
<p>From the docs: If the execution of a particular Task fails (by returning any HTTP status code other than 200 OK), App Engine will attempt to retry until it succeeds. The system will back off gradually so as not to flood your application with too many requests, but it will retry failed tasks at least once a day at minimum.</p>
| 1 | 2009-08-01T16:29:27Z | 1,217,108 | <p>There's no reason to fake a failure (and incur backoff &c) -- that's a hacky and fragile arrangement. If you fear that simply scheduling a task per new comment might exceed the task queues' currently strict limits, then "batch up" as-yet-unprocessed comments in the store (and possibly also in memcache, I guess, for a potential speedup, but, that's optional) and don't schedule any task at that time.</p>
<p>Rather, keep a cron job executing (say) every minute, which may deal with some comments or schedule an appropriate number of tasks to deal with pending comments -- as you schedule tasks from just one cron job, it's easy to ensure you're never scheduling over 10,000 per day.</p>
<p>Don't let task queues make you forget that <a href="http://code.google.com/appengine/docs/python/config/cron.html" rel="nofollow">cron</a> is also there: a good architecture for "batch-like" processing will generally use both cron jobs and queued tasks to simplify its overall design.</p>
<p>To maximize the amount of useful work accomplished in a single request (from either a queued task or a cron one), consider an approach based on <a href="http://code.google.com/appengine/docs/quotas.html#Monitoring%5FCPU%5FUsage%5Fin%5Fa%5FRequest" rel="nofollow">monitoring</a> your CPU usage -- when CPU is the factor limiting the work you can perform per request, this can help you get as many small schedulable units of work done in a single request as is prudently feasible. I think this approach is more solid than waiting for an <a href="http://code.google.com/appengine/docs/quotas.html#When%5Fa%5FResource%5Fis%5FDepleted" rel="nofollow">OverQuotaError</a>, catching it and rapidly closing up, as that may have other consequences out of your app's control.</p>
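<p>The scheduling arithmetic for the single cron job is easy to keep under quota. A hedged sketch (the batch size and helper name are made up for illustration, not part of the App Engine API):</p>

```python
def tasks_needed(pending_comments, batch_size=25):
    """How many worker tasks one cron tick should enqueue so that every
    pending comment is covered: ceiling division without floats."""
    return -(-pending_comments // batch_size)
```

<p>One cron tick enqueuing <code>tasks_needed(n)</code> tasks bounds daily insertions at (ticks per day) times (max tasks per tick), which is straightforward to keep under the 10k/day quota.</p>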
| 1 | 2009-08-01T17:47:22Z | [
"python",
"google-app-engine"
] |
Python: sorting a dictionary of lists | 1,217,251 | <p>Still learning python (finally!) and can't quite wrap my head around this one yet. What I want to do is sort a dictionary of lists by value using the third item in the list. It's easy enough sorting a dictionary by value when the value is just a single number or string, but this list thing has me baffled.</p>
<p>Example:</p>
<pre><code>myDict = { 'item1' : [ 7, 1, 9], 'item2' : [8, 2, 3], 'item3' : [ 9, 3, 11 ] }
</code></pre>
<p>I want to be able to iterate through the dictionary in order of the third value in each list, in this case 9, 3, 11.</p>
<p>Thanks much for any help!</p>
| 21 | 2009-08-01T19:07:17Z | 1,217,269 | <p>Here is one way to do this:</p>
<pre><code>>>> sorted(myDict.items(), key=lambda e: e[1][2])
[('item2', [8, 2, 3]), ('item1', [7, 1, 9]), ('item3', [9, 3, 11])]
</code></pre>
<p>The <a href="http://wiki.python.org/moin/HowTo/Sorting#Sortingbykeys"><code>key</code> argument</a> of the <code>sorted</code> function lets you derive a sorting key for each element of the list.</p>
<p>To iterate over the keys/values in this list, you can use something like:</p>
<pre><code>>>> for key, value in sorted(myDict.items(), key=lambda e: e[1][2]):
... print key, value
...
item2 [8, 2, 3]
item1 [7, 1, 9]
item3 [9, 3, 11]
</code></pre>
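<p>If only the iteration order of the keys is needed, the same key function works directly on the dictionary (iterating a dict yields its keys):</p>

```python
myDict = {'item1': [7, 1, 9], 'item2': [8, 2, 3], 'item3': [9, 3, 11]}

# Order the keys by the third element of each value list.
orderedKeys = sorted(myDict, key=lambda k: myDict[k][2])
```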
| 26 | 2009-08-01T19:15:11Z | [
"python",
"list",
"sorting",
"dictionary"
] |
| 21 | 2009-08-01T19:07:17Z | 1,217,823 | <p>You stated two quite different wants:</p>
<ol>
<li>"What I want to do is sort a dictionary of lists ..."</li>
<li>"I want to be able to iterate through the dictionary in order of ..."</li>
</ol>
<p>The first of those is by definition impossible -- to sort something implies a rearrangement in some order. Python dictionaries are inherently unordered. The second would be vaguely possible but extremely unlikely to be implemented.</p>
<p>What you can do is</p>
<ol>
<li>Take a copy of the dictionary contents (which will be quite unordered)</li>
<li>Sort that</li>
<li>Iterate over the sorted results -- and you already have two solutions for that. By the way, the solution that uses "key" instead of "cmp" is better; see <a href="http://docs.python.org/library/functions.html#sorted" rel="nofollow">sorted</a></li>
</ol>
<p>"the third item in the list" smells like "the third item in a tuple" to me, and "e[1][2]" just smells :-) ... you may like to investigate using named tuples instead of lists; see <a href="http://docs.python.org/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields" rel="nofollow">named tuple factory</a></p>
<p>If you are going to be doing extract/sort/process often on large data sets, you might like to consider something like this, using the Python-supplied sqlite3 module:</p>
<pre><code>create table ex_dict (k text primary key, v0 int, v1 int, v2 int);
insert into ex_dict values('item1', 7, 1, 9);
-- etc etc
select * from ex_dict order by v2;
</code></pre>
| 1 | 2009-08-02T00:04:33Z | [
"python",
"list",
"sorting",
"dictionary"
] |
| 21 | 2009-08-01T19:07:17Z | 1,218,570 | <p>As John Machin said, you can't actually sort a Python dictionary.</p>
<p>However, you can create an index of the keys which can be sorted in any order you like.</p>
<p>The preferred Python pattern (idiom) for sorting by any alternative criterion is called "decorate-sort-undecorate" (DSU). In this idiom you create a temporary list which contains tuples of your key(s) followed by your original data elements, then call the normal <strong>.sort()</strong> method on that list (or, in more recent versions of Python, simply wrap your decoration in a call to the <strong>sorted()</strong> built-in function). Then you remove the "decorations."</p>
<p>The reason this is generally preferred over passing a comparison function to the <strong>.sort()</strong> method is that Python's built-in default sorting code (compiled C in the normal C Python) is very fast and efficient in the default case, but much, much slower when it has to call Python object code many, many times in the non-default case. So it's usually far better to iterate over the data creating data structures which can be passed to the default sort routines.</p>
<p>In this case you should be able to use something like:</p>
<pre><code>[y[1] for y in sorted([(myDict[x][2], x) for x in myDict.keys()])]
</code></pre>
<p>... that's a list comprehension doing the undecorate from the sorted list of tuples which is being returned by the inner list comprehension. The inner comprehension is creating a set of tuples, your desired sorting key (the 3rd element of the list) and the dictionary's key corresponding to the sorting key. myDict.keys() is, of course, a method of Python dictionaries which returns a list of all valid keys in whatever order the underlying implementation chooses --- presumably a simple iteration over the hashes.</p>
<p>A more verbose way of doing this might be easier to read:</p>
<pre><code>temp = list()
for k, v in myDict.items():
    temp.append((v[2], k))   # decorate: (sort key, dict key)
temp.sort()
results = list()
for i in temp:
    results.append(i[1])     # undecorate: keep just the dict key
</code></pre>
<p>Usually you should build up such code iteratively, in the interpreter using small data samples. Build the "decorate" expression or function. Then wrap that in a call to <strong>sorted()</strong>. Then build the undecorate expression (which is usually as simple as what I've shown here).</p>
| 1 | 2009-08-02T10:01:05Z | [
"python",
"list",
"sorting",
"dictionary"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,282 | <p>I would say that integrated query technology in any language will become foundational in time, especially given the recent rise in interest of Functional programming languages.</p>
<p>LINQ is certainly one of the biggest reasons I personally am sticking with .NET, anyway - it's become foundational for me personally, and I'd wager a lot of devs feel this way as well.</p>
| 2 | 2009-08-01T19:19:43Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,290 | <p>Disclaimer: I've never used LINQ. Please correct me if I'm wrong.</p>
<p>Many languages have constructs which allow doing the same things as LINQ with the language's data types.
Apparently the most interesting feature is that LINQ constructs can be converted to SQL, but it's not specific to LINQ: <a href="http://www.aminus.org/blogs/index.php/2008/04/22/linq-in-python?blog=2" rel="nofollow">http://www.aminus.org/blogs/index.php/2008/04/22/linq-in-python?blog=2</a>.</p>
| 1 | 2009-08-01T19:23:33Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,291 | <p>I don't think that you can really classify it (or many things) as either. While I would hardly say that LINQ is a niche tool -- it has many applications to many people -- it's not "foundational" IMO. However, I also wouldn't say that having LINQ (or an equivalent) language-specific querying language can be truly foundational at this stage of the game. In the future, perhaps, but right now you can construct a query in many different ways that yield SIGNIFICANTLY varying levels of performance. </p>
| 0 | 2009-08-01T19:23:59Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,307 | <p>It sounds a lot to me like Ruby's Active Record, but I've never used LINQ. Anyone used both? (I would have posted this as a comment, but I'd really like to be updated on the answer--I'm probably wrong so it'll get downvoted :) )</p>
<p>(Actually I should say that AR is like LINQ to SQL, they haven't implemented AR for other targets as far as I know)</p>
| 0 | 2009-08-01T19:33:10Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,313 | <p>After spending years</p>
<ul>
<li>Handcrafting database access(in oh so many languages)</li>
<li>Going through the Entity framework</li>
<li>Fetching and storing data through the fashionable ORM of the month</li>
</ul>
<p>It was about time someone made an easy-to-access, language-integrated way to talk to a database.
LINQ to SQL should have been made years ago. I applaud the team that came up with it - finally a database access framework that makes sense.</p>
<p>It's not perfect, yet, and my main headache at the moment is that there's no real support for LINQ2SQL for other common databases, nor is there anything like it for Java.</p>
<p>(LINQ in general is nice too btw, not just LINQ to SQL :-)</p>
| 4 | 2009-08-01T19:36:25Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,454 | <p>I think the functional concepts which are the underpinnings of LINQ will become popular in many languages. Passing a sequence of objects through a set of functions to get the desired set of objects. Essentially, using the lambda syntax over the query syntax. </p>
<p>This is a very powerful and expressive way of coding. </p>
<p>This is not because I feel it's a fundamentally better way to do things (i.e. lambda over query syntax). Comparatively speaking, it's much easier to add the underlying library support for query expressions to a language than it is to add the query syntax. All that is required for the lambda syntax for queries is </p>
<ul>
<li>Lambdas</li>
<li>Underlying query methods</li>
</ul>
<p>Most new languages support lambdas (even C++ is finally getting them!). Adding the library support is fairly cheap and can usually be done by a motivated individual. </p>
<p>Getting the query syntax into the language though requires a lot more work. </p>
| 2 | 2009-08-01T20:45:02Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,217,473 | <p>I don't think linq will be confined to the microsoft languages, check it out, there is already something for the php, check it out <a href="http://phplinq.codeplex.com/" rel="nofollow">http://phplinq.codeplex.com/</a></p>
<p>I think Linq is great tool in development process and personally would be really glad if it will be transferred to other languages(like in my example of php)</p>
| 1 | 2009-08-01T20:55:53Z | [
"java",
".net",
"python",
"linq"
] |
Is LINQ (or linq) a niche tool, or is it on the path to becoming foundational? | 1,217,274 | <p>After reading "<a href="http://stackoverflow.com/questions/1217228/what-is-the-java-equivalent-for-linq">What is the Java equivalent of LINQ?</a>", I'd like to know, is (lowercase) language-integrated query - in other words the ability to use a concise syntax for performing queries over object collections or external stores - going to be the path of the future for most general purpose languages? Or is LINQ an interesting piece of technology that will remain confined to Microsoft languages? Something in between?</p>
<p><strong>EDIT</strong>: I don't know other languages, but as I am learning it seems like LINQ is neither unprecedented nor unique. The ideas in LINQ - lambdas and queries - are present in other languages, and the ideas seem to be spreading. </p>
| 7 | 2009-08-01T19:16:26Z | 1,218,086 | <p>Before LINQ, Python had <a href="http://www.python.org/dev/peps/pep-0289/" rel="nofollow">Generator Expressions</a> which are a <em>specific syntax for performing queries over collections</em>. Python's syntax is more compact than LINQ's, but lets you basically perform the same queries as easily as in LINQ. Months ago, I wrote a <a href="http://wiki.freaks-unidos.net/weblogs/ceronman/python%20vs%20csharp%20querys%20part%201" rel="nofollow">blog post comparing queries in C# and Python</a>; here is a small example:</p>
<p>C# Linq:</p>
<pre><code>var orders = from c in customers
where c.Region == "WA"
from o in c.Orders
where o.OrderDate >= cutoffDate
select new {c.CustomerID, o.OrderID};
</code></pre>
<p>Python Generator Expressions:</p>
<pre><code>orders = ( (c.customer_id, o.order_id)
for c in customers if c.region == 'WA'
for o in c.orders if o.date >= cutoff_date)
</code></pre>
<p>Syntax for queries in programming languages is an extremely useful tool. I believe every language should include something like that.</p>
| 6 | 2009-08-02T03:34:34Z | [
"java",
".net",
"python",
"linq"
] |
is it possible to add an element to a list and preserve the order | 1,217,780 | <p>I would like to add an element to a list that preserves the order of the list.</p>
<p>Let's assume the list of objects is <em>[a, b, c, d]</em> and I have a function <em>cmp</em> that compares two elements of the list. If I add an object f which is the biggest, I would like it to be in the last position.</p>
<p>maybe it's better to sort the complete list...</p>
| 2 | 2009-08-01T23:34:04Z | 1,217,785 | <p>Yes, this is what <a href="http://effbot.org/librarybook/bisect.htm" rel="nofollow">bisect.insort</a> is for, however it doesn't take a comparison function. If the objects are custom objects, you can override one or more of the <a href="http://docs.python.org/reference/datamodel.html#object.%5F%5Flt%5F%5F" rel="nofollow">rich comparison methods</a> to establish your desired sort order. Or you could store a 2-tuple with the sort key as the first item, and sort that instead.</p>
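<p>The 2-tuple idea can be sketched like this (the sort key here is simply the string length, chosen only for illustration):</p>

```python
import bisect

items = []  # kept sorted by the first tuple element (the sort key)
for word in ["pear", "fig", "banana"]:
    # Tuples compare element by element, so the key drives the ordering.
    bisect.insort(items, (len(word), word))

print([word for _, word in items])  # ['fig', 'pear', 'banana']
```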
| 5 | 2009-08-01T23:37:42Z | [
"python"
] |
is it possible to add an element to a list and preserve the order | 1,217,780 | <p>I would like to add an element to a list that preserves the order of the list.</p>
<p>Let's assume the list of objects is <em>[a, b, c, d]</em> and I have a function <em>cmp</em> that compares two elements of the list. If I add an object f which is the biggest, I would like it to be in the last position.</p>
<p>maybe it's better to sort the complete list...</p>
| 2 | 2009-08-01T23:34:04Z | 1,217,786 | <pre><code>L.insert(index, object) -- insert object before index
</code></pre>
| 0 | 2009-08-01T23:38:43Z | [
"python"
] |
is it possible to add an element to a list and preserve the order | 1,217,780 | <p>I would like to add an element to a list that preserves the order of the list.</p>
<p>Let's assume the list of objects is <em>[a, b, c, d]</em> and I have a function <em>cmp</em> that compares two elements of the list. If I add an object f which is the biggest, I would like it to be in the last position.</p>
<p>maybe it's better to sort the complete list...</p>
| 2 | 2009-08-01T23:34:04Z | 1,217,883 | <p><code>bisect.insort</code> is a little bit faster, where applicable, than append-then-sort (unless you have a few elements to add before you need the list to be sorted again) -- measuring as usual on my laptop (a speedier machine will of course be faster across the board, but the ratio should remain roughly constant):</p>
<pre><code>$ python -mtimeit -s'import random, bisect; x=range(20)' 'y=list(x); bisect.insort(y, 22*random.random())'
1000000 loops, best of 3: 1.99 usec per loop
</code></pre>
<p>vs</p>
<pre><code>$ python -mtimeit -s'import random, bisect; x=range(20)' 'y=list(x); y.append(22*random.random()); y.sort()'
100000 loops, best of 3: 2.78 usec per loop
</code></pre>
<p>How much you care about this difference, of course, depends on how critical a bottleneck this operation is for your application -- there are of course situations where even this fraction of a microsecond makes all the difference, though they are the exception, not the rule.</p>
<p>The <code>bisect</code> module is not as flexible and configurable -- you can easily pass your own custom comparator to sort (although if you can possibly put it in the form of a key= argument you're <em>strongly</em> advised to do that; in Python 3, only key= remains, cmp= is gone, because the performance just couldn't be made good), while bisect rigidly uses built-in comparisons (so you'd have to wrap your objects into wrappers implementing <code>__cmp__</code> or <code>__le__</code> to your liking, which also has important performance implications).</p>
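<p>Such a wrapper can be tiny; a sketch defining only <code>__lt__</code>, which is all the comparisons <code>bisect</code> performs need:</p>

```python
import bisect

class Keyed:
    """Wrap a value so bisect compares by an arbitrary sort key."""
    def __init__(self, value, key):
        self.value = value
        self.key = key
    def __lt__(self, other):
        # bisect.insort only ever uses <, so this one method suffices.
        return self.key < other.key

items = [Keyed("a", 1), Keyed("c", 3)]  # already sorted by key
bisect.insort(items, Keyed("b", 2))
print([i.value for i in items])  # ['a', 'b', 'c']
```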
<p>In your shoes, I'd start with the append-then-sort approach, and switch to the less-handy bisect approach only if profiling showed that the performance hit was material. Remember <a href="http://en.wikipedia.org/wiki/Optimization%5F%28computer%5Fscience%29#When%5Fto%5Foptimize" rel="nofollow">Knuth's</a> (and Hoare's) famous quote, and <a href="http://www.c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast" rel="nofollow">Kent Beck's</a> almost-as-famous one too!-)</p>
| 4 | 2009-08-02T00:45:27Z | [
"python"
] |
Reverting a Python/Cocoa project to use the default OSX 10.5 Python (2.5) | 1,217,781 | <p>I have installed the latest MacPython (2.6.2) on my Leopard OS X and started an XCode PyObjC project. </p>
<p>When I finalized the app, I built the release version and sent it to a friend of mine to see if it runs out of the box. It did not, because it expects the latest Python, as installed on my computer.</p>
<p>No matter what I tried, I could not find any config file, etc. where I could change this setting to expect the default Python that came with OS X.</p>
<p>Any and all help will be much appreciated.</p>
<p>Regards,
OA</p>
| 1 | 2009-08-01T23:34:07Z | 1,217,912 | <p>Uninstalling what you now have in /Library/Frameworks (so XCode falls back to the Python in /System/Library/Frameworks) would work but may be considered a bit drastic. <a href="http://forums.macrumors.com/showthread.php?t=377881#6" rel="nofollow">This post</a> and its followups have other potentially useful recommendations, the best one being in the followup at the very end -- you can edit the configurations in /Developer/Library/Xcode/Project Templates/Application/ to determine which Python version XCode projects will be using.</p>
| 1 | 2009-08-02T01:01:19Z | [
"python",
"cocoa",
"xcode",
"xcodebuild",
"pyobjc"
] |
PyOpenGL + Pygame capped to 60 FPS in Fullscreen | 1,217,939 | <p>I'm currently working on a game engine written in pygame and I wanted to add OpenGL support.</p>
<p>I wrote a test to see how to make pygame and OpenGL work together, and when it's running in windowed mode, it runs between 150 and 200 fps. When I run it full screen (all I did was add the FULLSCREEN flag when I set up the window), it drops down to 60 fps. I added a lot more drawing functions to see if it was just a huge performance drop, but it always ran at 60 fps.</p>
<p>Is there something extra I need to do to tell OpenGL that it's running fullscreen or is this a limitation of OpenGL?</p>
<p>(I am running in Windows XP)</p>
| 2 | 2009-08-02T01:22:31Z | 1,217,946 | <p>Is this a <a href="http://en.wikipedia.org/wiki/V-sync" rel="nofollow">V-Sync</a> issue? Something about the config or your environment may be limiting maximum frame rate to your monitor's refresh rate.</p>
| 1 | 2009-08-02T01:28:41Z | [
"python",
"fullscreen",
"pygame",
"pyopengl"
] |
PyOpenGL + Pygame capped to 60 FPS in Fullscreen | 1,217,939 | <p>I'm currently working on a game engine written in pygame and I wanted to add OpenGL support.</p>
<p>I wrote a test to see how to make pygame and OpenGL work together, and when it's running in windowed mode, it runs between 150 and 200 fps. When I run it full screen (all I did was add the FULLSCREEN flag when I set up the window), it drops down to 60 fps. I added a lot more drawing functions to see if it was just a huge performance drop, but it always ran at 60 fps.</p>
<p>Is there something extra I need to do to tell OpenGL that it's running fullscreen or is this a limitation of OpenGL?</p>
<p>(I am running in Windows XP)</p>
| 2 | 2009-08-02T01:22:31Z | 1,218,011 | <p>If you are not changing your clock.tick() when you change between full screen and windowed mode this is almost certainly a vsync issue. If you are on an LCD then it's 100% certain.</p>
<p>Unfortunately v-sync can be handled in many places including SDL, Pyopengl, your display server and your video drivers. If you are using windows you can adjust the vsync toggle in the nvidia control panel to test, and there's more than likely something in nvidia-settings for linux as well. I'd guess other manufacturers drivers have similar settings but that's a guess. </p>
| 0 | 2009-08-02T02:18:11Z | [
"python",
"fullscreen",
"pygame",
"pyopengl"
] |
PyOpenGL + Pygame capped to 60 FPS in Fullscreen | 1,217,939 | <p>I'm currently working on a game engine written in pygame and I wanted to add OpenGL support.</p>
<p>I wrote a test to see how to make pygame and OpenGL work together, and when it's running in windowed mode, it runs between 150 and 200 fps. When I run it full screen (all I did was add the FULLSCREEN flag when I set up the window), it drops down to 60 fps. I added a lot more drawing functions to see if it was just a huge performance drop, but it always ran at 60 fps.</p>
<p>Is there something extra I need to do to tell OpenGL that it's running fullscreen or is this a limitation of OpenGL?</p>
<p>(I am running in Windows XP)</p>
| 2 | 2009-08-02T01:22:31Z | 1,218,102 | <p>As frou pointed out, this would be due to Pygame waiting for the vertical retrace when you update the screen by calling <code>display.flip()</code>. As the <a href="http://www.pygame.org/docs/ref/display.html#pygame.display.flip">Pygame <code>display</code> documentation</a> notes, if you set the display mode using the <code>HWSURFACE</code> or the <code>DOUBLEBUF</code> flags, <code>display.flip()</code> will wait for the vertical retrace before swapping buffers.</p>
<p>To be honest, I don't see any good reason (aside from benchmarking) to try to achieve a frame rate that's faster than the screen's refresh rate. You (and the people playing your game) won't be able to notice any difference in speed or performance, since the display can only draw 60 fps anyways. Plus, if you don't sync with the vertical retrace, there's a good chance that you'll get <a href="http://en.wikipedia.org/wiki/Screen%5Ftearing">screen tearing</a>.</p>
| 7 | 2009-08-02T03:46:05Z | [
"python",
"fullscreen",
"pygame",
"pyopengl"
] |
How do I forbid easy_install from zipping eggs? | 1,218,058 | <p>What must I put into <code>distutils.cfg</code> to prevent easy_install from ever installing a zipped egg? The compression is a nice thought, but I like to be able to grep through and debug that code.</p>
<p>I pulled in some dependencies with <code>python setup.py develop .</code> A closer look reveals that it also accepts the --always-unzip flag. It would just be nice to set that as the default.</p>
| 11 | 2009-08-02T03:07:06Z | 1,218,060 | <p>One solution?</p>
<pre><code>easy_install pip
rm easy_install
ln -s easy_install pip
</code></pre>
| 0 | 2009-08-02T03:08:49Z | [
"python",
"setuptools",
"easy-install"
] |
How do I forbid easy_install from zipping eggs? | 1,218,058 | <p>What must I put into <code>distutils.cfg</code> to prevent easy_install from ever installing a zipped egg? The compression is a nice thought, but I like to be able to grep through and debug that code.</p>
<p>I pulled in some dependencies with <code>python setup.py develop .</code> A closer look reveals that it also accepts the --always-unzip flag. It would just be nice to set that as the default.</p>
| 11 | 2009-08-02T03:07:06Z | 1,218,270 | <p>I doubt there is a setting in distutils.cfg for this, as easy_install is not a part of distutils. But run easy_install like this:</p>
<pre><code>easy_install --always-unzip
</code></pre>
<p>and it should solve the problem.</p>
| 3 | 2009-08-02T06:08:20Z | [
"python",
"setuptools",
"easy-install"
] |
How do I forbid easy_install from zipping eggs? | 1,218,058 | <p>What must I put into <code>distutils.cfg</code> to prevent easy_install from ever installing a zipped egg? The compression is a nice thought, but I like to be able to grep through and debug that code.</p>
<p>I pulled in some dependencies with <code>python setup.py develop .</code> A closer look reveals that it also accepts the --always-unzip flag. It would just be nice to set that as the default.</p>
| 11 | 2009-08-02T03:07:06Z | 1,218,426 | <p>the option is <code>zip_ok</code>, so put the following in your distutils.cfg:</p>
<pre><code>[easy_install]
# i don't like having zipped files.
zip_ok = 0
</code></pre>
| 14 | 2009-08-02T08:25:36Z | [
"python",
"setuptools",
"easy-install"
] |
How do I forbid easy_install from zipping eggs? | 1,218,058 | <p>What must I put into <code>distutils.cfg</code> to prevent easy_install from ever installing a zipped egg? The compression is a nice thought, but I like to be able to grep through and debug that code.</p>
<p>I pulled in some dependencies with <code>python setup.py develop .</code> A closer look reveals that it also accepts the --always-unzip flag. It would just be nice to set that as the default.</p>
| 11 | 2009-08-02T03:07:06Z | 17,781,086 | <p>I had the issue using buildout, and solved it by adding:</p>
<pre><code>[buildout]
unzip = true
</code></pre>
<p>in the <code>buildout.cfg</code> file</p>
| 0 | 2013-07-22T06:19:07Z | [
"python",
"setuptools",
"easy-install"
] |
In erlang: How do I expand wxNotebook in a panel? | 1,218,433 | <p>(I have tagged this question as Python as well since I understand Python code so examples in Python are also welcome!).</p>
<p>I want to create a simple window in wxWidgets:<br />
I create a main panel which I add to a form<br />
I associate a boxsizer to the main panel (splitting it in two, horizontally).<br />
I add LeftPanel to the boxsizer,<br />
I add RightPanel to the boxsizer,<br />
I create a new boxsizer (vertical)<br />
I create another boxsizer (horizontal)<br />
I create a Notebook widget<br />
I create a Panel and put it inside the Notebook (addpage)<br />
I add the notebook to the new boxsizer (vertical one)<br />
I add the vertical sizer in the horizontal one<br />
I associate the horizontal sizer to the RightPanel<br />
I add the Left and Right panel to the main sizer.<br /></p>
<p>This doesn't work...</p>
<p>Maybe I have missed something (mental block about sizers) but what I would <em>like</em> to do is to expand the notebook widget without the use of the vertical sizer inside the horizontal one (it doesn't work anyway).</p>
<p>So my question is. Assuming I want to expand the Notebook widget inside the RightPanel to take up the rest of the right side area of the form, how would I go about doing that?</p>
<p>For those that understand Erlang, This is what I have so far:</p>
<pre><code>mainwindow() ->
%% Create new environment
X = wx:new(),
%% Create the main frame
MainFrame = wxFrame:new(X, -1, "Test"),
MainPanel = wxPanel:new(MainFrame, [{winid, ?wxID_ANY}]),
MainSizer = wxBoxSizer:new(?wxHORIZONTAL),
wxWindow:setSizer(MainPanel, MainSizer),
%% Left Panel...
LeftPanel = wxPanel:new(MainPanel, [{winid, ?wxID_ANY}]),
LeftPanelSizer = wxBoxSizer:new(?wxVERTICAL),
wxWindow:setSizer(LeftPanel, LeftPanelSizer),
wxWindow:setMinSize(LeftPanel, {152, -1}),
%% Right Panel
RightPanel = wxPanel:new(MainPanel, [{winid, ?wxID_ANY}]),
RightPanelVerticalSizer = wxBoxSizer:new(?wxVERTICAL),
RightPanelHorizontalSizer = wxBoxSizer:new(?wxHORIZONTAL),
wxWindow:setBackgroundColour(RightPanel, {255,0,0}),
Notebook = wxNotebook:new(RightPanel, ?wxID_ANY, [{size,{-1,-1}}]),
TestPanel1 = wxPanel:new(Notebook, [{size,{-1,-1}},{winid, ?wxID_ANY}]),
wxNotebook:addPage(Notebook, TestPanel1, "Testpanel!"),
TestPanel2 = wxPanel:new(Notebook, [{size,{-1,-1}},{winid, ?wxID_ANY}]),
wxNotebook:addPage(Notebook, TestPanel2, "Testpanel!"),
wxSizer:add(RightPanelVerticalSizer, Notebook, [{border,0},{proportion,1}, {flag,?wxEXPAND}]),
wxSizer:add(RightPanelHorizontalSizer, RightPanelVerticalSizer, [{proportion,1}, {flag,?wxEXPAND}]),
wxWindow:setSizer(RightPanel, RightPanelHorizontalSizer),
%% Main Sizer
wxSizer:add(MainSizer, LeftPanel, [{border, 2}, {flag,?wxEXPAND bor ?wxALL}]),
wxSizer:add(MainSizer, RightPanel, [{border, 2}, {flag,?wxEXPAND bor ?wxTOP bor ?wxRIGHT bor ?wxBOTTOM}]),
%% Connect to events
wxFrame:connect(MainFrame, close_window),
wxWindow:center(MainFrame),
wxWindow:show(MainFrame),
...
</code></pre>
| 8 | 2009-08-02T08:28:41Z | 1,218,627 | <p>I'm closing this question (as soon as I can) after I figured out what I needed to do.</p>
<p>Basically I changed the proportion of the add command to the main panel to 1 (this will expand the whole thing)</p>
<p>New code:</p>
<pre><code> %% Main Sizer
wxSizer:add(MainSizer, LeftPanel, [{proportion,0},{border, 2}, {flag,?wxEXPAND bor ?wxALL}]),
wxSizer:add(MainSizer, RightPanel, [{proportion,1},{border, 2}, {flag,?wxEXPAND bor ?wxTOP bor ?wxRIGHT bor ?wxBOTTOM}]),
</code></pre>
| 4 | 2009-08-02T10:30:42Z | [
"python",
"layout",
"erlang",
"wxwidgets"
] |
Python string templater | 1,218,457 | <p>I'm using this REST web service, which returns various templated strings as urls, for example:</p>
<pre><code>"http://api.app.com/{foo}"
</code></pre>
<p>In Ruby, I can then use </p>
<pre><code>url = Addressable::Template.new("http://api.app.com/{foo}").expand('foo' => 'bar')
</code></pre>
<p>to get</p>
<pre><code>"http://api.app.com/bar"
</code></pre>
<p>Is there any way to do this in Python? I know about %() templates, but obviously they're not working here.</p>
| 3 | 2009-08-02T08:46:50Z | 1,218,577 | <p>In python 2.6 you can do this if you need exactly that syntax</p>
<pre><code>from string import Formatter
f = Formatter()
f.format("http://api.app.com/{foo}", foo="bar")
</code></pre>
<p>If you need to use an earlier python version then you can either copy the 2.6 formatter class or hand roll a parser/regex to do it.</p>
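<p>On 2.6+ the same substitution is also available directly from the <code>format()</code> method of strings, without instantiating a <code>Formatter</code>:</p>

```python
# str.format handles the {foo} placeholder syntax natively.
url = "http://api.app.com/{foo}".format(foo="bar")
print(url)  # http://api.app.com/bar
```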
| 4 | 2009-08-02T10:03:52Z | [
"python",
"ruby",
"templates",
"string"
] |
Python string templater | 1,218,457 | <p>I'm using this REST web service, which returns various templated strings as urls, for example:</p>
<pre><code>"http://api.app.com/{foo}"
</code></pre>
<p>In Ruby, I can then use </p>
<pre><code>url = Addressable::Template.new("http://api.app.com/{foo}").expand('foo' => 'bar')
</code></pre>
<p>to get</p>
<pre><code>"http://api.app.com/bar"
</code></pre>
<p>Is there any way to do this in Python? I know about %() templates, but obviously they're not working here.</p>
| 3 | 2009-08-02T08:46:50Z | 1,218,944 | <p>I cannot give you a perfect solution but you could try using <code>string.Template</code>.
You can either pre-process your incoming URL and then use <code>string.Template</code> directly, like</p>
<pre><code>In [6]: url="http://api.app.com/{foo}"
In [7]: up=string.Template(re.sub("{", "${", url))
In [8]: up.substitute({"foo":"bar"})
Out[8]: 'http://api.app.com/bar'
</code></pre>
<p>taking advantage of the default "${...}" syntax for replacement identifiers. Or you subclass <code>string.Template</code> to control the identifier pattern, like</p>
<pre><code>class MyTemplate(string.Template):
delimiter = ...
pattern = ...
</code></pre>
<p>but I haven't figured that out.</p>
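<p>For what it's worth, here is one way that subclass might look. This is a sketch that overrides <code>pattern</code> wholesale; the extra named groups are required by <code>string.Template</code>'s machinery but are deliberately made unmatchable here:</p>

```python
import string

class CurlyTemplate(string.Template):
    # Recognise bare {name} instead of $name / ${name}. Template's
    # metaclass compiles this pattern with re.IGNORECASE | re.VERBOSE,
    # and the escaped/braced/invalid groups can never match.
    pattern = r"""
        \{(?P<named>[_a-z][_a-z0-9]*)\}   # {foo}
      | (?P<escaped>\b\B)                 # never matches
      | (?P<braced>\b\B)                  # never matches
      | (?P<invalid>\b\B)                 # never matches
    """

url = CurlyTemplate("http://api.app.com/{foo}").substitute(foo="bar")
print(url)
# http://api.app.com/bar
```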
| 0 | 2009-08-02T13:54:13Z | [
"python",
"ruby",
"templates",
"string"
] |
Python string templater | 1,218,457 | <p>I'm using this REST web service, which returns various templated strings as urls, for example:</p>
<pre><code>"http://api.app.com/{foo}"
</code></pre>
<p>In Ruby, I can then use </p>
<pre><code>url = Addressable::Template.new("http://api.app.com/{foo}").expand('foo' => 'bar')
</code></pre>
<p>to get</p>
<pre><code>"http://api.app.com/bar"
</code></pre>
<p>Is there any way to do this in Python? I know about %() templates, but obviously they're not working here.</p>
| 3 | 2009-08-02T08:46:50Z | 1,223,406 | <p>Don't use a quick hack.</p>
<p>What is used there (and implemented by Addressable) are <a href="http://bitworking.org/projects/URI-Templates/spec/draft-gregorio-uritemplate-03.html" rel="nofollow">URI Templates</a>. There seem to be several libs for this in Python, for example: <a href="http://code.google.com/p/uri-templates/" rel="nofollow">uri-templates</a>. <a href="http://github.com/asplake/described%5Froutes-py/tree/master" rel="nofollow">described_routes_py</a> also has a parser for them.</p>
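<p>For illustration only, here is a minimal hand-rolled expansion covering just the simple <code>{var}</code> case; real URI Template libraries implement the full spec, including operators and the proper encoding rules:</p>

```python
import re
from urllib.parse import quote  # urllib.quote on Python 2

def expand_uri(template, variables):
    # Rough sketch of "level 1" URI Template expansion only:
    # simple {var} substitution with percent-encoding of the value.
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(str(variables[m.group(1)]), safe=""),
                  template)

print(expand_uri("http://api.app.com/{foo}", {"foo": "a b"}))
# http://api.app.com/a%20b
```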
| 2 | 2009-08-03T16:51:09Z | [
"python",
"ruby",
"templates",
"string"
] |
python's sum() and non-integer values | 1,218,710 | <p>Is there a simple and quick way to use sum() with non-integer values?</p>
<p>So I can use it like this:</p>
<pre><code>class Foo(object):
    def __init__(self,bar):
self.bar=bar
mylist=[Foo(3),Foo(34),Foo(63),200]
result=sum(mylist) # result should be 300
</code></pre>
<p>I tried overriding <code>__add__</code> and <code>__int__</code>, etc., but I haven't found a solution yet.</p>
<p><strong>EDIT:</strong></p>
<p>The solution is to implement:</p>
<pre><code> def __radd__(self, other):
return other + self.bar
</code></pre>
<p>as Will suggested in his post. As always, all roads lead to Rome, but I think this is the best solution, since I don't need <code>__add__</code> in my class.</p>
| 15 | 2009-08-02T11:27:41Z | 1,218,724 | <p>Try:</p>
<pre><code>import operator
result=reduce(operator.add, mylist)
</code></pre>
<p>sum() probably works faster, but it is specialized for built-in numbers only. Of course, you still have to provide a method to add your Foo() objects. So, a full example:</p>
<pre><code>class Foo(object):
def __init__(self, i): self.i = i
def __add__(self, other):
if isinstance(other, int):
return Foo(self.i + other)
return Foo(self.i + other.i)
def __radd__(self, other):
return self.__add__(other)
import operator
mylist = [Foo(42), Foo(36), Foo(12), 177, Foo(11)]
print reduce(operator.add, mylist).i
</code></pre>
| 4 | 2009-08-02T11:36:24Z | [
"python",
"list",
"sum"
] |
python's sum() and non-integer values | 1,218,710 | <p>Is there a simple and quick way to use sum() with non-integer values?</p>
<p>So I can use it like this:</p>
<pre><code>class Foo(object):
    def __init__(self,bar):
self.bar=bar
mylist=[Foo(3),Foo(34),Foo(63),200]
result=sum(mylist) # result should be 300
</code></pre>
<p>I tried overriding <code>__add__</code> and <code>__int__</code>, etc., but I haven't found a solution yet.</p>
<p><strong>EDIT:</strong></p>
<p>The solution is to implement:</p>
<pre><code> def __radd__(self, other):
return other + self.bar
</code></pre>
<p>as Will suggested in his post. As always, all roads lead to Rome, but I think this is the best solution, since I don't need <code>__add__</code> in my class.</p>
| 15 | 2009-08-02T11:27:41Z | 1,218,732 | <p>Try using the <code>__int__</code> method and then mapping each element in your list to the <code>int</code> function to get the values out:</p>
<pre><code>class Foo(object):
def __init__(self,bar):
self.bar = bar
def __int__(self):
return self.bar
mylist = [Foo(3),Foo(34),Foo(63),200]
result = sum(map(int,mylist))
print(result)
</code></pre>
| 3 | 2009-08-02T11:40:48Z | [
"python",
"list",
"sum"
] |
python's sum() and non-integer values | 1,218,710 | <p>Is there a simple and quick way to use sum() with non-integer values?</p>
<p>So I can use it like this:</p>
<pre><code>class Foo(object):
    def __init__(self,bar):
self.bar=bar
mylist=[Foo(3),Foo(34),Foo(63),200]
result=sum(mylist) # result should be 300
</code></pre>
<p>I tried overriding <code>__add__</code> and <code>__int__</code>, etc., but I haven't found a solution yet.</p>
<p><strong>EDIT:</strong></p>
<p>The solution is to implement:</p>
<pre><code> def __radd__(self, other):
return other + self.bar
</code></pre>
<p>as Will suggested in his post. As always, all roads lead to Rome, but I think this is the best solution, since I don't need <code>__add__</code> in my class.</p>
| 15 | 2009-08-02T11:27:41Z | 1,218,733 | <p>Or if you don't want to import anything,</p>
<pre><code>result = reduce(lambda x, y: x + y, mylist)
</code></pre>
<p>Another small advantage is that <code>reduce</code> doesn't start from the integer <code>0</code> the way <code>sum()</code> does, so you don't necessarily have to declare a <code>__radd__</code> method on your Foo objects; you do still need <code>__add__</code> if the list can contain more than one Foo. (But it probably wouldn't hurt to define both for future flexibility.)</p>
| 3 | 2009-08-02T11:41:55Z | [
"python",
"list",
"sum"
] |
python's sum() and non-integer values | 1,218,710 | <p>Is there a simple and quick way to use sum() with non-integer values?</p>
<p>So I can use it like this:</p>
<pre><code>class Foo(object):
    def __init__(self,bar):
self.bar=bar
mylist=[Foo(3),Foo(34),Foo(63),200]
result=sum(mylist) # result should be 300
</code></pre>
<p>I tried overriding <code>__add__</code> and <code>__int__</code>, etc., but I haven't found a solution yet.</p>
<p><strong>EDIT:</strong></p>
<p>The solution is to implement:</p>
<pre><code> def __radd__(self, other):
return other + self.bar
</code></pre>
<p>as Will suggested in his post. As always, all roads lead to Rome, but I think this is the best solution, since I don't need <code>__add__</code> in my class.</p>
| 15 | 2009-08-02T11:27:41Z | 1,218,735 | <p>You may also need to implement the <code>__radd__</code> function, which represents "reverse add" and is called when the arguments can't be resolved in the "forward" direction. For example, <code>x + y</code> is evaluated as <code>x.__add__(y)</code> if possible, but if that doesn't exist then Python tries <code>y.__radd__(x)</code>.</p>
<p>Since the <code>sum()</code> function starts with the integer <code>0</code>, the first thing it does is try to evaluate:</p>
<pre><code>0 + Foo(3)
</code></pre>
<p>which will require that you implement <code>Foo.__radd__</code>.</p>
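<p>A minimal runnable sketch of this, reusing the question's <code>Foo</code>:</p>

```python
class Foo(object):
    def __init__(self, bar):
        self.bar = bar
    def __radd__(self, other):
        # Called for `other + self` when other.__add__ gives up,
        # e.g. for the 0 + Foo(3) that sum() starts with.
        return other + self.bar

print(sum([Foo(3), Foo(34), Foo(63), 200]))
# 300
```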
| 13 | 2009-08-02T11:42:10Z | [
"python",
"list",
"sum"
] |
python's sum() and non-integer values | 1,218,710 | <p>Is there a simple and quick way to use sum() with non-integer values?</p>
<p>So I can use it like this:</p>
<pre><code>class Foo(object):
    def __init__(self,bar):
self.bar=bar
mylist=[Foo(3),Foo(34),Foo(63),200]
result=sum(mylist) # result should be 300
</code></pre>
<p>I tried overriding <code>__add__</code> and <code>__int__</code>, etc., but I haven't found a solution yet.</p>
<p><strong>EDIT:</strong></p>
<p>The solution is to implement:</p>
<pre><code> def __radd__(self, other):
return other + self.bar
</code></pre>
<p>as Will suggested in his post. As always, all roads lead to Rome, but I think this is the best solution, since I don't need <code>__add__</code> in my class.</p>
| 15 | 2009-08-02T11:27:41Z | 1,218,742 | <p>It's a bit tricky: the sum() function takes the start value, adds it to the first element, then adds that result to the next, and so on.</p>
<p>You need to implement the <code>__radd__</code> method:</p>
<pre><code>class T:
def __init__(self,x):
self.x = x
def __radd__(self, other):
return other + self.x
test = (T(1),T(2),T(3),200)
print sum(test)
</code></pre>
| 18 | 2009-08-02T11:45:03Z | [
"python",
"list",
"sum"
] |
Multiprocessing Debugging error | 1,218,757 | <p>Hey everyone, I am having a little trouble debugging my code. Please look below: </p>
<pre><code>import globalFunc
from globalFunc import systemPrint
from globalFunc import out
from globalFunc import debug
import math
import time
import multiprocessing
"""
Somehow this is not working well
"""
class urlServerM( multiprocessing.Process):
"""
This calculates how much links get put into the priority queue
so to reach the level that we intend, for every query resultset,
we will put the a certain number of links into visitNext first,
and even if every resultSet is full, we will be able to achieve the link
level that we intended. The rest is pushed into another list where
if the first set of lists don't have max for every time, the remaining will
be spared on these links
"""
def getPriorityCounter(self, level, constraint):
return int( math.exp( ( math.log(constraint) / (level - 1) ) ) )
def __init__( self, level, constraint, urlQ):
"""limit is obtained via ngCrawler.getPriorityNum"""
multiprocessing.Process.__init__(self)
self.constraint = int( constraint)
self.limit = self.getPriorityCounter( level, self.constraint)
self.visitNext = []
self.visitLater = []
self._count = 0
self.urlQ = urlQ
"""
puts the next into the Queue
"""
def putNextIntoQ(self):
debug('putNextIntoQ', str(self.visitNext) + str(self.visitLater) )
if self.visitNext != []:
_tmp = self.visitNext[0]
self.visitNext.remove(_tmp)
self.urlQ.put(_tmp)
elif self.visitLater != []:
_tmp = self.visitLater[0]
self.visitLater.remove(_tmp)
self.urlQ.put(_tmp)
def run(self):
while True:
if self.hasNext():
time.sleep(0.5)
self.putNextIntoQ()
debug('process', 'put something in Q already')
else:
out('process', 'Nothing in visitNext or visitLater, sleeping')
time.sleep(2)
return
def hasNext(self):
debug( 'hasnext', str(self.visitNext) + str(self.visitLater) )
if self.visitNext != []:
return True
elif self.visitLater != []:
return True
return False
"""
This function resets the counter
which is used to keep track of how much is already inside the
visitNext vs visitLater
"""
def reset(self):
self._count = 0
def store(self, linkS):
"""Stores a link into one of these list"""
if self._count < self.limit:
self.visitNext.append( linkS)
debug('put', 'something is put inside visitNext')
else:
self.visitLater.append( linkS)
debug('put', 'something is put inside visitLater')
self._count += 1
if __name__ == "__main__":
# def __init__( self, level, constraint, urlQ):
from multiprocessing import Queue
q = Queue(3)
us = urlServerM( 3, 6000, q)
us.start()
time.sleep(2)
# only one thread will do this
us.store('http://www.google.com')
debug('put', 'put completed')
time.sleep(3)
print q.get_nowait()
time.sleep(3)
</code></pre>
<p>And this is the output</p>
<pre><code>OUTPUT
DEBUG hasnext: [][]
[process] Nothing in visitNext or visitLater, sleeping
DEBUG put: something is put inside visitNext
DEBUG put: put completed
DEBUG hasnext: [][]
[process] Nothing in visitNext or visitLater, sleeping
DEBUG hasnext: [][]
[process] Nothing in visitNext or visitLater, sleeping
Traceback (most recent call last):
File "urlServerM.py", line 112, in <module>
print q.get_nowait()
File "/usr/lib/python2.6/multiprocessing/queues.py", line 122, in get_nowait
return self.get(False)
File "/usr/lib/python2.6/multiprocessing/queues.py", line 104, in get
raise Empty
Queue.Empty
DEBUG hasnext: [][]
</code></pre>
<p>I find this really weird. Basically, when tested in main(), this code starts the process, stores <a href="http://www.google.com" rel="nofollow">http://www.google.com</a> into the class's visitNext, and then I just want to see it being pushed into the Queue. </p>
<p>However, according to the output,
even though my class has finished storing a URL, hasNext doesn't show anything. Anybody know why? Is a continual while loop the best way to write run(), and is it actually necessary? I am basically experimenting with the multiprocessing module, and I have a pool of workers (from multiprocessing.Pool) which need to obtain these URLs from this class (a single point of entry). Is using a queue the best way? Do I need to make this a "live" process? Every worker is asking from the Queue, and unless I have a way to signal my urlServer to put something into the Queue, I cannot think of a less troublesome way.</p>
| 1 | 2009-08-02T11:56:40Z | 1,219,742 | <p>You're using multiprocessing, so the memory is not shared between main execution and your urlserver.</p>
<p>I.e. I think this is effectively a no-op: <code>us.store('http://www.google.com')</code>, because when it's executed in the main process, it modifies only the main process's representation of <code>us</code>. You can confirm that the URL is only in the main process's memory by calling <code>us.hasNext()</code> before <code>q.get_nowait()</code>.</p>
<p>To make it work, you'll have to turn all the lists you want to share into Queues or Pipes. Alternatively, you can change your model to <code>threading</code> and it should work without changes (more or less: you'll have to lock around the visit lists, and you run into GIL issues again).</p>
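<p>A stripped-down sketch of the Queue approach (the function names here are invented, and the explicit <code>fork</code> start method assumes a Unix-like OS):</p>

```python
import multiprocessing

def producer(q):
    # Runs in the child process: values put here travel back to the
    # parent through the queue, unlike plain attribute assignments,
    # which stay in the child's copy of the object.
    q.put("http://www.google.com")

def demo():
    # "fork" keeps this sketch self-contained on Unix; spawn-based
    # platforms need the functions in an importable module.
    ctx = multiprocessing.get_context("fork")
    q = ctx.Queue()
    p = ctx.Process(target=producer, args=(q,))
    p.start()
    url = q.get(timeout=5)
    p.join()
    return url

if __name__ == "__main__":
    print(demo())  # http://www.google.com
```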
<p>(And yeah - please edit your questions better next time. I knew what might be your problem as soon as I saw "multiprocessing", but otherwise I wouldn't bother looking at the code at all...)</p>
| 0 | 2009-08-02T20:18:11Z | [
"python"
] |
Segment a list in Python | 1,218,793 | <p>I am looking for a built-in Python function (or mechanism) to segment a list into segments of a required length (without mutating the input list). Here is the code I already have:</p>
<pre><code>>>> def split_list(list, seg_length):
... inlist = list[:]
... outlist = []
...
... while inlist:
... outlist.append(inlist[0:seg_length])
... inlist[0:seg_length] = []
...
... return outlist
...
>>> alist = range(10)
>>> split_list(alist, 3)
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
</code></pre>
| 8 | 2009-08-02T12:14:15Z | 1,218,810 | <p>You can use list comprehension:</p>
<pre><code>>>> seg_length = 3
>>> a = range(10)
>>> [a[x:x+seg_length] for x in range(0,len(a),seg_length)]
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
</code></pre>
| 15 | 2009-08-02T12:26:09Z | [
"python",
"list",
"segments"
] |
Segment a list in Python | 1,218,793 | <p>I am looking for a built-in Python function (or mechanism) to segment a list into segments of a required length (without mutating the input list). Here is the code I already have:</p>
<pre><code>>>> def split_list(list, seg_length):
... inlist = list[:]
... outlist = []
...
... while inlist:
... outlist.append(inlist[0:seg_length])
... inlist[0:seg_length] = []
...
... return outlist
...
>>> alist = range(10)
>>> split_list(alist, 3)
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
</code></pre>
| 8 | 2009-08-02T12:14:15Z | 1,218,828 | <p>Not the same output, but I still think the <a href="http://docs.python.org/library/itertools.html" rel="nofollow">grouper function</a> is helpful:</p>
<pre><code>from itertools import izip_longest
def grouper(iterable, n, fillvalue=None):
args = [iter(iterable)] * n
return izip_longest(*args, fillvalue=fillvalue)
</code></pre>
<p>for Python2.4 and 2.5 that does not have izip_longest:</p>
<pre><code>from itertools import izip, chain, repeat
def grouper(iterable, n, padvalue=None):
return izip(*[chain(iterable, repeat(padvalue, n-1))]*n)
</code></pre>
<p>some demo code and output:</p>
<pre><code>alist = range(10)
print list(grouper(alist, 3))
</code></pre>
<p>output:</p>
<pre><code>[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, None, None)]
</code></pre>
| 2 | 2009-08-02T12:38:36Z | [
"python",
"list",
"segments"
] |
Segment a list in Python | 1,218,793 | <p>I am looking for a built-in Python function (or mechanism) to segment a list into segments of a required length (without mutating the input list). Here is the code I already have:</p>
<pre><code>>>> def split_list(list, seg_length):
... inlist = list[:]
... outlist = []
...
... while inlist:
... outlist.append(inlist[0:seg_length])
... inlist[0:seg_length] = []
...
... return outlist
...
>>> alist = range(10)
>>> split_list(alist, 3)
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
</code></pre>
| 8 | 2009-08-02T12:14:15Z | 1,218,844 | <p>How do you need to use the output? If you only need to iterate over it, you are better off creating an iterable, one that yields your groups:</p>
<pre><code>def split_by(sequence, length):
iterable = iter(sequence)
def yield_length():
for i in xrange(length):
yield iterable.next()
while True:
res = list(yield_length())
if not res:
raise StopIteration
yield res
</code></pre>
<p>Usage example:</p>
<pre><code>>>> alist = range(10)
>>> list(split_by(alist, 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
</code></pre>
<p>This uses far less memory than trying to construct the whole list in memory at once, if you are only looping over the result, because it only constructs one subset at a time:</p>
<pre><code>>>> for subset in split_by(alist, 3):
... print subset
...
[0, 1, 2]
[3, 4, 5]
[6, 7, 8]
[9]
</code></pre>
| 3 | 2009-08-02T12:51:07Z | [
"python",
"list",
"segments"
] |
Multiple versions of Python on OS X Leopard | 1,218,891 | <p>I currently have multiple versions of Python installed on my Mac: the one that came with it, a version I recently downloaded from python.org, an older version used to run Zope locally, and another version that App Engine is using. It's kind of a mess. Any recommendations for using one version of Python to rule them all? How would I go about deleting older versions and linking all of my apps to a single install? Any Mac-specific gotchas I should know about? Is this a dumb idea?</p>
| 21 | 2009-08-02T13:23:32Z | 1,219,166 | <p>There's nothing inherently wrong with having multiple versions of Python around. Sometimes it's a necessity when using applications with version dependencies. Probably the biggest issue is dealing with site-package dependencies which may vary from app to app. Tools like <a href="http://pypi.python.org/pypi/virtualenv"><code>virtualenv</code></a> can help there. One thing you should <strong>not</strong> do is attempt to remove the Apple-supplied Python in /System/Library/Frameworks and linked to from /usr/bin/python. (Note the recent discussion of multiple versions <a href="http://stackoverflow.com/questions/1213690/what-is-the-most-compatible-way-to-install-python-modules-on-a-mac/">here</a>.)</p>
| 20 | 2009-08-02T16:09:49Z | [
"python",
"osx",
"osx-leopard",
"zope"
] |
Multiple versions of Python on OS X Leopard | 1,218,891 | <p>I currently have multiple versions of Python installed on my Mac: the one that came with it, a version I recently downloaded from python.org, an older version used to run Zope locally, and another version that App Engine is using. It's kind of a mess. Any recommendations for using one version of Python to rule them all? How would I go about deleting older versions and linking all of my apps to a single install? Any Mac-specific gotchas I should know about? Is this a dumb idea?</p>
| 21 | 2009-08-02T13:23:32Z | 1,219,303 | <p>The approach I prefer which should work on every UNIX-like operating system:</p>
<p>Create a user account for each application that needs a specific Python version. In each user account, install the corresponding Python version with a user-local prefix (like ~/build/python) and add ~/build/bin/ to that user's PATH environment variable. Then install/use each Python application under its corresponding user.</p>
<p>The advantage of this approach is the perfect isolation between the individual python installations and relatively convenient selection of the correct python environment (just <code>su</code> to the appropriate user). Also the operating system remains untouched.</p>
| 1 | 2009-08-02T17:13:27Z | [
"python",
"osx",
"osx-leopard",
"zope"
] |
Multiple versions of Python on OS X Leopard | 1,218,891 | <p>I currently have multiple versions of Python installed on my Mac: the one that came with it, a version I recently downloaded from python.org, an older version used to run Zope locally, and another version that App Engine is using. It's kind of a mess. Any recommendations for using one version of Python to rule them all? How would I go about deleting older versions and linking all of my apps to a single install? Any Mac-specific gotchas I should know about? Is this a dumb idea?</p>
| 21 | 2009-08-02T13:23:32Z | 1,219,486 | <p>Ian Bicking's <a href="http://pypi.python.org/pypi/virtualenv">virtualenv</a> allows me to have isolated Pythons for each application I build, and lets me decide whether or not to include the global site-packages in the isolated Python environment.</p>
<p>I haven't tried it with Zope, but I'm guessing that the following should work nicely:</p>
<ol>
<li>Using your Zope's Python, make a new virtualenv, either with or without --no-site-packages</li>
<li>Drop your Zope into the virtualenv</li>
<li>Activate the environment with $VENV/bin/activate</li>
<li>Install any needed site-packages</li>
<li>Run your Zope using the Python now at $VENV/bin/python</li>
</ol>
<p>This has worked brilliantly for managing Django projects with various versions of Python, Django, and add-ons.</p>
<p><a href="http://grok.zope.org/documentation/how-to/using-virtualenv-for-a-clean-grok-installation">This article</a> seems to go into more detail on the specifics of Grok and Virtualenv, but the generalities should apply to Zope as welll.</p>
| 9 | 2009-08-02T18:27:39Z | [
"python",
"osx",
"osx-leopard",
"zope"
] |
Multiple versions of Python on OS X Leopard | 1,218,891 | <p>I currently have multiple versions of Python installed on my Mac: the one that came with it, a version I recently downloaded from python.org, an older version used to run Zope locally, and another version that App Engine is using. It's kind of a mess. Any recommendations for using one version of Python to rule them all? How would I go about deleting older versions and linking all of my apps to a single install? Any Mac-specific gotchas I should know about? Is this a dumb idea?</p>
| 21 | 2009-08-02T13:23:32Z | 1,321,011 | <p>+1 for virtualenv. </p>
<p>Even if you don't need different Python versions, it's still good to keep your development dependencies seperate from your system Python.</p>
<p>I'm not sure what OS you are using, but I find <a href="http://www.stereoplex.com/blog/creating-a-python-2-4-plone-and-zope-development-e" rel="nofollow">these</a> instructions very useful for getting python development environments running on OSX.</p>
| 2 | 2009-08-24T07:57:54Z | [
"python",
"osx",
"osx-leopard",
"zope"
] |
How to get the biggest numbers out from huge amount of numbers? | 1,218,922 | <p>I'd like to get the largest 100 elements out from a list of at least 100000000 numbers.</p>
<p>I could sort the entire list and just take the last 100 elements from the sorted list, but that would be very expensive in terms of both memory and time.</p>
<p>Is there any existing easy, pythonic way of doing this?</p>
<p>What I want is the following function instead of a pure sort; I don't want to waste time sorting the elements I don't care about.</p>
<p>For example, this is the function I'd like to have:</p>
<pre><code>getSortedElements(100, lambda x,y:cmp(x,y))
</code></pre>
<p>Note this requirement comes purely from a performance perspective.</p>
| 10 | 2009-08-02T13:43:21Z | 1,218,930 | <p><a href="http://en.wikipedia.org/wiki/Selection%5Falgorithm" rel="nofollow">Selection algorithms</a> should help here. </p>
<p>A very easy solution is to find the 100th biggest element, then run through the list picking off the elements that are greater than or equal to it (taking a little care with ties). That will give you the 100 biggest elements. This is linear in the length of the list, which is the best possible.</p>
<p>There are more sophisticated algorithms. A <a href="http://tinyurl.com/6qo3yu" rel="nofollow">heap</a>, for example, is very amenable to this problem. The heap based algorithm is <code>n log k</code> where <code>n</code> is the length of the list and <code>k</code> is the number of largest elements that you want to select.</p>
<p>There's a discussion of this <a href="http://en.wikipedia.org/wiki/Selection%5Falgorithm#Selecting%5Fk%5Fsmallest%5For%5Flargest%5Felements" rel="nofollow">problem</a> on the Wikipedia page for selection algorithms.</p>
<p>Edit: Another poster has pointed out that Python has a built in solution to this problem. Obviously that is far easier than rolling your own, but I'll keep this post up in case you would like to learn about how such algorithms work.</p>
| 6 | 2009-08-02T13:45:45Z | [
"python",
"sorting",
"max",
"minimum"
] |
How to get the biggest numbers out from huge amount of numbers? | 1,218,922 | <p>I'd like to get the largest 100 elements out from a list of at least 100000000 numbers.</p>
<p>I could sort the entire list and just take the last 100 elements from the sorted list, but that would be very expensive in terms of both memory and time.</p>
<p>Is there any existing easy, pythonic way of doing this?</p>
<p>What I want is the following function instead of a pure sort; I don't want to waste time sorting the elements I don't care about.</p>
<p>For example, this is the function I'd like to have:</p>
<pre><code>getSortedElements(100, lambda x,y:cmp(x,y))
</code></pre>
<p>Note this requirement comes purely from a performance perspective.</p>
| 10 | 2009-08-02T13:43:21Z | 1,218,943 | <p>You can use a Heap data structure. A heap will not necessarily be ordered, but it is a fairly fast way to keep semi-ordered data, and it has the benefit of the smallest item always being the first element in the heap.</p>
<p>A heap has two basic operations that will help you: Add and Replace.</p>
<p>Basically, you add items to it until you reach 100 items (your top-N count, per your question). After that, you replace the first item with each new item, as long as the new item is bigger than the first item.</p>
<p>Whenever you replace the first item with something bigger, the internal code in the heap will adjust the heap contents so that if the new item is not the smallest, it will bubble up into the heap, and the smallest item will "bubble down" to the first element, ready to be replaced along the way.</p>
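<p>That Add/Replace loop can be sketched with Python's <code>heapq</code> module (a min-heap, so <code>heap[0]</code> is always the smallest of the kept items); the <code>top_n</code> name is made up here:</p>

```python
import heapq

def top_n(iterable, n):
    # Keep a min-heap of the n largest values seen so far; the
    # smallest of the "top n" sits at heap[0], ready to be replaced.
    heap = []
    for value in iterable:
        if len(heap) < n:
            heapq.heappush(heap, value)      # Add phase
        elif value > heap[0]:
            heapq.heapreplace(heap, value)   # Replace phase
    return sorted(heap, reverse=True)

print(top_n([5, 1, 9, 3, 7, 8, 2], 3))
# [9, 8, 7]
```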
| 5 | 2009-08-02T13:53:21Z | [
"python",
"sorting",
"max",
"minimum"
] |
How to get the biggest numbers out from huge amount of numbers? | 1,218,922 | <p>I'd like to get the largest 100 elements out from a list of at least 100000000 numbers.</p>
<p>I could sort the entire list and just take the last 100 elements from the sorted list, but that would be very expensive in terms of both memory and time.</p>
<p>Is there any existing easy, pythonic way of doing this?</p>
<p>What I want is the following function instead of a pure sort; I don't want to waste time sorting the elements I don't care about.</p>
<p>For example, this is the function I'd like to have:</p>
<pre><code>getSortedElements(100, lambda x,y:cmp(x,y))
</code></pre>
<p>Note this requirement comes purely from a performance perspective.</p>
| 10 | 2009-08-02T13:43:21Z | 1,218,945 | <p>The heapq module in the standard library offers the nlargest() function to do this:</p>
<pre><code>top100 = heapq.nlargest(100, iterable [,key])
</code></pre>
<p>It won't sort the entire list, so you won't waste time on the elements you don't need.</p>
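<p>Like <code>sorted()</code>, it also accepts an optional <code>key</code> function, for example:</p>

```python
import heapq

scores = [("a", 17), ("b", 99), ("c", 42), ("d", 23)]
# key works just like sorted()'s key argument
top2 = heapq.nlargest(2, scores, key=lambda item: item[1])
print(top2)
# [('b', 99), ('c', 42)]
```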
| 27 | 2009-08-02T13:54:27Z | [
"python",
"sorting",
"max",
"minimum"
] |
How to get the biggest numbers out from huge amount of numbers? | 1,218,922 | <p>I'd like to get the largest 100 elements out from a list of at least 100000000 numbers.</p>
<p>I could sort the entire list and just take the last 100 elements from the sorted list, but that would be very expensive in terms of both memory and time.</p>
<p>Is there any existing easy, pythonic way of doing this?</p>
<p>What I want is the following function instead of a pure sort; I don't want to waste time sorting the elements I don't care about.</p>
<p>For example, this is the function I'd like to have:</p>
<pre><code>getSortedElements(100, lambda x,y:cmp(x,y))
</code></pre>
<p>Note this requirement comes purely from a performance perspective.</p>
| 10 | 2009-08-02T13:43:21Z | 1,218,953 | <p>The best way to do this is to maintain a heap-backed priority queue that you start popping from once it has 100 entries in it. </p>
<p>While you don't care whether the results are sorted, it is intuitively clear that you get this more or less for free. In order to know you have the top 100, you need to keep your current list of top numbers ordered via some efficient data structure. That structure will know the minimum, the maximum, and the relative position of each element in some natural way, so that you can assert its position next to its neighbors. </p>
<p>As has been mentioned, in Python you would use heapq; in Java, PriorityQueue:
<a href="http://java.sun.com/javase/6/docs/api/java/util/PriorityQueue.html" rel="nofollow">http://java.sun.com/javase/6/docs/api/java/util/PriorityQueue.html</a> </p>
| 3 | 2009-08-02T13:59:47Z | [
"python",
"sorting",
"max",
"minimum"
] |
How to get the biggest numbers out from huge amount of numbers? | 1,218,922 | <p>I'd like to get the largest 100 elements out from a list of at least 100000000 numbers.</p>
<p>I could sort the entire list and just take the last 100 elements from the sorted list, but that would be very expensive in terms of both memory and time.</p>
<p>Is there any existing easy, pythonic way of doing this?</p>
<p>What I want is the following function instead of a pure sort; I don't want to waste time sorting the elements I don't care about.</p>
<p>For example, this is the function I'd like to have:</p>
<pre><code>getSortedElements(100, lambda x,y:cmp(x,y))
</code></pre>
<p>Note this requirement comes purely from a performance perspective.</p>
| 10 | 2009-08-02T13:43:21Z | 1,219,090 | <p>Here is a solution I have used that is independent of libraries and that
will work in any programming language that has arrays:</p>
<p>Initialisation:</p>
<pre><code>Make an array of 100 elements and initialise all elements
with a low value (less than any value in your input list).
Initialise an integer variable to 0 (or any value in
[0;99]), say index_minvalue, that will point to the
current lowest value in the array.
Initialise a variable, say minvalue, to hold the current
lowest value in the array.
</code></pre>
<p>For each value, say current_value, in the input list:</p>
<pre><code>if current_value > minvalue
    Replace the value in the array pointed to by index_minvalue
    with current_value

    Find the new lowest value in the array and set index_minvalue to
    its array index (a linear search for this is OK, as the array
    quickly fills up with large values)

    Set minvalue to the value at index_minvalue
else
    <don't do anything!>
</code></pre>
<p>minvalue will quickly get a high value and thus most values
in the input list will only need to be compared to minvalue
(the result of the comparison will mostly be false).</p>
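<p>A rough Python translation of the above, using a list as the fixed-size array (the variable names follow the pseudocode; the function name is invented here):</p>

```python
def largest_n(values, n):
    # Keep the n best values seen so far; only rescan for the
    # minimum when a replacement actually happens.
    best = [float("-inf")] * n
    index_minvalue = 0
    minvalue = best[index_minvalue]
    for current_value in values:
        if current_value > minvalue:
            best[index_minvalue] = current_value
            # linear search for the new lowest value
            index_minvalue = min(range(n), key=best.__getitem__)
            minvalue = best[index_minvalue]
    return sorted(best, reverse=True)

print(largest_n([4, 10, 3, 25, 7, 8], 3))
# [25, 10, 8]
```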
| 2 | 2009-08-02T15:09:29Z | [
"python",
"sorting",
"max",
"minimum"
] |
How to get the biggest numbers out from huge amount of numbers? | 1,218,922 | <p>I'd like to get the largest 100 elements out from a list of at least 100000000 numbers.</p>
<p>I could sort the entire list and just take the last 100 elements from the sorted list, but that would be very expensive in terms of both memory and time.</p>
<p>Is there any existing easy, pythonic way of doing this?</p>
<p>What I want is the following kind of function instead of a full sort. I don't want to waste time sorting the elements I don't care about.</p>
<p>For example, this is the function I'd like to have:</p>
<pre><code>getSortedElements(100, lambda x,y:cmp(x,y))
</code></pre>
<p>Note this requirement is only from a performance perspective.</p>
| 10 | 2009-08-02T13:43:21Z | 1,219,240 | <p>For the algorithms weenies in the audience: you can do this with a simple variation on Tony Hoare's algorithm <a href="http://portal.acm.org/citation.cfm?id=362489" rel="nofollow"><em>Find</em></a>:</p>
<pre><code>find(topn, a, i, j)
pick a random element x from a[i..j]
partition the subarray a[i..j] (just as in Quicksort)
  into subarrays of elements >x, ==x, <x  (largest first)
let g be the number of elements >x and e the number ==x
if g >= topn, call find(topn, a, i, i+g)
else if g+e < topn, call find(topn-g-e, a, i+g+e, j)
else you're finished: the topn boundary falls inside the ==x run
</code></pre>
<p>This algorithm puts the largest <code>topn</code> elements into the first <code>topn</code> elements of array <code>a</code>, <em>without</em> sorting them. Of course, if you want them sorted, or for sheer simplicity, a heap is better, and calling the library function is better still. But it's a cool algorithm.</p>
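<p>For reference, here is a runnable Python sketch of this selection (my own transcription, not Hoare's original: it partitions with list slicing instead of in-place swaps, trading the O(1) extra space for clarity):</p>

```python
import random

def find_top(a, topn, i=0, j=None):
    """Rearrange a[i:j] so its topn largest end up in a[i:i+topn], unsorted."""
    if j is None:
        j = len(a)
    if topn <= 0 or j - i <= topn:
        return
    x = a[random.randrange(i, j)]
    gt = [v for v in a[i:j] if v > x]   # three-way partition, largest first
    eq = [v for v in a[i:j] if v == x]
    lt = [v for v in a[i:j] if v < x]
    a[i:j] = gt + eq + lt
    if len(gt) >= topn:
        find_top(a, topn, i, i + len(gt))
    elif topn > len(gt) + len(eq):
        # the > x and == x runs are all inside the top group; recurse right
        find_top(a, topn - len(gt) - len(eq), i + len(gt) + len(eq), j)
    # otherwise the boundary falls inside the == x run: done

vals = list(range(1000))
random.shuffle(vals)
find_top(vals, 100)
assert sorted(vals[:100]) == list(range(900, 1000))
```

<p>The expected running time is linear in the input size, against O(n log n) for a full sort.</p>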
| 1 | 2009-08-02T16:45:34Z | [
"python",
"sorting",
"max",
"minimum"
] |
Can I redirect the stdout in python into some sort of string buffer? | 1,218,933 | <p>I'm using python's <code>ftplib</code> to write a small FTP client, but some of the functions in the package don't return string output, but print to <code>stdout</code>. I want to redirect <code>stdout</code> to an object which I'll be able to read the output from.</p>
<p>I know <code>stdout</code> can be redirected into any regular file with:</p>
<pre><code>sys.stdout = open("file", "a")
</code></pre>
<p>But I prefer a method that doesn't use the local drive.</p>
<p>I'm looking for something like the <code>BufferedReader</code> in Java that can be used to wrap a buffer into a stream.</p>
| 76 | 2009-08-02T13:47:50Z | 1,218,940 | <p>Use <code>pipe()</code> and write to the appropriate file descriptor.</p>
<p><a href="https://docs.python.org/library/os.html#file-descriptor-operations" rel="nofollow">https://docs.python.org/library/os.html#file-descriptor-operations</a></p>
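<p>A minimal sketch of the pipe idea, working at the file-descriptor level with <code>os.pipe()</code> and <code>os.dup2()</code> (this also catches output written by C code; the snippet writes straight to fd 1 with <code>os.write</code> to sidestep Python-level buffering):</p>

```python
import os

r, w = os.pipe()           # anything written to w can be read back from r
saved = os.dup(1)          # keep a copy of the real stdout fd
os.dup2(w, 1)              # fd 1 now points into the pipe
os.close(w)
os.write(1, b"hello fd\n")
os.dup2(saved, 1)          # restore the real stdout
os.close(saved)
data = os.read(r, 1024)
os.close(r)
assert data == b"hello fd\n"
```

<p>Note that a pipe has limited capacity, so for large outputs you would read it from another thread or process rather than after the fact.</p>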
| 3 | 2009-08-02T13:51:08Z | [
"python",
"stream",
"stdout",
"redirect"
] |
Can I redirect the stdout in python into some sort of string buffer? | 1,218,933 | <p>I'm using python's <code>ftplib</code> to write a small FTP client, but some of the functions in the package don't return string output, but print to <code>stdout</code>. I want to redirect <code>stdout</code> to an object which I'll be able to read the output from.</p>
<p>I know <code>stdout</code> can be redirected into any regular file with:</p>
<pre><code>sys.stdout = open("file", "a")
</code></pre>
<p>But I prefer a method that doesn't use the local drive.</p>
<p>I'm looking for something like the <code>BufferedReader</code> in Java that can be used to wrap a buffer into a stream.</p>
| 76 | 2009-08-02T13:47:50Z | 1,218,951 | <pre><code>from cStringIO import StringIO
import sys
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()
# blah blah lots of code ...
sys.stdout = old_stdout
# examine mystdout.getvalue()
</code></pre>
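<p>The swap above can be wrapped in a small helper so the restore always happens, even if the printing code raises (shown with <code>io.StringIO</code>, the Python 3 stand-in for <code>cStringIO</code>; the helper name <code>capture_output</code> is my own):</p>

```python
import io
import sys

def capture_output(func, *args, **kwargs):
    """Run func with sys.stdout swapped for a StringIO;
    return (func's return value, everything it printed)."""
    old_stdout = sys.stdout
    sys.stdout = buf = io.StringIO()
    try:
        result = func(*args, **kwargs)
    finally:
        sys.stdout = old_stdout  # restore even on error
    return result, buf.getvalue()

def greet(name):
    print("hello %s" % name)
    return 42

result, out = capture_output(greet, "world")
assert result == 42
assert out == "hello world\n"
```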
| 121 | 2009-08-02T13:57:41Z | [
"python",
"stream",
"stdout",
"redirect"
] |
Can I redirect the stdout in python into some sort of string buffer? | 1,218,933 | <p>I'm using python's <code>ftplib</code> to write a small FTP client, but some of the functions in the package don't return string output, but print to <code>stdout</code>. I want to redirect <code>stdout</code> to an object which I'll be able to read the output from.</p>
<p>I know <code>stdout</code> can be redirected into any regular file with:</p>
<pre><code>sys.stdout = open("file", "a")
</code></pre>
<p>But I prefer a method that doesn't use the local drive.</p>
<p>I'm looking for something like the <code>BufferedReader</code> in Java that can be used to wrap a buffer into a stream.</p>
| 76 | 2009-08-02T13:47:50Z | 1,220,002 | <p>Just to add to Ned's answer above: you can use this to redirect output to <strong>any object that implements a write(str) method</strong>.</p>
<p>This can be used to good effect to "catch" stdout output in a GUI application.</p>
<p>Here's a silly example in PyQt:</p>
<pre><code>import sys
from PyQt4 import QtGui

class OutputWindow(QtGui.QPlainTextEdit):
    def write(self, txt):
        self.appendPlainText(str(txt))

app = QtGui.QApplication(sys.argv)
out = OutputWindow()
sys.stdout = out
out.show()
print "hello world !"
</code></pre>
| 30 | 2009-08-02T22:10:07Z | [
"python",
"stream",
"stdout",
"redirect"
] |
Can I redirect the stdout in python into some sort of string buffer? | 1,218,933 | <p>I'm using python's <code>ftplib</code> to write a small FTP client, but some of the functions in the package don't return string output, but print to <code>stdout</code>. I want to redirect <code>stdout</code> to an object which I'll be able to read the output from.</p>
<p>I know <code>stdout</code> can be redirected into any regular file with:</p>
<pre><code>sys.stdout = open("file", "a")
</code></pre>
<p>But I prefer a method that doesn't use the local drive.</p>
<p>I'm looking for something like the <code>BufferedReader</code> in Java that can be used to wrap a buffer into a stream.</p>
| 76 | 2009-08-02T13:47:50Z | 19,345,047 | <p>Starting with Python 2.6 you can use anything implementing the <a href="http://docs.python.org/2/library/io.html#io.TextIOBase" rel="nofollow"><code>TextIOBase</code> API</a> from the io module as a replacement.
This solution also enables you to use <code>sys.stdout.buffer.write()</code> in Python 3 to write (already) encoded byte strings to stdout (see <a href="http://docs.python.org/3.0/library/sys.html#sys.stdout" rel="nofollow">stdout in Python 3</a>).
Using <code>StringIO</code> wouldn't work then, because neither <code>sys.stdout.encoding</code> nor <code>sys.stdout.buffer</code> would be available.</p>
<p>A solution using TextIOWrapper:</p>
<pre><code>import sys
from io import TextIOWrapper, BytesIO
# setup the environment
old_stdout = sys.stdout
sys.stdout = TextIOWrapper(BytesIO(), sys.stdout.encoding)
# do something that writes to stdout or stdout.buffer
# get output
sys.stdout.seek(0) # jump to the start
out = sys.stdout.read() # read output
# restore stdout
sys.stdout.close()
sys.stdout = old_stdout
</code></pre>
<p>This solution works for Python 2 >= 2.6 and Python 3.</p>
<p>Please note that our new <code>sys.stdout.write()</code> only accepts unicode strings and <code>sys.stdout.buffer.write()</code> only accepts byte strings.
This might not be the case for old code, but is often the case for code that is built to run on Python 2 and 3 without changes, which again often makes use of <code>sys.stdout.buffer</code>.</p>
<p>You can build a slight variation that accepts unicode and byte strings for <code>write()</code>:</p>
<pre><code>class StdoutBuffer(TextIOWrapper):
    def write(self, string):
        try:
            return super(StdoutBuffer, self).write(string)
        except TypeError:
            # redirect encoded byte strings directly to the buffer
            return super(StdoutBuffer, self).buffer.write(string)
</code></pre>
<p>You don't have to set the encoding of the buffer to <code>sys.stdout.encoding</code>, but it helps when using this method for testing/comparing script output.</p>
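<p>A quick check of the mixed text/bytes behaviour described above (note the <code>flush()</code>: the wrapper buffers text internally, so without it the raw bytes could land in the underlying <code>BytesIO</code> first):</p>

```python
from io import BytesIO, TextIOWrapper

buf = TextIOWrapper(BytesIO(), encoding="utf-8")
buf.write(u"text ")          # unicode goes through the wrapper
buf.flush()                  # push buffered text down before raw bytes
buf.buffer.write(b"bytes")   # pre-encoded bytes go to the underlying buffer
buf.seek(0)
assert buf.read() == u"text bytes"
```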
| 1 | 2013-10-13T12:10:50Z | [
"python",
"stream",
"stdout",
"redirect"
] |
Can I redirect the stdout in python into some sort of string buffer? | 1,218,933 | <p>I'm using python's <code>ftplib</code> to write a small FTP client, but some of the functions in the package don't return string output, but print to <code>stdout</code>. I want to redirect <code>stdout</code> to an object which I'll be able to read the output from.</p>
<p>I know <code>stdout</code> can be redirected into any regular file with:</p>
<pre><code>sys.stdout = open("file", "a")
</code></pre>
<p>But I prefer a method that doesn't use the local drive.</p>
<p>I'm looking for something like the <code>BufferedReader</code> in Java that can be used to wrap a buffer into a stream.</p>
| 76 | 2009-08-02T13:47:50Z | 22,434,594 | <p>There is <a href="http://docs.python.org/3.4/library/contextlib.html#contextlib.redirect_stdout">contextlib.redirect_stdout() function</a> in Python 3.4:</p>
<pre><code>import io
from contextlib import redirect_stdout
with io.StringIO() as buf, redirect_stdout(buf):
print('redirected')
output = buf.getvalue()
</code></pre>
<p>Here's <a href="http://stackoverflow.com/a/22434262/4279">code example that shows how to implement it on older Python versions</a>.</p>
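<p>The core of such a backport is only a few lines; a hedged sketch (<code>redirect_stdout_compat</code> is a made-up name, and this is not the linked answer's exact code):</p>

```python
import io
import sys
from contextlib import contextmanager

@contextmanager
def redirect_stdout_compat(new_target):
    """Minimal stand-in for contextlib.redirect_stdout on Python < 3.4."""
    old_target, sys.stdout = sys.stdout, new_target
    try:
        yield new_target
    finally:
        sys.stdout = old_target  # restored even if the body raises

buf = io.StringIO()
with redirect_stdout_compat(buf):
    print('redirected')
assert buf.getvalue() == 'redirected\n'
```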
| 16 | 2014-03-16T08:18:22Z | [
"python",
"stream",
"stdout",
"redirect"
] |
Can I redirect the stdout in python into some sort of string buffer? | 1,218,933 | <p>I'm using python's <code>ftplib</code> to write a small FTP client, but some of the functions in the package don't return string output, but print to <code>stdout</code>. I want to redirect <code>stdout</code> to an object which I'll be able to read the output from.</p>
<p>I know <code>stdout</code> can be redirected into any regular file with:</p>
<pre><code>sys.stdout = open("file", "a")
</code></pre>
<p>But I prefer a method that doesn't use the local drive.</p>
<p>I'm looking for something like the <code>BufferedReader</code> in Java that can be used to wrap a buffer into a stream.</p>
| 76 | 2009-08-02T13:47:50Z | 33,979,942 | <p>This method restores sys.stdout even if there's an exception. It also gets any output before the exception.</p>
<pre><code>import io
import sys

real_stdout = sys.stdout
fake_stdout = io.BytesIO()
try:
    sys.stdout = fake_stdout
    # do what you gotta do to create some output
finally:
    sys.stdout = real_stdout
output_string = fake_stdout.getvalue()
fake_stdout.close()
# do what you want with the output_string
</code></pre>
<p>I only tested this in Python 2.7.10, but <a href="https://docs.python.org/2/library/io.html#io.BytesIO" rel="nofollow"><code>io.BytesIO</code></a> is rumored to be the <a href="http://stackoverflow.com/a/12028682/673991">replacement for StringIO in Python 3</a>.</p>
| 2 | 2015-11-29T06:10:42Z | [
"python",
"stream",
"stdout",
"redirect"
] |
How do I do database transactions with psycopg2/python db api? | 1,219,326 | <p>I'm fiddling with psycopg2, and while there's a .commit() and a .rollback() there's no .begin() or similar to start a transaction, or so it seems?
I'd expect to be able to do </p>
<pre><code>db.begin() # possible even set the isolation level here
curs = db.cursor()
cursor.execute('select etc... for update')
...
cursor.execute('update ... etc.')
db.commit();
</code></pre>
<p>So, how do transactions work with psycopg2 ?
How would I set/change the isolation level ?</p>
| 17 | 2009-08-02T17:26:17Z | 1,219,376 | <p>Use <code>db.set_isolation_level(n)</code>, assuming <code>db</code> is your connection object. As Federico wrote <a href="https://web.archive.org/web/20100828225638/http://lists.initd.org/pipermail/psycopg/2004-February/002577.html" rel="nofollow">here</a>, the meaning of <code>n</code> is:</p>
<pre><code>0 -> autocommit
1 -> read committed
2 -> serialized (but not officially supported by pg)
3 -> serialized
</code></pre>
<p>As documented <a href="http://www.initd.org/psycopg/docs/extensions.html#isolation-level-constants" rel="nofollow">here</a>, <code>psycopg2.extensions</code> gives you symbolic constants for the purpose:</p>
<pre><code>Setting transaction isolation levels
====================================
psycopg2 connection objects hold information about the PostgreSQL `transaction
isolation level`_. The current transaction level can be read from the
`.isolation_level` attribute. The default isolation level is ``READ
COMMITTED``. A different isolation level can be set through the
`.set_isolation_level()` method. The level can be set to one of the following
constants, defined in `psycopg2.extensions`:
`ISOLATION_LEVEL_AUTOCOMMIT`
No transaction is started when commands are issued and no
`.commit()`/`.rollback()` is required. Some PostgreSQL commands such as
``CREATE DATABASE`` can't run inside a transaction: to run such a command use
`.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)`.
`ISOLATION_LEVEL_READ_COMMITTED`
This is the default value. A new transaction is started at the first
`.execute()` command on a cursor and at each new `.execute()` after a
`.commit()` or a `.rollback()`. The transaction runs in the PostgreSQL
``READ COMMITTED`` isolation level.
`ISOLATION_LEVEL_SERIALIZABLE`
Transactions are run at a ``SERIALIZABLE`` isolation level.
.. _transaction isolation level:
http://www.postgresql.org/docs/8.1/static/transaction-iso.html
</code></pre>
| 21 | 2009-08-02T17:47:18Z | [
"python",
"database",
"postgresql"
] |
How do I do database transactions with psycopg2/python db api? | 1,219,326 | <p>I'm fiddling with psycopg2, and while there's a .commit() and a .rollback() there's no .begin() or similar to start a transaction, or so it seems?
I'd expect to be able to do </p>
<pre><code>db.begin() # possible even set the isolation level here
curs = db.cursor()
cursor.execute('select etc... for update')
...
cursor.execute('update ... etc.')
db.commit();
</code></pre>
<p>So, how do transactions work with psycopg2 ?
How would I set/change the isolation level ?</p>
| 17 | 2009-08-02T17:26:17Z | 1,226,713 | <p>I prefer to explicitly see where my transactions are : </p>
<ul>
<li>cursor.execute("BEGIN")</li>
<li>cursor.execute("COMMIT")</li>
</ul>
| 3 | 2009-08-04T10:18:15Z | [
"python",
"database",
"postgresql"
] |
How do I do database transactions with psycopg2/python db api? | 1,219,326 | <p>I'm fiddling with psycopg2, and while there's a .commit() and a .rollback() there's no .begin() or similar to start a transaction, or so it seems?
I'd expect to be able to do </p>
<pre><code>db.begin() # possible even set the isolation level here
curs = db.cursor()
cursor.execute('select etc... for update')
...
cursor.execute('update ... etc.')
db.commit();
</code></pre>
<p>So, how do transactions work with psycopg2 ?
How would I set/change the isolation level ?</p>
| 17 | 2009-08-02T17:26:17Z | 1,265,499 | <p>The <code>BEGIN</code> with the Python standard DB API is always implicit. When you start working with the database the driver issues a <code>BEGIN</code>, and after any <code>COMMIT</code> or <code>ROLLBACK</code> another <code>BEGIN</code> is issued. A Python DB API driver compliant with the specification should always work this way (not only for PostgreSQL).</p>
<p>You can change this by setting the isolation level to autocommit with <code>db.set_isolation_level(n)</code>, as pointed out by Alex Martelli.</p>
<p>As Tebas said, the BEGIN is implicit but not executed until an SQL statement is executed, so if you don't execute any SQL, the session is not in a transaction.</p>
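<p>The implicit BEGIN is easy to observe with any DB API module. The sketch below uses sqlite3 from the standard library rather than psycopg2, but the DB API contract it demonstrates is the same:</p>

```python
import sqlite3

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
db.commit()

# No explicit BEGIN: the driver opens a transaction at the first
# data-modifying execute(), and rollback() undoes everything since.
cur.execute("INSERT INTO t VALUES (1)")
assert db.in_transaction         # a transaction is now open
db.rollback()
cur.execute("SELECT COUNT(*) FROM t")
assert cur.fetchone()[0] == 0    # the INSERT was rolled back
```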
| 10 | 2009-08-12T11:03:28Z | [
"python",
"database",
"postgresql"
] |
Accessing data files before and after distutils/setuptools | 1,219,367 | <p>I'm doing a platform-independent PyQt application. I intend to write a setup.py file using setuptools. So far I've managed to detect the platform, e.g. load specific options for setup() depending on the platform in order to use py2exe on Windows... etc...</p>
<p>However, with my application I'm distributing some themes, HTML and images, and I need to load these images in the application at runtime. So far they are stored in the themes/ directory of the application.</p>
<p>I've been reading documentation on setuptools and distutils, and figured out that if I gave setup() the data_files option with all the files in the themes/ directory to be installed in "share/MyApp/themes/" it would be installed with a /usr/ prefix, or whatever sys.prefix is on the platform.
I assume that I would find the data files using os.path.join(sys.prefix, "share", "MyApp", "themes") no matter what platform I'm on, right?</p>
<p>However, I want to be able to access the data files during development too, where they reside in the themes/ directory relative to the application source. How do I do this?
Is there some smart way to figure out whether the application has been installed? Or a utility that maps to the data files regardless of where they are at the moment?</p>
<p>I would really hate to add all sorts of ugly hacks to see if there are themes relative to the source, or in sys.prefix/share... etc... How do I find the data files during development, and after installation on an arbitrary platform?</p>
| 10 | 2009-08-02T17:42:45Z | 1,219,406 | <p>I've used a utility method called data_file:</p>
<pre><code>import os

def data_file(fname):
    """Return the path to a data file of ours."""
    return os.path.join(os.path.split(__file__)[0], fname)
</code></pre>
<p>I put this in the <code>__init__.py</code> file in my project, and then call it from anywhere in my package to get a file relative to the package.</p>
<p>Setuptools offers a similar function, but this doesn't need setuptools.</p>
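<p>To see the pattern in isolation, here is the same helper with <code>__file__</code> replaced by an explicit argument so it can run outside a real package (the directory and file names are made up for the demo):</p>

```python
import os
import tempfile

def data_file(module_file, fname):
    """Resolve fname relative to the directory containing module_file."""
    return os.path.join(os.path.split(module_file)[0], fname)

# simulate a package directory holding a module and a theme file
pkg_dir = tempfile.mkdtemp()
open(os.path.join(pkg_dir, "theme.css"), "w").close()
module_path = os.path.join(pkg_dir, "__init__.py")

path = data_file(module_path, "theme.css")
assert path == os.path.join(pkg_dir, "theme.css")
assert os.path.exists(path)
```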
| 5 | 2009-08-02T17:58:14Z | [
"python",
"setuptools",
"distutils"
] |
Accessing data files before and after distutils/setuptools | 1,219,367 | <p>I'm doing a platform-independent PyQt application. I intend to write a setup.py file using setuptools. So far I've managed to detect the platform, e.g. load specific options for setup() depending on the platform in order to use py2exe on Windows... etc...</p>
<p>However, with my application I'm distributing some themes, HTML and images, and I need to load these images in the application at runtime. So far they are stored in the themes/ directory of the application.</p>
<p>I've been reading documentation on setuptools and distutils, and figured out that if I gave setup() the data_files option with all the files in the themes/ directory to be installed in "share/MyApp/themes/" it would be installed with a /usr/ prefix, or whatever sys.prefix is on the platform.
I assume that I would find the data files using os.path.join(sys.prefix, "share", "MyApp", "themes") no matter what platform I'm on, right?</p>
<p>However, I want to be able to access the data files during development too, where they reside in the themes/ directory relative to the application source. How do I do this?
Is there some smart way to figure out whether the application has been installed? Or a utility that maps to the data files regardless of where they are at the moment?</p>
<p>I would really hate to add all sorts of ugly hacks to see if there are themes relative to the source, or in sys.prefix/share... etc... How do I find the data files during development, and after installation on an arbitrary platform?</p>
| 10 | 2009-08-02T17:42:45Z | 2,568,149 | <p>You could try <a href="http://setuptools.readthedocs.io/en/latest/pkg_resources.html#resourcemanager-api" rel="nofollow"><code>pkg_resources</code></a>:</p>
<pre><code>my_data = pkg_resources.resource_string(__name__, fname)
</code></pre>
| 6 | 2010-04-02T17:35:44Z | [
"python",
"setuptools",
"distutils"
] |
Accessing data files before and after distutils/setuptools | 1,219,367 | <p>I'm doing a platform-independent PyQt application. I intend to write a setup.py file using setuptools. So far I've managed to detect the platform, e.g. load specific options for setup() depending on the platform in order to use py2exe on Windows... etc...</p>
<p>However, with my application I'm distributing some themes, HTML and images, and I need to load these images in the application at runtime. So far they are stored in the themes/ directory of the application.</p>
<p>I've been reading documentation on setuptools and distutils, and figured out that if I gave setup() the data_files option with all the files in the themes/ directory to be installed in "share/MyApp/themes/" it would be installed with a /usr/ prefix, or whatever sys.prefix is on the platform.
I assume that I would find the data files using os.path.join(sys.prefix, "share", "MyApp", "themes") no matter what platform I'm on, right?</p>
<p>However, I want to be able to access the data files during development too, where they reside in the themes/ directory relative to the application source. How do I do this?
Is there some smart way to figure out whether the application has been installed? Or a utility that maps to the data files regardless of where they are at the moment?</p>
<p>I would really hate to add all sorts of ugly hacks to see if there are themes relative to the source, or in sys.prefix/share... etc... How do I find the data files during development, and after installation on an arbitrary platform?</p>
| 10 | 2009-08-02T17:42:45Z | 3,702,842 | <p>You should <a href="http://setuptools.readthedocs.io/en/latest/pkg_resources.html" rel="nofollow">use the pkgutil/pkg_resources module to load the data files</a>. It even works from within zipped eggs.</p>
| 5 | 2010-09-13T17:27:02Z | [
"python",
"setuptools",
"distutils"
] |
Python Matplotlib hangs when asked to plot a second chart (after closing first chart window) | 1,219,394 | <p>Weird behaviour, I'm sure it's me screwing up, but I'd like to get to the bottom of what's happening: </p>
<p>I am running the following code to create a very simple graph window using matplotlib:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x0290B750>]
>>> plt.show()
</code></pre>
<p>and as expected I get the chart one would expect, in a new window that has popped up, containing a very simple blue line going from 1 to 3 back to 1 again on y axis, with 0, 1, 2 as the x axis points (just as example). Now I close the graph window (using cross button in the top right under windows). This gives me control to the interpreter, and I start again, creating new objects:</p>
<pre><code>>>>
>>> fig1 = plt.figure()
>>> bx = fig1.add_subplot(111)
>>> bx.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x029E8210>]
>>> plt.show()
</code></pre>
<p>This time though, I get a window frame, with nothing in it (just the frame, no white background nothing), and the whole bang shoot hangs. I have to "end task", the python interpreter is terminated by the system and I get a command prompt back. Similar behaviour on a mac (except it does actually plot the graph first, before also hanging). </p>
<p>So somehow Python and/or matplotlib doesn't want me to close the window manually. Anybody know what's going on and what I should be doing? What I'd like to do is play around with different plots from within the interpreter, and obviously this behaviour doesn't help. I know I could use "Ipython -pylab" but in the interests of learning, I want to understand the above error. </p>
<p>Thanks. </p>
| 5 | 2009-08-02T17:53:36Z | 1,219,471 | <p>did you try:</p>
<pre><code>plt.close()
</code></pre>
<p>to make sure you closed the plot object? </p>
| 0 | 2009-08-02T18:22:44Z | [
"python",
"matplotlib"
] |
Python Matplotlib hangs when asked to plot a second chart (after closing first chart window) | 1,219,394 | <p>Weird behaviour, I'm sure it's me screwing up, but I'd like to get to the bottom of what's happening: </p>
<p>I am running the following code to create a very simple graph window using matplotlib:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x0290B750>]
>>> plt.show()
</code></pre>
<p>and as expected I get the chart one would expect, in a new window that has popped up, containing a very simple blue line going from 1 to 3 back to 1 again on y axis, with 0, 1, 2 as the x axis points (just as example). Now I close the graph window (using cross button in the top right under windows). This gives me control to the interpreter, and I start again, creating new objects:</p>
<pre><code>>>>
>>> fig1 = plt.figure()
>>> bx = fig1.add_subplot(111)
>>> bx.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x029E8210>]
>>> plt.show()
</code></pre>
<p>This time though, I get a window frame, with nothing in it (just the frame, no white background nothing), and the whole bang shoot hangs. I have to "end task", the python interpreter is terminated by the system and I get a command prompt back. Similar behaviour on a mac (except it does actually plot the graph first, before also hanging). </p>
<p>So somehow Python and/or matplotlib doesn't want me to close the window manually. Anybody know what's going on and what I should be doing? What I'd like to do is play around with different plots from within the interpreter, and obviously this behaviour doesn't help. I know I could use "Ipython -pylab" but in the interests of learning, I want to understand the above error. </p>
<p>Thanks. </p>
| 5 | 2009-08-02T17:53:36Z | 1,220,036 | <p>Have you tried to use ipython instead of the standard python interpreter?</p>
<p>You can install ipython with the following command:</p>
<pre><code>easy_install ipython
</code></pre>
<p>and then, ipython has a specific mode to be run with pylab, called -pylab:</p>
<pre><code>ipython -pylab
In[1]: ...
</code></pre>
<p>I think that most people use this solution to plot graphs with python: it is a command line similar to that of R/Matlab, with completion, etc., and it runs a separate thread for every plot, so it shouldn't have the problem you have described.</p>
| 2 | 2009-08-02T22:29:21Z | [
"python",
"matplotlib"
] |
Python Matplotlib hangs when asked to plot a second chart (after closing first chart window) | 1,219,394 | <p>Weird behaviour, I'm sure it's me screwing up, but I'd like to get to the bottom of what's happening: </p>
<p>I am running the following code to create a very simple graph window using matplotlib:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x0290B750>]
>>> plt.show()
</code></pre>
<p>and as expected I get the chart one would expect, in a new window that has popped up, containing a very simple blue line going from 1 to 3 back to 1 again on y axis, with 0, 1, 2 as the x axis points (just as example). Now I close the graph window (using cross button in the top right under windows). This gives me control to the interpreter, and I start again, creating new objects:</p>
<pre><code>>>>
>>> fig1 = plt.figure()
>>> bx = fig1.add_subplot(111)
>>> bx.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x029E8210>]
>>> plt.show()
</code></pre>
<p>This time though, I get a window frame, with nothing in it (just the frame, no white background nothing), and the whole bang shoot hangs. I have to "end task", the python interpreter is terminated by the system and I get a command prompt back. Similar behaviour on a mac (except it does actually plot the graph first, before also hanging). </p>
<p>So somehow Python and/or matplotlib doesn't want me to close the window manually. Anybody know what's going on and what I should be doing? What I'd like to do is play around with different plots from within the interpreter, and obviously this behaviour doesn't help. I know I could use "Ipython -pylab" but in the interests of learning, I want to understand the above error. </p>
<p>Thanks. </p>
| 5 | 2009-08-02T17:53:36Z | 1,249,939 | <p>Apparently, this is caused by a bug in the tkinter backend. See, e.g., <a href="https://bugs.launchpad.net/ubuntu/+source/matplotlib/+bug/313834" rel="nofollow">https://bugs.launchpad.net/ubuntu/+source/matplotlib/+bug/313834</a> . It's being worked on...
If you can regress to a slightly older tkinter library, that should be a workaround for the time-being (I ran into this same thing a couple of weeks ago, and that was my only hope). </p>
| 2 | 2009-08-08T21:23:59Z | [
"python",
"matplotlib"
] |
Python Matplotlib hangs when asked to plot a second chart (after closing first chart window) | 1,219,394 | <p>Weird behaviour, I'm sure it's me screwing up, but I'd like to get to the bottom of what's happening: </p>
<p>I am running the following code to create a very simple graph window using matplotlib:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x0290B750>]
>>> plt.show()
</code></pre>
<p>and as expected I get the chart one would expect, in a new window that has popped up, containing a very simple blue line going from 1 to 3 back to 1 again on y axis, with 0, 1, 2 as the x axis points (just as example). Now I close the graph window (using cross button in the top right under windows). This gives me control to the interpreter, and I start again, creating new objects:</p>
<pre><code>>>>
>>> fig1 = plt.figure()
>>> bx = fig1.add_subplot(111)
>>> bx.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x029E8210>]
>>> plt.show()
</code></pre>
<p>This time though, I get a window frame, with nothing in it (just the frame, no white background nothing), and the whole bang shoot hangs. I have to "end task", the python interpreter is terminated by the system and I get a command prompt back. Similar behaviour on a mac (except it does actually plot the graph first, before also hanging). </p>
<p>So somehow Python and/or matplotlib doesn't want me to close the window manually. Anybody know what's going on and what I should be doing? What I'd like to do is play around with different plots from within the interpreter, and obviously this behaviour doesn't help. I know I could use "Ipython -pylab" but in the interests of learning, I want to understand the above error. </p>
<p>Thanks. </p>
| 5 | 2009-08-02T17:53:36Z | 1,787,772 | <p>Three months late to the party, but I found a suggestion in the matplotlib documentation to use draw() rather than show(); the former apparently just does a render of the current plot, while the latter starts up all the interactive tools, which is where the problems seem to start.</p>
<p>It's not terribly prominently placed in the documentation, but here's the link:
<a href="http://matplotlib.sourceforge.net/faq/howto%5Ffaq.html#use-show">http://matplotlib.sourceforge.net/faq/howto%5Ffaq.html#use-show</a></p>
<p>For what it's worth, I've tried pylab.show() and had exactly the same issue you did, while pylab.draw() seems to work fine if I just want to see the output.</p>
| 7 | 2009-11-24T04:25:16Z | [
"python",
"matplotlib"
] |
Python Matplotlib hangs when asked to plot a second chart (after closing first chart window) | 1,219,394 | <p>Weird behaviour, I'm sure it's me screwing up, but I'd like to get to the bottom of what's happening: </p>
<p>I am running the following code to create a very simple graph window using matplotlib:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x0290B750>]
>>> plt.show()
</code></pre>
<p>and as expected I get the chart one would expect, in a new window that has popped up, containing a very simple blue line going from 1 to 3 back to 1 again on y axis, with 0, 1, 2 as the x axis points (just as example). Now I close the graph window (using cross button in the top right under windows). This gives me control to the interpreter, and I start again, creating new objects:</p>
<pre><code>>>>
>>> fig1 = plt.figure()
>>> bx = fig1.add_subplot(111)
>>> bx.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x029E8210>]
>>> plt.show()
</code></pre>
<p>This time though, I get a window frame, with nothing in it (just the frame, no white background nothing), and the whole bang shoot hangs. I have to "end task", the python interpreter is terminated by the system and I get a command prompt back. Similar behaviour on a mac (except it does actually plot the graph first, before also hanging). </p>
<p>So somehow Python and/or matplotlib doesn't want me to close the window manually. Anybody know what's going on and what I should be doing? What I'd like to do is play around with different plots from within the interpreter, and obviously this behaviour doesn't help. I know I could use "Ipython -pylab" but in the interests of learning, I want to understand the above error. </p>
<p>Thanks. </p>
| 5 | 2009-08-02T17:53:36Z | 1,895,460 | <p>As posted somewhere above:</p>
<p>Use <code>plt.draw()</code> for all your plots except the last one.</p>
<p>For your last plot, use <code>plt.show()</code> </p>
<p>It's weird, but if you don't use <code>plt.show()</code> in the last one and try <code>plt.draw()</code> instead, you don't see any plots.</p>
<p>Good luck with this!</p>
| 0 | 2009-12-13T03:19:05Z | [
"python",
"matplotlib"
] |
Python Matplotlib hangs when asked to plot a second chart (after closing first chart window) | 1,219,394 | <p>Weird behaviour, I'm sure it's me screwing up, but I'd like to get to the bottom of what's happening: </p>
<p>I am running the following code to create a very simple graph window using matplotlib:</p>
<pre><code>>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> ax = fig.add_subplot(111)
>>> ax.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x0290B750>]
>>> plt.show()
</code></pre>
<p>and as expected I get the chart one would expect, in a new window that has popped up, containing a very simple blue line going from 1 to 3 back to 1 again on y axis, with 0, 1, 2 as the x axis points (just as example). Now I close the graph window (using cross button in the top right under windows). This gives me control to the interpreter, and I start again, creating new objects:</p>
<pre><code>>>>
>>> fig1 = plt.figure()
>>> bx = fig1.add_subplot(111)
>>> bx.plot((1, 3, 1))
[<matplotlib.lines.Line2D object at 0x029E8210>]
>>> plt.show()
</code></pre>
<p>This time though, I get a window frame, with nothing in it (just the frame, no white background nothing), and the whole bang shoot hangs. I have to "end task", the python interpreter is terminated by the system and I get a command prompt back. Similar behaviour on a mac (except it does actually plot the graph first, before also hanging). </p>
<p>So somehow Python and/or matplotlib doesn't want me to close the window manually. Anybody know what's going on and what I should be doing? What I'd like to do is play around with different plots from within the interpreter, and obviously this behaviour doesn't help. I know I could use "Ipython -pylab" but in the interests of learning, I want to understand the above error. </p>
<p>Thanks. </p>
| 5 | 2009-08-02T17:53:36Z | 39,326,913 | <p>I had this problem when using TkAgg as the backend. After using <code>plt.close('all')</code> my computer froze.</p>
<p>The solution was to switch to a different backend. I now use Qt4Agg instead.</p>
<p>If you have <a href="https://pypi.python.org/pypi/PyQt4" rel="nofollow">Qt4Agg installed</a> it is possible to switch backends by typing:</p>
<pre><code>plt.switch_backend('Qt4Agg')
</code></pre>
<p>before plotting data</p>
| 0 | 2016-09-05T08:50:44Z | [
"python",
"matplotlib"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
| 11 | 2009-08-02T19:02:46Z | 1,219,563 | <p>in Python:</p>
<pre><code>for key, value in stateDict.items(): # .iteritems() in Python 2.x
print "The abbreviation for %s is %s." % (key, value)
</code></pre>
<p>in Java:</p>
<pre><code>Map<String,String> stateDict;
for (Map.Entry<String,String> e : stateDict.entrySet())
System.out.println("The abbreviation for " + e.getKey() + " is " + e.getValue() + ".");
</code></pre>
| 30 | 2009-08-02T19:07:31Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
 | 11 | 2009-08-02T19:02:46Z | 1,219,580 | <p>In Java, use a Map for an associative array:</p>
<pre><code>import java.util.*;
class Foo
{
public static void main(String[] args)
{
Map<String, String> stateMap = new HashMap<String, String>();
stateMap.put("ALABAMA", "AL");
stateMap.put("ALASKA", "AK");
// ...
stateMap.put("WYOMING", "WY");
for (Map.Entry<String, String> state : stateMap.entrySet()) {
System.out.printf(
"The abbreviation for %s is %s%n",
state.getKey(),
state.getValue()
);
}
}
}
</code></pre>
| 5 | 2009-08-02T19:13:14Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
| 11 | 2009-08-02T19:02:46Z | 1,219,601 | <p>Another way of doing it in Java. Although a better way has already been posted, this one's syntactically closer to your php code.</p>
<pre><code>for (String x:stateDict.keySet()){
System.out.printf("The abbreviation for %s is %s\n",x,stateDict.get(x));
}
</code></pre>
| 1 | 2009-08-02T19:25:27Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
| 11 | 2009-08-02T19:02:46Z | 1,219,684 | <p>Also, to maintain insertion order, you can use a LinkedHashMap instead of a HashMap.</p>
| 2 | 2009-08-02T19:55:56Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
 | 11 | 2009-08-02T19:02:46Z | 1,219,813 | <p>An <a href="http://www.python.org/dev/peps/pep-0372/" rel="nofollow">ordered dictionary</a> is available in Python 2.7 (not yet released at the time of writing) and Python 3.1. It's called OrderedDict.</p>
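<p>A minimal sketch of OrderedDict preserving insertion order (assuming Python 2.7+ or 3.1+; the entries are just a few of the states from the question):</p>

```python
from collections import OrderedDict  # new in Python 2.7 / 3.1

states = OrderedDict()
states["ALABAMA"] = "AL"
states["ALASKA"] = "AK"
states["WYOMING"] = "WY"

# Iteration yields the entries in the order they were inserted,
# matching the behaviour of the PHP associative array.
for name, abbreviation in states.items():
    print("The abbreviation for %s is %s." % (name, abbreviation))
```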
| 2 | 2009-08-02T20:49:19Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
| 11 | 2009-08-02T19:02:46Z | 1,219,820 | <p>This is the modified code from o948 where you use a TreeMap instead of a HashMap. The Tree map will preserve the ordering of the keys by the key.</p>
<pre><code>import java.util.*;
class Foo
{
public static void main(String[] args)
{
Map<String, String> stateMap = new TreeMap<String, String>();
stateMap.put("ALABAMA", "AL");
stateMap.put("ALASKA", "AK");
// ...
stateMap.put("WYOMING", "WY");
for (Map.Entry<String, String> state : stateMap.entrySet()) {
System.out.printf(
"The abbreviation for %s is %s%n",
state.getKey(),
state.getValue()
);
}
}
}
</code></pre>
| 1 | 2009-08-02T20:50:26Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
| 11 | 2009-08-02T19:02:46Z | 1,219,975 | <p>Along the lines of Alexander's answer...</p>
<p>The native python dictionary doesn't maintain ordering for maximum efficiency of its primary use: an unordered mapping of keys to values.</p>
<p>I can think of two workarounds:</p>
<ol>
<li><p>look at the source code of OrderedDict and include it in your own program.</p></li>
<li><p>make a list that holds the keys in order:</p>
<pre><code>states = ['Alabama', 'Alaska', ...]
statesd = {'Alabama':'AL', 'Alaska':'AK', ...}
for k in states:
print "The abbreviation for %s is %s." % (k, statesd[k])
</code></pre></li>
</ol>
| 1 | 2009-08-02T21:57:32Z | [
"java",
"php",
"python",
"associative-array"
] |
java and python equivalent of php's foreach($array as $key => $value) | 1,219,548 | <p>In php, one can handle a list of state names and their abbreviations with an associative array like this:</p>
<pre><code><?php
$stateArray = array(
"ALABAMA"=>"AL",
"ALASKA"=>"AK",
// etc...
"WYOMING"=>"WY"
);
foreach ($stateArray as $stateName => $stateAbbreviation){
print "The abbreviation for $stateName is $stateAbbreviation.\n\n";
}
?>
</code></pre>
<p>Output (with key order preserved):</p>
<pre class="lang-none prettyprint-override"><code>The abbreviation for ALABAMA is AL.
The abbreviation for ALASKA is AK.
The abbreviation for WYOMING is WY.
</code></pre>
<p>EDIT: Note that the order of array elements is preserved in the output of the php version. The Java implementation, using a HashMap, does not guarantee the order of elements. Nor does the dictionary in Python.</p>
<p>How is this done in java and python? I only find approaches that supply the value, given the key, like python's:</p>
<pre><code>stateDict = {
"ALASKA": "AK",
"WYOMING": "WY",
}
for key in stateDict:
value = stateDict[key]
</code></pre>
<hr>
<p>EDIT: based on the answers, this was my solution in python,</p>
<pre><code># a list of two-tuples
stateList = [
('ALABAMA', 'AL'),
('ALASKA', 'AK'),
('WISCONSIN', 'WI'),
('WYOMING', 'WY'),
]
for name, abbreviation in stateList:
print name, abbreviation
</code></pre>
<p>Output:</p>
<pre><code>ALABAMA AL
ALASKA AK
WISCONSIN WI
WYOMING WY
</code></pre>
<p>Which is exactly what was required.</p>
| 11 | 2009-08-02T19:02:46Z | 1,221,284 | <p>TreeMap is not an answer to your question because it sorts elements by key, while LinkedHashMap preserves original order. However, TreeMap is more suitable for the dictionary because of sorting.</p>
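<p>For comparison, the TreeMap-style behaviour (iteration sorted by key) can be had in Python with a plain dict by sorting the keys at iteration time; this is only a sketch with a few sample entries:</p>

```python
# A plain dict plus sorted() gives key-ordered iteration, like Java's TreeMap.
states = {"WYOMING": "WY", "ALABAMA": "AL", "ALASKA": "AK"}

ordered = [(name, states[name]) for name in sorted(states)]
print(ordered)
# [('ALABAMA', 'AL'), ('ALASKA', 'AK'), ('WYOMING', 'WY')]
```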
| 0 | 2009-08-03T08:49:52Z | [
"java",
"php",
"python",
"associative-array"
] |
Workarounds when a string is too long for a .join. OverflowError occurs | 1,219,733 | <p>I'm working through some python problems on pythonchallenge.com to teach myself python and I've hit a roadblock, since the string I am to be using is too large for python to handle. I receive this error:</p>
<pre><code>my-macbook:python owner1$ python singleoccurrence.py
Traceback (most recent call last):
File "singleoccurrence.py", line 32, in <module>
myString = myString.join(line)
OverflowError: join() result is too long for a Python string
</code></pre>
<p>What alternatives do I have for this issue? My code looks like such...</p>
<pre><code>#open file testdata.txt
#for each character, check if already exists in array of checked characters
#if so, skip.
#if not, character.count
#if count > 1, repeat recursively with first character stripped off of page.
# if count = 1, add to valid character array.
#when string = 0, print valid character array.
valid = []
checked = []
myString = ""
def recursiveCount(bigString):
if len(bigString) == 0:
print "YAY!"
return valid
myChar = bigString[0]
if myChar in checked:
return recursiveCount(bigString[1:])
if bigString.count(myChar) > 1:
checked.append(myChar)
return recursiveCount(bigString[1:])
checked.append(myChar)
valid.append(myChar)
return recursiveCount(bigString[1:])
fileIN = open("testdata.txt", "r")
line = fileIN.readline()
while line:
line = line.strip()
myString = myString.join(line)
line = fileIN.readline()
myString = recursiveCount(myString)
print "\n"
print myString
</code></pre>
 | 0 | 2009-08-02T20:13:52Z | 1,219,754 | <p><a href="http://docs.python.org/library/string.html">string.join</a> doesn't do what you think. join is used to combine a list of words into a single string with the given separator, i.e.:</p>
<pre><code>>>> ",".join(('foo', 'bar', 'baz'))
'foo,bar,baz'
</code></pre>
<p>The code snippet you posted will attempt to insert myString between every character in the variable line. You can see how that will get big quickly :-). Are you trying to read the entire file into a single string, myString? If so, the way you want to concatenate the strings is like this:</p>
<pre><code>myString = myString + line
</code></pre>
<p>While I'm here... since you're learning Python here are some other suggestions.</p>
<p>There are easier ways to read an entire file into a variable. For instance:</p>
<pre><code>fileIN = open("testdata.txt", "r")
myString = fileIN.read()
</code></pre>
<p>(This won't have the exact behaviour of your existing strip() code, but may in fact do what you want.)</p>
<p>Also, I would never recommend that practical Python code use recursion to iterate over a string. Your code will make a function call (and a stack entry) for every character in the string. On top of that, every use of bigString[1:] creates a new string in memory (a copy of the original without its first character), so the recursion ends up doing quadratic work. The simplest way to process every character in a string is:</p>
<pre><code>for mychar in bigString:
... do your stuff ...
</code></pre>
<p>Finally, you are using the list named "checked" to see if you've ever seen a particular character before. But the membership test on lists ("if myChar in checked") is slow. In Python you're better off using a dictionary:</p>
<pre><code>checked = {}
...
if myChar not in checked:  # "in" is preferred over the deprecated has_key()
checked[myChar] = True
...
</code></pre>
<p>This exercise you're doing is a great way to learn several Python idioms.</p>
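<p>Putting those idioms together, here is one non-recursive sketch of the whole exercise; the sample text is made up, and a real script would read it from testdata.txt instead:</p>

```python
text = "abcabd"  # stand-in for the file contents

# Count every character in a single pass over the string.
counts = {}
for ch in text:
    counts[ch] = counts.get(ch, 0) + 1

# Keep the characters that occur exactly once, in first-seen order.
valid = [ch for ch in text if counts[ch] == 1]
print(valid)  # ['c', 'd']
```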
| 10 | 2009-08-02T20:21:24Z | [
"python",
"overflow"
] |
Python: prefer several small modules or one larger module? | 1,219,815 | <p>I'm working on a Python web application in which I have some small modules that serve very specific functions: <code>session.py</code>, <code>logger.py</code>, <code>database.py</code>, etc. And by "small" I really do mean small; each of these files currently includes around 3-5 lines of code, or maybe up to 10 at most. I might have a few imports and a class definition or two in each. I'm wondering, is there any reason I should or shouldn't merge these into one module, something like <code>misc.py</code>?</p>
<p>My thoughts are that having separate modules helps with code clarity, and later on, if by some chance these modules grow to more than 10 lines, I won't feel so bad about having them separated. But on the other hand, it just seems like such a waste to have a bunch of files with only a few lines in each! And is there any significant difference in resource usage between the multi-file vs. single-file approach? (Of course I'm nowhere near the point where I should be worrying about resource usage, but I couldn't resist asking...)</p>
<p>I checked around to see whether this had been asked before and didn't see anything specific to Python, but if it's in fact a duplicate, I'd appreciate being pointed in the right direction.</p>
| 8 | 2009-08-02T20:49:23Z | 1,219,844 | <blockquote>
<p>My thoughts are that having separate
modules helps with code clarity, and
later on, if by some chance these
modules grow to more than 10 lines, I
won't feel so bad about having them
separated.</p>
</blockquote>
<p>This. Keep it the way you have it. </p>
| 4 | 2009-08-02T21:00:45Z | [
"python",
"refactoring",
"module"
] |
Python: prefer several small modules or one larger module? | 1,219,815 | <p>I'm working on a Python web application in which I have some small modules that serve very specific functions: <code>session.py</code>, <code>logger.py</code>, <code>database.py</code>, etc. And by "small" I really do mean small; each of these files currently includes around 3-5 lines of code, or maybe up to 10 at most. I might have a few imports and a class definition or two in each. I'm wondering, is there any reason I should or shouldn't merge these into one module, something like <code>misc.py</code>?</p>
<p>My thoughts are that having separate modules helps with code clarity, and later on, if by some chance these modules grow to more than 10 lines, I won't feel so bad about having them separated. But on the other hand, it just seems like such a waste to have a bunch of files with only a few lines in each! And is there any significant difference in resource usage between the multi-file vs. single-file approach? (Of course I'm nowhere near the point where I should be worrying about resource usage, but I couldn't resist asking...)</p>
<p>I checked around to see whether this had been asked before and didn't see anything specific to Python, but if it's in fact a duplicate, I'd appreciate being pointed in the right direction.</p>
| 8 | 2009-08-02T20:49:23Z | 1,219,850 | <p>Personally I find it easier to keep things like this in a single file, just for the practicality of editing a smaller number of files in my editor.</p>
<p>The important thing to do is <em>treat</em> the different pieces of code as though they were in separate files, so you ensure that you can trivially separate them later, for the reasons you cite. So for instance, don't introduce dependencies between the different pieces that will make it hard to disentangle them later.</p>
| 3 | 2009-08-02T21:03:37Z | [
"python",
"refactoring",
"module"
] |
Python: prefer several small modules or one larger module? | 1,219,815 | <p>I'm working on a Python web application in which I have some small modules that serve very specific functions: <code>session.py</code>, <code>logger.py</code>, <code>database.py</code>, etc. And by "small" I really do mean small; each of these files currently includes around 3-5 lines of code, or maybe up to 10 at most. I might have a few imports and a class definition or two in each. I'm wondering, is there any reason I should or shouldn't merge these into one module, something like <code>misc.py</code>?</p>
<p>My thoughts are that having separate modules helps with code clarity, and later on, if by some chance these modules grow to more than 10 lines, I won't feel so bad about having them separated. But on the other hand, it just seems like such a waste to have a bunch of files with only a few lines in each! And is there any significant difference in resource usage between the multi-file vs. single-file approach? (Of course I'm nowhere near the point where I should be worrying about resource usage, but I couldn't resist asking...)</p>
<p>I checked around to see whether this had been asked before and didn't see anything specific to Python, but if it's in fact a duplicate, I'd appreciate being pointed in the right direction.</p>
| 8 | 2009-08-02T20:49:23Z | 1,219,867 | <p>Off course you can have as many modules as you like.</p>
<p>But now let as think a little, what happens when we put every small code snippet into one single file.</p>
<p>We will end up in hundreds of import statements in any less trivial module. And off course you could also save a little by having all explicit in seperated files. But guess what: Nobody can remember so many module names and you might end up in searching for the right file anyway ...</p>
<p>I try to put things that belong together in one single file (unless it becomes to big!). But when I have small functions or classes that do not belong to other components in my system, I have "util" modules or the like. I also try to group these for example according to my application layering or seperate them by other means. One seperation criteria could be: Utilities that are used for UI and those that are not.</p>
| 2 | 2009-08-02T21:11:27Z | [
"python",
"refactoring",
"module"
] |
Python: prefer several small modules or one larger module? | 1,219,815 | <p>I'm working on a Python web application in which I have some small modules that serve very specific functions: <code>session.py</code>, <code>logger.py</code>, <code>database.py</code>, etc. And by "small" I really do mean small; each of these files currently includes around 3-5 lines of code, or maybe up to 10 at most. I might have a few imports and a class definition or two in each. I'm wondering, is there any reason I should or shouldn't merge these into one module, something like <code>misc.py</code>?</p>
<p>My thoughts are that having separate modules helps with code clarity, and later on, if by some chance these modules grow to more than 10 lines, I won't feel so bad about having them separated. But on the other hand, it just seems like such a waste to have a bunch of files with only a few lines in each! And is there any significant difference in resource usage between the multi-file vs. single-file approach? (Of course I'm nowhere near the point where I should be worrying about resource usage, but I couldn't resist asking...)</p>
<p>I checked around to see whether this had been asked before and didn't see anything specific to Python, but if it's in fact a duplicate, I'd appreciate being pointed in the right direction.</p>
| 8 | 2009-08-02T20:49:23Z | 1,219,881 | <p>As a user of modules, I greatly prefer when I can include the entire module via a single import. Don't make a user of your package do multiple imports unless there's some reason to allow for importing different alternates.</p>
<p>BTW, there's no reason a single module can't consist of multiple source files. The simplest case is to use an <a href="http://docs.python.org/tutorial/modules.html#packages" rel="nofollow">__init__.py</a> file to simply load all the other code into the module's namespace.</p>
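<p>A sketch of that pattern with entirely hypothetical names (mypkg, session, logger): the package's __init__.py re-exports the submodules' names, so callers need only one import. The snippet builds the package in a temporary directory just to show the idea working:</p>

```python
import os
import sys
import tempfile

# Hypothetical layout: mypkg/__init__.py, mypkg/session.py, mypkg/logger.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)

with open(os.path.join(pkg, "session.py"), "w") as f:
    f.write("def new_session():\n    return 'session started'\n")
with open(os.path.join(pkg, "logger.py"), "w") as f:
    f.write("def log(msg):\n    return 'LOG: ' + msg\n")
# __init__.py pulls the submodules' public names into the package namespace.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from mypkg.session import new_session\n"
            "from mypkg.logger import log\n")

sys.path.insert(0, root)
import mypkg  # one import now exposes both helpers

print(mypkg.new_session())
print(mypkg.log("hello"))
```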
| 4 | 2009-08-02T21:18:18Z | [
"python",
"refactoring",
"module"
] |
Python: prefer several small modules or one larger module? | 1,219,815 | <p>I'm working on a Python web application in which I have some small modules that serve very specific functions: <code>session.py</code>, <code>logger.py</code>, <code>database.py</code>, etc. And by "small" I really do mean small; each of these files currently includes around 3-5 lines of code, or maybe up to 10 at most. I might have a few imports and a class definition or two in each. I'm wondering, is there any reason I should or shouldn't merge these into one module, something like <code>misc.py</code>?</p>
<p>My thoughts are that having separate modules helps with code clarity, and later on, if by some chance these modules grow to more than 10 lines, I won't feel so bad about having them separated. But on the other hand, it just seems like such a waste to have a bunch of files with only a few lines in each! And is there any significant difference in resource usage between the multi-file vs. single-file approach? (Of course I'm nowhere near the point where I should be worrying about resource usage, but I couldn't resist asking...)</p>
<p>I checked around to see whether this had been asked before and didn't see anything specific to Python, but if it's in fact a duplicate, I'd appreciate being pointed in the right direction.</p>
| 8 | 2009-08-02T20:49:23Z | 1,219,914 | <p>Small. </p>
| 0 | 2009-08-02T21:36:19Z | [
"python",
"refactoring",
"module"
] |
Python: prefer several small modules or one larger module? | 1,219,815 | <p>I'm working on a Python web application in which I have some small modules that serve very specific functions: <code>session.py</code>, <code>logger.py</code>, <code>database.py</code>, etc. And by "small" I really do mean small; each of these files currently includes around 3-5 lines of code, or maybe up to 10 at most. I might have a few imports and a class definition or two in each. I'm wondering, is there any reason I should or shouldn't merge these into one module, something like <code>misc.py</code>?</p>
<p>My thoughts are that having separate modules helps with code clarity, and later on, if by some chance these modules grow to more than 10 lines, I won't feel so bad about having them separated. But on the other hand, it just seems like such a waste to have a bunch of files with only a few lines in each! And is there any significant difference in resource usage between the multi-file vs. single-file approach? (Of course I'm nowhere near the point where I should be worrying about resource usage, but I couldn't resist asking...)</p>
<p>I checked around to see whether this had been asked before and didn't see anything specific to Python, but if it's in fact a duplicate, I'd appreciate being pointed in the right direction.</p>
| 8 | 2009-08-02T20:49:23Z | 1,220,447 | <p>For command-line scripts there most likely will not be much difference, unless each invocation imports all files in the module, in which case there will be a slight performance cost as n files need to be opened versus one.</p>
<p>For mod_python there most likely will be no difference, as byte-compiled modules stay alive for the duration of the Apache process.</p>
<p>For Google App Engine, though, there will be a performance hit unless the service is constantly used and "hot", as each cold start requires opening all the files.</p>
| 2 | 2009-08-03T02:12:08Z | [
"python",
"refactoring",
"module"
] |
Python GTK Drag and Drop - Get URL | 1,219,863 | <p>I'm creating a small app that must be able to receive URLs. If the app's window is open, I should be able to drag a link from a browser and drop it into the app - and the app will save the URL to a database.</p>
<p>I'm creating this in Python/GTK. But I am a bit confused about the drag-and-drop functionality in it. So, how do I do it?</p>
<p>Some sample code to implement drag/drop (my app uses a bit of this code)...</p>
<pre><code>import pygtk
pygtk.require('2.0')
import gtk
# function to print out the mime type of the drop item
def drop_cb(wid, context, x, y, time):
    l.set_text('\n'.join([str(t) for t in context.targets]))
    # What should I put here to get the URL of the link?
    context.finish(True, False, time)
    return True
# Create a GTK window and Label, and hook up
# drag n drop signal handlers to the window
w = gtk.Window()
w.set_size_request(200, 150)
w.drag_dest_set(0, [], 0)
w.connect('drag_drop', drop_cb)
w.connect('destroy', lambda w: gtk.main_quit())
l = gtk.Label()
w.add(l)
w.show_all()
# Start the program
gtk.main()
</code></pre>
| 4 | 2009-08-02T21:09:50Z | 1,221,896 | <p>You must fetch the data yourself. Here's a simple working example that will set a label to the dropped URL:</p>
<pre><code>#!/usr/bin/env python
import pygtk
pygtk.require('2.0')
import gtk
def motion_cb(wid, context, x, y, time):
    l.set_text('\n'.join([str(t) for t in context.targets]))
    context.drag_status(gtk.gdk.ACTION_COPY, time)
    # Returning True means "I accept this data".
    return True

def drop_cb(wid, context, x, y, time):
    # Some data was dropped, get the data
    wid.drag_get_data(context, context.targets[-1], time)
    return True

def got_data_cb(wid, context, x, y, data, info, time):
    # Got data.
    l.set_text(data.get_text())
    context.finish(True, False, time)
w = gtk.Window()
w.set_size_request(200, 150)
w.drag_dest_set(0, [], 0)
w.connect('drag_motion', motion_cb)
w.connect('drag_drop', drop_cb)
w.connect('drag_data_received', got_data_cb)
w.connect('destroy', lambda w: gtk.main_quit())
l = gtk.Label()
w.add(l)
w.show_all()
gtk.main()
</code></pre>
| 8 | 2009-08-03T11:47:36Z | [
"python",
"drag-and-drop",
"gtk",
"gdk"
] |
Python GTK Drag and Drop - Get URL | 1,219,863 | <p>I'm creating a small app that must be able to receive URLs. If the app's window is open, I should be able to drag a link from a browser and drop it into the app - and the app will save the URL to a database.</p>
<p>I'm creating this in Python/GTK. But I am a bit confused about the drag-and-drop functionality in it. So, how do I do it?</p>
<p>Some sample code to implement drag/drop (my app uses a bit of this code)...</p>
<pre><code>import pygtk
pygtk.require('2.0')
import gtk
# function to print out the mime type of the drop item
def drop_cb(wid, context, x, y, time):
    l.set_text('\n'.join([str(t) for t in context.targets]))
    # What should I put here to get the URL of the link?
    context.finish(True, False, time)
    return True
# Create a GTK window and Label, and hook up
# drag n drop signal handlers to the window
w = gtk.Window()
w.set_size_request(200, 150)
w.drag_dest_set(0, [], 0)
w.connect('drag_drop', drop_cb)
w.connect('destroy', lambda w: gtk.main_quit())
l = gtk.Label()
w.add(l)
w.show_all()
# Start the program
gtk.main()
</code></pre>
| 4 | 2009-08-02T21:09:50Z | 5,389,964 | <p>To make sure you get only the first file or directory when a list of files is dropped from your file explorer, you could use something like:</p>
<pre><code>data.get_text().split(None, 1)[0]
</code></pre>
<p>The code for the "got_data_cb" method would then look like this:</p>
<pre><code>def got_data_cb(wid, context, x, y, data, info, time):
    # Got data.
    l.set_text(data.get_text().split(None, 1)[0])
    context.finish(True, False, time)
</code></pre>
<p>This would split the data by any whitespace and return the first item.</p>
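The splitting trick itself can be tried outside of GTK. A minimal sketch, using a hypothetical text/uri-list payload (the file paths are made up for illustration):

```python
# A hypothetical text/uri-list payload as a file manager might deliver it;
# the paths here are invented for illustration.
payload = "file:///home/user/report.pdf\r\nfile:///home/user/notes.txt\r\n"

# split(None, 1) splits on the first run of whitespace, so [0] is just
# the first URI. Spaces inside real URIs arrive percent-encoded (%20),
# so embedded spaces in filenames do not trip up the split.
first_uri = payload.split(None, 1)[0]
print(first_uri)  # file:///home/user/report.pdf
```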
| 3 | 2011-03-22T10:42:03Z | [
"python",
"drag-and-drop",
"gtk",
"gdk"
] |
Python GTK Drag and Drop - Get URL | 1,219,863 | <p>I'm creating a small app that must be able to receive URLs. If the app's window is open, I should be able to drag a link from a browser and drop it into the app - and the app will save the URL to a database.</p>
<p>I'm creating this in Python/GTK. But I am a bit confused about the drag-and-drop functionality in it. So, how do I do it?</p>
<p>Some sample code to implement drag/drop (my app uses a bit of this code)...</p>
<pre><code>import pygtk
pygtk.require('2.0')
import gtk
# function to print out the mime type of the drop item
def drop_cb(wid, context, x, y, time):
    l.set_text('\n'.join([str(t) for t in context.targets]))
    # What should I put here to get the URL of the link?
    context.finish(True, False, time)
    return True
# Create a GTK window and Label, and hook up
# drag n drop signal handlers to the window
w = gtk.Window()
w.set_size_request(200, 150)
w.drag_dest_set(0, [], 0)
w.connect('drag_drop', drop_cb)
w.connect('destroy', lambda w: gtk.main_quit())
l = gtk.Label()
w.add(l)
w.show_all()
# Start the program
gtk.main()
</code></pre>
| 4 | 2009-08-02T21:09:50Z | 15,137,112 | <p>The only solution that works for me is:</p>
<pre><code>def got_data_cb(wid, context, x, y, data, info, time):
    # Got data.
    l.set_text(data.get_uris()[0])
    context.finish(True, False, time)
</code></pre>
| 1 | 2013-02-28T13:45:38Z | [
"python",
"drag-and-drop",
"gtk",
"gdk"
] |
Python GTK Drag and Drop - Get URL | 1,219,863 | <p>I'm creating a small app that must be able to receive URLs. If the app's window is open, I should be able to drag a link from a browser and drop it into the app - and the app will save the URL to a database.</p>
<p>I'm creating this in Python/GTK. But I am a bit confused about the drag-and-drop functionality in it. So, how do I do it?</p>
<p>Some sample code to implement drag/drop (my app uses a bit of this code)...</p>
<pre><code>import pygtk
pygtk.require('2.0')
import gtk
# function to print out the mime type of the drop item
def drop_cb(wid, context, x, y, time):
    l.set_text('\n'.join([str(t) for t in context.targets]))
    # What should I put here to get the URL of the link?
    context.finish(True, False, time)
    return True
# Create a GTK window and Label, and hook up
# drag n drop signal handlers to the window
w = gtk.Window()
w.set_size_request(200, 150)
w.drag_dest_set(0, [], 0)
w.connect('drag_drop', drop_cb)
w.connect('destroy', lambda w: gtk.main_quit())
l = gtk.Label()
w.add(l)
w.show_all()
# Start the program
gtk.main()
</code></pre>
| 4 | 2009-08-02T21:09:50Z | 38,056,395 | <p>The following code is ported from <a href="http://www.pygtk.org/pygtk2tutorial/examples/dragtargets.py" rel="nofollow">an example of the (old) PyGTK tutorial</a> which I guess inspired <a href="http://stackoverflow.com/a/1221896/2015768">the accepted answer</a>, but with pygi:</p>
<pre><code>#!/usr/bin/env python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk
def motion_cb(wid, context, x, y, time):
    Gdk.drag_status(context, Gdk.DragAction.COPY, time)
    return True

def drop_cb(wid, context, x, y, time):
    l.set_text('\n'.join([str(t) for t in context.list_targets()]))
    context.finish(True, False, time)
    return True
w = Gtk.Window()
w.set_size_request(200, 150)
w.drag_dest_set(0, [], 0)
w.connect('drag-motion', motion_cb)
w.connect('drag-drop', drop_cb)
w.connect('destroy', lambda w: Gtk.main_quit())
l = Gtk.Label()
w.add(l)
w.show_all()
Gtk.main()
</code></pre>
| 0 | 2016-06-27T14:21:40Z | [
"python",
"drag-and-drop",
"gtk",
"gdk"
] |
Storing data and searching by metadata? | 1,220,440 | <p>Let's say I have a set of data where each row is a pair of coordinates: (X, Y). Associated with each point I have arbitrary metadata, such as <code>{color: yellow}</code> or <code>{age: 2 years}</code>.</p>
<p>I'd like to be able to store the data and metadata in such a way that I can query the metadata (eg: <code>[rows where {age: 2 years, color: yellow}]</code>) and in return receive all of the matching coordinate rows.</p>
<p>There are no predefined metadata columns or values, nor will all coordinate rows necessarily have the same metadata columns. What would be the best way to store this data for the fastest access? Would it be possible using something such as Tokyo Cabinet (without Tokyo Tyrant) or SQLite, or is there a better option?</p>
| 0 | 2009-08-03T02:08:17Z | 1,220,446 | <p>Any relational database should be able to handle something like that (you'd basically just be doing a join between a couple of tables, one for the data and one for the metadata). SQLite should work fine.</p>
<p>Your first table would have the data itself with a unique ID for each entry. Then your second table would have three columns: metadata key, metadata value, and the associated entry ID.</p>
<p>Example data table:</p>
<pre><code>ID Data
--------
1 (1,1)
2 (7,4)
3 (2,3)
</code></pre>
<p>Example metadata table:</p>
<pre><code>ID Key Value
--------------------------
1 "color" yellow
1 "age" 3
2 "color" "blue"
2 "age" 2
3 "color" "blue"
3 "age" 4
3 "loc" "usa"
</code></pre>
<p>Then if you wanted to search for all data points with an age of at least 3, you'd use a query like this:</p>
<pre><code>SELECT datatable.*
FROM datatable
JOIN metadatatable ON datatable.ID = metadatatable.ID
WHERE metadatatable.Key = 'age' AND metadatatable.Value >= 3
</code></pre>
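<p>A quick runnable sketch of this design using the stdlib sqlite3 module; the table and column names follow the example tables above, and the query adds the explicit join the example implies:</p>

```python
import sqlite3

# Build the two-table layout from the example in memory.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE datatable (ID INTEGER PRIMARY KEY, Data TEXT);
    CREATE TABLE metadatatable (ID INTEGER, Key TEXT, Value TEXT);
""")
conn.executemany("INSERT INTO datatable VALUES (?, ?)",
                 [(1, "(1,1)"), (2, "(7,4)"), (3, "(2,3)")])
conn.executemany("INSERT INTO metadatatable VALUES (?, ?, ?)",
                 [(1, "age", "3"), (2, "age", "2"), (3, "age", "4")])

# All data points with an age of at least 3.
rows = conn.execute("""
    SELECT datatable.Data FROM datatable
    JOIN metadatatable ON datatable.ID = metadatatable.ID
    WHERE metadatatable.Key = 'age'
      AND CAST(metadatatable.Value AS INTEGER) >= 3
    ORDER BY datatable.ID
""").fetchall()
print(rows)  # [('(1,1)',), ('(2,3)',)]
```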
| 2 | 2009-08-03T02:12:02Z | [
"python",
"metadata"
] |
Storing data and searching by metadata? | 1,220,440 | <p>Let's say I have a set of data where each row is a pair of coordinates: (X, Y). Associated with each point I have arbitrary metadata, such as <code>{color: yellow}</code> or <code>{age: 2 years}</code>.</p>
<p>I'd like to be able to store the data and metadata in such a way that I can query the metadata (eg: <code>[rows where {age: 2 years, color: yellow}]</code>) and in return receive all of the matching coordinate rows.</p>
<p>There are no predefined metadata columns or values, nor will all coordinate rows necessarily have the same metadata columns. What would be the best way to store this data for the fastest access? Would it be possible using something such as Tokyo Cabinet (without Tokyo Tyrant) or SQLite, or is there a better option?</p>
| 0 | 2009-08-03T02:08:17Z | 1,220,471 | <p>Since the columns are neither predefined nor consistent across all rows you have to either go
with bigtable type implementations such as google appengine (exapndo models w/listproperty) or cassandra/hbase etc. (see <a href="http://en.wikipedia.org/wiki/BigTable" rel="nofollow">http://en.wikipedia.org/wiki/BigTable</a>)</p>
<p>For simple implementations using sqlite you could create a string field formatted as </p>
<pre><code>f1 | f2 | metadata as string
x1 | y1 | cola:val-a1 colb:val-b1 colc:val-c1
x2 | y2 | cola:val-a2 colx:val-x2
and use SELECT * from table WHERE metadata like "%cola:val-a2%"
</code></pre>
| 0 | 2009-08-03T02:27:01Z | [
"python",
"metadata"
] |
Storing data and searching by metadata? | 1,220,440 | <p>Let's say I have a set of data where each row is a pair of coordinates: (X, Y). Associated with each point I have arbitrary metadata, such as <code>{color: yellow}</code> or <code>{age: 2 years}</code>.</p>
<p>I'd like to be able to store the data and metadata in such a way that I can query the metadata (eg: <code>[rows where {age: 2 years, color: yellow}]</code>) and in return receive all of the matching coordinate rows.</p>
<p>There are no predefined metadata columns or values, nor will all coordinate rows necessarily have the same metadata columns. What would be the best way to store this data for the fastest access? Would it be possible using something such as Tokyo Cabinet (without Tokyo Tyrant) or SQLite, or is there a better option?</p>
| 0 | 2009-08-03T02:08:17Z | 1,220,533 | <p>Using @Dav's schema, a way to get " [all coordinate rows where age=2 and color=blue] " is (assuming (ID, Key, Value) is Unique in metadatatable, i.e., the latter has no entirely duplicate rows):</p>
<pre><code>SELECT datatable.Data
FROM datatable
JOIN metadatatable AS m USING (ID)
WHERE (m.Key = 'age' AND m.Value = 2)
   OR (m.Key = 'color' AND m.Value = 'blue')
GROUP BY datatable.ID, datatable.Data
HAVING COUNT(*) = 2
</code></pre>
| 1 | 2009-08-03T03:09:50Z | [
"python",
"metadata"
] |
"Watching" program being proccessed line-by-line? | 1,220,465 | <p>I'm looking for a debugging tool that will run my Python app, but display which line is currently being processed -- like an automatically stepping debugger. Basically I want to see what is going on, but be able to jump in if a traceback occurs.</p>
<p>Any suggestions?</p>
| 1 | 2009-08-03T02:24:32Z | 1,220,474 | <p>The integrated debugger in <a href="http://www.wingware.com/" rel="nofollow">Wing IDE</a> is quite versatile and nice to work with. (The <a href="http://www.wingware.com/wingide-101/index" rel="nofollow">Wing IDE 101</a> version is freeware.)</p>
| 0 | 2009-08-03T02:28:03Z | [
"python",
"debugging"
] |
"Watching" program being proccessed line-by-line? | 1,220,465 | <p>I'm looking for a debugging tool that will run my Python app, but display which line is currently being processed -- like an automatically stepping debugger. Basically I want to see what is going on, but be able to jump in if a traceback occurs.</p>
<p>Any suggestions?</p>
| 1 | 2009-08-03T02:24:32Z | 1,220,500 | <p>I think you're looking for the <a href="http://docs.python.org/library/pdb" rel="nofollow">pdb</a> module.</p>
| 1 | 2009-08-03T02:47:06Z | [
"python",
"debugging"
] |
"Watching" program being proccessed line-by-line? | 1,220,465 | <p>I'm looking for a debugging tool that will run my Python app, but display which line is currently being processed -- like an automatically stepping debugger. Basically I want to see what is going on, but be able to jump in if a traceback occurs.</p>
<p>Any suggestions?</p>
| 1 | 2009-08-03T02:24:32Z | 1,220,539 | <p><a href="http://winpdb.org/" rel="nofollow">Winpdb</a> is a good python debugger. It is written in Python under the GPL, so adding the automatic stepping functionality you want should not be too complicated. </p>
| 3 | 2009-08-03T03:13:45Z | [
"python",
"debugging"
] |
"Watching" program being proccessed line-by-line? | 1,220,465 | <p>I'm looking for a debugging tool that will run my Python app, but display which line is currently being processed -- like an automatically stepping debugger. Basically I want to see what is going on, but be able to jump in if a traceback occurs.</p>
<p>Any suggestions?</p>
| 1 | 2009-08-03T02:24:32Z | 1,221,572 | <p>"Basically I want to see what is going on, but be able to jump in if a traceback occurs."</p>
<p>Here's a radical thought: Don't.</p>
<p>"Watching" is a crutch. You should be writing small sections of code that you <strong>know</strong> will work. Then put those together.</p>
<p>Watching sometimes results from "I'm not sure what Python's really doing", so there's an urge to "watch" execution and see what's happening. Other times watching results from writing a script that's too big and complex without proper decomposition. Sometimes watching results from having a detailed specification which was translated into Python without a deep understanding. I've seen people doing these; of course, there are lots more reasons.</p>
<p>The advice, however, is the same for all:</p>
<ol>
<li><p>Break things into small pieces, usually classes or functions. Make them simple enough that you can actually understand what Python is doing.</p></li>
<li><p>Knit them together to create your larger application from pieces you actually understand.</p></li>
</ol>
<p>Watching will limit your ability to actually write working software. It will -- in a very real way -- limit you to trivial programming exercises. It's not a good learning tool; and it's a perfectly awful way to create production code.</p>
<p><strong>Bottom Line</strong>.</p>
<p>Don't pursue "watching". Decompose into smaller pieces so you don't need to watch.</p>
| 1 | 2009-08-03T10:17:47Z | [
"python",
"debugging"
] |
"Watching" program being proccessed line-by-line? | 1,220,465 | <p>I'm looking for a debugging tool that will run my Python app, but display which line is currently being processed -- like an automatically stepping debugger. Basically I want to see what is going on, but be able to jump in if a traceback occurs.</p>
<p>Any suggestions?</p>
| 1 | 2009-08-03T02:24:32Z | 1,222,253 | <p>There is also a very nice debugger in the <a href="http://pydev.sourceforge.net/" rel="nofollow">Python Eclipse plugin</a>.</p>
| 0 | 2009-08-03T13:06:24Z | [
"python",
"debugging"
] |
"Watching" program being proccessed line-by-line? | 1,220,465 | <p>I'm looking for a debugging tool that will run my Python app, but display which line is currently being processed -- like an automatically stepping debugger. Basically I want to see what is going on, but be able to jump in if a traceback occurs.</p>
<p>Any suggestions?</p>
| 1 | 2009-08-03T02:24:32Z | 1,224,093 | <p>Try <a href="http://ipython.scipy.org/" rel="nofollow">IPython</a> along with <a href="http://pypi.python.org/pypi/ipdb/" rel="nofollow">ipdb</a></p>
| 0 | 2009-08-03T19:07:54Z | [
"python",
"debugging"
] |
How to make authkit session cookie HttpOnly in pylons? | 1,220,555 | <p>I use the authkit module with Pylons, and I see that the session cookie it sets (aptly named authkit) is not set to be HttpOnly.</p>
<p>Is there a simple way to make it HttpOnly? (By "simple" I mean the one that does not involve hacking authkit's code.)</p>
| 1 | 2009-08-03T03:21:24Z | 1,220,664 | <p>This is not documented in authkit, because it only started working in Python 2.6 (see <a href="http://docs.python.org/library/cookie.html#Cookie.Morsel" rel="nofollow">here</a>), but if you do have Python 2.6 then </p>
<pre><code>authkit.cookie.params.httponly = true
</code></pre>
<p>in the config should work and do what you desire.</p>
<p>authkit internally uses a <code>Cookie.SimpleCookie</code>, and that's what limits the keys you can have for the <code>authkit.cookie.params.</code> -- up to Python 2.5 they were only the keys supported by the standard, <a href="http://www.faqs.org/rfcs/rfc2109.html" rel="nofollow">RFC 2109</a>, but in Python 2.6 the useful <code>httponly</code> extension was added -- which is how authkit gained support for it automatically... because, quite properly, it doesn't do its own checks but rather delegates all checks to <code>SimpleCookie</code>.</p>
<p>If you're stuck with Python 2.5 or earlier, then to make this work will require a little more effort (not changing authkit, but monkeypatching Python's Cookie.py, or better, if feasible, installing a newer version of Cookie.py from the Python 2.6 sources in a directory that's earlier in sys.path than the directory for Python's own standard library).</p>
| 1 | 2009-08-03T04:24:55Z | [
"python",
"cookies",
"pylons",
"authkit"
] |
Need to get the rest of an iterator in Python | 1,220,640 | <p>Say I have an iterator.<br />
After iterating over a few items of the iterator, I need to get rid of those first few items and return an iterator (preferably the same one) with the rest of the items. How do I go about this?
Also, do iterators support remove or pop operations (like lists)?</p>
| 2 | 2009-08-03T04:11:13Z | 1,220,642 | <p>Yes, just use iter.next()</p>
<p>Example</p>
<pre><code>iter = xrange(3).__iter__()  # note: this shadows the builtin iter()
iter.next()  # this pops 0
for i in iter:
    print i
# prints:
# 1
# 2
</code></pre>
<p>You can pop off the front of an iterator with .next(). You cannot do any other fancy operations.</p>
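<p>In newer Pythons (2.6+, and Python 3 where the <code>.next()</code> method is gone entirely), the same pop is spelled with the <code>next()</code> builtin:</p>

```python
it = iter(range(3))
next(it)        # discards 0; raises StopIteration if the iterator is empty
rest = list(it)
print(rest)     # [1, 2]
```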
| 6 | 2009-08-03T04:12:33Z | [
"python",
"iterator"
] |
Need to get the rest of an iterator in Python | 1,220,640 | <p>Say I have an iterator.<br />
After iterating over a few items of the iterator, I need to get rid of those first few items and return an iterator (preferably the same one) with the rest of the items. How do I go about this?
Also, do iterators support remove or pop operations (like lists)?</p>
| 2 | 2009-08-03T04:11:13Z | 1,221,400 | <p>The <a href="http://docs.python.org/library/itertools.html#itertools.dropwhile" rel="nofollow"><code>itertools.dropwhile()</code></a> function might be helpful, too:</p>
<pre><code>dropwhile(lambda x: x<5, xrange(10))
</code></pre>
| 6 | 2009-08-03T09:25:25Z | [
"python",
"iterator"
] |