title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 25,797,695 | <p>When we do this:</p>
<pre><code>def foo(a=[]):
...
</code></pre>
<p>... we assign the argument <code>a</code> to an <em>unnamed</em> list, if the caller does not pass a value for <code>a</code>.</p>
<p>To make things simpler for this discussion, let's temporarily give the unnamed list a name. How about <code>pavlo</code> ?</p>
<pre><code>def foo(a=pavlo):
...
</code></pre>
<p>At any time, if the caller doesn't tell us what <code>a</code> is, we reuse <code>pavlo</code>.</p>
<p>If <code>pavlo</code> is mutable (modifiable) and <code>foo</code> ends up modifying it, we notice the effect the next time <code>foo</code> is called without specifying <code>a</code>.</p>
<p>So this is what you see (Remember, <code>pavlo</code> is initialized to []):</p>
<pre><code>>>> foo()
[5]
</code></pre>
<p>Now, <code>pavlo</code> is [5].</p>
<p>Calling <code>foo()</code> again modifies <code>pavlo</code> again:</p>
<pre><code>>>> foo()
[5, 5]
</code></pre>
<p>Specifying <code>a</code> when calling <code>foo()</code> ensures <code>pavlo</code> is not touched.</p>
<pre><code>>>> ivan = [1, 2, 3, 4]
>>> foo(a=ivan)
[1, 2, 3, 4, 5]
>>> ivan
[1, 2, 3, 4, 5]
</code></pre>
<p>So, <code>pavlo</code> is still <code>[5, 5]</code>.</p>
<pre><code>>>> foo()
[5, 5, 5]
</code></pre>
| 10 | 2014-09-11T22:05:43Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 28,354,667 | <p>I sometimes exploit this behavior as an alternative to the following pattern:</p>
<pre><code>singleton = None
def use_singleton():
global singleton
if singleton is None:
singleton = _make_singleton()
return singleton.use_me()
</code></pre>
<p>If <code>singleton</code> is only used by <code>use_singleton</code>, I like the following pattern as a replacement:</p>
<pre><code># _make_singleton() is called only once when the def is executed
def use_singleton(singleton=_make_singleton()):
return singleton.use_me()
</code></pre>
<p>I've used this for instantiating client classes that access external resources, and also for creating dicts or lists for memoization.</p>
<p>Since I don't think this pattern is well known, I do put a short comment in to guard against future misunderstandings.</p>
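<p>For example, the memoization variant mentioned above can be sketched like this (the function name <code>fib</code> and the cache parameter <code>_cache</code> are illustrative, not from the answer):</p>

```python
# Sketch: the mutable default acts as a per-function memo cache.
# It is created exactly once, when the def statement executes.
def fib(n, _cache={}):
    if n not in _cache:
        _cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return _cache[n]

print(fib(30))  # later calls reuse the cache stored in fib.__defaults__
```

<p>As with the singleton pattern above, the cache is private to the function and no <code>global</code> statement is needed.</p>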
| 11 | 2015-02-05T21:44:51Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 29,290,566 | <p>Already busy topic, but from what I read here, the following helped me realizing how it's working internally:</p>
<pre><code>def bar(a=[]):
print id(a)
a = a + [1]
print id(a)
return a
>>> bar()
4484370232
4484524224
[1]
>>> bar()
4484370232
4484524152
[1]
>>> bar()
4484370232 # Never changes: this list lives on the function object itself
4484523720 # Always a new object
[1]
>>> id(bar.func_defaults[0])
4484370232
</code></pre>
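<p>The same demonstration can be sketched for Python 3 (note that <code>func_defaults</code> became <code>__defaults__</code>; the exact <code>id</code> values will of course differ per run):</p>

```python
def bar(a=[]):
    stored = id(a)   # id of the list held in bar.__defaults__
    a = a + [1]      # a + [1] builds a NEW list; the default is never mutated
    return stored, a

first, r1 = bar()
second, r2 = bar()
assert first == second                   # the stored default keeps its identity
assert r1 == r2 == [1]                   # because + rebinds instead of mutating
assert id(bar.__defaults__[0]) == first  # same object as seen inside the body
```

<p>This is why <code>a = a + [1]</code> behaves "safely" here while <code>a.append(1)</code> would not: rebinding leaves the stored default untouched, mutation does not.</p>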
| 15 | 2015-03-26T23:14:01Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 29,344,819 | <h1>5 points in defense of Python</h1>
<ol>
<li><p><strong>Simplicity</strong>: The behavior is simple in the following sense:
Most people fall into this trap only once, not several times.</p></li>
<li><p><strong>Consistency</strong>: Python <em>always</em> passes objects, not names.
The default parameter is, obviously, part of the function
heading (not the function body). It therefore ought to be evaluated
at module load time (and only at module load time, unless nested), not
at function call time.</p></li>
<li><p><strong>Usefulness</strong>: As Fredrik Lundh points out in his explanation
of <a href="http://effbot.org/zone/default-values.htm#valid-uses-for-mutable-defaults" rel="nofollow">"Default Parameter Values in Python"</a>, the
current behavior can be quite useful for advanced programming.
(Use sparingly.)</p></li>
<li><p><strong>Sufficient documentation</strong>: In the most basic Python documentation,
the tutorial, the issue is loudly announced as
an <strong>"Important warning"</strong> in the <em>first</em> subsection of Section
<a href="https://docs.python.org/3/tutorial/controlflow.html#default-argument-values" rel="nofollow">"More on Defining Functions"</a>.
The warning even uses boldface,
which is rarely applied outside of headings.
RTFM: Read the fine manual.</p></li>
<li><p><strong>Meta-learning</strong>: Falling into the trap is actually a very
helpful moment (at least if you are a reflective learner),
because you will subsequently better understand the point
"Consistency" above and that will
teach you a great deal about Python.</p></li>
</ol>
| 26 | 2015-03-30T11:18:25Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 30,177,062 | <p>A very subtle issue is being pointed out here. Thanks for all the insights.</p>
<p>I ran into a similar problem and found a fix for this.</p>
<p><strong>It's always safe to clean the vessel before we start cooking</strong></p>
<p><em>safe version:</em> </p>
<pre><code>def foo(bird=[]):
del bird[:] # <--- clean it
bird.append('parrot')
return bird
>>> foo() #1st call
['parrot']
>>> foo() #2nd call
['parrot']
>>> foo() #3rd call
['parrot']
>>> foo() #nth call
['parrot']
</code></pre>
<p>Now, independent of the number of times you call, this will work as "expected"</p>
| 2 | 2015-05-11T20:28:57Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 30,447,095 | <p>Just change the function to be:</p>
<pre><code>def notastonishinganymore(a=[]):
    '''The name is just a joke :)'''
    del a[:]
    a.append(5)
    return a
</code></pre>
| 2 | 2015-05-25T23:04:44Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 32,535,706 | <p>I am going to demonstrate an alternative structure to pass a default list value to a function (it works equally well with dictionaries). </p>
<p>As others have extensively commented, the list parameter is bound to the function when it is defined, as opposed to each time it is executed. Because lists and dictionaries are mutable, any alteration to this parameter will affect other calls to this function. As a result, subsequent calls to the function will receive this shared list, which may have been altered by any other calls to the function. Worse yet, two callers can be using this function's shared default at the same time, each oblivious to the changes made by the other.</p>
<p><strong>Wrong Method (probably...)</strong>:</p>
<pre><code>def foo(list_arg=[5]):
return list_arg
a = foo()
a.append(6)
>>> a
[5, 6]
b = foo()
b.append(7)
# The value of 6 appended to variable 'a' is now part of the list held by 'b'.
>>> b
[5, 6, 7]
# Although 'a' is expecting to receive 6 (the last element it appended to the list),
# it actually receives the last element appended to the shared list.
# It thus receives the value 7 previously appended by 'b'.
>>> a.pop()
7
</code></pre>
<p>You can verify that they are one and the same object by using <code>id</code>:</p>
<pre><code>>>> id(a)
5347866528
>>> id(b)
5347866528
</code></pre>
<p>Per Brett Slatkin's "Effective Python: 59 Specific Ways to Write Better Python", <em>Item 20: Use <code>None</code> and Docstrings to specify dynamic default arguments</em> (p. 48)</p>
<blockquote>
<p>The convention for achieving the desired result in Python is to
provide a default value of <code>None</code> and to document the actual behaviour
in the docstring.</p>
</blockquote>
<p>This implementation ensures that each call to the function either receives the default list or else the list passed to the function.</p>
<p><strong>Preferred Method</strong>:</p>
<pre><code>def foo(list_arg=None):
"""
:param list_arg: A list of input values.
If none provided, used a list with a default value of 5.
"""
    if list_arg is None:
list_arg = [5]
return list_arg
a = foo()
a.append(6)
>>> a
[5, 6]
b = foo()
b.append(7)
>>> b
[5, 7]
c = foo([10])
c.append(11)
>>> c
[10, 11]
</code></pre>
<p>There may be legitimate use cases for the 'Wrong Method' whereby the programmer intended the default list parameter to be shared, but this is more likely the exception than the rule.</p>
| 7 | 2015-09-12T06:00:51Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 34,172,768 | <h2>Why don't you introspect?</h2>
<p>I'm actually <em>really</em> surprised no one has performed the insightful introspection offered by Python (<code>2</code> and <code>3</code> certainly apply) on callables. </p>
<p>Given a simple little function <code>func</code> defined as:</p>
<pre><code>>>> def func(a = []):
... a.append(5)
</code></pre>
<p>When Python encounters it, the first thing it will do is compile it in order to create a <code>code</code> object for this function. While this compilation step is done, <em>Python <strong>stores</strong> the <code>list</code> reference in the function object itself</em>; as the top answer mentioned, the list <code>a</code> can now be considered a <em>member</em> of the function <code>func</code>.</p>
<p>Let's do some introspection, a before and after, and examine how the actual list gets expanded <strong>inside</strong> the function object. I'm using <code>Python 3.x</code>, even though the same applies for Python 2 (<code>__defaults__</code> is present in 2 if I'm not mistaken; if it isn't, <code>func_defaults</code> definitely is).</p>
<h3>Function Before Execution:</h3>
<pre><code>>>> def func(a = []):
... a.append(5)
...
</code></pre>
<p>After Python executes this definition it will take any default parameters specified (<code>a = []</code> here) and <a href="https://docs.python.org/3/reference/datamodel.html#the-standard-type-hierarchy" rel="nofollow">cram them in the <code>__defaults__</code> attribute for the function object</a> (relevant section: Callables): </p>
<pre><code>>>> func.__defaults__
([],)
</code></pre>
<p>Ok, so an empty list as the single entry in <code>__defaults__</code>, just as expected. </p>
<h3>Function After Execution:</h3>
<p>Let's now execute this function:</p>
<pre><code>>>> func()
</code></pre>
<p>Now, let's see those <code>__defaults__</code> again: </p>
<pre><code>>>> func.__defaults__
([5],)
</code></pre>
<p><em>Astonished?</em> The value inside the object changes! Consecutive calls to the function will now simply append to that embedded <code>list</code> object:</p>
<pre><code>>>> func(); func(); func()
>>> func.__defaults__
([5, 5, 5, 5],)
</code></pre>
<p>So, there you have it, the reason why this <em>flaw</em> (ehem) happens is because default arguments are part of the function object. There's nothing weird going on here, it's all just a bit surprising.</p>
<hr>
<p>To further verify that the list in <code>__defaults__</code> is the same as that used in the function <code>func</code>, you can just change your function to return the <code>id</code> of the list <code>a</code> used inside the function body. Then, compare it to the list in <code>__defaults__</code> (position <code>[0]</code> in <code>__defaults__</code>) and you'll see how these are indeed referring to the same list instance:</p>
<pre><code>>>> def func(a = []):
... a.append(5)
... return id(a)
>>>
>>> id(func.__defaults__[0]) == func()
True
</code></pre>
<p>All with the power of introspection! </p>
| 14 | 2015-12-09T07:13:28Z | [
"python",
"language-design",
"least-astonishment"
] |
"Least Astonishment" and the Mutable Default Argument | 1,132,941 | <p>Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:</p>
<pre><code>def foo(a=[]):
a.append(5)
return a
</code></pre>
<p>Python novices would expect this function to always return a list with only one element: <code>[5]</code>. The result is instead very different, and very astonishing (for a novice):</p>
<pre><code>>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
</code></pre>
<p>A manager of mine once had his first encounter with this feature, and called it "a dramatic design flaw" of the language. I replied that the behavior had an underlying explanation, and it is indeed very puzzling and unexpected if you don't understand the internals. However, I was not able to answer (to myself) the following question: what is the reason for binding the default argument at function definition, and not at function execution? I doubt the experienced behavior has a practical use (who really used static variables in C, without breeding bugs?)</p>
<p><strong>Edit</strong>: </p>
<p>Baczek made an interesting example. Together with most of your comments and Utaal's in particular, I elaborated further:</p>
<pre><code>>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
</code></pre>
<p>To me, it seems that the design decision was relative to where to put the scope of parameters: inside the function or "together" with it?</p>
<p>Doing the binding inside the function would mean that <code>x</code> is effectively bound to the specified default when the function is called, not defined, something that would present a deep flaw: the <code>def</code> line would be "hybrid" in the sense that part of the binding (of the function object) would happen at definition, and part (assignment of default parameters) at function invocation time.</p>
<p>The actual behavior is more consistent: everything of that line gets evaluated when that line is executed, meaning at function definition.</p>
| 1,504 | 2009-07-15T18:00:37Z | 36,968,932 | <h1>Python: The Mutable Default Argument</h1>
<p>Default arguments get evaluated at the time the function is compiled into a function object. When the function uses them, even across multiple calls, they are and remain the same object. </p>
<p>When they are mutable and get mutated (for example, by adding an element), they remain mutated on consecutive calls.</p>
<p>They stay mutated because they are the same object each time.</p>
<h2>Demonstration</h2>
<p>Here's a demonstration - you can verify that they are the same object each time they are referenced by </p>
<ul>
<li>seeing that the list is created before the function has finished compiling to a function object,</li>
<li>observing that the id is the same each time the list is referenced,</li>
<li>observing that the list stays changed when the function that uses it is called a second time,</li>
<li>observing the order in which the output is printed from the source (which I conveniently numbered for you):</li>
</ul>
<p><code>example.py</code></p>
<pre><code>print('1. Global scope being evaluated')
def create_list():
'''noisily create a list for usage as a kwarg'''
l = []
print('3. list being created and returned, id: ' + str(id(l)))
return l
print('2. example_function about to be compiled to an object')
def example_function(default_kwarg1=create_list()):
print('appending "a" in default default_kwarg1')
default_kwarg1.append("a")
print('list with id: ' + str(id(default_kwarg1)) +
' - is now: ' + repr(default_kwarg1))
print('4. example_function compiled: ' + repr(example_function))
if __name__ == '__main__':
print('5. calling example_function twice!:')
example_function()
example_function()
</code></pre>
<p>and running it with <code>python example.py</code>:</p>
<pre><code>1. Global scope being evaluated
2. example_function about to be compiled to an object
3. list being created and returned, id: 140502758808032
4. example_function compiled: <function example_function at 0x7fc9590905f0>
5. calling example_function twice!:
appending "a" in default default_kwarg1
list with id: 140502758808032 - is now: ['a']
appending "a" in default default_kwarg1
list with id: 140502758808032 - is now: ['a', 'a']
</code></pre>
<h2>Does this violate the principle of "Least Astonishment"?</h2>
<p>This order of execution is frequently confusing to new users of Python. If you understand the Python execution model, then it becomes quite expected. </p>
<h2>The usual instruction to new Python users:</h2>
<p>But this is why the usual instruction to new users is to create their default arguments like this instead:</p>
<pre><code>def example_function_2(default_kwarg=None):
if default_kwarg is None:
default_kwarg = []
</code></pre>
<p>This uses the None singleton as a sentinel object to tell the function whether or not we've gotten an argument other than the default. If we get no argument, then we actually want to use a new empty list, <code>[]</code>, as the default.</p>
<p>As the <a href="https://docs.python.org/tutorial/controlflow.html#default-argument-values" rel="nofollow">tutorial section on control flow</a> says:</p>
<blockquote>
<p>If you don't want the default to be shared between subsequent calls,
you can write the function like this instead:</p>
<pre><code>def f(a, L=None):
if L is None:
L = []
L.append(a)
return L
</code></pre>
</blockquote>
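<p>A quick check of that sentinel pattern (reusing the <code>example_function_2</code> name from above, with Python 3 print syntax): each call that omits the argument now gets a fresh list.</p>

```python
def example_function_2(default_kwarg=None):
    # None is a safe sentinel: it is immutable, so nothing can mutate
    # the shared default between calls.
    if default_kwarg is None:
        default_kwarg = []  # a brand-new list on every defaulted call
    default_kwarg.append("a")
    return default_kwarg

print(example_function_2())  # ['a']
print(example_function_2())  # still ['a'], not ['a', 'a']
```

Passing an explicit list still works as usual, since the sentinel branch is skipped.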
| 4 | 2016-05-01T16:20:44Z | [
"python",
"language-design",
"least-astonishment"
] |
Django serialization of inherited model | 1,132,980 | <p>I have a problem with serialization of Django inherited models. For example</p>
<pre><code>class Animal(models.Model):
color = models.CharField(max_length=50)
class Dog(Animal):
name = models.CharField(max_length=50)
...
# now I want to serialize Dog model with Animal inherited fields obviously included
print serializers.serialize('xml', Dog.objects.all())
</code></pre>
<p>and only Dog model has been serialized.</p>
<p>I can do something like </p>
<pre><code>all_objects = list(Animal.objects.all()) + list(Dog.objects.all())
print serializers.serialize('xml', all_objects)
</code></pre>
<p>But it looks ugly and because my models are very big so I have to use SAX parser and with such output it's difficult to parse.</p>
<p>Any idea how to serialize django models with parent class?</p>
<p><strong>EDIT:</strong> It used to work before this <a href="http://code.djangoproject.com/changeset/7600">patch</a> was applied. The explanation for the patch: "Model saving was too aggressive about creating new parent class instances during deserialization. Raw save on a model now skips saving of the parent class." I think there should be an option to serialize "local fields only" by default and a second option - "all" - to serialize all inherited fields.</p>
| 6 | 2009-07-15T18:06:50Z | 1,179,693 | <p>Did you look at <code>select_related()</code>? As in:</p>
<pre><code>serializers.serialize('xml', Dog.objects.select_related().all())
</code></pre>
| 0 | 2009-07-24T19:30:05Z | [
"python",
"django",
"serialization"
] |
Django serialization of inherited model | 1,132,980 | <p>I have a problem with serialization of Django inherited models. For example</p>
<pre><code>class Animal(models.Model):
color = models.CharField(max_length=50)
class Dog(Animal):
name = models.CharField(max_length=50)
...
# now I want to serialize Dog model with Animal inherited fields obviously included
print serializers.serialize('xml', Dog.objects.all())
</code></pre>
<p>and only Dog model has been serialized.</p>
<p>I can do something like </p>
<pre><code>all_objects = list(Animal.objects.all()) + list(Dog.objects.all())
print serializers.serialize('xml', all_objects)
</code></pre>
<p>But it looks ugly and because my models are very big so I have to use SAX parser and with such output it's difficult to parse.</p>
<p>Any idea how to serialize django models with parent class?</p>
<p><strong>EDIT:</strong> It used to work before this <a href="http://code.djangoproject.com/changeset/7600">patch</a> was applied. The explanation for the patch: "Model saving was too aggressive about creating new parent class instances during deserialization. Raw save on a model now skips saving of the parent class." I think there should be an option to serialize "local fields only" by default and a second option - "all" - to serialize all inherited fields.</p>
| 6 | 2009-07-15T18:06:50Z | 1,303,306 | <p>You found your answer in the documentation of the patch.</p>
<pre><code>all_objects = list(Animal.objects.all()) + list(Dog.objects.all())
print serializers.serialize('xml', all_objects)
</code></pre>
<p>However, if you change <code>Animal</code> to be an abstract base class it will work:</p>
<pre><code>class Animal(models.Model):
color = models.CharField(max_length=50)
class Meta:
abstract = True
class Dog(Animal):
name = models.CharField(max_length=50)
</code></pre>
<p>This works as of Django 1.0. See <a href="http://docs.djangoproject.com/en/dev/topics/db/models/" rel="nofollow">http://docs.djangoproject.com/en/dev/topics/db/models/</a>.</p>
| 1 | 2009-08-20T00:14:01Z | [
"python",
"django",
"serialization"
] |
Django serialization of inherited model | 1,132,980 | <p>I have a problem with serialization of Django inherited models. For example</p>
<pre><code>class Animal(models.Model):
color = models.CharField(max_length=50)
class Dog(Animal):
name = models.CharField(max_length=50)
...
# now I want to serialize Dog model with Animal inherited fields obviously included
print serializers.serialize('xml', Dog.objects.all())
</code></pre>
<p>and only Dog model has been serialized.</p>
<p>I can do something like </p>
<pre><code>all_objects = list(Animal.objects.all()) + list(Dog.objects.all())
print serializers.serialize('xml', all_objects)
</code></pre>
<p>But it looks ugly and because my models are very big so I have to use SAX parser and with such output it's difficult to parse.</p>
<p>Any idea how to serialize django models with parent class?</p>
<p><strong>EDIT:</strong> It used to work before this <a href="http://code.djangoproject.com/changeset/7600">patch</a> was applied. The explanation for the patch: "Model saving was too aggressive about creating new parent class instances during deserialization. Raw save on a model now skips saving of the parent class." I think there should be an option to serialize "local fields only" by default and a second option - "all" - to serialize all inherited fields.</p>
| 6 | 2009-07-15T18:06:50Z | 14,753,889 | <p>You'll need a custom serializer to support inherited fields, as Django's serializer will only serialize local fields.</p>
<p>I ended up writing my own when dealing with this issue, feel free to copy it: <a href="https://github.com/zmathew/django-backbone/blob/master/backbone/serializers.py" rel="nofollow">https://github.com/zmathew/django-backbone/blob/master/backbone/serializers.py</a></p>
<p>In order to use it on its own, you need to do:</p>
<pre><code>serializer = AllFieldsSerializer()
serializer.serialize(queryset, fields=fields)
print serializer.getvalue()
</code></pre>
| 0 | 2013-02-07T14:47:42Z | [
"python",
"django",
"serialization"
] |
Django serialization of inherited model | 1,132,980 | <p>I have a problem with serialization of Django inherited models. For example</p>
<pre><code>class Animal(models.Model):
color = models.CharField(max_length=50)
class Dog(Animal):
name = models.CharField(max_length=50)
...
# now I want to serialize Dog model with Animal inherited fields obviously included
print serializers.serialize('xml', Dog.objects.all())
</code></pre>
<p>and only Dog model has been serialized.</p>
<p>I can do something like </p>
<pre><code>all_objects = list(Animal.objects.all()) + list(Dog.objects.all())
print serializers.serialize('xml', all_objects)
</code></pre>
<p>But it looks ugly and because my models are very big so I have to use SAX parser and with such output it's difficult to parse.</p>
<p>Any idea how to serialize django models with parent class?</p>
<p><strong>EDIT:</strong> It used to work before this <a href="http://code.djangoproject.com/changeset/7600">patch</a> was applied. The explanation for the patch: "Model saving was too aggressive about creating new parent class instances during deserialization. Raw save on a model now skips saving of the parent class." I think there should be an option to serialize "local fields only" by default and a second option - "all" - to serialize all inherited fields.</p>
| 6 | 2009-07-15T18:06:50Z | 32,158,475 | <p>You can define a custom Serializer:</p>
<pre><code>class DogSerializer(serializers.ModelSerializer):
class Meta:
model = Dog
fields = ('color','name')
</code></pre>
<p>Use it like:</p>
<pre><code>serializer = DogSerializer(Dog.objects.all(), many=True)
print serializer.data
</code></pre>
| 0 | 2015-08-22T16:44:10Z | [
"python",
"django",
"serialization"
] |
Alternatives to using pack_into() when manipulating a list of bytes? | 1,133,044 | <p>I'm reading in a binary file into a list and parsing the binary data. I'm using unpack() to extract certain parts of the data as primitive data types, and I want to edit that data and insert it back into the original list of bytes. Using <a href="http://docs.python.org/library/struct.html" rel="nofollow">pack_into()</a> would make it easy, except that I'm using Python 2.4, and pack_into() wasn't introduced until 2.5</p>
<p>Does anyone know of a good way to go about serializing the data this way so that I can accomplish essentially the same functionality as pack_into()?</p>
| 2 | 2009-07-15T18:18:20Z | 1,133,205 | <p>Do you mean editing data in a buffer object? Documentation on manipulating those at all from Python directly is fairly scarce.</p>
<p>If you just want to edit bytes in a string, it's simple enough, though; struct.pack_into is new to 2.5, but struct.pack isn't:</p>
<pre><code>import struct
s = open("file").read()
ofs = 1024
fmt = "Ih"
size = struct.calcsize(fmt)
before, data, after = s[0:ofs], s[ofs:ofs+size], s[ofs+size:]
values = list(struct.unpack(fmt, data))
values[0] += 5
values[1] /= 2
data = struct.pack(fmt, *values)
s = "".join([before, data, after])
</code></pre>
| 1 | 2009-07-15T18:46:28Z | [
"python",
"binary",
"struct"
] |
Alternatives to using pack_into() when manipulating a list of bytes? | 1,133,044 | <p>I'm reading in a binary file into a list and parsing the binary data. I'm using unpack() to extract certain parts of the data as primitive data types, and I want to edit that data and insert it back into the original list of bytes. Using <a href="http://docs.python.org/library/struct.html" rel="nofollow">pack_into()</a> would make it easy, except that I'm using Python 2.4, and pack_into() wasn't introduced until 2.5</p>
<p>Does anyone know of a good way to go about serializing the data this way so that I can accomplish essentially the same functionality as pack_into()?</p>
| 2 | 2009-07-15T18:18:20Z | 1,143,568 | <p>Have you looked at the <a href="http://python-bitstring.googlecode.com" rel="nofollow"><code>bitstring</code></a> module? It's designed to make the construction, parsing and modification of binary data easier than using the <code>struct</code> and <code>array</code> modules directly.</p>
<p>It's especially made for working at the bit level, but will work with bytes just as well. It will also work with Python 2.4.</p>
<pre><code>from bitstring import BitString
s = BitString(filename='somefile')
# replace byte range with new values
# The step of '8' signifies byte rather than bit indicies.
s[10:15:8] = '0x001122'
# Search and replace byte value with two bytes
s.replace('0xcc', '0xddee', bytealigned=True)
# Different interpretations of the data are available through properties
if s[5:7:8].int > 1000:
s[5:7:8] = 1000
# Use the bytes property to get back to a Python string
open('newfile', 'wb').write(s.bytes)
</code></pre>
<p>The underlying data stored in the <code>BitString</code> is just an <code>array</code> object, but with a comprehensive set of functions and special methods to make it simple to modify and interpret.</p>
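<p>If a third-party module isn't an option, the slice-and-reassemble idea from the <code>struct</code> answer above can be wrapped into a small helper (a sketch using Python 3 bytes syntax; on 2.4 the same approach works on a plain string with <code>"".join</code>):</p>

```python
import struct

def pack_at(buf, offset, fmt, *values):
    """Return a copy of buf with the struct-packed values spliced in at offset."""
    packed = struct.pack(fmt, *values)
    # Keep everything before and after the packed region unchanged.
    return buf[:offset] + packed + buf[offset + len(packed):]

raw = b"\x00" * 8
raw = pack_at(raw, 2, "<Ih", 1234, -5)    # a uint32 and an int16 at offset 2
print(struct.unpack_from("<Ih", raw, 2))  # (1234, -5)
print(len(raw))                           # 8 -- overall length unchanged
```

The helper name and layout here are illustrative, not part of any library; it simply emulates what `pack_into` does in place, at the cost of copying the buffer.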
| 3 | 2009-07-17T14:15:10Z | [
"python",
"binary",
"struct"
] |
DateTime in python extracting different bits and pieces | 1,133,147 | <p>I want to extract year from current date using python.</p>
<p>Do something like: </p>
<pre><code> DateTime a = DateTime.Now()
a.Year # (this is in C#)
</code></pre>
| 34 | 2009-07-15T18:37:27Z | 1,133,171 | <pre><code>import datetime
a = datetime.datetime.today().year
</code></pre>
<p>or even (as Lennart suggested)</p>
<pre><code>a = datetime.datetime.now().year
</code></pre>
<p>or even</p>
<pre><code>a = datetime.date.today().year
</code></pre>
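<p>Each piece is just an attribute on the object; using a fixed datetime here so the values are predictable (Python 3 print syntax):</p>

```python
import datetime

dt = datetime.datetime(2009, 7, 15, 18, 37, 27)
print(dt.year, dt.month, dt.day)      # 2009 7 15
print(dt.hour, dt.minute, dt.second)  # 18 37 27
print(dt.date())                      # the date part alone
```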
| 10 | 2009-07-15T18:41:17Z | [
"python",
"datetime"
] |
DateTime in python extracting different bits and pieces | 1,133,147 | <p>I want to extract year from current date using python.</p>
<p>Do something like: </p>
<pre><code> DateTime a = DateTime.Now()
a.Year # (this is in C#)
</code></pre>
| 34 | 2009-07-15T18:37:27Z | 1,133,190 | <p>It's in fact almost the same in Python.. :-)</p>
<pre><code>import datetime
year = datetime.date.today().year
</code></pre>
<p>Of course, date doesn't have a time associated, so if you care about that too, you can do the same with a complete datetime object:</p>
<pre><code>import datetime
year = datetime.datetime.today().year
</code></pre>
<p>(Obviously no different, but you can store datetime.datetime.today() in a variable before you grab the year, of course).</p>
<p>One key thing to note is that the time components can differ between 32-bit and 64-bit pythons in some python versions (2.5.x tree I think). So you will find things like hour/min/sec on some 64-bit platforms, while you get hour/minute/second on 32-bit.</p>
| 43 | 2009-07-15T18:44:31Z | [
"python",
"datetime"
] |
DateTime in python extracting different bits and pieces | 1,133,147 | <p>I want to extract year from current date using python.</p>
<p>Do something like: </p>
<pre><code> DateTime a = DateTime.Now()
a.Year # (this is in C#)
</code></pre>
| 34 | 2009-07-15T18:37:27Z | 1,133,246 | <p>The other answers to this question seem to hit it spot on. Now how would you figure this out for yourself without stack overflow? Check out <a href="http://ipython.scipy.org/moin/">IPython</a>, an interactive Python shell that has tab auto-complete.</p>
<pre><code>> ipython
Python 2.5 (r25:51908, Nov  6 2007, 16:54:01)
Type "copyright", "credits" or "license" for more information.
IPython 0.8.2.svn.r2750 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import datetime
In [2]: now=datetime.datetime.now()
In [3]: now.
</code></pre>
<p>press tab a few times and you'll be prompted with the members of the "now" object:</p>
<pre><code>now.__add__ now.__gt__ now.__radd__ now.__sub__ now.fromordinal now.microsecond now.second now.toordinal now.weekday
now.__class__ now.__hash__ now.__reduce__ now.astimezone now.fromtimestamp now.min now.strftime now.tzinfo now.year
now.__delattr__ now.__init__ now.__reduce_ex__ now.combine now.hour now.minute now.strptime now.tzname
now.__doc__ now.__le__ now.__repr__ now.ctime now.isocalendar now.month now.time now.utcfromtimestamp
now.__eq__ now.__lt__ now.__rsub__ now.date now.isoformat now.now now.timetuple now.utcnow
now.__ge__ now.__ne__ now.__setattr__ now.day now.isoweekday now.replace now.timetz now.utcoffset
now.__getattribute__ now.__new__ now.__str__ now.dst now.max now.resolution now.today now.utctimetuple
</code></pre>
<p>and you'll see that <strong>now.year</strong> is a member of the "now" object.</p>
| 9 | 2009-07-15T18:52:54Z | [
"python",
"datetime"
] |
DateTime in python extracting different bits and pieces | 1,133,147 | <p>I want to extract year from current date using python.</p>
<p>Do something like: </p>
<pre><code> DateTime a = DateTime.Now()
a.Year # (this is in C#)
</code></pre>
| 34 | 2009-07-15T18:37:27Z | 19,181,120 | <p>If you want the year from a (unknown) datetime-object:</p>
<pre><code>>>> tijd = datetime.datetime(9999, 12, 31, 23, 59, 59)
>>> tijd.timetuple()
time.struct_time(tm_year=9999, tm_mon=12, tm_mday=31, tm_hour=23, tm_min=59, tm_sec=59, tm_wday=4, tm_yday=365, tm_isdst=-1)
>>> tijd.timetuple().tm_year
9999
</code></pre>
| 1 | 2013-10-04T12:26:14Z | [
"python",
"datetime"
] |
Adding tuples to produce a tuple with a subtotal per 'column' | 1,133,286 | <p>What is the most pythonic way of adding the values of two or more tuples to produce a total for each 'column'?</p>
<p>Eg:</p>
<pre><code>>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> ???
(51, 73)
</code></pre>
<p>I've so far considered the following:</p>
<pre><code>def sumtuples(*tuples):
return (sum(v1 for v1,_ in tuples), sum(v2 for _,v2 in tuples))
>>> print sumtuples(a, b, c)
(51, 73)
</code></pre>
<p>I'm sure this far from ideal - how can it be improved?</p>
| 2 | 2009-07-15T19:00:35Z | 1,133,313 | <p>If your set of tuples is going to be relatively small, your solution is fine. However, if you're going to be working on very large data sets, you should consider using reduce, as it iterates over the list only once, whereas your original solution iterates over the list of tuples twice.</p>
<pre><code>>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> values=[a,b,c]
>>> reduce(lambda x,y: (x[0]+y[0],x[1]+y[1]), values,(0,0))
(51, 73)
</code></pre>
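<p>A note for later Python versions: in Python 3, <code>reduce</code> moved into <code>functools</code>, so the same fold is spelled:</p>

```python
from functools import reduce

values = [(10, 20), (40, 50), (1, 3)]
# Fold the pairs into a running (col0, col1) total, starting from (0, 0).
totals = reduce(lambda x, y: (x[0] + y[0], x[1] + y[1]), values, (0, 0))
print(totals)  # (51, 73)
```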
| 0 | 2009-07-15T19:05:35Z | [
"python",
"tuples"
] |
Adding tuples to produce a tuple with a subtotal per 'column' | 1,133,286 | <p>What is the most pythonic way of adding the values of two or more tuples to produce a total for each 'column'?</p>
<p>Eg:</p>
<pre><code>>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> ???
(51, 73)
</code></pre>
<p>I've so far considered the following:</p>
<pre><code>def sumtuples(*tuples):
return (sum(v1 for v1,_ in tuples), sum(v2 for _,v2 in tuples))
>>> print sumtuples(a, b, c)
(51, 73)
</code></pre>
<p>I'm sure this far from ideal - how can it be improved?</p>
| 2 | 2009-07-15T19:00:35Z | 1,133,316 | <p>I guess you could use <code>reduce</code>, though it's debatable whether that's pythonic ..</p>
<pre><code>In [13]: reduce(lambda s, t: (s[0]+t[0], s[1]+t[1]), [a, b, c], (0, 0))
Out[13]: (51, 73)
</code></pre>
<p>Here's another way using <code>map</code> and <code>zip</code>:</p>
<pre><code>In [14]: map(sum, zip(a, b, c))
Out[14]: [51, 73]
</code></pre>
<p>or, if you're passing your collection of tuples in as a list:</p>
<pre><code>In [15]: tups = [a, b, c]
In [15]: map(sum, zip(*tups))
Out[15]: [51, 73]
</code></pre>
<p>and, using a list comprehension instead of <code>map</code>:</p>
<pre><code>In [16]: [sum(z) for z in zip(*tups)]
Out[16]: [51, 73]
</code></pre>
| 7 | 2009-07-15T19:06:07Z | [
"python",
"tuples"
] |
Adding tuples to produce a tuple with a subtotal per 'column' | 1,133,286 | <p>What is the most pythonic way of adding the values of two or more tuples to produce a total for each 'column'?</p>
<p>Eg:</p>
<pre><code>>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> ???
(51, 73)
</code></pre>
<p>I've so far considered the following:</p>
<pre><code>def sumtuples(*tuples):
return (sum(v1 for v1,_ in tuples), sum(v2 for _,v2 in tuples))
>>> print sumtuples(a, b, c)
(51, 73)
</code></pre>
<p>I'm sure this far from ideal - how can it be improved?</p>
| 2 | 2009-07-15T19:00:35Z | 1,133,347 | <p>Not pure Python, but the preferred way if you have SciPy installed:</p>
<pre><code>from scipy import array
a = array((10, 20))
b = array((40, 50))
c = array((1, 3))
print tuple(a+b+c)
</code></pre>
| 1 | 2009-07-15T19:10:47Z | [
"python",
"tuples"
] |
Adding tuples to produce a tuple with a subtotal per 'column' | 1,133,286 | <p>What is the most pythonic way of adding the values of two or more tuples to produce a total for each 'column'?</p>
<p>Eg:</p>
<pre><code>>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> ???
(51, 73)
</code></pre>
<p>I've so far considered the following:</p>
<pre><code>def sumtuples(*tuples):
return (sum(v1 for v1,_ in tuples), sum(v2 for _,v2 in tuples))
>>> print sumtuples(a, b, c)
(51, 73)
</code></pre>
<p>I'm sure this far from ideal - how can it be improved?</p>
| 2 | 2009-07-15T19:00:35Z | 1,133,349 | <p>Since we're going crazy,</p>
<pre><code>a = (10, 20)
b = (40, 50)
c = (1, 3)
def sumtuples(*tuples):
return map(sum, zip(*tuples))
sumtuples(a,b,c)
[51, 73]
</code></pre>
<p>Truth is, almost every time I post one of these crazy solutions, the 'naive' method seems to work out faster and more readable...</p>
| 2 | 2009-07-15T19:11:15Z | [
"python",
"tuples"
] |
Adding tuples to produce a tuple with a subtotal per 'column' | 1,133,286 | <p>What is the most pythonic way of adding the values of two or more tuples to produce a total for each 'column'?</p>
<p>Eg:</p>
<pre><code>>>> a = (10, 20)
>>> b = (40, 50)
>>> c = (1, 3)
>>> ???
(51, 73)
</code></pre>
<p>I've so far considered the following:</p>
<pre><code>def sumtuples(*tuples):
return (sum(v1 for v1,_ in tuples), sum(v2 for _,v2 in tuples))
>>> print sumtuples(a, b, c)
(51, 73)
</code></pre>
<p>I'm sure this far from ideal - how can it be improved?</p>
| 2 | 2009-07-15T19:00:35Z | 1,134,856 | <p>These solutions all suffer from one of two problems:</p>
<ul>
<li>they only work on exactly two columns; ((1,2,3),(2,3,4),(3,4,5)) doesn't work; or</li>
<li>they don't work on an iterator, so generating a billion rows doesn't work (or wastes tons of memory).</li>
</ul>
<p>Don't get caught up in the "pythonic" buzzword at the expense of not getting a correct answer.</p>
<pre><code>def sum_columns(it):
result = []
for row in it:
if len(result) <= len(row):
extend_by = len(row) - len(result)
result.extend([0] * extend_by)
for idx, val in enumerate(row):
result[idx] += val
return result
a = (1, 20)
b = (4, 50)
c = (0, 30, 3)
print sum_columns([a,b,c])
def generate_rows():
for i in range(1000):
yield (i, 1, 2)
lst = generate_rows()
print sum_columns(lst)
</code></pre>
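<p>When every row is known to have the same number of columns, the same single-pass, iterator-friendly behaviour can be written more compactly with <code>zip</code> against a running total (a sketch in Python 3 syntax; ragged rows still need an explicit length check as shown in the answer):</p>

```python
def sum_equal_columns(rows):
    """Single-pass column sums for rows that all have the same width."""
    rows = iter(rows)
    try:
        totals = list(next(rows))  # seed the totals with the first row
    except StopIteration:
        return []
    for row in rows:
        totals = [t + v for t, v in zip(totals, row)]
    return totals

print(sum_equal_columns([(10, 20), (40, 50), (1, 3)]))    # [51, 73]
print(sum_equal_columns((i, 1, 2) for i in range(1000)))  # works on a generator
```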
| 0 | 2009-07-16T00:31:47Z | [
"python",
"tuples"
] |
Weird Problem with Classes and Optional Arguments | 1,133,309 | <p>Okay so this was driving me nuts all day. </p>
<p>Why does this happen:</p>
<pre><code>class Foo:
def __init__(self, bla = {}):
self.task_defs = bla
def __str__(self):
return ''.join(str(self.task_defs))
a = Foo()
b = Foo()
a.task_defs['BAR'] = 1
print 'B is ==> %s' % str(b)
print 'A is ==> %s' % str(a)
</code></pre>
<p>Gives me the output:</p>
<pre><code>B is ==> {'BAR': 1}
A is ==> {'BAR': 1}
</code></pre>
<p>I know it has to do with python passing everything by reference.</p>
<p>But why does this happen? This was literally making me go insane all day, basically causing me to tear my stuff apart. Shouldn't python be smart enough to deal with something like this?</p>
| 0 | 2009-07-15T19:05:05Z | 1,133,329 | <p>Since you have <code>bla</code> initially set to a mutable type (in this case a dict) in the arguments, it gets shared since <code>bla</code> doesn't get reinitialized to a new dict instance for each instance created for <code>Foo</code>. Here, try this instead:</p>
<pre><code>class Foo:
def __init__(self, bla=None):
if bla is None:
bla = {}
self.task_defs = bla
def __str__(self):
return ''.join(str(self.task_defs))
a = Foo()
b = Foo()
a.task_defs['BAR'] = 1
print 'B is ==> %s' % str(b)
print 'A is ==> %s' % str(a)
</code></pre>
| 6 | 2009-07-15T19:08:24Z | [
"python"
] |
PHP equivalent to Python's yield operator | 1,133,371 | <p>In Python (and others), you can incrementally process large volumes of data by using the 'yield' operator in a function. What would be the similar way to do so in PHP?</p>
<p>For example, lets say in Python, if I wanted to read a potentially very large file, I could work on each line one at a time like so (this example is contrived, as it is basically the same thing as 'for line in file_obj'):</p>
<pre><code>def file_lines(fname):
f = open(fname)
for line in f:
yield line
f.close()
for line in file_lines('somefile'):
#process the line
</code></pre>
<p>What I'm doing right now (in PHP) is I'm using a private instance variable to keep track of state, and acting accordingly each time the function is called, but it seems like there must be a better way.</p>
| 19 | 2009-07-15T19:15:19Z | 1,133,448 | <p>PHP has a direct equivalent called <a href="http://se2.php.net/manual/en/language.generators.overview.php" rel="nofollow">generators</a>.</p>
<p><strong>Old (pre php 5.5 answer):</strong></p>
<p>Unfortunately, there isn't a language equivalent. The easiest way is to either to what you're already doing, or to create a object that uses instance variables to maintain state.</p>
<p>There is however a good option if you want to use the function in conjunction with the foreach-statement: <a href="http://se2.php.net/iterator" rel="nofollow">SPL Iterators</a>. They can be used to achieve something quite similar to python generators.</p>
| 12 | 2009-07-15T19:25:29Z | [
"php",
"python",
"lazy-evaluation"
] |
PHP equivalent to Python's yield operator | 1,133,371 | <p>In Python (and others), you can incrementally process large volumes of data by using the 'yield' operator in a function. What would be the similar way to do so in PHP?</p>
<p>For example, lets say in Python, if I wanted to read a potentially very large file, I could work on each line one at a time like so (this example is contrived, as it is basically the same thing as 'for line in file_obj'):</p>
<pre><code>def file_lines(fname):
f = open(fname)
for line in f:
yield line
f.close()
for line in file_lines('somefile'):
#process the line
</code></pre>
<p>What I'm doing right now (in PHP) is I'm using a private instance variable to keep track of state, and acting accordingly each time the function is called, but it seems like there must be a better way.</p>
| 19 | 2009-07-15T19:15:19Z | 1,133,650 | <p>There may not be an equivalent operator, but the following code is equivalent in function and overhead:</p>
<pre><code>function file_lines($file) {
static $fhandle;
if ( is_null($fhandle) ) {
$fhandle = fopen($file, 'r');
if ( $fhandle === false ) {
return false;
}
}
if ( ($line = fgets($fhandle))!== false ) {
return $line;
}
fclose($fhandle);
$fhandle = null;
}
while ( $line = file_lines('some_file') ) {
// ...
}
</code></pre>
<p>That looks about right. Sorry, I haven't tested it.</p>
| 1 | 2009-07-15T20:03:59Z | [
"php",
"python",
"lazy-evaluation"
] |
PHP equivalent to Python's yield operator | 1,133,371 | <p>In Python (and others), you can incrementally process large volumes of data by using the 'yield' operator in a function. What would be the similar way to do so in PHP?</p>
<p>For example, lets say in Python, if I wanted to read a potentially very large file, I could work on each line one at a time like so (this example is contrived, as it is basically the same thing as 'for line in file_obj'):</p>
<pre><code>def file_lines(fname):
f = open(fname)
for line in f:
yield line
f.close()
for line in file_lines('somefile'):
#process the line
</code></pre>
<p>What I'm doing right now (in PHP) is I'm using a private instance variable to keep track of state, and acting accordingly each time the function is called, but it seems like there must be a better way.</p>
| 19 | 2009-07-15T19:15:19Z | 1,133,863 | <p>I prototype everything in Python before implementing in any other languages, including PHP. I ended up using callbacks to achieve what I would with the <code>yield</code>.</p>
<pre><code>function doSomething($callback)
{
foreach ($something as $someOtherThing) {
// do some computations that generates $data
call_user_func($callback, $data);
}
}
function myCallback($input)
{
// save $input to DB
// log
// send through a webservice
// etc.
var_dump($input);
}
doSomething('myCallback');
</code></pre>
<p>This way each <code>$data</code> is passed to the callback function and you can do what you want.</p>
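<p>For comparison, the same callback shape in Python looks like this (a sketch with the computation stubbed out; any callable, including a bound method like <code>list.append</code>, can serve as the callback):</p>

```python
def do_something(callback):
    # stand-in for the loop that would really produce each data value
    for some_other_thing in range(3):
        data = some_other_thing * 2
        callback(data)

collected = []
do_something(collected.append)  # collect the values instead of yielding them
print(collected)  # [0, 2, 4]
```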
| 11 | 2009-07-15T20:34:15Z | [
"php",
"python",
"lazy-evaluation"
] |
PHP equivalent to Python's yield operator | 1,133,371 | <p>In Python (and others), you can incrementally process large volumes of data by using the 'yield' operator in a function. What would be the similar way to do so in PHP?</p>
<p>For example, lets say in Python, if I wanted to read a potentially very large file, I could work on each line one at a time like so (this example is contrived, as it is basically the same thing as 'for line in file_obj'):</p>
<pre><code>def file_lines(fname):
f = open(fname)
for line in f:
yield line
f.close()
for line in file_lines('somefile'):
#process the line
</code></pre>
<p>What I'm doing right now (in PHP) is I'm using a private instance variable to keep track of state, and acting accordingly each time the function is called, but it seems like there must be a better way.</p>
| 19 | 2009-07-15T19:15:19Z | 8,546,875 | <p>Extending @Luiz's answer - another cool way is to use anonymous functions:</p>
<pre><code>function iterator($n, $cb)
{
for($i=0; $i<$n; $i++) {
call_user_func($cb, $i);
}
}
$sum = 0;
iterator(10,
function($i) use (&$sum)
{
$sum += $i;
}
);
print $sum;
</code></pre>
| 3 | 2011-12-17T18:16:41Z | [
"php",
"python",
"lazy-evaluation"
] |
PHP equivalent to Python's yield operator | 1,133,371 | <p>In Python (and others), you can incrementally process large volumes of data by using the 'yield' operator in a function. What would be the similar way to do so in PHP?</p>
<p>For example, lets say in Python, if I wanted to read a potentially very large file, I could work on each line one at a time like so (this example is contrived, as it is basically the same thing as 'for line in file_obj'):</p>
<pre><code>def file_lines(fname):
f = open(fname)
for line in f:
yield line
f.close()
for line in file_lines('somefile'):
#process the line
</code></pre>
<p>What I'm doing right now (in PHP) is I'm using a private instance variable to keep track of state, and acting accordingly each time the function is called, but it seems like there must be a better way.</p>
| 19 | 2009-07-15T19:15:19Z | 11,495,404 | <p>There is an RFC at <a href="https://wiki.php.net/rfc/generators">https://wiki.php.net/rfc/generators</a> addressing just that, which might be included in PHP 5.5.</p>
<p>In the meantime, check out this proof-of-concept of a poor man's "generator function" implemented in userland.</p>
<p>
<pre><code>namespace Functional;
error_reporting(E_ALL|E_STRICT);
const BEFORE = 1;
const NEXT = 2;
const AFTER = 3;
const FORWARD = 4;
const YIELD = 5;
class Generator implements \Iterator {
private $funcs;
private $args;
private $key;
private $result;
public function __construct(array $funcs, array $args) {
$this->funcs = $funcs;
$this->args = $args;
}
public function rewind() {
$this->key = -1;
$this->result = call_user_func_array($this->funcs[BEFORE],
$this->args);
$this->next();
}
public function valid() {
return $this->result[YIELD] !== false;
}
public function current() {
return $this->result[YIELD];
}
public function key() {
return $this->key;
}
public function next() {
$this->result = call_user_func($this->funcs[NEXT],
$this->result[FORWARD]);
if ($this->result[YIELD] === false) {
call_user_func($this->funcs[AFTER], $this->result[FORWARD]);
}
++$this->key;
}
}
function generator($funcs, $args) {
return new Generator($funcs, $args);
}
/**
* A generator function that lazily yields each line in a file.
*/
function get_lines_from_file($file_name) {
$funcs = array(
BEFORE => function($file_name) { return array(FORWARD => fopen($file_name, 'r')); },
NEXT => function($fh) { return array(FORWARD => $fh, YIELD => fgets($fh)); },
AFTER => function($fh) { fclose($fh); },
);
return generator($funcs, array($file_name));
}
// Output content of this file with padded linenumbers.
foreach (get_lines_from_file(__FILE__) as $k => $v) {
echo str_pad($k, 8), $v;
}
echo "\n";
</code></pre>
| 18 | 2012-07-15T20:34:16Z | [
"php",
"python",
"lazy-evaluation"
] |
PHP equivalent to Python's yield operator | 1,133,371 | <p>In Python (and others), you can incrementally process large volumes of data by using the 'yield' operator in a function. What would be the equivalent way to do so in PHP?</p>
<p>For example, let's say in Python, if I wanted to read a potentially very large file, I could work on each line one at a time like so (this example is contrived, as it is basically the same thing as 'for line in file_obj'):</p>
<pre><code>def file_lines(fname):
f = open(fname)
for line in f:
yield line
f.close()
for line in file_lines('somefile'):
#process the line
</code></pre>
<p>What I'm doing right now (in PHP) is I'm using a private instance variable to keep track of state, and acting accordingly each time the function is called, but it seems like there must be a better way.</p>
| 19 | 2009-07-15T19:15:19Z | 18,170,824 | <p>The 'yield' keyword now exists as of PHP 5.5: </p>
<p><a href="http://php.net/manual/en/language.generators.syntax.php" rel="nofollow">http://php.net/manual/en/language.generators.syntax.php</a></p>
| 1 | 2013-08-11T10:07:48Z | [
"php",
"python",
"lazy-evaluation"
] |
Code lines of code in a Django Project | 1,133,391 | <p>Is there an easy way to count the lines of code you have written for your django project? </p>
<p>Edit: The shell stuff is cool, but how about on Windows?</p>
| 9 | 2009-07-15T19:17:15Z | 1,133,411 | <p>Yep:</p>
<pre><code>shell]$ find /my/source -name "*.py" -type f -exec cat {} + | wc -l
</code></pre>
<p>Job's a good 'un.</p>
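<p>Since the question's edit asks about Windows, where <code>find</code> and <code>wc</code> are not available by default, here is a sketch of the same count in pure Python (cross-platform, no external tools assumed):</p>

```python
import os

def count_py_lines(root):
    """Walk a source tree and sum the line counts of all .py files."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                with open(os.path.join(dirpath, name)) as f:
                    total += sum(1 for _ in f)
    return total
```

<p>This behaves the same on Windows and Unix; point it at the project root (e.g. <code>count_py_lines(".")</code>).</p>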
| 14 | 2009-07-15T19:20:47Z | [
"python",
"django"
] |
Code lines of code in a Django Project | 1,133,391 | <p>Is there an easy way to count the lines of code you have written for your django project? </p>
<p>Edit: The shell stuff is cool, but how about on Windows?</p>
| 9 | 2009-07-15T19:17:15Z | 1,133,417 | <p>Check out the wc command on unix.</p>
| 1 | 2009-07-15T19:21:36Z | [
"python",
"django"
] |
Code lines of code in a Django Project | 1,133,391 | <p>Is there an easy way to count the lines of code you have written for your django project? </p>
<p>Edit: The shell stuff is cool, but how about on Windows?</p>
| 9 | 2009-07-15T19:17:15Z | 1,134,186 | <p>Starting with Aiden's answer, and with a bit of help in <a href="http://stackoverflow.com/questions/1133698/find-name-pattern-that-matches-multiple-patterns">a question of my own</a>, I ended up with this god-awful mess:</p>
<pre><code># find the combined LOC of files
# usage: loc Documents/fourU py html
function loc {
#find $1 -name $2 -type f -exec cat {} + | wc -l
namelist=''
let i=2
while [ $i -le $# ]; do
namelist="$namelist -name \"*.$@[$i]\""
if [ $i != $# ]; then
namelist="$namelist -or "
fi
let i=i+1
done
#echo $namelist
#echo "find $1 $namelist" | sh
#echo "find $1 $namelist" | sh | xargs cat
echo "find $1 $namelist" | sh | xargs cat | wc -l
}
</code></pre>
<p>which allows you to specify any number of extensions you want to match. As far as I can tell, it outputs the right answer, but... I thought this would be a one-liner, else I wouldn't have started in bash, and it just kinda grew from there.</p>
<p>I'm sure that those more knowledgable than I can improve upon this, so I'm going to put it in community wiki.</p>
| 2 | 2009-07-15T21:25:14Z | [
"python",
"django"
] |
Code lines of code in a Django Project | 1,133,391 | <p>Is there an easy way to count the lines of code you have written for your django project? </p>
<p>Edit: The shell stuff is cool, but how about on Windows?</p>
| 9 | 2009-07-15T19:17:15Z | 1,139,296 | <p>Get wc command on Windows using GnuWin32 (<a href="http://gnuwin32.sourceforge.net/packages/coreutils.htm" rel="nofollow">http://gnuwin32.sourceforge.net/packages/coreutils.htm</a>)</p>
<blockquote>
<p>wc *.py</p>
</blockquote>
| 0 | 2009-07-16T18:07:43Z | [
"python",
"django"
] |
Code lines of code in a Django Project | 1,133,391 | <p>Is there an easy way to count the lines of code you have written for your django project? </p>
<p>Edit: The shell stuff is cool, but how about on Windows?</p>
| 9 | 2009-07-15T19:17:15Z | 1,141,171 | <p>You might want to look at <a href="http://cloc.sourceforge.net/" rel="nofollow">CLOC</a> -- it's not Django specific but it supports Python. It can show you lines counts for actual code, comments, blank lines, etc.</p>
| 4 | 2009-07-17T02:04:21Z | [
"python",
"django"
] |
Why can't IPython return records with multiple fields when submitting a query to sqlite? | 1,133,604 | <p>I am trying to write a simple query to an sqlite database in a python script. To test if my parameters were correct, I tried running the query from the ipython command line. It looked something like this:</p>
<pre><code>import sqlite3
db = 'G:\path\to\db\file.sqlite'
conn = sqlite3.connect(db)
results = conn.execute('SELECT * FROM studies').fetchall()
</code></pre>
<p>for some reason, my results came back totally empty. Then I tried another test query:</p>
<pre><code>results = conn.execute('SELECT id FROM studies').fetchall()
</code></pre>
<p>Which returned correctly. I figured there was a problem with the asterisk [WRONG, SEE SECOND UPDATE BELOW], so I tried the 'SELECT * FROM studies' query from a default python command line. Lo and behold, it returned correctly. I tried all the normal ways to escape the asterisk only to be met by a wide variety of error messages. Is there any way to run this query in IPython? </p>
<p><hr /></p>
<p>EDIT: Sorry, I incorrectly assumed IronPython and IPython were the same. What I meant was the IPython command line, not the IronPython framework.</p>
<p><hr /></p>
<p>EDIT2: Okay, it turns out the asterisk DOES work as shown by this successful query:</p>
<pre><code>'SELECT COUNT(*) FROM studies'
</code></pre>
<p>From the suggestions posted here, it turns out the error results from trying to return records with multiple fields, i.e.:</p>
<pre><code>'SELECT field1,field2 FROM studies'
</code></pre>
<p>which still results in to records being returned. I have changed the title of the question accordingly.</p>
| 1 | 2009-07-15T19:55:08Z | 1,133,661 | <p>This is SQL. IronPython has little or nothing to do with the processing of the query. Are you using an unusual character encoding? (IE not UTF-8 or ASCII)?</p>
<p>What happens if you <code>SELECT id,fieldname,fieldname FROM studies</code> (In other words, simulating what '*' does.)</p>
| 1 | 2009-07-15T20:06:32Z | [
"python",
"sqlite3",
"field",
"ipython"
] |
Why can't IPython return records with multiple fields when submitting a query to sqlite? | 1,133,604 | <p>I am trying to write a simple query to an sqlite database in a python script. To test if my parameters were correct, I tried running the query from the ipython command line. It looked something like this:</p>
<pre><code>import sqlite3
db = 'G:\path\to\db\file.sqlite'
conn = sqlite3.connect(db)
results = conn.execute('SELECT * FROM studies').fetchall()
</code></pre>
<p>for some reason, my results came back totally empty. Then I tried another test query:</p>
<pre><code>results = conn.execute('SELECT id FROM studies').fetchall()
</code></pre>
<p>Which returned correctly. I figured there was a problem with the asterisk [WRONG, SEE SECOND UPDATE BELOW], so I tried the 'SELECT * FROM studies' query from a default python command line. Lo and behold, it returned correctly. I tried all the normal ways to escape the asterisk only to be met by a wide variety of error messages. Is there any way to run this query in IPython? </p>
<p><hr /></p>
<p>EDIT: Sorry, I incorrectly assumed IronPython and IPython were the same. What I meant was the IPython command line, not the IronPython framework.</p>
<p><hr /></p>
<p>EDIT2: Okay, it turns out the asterisk DOES work as shown by this successful query:</p>
<pre><code>'SELECT COUNT(*) FROM studies'
</code></pre>
<p>From the suggestions posted here, it turns out the error results from trying to return records with multiple fields, i.e.:</p>
<pre><code>'SELECT field1,field2 FROM studies'
</code></pre>
<p>which still results in to records being returned. I have changed the title of the question accordingly.</p>
| 1 | 2009-07-15T19:55:08Z | 1,134,475 | <p>Just a wild guess, but please try to escape backslashes in the path to the database file. In other words instead of</p>
<pre><code>db = 'G:\path\to\db\file.sqlite'
</code></pre>
<p>try</p>
<pre><code>db = 'G:\\path\\to\\db\\file.sqlite'
</code></pre>
| 0 | 2009-07-15T22:28:29Z | [
"python",
"sqlite3",
"field",
"ipython"
] |
Why can't IPython return records with multiple fields when submitting a query to sqlite? | 1,133,604 | <p>I am trying to write a simple query to an sqlite database in a python script. To test if my parameters were correct, I tried running the query from the ipython command line. It looked something like this:</p>
<pre><code>import sqlite3
db = 'G:\path\to\db\file.sqlite'
conn = sqlite3.connect(db)
results = conn.execute('SELECT * FROM studies').fetchall()
</code></pre>
<p>for some reason, my results came back totally empty. Then I tried another test query:</p>
<pre><code>results = conn.execute('SELECT id FROM studies').fetchall()
</code></pre>
<p>Which returned correctly. I figured there was a problem with the asterisk [WRONG, SEE SECOND UPDATE BELOW], so I tried the 'SELECT * FROM studies' query from a default python command line. Lo and behold, it returned correctly. I tried all the normal ways to escape the asterisk only to be met by a wide variety of error messages. Is there any way to run this query in IPython? </p>
<p><hr /></p>
<p>EDIT: Sorry, I incorrectly assumed IronPython and IPython were the same. What I meant was the IPython command line, not the IronPython framework.</p>
<p><hr /></p>
<p>EDIT2: Okay, it turns out the asterisk DOES work as shown by this successful query:</p>
<pre><code>'SELECT COUNT(*) FROM studies'
</code></pre>
<p>From the suggestions posted here, it turns out the error results from trying to return records with multiple fields, i.e.:</p>
<pre><code>'SELECT field1,field2 FROM studies'
</code></pre>
<p>which still results in to records being returned. I have changed the title of the question accordingly.</p>
| 1 | 2009-07-15T19:55:08Z | 1,134,584 | <p>Some more debugging you could try:</p>
<pre><code>s = 'SELECT * from studies'
print s
conn.execute(s).fetchall()
</code></pre>
<p>or:</p>
<pre><code>s = 'SELECT ' + chr(42) + ' from studies'
conn.execute(s).fetchall()
</code></pre>
<p>You might also try:</p>
<pre><code>conn.execute('select count(*) from studies').fetchall()
</code></pre>
<p>if that comes back as [(0,)] then something really weird is going on :-)</p>
<p><hr /></p>
<p>Some more things you could try:</p>
<pre><code>conn.execute('select id from (select * from studies)').fetchall()
</code></pre>
<p>or:</p>
<pre><code>cur = conn.cursor()
cur.execute('select * from studies').fetchall()
</code></pre>
| 1 | 2009-07-15T23:05:46Z | [
"python",
"sqlite3",
"field",
"ipython"
] |
Why can't IPython return records with multiple fields when submitting a query to sqlite? | 1,133,604 | <p>I am trying to write a simple query to an sqlite database in a python script. To test if my parameters were correct, I tried running the query from the ipython command line. It looked something like this:</p>
<pre><code>import sqlite3
db = 'G:\path\to\db\file.sqlite'
conn = sqlite3.connect(db)
results = conn.execute('SELECT * FROM studies').fetchall()
</code></pre>
<p>for some reason, my results came back totally empty. Then I tried another test query:</p>
<pre><code>results = conn.execute('SELECT id FROM studies').fetchall()
</code></pre>
<p>Which returned correctly. I figured there was a problem with the asterisk [WRONG, SEE SECOND UPDATE BELOW], so I tried the 'SELECT * FROM studies' query from a default python command line. Lo and behold, it returned correctly. I tried all the normal ways to escape the asterisk only to be met by a wide variety of error messages. Is there any way to run this query in IPython? </p>
<p><hr /></p>
<p>EDIT: Sorry, I incorrectly assumed IronPython and IPython were the same. What I meant was the IPython command line, not the IronPython framework.</p>
<p><hr /></p>
<p>EDIT2: Okay, it turns out the asterisk DOES work as shown by this successful query:</p>
<pre><code>'SELECT COUNT(*) FROM studies'
</code></pre>
<p>From the suggestions posted here, it turns out the error results from trying to return records with multiple fields, i.e.:</p>
<pre><code>'SELECT field1,field2 FROM studies'
</code></pre>
<p>which still results in to records being returned. I have changed the title of the question accordingly.</p>
| 1 | 2009-07-15T19:55:08Z | 1,135,685 | <p>I've tried all the things you've mentioned in IPython and sqlite without any problems (ipython 0.9.1, python 2.5.2).</p>
<p>Is there a chance this is some kind of version mismatch issue? Maybe your shells are referencing different libraries?</p>
<p>For example, does</p>
<pre><code>import sqlite3; print sqlite3.version
</code></pre>
<p>return the same thing from both shells (i.e. ipython and the regular one where the sql query works)?</p>
<p>How about </p>
<pre><code>conn.execute('select sqlite_version()').fetchall()
</code></pre>
<p>Does that return the same thing?</p>
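<p>As a further sanity check, multi-column fetches work fine in a stock sqlite3 build; here is a minimal in-memory reproduction (with a hypothetical table and data) that one can paste into either shell:</p>

```python
import sqlite3

# Build a throwaway in-memory database with two columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE studies (id INTEGER, name TEXT)")
conn.execute("INSERT INTO studies VALUES (1, 'alpha')")
conn.execute("INSERT INTO studies VALUES (2, 'beta')")

# Both the wildcard and the explicit column list should return full rows.
rows_star = conn.execute("SELECT * FROM studies").fetchall()
rows_cols = conn.execute("SELECT id, name FROM studies").fetchall()
```

<p>If the two shells disagree even on this, the problem is in the environment (library versions, paths), not in the query itself.</p>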
| 1 | 2009-07-16T06:08:35Z | [
"python",
"sqlite3",
"field",
"ipython"
] |
Python processes stops responding to SIGTERM / SIGINT after being restarted | 1,133,693 | <p>I'm having a weird problem with some python processes running using a watchdog process.</p>
<p>The watchdog process is written in python and is the parent, and has a function called *start_child(name)* which uses <em>subprocess.Popen</em> to open the child process. The Popen object is recorded so that the watchdog can monitor the process using <strong>poll()</strong> and eventually end it with <strong>terminate()</strong> when needed.
If the child dies unexpectedly, the watchdog calls *start_child(name)* again and records the new Popen object.</p>
<p>There are 7 child processes, all of which are also python. If I run any of the children manually, I can send SIGTERM or SIGINT using <em>kill</em> and get the results I expect (the process ends).</p>
<p>However, when run from the watchdog process, the child will only end after the <strong>FIRST</strong> signal. When the watchdog restarts the child, the new child process no longer responds to SIGTERM or SIGINT. I have no idea what is causing this.</p>
<p><strong>watchdog.py</strong></p>
<pre><code>class watchdog:
# <snip> various init stuff
def start(self):
self.running = True
kids = ['app1', 'app2', 'app3', 'app4', 'app5', 'app6', 'app7']
self.processes = {}
for kid in kids:
self.start_child(kid)
self.thread = threading.Thread(target=self._monitor)
self.thread.start()
while self.running:
time.sleep(10)
def start_child(self, name):
try:
proc = subprocess.Popen(name)
self.processes[name] = proc
except:
print "oh no"
else:
print "started child ok"
def _monitor(self):
while self.running:
time.sleep(1)
if self.running:
for kid, proc in self.processes.iteritems():
if proc.poll() is not None: # process ended
self.start_child(kid)
</code></pre>
<p>So what happens is <em>watchdog.start()</em> launches all 7 processes, and if I send any process SIGTERM, it ends, and the monitor thread starts it again. However, if I then send the new process SIGTERM, it ignores it.</p>
<p>I should be able to keep sending kill -15 to the restarted processes over and over again. Why do they ignore it after being restarted?</p>
| 3 | 2009-07-15T20:11:48Z | 1,134,566 | <p>As explained here: <a href="http://blogs.gentoo.org/agaffney/2005/03/18/python_sucks">http://blogs.gentoo.org/agaffney/2005/03/18/python_sucks</a> , when Python creates a new thread, it blocks all signals for that thread (and for any processes that thread spawns).</p>
<p>I fixed this using sigprocmask, called through ctypes. This may or may not be the "correct" way to do it, but it does work.</p>
<p>In the child process, during <code>__init__</code>:</p>
<pre><code>libc = ctypes.cdll.LoadLibrary("libc.so")
mask = '\x00' * 17 # 16 byte empty mask + null terminator
libc.sigprocmask(3, mask, None) # '3' on FreeBSD is the value for SIG_SETMASK
</code></pre>
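<p>On Python 3.3 and later, the same unblocking can be done portably through the standard <code>signal</code> module instead of ctypes — a sketch that clears the calling thread's signal mask (POSIX only):</p>

```python
import signal

def unblock_all_signals():
    """Clear the calling thread's blocked-signal mask (POSIX, Python 3.3+)."""
    # SIG_SETMASK with an empty set unblocks everything; the old mask
    # is returned in case the caller wants to restore it later.
    old_mask = signal.pthread_sigmask(signal.SIG_SETMASK, set())
    return old_mask
```

<p>Calling this early in the child process restores normal SIGTERM/SIGINT delivery without hard-coding platform-specific constants like the FreeBSD value of SIG_SETMASK.</p>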
| 5 | 2009-07-15T23:00:10Z | [
"python",
"ipc",
"freebsd"
] |
Python processes stops responding to SIGTERM / SIGINT after being restarted | 1,133,693 | <p>I'm having a weird problem with some python processes running using a watchdog process.</p>
<p>The watchdog process is written in python and is the parent, and has a function called *start_child(name)* which uses <em>subprocess.Popen</em> to open the child process. The Popen object is recorded so that the watchdog can monitor the process using <strong>poll()</strong> and eventually end it with <strong>terminate()</strong> when needed.
If the child dies unexpectedly, the watchdog calls *start_child(name)* again and records the new Popen object.</p>
<p>There are 7 child processes, all of which are also python. If I run any of the children manually, I can send SIGTERM or SIGINT using <em>kill</em> and get the results I expect (the process ends).</p>
<p>However, when run from the watchdog process, the child will only end after the <strong>FIRST</strong> signal. When the watchdog restarts the child, the new child process no longer responds to SIGTERM or SIGINT. I have no idea what is causing this.</p>
<p><strong>watchdog.py</strong></p>
<pre><code>class watchdog:
# <snip> various init stuff
def start(self):
self.running = true
kids = ['app1', 'app2', 'app3', 'app4', 'app5', 'app6', 'app7']
self.processes = {}
for kid in kids:
self.start_child(kid)
self.thread = threading.Thread(target=self._monitor)
self.thread.start()
while self.running:
time.sleep(10)
def start_child(self, name):
try:
proc = subprocess.Popen(name)
self.processes[name] = proc
except:
print "oh no"
else:
print "started child ok"
def _monitor(self):
while self.running:
time.sleep(1)
if self.running:
for kid, proc in self.processes.iteritems():
if proc.poll() is not None: # process ended
self.start_child(kid)
</code></pre>
<p>So what happens is <em>watchdog.start()</em> launches all 7 processes, and if I send any process SIGTERM, it ends, and the monitor thread starts it again. However, if I then send the new process SIGTERM, it ignores it.</p>
<p>I should be able to keep sending kill -15 to the restarted processes over and over again. Why do they ignore it after being restarted?</p>
| 3 | 2009-07-15T20:11:48Z | 1,135,368 | <p>Wouldn't it be better to restore the default signal handlers within Python rather than via ctypes? In your child process, use the signal module:</p>
<pre><code>import signal
for sig in range(1, signal.NSIG):
try:
signal.signal(sig, signal.SIG_DFL)
except RuntimeError:
pass
</code></pre>
<p>RuntimeError is raised when trying to set signals such as SIGKILL which can't be caught.</p>
| 0 | 2009-07-16T04:21:35Z | [
"python",
"ipc",
"freebsd"
] |
How can I retrieve last x elements in Django | 1,133,715 | <p>I am trying to retrieve the latest 5 posts (by post time)
In the views.py, if I try <code>blog_post_list = blogPosts.objects.all()[:5]</code> it retrieves the first 5 elements of the blogPosts objects; how can I reverse this to retrieve the latest ones?</p>
<p>Cheers</p>
| 2 | 2009-07-15T20:15:07Z | 1,133,736 | <pre><code>blog_post_list = blogPosts.objects.all().reverse()[:5]
# OR
blog_post_list = blogPosts.objects.all().order_by('-DEFAULT_ORDER_KEY')[:5]
</code></pre>
<p>I prefer the first.</p>
| 8 | 2009-07-15T20:18:06Z | [
"python",
"django",
"list"
] |
How can I retrieve last x elements in Django | 1,133,715 | <p>I am trying to retrieve the latest 5 posts (by post time)
In the views.py, if I try <code>blog_post_list = blogPosts.objects.all()[:5]</code> it retrieves the first 5 elements of the blogPosts objects; how can I reverse this to retrieve the latest ones?</p>
<p>Cheers</p>
| 2 | 2009-07-15T20:15:07Z | 1,133,895 | <p>Based on Nick Presta's answer and your comment, try:</p>
<pre><code>blog_post_list = blogPosts.objects.all().order_by('-pub_date')[:5]
</code></pre>
| 4 | 2009-07-15T20:39:39Z | [
"python",
"django",
"list"
] |
How accurate is python's time.sleep()? | 1,133,857 | <p>I can give it floating point numbers, such as</p>
<pre><code>time.sleep(0.5)
</code></pre>
<p>but how accurate is it? If i give it</p>
<pre><code>time.sleep(0.05)
</code></pre>
<p>will it really sleep about 50 ms?</p>
| 48 | 2009-07-15T20:33:42Z | 1,133,879 | <p>From the <a href="http://docs.python.org/3.0/library/time.html">documentation</a>:</p>
<blockquote>
<p>On the other hand, the precision of
<code>time()</code> and <code>sleep()</code> is better than
their Unix equivalents: times are
expressed as floating point numbers,
<code>time()</code> returns the most accurate time
available (using Unix <code>gettimeofday</code>
where available), and <code>sleep()</code> will
accept a time with a nonzero fraction
(Unix <code>select</code> is used to implement
this, where available).</p>
</blockquote>
<p>And <a href="http://docs.python.org/3.0/library/time.html#time.sleep">more specifically</a> w.r.t. <code>sleep()</code>:</p>
<blockquote>
<p>Suspend execution for the given number
of seconds. The argument may be a
floating point number to indicate a
more precise sleep time. The actual
suspension time <strong>may be less</strong> than that
requested because any caught signal
will terminate the <code>sleep()</code> following
execution of that signal's catching
routine. Also, the suspension time <strong>may
be longer</strong> than requested by an
arbitrary amount because of the
scheduling of other activity in the
system.</p>
</blockquote>
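<p>Because of that "may be less" caveat, a common pattern is to sleep in a loop against a deadline computed up front, so an early wakeup just goes back to sleep. A sketch using <code>time.monotonic</code> (available from Python 3.3):</p>

```python
import time

def sleep_at_least(duration):
    """Sleep until at least `duration` seconds have elapsed,
    resuming if an early wakeup (e.g. a caught signal) cuts sleep short."""
    deadline = time.monotonic() + duration
    remaining = duration
    while remaining > 0:
        time.sleep(remaining)
        remaining = deadline - time.monotonic()
```

<p>This guards against short sleeps; nothing can guard against the "may be longer" side, since that is up to the scheduler.</p>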
| 17 | 2009-07-15T20:37:01Z | [
"python",
"datetime",
"time",
"sleep"
] |
How accurate is python's time.sleep()? | 1,133,857 | <p>I can give it floating point numbers, such as</p>
<pre><code>time.sleep(0.5)
</code></pre>
<p>but how accurate is it? If i give it</p>
<pre><code>time.sleep(0.05)
</code></pre>
<p>will it really sleep about 50 ms?</p>
| 48 | 2009-07-15T20:33:42Z | 1,133,888 | <p>The accuracy of the time.sleep function depends on the accuracy of your underlying OS's sleep accuracy. For non-realtime OS's like a stock Windows the smallest interval you can sleep for is about 10-13ms. I have seen accurate sleeps within several milliseconds of that time when above the minimum 10-13ms.</p>
<p>Update:
As mentioned in the docs cited below, it's common to do the sleep in a loop that goes back to sleep if something wakes you up early.</p>
<p>I should also mention that if you are running Ubuntu you can try out a pseudo real-time kernel (with the RT_PREEMPT patch set) by installing the rt kernel package (at least in Ubuntu 10.04 LTS).</p>
<p>EDIT: Correction: non-realtime Linux kernels have a minimum sleep interval much closer to 1 ms than 10 ms, but it varies in a non-deterministic manner.</p>
| 45 | 2009-07-15T20:38:44Z | [
"python",
"datetime",
"time",
"sleep"
] |
How accurate is python's time.sleep()? | 1,133,857 | <p>I can give it floating point numbers, such as</p>
<pre><code>time.sleep(0.5)
</code></pre>
<p>but how accurate is it? If i give it</p>
<pre><code>time.sleep(0.05)
</code></pre>
<p>will it really sleep about 50 ms?</p>
| 48 | 2009-07-15T20:33:42Z | 1,133,925 | <p>You can't really guarantee anything about sleep(), except that it will at least make a best effort to sleep as long as you told it (signals can kill your sleep before the time is up, and lots more things can make it run long). For sure the minimum you can get on a standard desktop operating system is going to be around 16ms (timer granularity plus time to context switch), but chances are that the % deviation from the provided argument is going to be significant when you're trying to sleep for 10s of milliseconds. Signals, other threads holding the GIL, kernel scheduling fun, processor speed stepping, etc. can all play havoc with the duration your thread/process actually sleeps.</p>
| 2 | 2009-07-15T20:45:43Z | [
"python",
"datetime",
"time",
"sleep"
] |
How accurate is python's time.sleep()? | 1,133,857 | <p>I can give it floating point numbers, such as</p>
<pre><code>time.sleep(0.5)
</code></pre>
<p>but how accurate is it? If i give it</p>
<pre><code>time.sleep(0.05)
</code></pre>
<p>will it really sleep about 50 ms?</p>
| 48 | 2009-07-15T20:33:42Z | 1,133,984 | <p>Why don't you find out:</p>
<pre><code>from datetime import datetime
import time
def check_sleep(amount):
start = datetime.now()
time.sleep(amount)
end = datetime.now()
delta = end-start
return delta.seconds + delta.microseconds/1000000.
error = sum(abs(check_sleep(0.050)-0.050) for i in xrange(100))*10
print "Average error is %0.2fms" % error
</code></pre>
<p>For the record, I get around 0.1ms error on my HTPC and 2ms on my laptop, both linux machines.</p>
| 12 | 2009-07-15T20:52:09Z | [
"python",
"datetime",
"time",
"sleep"
] |
How accurate is python's time.sleep()? | 1,133,857 | <p>I can give it floating point numbers, such as</p>
<pre><code>time.sleep(0.5)
</code></pre>
<p>but how accurate is it? If i give it</p>
<pre><code>time.sleep(0.05)
</code></pre>
<p>will it really sleep about 50 ms?</p>
| 48 | 2009-07-15T20:33:42Z | 15,967,564 | <p>People are quite right about the differences between operating systems and kernels, but I do not see any granularity in Ubuntu and I see a 1 ms granularity in Windows 7. This suggests a different implementation of time.sleep, not just a different tick rate. Closer inspection suggests a 1 μs granularity in Ubuntu, by the way, but that is due to the time.time function that I use for measuring the accuracy.
<img src="http://i.stack.imgur.com/GeF8u.png" alt="Linux and Windows typical time.sleep behaviour in Python"></p>
| 29 | 2013-04-12T09:23:01Z | [
"python",
"datetime",
"time",
"sleep"
] |
How accurate is python's time.sleep()? | 1,133,857 | <p>I can give it floating point numbers, such as</p>
<pre><code>time.sleep(0.5)
</code></pre>
<p>but how accurate is it? If i give it</p>
<pre><code>time.sleep(0.05)
</code></pre>
<p>will it really sleep about 50 ms?</p>
| 48 | 2009-07-15T20:33:42Z | 30,672,412 | <p>Here's my follow-up to Wilbert's answer: the same for Mac OS X Yosemite, since it's not been mentioned much yet.<img src="http://i.stack.imgur.com/4MYle.png" alt="Sleep behavior of Mac OS X Yosemite"></p>
<p>Looks like a lot of the time it sleeps about 1.25 times the time that you request and sometimes sleeps between 1 and 1.25 times the time you request. It almost never (~twice out of 1000 samples) sleeps significantly more than 1.25 times the time you request. </p>
<p>Also (not shown explicitly) the 1.25 relationship seems to hold pretty well until you get below about 0.2 ms, after which it starts get a little fuzzy. Additionally, the actual time seems to settle to about 5 ms longer than you request after the amount of time requested gets above 20 ms.</p>
<p>Again, it appears to be a completely different implementation of <code>sleep()</code> in OS X than in Windows or whichever Linux kernel Wilbert was using.</p>
| 7 | 2015-06-05T17:25:35Z | [
"python",
"datetime",
"time",
"sleep"
] |
Keep ConfigParser output files sorted | 1,134,071 | <p>I've noticed with my source control that the content of the output files generated with ConfigParser is never in the same order. Sometimes sections will change place or options inside sections even without any modifications to the values.</p>
<p>Is there a way to keep things sorted in the configuration file so that I don't have to commit trivial changes every time I launch my application?</p>
| 6 | 2009-07-15T21:03:18Z | 1,134,109 | <p>No. The ConfigParser library writes things out in dictionary hash order. (You can see this if you look at the source code.) There are replacements for this module that do a better job.</p>
<p>I will see if I can find one and add it here.</p>
<p><a href="http://www.voidspace.org.uk/python/configobj.html#introduction" rel="nofollow">http://www.voidspace.org.uk/python/configobj.html#introduction</a> is the one I was thinking of. It's not a drop-in replacement, but it is very easy to use.</p>
| 3 | 2009-07-15T21:09:51Z | [
"python",
"configuration",
"configparser"
] |
Keep ConfigParser output files sorted | 1,134,071 | <p>I've noticed with my source control that the content of the output files generated with ConfigParser is never in the same order. Sometimes sections will change place or options inside sections even without any modifications to the values.</p>
<p>Is there a way to keep things sorted in the configuration file so that I don't have to commit trivial changes every time I launch my application?</p>
| 6 | 2009-07-15T21:03:18Z | 1,134,323 | <p>ConfigParser is based on the INI file format, which by design is supposed NOT to be sensitive to order. If your config file format is sensitive to order, you can't use ConfigParser. It may also confuse people if you have an INI-type format that is sensitive to the order of the statements...</p>
| -1 | 2009-07-15T21:55:54Z | [
"python",
"configuration",
"configparser"
] |
Keep ConfigParser output files sorted | 1,134,071 | <p>I've noticed with my source control that the content of the output files generated with ConfigParser is never in the same order. Sometimes sections will change place or options inside sections even without any modifications to the values.</p>
<p>Is there a way to keep things sorted in the configuration file so that I don't have to commit trivial changes every time I launch my application?</p>
| 6 | 2009-07-15T21:03:18Z | 1,134,533 | <p>Looks like this was fixed in <a href="http://docs.python.org/dev/py3k/whatsnew/3.1.html">Python 3.1</a> and 2.7 with the introduction of ordered dictionaries:</p>
<blockquote>
<p>The standard library now supports use
of ordered dictionaries in several
modules. The configparser module uses
them by default. This lets
configuration files be read, modified,
and then written back in their
original order.</p>
</blockquote>
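<p>On Python 3 this is easy to verify: configparser preserves section and option order across a read-modify-write round trip. A small self-contained sketch (section and key names are made up):</p>

```python
import configparser
import io

# Source written in a deliberate, non-alphabetical order.
src = "[zebra]\nb = 2\na = 1\n\n[apple]\nx = 9\n"

parser = configparser.ConfigParser()
parser.read_string(src)

# Serialize back out; the original ordering should survive.
out = io.StringIO()
parser.write(out)
result = out.getvalue()
```

<p>So for version control, diffs stay trivial as long as you only change values in place.</p>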
| 8 | 2009-07-15T22:45:03Z | [
"python",
"configuration",
"configparser"
] |
Python accessing web service protected by PKI/SSL | 1,134,565 | <p>I need to use Python to access data from a RESTful web service that requires certificate-based client authentication (PKI) over SSL/HTTPS. What is the recommended way of doing this?</p>
| 1 | 2009-07-15T23:00:04Z | 1,134,612 | <p>I found this: <a href="http://code.activestate.com/recipes/117004/" rel="nofollow">http://code.activestate.com/recipes/117004/</a>
I did not try it so it may not work.</p>
| 1 | 2009-07-15T23:18:22Z | [
"python",
"web-services",
"ssl",
"certificate",
"pki"
] |
Python accessing web service protected by PKI/SSL | 1,134,565 | <p>I need to use Python to access data from a RESTful web service that requires certificate-based client authentication (PKI) over SSL/HTTPS. What is the recommended way of doing this?</p>
| 1 | 2009-07-15T23:00:04Z | 1,135,272 | <p>The suggestion by stribika using <code>httplib.HTTPSConnection</code> should work for you provided that you do not need to verify the server's certificate. If you do want/need to verify the server, you'll need to look at a 3rd party module such as <a href="http://pyopenssl.sourceforge.net/" rel="nofollow">pyOpenSSL</a> (which is a Python wrapper around a subset of the OpenSSL library).</p>
| 2 | 2009-07-16T03:35:50Z | [
"python",
"web-services",
"ssl",
"certificate",
"pki"
] |
Python accessing web service protected by PKI/SSL | 1,134,565 | <p>I need to use Python to access data from a RESTful web service that requires certificate-based client authentication (PKI) over SSL/HTTPS. What is the recommended way of doing this?</p>
| 1 | 2009-07-15T23:00:04Z | 1,441,812 | <p>I would recommend using <a href="http://chandlerproject.org/Projects/MeTooCrypto" rel="nofollow">M2Crypto</a>. If you are a Twisted guy, <a href="http://svn.osafoundation.org/m2crypto/trunk/M2Crypto/SSL/TwistedProtocolWrapper.py" rel="nofollow">M2Crypto integrates with Twisted</a> so you can let Twisted handle the networking stuff and M2Crypto the SSL/verification/validation stuff.</p>
| 0 | 2009-09-17T23:16:52Z | [
"python",
"web-services",
"ssl",
"certificate",
"pki"
] |
Python Exception handling | 1,134,607 | <p>C has perror and errno, which print and store the last error encountered. This is convenient when doing file io as I do not have to fstat() every file that fails as an argument to fopen() to present the user with a reason why the call failed.</p>
<p>I was wondering what is the proper way to grab errno when gracefully handling the IOError exception in python?</p>
<pre>
In [1]: fp = open("/notthere")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/mugen/ in ()
IOError: [Errno 2] No such file or directory: '/notthere'
In [2]: fp = open("test/testfile")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/mugen/ in ()
IOError: [Errno 13] Permission denied: 'test/testfile'
In [5]: try:
...: fp = open("nothere")
...: except IOError:
...: print "This failed for some reason..."
...:
...:
This failed for some reason...
</pre>
| 15 | 2009-07-15T23:15:06Z | 1,134,614 | <p>The exception has an <code>errno</code> attribute:</p>
<pre><code>try:
fp = open("nother")
except IOError, e:
print e.errno
print e
</code></pre>
| 18 | 2009-07-15T23:18:53Z | [
"python",
"exception",
"errno",
"ioerror"
] |
Python Exception handling | 1,134,607 | <p>C has perror and errno, which print and store the last error encountered. This is convenient when doing file io as I do not have to fstat() every file that fails as an argument to fopen() to present the user with a reason why the call failed.</p>
<p>I was wondering what is the proper way to grab errno when gracefully handling the IOError exception in python?</p>
<pre>
In [1]: fp = open("/notthere")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/mugen/ in ()
IOError: [Errno 2] No such file or directory: '/notthere'
In [2]: fp = open("test/testfile")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/mugen/ in ()
IOError: [Errno 13] Permission denied: 'test/testfile'
In [5]: try:
...: fp = open("nothere")
...: except IOError:
...: print "This failed for some reason..."
...:
...:
This failed for some reason...
</pre>
| 15 | 2009-07-15T23:15:06Z | 1,134,616 | <p>Here's how you can do it. Also see the <code>errno</code> module and <code>os.strerror</code> function for some utilities.</p>
<pre><code>import os, errno
try:
f = open('asdfasdf', 'r')
except IOError as ioex:
print 'errno:', ioex.errno
print 'err code:', errno.errorcode[ioex.errno]
print 'err message:', os.strerror(ioex.errno)
</code></pre>
<ul>
<li><a href="http://docs.python.org/library/errno.html" rel="nofollow">http://docs.python.org/library/errno.html</a></li>
<li><a href="http://docs.python.org/library/os.html" rel="nofollow">http://docs.python.org/library/os.html</a></li>
</ul>
<p>For more information on IOError attributes, see the base class EnvironmentError:</p>
<ul>
<li><a href="http://docs.python.org/library/exceptions.html?highlight=ioerror#exceptions.EnvironmentError" rel="nofollow">http://docs.python.org/library/exceptions.html?highlight=ioerror#exceptions.EnvironmentError</a></li>
</ul>
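<p>Putting the pieces together, a small sketch: compare <code>e.errno</code> against the symbolic constants in the <code>errno</code> module rather than hard-coding numbers like 2 or 13.</p>

```python
import errno
import os

try:
    open('/no/such/file')
except IOError as e:  # on Python 3, IOError is an alias of OSError
    # errno.ENOENT, errno.EACCES, ... are readable symbolic names.
    if e.errno == errno.ENOENT:
        print('missing file:', os.strerror(e.errno))
```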
| 22 | 2009-07-15T23:19:32Z | [
"python",
"exception",
"errno",
"ioerror"
] |
Python Exception handling | 1,134,607 | <p>C has perror and errno, which print and store the last error encountered. This is convenient when doing file io as I do not have to fstat() every file that fails as an argument to fopen() to present the user with a reason why the call failed.</p>
<p>I was wondering what is the proper way to grab errno when gracefully handling the IOError exception in python?</p>
<pre>
In [1]: fp = open("/notthere")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/mugen/ in ()
IOError: [Errno 2] No such file or directory: '/notthere'
In [2]: fp = open("test/testfile")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
/home/mugen/ in ()
IOError: [Errno 13] Permission denied: 'test/testfile'
In [5]: try:
...: fp = open("nothere")
...: except IOError:
...: print "This failed for some reason..."
...:
...:
This failed for some reason...
</pre>
| 15 | 2009-07-15T23:15:06Z | 1,134,622 | <pre><code>try:
fp = open("nothere")
except IOError as err:
print err.errno
print err.strerror
</code></pre>
| 18 | 2009-07-15T23:20:51Z | [
"python",
"exception",
"errno",
"ioerror"
] |
Django required field in model form | 1,134,667 | <p>I have a form where a couple of fields are coming out as required when I don't want them to be. Here is the form from models.py:</p>
<pre><code>class CircuitForm(ModelForm):
class Meta:
model = Circuit
exclude = ('lastPaged',)
def __init__(self, *args, **kwargs):
super(CircuitForm, self).__init__(*args, **kwargs)
self.fields['begin'].widget = widgets.AdminSplitDateTime()
self.fields['end'].widget = widgets.AdminSplitDateTime()
</code></pre>
<p>In the actual Circuit model, the fields are defined like this:</p>
<pre><code>begin = models.DateTimeField('Start Time', null=True, blank=True)
end = models.DateTimeField('Stop Time', null=True, blank=True)
</code></pre>
<p>My views.py for this is here:</p>
<pre><code>def addCircuitForm(request):
if request.method == 'POST':
form = CircuitForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/sla/all')
form = CircuitForm()
return render_to_response('sla/add.html', {'form': form})
</code></pre>
<p>What can I do so that the two fields aren't required?</p>
| 51 | 2009-07-15T23:37:16Z | 1,134,693 | <blockquote>
<p>If the model field has blank=True, then required is set to False on the form field. Otherwise, required=True</p>
</blockquote>
<p>Says so here: <a href="http://docs.djangoproject.com/en/dev/topics/forms/modelforms/">http://docs.djangoproject.com/en/dev/topics/forms/modelforms/</a></p>
<p>Looks like you are doing everything right.
You could check the value of <code>self.fields['end'].required</code>.</p>
| 10 | 2009-07-15T23:45:10Z | [
"python",
"django",
"forms",
"model",
"widget"
] |
Django required field in model form | 1,134,667 | <p>I have a form where a couple of fields are coming out as required when I don't want them to be. Here is the form from models.py:</p>
<pre><code>class CircuitForm(ModelForm):
class Meta:
model = Circuit
exclude = ('lastPaged',)
def __init__(self, *args, **kwargs):
super(CircuitForm, self).__init__(*args, **kwargs)
self.fields['begin'].widget = widgets.AdminSplitDateTime()
self.fields['end'].widget = widgets.AdminSplitDateTime()
</code></pre>
<p>In the actual Circuit model, the fields are defined like this:</p>
<pre><code>begin = models.DateTimeField('Start Time', null=True, blank=True)
end = models.DateTimeField('Stop Time', null=True, blank=True)
</code></pre>
<p>My views.py for this is here:</p>
<pre><code>def addCircuitForm(request):
if request.method == 'POST':
form = CircuitForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/sla/all')
form = CircuitForm()
return render_to_response('sla/add.html', {'form': form})
</code></pre>
<p>What can I do so that the two fields aren't required?</p>
| 51 | 2009-07-15T23:37:16Z | 1,429,646 | <p>If you don't want to modify the <code>blank</code> setting for your fields inside models (doing so would break normal validation in the admin site), you can do the following in your Form class:</p>
<pre><code>def __init__(self, *args, **kwargs):
super(CircuitForm, self).__init__(*args, **kwargs)
for key in self.fields:
self.fields[key].required = False
</code></pre>
<p>The redefined constructor won't harm any functionality.</p>
| 84 | 2009-09-15T21:11:02Z | [
"python",
"django",
"forms",
"model",
"widget"
] |
Django required field in model form | 1,134,667 | <p>I have a form where a couple of fields are coming out as required when I don't want them to be. Here is the form from models.py:</p>
<pre><code>class CircuitForm(ModelForm):
class Meta:
model = Circuit
exclude = ('lastPaged',)
def __init__(self, *args, **kwargs):
super(CircuitForm, self).__init__(*args, **kwargs)
self.fields['begin'].widget = widgets.AdminSplitDateTime()
self.fields['end'].widget = widgets.AdminSplitDateTime()
</code></pre>
<p>In the actual Circuit model, the fields are defined like this:</p>
<pre><code>begin = models.DateTimeField('Start Time', null=True, blank=True)
end = models.DateTimeField('Stop Time', null=True, blank=True)
</code></pre>
<p>My views.py for this is here:</p>
<pre><code>def addCircuitForm(request):
if request.method == 'POST':
form = CircuitForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/sla/all')
form = CircuitForm()
return render_to_response('sla/add.html', {'form': form})
</code></pre>
<p>What can I do so that the two fields aren't required?</p>
| 51 | 2009-07-15T23:37:16Z | 1,650,162 | <p>It's not an answer, but for anyone else who finds this via Google, one more bit of data: this is happening to me on a Model Form with a DateField. It has required set to False, the model has "null=True, blank=True" and the field in the form shows required=False if I look at it during the clean() method, but it's still saying I need a valid date format. I'm not using any special widget and I get the "Enter a valid date" message even when I explicitly set input_formats=['%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', ''] on the form field.</p>
<p><strong>EDIT:</strong> Don't know if it'll help anyone else, but I solved the problem I was having. Our form has some default text in the field (in this case, the word "to" to indicate the field is the end date; the field is called "end_time"). I was specifically looking for the word "to" in the form's clean() method (I'd also tried the clean_end_time() method, but it never got called) and setting the value of the clean_data variable to None as suggested in <a href="http://code.djangoproject.com/ticket/11765" rel="nofollow">this Django ticket</a>. However, none of that mattered as (I guess) the model's validation had already puked on the invalid date format of "to" without giving me a chance to intercept it. </p>
| 3 | 2009-10-30T14:33:49Z | [
"python",
"django",
"forms",
"model",
"widget"
] |
Django required field in model form | 1,134,667 | <p>I have a form where a couple of fields are coming out as required when I don't want them to be. Here is the form from models.py:</p>
<pre><code>class CircuitForm(ModelForm):
class Meta:
model = Circuit
exclude = ('lastPaged',)
def __init__(self, *args, **kwargs):
super(CircuitForm, self).__init__(*args, **kwargs)
self.fields['begin'].widget = widgets.AdminSplitDateTime()
self.fields['end'].widget = widgets.AdminSplitDateTime()
</code></pre>
<p>In the actual Circuit model, the fields are defined like this:</p>
<pre><code>begin = models.DateTimeField('Start Time', null=True, blank=True)
end = models.DateTimeField('Stop Time', null=True, blank=True)
</code></pre>
<p>My views.py for this is here:</p>
<pre><code>def addCircuitForm(request):
if request.method == 'POST':
form = CircuitForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/sla/all')
form = CircuitForm()
return render_to_response('sla/add.html', {'form': form})
</code></pre>
<p>What can I do so that the two fields aren't required?</p>
| 51 | 2009-07-15T23:37:16Z | 1,833,543 | <p>This is a bug when using the widgets:</p>
<p>workaround:
<a href="http://stackoverflow.com/questions/38601/using-django-time-date-widgets-in-custom-form/1833247#1833247">http://stackoverflow.com/questions/38601/using-django-time-date-widgets-in-custom-form/1833247#1833247</a></p>
<p>or ticket 12303</p>
| 0 | 2009-12-02T15:16:19Z | [
"python",
"django",
"forms",
"model",
"widget"
] |
Django required field in model form | 1,134,667 | <p>I have a form where a couple of fields are coming out as required when I don't want them to be. Here is the form from models.py:</p>
<pre><code>class CircuitForm(ModelForm):
class Meta:
model = Circuit
exclude = ('lastPaged',)
def __init__(self, *args, **kwargs):
super(CircuitForm, self).__init__(*args, **kwargs)
self.fields['begin'].widget = widgets.AdminSplitDateTime()
self.fields['end'].widget = widgets.AdminSplitDateTime()
</code></pre>
<p>In the actual Circuit model, the fields are defined like this:</p>
<pre><code>begin = models.DateTimeField('Start Time', null=True, blank=True)
end = models.DateTimeField('Stop Time', null=True, blank=True)
</code></pre>
<p>My views.py for this is here:</p>
<pre><code>def addCircuitForm(request):
if request.method == 'POST':
form = CircuitForm(request.POST)
if form.is_valid():
form.save()
return HttpResponseRedirect('/sla/all')
form = CircuitForm()
return render_to_response('sla/add.html', {'form': form})
</code></pre>
<p>What can I do so that the two fields aren't required?</p>
| 51 | 2009-07-15T23:37:16Z | 34,790,883 | <p>Expanding on DataGreed's answer, I created a Mixin that allows you to specify a <code>fields_required</code> variable on the <code>Meta</code> class like this:</p>
<pre><code>class MyForm(RequiredFieldsMixin, ModelForm):
class Meta:
model = MyModel
fields = ['field1', 'field2']
fields_required = ['field1']
</code></pre>
<p>Here it is:</p>
<pre><code>class RequiredFieldsMixin():
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
fields_required = getattr(self.Meta, 'fields_required', None)
if fields_required:
for key in self.fields:
if key not in fields_required:
self.fields[key].required = False
</code></pre>
| 1 | 2016-01-14T13:29:18Z | [
"python",
"django",
"forms",
"model",
"widget"
] |
Simple tray icon application using pygtk | 1,134,749 | <p>I'm writing a webmail checker in python and I want it to just sit on the tray icon and warn me when there is a new email. Could anyone point me in the right direction as far as the gtk code?</p>
<p>I already coded the bits necessary to check for new email but it's CLI right now.</p>
| 4 | 2009-07-16T00:02:35Z | 1,134,771 | <p>You'll want to use a gtk.StatusIcon to actually display the icon. <a href="http://www.pygtk.org/docs/pygtk/class-gtkstatusicon.html">Here are the docs</a>. If you're just getting started with GUI programming you might want to work through a bit of the <a href="http://www.pygtk.org/pygtk2tutorial/index.html">pygtk tutorial</a>.</p>
| 5 | 2009-07-16T00:11:40Z | [
"python",
"pygtk",
"trayicon",
"tray"
] |
Simple tray icon application using pygtk | 1,134,749 | <p>I'm writing a webmail checker in python and I want it to just sit on the tray icon and warn me when there is a new email. Could anyone point me in the right direction as far as the gtk code?</p>
<p>I already coded the bits necessary to check for new email but it's CLI right now.</p>
| 4 | 2009-07-16T00:02:35Z | 1,134,773 | <p>This <a href="http://www.pygtk.org/docs/pygtk/class-gtkstatusicon.html" rel="nofollow">http://www.pygtk.org/docs/pygtk/class-gtkstatusicon.html</a> should get you going.</p>
| 2 | 2009-07-16T00:12:50Z | [
"python",
"pygtk",
"trayicon",
"tray"
] |
How to install django-haystack using buildout | 1,134,946 | <p>I'm trying to convert a current Django project in development to use zc.buildout. So far, I've got all the bits figured out except for Haystack.</p>
<p>The Haystack source is available on GitHub, but I don't want to force users to install git. A suitable alternative seems to be to fetch a tarball from <a href="http://github.com/toastdriven/django-haystack/tarball/master" rel="nofollow">here</a></p>
<p>That tarball contains a setuptools setup.py, and it seems like it should be so <em>easy</em> to get buildout to install it. Halp!</p>
| 3 | 2009-07-16T01:08:12Z | 1,136,131 | <p>Well, if you don't want to install Git, you can't check it out. So then you have to download a release. But there aren't any. In theory, find-links directly to the distribution should work. In this case it won't, probably because you don't link to the file, but to a page that generates the file from the trunk. So that option was out. </p>
<p>So, you need to bribe somebody to make a release, or make one yourself. You can make a release and stick it in a file server somewhere, and then use find-links in the buildout to point to the right place. </p>
<p>Or, since nobody else seems to have released Haystack to PyPI, you can do it! (But be nice and tell the developers and give them manager rights to the package as well).</p>
| 1 | 2009-07-16T08:15:48Z | [
"python",
"buildout"
] |
How to install django-haystack using buildout | 1,134,946 | <p>I'm trying to convert a current Django project in development to use zc.buildout. So far, I've got all the bits figured out except for Haystack.</p>
<p>The Haystack source is available on GitHub, but I don't want to force users to install git. A suitable alternative seems to be to fetch a tarball from <a href="http://github.com/toastdriven/django-haystack/tarball/master" rel="nofollow">here</a></p>
<p>That tarball contains a setuptools setup.py, and it seems like it should be so <em>easy</em> to get buildout to install it. Halp!</p>
| 3 | 2009-07-16T01:08:12Z | 1,144,547 | <p>I figured this one out, without posting it to PyPI. (There is actually no tagged release version of django-haystack, so posting to PyPI seems unclean. It's something the maintainer should and probably will handle better themselves.)</p>
<p>The relevant section is as follows:</p>
<pre><code>[haystack]
recipe = collective.recipe.distutils
url = http://github.com/ephelon/django-haystack/tarball/master
</code></pre>
<p>I had to create a fork of the project to remove <code>zip_safe=False</code> from setup.py. Once I'd done that the above works flawlessly, even the redirect sent by the above url.</p>
| 4 | 2009-07-17T17:06:27Z | [
"python",
"buildout"
] |
How to install django-haystack using buildout | 1,134,946 | <p>I'm trying to convert a current Django project in development to use zc.buildout. So far, I've got all the bits figured out except for Haystack.</p>
<p>The Haystack source is available on GitHub, but I don't want to force users to install git. A suitable alternative seems to be to fetch a tarball from <a href="http://github.com/toastdriven/django-haystack/tarball/master" rel="nofollow">here</a></p>
<p>That tarball contains a setuptools setup.py, and it seems like it should be so <em>easy</em> to get buildout to install it. Halp!</p>
| 3 | 2009-07-16T01:08:12Z | 1,202,483 | <p>This currently works for me without forking.</p>
<pre><code>[django-haystack]
recipe = zerokspot.recipe.git
repository = git://github.com/toastdriven/django-haystack.git
as_egg = true
[whoosh]
recipe = zerokspot.recipe.git
repository = git://github.com/toastdriven/whoosh.git
branch = haystacked
as_egg = true
</code></pre>
<p>Make sure you add the locations to your <code>extra-paths</code>.</p>
| 2 | 2009-07-29T19:25:11Z | [
"python",
"buildout"
] |
How to install django-haystack using buildout | 1,134,946 | <p>I'm trying to convert a current Django project in development to use zc.buildout. So far, I've got all the bits figured out except for Haystack.</p>
<p>The Haystack source is available on GitHub, but I don't want to force users to install git. A suitable alternative seems to be to fetch a tarball from <a href="http://github.com/toastdriven/django-haystack/tarball/master" rel="nofollow">here</a></p>
<p>That tarball contains a setuptools setup.py, and it seems like it should be so <em>easy</em> to get buildout to install it. Halp!</p>
| 3 | 2009-07-16T01:08:12Z | 1,701,213 | <p>It seems they've fixed the package to work from the tarball. James' fork is not working right now, but you can use the same recipe passing it the standard url:</p>
<pre><code>[haystack]
recipe = collective.recipe.distutils
url = http://github.com/toastdriven/django-haystack/tarball/master
</code></pre>
<p>This worked for me and is 100% hack free.</p>
| 0 | 2009-11-09T14:14:40Z | [
"python",
"buildout"
] |
Buildout recipe for a hierarchy of parts | 1,135,082 | <p>Is there a Python <a href="http://www.buildout.org/" rel="nofollow">buildout</a> recipe which would allow the following:</p>
<pre><code>[buildout]
parts = group-of-parts
[group-of-parts]
recipe = what.can.i.use.for.this
parts = part-1 part-2
[part-1]
...
[part-2]
...
</code></pre>
<p>In other words, I want a recipe which takes a 'parts' attribute much like the 'buildout' section does, so I can manually manage a hierarchy of groups of parts.</p>
<p>Yes, I know that I could do:</p>
<pre><code>[buildout]
parts = group-of-parts
[group-of-parts]
recipe =
parts = ${part-1:recipe} ${part-2:recipe}
[part-1]
...
[part-2]
...
</code></pre>
<p>but relying on the side effect that the parts will be built by referencing an attribute of them seems a bit obscure. I would rather it be more explicit by using a recipe which would just allow the name of the part itself to be listed.</p>
<p>Certainly when extending and overriding, it looks a lot cleaner to say:</p>
<pre><code>[groups-of-parts]
parts -= part-2
</code></pre>
<p>than:</p>
<pre><code>[groups-of-parts]
parts -= ${part-2:recipe}
</code></pre>
<p>Or is my problem that I am just missing something fundamental about how buildout works, or just overlooking something in the documentation which makes this much cleaner.</p>
<p>And no I don't want to have a flat hierarchy where all parts are listed in the 'parts' attribute of the 'buildout' section.</p>
| 4 | 2009-07-16T02:05:58Z | 1,135,970 | <p>No, there is no hierarchy, although you could build a recipe for it, of course.</p>
<p>Why do you want it? It's not like you end up with hundreds of parts that are hard to keep track of...</p>
| 1 | 2009-07-16T07:33:57Z | [
"python",
"buildout"
] |
Buildout recipe for a hierarchy of parts | 1,135,082 | <p>Is there a Python <a href="http://www.buildout.org/" rel="nofollow">buildout</a> recipe which would allow the following:</p>
<pre><code>[buildout]
parts = group-of-parts
[group-of-parts]
recipe = what.can.i.use.for.this
parts = part-1 part-2
[part-1]
...
[part-2]
...
</code></pre>
<p>In other words, I want a recipe which takes a 'parts' attribute much like the 'buildout' section does, so I can manually manage a hierarchy of groups of parts.</p>
<p>Yes, I know that I could do:</p>
<pre><code>[buildout]
parts = group-of-parts
[group-of-parts]
recipe =
parts = ${part-1:recipe} ${part-2:recipe}
[part-1]
...
[part-2]
...
</code></pre>
<p>but relying on the side effect that the parts will be built by referencing an attribute of them seems a bit obscure. I would rather it be more explicit by using a recipe which would just allow the name of the part itself to be listed.</p>
<p>Certainly when extending and overriding, it looks a lot cleaner to say:</p>
<pre><code>[groups-of-parts]
parts -= part-2
</code></pre>
<p>than:</p>
<pre><code>[groups-of-parts]
parts -= ${part-2:recipe}
</code></pre>
<p>Or is my problem that I am just missing something fundamental about how buildout works, or just overlooking something in the documentation which makes this much cleaner.</p>
<p>And no I don't want to have a flat hierarchy where all parts are listed in the 'parts' attribute of the 'buildout' section.</p>
| 4 | 2009-07-16T02:05:58Z | 7,325,985 | <p>Some time ago I wrote an <a href="http://bluedynamics.com/articles/jens/dependencies-and-zc.buildout" rel="nofollow">article on dependency resolution in buildout</a>. It's not an answer to your question, because what you want does not make much sense in my opinion. But it may give you some insight into the dependency-hierarchy resolution buildout uses.</p>
| 0 | 2011-09-06T20:50:49Z | [
"python",
"buildout"
] |
Buildout recipe for a hierarchy of parts | 1,135,082 | <p>Is there a Python <a href="http://www.buildout.org/" rel="nofollow">buildout</a> recipe which would allow the following:</p>
<pre><code>[buildout]
parts = group-of-parts
[group-of-parts]
recipe = what.can.i.use.for.this
parts = part-1 part-2
[part-1]
...
[part-2]
...
</code></pre>
<p>In other words, I want a recipe which takes a 'parts' attribute much like 'buildout' section does so I can manually manage a hierarchy of groups of parts.</p>
<p>Yes, I know that I could do:</p>
<pre><code>[buildout]
parts = group-of-parts
[group-of-parts]
recipe =
parts = ${part-1:recipe} ${part-2:recipe}
[part-1]
...
[part-2]
...
</code></pre>
<p>but relying on the side effect that the parts will be built by referencing an attribute of them seems a bit obscure. I would rather it be more explicit by using a recipe which would just allow the name of the part itself to be listed.</p>
<p>Certainly when extending and overriding, it looks a lot cleaner to say:</p>
<pre><code>[groups-of-parts]
parts -= part-2
</code></pre>
<p>than:</p>
<pre><code>[groups-of-parts]
parts -= ${part-2:recipe}
</code></pre>
<p>Or is my problem that I am just missing something fundamental about how buildout works, or just overlooking something in the documentation which makes this much cleaner.</p>
<p>And no I don't want to have a flat hierarchy where all parts are listed in the 'parts' attribute of the 'buildout' section.</p>
| 4 | 2009-07-16T02:05:58Z | 7,327,966 | <p>You can do this:</p>
<pre><code>[buildout]
development-tools-parts =
thing1
thing2
software-parts =
thing3
thing4
parts =
${buildout:development-tools-parts}
${buildout:software-parts}
</code></pre>
<p>Did I understand you correctly?</p>
<p>It works because most of those buildout configuration values are just lists, which you can append to each other.</p>
<p>I sometimes used this for a basic buildout config ("base.cfg") that I would extend from. This would give you a <code>${buildout:common-parts}</code> you could add to your parts list to get a couple of standard items in there. Just to give you an example.</p>
| 0 | 2011-09-07T01:31:58Z | [
"python",
"buildout"
] |
Preventing variable substitutions from occurring with buildout | 1,135,292 | <p>Is there a simple way of escaping the magic characters used for variable substitution in a <a href="http://www.buildout.org/">buildout</a> configuration, such that the string is left alone. In other words, where I say:</p>
<pre><code>[part]
attribute = ${variable}
</code></pre>
<p>I don't actually want it to expand ${variable} but leave it as the literal value.</p>
<p>In practice the specific problem I am encountering is not in the buildout configuration file itself, but in a template file processed by the recipe 'collective.recipe.template'. This uses the same variable substitution engine from buildout that is used in the configuration files. Problem is that the file I want to use as a template already uses '${variable}' syntax for its own purposes in conjunction with the application configuration system which ultimately consumes the file.</p>
<p>The only way I have found to get around the problem is to use something like:</p>
<pre><code>[server-xml]
recipe = collective.recipe.template
input = templates/server.xml.in
output = ${product:build-directory}/conf/server.xml
dollar = $
</code></pre>
<p>In the template input file, I then have:</p>
<pre><code>${dollar}{variable}
</code></pre>
<p>instead of:</p>
<pre><code>${variable}
</code></pre>
<p>that it already had.</p>
<p>What this does is cause a lookup of the 'dollar' attribute against the section using the template and replace it with '$'.</p>
<p>Rather than have to do that, I was sort of hoping that one could do:</p>
<pre><code>\${variable}
</code></pre>
<p>or perhaps even:</p>
<pre><code>$${variable}
</code></pre>
<p>and eliminate the need to have a dummy attribute to trick it into doing what I want.</p>
<p>Looking at the source code for buildout, the way it matches variable substitution doesn't seem to provide an escape mechanism.</p>
<p>If there is indeed no way, then perhaps someone knows of an alternate templating recipe for buildout that can do variable expansion, but provides an escape mechanism for whatever way it indicates variables, such that one can avoid problems where there may be a clash between the templating systems expansion mechanism and literal data in the file being templated.</p>
| 6 | 2009-07-16T03:45:55Z | 1,164,584 | <p>I am afraid your analysis of the buildout variable substitution code (which collective.recipe.template relies on) is correct. There is no syntax for escaping a <code>${section:variable}</code> variable substitution and your solution of providing a <code>${dollar}</code> substitution is the best workaround I can think of.</p>
<p>You could of course also propose a patch to the zc.buildout team to add support for escaping the variable substitution syntax. :-)</p>
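<p>For comparison, Python's own <code>string.Template</code>, whose <code>${name}</code> syntax buildout's substitution resembles, spells a literal dollar as <code>$$</code>; that is the kind of escape hook such a patch would add:</p>

```python
from string import Template

# "$$" yields a literal "$"; "${amount}" is substituted as usual.
t = Template('price: $$${amount}')
print(t.substitute(amount='5'))  # price: $5
```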
| 5 | 2009-07-22T11:17:12Z | [
"python",
"buildout"
] |
Preventing variable substitutions from occurring with buildout | 1,135,292 | <p>Is there a simple way of escaping the magic characters used for variable substitution in a <a href="http://www.buildout.org/">buildout</a> configuration, such that the string is left alone. In other words, where I say:</p>
<pre><code>[part]
attribute = ${variable}
</code></pre>
<p>I don't actually want it to expand ${variable} but leave it as the literal value.</p>
<p>In practice the specific problem I am encountering is not in the buildout configuration file itself, but in a template file processed by the recipe 'collective.recipe.template'. This uses the same variable substitution engine from buildout that is used in the configuration files. Problem is that the file I want to use as a template already uses '${variable}' syntax for its own purposes in conjunction with the application configuration system which ultimately consumes the file.</p>
<p>The only way I have found to get around the problem is to use something like:</p>
<pre><code>[server-xml]
recipe = collective.recipe.template
input = templates/server.xml.in
output = ${product:build-directory}/conf/server.xml
dollar = $
</code></pre>
<p>In the template input file, I then have:</p>
<pre><code>${dollar}{variable}
</code></pre>
<p>instead of:</p>
<pre><code>${variable}
</code></pre>
<p>that it already had.</p>
<p>What this does is cause a lookup of the 'dollar' attribute against the section using the template and replace it with '$'.</p>
<p>Rather than have to do that, I was sort of hoping that one could do:</p>
<pre><code>\${variable}
</code></pre>
<p>or perhaps even:</p>
<pre><code>$${variable}
</code></pre>
<p>and eliminate the need to have a dummy attribute to trick it into doing what I want.</p>
<p>Looking at the source code for buildout, the way it matches variable substitution doesn't seem to provide an escape mechanism.</p>
<p>If there is indeed no way, then perhaps someone knows of an alternate templating recipe for buildout that can do variable expansion, but provides an escape mechanism for whatever way it indicates variables, such that one can avoid problems where there may be a clash between the templating systems expansion mechanism and literal data in the file being templated.</p>
| 6 | 2009-07-16T03:45:55Z | 5,939,291 | <p>Since version 1.7 of collective.recipe.template you can use Genshi text templates, but you'll want at least version 1.8 because of some fixes.</p>
<pre><code>recipe = collective.recipe.template[genshi]:genshi
...
mymessage = Hello
</code></pre>
<p>The input file then looks like:</p>
<pre><code>The message in $${:mymessage} is: ${options['mymessage']}
</code></pre>
<p>Genshi allows escaping of the dollar sign; see <a href="http://genshi.edgewall.org/wiki/Documentation/templates.html#escaping">http://genshi.edgewall.org/wiki/Documentation/templates.html#escaping</a></p>
<p>More details on how to use the recipe with Genshi are at <a href="http://pypi.python.org/pypi/collective.recipe.template#genshi-text-templates">http://pypi.python.org/pypi/collective.recipe.template#genshi-text-templates</a></p>
| 5 | 2011-05-09T15:46:45Z | [
"python",
"buildout"
] |
Preventing variable substitutions from occurring with buildout | 1,135,292 | <p>Is there a simple way of escaping the magic characters used for variable substitution in a <a href="http://www.buildout.org/">buildout</a> configuration, such that the string is left alone. In other words, where I say:</p>
<pre><code>[part]
attribute = ${variable}
</code></pre>
<p>I don't actually want it to expand ${variable} but leave it as the literal value.</p>
<p>In practice the specific problem I am encountering is not in the buildout configuration file itself, but in a template file processed by the recipe 'collective.recipe.template'. This uses the same variable substitution engine from buildout that is used in the configuration files. Problem is that the file I want to use as a template already uses '${variable}' syntax for its own purposes in conjunction with the application configuration system which ultimately consumes the file.</p>
<p>The only way I have found to get around the problem is to use something like:</p>
<pre><code>[server-xml]
recipe = collective.recipe.template
input = templates/server.xml.in
output = ${product:build-directory}/conf/server.xml
dollar = $
</code></pre>
<p>In the template input file I then have:</p>
<pre><code>${dollar}{variable}
</code></pre>
<p>instead of:</p>
<pre><code>${variable}
</code></pre>
<p>that it already had.</p>
<p>What this does is cause a lookup of the 'dollar' attribute against the section using the template, replacing it with '$'.</p>
<p>Rather than having to do that, I was sort of hoping that one could do:</p>
<pre><code>\${variable}
</code></pre>
<p>or perhaps even:</p>
<pre><code>$${variable}
</code></pre>
<p>and eliminate the need for a dummy attribute to trick it into doing what I want.</p>
<p>Looking at the source code for buildout, the way it matches variable substitution doesn't seem to provide an escape mechanism.</p>
<p>If there is indeed no way, then perhaps someone knows of an alternate templating recipe for buildout that can do variable expansion, but provides an escape mechanism for whatever way it indicates variables, such that one can avoid problems where there may be a clash between the templating system's expansion mechanism and literal data in the file being templated.</p>
| 6 | 2009-07-16T03:45:55Z | 37,085,401 | <p>Inserting an empty substitution between the <code>$</code> and the <code>{</code> should prevent buildout from evaluating the resulting text as a buildout substitution.</p>
<p><strong>buildout.cfg:</strong></p>
<pre><code>[server-xml]
recipe = collective.recipe.template
input = server.xml.in
output = server.xml
_ =
</code></pre>
<p><strong>server.xml.in:</strong></p>
<pre><code>do no substitution $${_}{myvar} blah
</code></pre>
<p><strong>server.xml:</strong></p>
<pre><code>do no substitution ${myvar} blah
</code></pre>
| 0 | 2016-05-07T06:43:34Z | [
"python",
"buildout"
] |
increment int object | 1,135,335 | <p>Is there a way in Python to increment an int object in place? int doesn't seem to implement <code>__iadd__</code>, so <code>+= 1</code> actually returns a new object:</p>
<pre><code>>>> n=1
>>> id(n)
9788024
>>> n+=1
>>> id(n)
9788012
</code></pre>
<p>What I want is for n to remain pointing to the same object.</p>
<p>Purpose: I have a class derived from int and I want to implement a C-style '++n' operator for that class.</p>
<p>Conclusion: OK, as int is immutable there is no way; it looks like I will have to write my own class, something like this:</p>
<pre><code>class Int(object):
def __init__(self, value):
self._decr = False
self.value = value
def __neg__(self):
if self._decr:
self.value -= 1
self._decr = not self._decr
return self
def __str__(self):
return str(self.value)
def __cmp__(self, n):
return cmp(self.value, n)
def __nonzero__(self):
return self.value
n = Int(10)
while --n:
print n
</code></pre>
| 13 | 2009-07-16T04:06:12Z | 1,135,344 | <p>It would probably be easier to create a class that implements the int methods and wraps an internal integer.</p>
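<p>A minimal sketch of that idea (the names here are illustrative, not from any library): wrap the integer in an object and rebind an attribute, so the wrapper's identity never changes even as the value does.</p>

```python
class MutableInt(object):
    """Illustrative wrapper: the wrapper's identity stays fixed while
    the wrapped integer attribute is rebound on each increment."""

    def __init__(self, value=0):
        self.value = value

    def incr(self, amount=1):
        self.value += amount  # rebinds the attribute, not the wrapper
        return self

    def __int__(self):
        return self.value


n = MutableInt(1)
ident = id(n)
n.incr()
# Unlike a plain int after "n += 1", the object identity is unchanged.
same_object = (id(n) == ident)
```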
| 3 | 2009-07-16T04:10:36Z | [
"python",
"int"
] |
increment int object | 1,135,335 | <p>Is there a way in Python to increment an int object in place? int doesn't seem to implement <code>__iadd__</code>, so <code>+= 1</code> actually returns a new object:</p>
<pre><code>>>> n=1
>>> id(n)
9788024
>>> n+=1
>>> id(n)
9788012
</code></pre>
<p>What I want is for n to remain pointing to the same object.</p>
<p>Purpose: I have a class derived from int and I want to implement a C-style '++n' operator for that class.</p>
<p>Conclusion: OK, as int is immutable there is no way; it looks like I will have to write my own class, something like this:</p>
<pre><code>class Int(object):
def __init__(self, value):
self._decr = False
self.value = value
def __neg__(self):
if self._decr:
self.value -= 1
self._decr = not self._decr
return self
def __str__(self):
return str(self.value)
def __cmp__(self, n):
return cmp(self.value, n)
def __nonzero__(self):
return self.value
n = Int(10)
while --n:
print n
</code></pre>
| 13 | 2009-07-16T04:06:12Z | 1,135,347 | <p>ints are immutable, so you'll have to build your own class with all of int's methods if you want a "mutable int".</p>
| 11 | 2009-07-16T04:10:55Z | [
"python",
"int"
] |
increment int object | 1,135,335 | <p>Is there a way in Python to increment an int object in place? int doesn't seem to implement <code>__iadd__</code>, so <code>+= 1</code> actually returns a new object:</p>
<pre><code>>>> n=1
>>> id(n)
9788024
>>> n+=1
>>> id(n)
9788012
</code></pre>
<p>What I want is for n to remain pointing to the same object.</p>
<p>Purpose: I have a class derived from int and I want to implement a C-style '++n' operator for that class.</p>
<p>Conclusion: OK, as int is immutable there is no way; it looks like I will have to write my own class, something like this:</p>
<pre><code>class Int(object):
def __init__(self, value):
self._decr = False
self.value = value
def __neg__(self):
if self._decr:
self.value -= 1
self._decr = not self._decr
return self
def __str__(self):
return str(self.value)
def __cmp__(self, n):
return cmp(self.value, n)
def __nonzero__(self):
return self.value
n = Int(10)
while --n:
print n
</code></pre>
| 13 | 2009-07-16T04:06:12Z | 1,135,348 | <p>Hah, it seems I misread the OP.</p>
<p>But yes, the short answer is that ints are immutable.</p>
| 0 | 2009-07-16T04:11:45Z | [
"python",
"int"
] |
increment int object | 1,135,335 | <p>Is there a way in Python to increment an int object in place? int doesn't seem to implement <code>__iadd__</code>, so <code>+= 1</code> actually returns a new object:</p>
<pre><code>>>> n=1
>>> id(n)
9788024
>>> n+=1
>>> id(n)
9788012
</code></pre>
<p>What I want is for n to remain pointing to the same object.</p>
<p>Purpose: I have a class derived from int and I want to implement a C-style '++n' operator for that class.</p>
<p>Conclusion: OK, as int is immutable there is no way; it looks like I will have to write my own class, something like this:</p>
<pre><code>class Int(object):
def __init__(self, value):
self._decr = False
self.value = value
def __neg__(self):
if self._decr:
self.value -= 1
self._decr = not self._decr
return self
def __str__(self):
return str(self.value)
def __cmp__(self, n):
return cmp(self.value, n)
def __nonzero__(self):
return self.value
n = Int(10)
while --n:
print n
</code></pre>
| 13 | 2009-07-16T04:06:12Z | 1,140,858 | <p>If you absolutely have to get that code to work, here's a dirty method, where an instance method moves up a frame and overwrites its own locals entry. Wouldn't recommend. (like, really not. I'm not even sure what that does. What happens to the old instance? I don't know enough about frames...). Really, I'm only posting this because everyone said it's impossible, when in reality it's just ridiculously bad form. ;-)</p>
<pre><code>import sys
class FakeInt(int):
def __init__(self, *arg, **kwarg):
self._decr = False
int.__init__(self, *arg, **kwarg)
def __neg__(self):
if self._decr:
upLocals = sys._getframe(1).f_locals
keys, values = zip(*upLocals.items())
i = list(values).index(self)
result = FakeInt(self-1)
upLocals[keys[i]]=result
return result
self._decr = not self._decr
return self
A = FakeInt(10)
while --A:
print A,
</code></pre>
<p>outputs:</p>
<pre><code>9 8 7 6 5 4 3 2 1
</code></pre>
| 3 | 2009-07-16T23:44:29Z | [
"python",
"int"
] |
increment int object | 1,135,335 | <p>Is there a way in Python to increment an int object in place? int doesn't seem to implement <code>__iadd__</code>, so <code>+= 1</code> actually returns a new object:</p>
<pre><code>>>> n=1
>>> id(n)
9788024
>>> n+=1
>>> id(n)
9788012
</code></pre>
<p>What I want is for n to remain pointing to the same object.</p>
<p>Purpose: I have a class derived from int and I want to implement a C-style '++n' operator for that class.</p>
<p>Conclusion: OK, as int is immutable there is no way; it looks like I will have to write my own class, something like this:</p>
<pre><code>class Int(object):
def __init__(self, value):
self._decr = False
self.value = value
def __neg__(self):
if self._decr:
self.value -= 1
self._decr = not self._decr
return self
def __str__(self):
return str(self.value)
def __cmp__(self, n):
return cmp(self.value, n)
def __nonzero__(self):
return self.value
n = Int(10)
while --n:
print n
</code></pre>
| 13 | 2009-07-16T04:06:12Z | 18,700,492 | <p>You can always put an immutable object inside a mutable container; lists are easiest. That will allow multiple references to the container, which can mutate its items.</p>
<p>If you did the following, the int is not mutable:</p>
<pre><code>a = 0
b = a
b = 1
print(a) # prints 0
</code></pre>
<p>Here's the same code, but with the int wrapped in a list. By referencing and assigning to <code>a[0]</code> and <code>b[0]</code>, you get the effect of a mutable int, by having a shared reference to an int that can be 'swapped' for a different one.</p>
<pre><code>a = [0]
b = a
b[0] = 1
print(a[0]) # prints 1
</code></pre>
<p>This obviously works with any type of object, as only the list gets mutated. You could share more than one value, and could use any kind of mutable container.</p>
<p>Ultimately, you must share immutable variables inside mutable containers.</p>
| 1 | 2013-09-09T14:24:02Z | [
"python",
"int"
] |
increment int object | 1,135,335 | <p>Is there a way in Python to increment an int object in place? int doesn't seem to implement <code>__iadd__</code>, so <code>+= 1</code> actually returns a new object:</p>
<pre><code>>>> n=1
>>> id(n)
9788024
>>> n+=1
>>> id(n)
9788012
</code></pre>
<p>What I want is for n to remain pointing to the same object.</p>
<p>Purpose: I have a class derived from int and I want to implement a C-style '++n' operator for that class.</p>
<p>Conclusion: OK, as int is immutable there is no way; it looks like I will have to write my own class, something like this:</p>
<pre><code>class Int(object):
def __init__(self, value):
self._decr = False
self.value = value
def __neg__(self):
if self._decr:
self.value -= 1
self._decr = not self._decr
return self
def __str__(self):
return str(self.value)
def __cmp__(self, n):
return cmp(self.value, n)
def __nonzero__(self):
return self.value
n = Int(10)
while --n:
print n
</code></pre>
| 13 | 2009-07-16T04:06:12Z | 28,757,945 | <p>You can use ctypes as mutable integers. Choosing the right ctype will be important though, as they limit the size of integer they can carry.</p>
<pre><code>>>> from ctypes import c_int64
>>> num = c_int64(0)
>>> id(num)
4447709232
>>> def increment(number):
... number.value += 1
...
>>> increment(num)
>>> increment(num)
>>> increment(num)
>>> num.value
3
>>> id(num)
4447709232
>>>
</code></pre>
<p>More info: <a href="https://docs.python.org/2/library/ctypes.html#fundamental-data-types" rel="nofollow">https://docs.python.org/2/library/ctypes.html#fundamental-data-types</a></p>
| 4 | 2015-02-27T05:09:32Z | [
"python",
"int"
] |
Using python scripts in subversion hooks on windows | 1,135,499 | <p>My main goal is to get <a href="http://stackoverflow.com/questions/84178/how-do-i-implement-the-post-commit-hook-with-trac-svn-in-a-windows-environment">this</a> up and running.</p>
<p>My hook gets called when I do the commit with TortoiseSVN, but it always exits when I get to this line: <code>Python "%~dp0trac-post-commit-hook.py" -p "%TRAC_ENV%" -r "%REV%" || EXIT 5</code></p>
<p>If I try and replace the call to the python script with any simple Python script it still doesn't work so I'm assuming it is a problem with the call to Python and not the script itself.</p>
<p>I have tried setting the PYTHON_PATH variable and also set %PATH% to include Python.</p>
<p>I have trac up and running so Python is working on the server itself.</p>
<p>Here is some background info:</p>
<ul>
<li><p>Python is installed on Windows server and script is called from local machine so </p>
<p><code>IF NOT EXIST %TRAC_ENV% EXIT 3</code></p>
<p>and</p>
<p>SET PYTHON_PATH=X:\Python26
IF NOT EXIST %PYTHON_PATH% EXIT 4</p></li>
</ul>
<p>fail unless I set them to the mapped network drive (that is, point them at the X and Y drives, not the C and E drives)</p>
<ul>
<li>Python scripts can be called anywhere from the command line from the server regardless of the drive so the PATH variable should be set correctly</li>
</ul>
<p>It appears to be an issue with calling Python scripts externally, but I'm not sure how to go about changing the permissions for this.</p>
<p>Thanks in advance.</p>
| 1 | 2009-07-16T05:15:13Z | 1,137,861 | <p>Take the following things into account:</p>
<ul>
<li>network drive mappings and <code>subst</code>
mappings are user specific. Make sure
the drives exist for the user account
under which the svn server is
running.</li>
<li>subversion hook scripts are <a href="http://svnbook.red-bean.com/en/1.5/svn.reposadmin.create.html#svn.reposadmin.create.hooks" rel="nofollow">run
without any environment variables
being set</a> for security reasons, not even <code>%path%</code>. Call
the python executable with an
absolute path, e.g.
<code>c:\python25\python.exe</code>.</li>
</ul>
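<p>One way to see why a hook that works from your shell fails under Subversion is to reproduce the empty environment yourself. The snippet below is only an illustrative check (it is not part of Subversion): it launches a child process with no inherited variables, roughly the way hook scripts are run, which is why only absolute interpreter paths are reliable inside hooks.</p>

```python
import subprocess
import sys

# Launch a child with an empty environment, roughly the way Subversion
# invokes hook scripts: %PATH% and friends are gone, so commands that
# rely on them fail even though they work from an interactive shell.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"],
    env={},  # no inherited variables
    capture_output=True,
    text=True,
)
# The child sees an (almost) empty environment; in particular no PATH.
```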
| 3 | 2009-07-16T14:08:14Z | [
"python",
"windows",
"svn",
"hook",
"svn-hooks"
] |
connecting Python 2.6.1 with MySQLdb | 1,135,625 | <p>Hi all,
I am using Python 2.6.1 and I want to connect to MySQL via MySQLdb. I installed MySQL on my system, and I am trying to use <strong>MySQL-python-1.2.2.win32-py2.6</strong> from the <strong><a href="http://www.codegood.com/archives/4" rel="nofollow">http://www.codegood.com/archives/4</a></strong> site, but it's not working:
while running my application it says <strong>No module named MySQLdb</strong>.</p>
<p>Please, can anyone provide the proper setup for MySQLdb?</p>
<p>thanks in advance</p>
| 4 | 2009-07-16T05:51:27Z | 1,135,722 | <p>The module is likely not in your Python search path.</p>
<p>Check whether that module is in your <em>Python path</em>. On Windows you may find the path in the registry:</p>
<p><code>HKLM\Software\Python\PythonCore\2.6\PythonPath</code></p>
<p>Be careful editing it...</p>
<p>You may also alter the Python path programmatically with the following:</p>
<pre><code>import sys
sys.path.append('somepath_to_the_module_you_wanted')
import the_module_you_wanted
</code></pre>
<p>Hope that helps</p>
| 4 | 2009-07-16T06:19:09Z | [
"python",
"mysql"
] |
connecting Python 2.6.1 with MySQLdb | 1,135,625 | <p>Hi all,
I am using Python 2.6.1 and I want to connect to MySQL via MySQLdb. I installed MySQL on my system, and I am trying to use <strong>MySQL-python-1.2.2.win32-py2.6</strong> from the <strong><a href="http://www.codegood.com/archives/4" rel="nofollow">http://www.codegood.com/archives/4</a></strong> site, but it's not working:
while running my application it says <strong>No module named MySQLdb</strong>.</p>
<p>Please, can anyone provide the proper setup for MySQLdb?</p>
<p>thanks in advance</p>
| 4 | 2009-07-16T05:51:27Z | 1,135,877 | <p>Generally, (good) Python modules provide a 'setup.py' script that takes care of things like proper installation (google for 'distutils python'). MySQLdb is a "good" module in this sense.</p>
<p>Since you're using Windows, things might be a bit more complex. I assume you already installed MySQLdb following the instructions and it still gives this problem. What I would do is open a cmd.exe window, cd to the directory containing the 'setup.py' script, and there type something like
<code>C:\Python26\Python.exe setup.py install</code></p>
<p>if this does not work, then grab the module somewhere else, maybe at the place where it is actively developed: <a href="http://sourceforge.net/projects/mysql-python/" rel="nofollow">http://sourceforge.net/projects/mysql-python/</a></p>
| 0 | 2009-07-16T07:06:19Z | [
"python",
"mysql"
] |
connecting Python 2.6.1 with MySQLdb | 1,135,625 | <p>Hi all,
I am using Python 2.6.1 and I want to connect to MySQL via MySQLdb. I installed MySQL on my system, and I am trying to use <strong>MySQL-python-1.2.2.win32-py2.6</strong> from the <strong><a href="http://www.codegood.com/archives/4" rel="nofollow">http://www.codegood.com/archives/4</a></strong> site, but it's not working:
while running my application it says <strong>No module named MySQLdb</strong>.</p>
<p>Please, can anyone provide the proper setup for MySQLdb?</p>
<p>thanks in advance</p>
| 4 | 2009-07-16T05:51:27Z | 1,137,144 | <p>See this post on the mysql-python blog: <a href="http://mysql-python.blogspot.com/2009/03/mysql-python-123-beta-2-released.html" rel="nofollow">MySQL-python-1.2.3 beta 2 released</a> - dated March 2009. Looks like MySQLdb for Python 2.6 is still a work in progress...</p>
| 0 | 2009-07-16T12:15:20Z | [
"python",
"mysql"
] |
connecting Python 2.6.1 with MySQLdb | 1,135,625 | <p>Hi all,
I am using Python 2.6.1 and I want to connect to MySQL via MySQLdb. I installed MySQL on my system, and I am trying to use <strong>MySQL-python-1.2.2.win32-py2.6</strong> from the <strong><a href="http://www.codegood.com/archives/4" rel="nofollow">http://www.codegood.com/archives/4</a></strong> site, but it's not working:
while running my application it says <strong>No module named MySQLdb</strong>.</p>
<p>Please, can anyone provide the proper setup for MySQLdb?</p>
<p>thanks in advance</p>
| 4 | 2009-07-16T05:51:27Z | 1,473,805 | <p>The best setup for Windows that I've found:</p>
<p><a href="http://www.codegood.com/downloads?dl_cat=2" rel="nofollow">http://www.codegood.com/downloads?dl_cat=2</a></p>
<p>EDIT: Removed original link (it's an ad farm now :( )</p>
| 13 | 2009-09-24T19:52:05Z | [
"python",
"mysql"
] |
connecting Python 2.6.1 with MySQLdb | 1,135,625 | <p>Hi all,
I am using Python 2.6.1 and I want to connect to MySQL via MySQLdb. I installed MySQL on my system, and I am trying to use <strong>MySQL-python-1.2.2.win32-py2.6</strong> from the <strong><a href="http://www.codegood.com/archives/4" rel="nofollow">http://www.codegood.com/archives/4</a></strong> site, but it's not working:
while running my application it says <strong>No module named MySQLdb</strong>.</p>
<p>Please, can anyone provide the proper setup for MySQLdb?</p>
<p>thanks in advance</p>
| 4 | 2009-07-16T05:51:27Z | 2,372,127 | <p>I was having this problem and then I realised I was importing MySQLdb erroneously - it's case sensitive: </p>
<p>Incorrect: <code>&gt;&gt;&gt; import mysqldb</code></p>
<p>Correct: <code>&gt;&gt;&gt; import MySQLdb</code></p>
<p>Silly mistake, but cost me a few hours!</p>
| 2 | 2010-03-03T14:40:11Z | [
"python",
"mysql"
] |
connecting Python 2.6.1 with MySQLdb | 1,135,625 | <p>Hi all,
I am using Python 2.6.1 and I want to connect to MySQL via MySQLdb. I installed MySQL on my system, and I am trying to use <strong>MySQL-python-1.2.2.win32-py2.6</strong> from the <strong><a href="http://www.codegood.com/archives/4" rel="nofollow">http://www.codegood.com/archives/4</a></strong> site, but it's not working:
while running my application it says <strong>No module named MySQLdb</strong>.</p>
<p>Please, can anyone provide the proper setup for MySQLdb?</p>
<p>thanks in advance</p>
| 4 | 2009-07-16T05:51:27Z | 5,099,785 | <p>I went for a compiled binary; that's the best way to go on Windows. There is a good source maintained by someone.
I wrote about it here before, because some months down the line I will forget how I solved this and be searching Stack Overflow again :/
<a href="http://vangel.3ezy.com/archives/101-Python-2.4-2.5-2.6-and-2.7-Windows-MySQLdb-python-installation.html" rel="nofollow">http://vangel.3ezy.com/archives/101-Python-2.4-2.5-2.6-and-2.7-Windows-MySQLdb-python-installation.html</a></p>
| 0 | 2011-02-24T02:42:48Z | [
"python",
"mysql"
] |
How do I receive SNMP traps on OS X? | 1,135,981 | <p>I need to receive and parse some SNMP traps (messages) and I would appreciate any advice on getting the code I have working on my OS X machine. I have been given some Java code that runs on Windows with net-snmp. I'd like to either get the Java code running on my development machine or whip up some Python code to do the same.</p>
<p>I was able to get the Java code to compile on my OS X machine and it runs without any complaints, including none of the exceptions I would expect to be thrown if it was unable to bind to socket 8255. However, it never reports receiving any SNMP traps, which makes me wonder whether it's really able to read on the socket. Here's what I gather to be the code from the Java program that binds to the socket:</p>
<pre><code>DatagramChannel dgChannel1=DatagramChannel.open();
Selector mux=Selector.open();
dgChannel1.socket().bind(new InetSocketAddress(8255));
dgChannel1.configureBlocking(false);
dgChannel1.register(mux,SelectionKey.OP_READ);
while(mux.select()>0) {
Iterator keyIt = mux.selectedKeys().iterator();
while (keyIt.hasNext()) {
SelectionKey key = (SelectionKey) keyIt.next();
if (key.isReadable()) {
/* processing */
}
}
}
</code></pre>
<p>Since I don't know Java and like to mess around with Python, I installed <a href="http://www.seafelt.com/products/libsnmp" rel="nofollow">libsnmp</a> via <code>easy_install</code> and tried to get that working. The sample programs <code>traplistener.py</code> and <code>trapsender.py</code> have no problem talking to each other but if I run <code>traplistener.py</code> waiting for my own SNMP signals I again fail to receive anything. I should note that I had to run the python programs via <code>sudo</code> in order to have permission to access the sockets. Running the java program via sudo had no effect.</p>
<p>All this makes me suspect that both programs are having problem with OS X and its sockets, perhaps their permissions. For instance, I had to change the permissions on the <code>/dev/bpf</code> devices for Wireshark to work. Another thought is that it has something to do with my machine having multiple network adapters enabled, including eth0 (ethernet, where I see the trap messages thanks to Wireshark) and eth1 (wifi). Could this be the problem?</p>
<p>As you can see, I know very little about sockets or SNMP, so any help is much appreciated!</p>
<p><strong>Update:</strong> Using <code>lsof</code> (<code>sudo lsof -i -n -P</code> to be exact) it appears that my problem is that the Java program is only listening on IPv6 when the trap sender is using IPv4. I've tried disabling IPv6 (<code>sudo ip6 -x</code>) and telling Java to use IPv4 (<code>java -jar bridge.jar -Djava.net.preferIPv4Stack=true</code>) but I keep finding my program using IPv6. Any thoughts?</p>
<pre><code>java 16444 peter 34u IPv6 0x12f3ad98 0t0 UDP *:8255
</code></pre>
<p><strong>Update 2:</strong> Ok, I guess I had the java parameter order wrong: <code>java -Djava.net.preferIPv4Stack=true -jar bridge.jar</code> puts the program on IPv4. However, my program still shows no signs of receiving the packets that I know are there.</p>
| 0 | 2009-07-16T07:36:59Z | 1,136,576 | <p>The standard port number for SNMP traps is 162. </p>
<p>Is there a reason you're specifying a different port number ? You can normally change the port number that traps are sent on/received on, but obviously both ends have to agree. So I'm wondering if this is your problem.</p>
| 0 | 2009-07-16T09:57:36Z | [
"java",
"python",
"osx",
"sockets",
"snmp"
] |
How do I receive SNMP traps on OS X? | 1,135,981 | <p>I need to receive and parse some SNMP traps (messages) and I would appreciate any advice on getting the code I have working on my OS X machine. I have been given some Java code that runs on Windows with net-snmp. I'd like to either get the Java code running on my development machine or whip up some Python code to do the same.</p>
<p>I was able to get the Java code to compile on my OS X machine and it runs without any complaints, including none of the exceptions I would expect to be thrown if it was unable to bind to socket 8255. However, it never reports receiving any SNMP traps, which makes me wonder whether it's really able to read on the socket. Here's what I gather to be the code from the Java program that binds to the socket:</p>
<pre><code>DatagramChannel dgChannel1=DatagramChannel.open();
Selector mux=Selector.open();
dgChannel1.socket().bind(new InetSocketAddress(8255));
dgChannel1.configureBlocking(false);
dgChannel1.register(mux,SelectionKey.OP_READ);
while(mux.select()>0) {
Iterator keyIt = mux.selectedKeys().iterator();
while (keyIt.hasNext()) {
SelectionKey key = (SelectionKey) keyIt.next();
if (key.isReadable()) {
/* processing */
}
}
}
</code></pre>
<p>Since I don't know Java and like to mess around with Python, I installed <a href="http://www.seafelt.com/products/libsnmp" rel="nofollow">libsnmp</a> via <code>easy_install</code> and tried to get that working. The sample programs <code>traplistener.py</code> and <code>trapsender.py</code> have no problem talking to each other but if I run <code>traplistener.py</code> waiting for my own SNMP signals I again fail to receive anything. I should note that I had to run the python programs via <code>sudo</code> in order to have permission to access the sockets. Running the java program via sudo had no effect.</p>
<p>All this makes me suspect that both programs are having problem with OS X and its sockets, perhaps their permissions. For instance, I had to change the permissions on the <code>/dev/bpf</code> devices for Wireshark to work. Another thought is that it has something to do with my machine having multiple network adapters enabled, including eth0 (ethernet, where I see the trap messages thanks to Wireshark) and eth1 (wifi). Could this be the problem?</p>
<p>As you can see, I know very little about sockets or SNMP, so any help is much appreciated!</p>
<p><strong>Update:</strong> Using <code>lsof</code> (<code>sudo lsof -i -n -P</code> to be exact) it appears that my problem is that the Java program is only listening on IPv6 when the trap sender is using IPv4. I've tried disabling IPv6 (<code>sudo ip6 -x</code>) and telling Java to use IPv4 (<code>java -jar bridge.jar -Djava.net.preferIPv4Stack=true</code>) but I keep finding my program using IPv6. Any thoughts?</p>
<pre><code>java 16444 peter 34u IPv6 0x12f3ad98 0t0 UDP *:8255
</code></pre>
<p><strong>Update 2:</strong> Ok, I guess I had the java parameter order wrong: <code>java -Djava.net.preferIPv4Stack=true -jar bridge.jar</code> puts the program on IPv4. However, my program still shows no signs of receiving the packets that I know are there.</p>
| 0 | 2009-07-16T07:36:59Z | 1,139,504 | <p>Ok, the solution to get my code working was to run the program as <code>java -Djava.net.preferIPv4Stack=true -jar bridge.jar</code> and to power cycle the SNMP trap sender. Thanks for your help, Brian.</p>
| 0 | 2009-07-16T18:41:53Z | [
"java",
"python",
"osx",
"sockets",
"snmp"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
| 49 | 2009-07-16T08:08:30Z | 1,136,141 | <p>Have a look at <a href="http://www.djangosnippets.org/snippets/446/" rel="nofollow">this</a>. It's meant for use out-of-the-box with MySQL only, but there are pointers on what to do for other databases.</p>
| 3 | 2009-07-16T08:18:00Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
| 49 | 2009-07-16T08:08:30Z | 1,136,160 | <p>You might be better off bulk-loading the items - prepare a file and use a bulk load tool. This will be vastly more efficient than 8000 individual inserts.</p>
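<p>For SQLite specifically, the same effect is available without an external tool: batch all the rows into one <code>executemany</code> inside a single transaction. A rough standard-library sketch (the table and column names are made up for illustration, mirroring the question's <code>Entry(a1, a2)</code> model):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the Django database
conn.execute("CREATE TABLE entry (a1 TEXT, a2 TEXT)")

rows = [("a%d" % i, "b%d" % i) for i in range(8000)]

# One statement, one transaction: the "with" block commits once at the
# end instead of once per row.
with conn:
    conn.executemany("INSERT INTO entry (a1, a2) VALUES (?, ?)", rows)

inserted = conn.execute("SELECT COUNT(*) FROM entry").fetchone()[0]
```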
| 3 | 2009-07-16T08:24:24Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
| 49 | 2009-07-16T08:08:30Z | 1,136,248 | <p>You want to check out <code>django.db.transaction.commit_manually</code>.</p>
<p><a href="http://docs.djangoproject.com/en/dev/topics/db/transactions/#django-db-transaction-commit-manually">http://docs.djangoproject.com/en/dev/topics/db/transactions/#django-db-transaction-commit-manually</a></p>
<p>So it would be something like:</p>
<pre><code>from django.db import transaction
@transaction.commit_manually
def viewfunc(request):
...
for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
transaction.commit()
</code></pre>
<p>This will commit only once, instead of at each save().</p>
<p>In Django 1.3, context managers were introduced.
So now you can use <strong><a href="https://docs.djangoproject.com/en/1.3/topics/db/transactions/#controlling-transaction-management-in-views">transaction.commit_on_success()</a></strong> in a similar way:</p>
<pre><code>from django.db import transaction
def viewfunc(request):
...
with transaction.commit_on_success():
for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>In Django 1.4, <a href="https://docs.djangoproject.com/en/dev/ref/models/querysets/#django.db.models.query.QuerySet.bulk_create"><code>bulk_create</code></a> was added, allowing you to create lists of your model objects and then commit them all at once.</p>
<p><strong>Note:</strong> the model's <code>save()</code> method will not be called when using <code>bulk_create</code>.</p>
<pre><code>>>> Entry.objects.bulk_create([
... Entry(headline="Django 1.0 Released"),
... Entry(headline="Django 1.1 Announced"),
... Entry(headline="Breaking: Django is awesome")
... ])
</code></pre>
<p>In Django 1.6, <strong><a href="https://docs.djangoproject.com/en/1.6/topics/db/transactions/#django.db.transaction.atomic">transaction.atomic</a></strong> was introduced, intended to replace the now-legacy functions <code>commit_on_success</code> and <code>commit_manually</code>.</p>
<p>From the Django <a href="https://docs.djangoproject.com/en/1.6/topics/db/transactions/#django.db.transaction.atomic">documentation on atomic</a>:</p>
<p>atomic is usable both as a decorator:</p>
<pre><code>from django.db import transaction
@transaction.atomic
def viewfunc(request):
# This code executes inside a transaction.
do_stuff()
</code></pre>
<p>and as a context manager:</p>
<pre><code>from django.db import transaction
def viewfunc(request):
# This code executes in autocommit mode (Django's default).
do_stuff()
with transaction.atomic():
# This code executes inside a transaction.
do_more_stuff()
</code></pre>
| 84 | 2009-07-16T08:44:56Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
 | 49 | 2009-07-16T08:08:30Z | 1,139,355 | <p>I recommend using plain SQL (not the ORM); you can insert multiple rows with a single insert:</p>
<pre><code>insert into A select * from B;
</code></pre>
<p>The <b>select * from B</b> portion of your SQL can be as complicated as you want, as long as the results match the columns in table A and there are no constraint conflicts.</p>
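<p>A minimal demonstration of the <code>INSERT ... SELECT</code> pattern, here using Python's <code>sqlite3</code> module directly (the table names <code>a</code> and <code>b</code> are made up; in Django you would run the same SQL through <code>connection.cursor()</code>):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (a1 TEXT, a2 TEXT)")
conn.execute("CREATE TABLE a (a1 TEXT, a2 TEXT)")
conn.executemany("INSERT INTO b VALUES (?, ?)", [("1", "2"), ("3", "4")])

# One statement copies every matching row from b into a,
# in a single round trip instead of one INSERT per row.
conn.execute("INSERT INTO a SELECT a1, a2 FROM b")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM a").fetchone()[0])  # prints 2
```

<p>This only helps when the source rows already live in a table the database can see; for data arriving from outside (like the parsed XML in the question) you still need a bulk insert of some kind.</p>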
| 1 | 2009-07-16T18:18:49Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
 | 49 | 2009-07-16T08:08:30Z | 3,878,219 | <p>I've run into the same problem and I can't figure out a way to do it without so many inserts.
I agree that using transactions is probably the <em>right</em> way to solve it, but here is my hack:</p>
<pre><code>def viewfunc(request):
    ...
    to_save = []
    for item in items:
        entry = Entry(a1=item.a1, a2=item.a2)
        to_save.append(entry)
    map(lambda x: x.save(), to_save)
</code></pre>
| -2 | 2010-10-07T02:13:13Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
 | 49 | 2009-07-16T08:08:30Z | 4,903,947 | <p>You should check out <a href="http://pypi.python.org/pypi/dse/0.4.0" rel="nofollow">DSE</a>. I wrote DSE to solve these kinds of problems (massive inserts or updates). Using the Django ORM is a dead end; you have to do it in plain SQL, and DSE takes care of much of that for you.</p>
<p>Thomas</p>
| 2 | 2011-02-04T23:30:55Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
| 49 | 2009-07-16T08:08:30Z | 7,971,313 | <p>Bulk creation is available in Django 1.4:</p>
<p><a href="https://django.readthedocs.io/en/1.4/ref/models/querysets.html#bulk-create" rel="nofollow">https://django.readthedocs.io/en/1.4/ref/models/querysets.html#bulk-create</a></p>
| 11 | 2011-11-01T18:45:49Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
What is an efficient way of inserting thousands of records into an SQLite table using Django? | 1,136,106 | <p>I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.<br /> At the moment I'm using a for loop to iterate through all the items and then insert them one by one.<br />
Example:</p>
<pre><code>for item in items:
entry = Entry(a1=item.a1, a2=item.a2)
entry.save()
</code></pre>
<p>What is an efficient way of doing this?<br />
<br />
<br />
<b>Edit:</b> A little comparison between the two insertion methods.<br /></p>
<p>Without commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 1m50.288s
user 0m6.710s
sys 0m23.445s
</code></pre>
<p>Using commit_manually decorator (11245 records):</p>
<pre><code>[nox@noxdevel marinetraffic]$ time python manage.py insrec
real 0m18.464s
user 0m5.433s
sys 0m10.163s
</code></pre>
<p><strong>Note:</strong> The <em>test</em> script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.</p>
 | 49 | 2009-07-16T08:08:30Z | 20,840,338 | <p>To answer the question specifically with regard to SQLite: while I have just confirmed that bulk_create does provide a tremendous speedup, there is a limitation with SQLite: "The default is to create all objects in one batch, except for SQLite where the default is such that at maximum 999 variables per query is used."</p>
<p>The quoted text is from the docs; A-IV provided a link.</p>
<p>What I have to add is that <a href="https://djangosnippets.org/snippets/2724/" rel="nofollow">this djangosnippets</a> entry by alpar also seems to be working for me. It's a little wrapper that breaks the big batch that you want to process into smaller batches, managing the 999 variables limit.</p>
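<p>The idea behind that wrapper can be sketched as a small chunking helper (a sketch only; the model usage in the comment assumes the question's <code>Entry</code> model, and newer Django versions also accept a <code>batch_size</code> argument to <code>bulk_create</code> directly):</p>

```python
def chunked(seq, size):
    """Yield successive slices of seq, each no longer than size items."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Hypothetical usage, keeping each INSERT well under SQLite's
# variable limit (e.g. 100 rows x a handful of columns < 999):
# for batch in chunked(entries, 100):
#     Entry.objects.bulk_create(batch)
```

<p>The batch size to pick depends on how many columns each row binds: total bound variables per query is roughly rows times columns, and that product must stay under the 999 limit.</p>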
| 1 | 2013-12-30T13:15:45Z | [
"python",
"sql",
"django",
"sqlite",
"insert"
] |
Allowing user to configure cron | 1,136,168 | <p>I have this bash script on the server that runs every hour, via cron. I was perfectly happy, but now the user wants to be able to configure the frequency through the web interface.</p>
<p>I don't feel comfortable manipulating the cron configuration programmatically, but I'm not sure if the other options are any better.</p>
<p>The way I see it, I can either:</p>
<ul>
<li>Schedule a script to run once a minute and check if it should really run "now"</li>
<li>Forgo cron altogether and use a daemon that is its own scheduler. This probably means rewriting the script in Python</li>
<li>...or suck it up and manipulate the cron configuration from the web interface (written in python BTW)</li>
</ul>
<p>What should I do?</p>
<p>EDIT: to clarify, the main reason I'm apprehensive about manipulating cron is because it's basically text manipulation with no validation, and if I mess it up, none of my other cron jobs will run.</p>
<p><strong>Here's what I ended up doing:</strong></p>
<p>Taking stefanw's advice, I added the following line at the top of my bash script:</p>
<pre><code>if [ "$(cat /home/username/settings/run.freq)" != "$1" ]; then
exit 0
fi
</code></pre>
<p>I set up the following cron jobs:</p>
<pre><code>0 */2 * * * /home/username/scripts/publish.sh 2_hours
@hourly /home/username/scripts/publish.sh 1_hour
*/30 * * * * /home/username/scripts/publish.sh 30_minutes
*/10 * * * * /home/username/scripts/publish.sh 10_minutes
</code></pre>
<p>From the web interface, I let the user choose between those four options, and based on what the user chose, I write the string <code>2_hours/1_hour/30_minutes/10_minutes</code> into the file at <code>/home/username/settings/run.freq</code>.</p>
<p>I don't love it, but it seems like the best alternative.</p>
 | 3 | 2009-07-16T08:25:31Z | 1,136,222 | <p>Give your users some reasonable choices like every minute, every 5 minutes, every half an hour, ... and translate these values to a cron job string. This is user-friendly and prevents users from tampering directly with the cron configuration.</p>
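<p>A minimal sketch of that translation, with a fixed whitelist of choices so free-form text never reaches the crontab (the choice names, schedule set, and script path are made up for illustration):</p>

```python
# Hypothetical mapping from user-facing choices to crontab schedule fields.
SCHEDULES = {
    "every_minute": "* * * * *",
    "every_5_minutes": "*/5 * * * *",
    "every_30_minutes": "*/30 * * * *",
    "hourly": "@hourly",
}

def cron_line(choice, command="/home/username/scripts/publish.sh"):
    """Build a crontab line for a whitelisted choice; reject anything else."""
    if choice not in SCHEDULES:
        raise ValueError("unknown schedule: %r" % (choice,))
    return "%s %s" % (SCHEDULES[choice], command)
```

<p>Because only dictionary keys are accepted, a malformed or malicious value from the web form can never corrupt the rest of the crontab; it simply raises an error.</p>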
| 10 | 2009-07-16T08:37:38Z | [
"python",
"bash",
"cron"
] |