python constructing functions on the fly
1,593,572
<p>I have several little functions <code>f1</code>, <code>f2</code>, <code>f3</code> and a function <code>f</code>.</p> <p>I want <code>f</code> to be a "container" for <code>f1</code>, <code>f2</code>, <code>f3</code>: to perform some of the operations <code>f1</code>, <code>f2</code>, <code>f3</code>, depending on the program configuration (for example <code>f1</code> and <code>f2</code>, or <code>f1</code> and <code>f3</code>, or all three) and nothing more.</p> <p>I see two simple solutions: first, add some <code>if</code>s to the function <code>f</code>:</p> <pre><code>if configuration_f1_f2:
    f1()
    f2()
</code></pre> <p>Second, I can keep a list of operations in <code>f</code>:</p> <pre><code>for op in operations:
    op()
</code></pre> <p>and add <code>f1</code>, <code>f2</code>, <code>f3</code> to <code>operations</code> or remove them, depending on the configuration.</p> <p>But can I somehow construct the code of <code>f</code> dynamically, adding to it exactly the calls of <code>f1</code>, <code>f2</code> and <code>f3</code> that I need, without any <code>if</code>s, <code>list</code>s or <code>for</code>s? I mean something like on-the-fly code manipulation. So if my configuration is "<code>f1</code> and <code>f3</code>", I set the code of <code>f</code> so that it is</p> <pre><code>f1()
f3()
</code></pre> <p>and when my configuration changes to "<code>f2</code> and <code>f3</code>" I modify the code of <code>f</code> to</p> <pre><code>f2()
f3()
</code></pre> <p>Can I manipulate the code of the function that way?</p>
5
2009-10-20T09:59:00Z
1,593,664
<p>There can be various ways to achieve what you want, but to choose one you need to fully define the problem and its context.</p> <p>One way could be like the following, and most of the others will be variants of it, using a name lookup (here via <code>globals()</code>) to find the functions we need:</p> <pre><code>def f1(): print "f1"
def f2(): print "f2"
def f3(): print "f3"

def f(fList):
    for name in fList:
        globals()[name]()

f(["f1", "f2"])
f("f1 f3 f1 f3 f2 f1 f3 f2".split())
</code></pre>
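The same dispatch can be made explicit with a dict instead of `globals()`, which avoids calling arbitrary global names. A minimal sketch in modern Python 3 syntax (the function names and return values are illustrative):

```python
# Map configuration names to callables rather than looking them up in globals().
def f1(): return "f1"
def f2(): return "f2"
def f3(): return "f3"

OPERATIONS = {"f1": f1, "f2": f2, "f3": f3}

def f(names):
    # Call each configured operation by name and collect the results.
    return [OPERATIONS[name]() for name in names]

print(f(["f1", "f3"]))  # → ['f1', 'f3']
```

Changing the configuration is then just a matter of passing a different list of names; no code in `f` ever changes.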
1
2009-10-20T10:16:41Z
[ "python" ]
1,593,698
<p>If you're brave, you can construct a function definition as a <em>string</em> and pass it to the <a href="http://docs.python.org/reference/simple%5Fstmts.html#the-exec-statement" rel="nofollow"><code>exec</code></a> statement. For example:</p> <pre><code>func = "def f():\n"
if config_f1:
    func += "    f1()\n"
if config_f2:
    func += "    f2()\n"
exec func in globals()
</code></pre> <p>At this point, you should have a new global <code>f()</code> that executes the appropriate bits of code.</p>
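For reference, the same trick works in Python 3, where `exec` is a function rather than a statement. A sketch (the config flags and helper functions are illustrative):

```python
config_f1, config_f2 = True, False
calls = []

def f1(): calls.append("f1")
def f2(): calls.append("f2")

# Build the source of f from the configuration, then define it with exec().
src = "def f():\n"
if config_f1:
    src += "    f1()\n"
if config_f2:
    src += "    f2()\n"
src += "    return None\n"  # keep the body non-empty even if nothing is enabled
exec(src, globals())

f()
print(calls)  # → ['f1']
```

The usual caveat applies: `exec` on constructed strings is hard to debug and easy to abuse, so the list-of-callables approach is generally preferable.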
3
2009-10-20T10:24:06Z
[ "python" ]
1,593,702
<p>Use objects and the Command design pattern.</p> <pre><code>class Function( object ):
    pass

class F1( Function ):
    def __call__( self ):
        pass # whatever `f1` used to do

class F2( Function ):
    def __call__( self ):
        pass # whatever `f2` used to do

class F3( Function ):
    def __call__( self ):
        pass # whatever `f3` used to do

class Sequence( Function ):
    def __init__( self, *someList ):
        self.sequence = someList
    def __call__( self ):
        for f in self.sequence:
            f()

f = Sequence( F1(), F2(), F3() )
f()
</code></pre> <p>That's how it's done. No "constructing a function on the fly".</p>
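A runnable version of this sketch, with placeholder actions that return their names so the composite's behaviour is visible (the concrete bodies are illustrative):

```python
class Function:
    def __call__(self):
        raise NotImplementedError

class F1(Function):
    def __call__(self):
        return "f1"

class F2(Function):
    def __call__(self):
        return "f2"

class Sequence(Function):
    """A composite command: calling it calls each child command in order."""
    def __init__(self, *commands):
        self.sequence = commands
    def __call__(self):
        return [cmd() for cmd in self.sequence]

f = Sequence(F1(), F2())
print(f())  # → ['f1', 'f2']
```

When the configuration changes, you build a new `Sequence` from the commands you want; the commands themselves never change.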
1
2009-10-20T10:24:55Z
[ "python" ]
String replacing in a file by given position
1,593,576
<p>I have a file opened in 'ab+' mode.</p> <p>What I need to do is replace some bytes in the file with another string's bytes, such that:</p> <p>FILE:</p> <pre><code>thisissomethingasperfectlygood.
</code></pre> <p>string:</p> <pre><code>01234
</code></pre> <p>So, for example, I seek to position (4, 0) and I want to write 01234 in place of "issom" in the file. The result would be:</p> <p><code>this01234ethingasperfectlygood</code>.</p> <p>There are some solutions on the net, but all of them (at least those I could find) are based on "first find a string in the file, then replace it with another one". Because my case is based on seeking, I am confused about the solution.</p>
2
2009-10-20T10:00:07Z
1,593,646
<p>You could mmap() your file and then use slice notation to update specific byte ranges in the file. The example <a href="http://docs.python.org/library/mmap.html" rel="nofollow">here</a> should help.</p>
2
2009-10-20T10:13:51Z
[ "python", "string", "replace", "seek" ]
1,593,658
<p>You can use mmap for that:</p> <pre><code>import os, mmap

f = os.open("afile", os.O_RDWR)
m = mmap.mmap(f, 0)
m[4:9] = "01234"
m.close()  # flush the changes back to the file
os.close(f)
</code></pre>
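Since the question is seek-based, plain `seek()` and `write()` on a file opened in `'r+b'` mode also overwrite bytes in place, without mmap. A sketch using a temporary file (paths and content follow the question's example):

```python
import os
import tempfile

# Create a scratch file with the sample content from the question.
path = os.path.join(tempfile.mkdtemp(), "afile")
with open(path, "wb") as fh:
    fh.write(b"thisissomethingasperfectlygood.")

# Seek to offset 4 and overwrite five bytes in place.
with open(path, "r+b") as fh:
    fh.seek(4)
    fh.write(b"01234")

with open(path, "rb") as fh:
    print(fh.read())  # → b'this01234ethingasperfectlygood.'
```

Note that `'r+b'` (read/update) is the mode you want here; in append modes like `'ab+'`, writes may be forced to the end of the file regardless of the seek position.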
2
2009-10-20T10:15:52Z
[ "python", "string", "replace", "seek" ]
getpos() coding
1,593,599
<p>I just wanted to know how to write a <code>getpos()</code> command, which must return an (x, y) tuple of the current position.</p> <p>Does it start like this?</p> <pre><code>def getpos(x 100, y 100)
</code></pre> <p>I'm not sure and need help.</p>
0
2009-10-20T10:06:01Z
1,593,637
<p>This is a bit underspecified, but this might work:</p> <pre><code>def getpos(self):
    return (self.x, self.y)
</code></pre> <p>This is how to return a tuple, from values assumed to be instance variables.</p>
2
2009-10-20T10:12:37Z
[ "python" ]
1,593,648
<p>In Python you can't force a return type on a function from its header. The return type can change from one call to another. When you write:</p> <pre><code>def getpos(x, y):
</code></pre> <p>it means that the function receives 2 parameters, which inside the function are called x &amp; y. No type is forced on them either. Just write the function so it returns the tuple, e.g.:</p> <pre><code>def getpos():
    x = 100
    y = 100
    return (x, y)
</code></pre> <p>The syntax that you used (<code>def getpos(x 100, y 100)</code>) does not have any meaning that I know of.</p>
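Put into context, `getpos()` would usually live on a class that tracks the position, with the 100s as default starting values. A sketch (the class name is illustrative):

```python
class Turtle:
    def __init__(self, x=100, y=100):
        # Default starting position; no types are enforced on x and y.
        self.x = x
        self.y = y

    def getpos(self):
        # Return the current position as an (x, y) tuple.
        return (self.x, self.y)

t = Turtle()
print(t.getpos())  # → (100, 100)
```

Default parameter values (`x=100`) are probably what the `def getpos(x 100, y 100)` attempt was reaching for; the `=` is the missing piece, and the defaults belong on the constructor rather than on the getter.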
1
2009-10-20T10:13:53Z
[ "python" ]
Python: Get name of instantiating class?
1,593,632
<p>Example:</p> <pre><code>class Class1:
    def __init__(self):
        self.x = Class2('Woo!')

class Class2:
    def __init__(self, word):
        print word

meow = Class1()
</code></pre> <p>How do I derive the class name that created the <code>self.x</code> instance? In other words, if I was given the instance <code>self.x</code>, how do I get the name 'Class1'? Using <code>self.x.__class__.__name__</code> will obviously only give you the Class2 name. Is this even possible? Thanks.</p>
7
2009-10-20T10:11:42Z
1,593,652
<p>Set an attribute on the instance in question in your <code>__init__()</code> method that you then retrieve later on.</p> <p>You'll get better answers if you ask better questions. This one is pretty unclear.</p>
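A minimal sketch of that idea, assuming you control both classes (the attribute name `created_by` is illustrative):

```python
class Class2:
    def __init__(self, word):
        self.word = word

class Class1:
    def __init__(self):
        self.x = Class2("Woo!")
        # Record the creator's class name on the new instance for later retrieval.
        self.x.created_by = self.__class__.__name__

meow = Class1()
print(meow.x.created_by)  # → Class1
```

Storing just the name (rather than a reference to the creator object) keeps things simple, at the cost of not being able to reach the creator itself later.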
1
2009-10-20T10:14:43Z
[ "python", "class", "instance" ]
1,593,673
<p>You can't, unless you pass an instance of the 'creator' to the <code>Class2()</code> constructor, e.g.:</p> <pre><code>class Class1(object):
    def __init__(self, *args, **kw):
        self.x = Class2("Woo!", self)

class Class2(object):
    def __init__(self, word, creator, *args, **kw):
        self._creator = creator
        print word
</code></pre> <p>This creates an inverse link between the classes for you.</p>
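With that inverse link in place, the creator's class name can be recovered from the instance. A runnable sketch following the answer's code (shown in modern Python 3 syntax, with the `print` dropped):

```python
class Class2:
    def __init__(self, word, creator):
        self.word = word
        self._creator = creator  # back-reference to the object that created us

class Class1:
    def __init__(self):
        self.x = Class2("Woo!", self)

meow = Class1()
# Follow the back-reference, then read the creator's class name.
print(meow.x._creator.__class__.__name__)  # → Class1
```

Be aware that the back-reference creates a reference cycle between the two objects; Python's cycle collector handles it, but it can delay finalization.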
5
2009-10-20T10:18:23Z
[ "python", "class", "instance" ]
1,593,704
<p>Your question is very similar to one answered <a href="http://stackoverflow.com/questions/1497683/can-python-determine-the-class-of-a-object-accessing-a-method">here</a>. Note that you can determine who created the instance in its constructor, but not afterwards. Anyway, the best way is to pass the creator into the constructor explicitly.</p>
0
2009-10-20T10:25:37Z
[ "python", "class", "instance" ]
forward and back command
1,593,818
<p>I wanted to know how to write forward and back commands in a superclass. I'm not sure, but I gave it a try; I don't know if it's right or wrong. Some help, please:</p> <pre><code>def forward(self):
    return (self.100)

def back(self):
    return (self.50)
</code></pre>
-1
2009-10-20T10:56:21Z
1,593,838
<pre><code>def forward(self):
    self.position += self.distance
    return self.position

def back(self):
    self.position -= self.distance
    return self.position
</code></pre> <p>EDIT: I assumed you are doing something like the progress bar of an install app, where some operations advance progress (copying files), and if the user cancels the install, other operations roll it back (deleting files). Try to ask a proper question, as other users have commented.</p>
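For those methods to work, the instance needs `position` and `distance` attributes, so they would sit in a class like the following sketch (class name and initial values are illustrative):

```python
class Walker:
    def __init__(self, position=0, distance=50):
        self.position = position  # current position
        self.distance = distance  # step size for forward()/back()

    def forward(self):
        self.position += self.distance
        return self.position

    def back(self):
        self.position -= self.distance
        return self.position

w = Walker()
print(w.forward())  # → 50
print(w.forward())  # → 100
print(w.back())     # → 50
```

Subclasses inherit `forward()` and `back()` unchanged as long as they set up (or inherit) the two attributes in their own `__init__`.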
1
2009-10-20T11:01:39Z
[ "python" ]
Problem with Twisted and threads
1,593,948
<p>Some of you more experienced with Twisted will probably judge me for using it together with threads - but I did it :). And now I am in some trouble - I have an application server that listens for client requests, and each time a new client connects it spawns another thread that I probably forget to close properly, since after a while of heavy usage the server stops processing requests. Well, I have 3 different types of threads, and for one of them this happens - the thing is that I am not sure what the proper way to do it is, since <code>Thread.join()</code> seems not to work, and <code>cat /proc/&lt;pid&gt;/status</code> always gives me <code>Threads: 43</code> when the server has stopped working.</p> <p>So I am looking for a way of debugging this, to see how I can properly close the threads.</p> <p>And yeah, I know about this question:</p> <p><a href="http://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python">http://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python</a></p> <p>and probably many others.</p>
0
2009-10-20T11:28:55Z
1,595,929
<p>The "Twisted way" to do anything outside the reactor loop (i.e. spawning threads) is <code>twisted.internet.threads.deferToThread</code>.</p> <p>For example:</p> <pre><code>from twisted.internet import threads

def sthToDoInSeparateThread():
    return 3

d = threads.deferToThread(sthToDoInSeparateThread)
</code></pre> <p><code>deferToThread</code> will execute <code>sthToDoInSeparateThread</code> in a separate thread and fire the returned deferred <code>d</code> as soon as the thread has finished.</p>
4
2009-10-20T16:46:59Z
[ "python", "multithreading", "twisted" ]
2,317,726
<p>You probably just want to do <code>mythread.setDaemon(True)</code> so that your threads exit when the main process exits.</p>
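For reference, in the standard library's threading module the same effect is achieved by marking the thread as a daemon; in current Python, `daemon=True` in the `Thread` constructor replaces the older `setDaemon(True)` call. A minimal sketch:

```python
import threading
import time

def worker():
    # A long-running loop that would otherwise keep the process alive forever.
    while True:
        time.sleep(0.1)

# Daemon threads are terminated automatically when the main thread exits,
# so a forgotten join() no longer hangs the process on shutdown.
t = threading.Thread(target=worker, daemon=True)
t.start()
print(t.daemon)  # → True
```

The trade-off is that daemon threads are killed abruptly at interpreter exit, so they should not hold resources (open files, locks) that need orderly cleanup.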
0
2010-02-23T11:37:33Z
[ "python", "multithreading", "twisted" ]
Writing an XML file using python
1,594,052
<p>I have to write an XML file using Python standard modules (not using ElementTree, lxml, etc.). The metadata is SAML identity provider metadata and is of the form:</p> <pre><code>&lt;?xml version="1.0"?&gt;
&lt;EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
    entityID="http://wsa.saas.com"&gt;
  &lt;IDPSSODescriptor&gt;
    &lt;KeyDescriptor use="signing"&gt;
      &lt;ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt;
        &lt;ds:X509Data&gt;&lt;ds:X509Certificate&gt;-----BEGIN CERTIFICATE-----
MIIDnjCCAoagAwIBAgIBATANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJGUjEP
MA0GA1UECBMGRnJhbmNlMQ4wDAYDVQQHEwVQYXJpczETMBEGA1UEChMKRW50cm91
dmVydDEPMA0GA1UEAxMGRGFtaWVuMB4XDTA2MTAyNzA5MDc1NFoXDTExMTAyNjA5
MDc1NFowVDELMAkGA1UEBhMCRlIxDzANBgNVBAgTBkZyYW5jZTEOMAwGA1UEBxMF
UGFyaXMxEzARBgNVBAoTCkVudHJvdXZlcnQxDzANBgNVBAMTBkRhbWllbjCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM06Hx6VgHYR9wUf/tZVVTRkVWNq
h9x+PvHA2qH4OYMuqGs4Af6lU2YsZvnrmRdcFWv0+UkdAgXhReCWAZgtB1pd/W9m
6qDRldCCyysow6xPPKRz/pOTwRXm/fM0QGPeXzwzj34BXOIOuFu+n764vKn18d+u
uVAEzk1576pxTp4pQPzJfdNLrLeQ8vyCshoFU+MYJtp1UA+h2JoO0Y8oGvywbUxH
ioHN5PvnzObfAM4XaDQohmfxM9Uc7Wp4xKAc1nUq5hwBrHpjFMRSz6UCfMoJSGIi
+3xJMkNCjL0XEw5NKVc5jRKkzSkN5j8KTM/k1jPPsDHPRYzbWWhnNtd6JlkCAwEA
AaN7MHkwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0
ZWQgQ2VydGlmaWNhdGUwHQYDVR0OBBYEFP2WWMDShux3iF74+SoO1xf6qhqaMB8G
A1UdIwQYMBaAFGjl6TRXbQDHzSlZu+e8VeBaZMB5MA0GCSqGSIb3DQEBBQUAA4IB
AQAZ/imK7UMognXbs5RfSB8cMW6iNAI+JZqe9XWjvtmLfIIPbHM96o953SiFvrvQ
BZjGmmPMK3UH29cjzDx1R/RQaYTyMrHyTePLh3BMd5mpJ/9eeJCSxPzE2ECqWRUa
pkjukecFXqmRItwgTxSIUE9QkpzvuQRb268PwmgroE0mwtiREADnvTFkLkdiEMew
fiYxZfJJLPBqwlkw/7f1SyzXoPXnz5QbNwDmrHelga6rKSprYKb3pueqaIe8j/AP
NC1/bzp8cGOcJ88BD5+Ny6qgPVCrMLE5twQumJ12V3SvjGNtzFBvg2c/9S5OmVqR
LlTxKnCrWAXftSm1rNtewTsF
-----END CERTIFICATE-----
&lt;/ds:X509Certificate&gt;&lt;/ds:X509Data&gt;
      &lt;/ds:KeyInfo&gt;
    &lt;/KeyDescriptor&gt;
    &lt;SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="http://idp5/singleSignOn" /&gt;
  &lt;/IDPSSODescriptor&gt;
&lt;/EntityDescriptor&gt;
</code></pre> <p>My code at present does this:</p> <pre><code>&gt;&gt;&gt; from xml.dom.minidom import Document
&gt;&gt;&gt; doc = Document()
&gt;&gt;&gt; entity_descriptor = doc.createElement("EntityDescriptor")
&gt;&gt;&gt; doc.appendChild(entity_descriptor)
&gt;&gt;&gt; entity_descriptor.setAttribute('xmlns', 'urn:oasis:names:tc:SAML:2.0:metadata')
&gt;&gt;&gt; entity_descriptor.setAttribute('xmlns:saml', 'urn:oasis:names:tc:SAML:2.0:assertion')
&gt;&gt;&gt; entity_descriptor.setAttribute('xmlns:ds', 'hxxp://xxx.w3.org/2000/09/xmldsig#')
&gt;&gt;&gt; entity_descriptor.setAttribute('entityID', 'hxxp://wsa.saas.com')
&gt;&gt;&gt; idpssodescr = doc.createElement('IDPSSODescriptor')
&gt;&gt;&gt; entity_descriptor.appendChild(idpssodescr)
&gt;&gt;&gt; keydescr = doc.createElement('KeyDescriptor')
&gt;&gt;&gt; keydescr.setAttribute('use', 'signing')
&gt;&gt;&gt; idpssodescr.appendChild(keydescr)
&gt;&gt;&gt; keyinfo = doc.createElement('ds:KeyInfo')
&gt;&gt;&gt; keyinfo.setAttribute('xmlns:ds', 'http://xxx.w3.org/2000/09/xmldsig#')
&gt;&gt;&gt; keydescr.appendChild(keyinfo)
&gt;&gt;&gt; x509data = doc.createElement('ds:X509Data')
&gt;&gt;&gt; keyinfo.appendChild(x509data)
&gt;&gt;&gt; x509cert = doc.createElement('ds:X509Certificate')
&gt;&gt;&gt; ptext = doc.createTextNode("This is a test!")
&gt;&gt;&gt; x509cert.appendChild(ptext)
&gt;&gt;&gt; x509data.appendChild(x509cert)
&gt;&gt;&gt; sso = doc.createElement('SingleSignOnService')
&gt;&gt;&gt; sso.setAttribute('Binding', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect')
&gt;&gt;&gt; sso.setAttribute('Location', 'hxxx://idp5/singleSignOn')
&gt;&gt;&gt; idpssodescr.appendChild(sso)
&gt;&gt;&gt; print doc.toprettyxml(indent=" ")
&lt;?xml version="1.0" ?&gt;
&lt;EntityDescriptor entityID="http://wsa.saas.com"
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"&gt;
 &lt;IDPSSODescriptor&gt;
  &lt;KeyDescriptor use="signing"&gt;
   &lt;ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt;
    &lt;ds:X509Data&gt;
     &lt;ds:X509Certificate&gt;
      This is a test!
     &lt;/ds:X509Certificate&gt;
    &lt;/ds:X509Data&gt;
   &lt;/ds:KeyInfo&gt;
  &lt;/KeyDescriptor&gt;
  &lt;SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://idp5/singleSignOn"/&gt;
 &lt;/IDPSSODescriptor&gt;
&lt;/EntityDescriptor&gt;
</code></pre> <p>I am a little confused about the certificate part. The original is <code>&lt;ds:X509Certificate&gt;-----BEGIN CERTIFICATE---.....&lt;/ds:X509Certificate&gt;</code> and mine is:</p> <pre><code>&lt;ds:X509Certificate&gt;
 This is the cert
&lt;/ds:X509Certificate&gt;
</code></pre> <p>Sorry, I don't have much experience with XML. Is the code proper for the task intended?</p> <p>Thanks.</p>
1
2009-10-20T11:48:46Z
1,594,142
<p>Well? What is your question? In these two lines:</p> <pre><code>&gt;&gt;&gt; ptext = doc.createTextNode("This is a test!")
&gt;&gt;&gt; x509cert.appendChild(ptext)
</code></pre> <p>you are, indeed, creating a text node as a child of a node <code>&lt;ds:X509Certificate&gt;</code>. The contents of the text node are "This is a test!". If you want to have something else inserted as the text content, you must call .createTextNode with an appropriate argument.</p> <p>EDIT (taking into account the OP's comment): No, it is not <em>text content</em>. In XML everything is a node. What you think is "text between the tags" is, actually, a <em>text node</em>.</p>
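The point about text nodes can be seen in a minimal sketch: whatever string you hand to `createTextNode` becomes the element's text (the PEM string below is a placeholder, not a real certificate):

```python
from xml.dom.minidom import Document

doc = Document()
cert = doc.createElement("ds:X509Certificate")
doc.appendChild(cert)

# The "text between the tags" is itself a node in the DOM tree.
pem = "-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----"
cert.appendChild(doc.createTextNode(pem))

print(doc.documentElement.firstChild.data == pem)  # → True
```

So to emit the real certificate, read the PEM file into a string and pass that string to `createTextNode` instead of the test text.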
2
2009-10-20T12:06:54Z
[ "python", "xml", "saml" ]
1,605,897
<p>Just for the archives: my earlier code was missing a small change; that is why the certificate was being rejected.</p> <pre><code>from xml.dom.minidom import Document

doc = Document()
entity_descriptor = doc.createElement("EntityDescriptor")
doc.appendChild(entity_descriptor)
entity_descriptor.setAttribute('xmlns', 'urn:oasis:names:tc:SAML:2.0:metadata')
entity_descriptor.setAttribute('xmlns:saml', 'urn:oasis:names:tc:SAML:2.0:assertion')
entity_descriptor.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#')
entity_descriptor.setAttribute('entityID', 'http://wsa.saas.com')

idpssodescr = doc.createElement('IDPSSODescriptor')
idpssodescr.setAttribute('WantAuthnRequestsSigned', 'true')
idpssodescr.setAttribute('protocolSupportEnumeration', 'urn:oasis:names:tc:SAML:2.0:protocol')
entity_descriptor.appendChild(idpssodescr)

keydescr = doc.createElement('KeyDescriptor')
keydescr.setAttribute('use', 'signing')
idpssodescr.appendChild(keydescr)

keyinfo = doc.createElement('ds:KeyInfo')
keyinfo.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#')
keydescr.appendChild(keyinfo)

x509data = doc.createElement('ds:X509Data')
keyinfo.appendChild(x509data)
x509cert = doc.createElement('ds:X509Certificate')

# Read the certificate from some file.
fp = file('idp.crt.pem', 'r')
s = ''
for i in fp.readlines():
    s += ''.join(i)
ptext = doc.createTextNode(s)
x509cert.appendChild(ptext)
x509data.appendChild(x509cert)

sso = doc.createElement('SingleSignOnService')
sso.setAttribute('Binding', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect')
sso.setAttribute('Location', 'http://idp5/singleSignOn')
idpssodescr.appendChild(sso)

print doc.toprettyxml(indent=" ")
&lt;?xml version="1.0" ?&gt;
&lt;EntityDescriptor entityID="http://wsa.saas.com"
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"&gt;
 &lt;IDPSSODescriptor WantAuthnRequestsSigned="true"
     protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"&gt;
  &lt;KeyDescriptor use="signing"&gt;
   &lt;ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt;
    &lt;ds:X509Data&gt;
     &lt;ds:X509Certificate&gt;
-----BEGIN CERTIFICATE-----
MIICfTCCAeagAwIBAgIJAPEn4h3J3p2dMA0GCSqGSIb3DQEBBQUAMDQxCzAJBgNV
BAYTAmFhMQswCQYDVQQKEwJhYTELMAkGA1UECxMCYWExCzAJBgNVBAMTAmFhMB4X
DTA5MTAxOTA4NTI1M1oXDTEwMTAxOTA4NTI1M1owNDELMAkGA1UEBhMCYWExCzAJ
BgNVBAoTAmFhMQswCQYDVQQLEwJhYTELMAkGA1UEAxMCYWEwgZ8wDQYJKoZIhvcN
AQEBBQADgY0AMIGJAoGBAMerInhZF/l0O0jmiD8M1lSSpHjFcT0peiwqWq+LZ8Ay
b6mcpnHdFVmHQaGtUt+6i+0NqKDppxnaVW4vOdYD64OlmSVrG+WzkYMAmE/0EzJN
A5pEA5ZK1w6MGo+IQLjrPDmm/qV6XrkARR2THjA2xKE8/L7s+VEJj/d+/CC8V7vP
AgMBAAGjgZYwgZMwHQYDVR0OBBYEFOHoipN0T0TNs1IwFkmTwLDtsV0gMGQGA1Ud
IwRdMFuAFOHoipN0T0TNs1IwFkmTwLDtsV0goTikNjA0MQswCQYDVQQGEwJhYTEL
MAkGA1UEChMCYWExCzAJBgNVBAsTAmFhMQswCQYDVQQDEwJhYYIJAPEn4h3J3p2d
MAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEANTQgpYm+OBZTTYbLkyBH
MQ9QygwgNWOQJ9hEbT0xpiL8xHXBTQdHJkMXD/PWzs1AyZShXsUwcKBaKgxyIsQj
a36poKPyfAYbfsg8xLyijMVXbsW7OlKN9FjapaZTnEvHfsMO8ITAad4a7RVWAYQ8
ucT7nO9OPFjOv8dwGsF5RVM=
-----END CERTIFICATE-----
     &lt;/ds:X509Certificate&gt;
    &lt;/ds:X509Data&gt;
   &lt;/ds:KeyInfo&gt;
  &lt;/KeyDescriptor&gt;
  &lt;SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://idp5/singleSignOn"/&gt;
 &lt;/IDPSSODescriptor&gt;
&lt;/EntityDescriptor&gt;

x = open('metadata.xml', 'w')
doc.writexml(x, " ", "", "\n", "UTF-8")
x.close()
</code></pre>
0
2009-10-22T08:57:25Z
[ "python", "xml", "saml" ]
Writing an XML file using python
1,594,052
<p>I have to write an xml file using python standard modules (not using elementtree, lxml etc) The metadata is a SAML identity provider metadata and is of the form - </p> <pre><code>&lt;?xml version="1.0"?&gt; &lt;EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" entityID="http://wsa.saas.com"&gt; &lt;IDPSSODescriptor&gt; &lt;KeyDescriptor use="signing"&gt; &lt;ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt; &lt;ds:X509Data&gt;&lt;ds:X509Certificate&gt;-----BEGIN CERTIFICATE----- MIIDnjCCAoagAwIBAgIBATANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJGUjEP MA0GA1UECBMGRnJhbmNlMQ4wDAYDVQQHEwVQYXJpczETMBEGA1UEChMKRW50cm91 dmVydDEPMA0GA1UEAxMGRGFtaWVuMB4XDTA2MTAyNzA5MDc1NFoXDTExMTAyNjA5 MDc1NFowVDELMAkGA1UEBhMCRlIxDzANBgNVBAgTBkZyYW5jZTEOMAwGA1UEBxMF UGFyaXMxEzARBgNVBAoTCkVudHJvdXZlcnQxDzANBgNVBAMTBkRhbWllbjCCASIw DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM06Hx6VgHYR9wUf/tZVVTRkVWNq h9x+PvHA2qH4OYMuqGs4Af6lU2YsZvnrmRdcFWv0+UkdAgXhReCWAZgtB1pd/W9m 6qDRldCCyysow6xPPKRz/pOTwRXm/fM0QGPeXzwzj34BXOIOuFu+n764vKn18d+u uVAEzk1576pxTp4pQPzJfdNLrLeQ8vyCshoFU+MYJtp1UA+h2JoO0Y8oGvywbUxH ioHN5PvnzObfAM4XaDQohmfxM9Uc7Wp4xKAc1nUq5hwBrHpjFMRSz6UCfMoJSGIi +3xJMkNCjL0XEw5NKVc5jRKkzSkN5j8KTM/k1jPPsDHPRYzbWWhnNtd6JlkCAwEA AaN7MHkwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0 ZWQgQ2VydGlmaWNhdGUwHQYDVR0OBBYEFP2WWMDShux3iF74+SoO1xf6qhqaMB8G A1UdIwQYMBaAFGjl6TRXbQDHzSlZu+e8VeBaZMB5MA0GCSqGSIb3DQEBBQUAA4IB AQAZ/imK7UMognXbs5RfSB8cMW6iNAI+JZqe9XWjvtmLfIIPbHM96o953SiFvrvQ BZjGmmPMK3UH29cjzDx1R/RQaYTyMrHyTePLh3BMd5mpJ/9eeJCSxPzE2ECqWRUa pkjukecFXqmRItwgTxSIUE9QkpzvuQRb268PwmgroE0mwtiREADnvTFkLkdiEMew fiYxZfJJLPBqwlkw/7f1SyzXoPXnz5QbNwDmrHelga6rKSprYKb3pueqaIe8j/AP NC1/bzp8cGOcJ88BD5+Ny6qgPVCrMLE5twQumJ12V3SvjGNtzFBvg2c/9S5OmVqR LlTxKnCrWAXftSm1rNtewTsF -----END CERTIFICATE----- &lt;/ds:X509Certificate&gt;&lt;/ds:X509Data&gt; &lt;/ds:KeyInfo&gt; &lt;/KeyDescriptor&gt; 
&lt;SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://idp5/singleSignOn" /&gt; &lt;/IDPSSODescriptor&gt; &lt;/EntityDescriptor&gt; </code></pre> <p>My code at present does this - </p> <pre><code>&gt;&gt;&gt; from xml.dom.minidom import Document &gt;&gt;&gt; doc = Document() &gt;&gt;&gt; entity_descriptor = doc.createElement("EntityDescriptor") &gt;&gt;&gt; doc.appendChild(entity_descriptor) &gt;&gt;&gt; entity_descriptor.setAttribute('xmlns', 'urn:oasis:names:tc:SAML:2.0:metadata') &gt;&gt;&gt; entity_descriptor.setAttribute('xmlns:saml', 'urn:oasis:names:tc:SAML:2.0:assertion') &gt;&gt;&gt; entity_descriptor.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#') &gt;&gt;&gt; entity_descriptor.setAttribute('entityID', 'http://wsa.saas.com') &gt;&gt;&gt; idpssodescr = doc.createElement('IDPSSODescriptor') &gt;&gt;&gt; entity_descriptor.appendChild(idpssodescr) &gt;&gt;&gt; keydescr = doc.createElement('KeyDescriptor') &gt;&gt;&gt; keydescr.setAttribute('use', 'signing') &gt;&gt;&gt; idpssodescr.appendChild(keydescr) &gt;&gt;&gt; keyinfo = doc.createElement('ds:KeyInfo') &gt;&gt;&gt; keyinfo.setAttribute('xmlns:ds', 'http://www.w3.org/2000/09/xmldsig#') &gt;&gt;&gt; keydescr.appendChild(keyinfo) &gt;&gt;&gt; x509data = doc.createElement('ds:X509Data') &gt;&gt;&gt; keyinfo.appendChild(x509data) &gt;&gt;&gt; x509cert = doc.createElement('ds:X509Certificate') &gt;&gt;&gt; ptext = doc.createTextNode("This is a test!") &gt;&gt;&gt; x509cert.appendChild(ptext) &gt;&gt;&gt; x509data.appendChild(x509cert) &gt;&gt;&gt; sso = doc.createElement('SingleSignOnService') &gt;&gt;&gt; sso.setAttribute('Binding', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect') &gt;&gt;&gt; sso.setAttribute('Location', 'http://idp5/singleSignOn') &gt;&gt;&gt; idpssodescr.appendChild(sso) &gt;&gt;&gt; print doc.toprettyxml(indent=" ") &lt;?xml version="1.0" ?&gt; &lt;EntityDescriptor entityID="http://wsa.saas.com" 
xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"&gt; &lt;IDPSSODescriptor&gt; &lt;KeyDescriptor use="signing"&gt; &lt;ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt; &lt;ds:X509Data&gt; &lt;ds:X509Certificate&gt; This is a test! &lt;/ds:X509Certificate&gt; &lt;/ds:X509Data&gt; &lt;/ds:KeyInfo&gt; &lt;/KeyDescriptor&gt; &lt;SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://idp5/singleSignOn"/&gt; &lt;/IDPSSODescriptor&gt; &lt;/EntityDescriptor&gt; </code></pre> <p>I am little confused with the certificate part. Original is <code>&lt;ds:X509Certificate&gt;-----BEGIN CERTIFICATE---.....&lt;/ds:X509Certificate&gt;</code> and mine is:</p> <pre><code> &lt;ds:X509Certificate&gt; This is the cert &lt;/ds:X509Certificate&gt; </code></pre> <p>Sorry, I don't have much experience with XML. Is the code proper for the task intended.</p> <p>Thanks. </p>
1
2009-10-20T11:48:46Z
30,003,069
<p>Yattag is could be interesting for this</p> <pre><code>from yattag import Doc, indent doc, tag, text = Doc().tagtext() doc.asis('&lt;?xml version="1.0"?&gt;') with tag('EntityDescriptor', ("xmlns:saml", "urn:oasis:names:tc:SAML:2.0:assertion"), ("xmlns:ds", "http://www.w3.org/2000/09/xmldsig#"), entityID = "http://wsa.saas.com", xmlns = "urn:oasis:names:tc:SAML:2.0:metadata"): with tag('IDPSSODescriptor'): with tag('KeyDescriptor', use='signing'): with tag('ds:KeyInfo', ("xmlns:ds", "http://www.w3.org/2000/09/xmldsig#")): with tag('ds:X509Data'): with tag('ds:X509Certificate'): text( """-----BEGIN CERTIFICATE----- MIIDnjCCAoagAwIBAgIBATANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJGUjEP MA0GA1UECBMGRnJhbmNlMQ4wDAYDVQQHEwVQYXJpczETMBEGA1UEChMKRW50cm91 dmVydDEPMA0GA1UEAxMGRGFtaWVuMB4XDTA2MTAyNzA5MDc1NFoXDTExMTAyNjA5 MDc1NFowVDELMAkGA1UEBhMCRlIxDzANBgNVBAgTBkZyYW5jZTEOMAwGA1UEBxMF UGFyaXMxEzARBgNVBAoTCkVudHJvdXZlcnQxDzANBgNVBAMTBkRhbWllbjCCASIw DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM06Hx6VgHYR9wUf/tZVVTRkVWNq h9x+PvHA2qH4OYMuqGs4Af6lU2YsZvnrmRdcFWv0+UkdAgXhReCWAZgtB1pd/W9m 6qDRldCCyysow6xPPKRz/pOTwRXm/fM0QGPeXzwzj34BXOIOuFu+n764vKn18d+u uVAEzk1576pxTp4pQPzJfdNLrLeQ8vyCshoFU+MYJtp1UA+h2JoO0Y8oGvywbUxH ioHN5PvnzObfAM4XaDQohmfxM9Uc7Wp4xKAc1nUq5hwBrHpjFMRSz6UCfMoJSGIi +3xJMkNCjL0XEw5NKVc5jRKkzSkN5j8KTM/k1jPPsDHPRYzbWWhnNtd6JlkCAwEA AaN7MHkwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0 ZWQgQ2VydGlmaWNhdGUwHQYDVR0OBBYEFP2WWMDShux3iF74+SoO1xf6qhqaMB8G A1UdIwQYMBaAFGjl6TRXbQDHzSlZu+e8VeBaZMB5MA0GCSqGSIb3DQEBBQUAA4IB AQAZ/imK7UMognXbs5RfSB8cMW6iNAI+JZqe9XWjvtmLfIIPbHM96o953SiFvrvQ BZjGmmPMK3UH29cjzDx1R/RQaYTyMrHyTePLh3BMd5mpJ/9eeJCSxPzE2ECqWRUa pkjukecFXqmRItwgTxSIUE9QkpzvuQRb268PwmgroE0mwtiREADnvTFkLkdiEMew fiYxZfJJLPBqwlkw/7f1SyzXoPXnz5QbNwDmrHelga6rKSprYKb3pueqaIe8j/AP NC1/bzp8cGOcJ88BD5+Ny6qgPVCrMLE5twQumJ12V3SvjGNtzFBvg2c/9S5OmVqR LlTxKnCrWAXftSm1rNtewTsF -----END CERTIFICATE----- """) with tag('SingleSignOnService', 
Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect", Location="http://idp5/singleSignOn" ): pass result = indent( doc.getvalue(), indentation = ' '*4, newline = '\r\n' ) print(result) </code></pre> <p>You get:</p> <pre><code>&lt;?xml version="1.0"?&gt; &lt;EntityDescriptor xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" entityID="http://wsa.saas.com" xmlns="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt; &lt;IDPSSODescriptor&gt; &lt;KeyDescriptor use="signing"&gt; &lt;ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#"&gt; &lt;ds:X509Data&gt; &lt;ds:X509Certificate&gt;-----BEGIN CERTIFICATE----- MIIDnjCCAoagAwIBAgIBATANBgkqhkiG9w0BAQUFADBUMQswCQYDVQQGEwJGUjEP MA0GA1UECBMGRnJhbmNlMQ4wDAYDVQQHEwVQYXJpczETMBEGA1UEChMKRW50cm91 dmVydDEPMA0GA1UEAxMGRGFtaWVuMB4XDTA2MTAyNzA5MDc1NFoXDTExMTAyNjA5 MDc1NFowVDELMAkGA1UEBhMCRlIxDzANBgNVBAgTBkZyYW5jZTEOMAwGA1UEBxMF UGFyaXMxEzARBgNVBAoTCkVudHJvdXZlcnQxDzANBgNVBAMTBkRhbWllbjCCASIw DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAM06Hx6VgHYR9wUf/tZVVTRkVWNq h9x+PvHA2qH4OYMuqGs4Af6lU2YsZvnrmRdcFWv0+UkdAgXhReCWAZgtB1pd/W9m 6qDRldCCyysow6xPPKRz/pOTwRXm/fM0QGPeXzwzj34BXOIOuFu+n764vKn18d+u uVAEzk1576pxTp4pQPzJfdNLrLeQ8vyCshoFU+MYJtp1UA+h2JoO0Y8oGvywbUxH ioHN5PvnzObfAM4XaDQohmfxM9Uc7Wp4xKAc1nUq5hwBrHpjFMRSz6UCfMoJSGIi +3xJMkNCjL0XEw5NKVc5jRKkzSkN5j8KTM/k1jPPsDHPRYzbWWhnNtd6JlkCAwEA AaN7MHkwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0 ZWQgQ2VydGlmaWNhdGUwHQYDVR0OBBYEFP2WWMDShux3iF74+SoO1xf6qhqaMB8G A1UdIwQYMBaAFGjl6TRXbQDHzSlZu+e8VeBaZMB5MA0GCSqGSIb3DQEBBQUAA4IB AQAZ/imK7UMognXbs5RfSB8cMW6iNAI+JZqe9XWjvtmLfIIPbHM96o953SiFvrvQ BZjGmmPMK3UH29cjzDx1R/RQaYTyMrHyTePLh3BMd5mpJ/9eeJCSxPzE2ECqWRUa pkjukecFXqmRItwgTxSIUE9QkpzvuQRb268PwmgroE0mwtiREADnvTFkLkdiEMew fiYxZfJJLPBqwlkw/7f1SyzXoPXnz5QbNwDmrHelga6rKSprYKb3pueqaIe8j/AP NC1/bzp8cGOcJ88BD5+Ny6qgPVCrMLE5twQumJ12V3SvjGNtzFBvg2c/9S5OmVqR LlTxKnCrWAXftSm1rNtewTsF -----END CERTIFICATE----- &lt;/ds:X509Certificate&gt; &lt;/ds:X509Data&gt; 
&lt;/ds:KeyInfo&gt; &lt;/KeyDescriptor&gt; &lt;/IDPSSODescriptor&gt; &lt;SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://idp5/singleSignOn"&gt;&lt;/SingleSignOnService&gt; &lt;/EntityDescriptor&gt; </code></pre>
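<p>Since the question restricts itself to the standard library, the certificate part can also stay in plain <code>minidom</code>. The only trap is <code>toprettyxml</code>, which inserts indentation around text nodes (that is exactly the extra whitespace visible around "This is a test!"); <code>toxml</code> leaves the PEM block untouched. A minimal sketch (the shortened <code>CERT</code> value is a placeholder for the real PEM data):</p>

```python
from xml.dom.minidom import Document

# Placeholder -- paste the real PEM block here.
CERT = "-----BEGIN CERTIFICATE-----\nMIIDnjCCAoag...\n-----END CERTIFICATE-----"

doc = Document()
root = doc.createElement("EntityDescriptor")
root.setAttribute("xmlns", "urn:oasis:names:tc:SAML:2.0:metadata")
doc.appendChild(root)

cert = doc.createElement("ds:X509Certificate")
cert.appendChild(doc.createTextNode(CERT))  # the PEM goes in as one text node
root.appendChild(cert)

xml = doc.toxml()  # toprettyxml() would wrap the text node in indentation
```

<p><code>xml</code> then contains <code>&lt;ds:X509Certificate&gt;-----BEGIN CERTIFICATE-----...</code> with no added whitespace, matching the original metadata.</p>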
0
2015-05-02T13:50:08Z
[ "python", "xml", "saml" ]
angles commands in superclass
1,594,090
<p>How do you write a command that turns left or right by an angle in a superclass? Is it like this:</p> <pre><code>def left(self): self.position += self.angle return (self.position) </code></pre> <p>Is it the same as the forward and back commands?</p>
0
2009-10-20T11:58:01Z
1,594,175
<p>It looks like you are interested in something like the Logo turtle. Look at <a href="http://docs.python.org/library/turtle.html" rel="nofollow">http://docs.python.org/library/turtle.html</a></p> <p>If so, the left function doesn't change the position of the turtle but its orientation.</p> <pre><code>def left(self, angle): self.angle -= angle*2*math.pi/360 </code></pre>
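<p>Put differently, the superclass keeps two pieces of state (position and orientation), and only the move commands touch the position. A minimal sketch; class and method names are illustrative, and the sign of the turn depends on whether your y axis grows up or down:</p>

```python
import math

class Turtle:
    """Minimal sketch: orientation in radians, position as (x, y)."""
    def __init__(self):
        self.x = self.y = 0.0
        self.angle = 0.0                      # 0 = facing along +x

    def left(self, degrees):
        self.angle += math.radians(degrees)   # turning changes orientation only

    def right(self, degrees):
        self.angle -= math.radians(degrees)

    def forward(self, dist):                  # moving uses the current orientation
        self.x += dist * math.cos(self.angle)
        self.y += dist * math.sin(self.angle)
```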
1
2009-10-20T12:13:53Z
[ "python" ]
Preserving extent from the old image
1,594,223
<p>I am using <a href="http://en.wikipedia.org/wiki/Python_Imaging_Library" rel="nofollow">PIL</a> 1.1.6, Python 2.5 on the Windows platform.</p> <p>In my program, I am performing a point operation (changing the pixel values) and then saving the new image.</p> <p>When I am loading the new and old image, they are not in the same extent. How to impose the extent of old image to the new image?</p> <p>My code is:</p> <pre><code>img = Image.open("D:/BTC/dada_72.tif") out = Image.eval(img, lambda x: x * 5) out.save("D:/BTC/dada_72_Com.tif") </code></pre>
0
2009-10-20T12:21:14Z
1,597,146
<p>Assuming by "extent" you mean "size" (pixels wide by pixels high), then there are several options depending on what you have as a "new" image.</p> <p>If "new" is an existing image (and you want to stretch/shrink/grow the new):</p> <pre><code>from PIL import Image &gt;&gt;&gt; im1 = Image.open('img1.jpg') &gt;&gt;&gt; im2 = Image.open('img2.jpg').resize(im1.size) </code></pre> <p>If you want to crop or pad "new" that's a bit more complex...</p> <p>If "new" is a new blank image:</p> <pre><code>&gt;&gt;&gt; im1 = Image.open('img1.jpg') &gt;&gt;&gt; im2 = Image.new(im1.mode, im1.size) </code></pre>
0
2009-10-20T20:37:12Z
[ "python", "image", "python-imaging-library" ]
Making Django admin display the Primary Key rather than each object's Object type
1,594,436
<p>In Django 1.1 admin, when I go to add or change an object, my objects are displayed as:</p> <pre><code>Select host to change * Add host Host object Host object Host object Host object Host object </code></pre> <p>This happens for all models in my site, not just Hosts.</p> <p>Rather than display the same name for each object, I would like Django to display the primary key. </p> <pre><code>Select host to change * Add host machine1 machine2 </code></pre> <p>Here is my code:</p> <pre><code>from django.db import models # Create your models here. class Host(models.Model): host = models.CharField(max_length=100,primary_key=True) class Admin: list_display = ('host') class Test(models.Model): testname = models.CharField(max_length=100,primary_key=True) class Admin: list_display = ('testname') class Result(models.Model): host = models.ForeignKey(Host) TESTRESULT_CHOICES = ( ('P', 'Pass'), ('F', 'Fail'), ) testresult = models.CharField(max_length=1, choices=TESTRESULT_CHOICES) reason = models.CharField(max_length=100) time = models.DateTimeField() testname = models.OneToOneField(Test, primary_key=True) class Admin: list_display = ('host','testname','time','testresult','reason') </code></pre> <p>Reading <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/">http://docs.djangoproject.com/en/dev/ref/contrib/admin/</a>:</p> <p>"ModelAdmin.list_display</p> <p>Set list_display to control which fields are displayed on the change list page of the admin."</p> <p>However this simply does not seem to work. Am I doing something wrong?</p>
11
2009-10-20T12:58:26Z
1,594,453
<p>Add a <a href="http://docs.djangoproject.com/en/dev/ref/models/instances/#django.db.models.Model.__unicode__"><code>__unicode__()</code></a> method to <code>Host</code>. To show the primary key of your host objects, you'd want something like:</p> <pre><code>class Host(models.Model): host = models.CharField(max_length=100, primary_key=True) def __unicode__(self): return self.pk ... </code></pre> <p>You might want to think about showing the contents of the <code>host</code> field:</p> <pre><code>class Host(models.Model): host = models.CharField(max_length=100, primary_key=True) def __unicode__(self): return self.host ... </code></pre> <p>You'll need to do something similar for every model you've got.</p> <p>For Python 3 compatibility, you'll want to do something like this (see <a href="https://docs.djangoproject.com/en/dev/topics/python3/#str-and-unicode-methods">the documentation</a>):</p> <pre><code>from __future__ import unicode_literals from django.utils.encoding import python_2_unicode_compatible @python_2_unicode_compatible class Host(models.Model): host = models.CharField(max_length=100, primary_key=True) def __str__(self): return self.host ... </code></pre>
26
2009-10-20T13:00:35Z
[ "python", "django", "admin" ]
Making Django admin display the Primary Key rather than each object's Object type
1,594,436
<p>In Django 1.1 admin, when I go to add or change an object, my objects are displayed as:</p> <pre><code>Select host to change * Add host Host object Host object Host object Host object Host object </code></pre> <p>This happens for all models in my site, not just Hosts.</p> <p>Rather than display the same name for each object, I would like Django to display the primary key. </p> <pre><code>Select host to change * Add host machine1 machine2 </code></pre> <p>Here is my code:</p> <pre><code>from django.db import models # Create your models here. class Host(models.Model): host = models.CharField(max_length=100,primary_key=True) class Admin: list_display = ('host') class Test(models.Model): testname = models.CharField(max_length=100,primary_key=True) class Admin: list_display = ('testname') class Result(models.Model): host = models.ForeignKey(Host) TESTRESULT_CHOICES = ( ('P', 'Pass'), ('F', 'Fail'), ) testresult = models.CharField(max_length=1, choices=TESTRESULT_CHOICES) reason = models.CharField(max_length=100) time = models.DateTimeField() testname = models.OneToOneField(Test, primary_key=True) class Admin: list_display = ('host','testname','time','testresult','reason') </code></pre> <p>Reading <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/">http://docs.djangoproject.com/en/dev/ref/contrib/admin/</a>:</p> <p>"ModelAdmin.list_display</p> <p>Set list_display to control which fields are displayed on the change list page of the admin."</p> <p>However this simply does not seem to work. Am I doing something wrong?</p>
11
2009-10-20T12:58:26Z
1,594,468
<p><code>contrib.admin</code> has been reworked in 1.0, and old <code>Admin</code> classes inside models no longer work. What you need is <code>ModelAdmin</code> subclass in <code>your_application.admin</code> module, e.g.</p> <pre><code>from your_application.models import Host from django.contrib import admin class HostAdmin(admin.ModelAdmin): list_display = ('host',) admin.site.register(Host, HostAdmin) </code></pre> <p>Or use <code>__unicode__</code> in the model itself, e.g.</p> <pre><code>class Host(models.Model): host = models.CharField(max_length=100,primary_key=True) def __unicode__(self): return self.host </code></pre>
10
2009-10-20T13:03:07Z
[ "python", "django", "admin" ]
Making Django admin display the Primary Key rather than each object's Object type
1,594,436
<p>In Django 1.1 admin, when I go to add or change an object, my objects are displayed as:</p> <pre><code>Select host to change * Add host Host object Host object Host object Host object Host object </code></pre> <p>This happens for all models in my site, not just Hosts.</p> <p>Rather than display the same name for each object, I would like Django to display the primary key. </p> <pre><code>Select host to change * Add host machine1 machine2 </code></pre> <p>Here is my code:</p> <pre><code>from django.db import models # Create your models here. class Host(models.Model): host = models.CharField(max_length=100,primary_key=True) class Admin: list_display = ('host') class Test(models.Model): testname = models.CharField(max_length=100,primary_key=True) class Admin: list_display = ('testname') class Result(models.Model): host = models.ForeignKey(Host) TESTRESULT_CHOICES = ( ('P', 'Pass'), ('F', 'Fail'), ) testresult = models.CharField(max_length=1, choices=TESTRESULT_CHOICES) reason = models.CharField(max_length=100) time = models.DateTimeField() testname = models.OneToOneField(Test, primary_key=True) class Admin: list_display = ('host','testname','time','testresult','reason') </code></pre> <p>Reading <a href="http://docs.djangoproject.com/en/dev/ref/contrib/admin/">http://docs.djangoproject.com/en/dev/ref/contrib/admin/</a>:</p> <p>"ModelAdmin.list_display</p> <p>Set list_display to control which fields are displayed on the change list page of the admin."</p> <p>However this simply does not seem to work. Am I doing something wrong?</p>
11
2009-10-20T12:58:26Z
1,950,946
<p>It might also be worth mentioning that, if you are using an auto-incrementing primary key for your models, you will need to coerce it into a string, eg:</p> <pre><code>def __unicode__(self): return str(self.pk) </code></pre>
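<p>The coercion matters because <code>__unicode__</code>/<code>__str__</code> must return a string; returning the raw integer key raises <code>TypeError</code>. A Django-free sketch of both variants:</p>

```python
class BadHost(object):
    def __init__(self, pk):
        self.pk = pk

    def __str__(self):
        return self.pk         # int, not str: str(obj) raises TypeError


class GoodHost(BadHost):
    def __str__(self):
        return str(self.pk)    # coerced, as the answer suggests
```

<p><code>str(GoodHost(7))</code> gives <code>'7'</code>, while <code>str(BadHost(7))</code> fails.</p>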
2
2009-12-23T06:07:08Z
[ "python", "django", "admin" ]
Unicode handling in ReportLab
1,594,470
<p>I am trying to use ReportLab with Unicode characters, but it is not working. I tried tracing through the code till I reached the following line:</p> <pre><code>class TTFont: # ... def splitString(self, text, doc, encoding='utf-8'): # ... cur.append(n &amp; 0xFF) # &lt;-- here is the problem! # ... </code></pre> <p>(This code can be found in ReportLab's repository, in the file <a href="https://svn.reportlab.com/svn/public/reportlab/trunk/src/reportlab/pdfbase/ttfonts.py" rel="nofollow">pdfbase/ttfonts.py</a>. The code in question is in line 1059.)</p> <p><strong>Why is <code>n</code>'s value being manipulated?!</strong></p> <p>In the line shown above, <code>n</code> contains the code point of the character being processed (e.g. 65 for 'A', 97 for 'a', or 1588 for Arabic sheen 'Ø´'). <code>cur</code> is a list that is being filled with the characters to be sent to the final output (AFAIU). Before that line, everything was (apparently) working fine, but in this line, the value of <code>n</code> was manipulated, apparently reducing it to the extended ASCII range!</p> <p>This causes non-ASCII, Unicode characters to lose their value. 
I cannot understand how this statement is useful, or why it is necessary!</p> <p>So my question is, why is <code>n</code>'s value being manipulated here, and how should I proceed about fixing this issue?</p> <p><strong>Edit:</strong><br /> In response to the comment regarding my code snippet, here is an example that causes this error:</p> <pre><code>my_doctemplate.build([Paragraph(bulletText = None, encoding = 'utf8', caseSensitive = 1, debug = 0, text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', frags = [ParaFrag(fontName = 'DejaVuSansMono-BoldOblique', text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', sub = 0, rise = 0, greek = 0, link = None, italic = 0, strike = 0, fontSize = 12.0, textColor = Color(0,0,0), super = 0, underline = 0, bold = 0)])]) </code></pre> <p>In <code>PDFTextObject._textOut</code>, <code>_formatText</code> is called, which identifies the font as <code>_dynamicFont</code>, and accordingly calls <code>font.splitString</code>, which is causing the error described above.</p>
1
2009-10-20T13:03:09Z
1,594,858
<p>I'm pretty sure you'd need to change <code>0xFF</code> to <code>0xFFFF</code> to keep both bytes of a 16-bit Unicode code point, as ~unutbu suggested, hence masking two bytes instead of one.</p>
0
2009-10-20T14:05:53Z
[ "python", "unicode", "reportlab" ]
Unicode handling in ReportLab
1,594,470
<p>I am trying to use ReportLab with Unicode characters, but it is not working. I tried tracing through the code till I reached the following line:</p> <pre><code>class TTFont: # ... def splitString(self, text, doc, encoding='utf-8'): # ... cur.append(n &amp; 0xFF) # &lt;-- here is the problem! # ... </code></pre> <p>(This code can be found in ReportLab's repository, in the file <a href="https://svn.reportlab.com/svn/public/reportlab/trunk/src/reportlab/pdfbase/ttfonts.py" rel="nofollow">pdfbase/ttfonts.py</a>. The code in question is in line 1059.)</p> <p><strong>Why is <code>n</code>'s value being manipulated?!</strong></p> <p>In the line shown above, <code>n</code> contains the code point of the character being processed (e.g. 65 for 'A', 97 for 'a', or 1588 for Arabic sheen 'Ø´'). <code>cur</code> is a list that is being filled with the characters to be sent to the final output (AFAIU). Before that line, everything was (apparently) working fine, but in this line, the value of <code>n</code> was manipulated, apparently reducing it to the extended ASCII range!</p> <p>This causes non-ASCII, Unicode characters to lose their value. 
I cannot understand how this statement is useful, or why it is necessary!</p> <p>So my question is, why is <code>n</code>'s value being manipulated here, and how should I proceed about fixing this issue?</p> <p><strong>Edit:</strong><br /> In response to the comment regarding my code snippet, here is an example that causes this error:</p> <pre><code>my_doctemplate.build([Paragraph(bulletText = None, encoding = 'utf8', caseSensitive = 1, debug = 0, text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', frags = [ParaFrag(fontName = 'DejaVuSansMono-BoldOblique', text = '\xd8\xa3\xd8\xa8\xd8\xb1\xd8\xa7\xd8\xac', sub = 0, rise = 0, greek = 0, link = None, italic = 0, strike = 0, fontSize = 12.0, textColor = Color(0,0,0), super = 0, underline = 0, bold = 0)])]) </code></pre> <p>In <code>PDFTextObject._textOut</code>, <code>_formatText</code> is called, which identifies the font as <code>_dynamicFont</code>, and accordingly calls <code>font.splitString</code>, which is causing the error described above.</p>
1
2009-10-20T13:03:09Z
1,964,931
<p>What do you mean, "not working"? You have misquoted the reportlab source code. What it is actually doing is that the lower and upper byte of each 16-bit unicode character are coded separately (the upper byte is only written out when it changes, which I assume is a PDF-specific optimization to make documents smaller).</p> <p>Please explain exactly what the problem <em>is</em>, not what you think what the underlying reason is. Chances are the characters you want to display simply don't exist in the selected font ('DejaVuSansMono-BoldOblique').</p>
1
2009-12-27T01:46:19Z
[ "python", "unicode", "reportlab" ]
SPARQL Query gives unexpected result
1,594,518
<p>I hope someone can help me on this probably totally easy-to-solve problem:</p> <p>I want to run a SPARQL query against the following RDF (noted in N3, the RDF/XMl sits <a href="http://rdf.schalljugend.net/umstaetter.rdf" rel="nofollow">here</a>). This is the desription of a journal article and descriptions of the journal, author and publisher:</p> <pre><code> @prefix bibo: &lt;http://purl.org/ontology/bibo/&gt; . @prefix dc: &lt;http://purl.org/dc/elements/1.1/&gt; . @prefix ex: &lt;http://example.org/thesis/&gt; . @prefix foaf: &lt;http://xmlns.com/foaf/0.1/&gt; . @prefix rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; . &lt;ex:XY&gt; a bibo:Article; dc:creator ex:umstaetter; dc:date "2008-11-01"; dc:isPartOf ex:bibdienst; dc:title "DDC in Europa"@de; bibo:endPage "1221"; bibo:issue "11"; bibo:language "de"; bibo:pageStart "1194"; bibo:uri &lt;http://www.zlb.de/Erschliessung020309BD.pdf&gt;; bibo:volume "42" . &lt;ex:bibdienst&gt; a bibo:Journal; dc:publisher ex:zlb; dc:title "Bibliotheksdienst"@de; bibo:issn "00061972" . &lt;ex:umstaetter&gt; a foaf:person; foaf:birthday "1941-06-12"; foaf:gender "Male"; foaf:givenName "Walther"; foaf:homepage &lt;http://www.ib.hu-berlin.de/~wumsta/index.html&gt;; foaf:img "http://libreas.eu/ausgabe7/pictures/wumstaetter1.jpg"; foaf:name "Walther Umst\u00E4tter"; foaf:surname "Umst\u00E4tter"; foaf:title "Prof. Dr. rer. nat." . &lt;ex:zlb&gt; a foaf:Organization; foaf:homepage &lt;http://www.zlb.de&gt;; foaf:name "Zentral- und Landesbibliothek Berlin"@de . 
</code></pre> <p>For testing purposes I wanted to read out the <strong>foaf:homepage</strong> of <strong>ex:zlb</strong> - the SPARQL I want to run is:</p> <pre><code>PREFIX rdf:&lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; PREFIX dc: &lt;http://purl.org/dc/elements/1.1/&gt; PREFIX bibo: &lt;http://purl.org/ontology/bibo/&gt; PREFIX foaf: &lt;http://xmlns.com/foaf/0.1/&gt; PREFIX ex: &lt;http://example.org/thesis/&gt; SELECT ?article ?publisher ?publisher_url WHERE { ?article dc:isPartOf ?journal . ?journal dc:publisher ?publisher . ?publisher foaf:homepage ?publisher_url } </code></pre> <p>(Again: This is gonna be for testing only since there is only one entity of article.)</p> <p>Running it on my local machine with Python and RDflib doesn't give me a result. Neither does the Online Redland SPARQL Query Demo.</p> <p>Anyone out there who sees a solution? Am I on the right path or totally wrong?</p>
7
2009-10-20T13:12:33Z
1,594,959
<p>I don't think that you can use a QName in an XML attribute value; e.g. the value of <code>rdf:about</code>. So consider this line from your RDF/XML file:</p> <pre><code> &lt;bibo:Journal rdf:about="ex:bibdienst"&gt; </code></pre> <p>I think that this is actually saying that the subject URI is "ex:bibdienst". That is a syntactically valid URI, but it is not the same URI as appears as the object of the triple corresponding to this line:</p> <pre><code> &lt;dc:isPartOf rdf:resource="http://example.org/thesis/bibdienst" /&gt; </code></pre> <p>Try replacing the QNames in XML attribute values with the corresponding URIs and see if that fixes your problem.</p>
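<p>The fix amounts to expanding each QName against its namespace before it goes into an attribute value; once the subject written as <code>ex:bibdienst</code> and the object of <code>dc:isPartOf</code> are the same full URI, the triple patterns join and the query returns a row. A sketch of that expansion (the prefix map mirrors the document's <code>@prefix</code> line):</p>

```python
PREFIXES = {"ex": "http://example.org/thesis/"}

def expand_qname(value, prefixes=PREFIXES):
    """Turn 'ex:bibdienst' into its full URI; anything whose prefix is
    unknown (e.g. an absolute http:// URI) is returned unchanged."""
    prefix, sep, local = value.partition(":")
    if sep and prefix in prefixes:
        return prefixes[prefix] + local
    return value
```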
7
2009-10-20T14:22:28Z
[ "python", "rdf", "sparql", "rdflib" ]
SPARQL Query gives unexpected result
1,594,518
<p>I hope someone can help me on this probably totally easy-to-solve problem:</p> <p>I want to run a SPARQL query against the following RDF (noted in N3, the RDF/XMl sits <a href="http://rdf.schalljugend.net/umstaetter.rdf" rel="nofollow">here</a>). This is the desription of a journal article and descriptions of the journal, author and publisher:</p> <pre><code> @prefix bibo: &lt;http://purl.org/ontology/bibo/&gt; . @prefix dc: &lt;http://purl.org/dc/elements/1.1/&gt; . @prefix ex: &lt;http://example.org/thesis/&gt; . @prefix foaf: &lt;http://xmlns.com/foaf/0.1/&gt; . @prefix rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; . &lt;ex:XY&gt; a bibo:Article; dc:creator ex:umstaetter; dc:date "2008-11-01"; dc:isPartOf ex:bibdienst; dc:title "DDC in Europa"@de; bibo:endPage "1221"; bibo:issue "11"; bibo:language "de"; bibo:pageStart "1194"; bibo:uri &lt;http://www.zlb.de/Erschliessung020309BD.pdf&gt;; bibo:volume "42" . &lt;ex:bibdienst&gt; a bibo:Journal; dc:publisher ex:zlb; dc:title "Bibliotheksdienst"@de; bibo:issn "00061972" . &lt;ex:umstaetter&gt; a foaf:person; foaf:birthday "1941-06-12"; foaf:gender "Male"; foaf:givenName "Walther"; foaf:homepage &lt;http://www.ib.hu-berlin.de/~wumsta/index.html&gt;; foaf:img "http://libreas.eu/ausgabe7/pictures/wumstaetter1.jpg"; foaf:name "Walther Umst\u00E4tter"; foaf:surname "Umst\u00E4tter"; foaf:title "Prof. Dr. rer. nat." . &lt;ex:zlb&gt; a foaf:Organization; foaf:homepage &lt;http://www.zlb.de&gt;; foaf:name "Zentral- und Landesbibliothek Berlin"@de . 
</code></pre> <p>For testing purposes I wanted to read out the <strong>foaf:homepage</strong> of <strong>ex:zlb</strong> - the SPARQL I want to run is:</p> <pre><code>PREFIX rdf:&lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; PREFIX dc: &lt;http://purl.org/dc/elements/1.1/&gt; PREFIX bibo: &lt;http://purl.org/ontology/bibo/&gt; PREFIX foaf: &lt;http://xmlns.com/foaf/0.1/&gt; PREFIX ex: &lt;http://example.org/thesis/&gt; SELECT ?article ?publisher ?publisher_url WHERE { ?article dc:isPartOf ?journal . ?journal dc:publisher ?publisher . ?publisher foaf:homepage ?publisher_url } </code></pre> <p>(Again: This is gonna be for testing only since there is only one entity of article.)</p> <p>Running it on my local machine with Python and RDflib doesn't give me a result. Neither does the Online Redland SPARQL Query Demo.</p> <p>Anyone out there who sees a solution? Am I on the right path or totally wrong?</p>
7
2009-10-20T13:12:33Z
1,595,236
<p>Yep Stephen C is totally correct that you can't use QNames in XML attributes, you can use XML entities instead which you define in a DTD block at the top of your document like so:</p> <p>eg.</p> <pre><code>&lt;!DOCTYPE rdf:RDF[ &lt;!ENTITY rdf 'http://www.w3.org/1999/02/22-rdf-syntax-ns#'&gt; &lt;!ENTITY rdfs 'http://www.w3.org/2000/01/rdf-schema#'&gt; &lt;!ENTITY xsd 'http://www.w3.org/2001/XMLSchema#'&gt; &lt;!ENTITY ex 'http://example.org/thesis/'&gt; &lt;!ENTITY dc 'http://purl.org/dc/elements/1.1/'&gt; &lt;!ENTITY foaf 'http://xmlns.com/foaf/0.1/'&gt; &lt;!ENTITY bibo 'http://purl.org/ontology/bibo/'&gt; ]&gt; </code></pre> <p>Then you can define attributes like so:</p> <pre><code>&lt;bibo:Journal rdf:about="&amp;ex;bibdienst"&gt; </code></pre>
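<p>Python's own parsers expand such internal entities, so you can check the result with the standard library alone. A small sketch:</p>

```python
from xml.dom import minidom

SOURCE = """<!DOCTYPE rdf:RDF [
  <!ENTITY ex 'http://example.org/thesis/'>
]>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about="&ex;bibdienst"/>
</rdf:RDF>"""

dom = minidom.parseString(SOURCE)
about = dom.getElementsByTagName("rdf:Description")[0].getAttribute("rdf:about")
```

<p><code>about</code> comes back as the expanded <code>http://example.org/thesis/bibdienst</code>, which is what the SPARQL patterns need to match against.</p>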
6
2009-10-20T14:55:57Z
[ "python", "rdf", "sparql", "rdflib" ]
Implement a listbox
1,594,589
<p>I need to implement a listbox for a mobile. The only relevant controls are up and down arrow keys. The listbox should display as many rows of items from a list as will fit on the screen (screen_rows), one row should be highighted (sel_row) and the display should wrap if the user hits up arrow when the first item is highlighted or down arrow if the last item is highlighted (that is, the last item should be displayed and highlighted if the user hits up when the first item is highlighted). Up arrow highlights the previous item and down arrow highlights the next item.</p> <p>I've put something together, but am concerned I've missed something in testing. There must be a standard way to do this, given the prevalence of listboxes out there.</p> <pre><code>def up_key(self): if self.sel_row &gt; 0: self.sel_row -= 1 elif self.top_item &gt; 0: # top_item is the index of the first list item self.top_item -= 1 elif self.top_item == 0: if self.n_lines &gt;= self.screen_rows: # n_lines is the number of items in the list self.top_item = self.n_lines - self.screen_rows self.sel_row = min(self.screen_rows-1, self.n_lines-1) else: self.top_item = 0 self.sel_row = self.n_lines-1 def down_key(self): if self.sel_row &lt; self.screen_rows-1 and self.sel_row &lt; self.n_lines-1: self.sel_row += 1 elif self.sel_row == self.screen_rows-1: bottom_item = self.top_item + self.screen_rows if bottom_item == self.n_lines: self.top_item = 0 self.sel_row = 0 if bottom_item &lt; self.n_lines: self.top_item += 1 elif self.sel_row == self.n_lines-1: self.top_item = 0 self.sel_row = 0 def set_pos(self, pos): # display item with index pos if pos &lt; 0: pos = 0 elif pos &gt;= self.n_lines: pos = self.n_lines - 1 if pos &lt; self.screen_rows: self.top_item = 0 self.sel_row = pos else: self.sel_row = min(self.screen_rows, self.n_lines)//2 - 1 self.top_item = pos - self.sel_row if self.top_item &gt;= self.n_lines - self.screen_rows: self.top_item = self.n_lines - self.screen_rows - 1 self.sel_row = pos 
- self.top_item - 1 </code></pre> <p>EDIT: after each function I call a redraw screen function, which redraws the screen with top_item at the top and sel-row highlighted. </p> <p>I've added a pseudo-code tag, in case someone has a version in something that's not python.</p>
0
2009-10-20T13:24:01Z
1,595,490
<p>Few Python programs implement listboxes from scratch -- they're normally just taken from existing toolkits. That may explain why there's no real cross-toolkit "standard"!-)</p> <p>Coming to your code, I imagine <code>set_pos</code> is meant to be called right after either <code>up_key</code> or <code>down_key</code> are finished (you don't make this entirely clear).</p> <p>My main worry would be the repetitiousness and asymmetry between your two <code>_key</code> routines. Surely given that your specs are so similar for up and down keys, you want to delegate to a single function which takes an "increment" argument, either +1 or -1. That common function could first do <code>self.sel_row += increment</code>, then immediately return in the common case where <code>sel_row</code> is still fine, i.e <code>if self.top_item &lt;= self.sel_row &lt; self.top_item + self.screen_rows</code>; otherwise deal with the cases where <code>sel_row</code> has exited the currently displayed region, by adjusting <code>self.top_item</code>, exiting if that causes no need to wraparound, or finally dealing with the wraparound cases.</p> <p>I'd be keen to apply "flat is better than nested" by repeatedly using constructs of the form "do some required state chance; if things are now fine, return" rather than logically more complex "if doing a simple thing will be OK, then do the simple thing; else if something a bit more complicated but not terrible is needed, then do the complicated something; else if we're in a really complicated case, deal with the really complicated problem" -- the latter is far more prone to error and harder to follow in any case.</p>
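<p>Concretely, the single-function shape suggested above might look like this (a sketch under the same state model: <code>top_item</code> is the index of the first visible item, <code>sel_row</code> the highlighted row within the window):</p>

```python
class ListBox:
    def __init__(self, n_lines, screen_rows):
        self.n_lines = n_lines
        self.screen_rows = screen_rows
        self.top_item = 0
        self.sel_row = 0

    @property
    def selected(self):                       # absolute index of the highlight
        return self.top_item + self.sel_row

    def move(self, inc):                      # inc is +1 (down) or -1 (up)
        visible = min(self.screen_rows, self.n_lines)
        self.sel_row += inc
        if 0 <= self.sel_row < visible:
            return                            # common case: still on screen
        if self.sel_row < 0:                  # moved above the window
            if self.top_item > 0:
                self.top_item -= 1            # scroll up one line
                self.sel_row = 0
            else:                             # wrap to the last item
                self.top_item = max(0, self.n_lines - visible)
                self.sel_row = visible - 1
        else:                                 # moved below the window
            if self.top_item + visible < self.n_lines:
                self.top_item += 1            # scroll down one line
                self.sel_row = visible - 1
            else:                             # wrap to the first item
                self.top_item = 0
                self.sel_row = 0
```

<p>Each branch does one state change and falls out; the flat shape makes the wraparound cases easy to check one by one.</p>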
1
2009-10-20T15:30:14Z
[ "python", "listbox", "pseudocode" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,594,634
<p>Poor man's map-reduce:</p> <p>Use <a href="http://www.gnu.org/manual/gawk/html%5Fnode/Split-Program.html">split</a> to break the file up into as many pieces as you have CPUs.</p> <p>Use <a href="http://linux.about.com/library/cmd/blcmdl1%5Fbatch.htm">batch</a> to run your muncher in parallel.</p> <p>Use <a href="http://www.gnu.org/software/coreutils/manual/html%5Fnode/cat-invocation.html#cat-invocation">cat</a> to concatenate the results.</p>
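The same split/batch/cat idea can also be sketched in pure Python with `multiprocessing` standing in for the shell tools (the `munch` transformation below is a made-up placeholder for steps 2-3):

```python
from multiprocessing import Pool

def munch(chunk):
    # placeholder for steps 2-3: transform each input row
    return [line.upper() for line in chunk]

def map_reduce(rows, nchunks=4):
    # "split": cut the rows into contiguous chunks, roughly one per CPU
    size = (len(rows) + nchunks - 1) // nchunks
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    pool = Pool(nchunks)          # "batch": munch the chunks in parallel
    try:
        results = pool.map(munch, chunks)
    finally:
        pool.close()
        pool.join()
    # "cat": concatenate the partial results, preserving order
    return [line for chunk in results for line in chunk]

if __name__ == "__main__":
    rows = ["row %d\n" % i for i in range(1000)]
    out = map_reduce(rows)
```

This only pays off if the munching (steps 2-3) is CPU-heavy; if the job is purely I/O bound it just adds overhead.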
5
2009-10-20T13:33:19Z
[ "python", "performance", "optimization", "file-io" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,594,704
<p>If you are I/O bound, the best way I have found to optimize is to read or write the <em>entire</em> file into/out of memory at once, then operate out of RAM from there on.</p> <p>With extensive testing I found that my runtime ended up bound not by the amount of data I read from/wrote to disk, but by the number of I/O operations I used to do it. That is what you need to optimize.</p> <p>I don't know Python, but if there is a way to tell it to write the whole file out of RAM in one go, rather than issuing a separate I/O for each byte, that's what you need to do.</p> <p>Of course the drawback to this is that files can be considerably larger than available RAM. There are lots of ways to deal with that, but that is another question for another time.</p>
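In Python terms, that means batching rows in memory and issuing one large write instead of many small ones (a sketch; the file name and row format here are made up):

```python
# Pretend output rows, standing in for the transformed csv rows of step 3.
rows = ["row %d,a,b\n" % i for i in range(100000)]

# One write() per row would issue many small I/O requests:
#     for row in rows:
#         f.write(row)
# Joining first issues a single large one instead:
with open("out_whole.csv", "w") as f:
    f.write("".join(rows))
```

The trade-off is exactly the one noted above: the whole batch must fit in RAM, so for huge outputs you would join and write in large chunks rather than all at once.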
3
2009-10-20T13:42:32Z
[ "python", "performance", "optimization", "file-io" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,594,935
<p>The first thing is to be certain of what you should optimize. You seem to not know precisely where your time is going. Before spending more time wondering, use a performance profiler to see exactly where the time is going.</p> <p><a href="http://docs.python.org/library/profile.html" rel="nofollow">http://docs.python.org/library/profile.html</a></p> <p>When you know exactly where the time is going, you'll be in a better position to know where to spend your time optimizing.</p>
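Getting a first profile takes only a few lines (a sketch; `process_file` is a made-up stand-in for whatever drives steps 1-5):

```python
import cProfile
import pstats

def process_file():
    # stand-in for the real pipeline (steps 1-5)
    total = 0
    for i in range(200000):
        total += i * i
    return total

prof = cProfile.Profile()
prof.runcall(process_file)            # profile one full run
stats = pstats.Stats(prof)
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
```

The `cumulative` sort quickly shows whether the time really is in the write phase (step 4) or somewhere unexpected.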
-2
2009-10-20T14:18:12Z
[ "python", "performance", "optimization", "file-io" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,595,358
<p>Use buffered writes for step 4.</p> <p>Write a simple function that simply appends the output onto a string, checks the string length, and only writes when you have enough which should be some multiple of 4k bytes. I would say start with 32k buffers and time it.</p> <p>You would have one buffer per file, so that most "writes" won't actually hit the disk.</p>
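A minimal sketch of such a per-file buffer (my illustration of the idea above, using a list join rather than string concatenation; remember to call `flush()` once at the end to push out the leftover tail):

```python
class BufferedWriter(object):
    """Collect writes in memory; hit the real file only every `limit` bytes."""

    def __init__(self, fileobj, limit=32 * 1024):
        self.fileobj = fileobj
        self.limit = limit
        self.chunks = []
        self.size = 0

    def write(self, data):
        self.chunks.append(data)
        self.size += len(data)
        if self.size >= self.limit:
            self.flush()

    def flush(self):
        if self.chunks:
            self.fileobj.write("".join(self.chunks))
            self.chunks = []
            self.size = 0
```

One instance per output file; most `write()` calls then never touch the disk at all.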
1
2009-10-20T15:14:06Z
[ "python", "performance", "optimization", "file-io" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,595,626
<p>Python already does IO buffering and the OS <em>should</em> handle both prefetching the input file and delaying writes until it needs the RAM for something else or just gets uneasy about having dirty data in RAM for too long. Unless you force the OS to write them immediately, like closing the file after each write or opening the file in O_SYNC mode.</p> <p>If the OS isn't doing the right thing, you can try raising the buffer size (third parameter to <code>open()</code>). For some guidance on appropriate values: given a 100MB/s, 10ms-latency IO system, a 1MB IO size will result in approximately 50% latency overhead, while a 10MB IO size will result in 9% overhead. If it's still IO bound, you probably just need more bandwidth. Use your OS-specific tools to check what kind of bandwidth you are getting to/from the disks.</p> <p>Also useful is to check if step 4 is taking a lot of time executing or waiting on IO. If it's executing, you'll need to spend more time checking which part is the culprit and optimize that, or split out the work to different processes.</p>
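For reference, the buffer size is just the third positional argument to `open()` (a sketch with made-up file names; whether it helps at all depends on how well the OS is already doing):

```python
BUF = 1024 * 1024   # 1 MB userspace buffer; try 10 MB as well and measure

# Every write below accumulates in the buffer; the disk is only hit
# once the buffer fills, or when the file is closed.
out = open("buffered_out.csv", "w", BUF)
for i in range(1000):
    out.write("row %d\n" % i)
out.close()          # close() flushes whatever is left in the buffer
```

The same argument works on the read side, so both ends of the pipeline can use large, infrequent I/O requests.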
3
2009-10-20T15:51:27Z
[ "python", "performance", "optimization", "file-io" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,597,062
<p>Can you use a ramdisk for step 4? Low millions sounds doable if the rows are less than a couple of kB or so.</p>
2
2009-10-20T20:20:29Z
[ "python", "performance", "optimization", "file-io" ]
How should I optimize this filesystem I/O bound program?
1,594,604
<p>I have a python program that does something like this:</p> <ol> <li>Read a row from a csv file.</li> <li>Do some transformations on it.</li> <li>Break it up into the actual rows as they would be written to the database.</li> <li>Write those rows to individual csv files.</li> <li>Go back to step 1 unless the file has been totally read.</li> <li>Run SQL*Loader and load those files into the database.</li> </ol> <p>Step 6 isn't really taking much time at all. It seems to be step 4 that's taking up most of the time. For the most part, I'd like to optimize this for handling a set of records in the low millions running on a quad-core server with a RAID setup of some kind.</p> <p>There are a few ideas that I have to solve this:</p> <ol> <li>Read the entire file from step one (or at least read it in very large chunks) and write the file to disk as a whole or in very large chunks. The idea being that the hard disk would spend less time going back and forth between files. Would this do anything that buffering wouldn't?</li> <li>Parallelize steps 1, 2&amp;3, and 4 into separate processes. This would make steps 1, 2, and 3 not have to wait on 4 to complete.</li> <li>Break the load file up into separate chunks and process them in parallel. The rows don't need to be handled in any sequential order. This would likely need to be combined with step 2 somehow.</li> </ol> <p>Of course, the correct answer to this question is "do what you find to be the fastest by testing." However, I'm mainly trying to get an idea of where I should spend my time first. Does anyone with more experience in these matters have any advice?</p>
2
2009-10-20T13:27:31Z
1,597,281
<p>Isn't it possible to collect a few thousand rows in RAM, then go directly to the database server and execute them? </p> <p>This would remove the save to and load from the disk that step 4 entails.</p> <p>If the database server is transactional, this is also a safe way to do it - just have the database begin a transaction before your first row and commit after the last.</p>
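A sketch of that idea, with `sqlite3` standing in for the real database driver (the question targets Oracle via SQL*Loader, so treat this purely as an illustration of batching rows inside one transaction):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rows (a TEXT, b TEXT)")

# collect a few thousand rows in RAM ...
batch = [("a%d" % i, "b%d" % i) for i in range(5000)]

# ... then send them in one executemany(), bracketed by a transaction
conn.executemany("INSERT INTO rows VALUES (?, ?)", batch)
conn.commit()   # "commit after the last" row
```

With any DB-API driver the shape is the same: `executemany()` plus a single commit avoids both the intermediate csv files and a per-row round trip.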
1
2009-10-20T21:01:01Z
[ "python", "performance", "optimization", "file-io" ]
Cleaning build directory in setup.py
1,594,827
<p>How could I make my <code>setup.py</code> pre-delete and post-delete the build directory?</p>
23
2009-10-20T14:01:15Z
1,594,896
<p>Does <a href="http://docs.python.org/distutils/apiref.html#module-distutils.command.clean">this</a> answer it? IIRC, you'll need to use the <code>--all</code> flag to get rid of stuff outside of <code>build/lib</code>:</p> <pre><code>python setup.py clean --all </code></pre>
56
2009-10-20T14:12:39Z
[ "python", "build", "distutils" ]
Cleaning build directory in setup.py
1,594,827
<p>How could I make my <code>setup.py</code> pre-delete and post-delete the build directory?</p>
23
2009-10-20T14:01:15Z
1,594,902
<p>For pre-deletion, just delete it with <code>distutils.dir_util.remove_tree</code> before calling setup.</p> <p>For post-delete, I assume you only want to post-delete after selected commands. Subclass the respective command, override its run method (to invoke remove_tree after calling the base run), and pass the new command into the cmdclass dictionary of setup.</p>
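The pre-delete half might look like this (a sketch; `shutil.rmtree` is used here as the plain-stdlib equivalent of `distutils.dir_util.remove_tree`, and the `build/lib/mypkg` path is made up):

```python
import os
import shutil

# simulate a stale build area left over from a previous run
os.makedirs(os.path.join("build", "lib", "mypkg"), exist_ok=True)

# pre-delete before handing control to setup()
if os.path.isdir("build"):
    shutil.rmtree("build")

# setup(...) would be called here; the post-delete half would subclass
# the relevant command and remove the tree again from its run() override,
# as described above.
```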
5
2009-10-20T14:13:42Z
[ "python", "build", "distutils" ]
Cleaning build directory in setup.py
1,594,827
<p>How could I make my <code>setup.py</code> pre-delete and post-delete the build directory?</p>
23
2009-10-20T14:01:15Z
30,241,551
<p>Here's an answer that combines the programmatic approach of Martin's answer with the functionality of Matt's answer (a <code>clean</code> that takes care of all possible build areas):</p> <pre><code>from distutils.core import setup from distutils.command.clean import clean from distutils.command.install import install class MyInstall(install): # Calls the default run command, then deletes the build area # (equivalent to "setup clean --all"). def run(self): install.run(self) c = clean(self.distribution) c.all = True c.finalize_options() c.run() if __name__ == '__main__': setup( name="myname", ... cmdclass={'install': MyInstall} ) </code></pre>
1
2015-05-14T15:44:09Z
[ "python", "build", "distutils" ]
Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?
1,594,985
<pre><code>x = 4.2 - 0.1 </code></pre> <p>vb.net gives <code>4.1000000000000005</code><br /> python gives <code>4.1000000000000005</code></p> <p>Excel gives <code>4.1</code><br /> <a href="http://www.google.com/search?q=4.2-.1" rel="nofollow">Google calc</a> gives <code>4.1</code> </p> <p>What is the reason this happens?</p>
2
2009-10-20T14:25:03Z
1,594,999
<p><a href="http://en.wikipedia.org/wiki/Floating%5Fpoint">Float/double precision.</a></p> <p>You must remember that in binary, 4.1 = 4 + 1/10. 1/10 is an infinitely repeating sum in binary, much like 1/9 is an infinite sum in decimal.</p>
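This is easy to see from Python itself, since `repr()` shows enough digits to expose the stored double:

```python
x = 4.2 - 0.1
print(repr(x))           # 4.1000000000000005 -- the nearest double

# 0.1 on its own is already approximate:
print("%.20f" % 0.1)     # 0.10000000000000000555

# which is why equality tests on decimal fractions surprise people:
print(0.1 + 0.2 == 0.3)  # False
```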
13
2009-10-20T14:27:40Z
[ "python", "vb.net", "floating-point" ]
Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?
1,594,985
<pre><code>x = 4.2 - 0.1 </code></pre> <p>vb.net gives <code>4.1000000000000005</code><br /> python gives <code>4.1000000000000005</code></p> <p>Excel gives <code>4.1</code><br /> <a href="http://www.google.com/search?q=4.2-.1" rel="nofollow">Google calc</a> gives <code>4.1</code> </p> <p>What is the reason this happens?</p>
2
2009-10-20T14:25:03Z
1,595,035
<p>There is no problem, really. It is just the way floats work (their internal binary representation). Anyway:</p> <pre><code>&gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; Decimal('4.2')-Decimal('0.1') Decimal('4.1') </code></pre>
4
2009-10-20T14:31:13Z
[ "python", "vb.net", "floating-point" ]
Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?
1,594,985
<pre><code>x = 4.2 - 0.1 </code></pre> <p>vb.net gives <code>4.1000000000000005</code><br /> python gives <code>4.1000000000000005</code></p> <p>Excel gives <code>4.1</code><br /> <a href="http://www.google.com/search?q=4.2-.1" rel="nofollow">Google calc</a> gives <code>4.1</code> </p> <p>What is the reason this happens?</p>
2
2009-10-20T14:25:03Z
1,595,041
<p>In vb.net, you can avoid this problem by using Decimal type instead:</p> <pre><code>Dim x As Decimal = 4.2D - 0.1D </code></pre> <p>The result is 4.1 .</p>
1
2009-10-20T14:31:54Z
[ "python", "vb.net", "floating-point" ]
Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?
1,594,985
<pre><code>x = 4.2 - 0.1 </code></pre> <p>vb.net gives <code>4.1000000000000005</code><br /> python gives <code>4.1000000000000005</code></p> <p>Excel gives <code>4.1</code><br /> <a href="http://www.google.com/search?q=4.2-.1" rel="nofollow">Google calc</a> gives <code>4.1</code> </p> <p>What is the reason this happens?</p>
2
2009-10-20T14:25:03Z
1,595,044
<pre><code>&gt;&gt;&gt; x = 4.2 - 0.1 &gt;&gt;&gt; x 4.1000000000000005 &gt;&gt;&gt; print(x) 4.1 </code></pre> <p>This happens because of <a href="http://en.wikipedia.org/wiki/Floating%5Fpoint" rel="nofollow">how numbers are stored internally</a>.</p> <p>Computers represent numbers in binary, instead of decimal, as we humans are used to. With floating point numbers, computers have to make an <em>approximation</em> to the closest binary floating point value.</p> <blockquote> <p><a href="http://docs.python.org/tutorial/floatingpoint.html#representation-error" rel="nofollow">Almost all machines today</a> (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form <code>J/2**N</code> where <code>J</code> is an integer containing exactly 53 bits.</p> </blockquote> <p>If you <code>print</code> the number, <a href="http://docs.python.org/tutorial/floatingpoint.html" rel="nofollow">it will show the approximation</a>, truncated to a <em>normal</em> value. For example, the <strong>real</strong> value of <code>0.1</code> is <code>0.1000000000000000055511151231257827021181583404541015625</code>.</p> <p>If you <em>really</em> need a base 10 based number (if you don't know the answer to this question, <strong>you don't</strong>), you could use (in Python) <a href="http://docs.python.org/library/decimal.html" rel="nofollow"><code>decimal.Decimal</code></a>:</p> <pre><code>&gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; Decimal("4.2") - Decimal("0.1") Decimal("4.1") </code></pre> <blockquote> <p>Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “<a href="http://docs.python.org/tutorial/floatingpoint.html#representation-error" rel="nofollow">Representation Error</a>” section.
<strong>See <a href="http://www.lahey.com/float.htm" rel="nofollow">The Perils of Floating Point</a> for a more complete account of other common surprises.</strong></p> <p>As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in <code>2**53</code> per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic, and that every float operation can suffer a new rounding error.</p> <p>While pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. <a href="http://docs.python.org/library/functions.html#str" rel="nofollow"><code>str()</code></a> usually suffices, and for finer control see the <a href="http://docs.python.org/library/stdtypes.html#str.format" rel="nofollow"><code>str.format()</code></a> method’s format specifiers in <a href="http://docs.python.org/library/string.html#formatstrings" rel="nofollow"><em>Format String Syntax</em></a>.</p> </blockquote>
10
2009-10-20T14:32:07Z
[ "python", "vb.net", "floating-point" ]
Convert to UTC Timestamp
1,595,047
<pre><code># parses some string into that format. datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S") # gets the seconds from the above date. timestamp1 = time.mktime(datetime1.timetuple()) # adds milliseconds to the above seconds. timeInMillis = int(timestamp1) * 1000 </code></pre> <p>How do I (at any point in that code) turn the date into UTC format? I've been ploughing through the API for what seems like a century and cannot find anything that I can get working. Can anyone help? It's currently turning it into Eastern time I believe (however I'm in GMT but want UTC).</p> <p>EDIT: I gave the answer to the guy with the closest to what I finally found out.</p> <pre><code>datetime1 = datetime.strptime(somestring, someformat) timeInSeconds = calendar.timegm(datetime1.utctimetuple()) timeInMillis = timeInSeconds * 1000 </code></pre> <p>:)</p>
9
2009-10-20T14:32:35Z
1,595,081
<p><a href="http://docs.python.org/library/datetime.html#datetime.datetime.utcfromtimestamp"><code>datetime.utcfromtimestamp</code></a> is probably what you're looking for:</p> <pre><code>&gt;&gt;&gt; timestamp1 = time.mktime(datetime.now().timetuple()) &gt;&gt;&gt; timestamp1 1256049553.0 &gt;&gt;&gt; datetime.utcfromtimestamp(timestamp1) datetime.datetime(2009, 10, 20, 14, 39, 13) </code></pre>
6
2009-10-20T14:37:12Z
[ "python", "datetime", "utc" ]
Convert to UTC Timestamp
1,595,047
<pre><code># parses some string into that format. datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S") # gets the seconds from the above date. timestamp1 = time.mktime(datetime1.timetuple()) # adds milliseconds to the above seconds. timeInMillis = int(timestamp1) * 1000 </code></pre> <p>How do I (at any point in that code) turn the date into UTC format? I've been ploughing through the API for what seems like a century and cannot find anything that I can get working. Can anyone help? It's currently turning it into Eastern time I believe (however I'm in GMT but want UTC).</p> <p>EDIT: I gave the answer to the guy with the closest to what I finally found out.</p> <pre><code>datetime1 = datetime.strptime(somestring, someformat) timeInSeconds = calendar.timegm(datetime1.utctimetuple()) timeInMillis = timeInSeconds * 1000 </code></pre> <p>:)</p>
9
2009-10-20T14:32:35Z
1,595,096
<p>I think you can use the <code>utcoffset()</code> method:</p> <pre><code>utc_time = datetime1 - datetime1.utcoffset() </code></pre> <p>The docs give an example of this using the <code>astimezone()</code> method <a href="http://docs.python.org/library/datetime.html#datetime.datetime.astimezone" rel="nofollow">here</a>.</p> <p>Additionally, if you're going to be dealing with timezones, you might want to look into the <a href="http://pytz.sourceforge.net/" rel="nofollow">PyTZ library</a> which has lots of helpful tools for converting datetime's into various timezones (including between EST and UTC)</p> <p>With PyTZ:</p> <pre><code>from datetime import datetime import pytz utc = pytz.utc eastern = pytz.timezone('US/Eastern') # Using datetime1 from the question datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S") # First, tell Python what timezone that string was in (you said Eastern) eastern_time = eastern.localize(datetime1) # Then convert it from Eastern to UTC utc_time = eastern_time.astimezone(utc) </code></pre>
3
2009-10-20T14:38:16Z
[ "python", "datetime", "utc" ]
Convert to UTC Timestamp
1,595,047
<pre><code># parses some string into that format. datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S") # gets the seconds from the above date. timestamp1 = time.mktime(datetime1.timetuple()) # adds milliseconds to the above seconds. timeInMillis = int(timestamp1) * 1000 </code></pre> <p>How do I (at any point in that code) turn the date into UTC format? I've been ploughing through the API for what seems like a century and cannot find anything that I can get working. Can anyone help? It's currently turning it into Eastern time I believe (however I'm in GMT but want UTC).</p> <p>EDIT: I gave the answer to the guy with the closest to what I finally found out.</p> <pre><code>datetime1 = datetime.strptime(somestring, someformat) timeInSeconds = calendar.timegm(datetime1.utctimetuple()) timeInMillis = timeInSeconds * 1000 </code></pre> <p>:)</p>
9
2009-10-20T14:32:35Z
1,595,195
<p>You probably want one of these two:</p> <pre><code>import time import datetime from email.Utils import formatdate rightnow = time.time() utc = datetime.datetime.utcfromtimestamp(rightnow) print utc print formatdate(rightnow) </code></pre> <p>The two outputs look like this</p> <pre><code>2009-10-20 14:46:52.725000 Tue, 20 Oct 2009 14:46:52 -0000 </code></pre>
1
2009-10-20T14:50:53Z
[ "python", "datetime", "utc" ]
Convert to UTC Timestamp
1,595,047
<pre><code># parses some string into that format. datetime1 = datetime.strptime(somestring, "%Y-%m-%dT%H:%M:%S") # gets the seconds from the above date. timestamp1 = time.mktime(datetime1.timetuple()) # adds milliseconds to the above seconds. timeInMillis = int(timestamp1) * 1000 </code></pre> <p>How do I (at any point in that code) turn the date into UTC format? I've been ploughing through the API for what seems like a century and cannot find anything that I can get working. Can anyone help? It's currently turning it into Eastern time I believe (however I'm in GMT but want UTC).</p> <p>EDIT: I gave the answer to the guy with the closest to what I finally found out.</p> <pre><code>datetime1 = datetime.strptime(somestring, someformat) timeInSeconds = calendar.timegm(datetime1.utctimetuple()) timeInMillis = timeInSeconds * 1000 </code></pre> <p>:)</p>
9
2009-10-20T14:32:35Z
1,596,308
<pre><code>def getDateAndTime(seconds=None): """ Converts seconds since the Epoch to a time tuple expressing UTC. When 'seconds' is not passed in, convert the current time instead. :Parameters: - `seconds`: time in seconds from the epoch. :Return: Time in UTC format. """ return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(seconds)) </code></pre> <p>This converts local time to UTC:</p> <pre><code>time.mktime(time.localtime(calendar.timegm(utc_time))) </code></pre> <p><a href="http://feihonghsu.blogspot.com/2008/02/converting-from-local-time-to-utc.html" rel="nofollow">http://feihonghsu.blogspot.com/2008/02/converting-from-local-time-to-utc.html</a></p> <p>If converting a struct_time to seconds-since-the-epoch is done using mktime, this conversion is <em>in local timezone</em>. There's no way to tell it to use any specific timezone, not even just UTC. The standard 'time' package always assumes that a time is in your local timezone.</p>
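The asymmetry is easy to demonstrate: `calendar.timegm` is the UTC counterpart of the local-time-only `time.mktime`, so each inverse must be paired with the matching forward conversion (a sketch with an arbitrary timestamp):

```python
import calendar
import time

t = 1256049553                      # arbitrary seconds-since-epoch value
utc_tuple = time.gmtime(t)          # struct_time expressed in UTC
local_tuple = time.localtime(t)     # struct_time expressed in local time

# pair each inverse with the matching forward conversion:
assert calendar.timegm(utc_tuple) == t      # UTC struct_time -> seconds
assert int(time.mktime(local_tuple)) == t   # local struct_time -> seconds
```

Mixing them up (`mktime` on a UTC tuple, or `timegm` on a local tuple) silently shifts the result by the local UTC offset.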
2
2009-10-20T18:00:42Z
[ "python", "datetime", "utc" ]
blocks - send input to python subprocess pipeline
1,595,492
<p>I'm testing subprocesses pipelines with python. I'm aware that I can do what the programs below do in python directly, but that's not the point. I just want to test the pipeline so I know how to use it.</p> <p>My system is Linux Ubuntu 9.04 with default python 2.6.</p> <p>I started with this <a href="http://docs.python.org/library/subprocess.html#replacing-shell-pipeline">documentation example</a>.</p> <pre><code>from subprocess import Popen, PIPE p1 = Popen(["grep", "-v", "not"], stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) output = p2.communicate()[0] print output </code></pre> <p>That works, but since <code>p1</code>'s <code>stdin</code> is not being redirected, I have to type stuff in the terminal to feed the pipe. When I type <code>^D</code> closing stdin, I get the output I want.</p> <p>However, I want to send data to the pipe using a python string variable. First I tried writing on stdin:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) p1.stdin.write('test\n') output = p2.communicate()[0] # blocks forever here </code></pre> <p>Didn't work. I tried using <code>p2.stdout.read()</code> instead on the last line, but it also blocks. I added <code>p1.stdin.flush()</code> and <code>p1.stdin.close()</code> but it didn't work either. Then I moved to communicate:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) p1.communicate('test\n') # blocks forever here output = p2.communicate()[0] </code></pre> <p>So that's still not it.</p> <p>I noticed that running a single process (like <code>p1</code> above, removing <code>p2</code>) works perfectly. And passing a file handle to <code>p1</code> (<code>stdin=open(...)</code>) also works. So the problem is:</p> <p>Is it possible to pass data to a pipeline of 2 or more subprocesses in python, without blocking? 
Why not?</p> <p>I'm aware I could run a shell and run the pipeline in the shell, but that's not what I want.</p> <p><hr /></p> <p><strong>UPDATE 1</strong>: Following Aaron Digulla's hint below, I'm now trying to use threads to make it work.</p> <p>First I tried running <code>p1.communicate()</code> on a thread.</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=p1.communicate, args=('some data\n',))
t.start()
output = p2.communicate()[0]  # blocks forever here
</code></pre> <p>Okay, didn't work. Tried other combinations, like changing it to <code>.write()</code> and also <code>p2.read()</code>. Nothing. Now let's try the opposite approach:</p> <pre><code>def get_output(subp):
    output = subp.communicate()[0]  # blocks on thread
    print 'GOT:', output

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=get_output, args=(p2,))
t.start()
p1.communicate('data\n')  # blocks here.
t.join()
</code></pre> <p>The code ends up blocking somewhere: either in the spawned thread, in the main thread, or both. So it didn't work. If you know how to make it work, it would help if you could provide working code. I'm trying here.</p> <p><hr /></p> <p><strong>UPDATE 2</strong></p> <p>Paul Du Bois answered below with some information, so I did more tests. I've read the entire <code>subprocess.py</code> module and understood how it works. So I tried applying exactly that to my code.</p> <p>I'm on linux, but since I was testing with threads, my first approach was to replicate the exact windows threading code seen in <code>subprocess.py</code>'s <code>communicate()</code> method, but for two processes instead of one.
Here's the entire listing of what I tried:</p> <pre><code>import os
from subprocess import Popen, PIPE
import threading

def get_output(fobj, buffer):
    while True:
        chunk = fobj.read()  # BLOCKS HERE
        if not chunk:
            break
        buffer.append(chunk)

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)

b = []  # create a buffer
t = threading.Thread(target=get_output, args=(p2.stdout, b))
t.start()  # start reading thread

for x in xrange(100000):
    p1.stdin.write('hello world\n')  # write data
    p1.stdin.flush()
p1.stdin.close()  # close input...
t.join()
</code></pre> <p>Well. It didn't work. Even after <code>p1.stdin.close()</code> was called, <code>p2.stdout.read()</code> still blocks.</p> <p>Then I tried the posix code in <code>subprocess.py</code>:</p> <pre><code>import os
from subprocess import Popen, PIPE
import select

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)

numwrites = 100000
to_read = [p2.stdout]
to_write = [p1.stdin]
b = []  # create buffer

while to_read or to_write:
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        data = os.read(p2.stdout.fileno(), 1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        if numwrites &gt; 0:
            numwrites -= 1
            p1.stdin.write('hello world!\n'); p1.stdin.flush()
        else:
            p1.stdin.close()
            to_write = []

print b
</code></pre> <p>Also blocks on <code>select.select()</code>. By spreading <code>print</code>s around, I found out this:</p> <ul> <li>Reading is working. Code reads many times during execution.</li> <li>Writing is also working. Data is written to <code>p1.stdin</code>.</li> <li>At the end of <code>numwrites</code>, <code>p1.stdin.close()</code> is called.</li> <li>When <code>select()</code> starts blocking, only <code>to_read</code> has something, <code>p2.stdout</code>.
<code>to_write</code> is already empty.</li> <li>The <code>os.read()</code> call always returns something, so <code>p2.stdout.close()</code> is never called.</li> </ul> <p><strong>Conclusion from both tests</strong>: closing the <code>stdin</code> of the first process in the pipeline (<code>grep</code> in the example) does not make it flush its buffered output to the next process and exit.</p> <p>Is there no way to make it work?</p> <p>PS: I don't want to use a temporary file; I've already tested with files and I know that works. And I don't want to use windows.</p>
28
2009-10-20T15:30:51Z
1,595,549
<p>You must do this in several threads. Otherwise, you'll end up in a situation where you can't send data: child p1 won't read your input, since p2 doesn't read p1's output, because you don't read p2's output.</p> <p>So you need a background thread that reads what p2 writes out. That will allow p2 to continue after writing some data to the pipe, so it can read the next line of input from p1, which in turn allows p1 to process the data you send to it.</p> <p>Alternatively, you can send the data to p1 from a background thread and read the output from p2 in the main thread. But either way, one side must run in a thread.</p>
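A minimal sketch of this suggestion (my example, not the answerer's code, and assuming a modern Python 3 where the streams are binary): a background thread drains p2's output while the main thread feeds p1. `close_fds=True` is passed explicitly so that p2 does not inherit the write end of p1's stdin pipe; without it (the Python 2.x default) grep would never see EOF.

```python
# Sketch: reader thread drains p2 while the main thread feeds p1.
import threading
from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE, close_fds=True)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdout.close()  # drop the parent's copy; only cut reads this end now

chunks = []

def drain():
    # read until EOF so cut is never blocked writing its output
    while True:
        chunk = p2.stdout.read(4096)
        if not chunk:
            break
        chunks.append(chunk)

reader = threading.Thread(target=drain)
reader.start()
p1.stdin.write(b"hello world\n" * 100000)  # large enough to deadlock otherwise
p1.stdin.close()                           # EOF for grep
reader.join()
p1.wait()
p2.wait()
output = b"".join(chunks)
```

With the reader thread in place, the whole input can be written in one go without the write side and read side deadlocking against each other.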
2
2009-10-20T15:41:29Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,610,865
<p>I think you may be examining the wrong problem. Certainly, as Aaron says, if you try to be both a producer to the beginning of a pipeline and a consumer of the end of the pipeline, it is easy to get into a deadlock situation. This is the problem that communicate() solves.</p> <p>communicate() isn't exactly correct for you since stdin and stdout are on different subprocess objects; but if you take a look at the implementation in subprocess.py you'll see that it does exactly what Aaron suggested.</p> <p>Once you see that communicate both reads and writes, you'll see that in your second try communicate() competes with p2 for the output of p1:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
# ...
p1.communicate('data\n')  # reads from p1.stdout, as does p2
</code></pre> <p>I am running on win32, which definitely has different i/o and buffering characteristics, but this works for me:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=get_output, args=(p2,))
t.start()
p1.stdin.write('hello world\n' * 100000)
p1.stdin.close()
t.join()
</code></pre> <p>I tuned the input size to produce a deadlock when using a naive unthreaded p2.read().</p> <p>You might also try buffering into a file, e.g.:</p> <pre><code>fd, _ = tempfile.mkstemp()
os.write(fd, 'hello world\r\n' * 100000)
os.lseek(fd, 0, os.SEEK_SET)
p1 = Popen(["grep", "-v", "not"], stdin=fd, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
print p2.stdout.read()
</code></pre> <p>That also works for me without deadlocks.</p>
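A self-contained version of that threaded snippet might look as follows (a sketch: `get_output` here is the `communicate()`-wrapping helper from the question's Update 1, and `close_fds=True` plus the parent-side `p1.stdout.close()` are my additions so the same code also terminates on a Linux setup where closing descriptors is not the default):

```python
# Sketch: communicate() runs on a thread so it can drain p2's output
# while the main thread writes the input into p1.
import threading
from subprocess import Popen, PIPE

result = {}

def get_output(subp):
    # the helper from Update 1: just block in communicate() on a thread
    result["output"] = subp.communicate()[0]

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE, close_fds=True)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdout.close()  # the parent's copy of this pipe end is not needed

t = threading.Thread(target=get_output, args=(p2,))
t.start()
p1.stdin.write(b"hello world\n" * 100000)
p1.stdin.close()
t.join()
```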
1
2009-10-23T00:49:36Z
1,611,099
<p>What about using a SpooledTemporaryFile? This bypasses (but perhaps doesn't solve) the issue:</p> <p><a href="http://docs.python.org/library/tempfile.html#tempfile.SpooledTemporaryFile" rel="nofollow">http://docs.python.org/library/tempfile.html#tempfile.SpooledTemporaryFile</a></p> <p>You can write to it like a file, but it's actually a memory block.</p> <p>Or am I totally misunderstanding...</p>
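A rough sketch of that idea (my example, not the answerer's). One caveat: `Popen` needs a real file descriptor, and calling `fileno()` on a `SpooledTemporaryFile` forces it to roll over to an actual temporary file, so the in-memory benefit is lost at the moment it is handed to the subprocess; it does sidestep the write-side deadlock, though.

```python
# Sketch: spool the input, then feed its descriptor to grep as stdin.
from subprocess import Popen, PIPE
from tempfile import SpooledTemporaryFile

with SpooledTemporaryFile(max_size=1 << 20) as spool:
    spool.write(b"hello world\nnot this line\n")
    spool.seek(0)  # rewind so grep reads from the start
    p1 = Popen(["grep", "-v", "not"], stdin=spool, stdout=PIPE)
    p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
    p1.stdout.close()  # parent's copy is not needed
    output = p2.communicate()[0]
```

Since the input comes from a file descriptor rather than a pipe the parent writes to, there is nothing for the parent to block on.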
0
2009-10-23T02:20:03Z
1,616,457
<p>I found out how to do it.</p> <p>It is not about threads, and not about select().</p> <p>When I run the first process (<code>grep</code>), it creates two low-level file descriptors, one for each pipe. Let's call those <code>a</code> and <code>b</code>.</p> <p>When I run the second process, <code>b</code> gets passed to <code>cut</code>'s <code>stdin</code>. But there is a brain-dead default on <code>Popen</code>: <code>close_fds=False</code>.</p> <p>The effect of that is that <code>cut</code> also inherits <code>a</code>. So <code>grep</code> can't die even if I close <code>a</code>, because that fd is still open in <code>cut</code>'s process (<code>cut</code> ignores it).</p> <p>The following code now runs perfectly.</p> <pre><code>from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdin.write('Hello World\n')
p1.stdin.close()
result = p2.stdout.read()
assert result == "Hello Worl\n"
</code></pre> <p><strong><code>close_fds=True</code> SHOULD BE THE DEFAULT</strong> on unix systems. On windows it closes <strong>all</strong> fds, so it prevents piping.</p> <p>EDIT:</p> <p>PS: For people with a similar problem reading this answer: as pooryorick said in a comment, this could also block if the data written to <code>p1.stdin</code> is bigger than the buffers. In that case you should chunk the data into smaller pieces, and use <code>select.select()</code> to know when to read/write. The code in the question should give a hint on how to implement that.</p> <p>EDIT2: Found another solution, with more help from pooryorick: instead of using <code>close_fds=True</code> and closing <strong>ALL</strong> fds, one can close just the <code>fd</code>s that belong to the first process when executing the second, and it will work. The closing must be done in the child, so the <code>preexec_fn</code> argument of Popen comes in very handy to do just that.
On executing p2 you can do:</p> <pre><code>p2 = Popen(cmd2, stdin=p1.stdout, stdout=PIPE, stderr=devnull, preexec_fn=p1.stdin.close) </code></pre>
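The chunked `select()` loop hinted at in the EDIT could look roughly like this (a sketch based on the question's select code with `close_fds=True` applied; the chunk size and write count are arbitrary):

```python
# Sketch: non-blocking-ish feeding of the pipeline in small chunks.
import os
import select
from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE, close_fds=True)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdout.close()  # parent's copy; cut holds the only read end now

numwrites = 2000
to_read, to_write = [p2.stdout], [p1.stdin]
chunks = []

while to_read or to_write:
    readable, writable, _ = select.select(to_read, to_write, [])
    if readable:
        data = os.read(p2.stdout.fileno(), 4096)
        if not data:            # EOF: cut exited and the pipe drained
            p2.stdout.close()
            to_read = []
        else:
            chunks.append(data)
    if writable:
        if numwrites > 0:
            numwrites -= 1
            p1.stdin.write(b"hello world!\n")
            p1.stdin.flush()
        else:
            # EOF for grep; with close_fds=True nothing else holds this end
            p1.stdin.close()
            to_write = []

result = b"".join(chunks)
```

Because no stray descriptor keeps grep's stdin open, closing `p1.stdin` now actually ends the loop instead of hanging in `select()` as in the question.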
18
2009-10-23T23:33:38Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,595,492
<p>I'm testing subprocess pipelines with python. I'm aware that I can do what the programs below do in python directly, but that's not the point. I just want to test the pipeline so I know how to use it.</p> <p>My system is Linux Ubuntu 9.04 with default python 2.6.</p> <p>I started with this <a href="http://docs.python.org/library/subprocess.html#replacing-shell-pipeline">documentation example</a>.</p> <pre><code>from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
print output
</code></pre> <p>That works, but since <code>p1</code>'s <code>stdin</code> is not being redirected, I have to type stuff in the terminal to feed the pipe. When I type <code>^D</code> closing stdin, I get the output I want.</p> <p>However, I want to send data to the pipe using a python string variable. First I tried writing on stdin:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
p1.stdin.write('test\n')
output = p2.communicate()[0] # blocks forever here
</code></pre> <p>Didn't work. I tried using <code>p2.stdout.read()</code> on the last line instead, but it also blocks. I added <code>p1.stdin.flush()</code> and <code>p1.stdin.close()</code> but it didn't work either. Then I moved to communicate:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
p1.communicate('test\n') # blocks forever here
output = p2.communicate()[0]
</code></pre> <p>So that's still not it.</p> <p>I noticed that running a single process (like <code>p1</code> above, removing <code>p2</code>) works perfectly. And passing a file handle to <code>p1</code> (<code>stdin=open(...)</code>) also works. So the problem is:</p> <p>Is it possible to pass data to a pipeline of 2 or more subprocesses in python, without blocking? Why not?</p> <p>I'm aware I could run a shell and run the pipeline in the shell, but that's not what I want.</p> <p><hr /></p> <p><strong>UPDATE 1</strong>: Following Aaron Digulla's hint below I'm now trying to use threads to make it work.</p> <p>First I've tried running p1.communicate on a thread.</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=p1.communicate, args=('some data\n',))
t.start()
output = p2.communicate()[0] # blocks forever here
</code></pre> <p>Okay, didn't work. Tried other combinations like changing it to <code>.write()</code> and also <code>p2.read()</code>. Nothing. Now let's try the opposite approach:</p> <pre><code>def get_output(subp):
    output = subp.communicate()[0] # blocks on thread
    print 'GOT:', output

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
t = threading.Thread(target=get_output, args=(p2,))
t.start()
p1.communicate('data\n') # blocks here.
t.join()
</code></pre> <p>The code ends up blocking somewhere: either in the spawned thread, or in the main thread, or both. So it didn't work. If you know how to make it work, it would be easier if you could provide working code. I'm trying here.</p> <p><hr /></p> <p><strong>UPDATE 2</strong></p> <p>Paul Du Bois answered below with some information, so I did more tests. I've read the entire <code>subprocess.py</code> module and got how it works. So I tried applying exactly that to code.</p> <p>I'm on Linux, but since I was testing with threads, my first approach was to replicate the exact Windows threading code seen in <code>subprocess.py</code>'s <code>communicate()</code> method, but for two processes instead of one. Here's the entire listing of what I tried:</p> <pre><code>import os
from subprocess import Popen, PIPE
import threading

def get_output(fobj, buffer):
    while True:
        chunk = fobj.read() # BLOCKS HERE
        if not chunk:
            break
        buffer.append(chunk)

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)

b = [] # create a buffer
t = threading.Thread(target=get_output, args=(p2.stdout, b))
t.start() # start reading thread

for x in xrange(100000):
    p1.stdin.write('hello world\n') # write data
    p1.stdin.flush()
p1.stdin.close() # close input...
t.join()
</code></pre> <p>Well. It didn't work. Even after <code>p1.stdin.close()</code> was called, <code>p2.stdout.read()</code> still blocks.</p> <p>Then I tried the posix code from <code>subprocess.py</code>:</p> <pre><code>import os
from subprocess import Popen, PIPE
import select

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)

numwrites = 100000
to_read = [p2.stdout]
to_write = [p1.stdin]
b = [] # create buffer

while to_read or to_write:
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        data = os.read(p2.stdout.fileno(), 1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        if numwrites &gt; 0:
            numwrites -= 1
            p1.stdin.write('hello world!\n'); p1.stdin.flush()
        else:
            p1.stdin.close()
            to_write = []

print b
</code></pre> <p>Also blocks on <code>select.select()</code>. By spreading <code>print</code>s around, I found out this:</p> <ul> <li>Reading is working. Code reads many times during execution.</li> <li>Writing is also working. Data is written to <code>p1.stdin</code>.</li> <li>At the end of <code>numwrites</code>, <code>p1.stdin.close()</code> is called.</li> <li>When <code>select()</code> starts blocking, only <code>to_read</code> has something, <code>p2.stdout</code>. <code>to_write</code> is already empty.</li> <li>The <code>os.read()</code> call always returns something, so <code>p2.stdout.close()</code> is never called.</li> </ul> <p><strong>Conclusion from both tests</strong>: Closing the <code>stdin</code> of the first process on the pipeline (<code>grep</code> in the example) is not making it dump its buffered output to the next and die.</p> <p>No way to make it work?</p> <p>PS: I don't want to use a temporary file, I've already tested with files and I know it works. And I don't want to use windows.</p>
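For reference, the working pattern that emerges from the answers below, for input smaller than the OS pipe buffer, can be condensed into this sketch. It is my own minimal illustration, not code from the thread: it uses Python 3 syntax (the thread predates it) and python one-liners standing in for grep/cut so it runs without coreutils.

```python
import sys
from subprocess import Popen, PIPE

# Hypothetical stand-ins for grep/cut, so the sketch is portable.
grep_not = [sys.executable, "-c",
            "import sys\nfor l in sys.stdin:\n 'not' not in l and sys.stdout.write(l)"]
cut_1_10 = [sys.executable, "-c",
            "import sys\nfor l in sys.stdin: sys.stdout.write(l[:10] + '\\n')"]

p1 = Popen(grep_not, stdin=PIPE, stdout=PIPE)
# close_fds=True keeps p2 from inheriting p1.stdin's write end
# (it is the default in Python 3.2+).
p2 = Popen(cut_1_10, stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdout.close()  # drop the parent's copy of this pipe end

p1.stdin.write(b"this line is fine\nthis one is not ok\n")
p1.stdin.close()   # EOF reaches the first stage because no write fd remains
output = p2.communicate()[0]
```

This only stays deadlock-free while the input fits in the pipe buffer; for larger input a separate writer thread is needed, as discussed in the answers.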
28
2009-10-20T15:30:51Z
2,173,130
<p>There are three main tricks to making pipes work as expected:</p> <ol> <li><p>Make sure each end of the pipe is used in a different thread/process (some of the examples near the top suffer from this problem).</p></li> <li><p>Explicitly close the unused end of the pipe in each process.</p></li> <li><p>Deal with buffering by either disabling it (Python -u option), using ptys, or simply filling up the buffer with something that won't affect the data (maybe '\n', but whatever fits).</p></li> </ol> <p>The examples in the Python "pipeline" module (I'm the author) fit your scenario exactly, and make the low-level steps fairly clear.</p> <p><a href="http://pypi.python.org/pypi/pipeline/" rel="nofollow">http://pypi.python.org/pypi/pipeline/</a></p> <p>More recently, I used the subprocess module as part of a producer-processor-consumer-controller pattern:</p> <p><a href="http://www.darkarchive.org/w/Pub/PythonInteract" rel="nofollow">http://www.darkarchive.org/w/Pub/PythonInteract</a></p> <p>This example deals with buffered stdin without resorting to a pty, and also illustrates which pipe ends should be closed where. I prefer processes to threading, but the principle is the same. Additionally, it illustrates synchronizing queues that feed the producer and collect output from the consumer, and how to shut them down cleanly (look out for the sentinels inserted into the queues). This pattern allows new input to be generated based on recent output, allowing for recursive discovery and processing.</p>
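The first two tricks can be sketched like this. This is my own minimal illustration, not code from the pipeline module; it uses Python 3 syntax and python one-liners in place of grep/cut so it runs anywhere.

```python
import sys
import threading
from subprocess import Popen, PIPE

# Hypothetical stand-ins for grep/cut (assumption for portability).
upcase = [sys.executable, "-c",
          "import sys; sys.stdout.write(sys.stdin.read().upper())"]
cut10 = [sys.executable, "-c",
         "import sys\nfor line in sys.stdin: sys.stdout.write(line[:10] + '\\n')"]

p1 = Popen(upcase, stdin=PIPE, stdout=PIPE)
p2 = Popen(cut10, stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdout.close()  # trick 2: drop the parent's copy of this pipe end

def feed():
    # trick 1: one thread owns the writing end...
    for _ in range(10000):
        p1.stdin.write(b"hello world, quite a long line\n")
    p1.stdin.close()  # EOF lets the first stage finish and flush downstream

t = threading.Thread(target=feed)
t.start()
output = p2.stdout.read()  # ...while the main thread drains the reading end
t.join()
```

Because writing and reading happen in different threads, the 300 KB of input can exceed the OS pipe buffer without deadlocking either end.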
3
2010-01-31T20:12:58Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,595,492
28
2009-10-20T15:30:51Z
2,223,087
<p>Nosklo's offered solution will quickly break if too much data is written to the receiving end of the pipe:</p> <pre><code>from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdin.write('Hello World\n' * 20000)
p1.stdin.close()
result = p2.stdout.read()
assert result == "Hello Worl\n"
</code></pre> <p>If this script doesn't hang on your machine, just increase "20000" to something that exceeds the size of your operating system's pipe buffers.</p> <p>This is because the operating system is buffering the input to "grep", but once that buffer is full, the <code>p1.stdin.write</code> call will block until something reads from <code>p2.stdout</code>. In toy scenarios, you can get away with writing to/reading from a pipe in the same process, but in normal usage, it is necessary to write from one thread/process and read from a separate thread/process. This is true for subprocess.Popen, os.pipe, os.popen*, etc.</p> <p>Another twist is that sometimes you want to keep feeding the pipe with items generated from earlier output of the same pipe. The solution is to make both the pipe feeder and the pipe reader asynchronous to the main program, and implement two queues: one between the main program and the pipe feeder and one between the main program and the pipe reader. <a href="http://www.darkarchive.org/w/Pub/PythonInteract" rel="nofollow">PythonInteract</a> is an example of that.</p> <p>Subprocess is a nice convenience model, but because it hides the details of the os.pipe and os.fork calls it does under the hood, it can sometimes be more difficult to deal with than the lower-level calls it utilizes. For this reason, subprocess is not a good way to learn about how inter-process pipes really work.</p>
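The two-queue arrangement described above can be sketched roughly like this. This is my own minimal version, not PythonInteract's code; it uses Python 3 syntax, a single external process instead of a two-stage pipeline, and a python one-liner standing in for the external program.

```python
import sys
import threading
import queue
from subprocess import Popen, PIPE

# A hypothetical stand-in for the external filter; -u keeps its output unbuffered.
child = [sys.executable, "-u", "-c",
         "import sys\nfor line in sys.stdin: sys.stdout.write(line.upper())"]
proc = Popen(child, stdin=PIPE, stdout=PIPE)

inq = queue.Queue()   # main program -> pipe feeder
outq = queue.Queue()  # pipe reader -> main program

def feeder():
    while True:
        item = inq.get()
        if item is None:          # sentinel: no more input
            proc.stdin.close()
            return
        proc.stdin.write(item)
        proc.stdin.flush()

def reader():
    for line in proc.stdout:
        outq.put(line)
    outq.put(None)                # sentinel: pipeline is drained

tf = threading.Thread(target=feeder)
tr = threading.Thread(target=reader)
tf.start(); tr.start()

for i in range(5):
    inq.put(("msg %d\n" % i).encode())
inq.put(None)

results = []
while True:
    line = outq.get()
    if line is None:
        break
    results.append(line)
tf.join(); tr.join(); proc.wait()
```

The main program only ever touches the two queues, so it can decide to feed new input based on output already received without risking a pipe deadlock.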
2
2010-02-08T16:30:25Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,595,492
28
2009-10-20T15:30:51Z
2,239,610
<p>In one of the comments above, I challenged nosklo to either post some code to back up his assertions about <code>select.select</code> or to upvote my responses he had previously down-voted. He responded with the following code:</p> <pre><code>from subprocess import Popen, PIPE
import select

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE, close_fds=True)

data_to_write = 100000 * 'hello world\n'
to_read = [p2.stdout]
to_write = [p1.stdin]
b = [] # create buffer
written = 0

while to_read or to_write:
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        data = p2.stdout.read(1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        if written &lt; len(data_to_write):
            part = data_to_write[written:written+1024]
            written += len(part)
            p1.stdin.write(part); p1.stdin.flush()
        else:
            p1.stdin.close()
            to_write = []

print b
</code></pre> <p>One problem with this script is that it second-guesses the size/nature of the system pipe buffers. The script would experience fewer failures if it could remove magic numbers like 1024.</p> <p>The big problem is that this script only works consistently with the right combination of data input and external programs. grep and cut both work with lines, and so their internal buffers behave a bit differently. If we use a more generic command like "cat", and write smaller bits of data into the pipe, the fatal race condition will pop up more often:</p> <pre><code>from subprocess import Popen, PIPE
import select
import time

p1 = Popen(["cat"], stdin=PIPE, stdout=PIPE)
p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE, close_fds=True)

data_to_write = 'hello world\n'
to_read = [p2.stdout]
to_write = [p1.stdin]
b = [] # create buffer
written = 0

while to_read or to_write:
    time.sleep(1)
    read_now, write_now, xlist = select.select(to_read, to_write, [])
    if read_now:
        print 'I am reading now!'
        data = p2.stdout.read(1024)
        if not data:
            p2.stdout.close()
            to_read = []
        else:
            b.append(data)
    if write_now:
        print 'I am writing now!'
        if written &lt; len(data_to_write):
            part = data_to_write[written:written+1024]
            written += len(part)
            p1.stdin.write(part); p1.stdin.flush()
        else:
            print 'closing file'
            p1.stdin.close()
            to_write = []

print b
</code></pre> <p>In this case, two different results will manifest:</p> <pre><code>write, write, close file, read -&gt; success
write, read -&gt; hang
</code></pre> <p>So again, I challenge nosklo to either post code showing the use of <code>select.select</code> to handle arbitrary input and pipe buffering from a single thread, or to upvote my responses.</p> <p>Bottom line: don't try to manipulate both ends of a pipe from a single thread. It's just not worth it. See <a href="http://pypi.python.org/pypi/pipeline/0.1" rel="nofollow">pipeline</a> for a nice low-level example of how to do this correctly.</p>
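For completeness: the usual way a single-threaded select loop avoids this race is to put the write end into non-blocking mode, so a write can return early with EAGAIN instead of hanging when select's readiness report has gone stale. This is my own POSIX-only sketch in Python 3 syntax, with python one-liners standing in for the external programs; it does not change the bottom line that threads or processes are usually simpler.

```python
import fcntl
import os
import select
import sys
from subprocess import Popen, PIPE

# Hypothetical two-stage pipeline: pass-through, then uppercase.
p1 = Popen([sys.executable, "-c",
            "import sys; sys.stdout.write(sys.stdin.read())"],
           stdin=PIPE, stdout=PIPE)
p2 = Popen([sys.executable, "-c",
            "import sys; sys.stdout.write(sys.stdin.read().upper())"],
           stdin=p1.stdout, stdout=PIPE, close_fds=True)
p1.stdout.close()

fd_in, fd_out = p1.stdin.fileno(), p2.stdout.fileno()
# The crucial step: with a non-blocking write end, os.write raises
# BlockingIOError instead of hanging when the pipe buffer is full.
fcntl.fcntl(fd_in, fcntl.F_SETFL,
            fcntl.fcntl(fd_in, fcntl.F_GETFL) | os.O_NONBLOCK)

data = b"hello world\n" * 20000
chunks = []
while True:
    r, w, _ = select.select([fd_out], [fd_in] if data else [], [])
    if w:
        try:
            data = data[os.write(fd_in, data[:4096]):]
        except BlockingIOError:
            pass                      # buffer filled up again; retry later
        if not data:
            p1.stdin.close()          # all written: let EOF propagate
    if r:
        chunk = os.read(fd_out, 4096)
        if not chunk:
            break                     # pipeline drained
        chunks.append(chunk)
output = b"".join(chunks)
```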
1
2010-02-10T19:14:16Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,595,492
28
2009-10-20T15:30:51Z
2,240,246
<p>Responding to nosklo's assertion (see other comments to this question) that it can't be done without <code>close_fds=True</code>:</p> <p><code>close_fds=True</code> is only necessary if you've left other file descriptors open. When opening multiple child processes, it's always good to keep track of open files that might get inherited, and to explicitly close any that aren't needed:</p> <pre><code>from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
p1.stdin.write('Hello World\n')
p1.stdin.close()
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE)
result = p2.stdout.read()
assert result == "Hello Worl\n"
</code></pre> <p><code>close_fds</code> defaults to <code>False</code> because subprocess prefers to trust the calling program to know what it's doing with open file descriptors, and just provides the caller with an easy option to close them all if that's what it wants to do.</p> <p>But the real issue is that pipe buffers will bite you for all but toy examples. As I have said in my other answers to this question, the rule of thumb is to not have your reader and your writer open in the same process/thread. Anyone who wants to use the subprocess module for two-way communication would be well-served to study os.pipe and os.fork first. They're actually not that hard to use if you have a <a href="http://pypi.python.org/pypi/pipeline/0.1" rel="nofollow">good example</a> to look at.</p>
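In that spirit, the descriptor bookkeeping subprocess does for you shows up plainly in a bare os.pipe/os.fork sketch. This is my own toy example, POSIX-only and in Python 3 syntax: every process closes the pipe ends it does not use, and EOF only arrives once the last write descriptor for a pipe is gone.

```python
import os

r1, w1 = os.pipe()   # parent -> child
r2, w2 = os.pipe()   # child -> parent

pid = os.fork()
if pid == 0:                        # child: reads r1, writes w2
    os.close(w1); os.close(r2)      # close the ends the child won't use
    buf = b""
    while True:
        chunk = os.read(r1, 1024)
        if not chunk:               # EOF: every write fd for this pipe is closed
            break
        buf += chunk
    os.write(w2, buf.upper())
    os.close(w2)
    os._exit(0)

os.close(r1); os.close(w2)          # close the ends the parent won't use
os.write(w1, b"hello world\n")
os.close(w1)                        # without this, the child never sees EOF
reply = b""
while True:
    chunk = os.read(r2, 1024)
    if not chunk:                   # EOF: child closed w2 and exited
        break
    reply += chunk
os.close(r2)
os.waitpid(pid, 0)
```

If the parent skipped `os.close(w2)`, its own open write descriptor would keep the reply pipe from ever reporting EOF, which is exactly the class of bug `close_fds=True` papers over.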
2
2010-02-10T20:49:51Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,595,492
<p>I'm testing subprocesses pipelines with python. I'm aware that I can do what the programs below do in python directly, but that's not the point. I just want to test the pipeline so I know how to use it.</p> <p>My system is Linux Ubuntu 9.04 with default python 2.6.</p> <p>I started with this <a href="http://docs.python.org/library/subprocess.html#replacing-shell-pipeline">documentation example</a>.</p> <pre><code>from subprocess import Popen, PIPE p1 = Popen(["grep", "-v", "not"], stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) output = p2.communicate()[0] print output </code></pre> <p>That works, but since <code>p1</code>'s <code>stdin</code> is not being redirected, I have to type stuff in the terminal to feed the pipe. When I type <code>^D</code> closing stdin, I get the output I want.</p> <p>However, I want to send data to the pipe using a python string variable. First I tried writing on stdin:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) p1.stdin.write('test\n') output = p2.communicate()[0] # blocks forever here </code></pre> <p>Didn't work. I tried using <code>p2.stdout.read()</code> instead on last line, but it also blocks. I added <code>p1.stdin.flush()</code> and <code>p1.stdin.close()</code> but it didn't work either. I Then I moved to communicate:</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) p1.communicate('test\n') # blocks forever here output = p2.communicate()[0] </code></pre> <p>So that's still not it.</p> <p>I noticed that running a single process (like <code>p1</code> above, removing <code>p2</code>) works perfectly. And passing a file handle to <code>p1</code> (<code>stdin=open(...)</code>) also works. So the problem is:</p> <p>Is it possible to pass data to a pipeline of 2 or more subprocesses in python, without blocking? 
Why not?</p> <p>I'm aware I could run a shell and run the pipeline in the shell, but that's not what I want.</p> <p><hr /></p> <p><strong>UPDATE 1</strong>: Following Aaron Digulla's hint below I'm now trying to use threads to make it work.</p> <p>First I've tried running p1.communicate on a thread.</p> <pre><code>p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) t = threading.Thread(target=p1.communicate, args=('some data\n',)) t.start() output = p2.communicate()[0] # blocks forever here </code></pre> <p>Okay, didn't work. Tried other combinations like changing it to <code>.write()</code> and also <code>p2.read()</code>. Nothing. Now let's try the opposite approach:</p> <pre><code>def get_output(subp): output = subp.communicate()[0] # blocks on thread print 'GOT:', output p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) t = threading.Thread(target=get_output, args=(p2,)) t.start() p1.communicate('data\n') # blocks here. t.join() </code></pre> <p>code ends up blocking somewhere. Either in the spawned thread, or in the main thread, or both. So it didn't work. If you know how to make it work it would make easier if you can provide working code. I'm trying here.</p> <p><hr /></p> <p><strong>UPDATE 2</strong></p> <p>Paul Du Bois answered below with some information, so I did more tests. I've read entire <code>subprocess.py</code> module and got how it works. So I tried applying exactly that to code.</p> <p>I'm on linux, but since I was testing with threads, my first approach was to replicate the exact windows threading code seen on <code>subprocess.py</code>'s <code>communicate()</code> method, but for two processes instead of one. 
Here's the entire listing of what I tried:</p> <pre><code>import os from subprocess import Popen, PIPE import threading def get_output(fobj, buffer): while True: chunk = fobj.read() # BLOCKS HERE if not chunk: break buffer.append(chunk) p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) b = [] # create a buffer t = threading.Thread(target=get_output, args=(p2.stdout, b)) t.start() # start reading thread for x in xrange(100000): p1.stdin.write('hello world\n') # write data p1.stdin.flush() p1.stdin.close() # close input... t.join() </code></pre> <p>Well. It didn't work. Even after <code>p1.stdin.close()</code> was called, <code>p2.stdout.read()</code> still blocks.</p> <p>Then I tried the posix code on <code>subprocess.py</code>:</p> <pre><code>import os from subprocess import Popen, PIPE import select p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE) p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE) numwrites = 100000 to_read = [p2.stdout] to_write = [p1.stdin] b = [] # create buffer while to_read or to_write: read_now, write_now, xlist = select.select(to_read, to_write, []) if read_now: data = os.read(p2.stdout.fileno(), 1024) if not data: p2.stdout.close() to_read = [] else: b.append(data) if write_now: if numwrites &gt; 0: numwrites -= 1 p1.stdin.write('hello world!\n'); p1.stdin.flush() else: p1.stdin.close() to_write = [] print b </code></pre> <p>Also blocks on <code>select.select()</code>. By spreading <code>print</code>s around, I found out this:</p> <ul> <li>Reading is working. Code reads many times during execution.</li> <li>Writing is also working. Data is written to <code>p1.stdin</code>.</li> <li>At the end of <code>numwrites</code>, <code>p1.stdin.close()</code> is called.</li> <li>When <code>select()</code> starts blocking, only <code>to_read</code> has something, <code>p2.stdout</code>. 
<code>to_write</code> is already empty.</li> <li>The <code>os.read()</code> call always returns something, so <code>p2.stdout.close()</code> is never called.</li> </ul> <p><strong>Conclusion from both tests</strong>: Closing the <code>stdin</code> of the first process in the pipeline (<code>grep</code> in the example) is not making it flush its buffered output to the next process and exit.</p> <p>No way to make it work?</p> <p>PS: I don't want to use a temporary file, I've already tested with files and I know it works. And I don't want to use Windows.</p>
28
2009-10-20T15:30:51Z
2,245,585
<p>Here's an example of using Popen together with os.fork to accomplish the same thing. Instead of using <code>close_fds</code> it just closes the pipes at the right places. Much simpler than trying to use <code>select.select</code>, and takes full advantage of system pipe buffers.</p> <pre><code>from subprocess import Popen, PIPE import os import sys p1 = Popen(["cat"], stdin=PIPE, stdout=PIPE) pid = os.fork() if pid: #parent p1.stdin.close() p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE) data = p2.stdout.read() sys.stdout.write(data) p2.stdout.close() else: #child data_to_write = 'hello world\n' * 100000 p1.stdin.write(data_to_write) p1.stdin.close() </code></pre>
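A simpler alternative sketch of the same fix, without `fork` (my assumptions: a Unix system with `grep` and `cut` on PATH, and a Python recent enough that pipe handles carry bytes). The key is `close_fds=True` on the second `Popen`, so the `cut` child does not inherit the parent's write end of `p1.stdin` and the first stage actually sees EOF:

```python
from subprocess import Popen, PIPE

p1 = Popen(["grep", "-v", "not"], stdin=PIPE, stdout=PIPE)
# close_fds=True stops the cut process from inheriting the parent's
# copy of p1.stdin's write end; without it (on Python 2.x) grep would
# never see EOF after the parent closes p1.stdin.
p2 = Popen(["cut", "-c", "1-10"], stdin=p1.stdout, stdout=PIPE,
           close_fds=True)
p1.stdout.close()              # parent no longer needs this end
p1.stdin.write(b"this line passes\nbut this one does not\n")
p1.stdin.close()               # grep gets EOF, flushes, and exits
output = p2.communicate()[0]
print(output)
```

On Python 3.2+ `close_fds=True` is already the default, which is why the documentation example works unchanged there.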
-1
2010-02-11T15:44:39Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
blocks - send input to python subprocess pipeline
1,595,492
28
2009-10-20T15:30:51Z
14,061,132
<h2>Working with large files</h2> <p>Two principles need to be applied uniformly when working with large files in Python.</p> <ol> <li>Since any IO routine can block, we must <strong>keep each stage of the pipeline in a different thread</strong> or process. We use threads in this example, but subprocesses would let you avoid the GIL. </li> <li>We must use <strong>incremental reads</strong> and writes so that we don't wait for <code>EOF</code> before starting to make progress.</li> </ol> <p>An alternative is to use nonblocking IO, though this is cumbersome in standard Python. See <a href="http://gevent.org">gevent</a> for a lightweight threading library that implements the synchronous IO API using nonblocking primitives.</p> <h2>Example code</h2> <p>We'll construct a silly pipeline that is roughly</p> <pre><code>{cat /usr/share/dict/words} | grep -v not \ | {upcase, filtered tee to stderr} | cut -c 1-10 \ | {translate 'E' to '3'} | grep K | grep Z | {downcase} </code></pre> <p>where each stage in braces <code>{}</code> is implemented in Python while the others use standard external programs. <strong>TL;DR:</strong> <a href="https://gist.github.com/4391685">See this gist</a>.</p> <p>We start with the expected imports.</p> <pre><code>#!/usr/bin/env python from subprocess import Popen, PIPE import sys, threading </code></pre> <h3>Python stages of the pipeline</h3> <p>Each Python-implemented stage of the pipeline except the last needs to go in a thread so that its IO does not block the others.
These could instead run in Python subprocesses if you wanted them to actually run in parallel (avoid the GIL).</p> <pre><code>def writer(output): for line in open('/usr/share/dict/words'): output.write(line) output.close() def filter(input, output): for line in input: if 'k' in line and 'z' in line: # Selective 'tee' sys.stderr.write('### ' + line) output.write(line.upper()) output.close() def leeter(input, output): for line in input: output.write(line.replace('E', '3')) output.close() </code></pre> <p>Each of these needs to be put in its own thread, which we'll do using this convenience function.</p> <pre><code>def spawn(func, **kwargs): t = threading.Thread(target=func, kwargs=kwargs) t.start() return t </code></pre> <h3>Create the pipeline</h3> <p>Create the external stages using <code>Popen</code> and the Python stages using <code>spawn</code>. The argument <code>bufsize=-1</code> says to use the system default buffering (usually 4 kiB). This is generally faster than the default (unbuffered) or line buffering, but you'll want line buffering if you want to visually monitor the output without lags.</p> <pre><code>grepv = Popen(['grep','-v','not'], stdin=PIPE, stdout=PIPE, bufsize=-1) cut = Popen(['cut','-c','1-10'], stdin=PIPE, stdout=PIPE, bufsize=-1) grepk = Popen(['grep', 'K'], stdin=PIPE, stdout=PIPE, bufsize=-1) grepz = Popen(['grep', 'Z'], stdin=grepk.stdout, stdout=PIPE, bufsize=-1) twriter = spawn(writer, output=grepv.stdin) tfilter = spawn(filter, input=grepv.stdout, output=cut.stdin) tleeter = spawn(leeter, input=cut.stdout, output=grepk.stdin) </code></pre> <h3>Drive the pipeline</h3> <p>Assembled as above, all the buffers in the pipeline will fill up, but since nobody is reading from the end (<code>grepz.stdout</code>), they will all block. We could read the entire thing in one call to <code>grepz.stdout.read()</code>, but that would use a lot of memory for large files. 
Instead, we read <em>incrementally</em>.</p> <pre><code>for line in grepz.stdout: sys.stdout.write(line.lower()) </code></pre> <p>The threads and processes clean up once they reach <code>EOF</code>. We can explicitly clean up using</p> <pre><code>for t in [twriter, tfilter, tleeter]: t.join() for p in [grepv, cut, grepk, grepz]: p.wait() </code></pre> <h3>Python-2.6 and earlier</h3> <p>Internally, <code>subprocess.Popen</code> calls <code>fork</code>, configures the pipe file descriptors, and calls <code>exec</code>. The child process from <code>fork</code> has copies of all file descriptors in the parent process, and <em>both</em> copies will need to be closed before the corresponding reader will get <code>EOF</code>. This can be fixed by manually closing the pipes (either by <code>close_fds=True</code> or a suitable <code>preexec_fn</code> argument to <code>subprocess.Popen</code>) or by setting the <a href="http://stackoverflow.com/questions/6125068"><code>FD_CLOEXEC</code></a> flag to have <code>exec</code> automatically close the file descriptor. This flag is set automatically in Python-2.7 and later, see <a href="http://bugs.python.org/issue12786">issue12786</a>. We can get the Python-2.7 behavior in earlier versions of Python by calling</p> <pre><code>p._set_cloexec_flags(p.stdin) </code></pre> <p>before passing <code>p.stdin</code> as an argument to a subsequent <code>subprocess.Popen</code>.</p>
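The two principles can also be shown in a much smaller, end-to-end runnable form (a sketch under my own assumptions: a Unix system with `grep` available, a modern Python with byte pipes, and a tiny inlined word list standing in for `/usr/share/dict/words`):

```python
import threading
from subprocess import Popen, PIPE

def writer(out):
    # Principle 1: feed the pipeline from its own thread, so its
    # writes cannot deadlock against our reads at the other end.
    for word in [b"kudzu\n", b"apple\n", b"kazoo\n"]:
        out.write(word)
    out.close()                          # EOF lets the pipeline drain

grep = Popen(["grep", "z"], stdin=PIPE, stdout=PIPE,
             bufsize=-1, close_fds=True)
t = threading.Thread(target=writer, args=(grep.stdin,))
t.start()

# Principle 2: read incrementally from the end of the pipeline
# instead of waiting for one giant read() at EOF.
lines = list(grep.stdout)
t.join()
grep.wait()
print(lines)
```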
5
2012-12-27T20:49:46Z
[ "python", "ipc", "pipe", "subprocess", "blocking" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
<p>I'm writing a GUI application that regularly retrieves data through a web connection. Since this retrieval takes a while, this causes the UI to be unresponsive during the retrieval process (it cannot be split into smaller parts). This is why I'd like to outsource the web connection to a separate worker thread.</p> <p>[Yes, I know, now I have <a href="http://codemines.blogspot.com/2006/08/now-they-have-two-problems.html">two problems</a>.]</p> <p>Anyway, the application uses PyQt4, so I'd like to know what the better choice is: Use Qt's threads or use the Python <code>threading</code> module? What are advantages / disadvantages of each? Or do you have a totally different suggestion?</p> <p><strong>Edit (re bounty):</strong> While the solution in my particular case will probably be using a non-blocking network request like <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1595690#1595690">Jeff Ober</a> and <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1595754#1595754">Lukáš Lalinský</a> suggested (so basically leaving the concurrency problems to the networking implementation), I'd still like a more in-depth answer to the general question:</p> <p>What are advantages and disadvantages of using PyQt4's (i.e. Qt's) threads over native Python threads (from the <code>threading</code> module)?</p> <p><hr /></p> <p><strong>Edit 2:</strong> Thanks all for your answers.
Although there's no 100% agreement, there seems to be widespread consensus that the answer is "use Qt", since the advantage of that is integration with the rest of the library, while causing no real disadvantages.</p> <p>For anyone looking to choose between the two threading implementations, I highly recommend they read all the answers provided here, including the PyQt mailing list thread that <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1645666#1645666">abbot</a> links to.</p> <p>There were several answers I considered for the bounty; in the end I chose abbot's for the very relevant external reference; it was, however, a close call.</p> <p>Thanks again.</p>
74
2009-10-20T15:54:19Z
1,595,690
<p>Python's threads will be simpler and safer, and since it is for an I/O-based application, they are able to bypass the GIL. That said, have you considered non-blocking I/O using Twisted or non-blocking sockets/select?</p> <p><strong>EDIT: more on threads</strong></p> <p><strong>Python threads</strong></p> <p>Python's threads are system threads. However, Python uses a global interpreter lock (GIL) to ensure that the interpreter is only ever executing a certain size block of byte-code instructions at a time. Luckily, Python releases the GIL during input/output operations, making threads useful for simulating non-blocking I/O.</p> <p><strong>Important caveat:</strong> This can be misleading, since the number of byte-code instructions does <em>not</em> correspond to the number of lines in a program. Even a single assignment may not be atomic in Python, so a mutex lock is necessary for <em>any</em> block of code that must be executed atomically, even with the GIL.</p> <p><strong>QT threads</strong></p> <p>When Python hands off control to a 3rd party compiled module, it releases the GIL. It becomes the responsibility of the module to ensure atomicity where required. When control is passed back, Python will use the GIL. This can make using 3rd party libraries in conjunction with threads confusing. It is even more difficult to use an external threading library because it adds uncertainty as to where and when control is in the hands of the module vs the interpreter.</p> <p>QT threads operate with the GIL released. QT threads are able to execute QT library code (and other compiled module code that does not acquire the GIL) concurrently. However, the Python code executed within the context of a QT thread <em>still</em> acquires the GIL, and now you have to manage <em>two</em> sets of logic for locking your code.</p> <p>In the end, both QT threads and Python threads are wrappers around system threads. 
Python threads are marginally safer to use, since those parts that are not written in Python (implicitly using the GIL) use the GIL in any case (although the caveat above still applies).</p> <p><strong>Non-blocking I/O</strong></p> <p>Threads add extraordinary complexity to your application, especially when dealing with the already complex interaction between the Python interpreter and compiled module code. While many find event-based programming difficult to follow, event-based, non-blocking I/O is often much less difficult to reason about than threads.</p> <p>With asynchronous I/O, you can always be sure that, for each open descriptor, the path of execution is consistent and orderly. There are, obviously, issues that must be addressed, such as what to do when code depending on one open channel further depends on the results of code to be called when another open channel returns data.</p> <p>One nice solution for event-based, non-blocking I/O is the new <a href="http://dieselweb.org/">Diesel</a> library. It is restricted to Linux at the moment, but it is extraordinarily fast and quite elegant.</p> <p>It is also worth your time to learn <a href="http://code.google.com/p/pyevent/">pyevent</a>, a wrapper around the wonderful libevent library, which provides a basic framework for event-based programming using the fastest available method for your system (determined at compile time).</p>
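The "thread does the blocking I/O, GUI thread stays responsive" pattern described above can be sketched like this (a hedged illustration: the `fetch` function is hypothetical and `time.sleep` stands in for the real network call, which, like real socket reads, releases the GIL while it blocks; on Python 2 the import would be `from Queue import Queue`):

```python
import threading
import time
from queue import Queue

results = Queue()  # thread-safe hand-off back to the main thread

def fetch(url):
    # Stand-in for a blocking network request; while it blocks,
    # the GIL is released and the main (GUI) thread keeps running.
    time.sleep(0.05)
    results.put((url, b"payload"))

t = threading.Thread(target=fetch, args=("http://example.com/data",))
t.daemon = True
t.start()

# A real GUI would poll the queue from its event loop; here we just
# block with a timeout to collect the worker's result.
url, data = results.get(timeout=2)
print(url, data)
```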
23
2009-10-20T15:59:08Z
[ "python", "multithreading", "pyqt" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
74
2009-10-20T15:54:19Z
1,595,752
<p>Jeff has some good points. Only one main thread can do any GUI updates. If you do need to update the GUI from within the thread, Qt-4's <a href="http://doc.trolltech.com/4.5/qobject.html#connect-2">queued connection</a> signals make it easy to send data across threads and will automatically be invoked if you're using QThread; I'm not sure if they will be if you're using Python threads, although it's easy to add a parameter to <code>connect()</code>.</p>
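The "send data across threads, consume it in the main thread" idea behind queued connections can be shown without Qt at all (a sketch under my own assumptions: a plain queue plays the role of the queued-connection signal; in real PyQt you would emit a signal from the worker instead):

```python
import threading
from queue import Queue

updates = Queue()  # plays the role of a queued signal

def worker():
    # Compute off the GUI thread, then post the result.
    # Never touch widgets from here.
    updates.put("progress: 100%")

t = threading.Thread(target=worker)
t.start()
t.join()

# In Qt this dequeue happens in the main event loop, inside the
# slot connected via a queued connection.
message = updates.get_nowait()
print(message)
```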
6
2009-10-20T16:10:59Z
[ "python", "multithreading", "pyqt" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
74
2009-10-20T15:54:19Z
1,595,754
<p>The advantage of <code>QThread</code> is that it's integrated with the rest of the Qt library. That is, thread-aware methods in Qt will need to know in which thread they run, and to move objects between threads, you will need to use <code>QThread</code>. Another useful feature is running your own event loop in a thread.</p> <p>If you are accessing an HTTP server, you should consider <code>QNetworkAccessManager</code>.</p>
19
2009-10-20T16:11:23Z
[ "python", "multithreading", "pyqt" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
74
2009-10-20T15:54:19Z
1,620,078
<p>I can't comment on the exact differences between Python and PyQt threads, but I've been doing what you're attempting to do using <code>QThread</code>, <code>QNetworkAccessManager</code> and making sure to call <code>QApplication.processEvents()</code> while the thread is alive. If GUI responsiveness is really the issue you're trying to solve, <a href="http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qcoreapplication.html#processEvents" rel="nofollow">the latter</a> will help.</p>
0
2009-10-25T05:24:57Z
[ "python", "multithreading", "pyqt" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
74
2009-10-20T15:54:19Z
1,628,262
<p>I asked myself the same question when I was working on <a href="http://pytalk.trunat.fr/">PyTalk</a>.</p> <p>If you are using Qt, you need to use <code>QThread</code> to be able to use the Qt framework, especially the signal/slot system.</p> <p>With the signal/slot engine, you will be able to communicate from one thread to another and with every part of your project.</p> <p>Moreover, there is no real performance difference between the two choices, since both are C++ bindings.</p> <p>This is based on my experience with PyQt and threads.</p> <p>I encourage you to use <code>QThread</code>.</p>
11
2009-10-27T01:17:08Z
[ "python", "multithreading", "pyqt" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
<p>I'm writing a GUI application that regularly retrieves data through a web connection. Since this retrieval takes a while, this causes the UI to be unresponsive during the retrieval process (it cannot be split into smaller parts). This is why I'd like to outsource the web connection to a separate worker thread.</p> <p>[Yes, I know, now I have <a href="http://codemines.blogspot.com/2006/08/now-they-have-two-problems.html">two problems</a>.]</p> <p>Anyway, the application uses PyQt4, so I'd like to know what the better choice is: Use Qt's threads or use the Python <code>threading</code> module? What are advantages / disadvantages of each? Or do you have a totally different suggestion?</p> <p><strong>Edit (re bounty):</strong> While the solution in my particular case will probably be using a non-blocking network request like <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1595690#1595690">Jeff Ober</a> and <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1595754#1595754">Lukáš Lalinský</a> suggested (so basically leaving the concurrency problems to the networking implementation), I'd still like a more in-depth answer to the general question:</p> <p>What are advantages and disadvantages of using PyQt4's (i.e. Qt's) threads over native Python threads (from the <code>threading</code> module)?</p> <p><hr /></p> <p><strong>Edit 2:</strong> Thanks all for your answers. 
Although there's no 100% agreement, there seems to be widespread consensus that the answer is "use Qt", since the advantage of that is integration with the rest of the library, while causing no real disadvantages.</p> <p>For anyone looking to choose between the two threading implementations, I highly recommend they read all the answers provided here, including the PyQt mailing list thread that <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1645666#1645666">abbot</a> links to.</p> <p>There were several answers I considered for the bounty; in the end I chose abbot's for the very relevant external reference; it was, however, a close call.</p> <p>Thanks again.</p>
74
2009-10-20T15:54:19Z
1,636,035
<p>I can't really recommend either, but I can try describing the differences between CPython and Qt threads.</p> <p>First of all, CPython threads do not run concurrently, at least not Python code. Yes, they do create system threads for each Python thread, however only the thread currently holding the Global Interpreter Lock is allowed to run (C extensions and FFI code might bypass it, but Python bytecode is not executed while a thread doesn't hold the GIL).</p> <p>On the other hand, we have Qt threads, which are basically a common layer over system threads, don't have a Global Interpreter Lock, and thus are capable of running concurrently. I'm not sure how PyQt deals with it, however unless your Qt threads call Python code, they should be able to run concurrently (bar various extra locks that might be implemented in various structures).</p> <p>For extra fine-tuning, you can modify the number of bytecode instructions that are interpreted before switching ownership of the GIL - lower values mean more context switching (and possibly higher responsiveness) but lower performance per individual thread (context switches have their cost - if you try switching every few instructions, it doesn't help speed).</p> <p>Hope it helps with your problems :)</p>
4
2009-10-28T09:15:37Z
[ "python", "multithreading", "pyqt" ]
Threading in a PyQt application: Use Qt threads or Python threads?
1,595,649
<p>I'm writing a GUI application that regularly retrieves data through a web connection. Since this retrieval takes a while, this causes the UI to be unresponsive during the retrieval process (it cannot be split into smaller parts). This is why I'd like to outsource the web connection to a separate worker thread.</p> <p>[Yes, I know, now I have <a href="http://codemines.blogspot.com/2006/08/now-they-have-two-problems.html">two problems</a>.]</p> <p>Anyway, the application uses PyQt4, so I'd like to know what the better choice is: Use Qt's threads or use the Python <code>threading</code> module? What are advantages / disadvantages of each? Or do you have a totally different suggestion?</p> <p><strong>Edit (re bounty):</strong> While the solution in my particular case will probably be using a non-blocking network request like <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1595690#1595690">Jeff Ober</a> and <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1595754#1595754">Lukáš Lalinský</a> suggested (so basically leaving the concurrency problems to the networking implementation), I'd still like a more in-depth answer to the general question:</p> <p>What are advantages and disadvantages of using PyQt4's (i.e. Qt's) threads over native Python threads (from the <code>threading</code> module)?</p> <p><hr /></p> <p><strong>Edit 2:</strong> Thanks all for your answers. 
Although there's no 100% agreement, there seems to be widespread consensus that the answer is "use Qt", since the advantage of that is integration with the rest of the library, while causing no real disadvantages.</p> <p>For anyone looking to choose between the two threading implementations, I highly recommend they read all the answers provided here, including the PyQt mailing list thread that <a href="http://stackoverflow.com/questions/1595649/threading-in-a-pyqt-application-use-qt-threads-or-python-threads/1645666#1645666">abbot</a> links to.</p> <p>There were several answers I considered for the bounty; in the end I chose abbot's for the very relevant external reference; it was, however, a close call.</p> <p>Thanks again.</p>
74
2009-10-20T15:54:19Z
1,645,666
<p>This was <a href="http://www.mail-archive.com/pyqt@riverbankcomputing.com/msg16050.html">discussed</a> not too long ago on the PyQt mailing list. Quoting Giovanni Bajo's <a href="http://www.mail-archive.com/pyqt@riverbankcomputing.com/msg16052.html">comments</a> on the subject:</p> <blockquote> <p>It's mostly the same. The main difference is that QThreads are better integrated with Qt (asynchronous signals/slots, event loop, etc.). Also, you can't use Qt from a Python thread (you can't, for instance, post an event to the main thread through QApplication.postEvent): you need a QThread for that to work.</p> <p>A general rule of thumb might be to use QThreads if you're going to interact somehow with Qt, and use Python threads otherwise.</p> </blockquote> <p>And some earlier comment on this subject from PyQt's author: "they are both wrappers around the same native thread implementations". And both implementations use the GIL in the same way.</p>
72
2009-10-29T18:28:21Z
[ "python", "multithreading", "pyqt" ]
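The worker-thread hand-off this question is about can be sketched without Qt at all. The following is a minimal, GUI-agnostic sketch using only the standard library; the `fetch` function and URL are placeholders rather than a real network call, and it is written in Python 3 syntax rather than the Python 2.6 used elsewhere in this thread:

```python
import threading
import queue  # named Queue in Python 2


def fetch(url):
    # Stand-in for the blocking web request; returns a dummy payload.
    return "data from %s" % url


def worker(url, results):
    # Runs in the background thread so the GUI thread stays responsive.
    results.put(fetch(url))


results = queue.Queue()
t = threading.Thread(target=worker, args=("http://example.com", results))
t.start()
t.join()  # a real GUI would poll the queue from a timer instead of blocking
print(results.get())  # -> data from http://example.com
```

In a real PyQt application, the blocking `join()` would be replaced by a timer or a signal so the event loop keeps running while the worker is busy; the queue is what keeps the hand-off thread-safe.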
Adding and subtracting dates without Standard Library
1,595,704
<p>I am working in a limited environment developing a Python script.</p> <p>My issue is I must be able to accomplish datetime addition and subtraction.</p> <p>For example, I get the following string:</p> <pre><code>"09/10/20,09:59:47-16" </code></pre> <p>Which is formatted as year/month/day,hour:minute:second-ms.</p> <p>How would I go about adding 30 seconds to this number in Python? I can't use anything more than basic addition and subtraction and string parsing functions.</p>
0
2009-10-20T16:02:46Z
1,595,720
<p>You are doing math in different bases. You need to parse the string and come up with a list of values, for example (year, month, day, hour, minute, second), and then do other-base math to add and subtract. For example, hours are base-24, so you need to use modulus to perform the calculations. This sounds suspiciously like homework, so I won't go into any more detail :)</p>
2
2009-10-20T16:06:21Z
[ "python", "datetime" ]
Adding and subtracting dates without Standard Library
1,595,704
<p>I am working in a limited environment developing a Python script.</p> <p>My issue is I must be able to accomplish datetime addition and subtraction.</p> <p>For example, I get the following string:</p> <pre><code>"09/10/20,09:59:47-16" </code></pre> <p>Which is formatted as year/month/day,hour:minute:second-ms.</p> <p>How would I go about adding 30 seconds to this number in Python? I can't use anything more than basic addition and subtraction and string parsing functions.</p>
0
2009-10-20T16:02:46Z
1,595,725
<p>For completeness, datetime.datetime.strptime and datetime.timedelta are included in the default Python distribution.</p> <pre><code>from datetime import datetime, timedelta got = "09/10/20,09:59:47-16" dt = datetime.strptime(got, '%y/%m/%d,%H:%M:%S-%f') dt = dt + timedelta(seconds=30) print dt.strftime('%y/%m/%d,%H:%M:%S-%f') </code></pre> <p>prints exactly</p> <pre><code>09/10/20,10:00:17-160000 </code></pre> <p>Docs <a href="http://docs.python.org/library/datetime" rel="nofollow">here</a>.</p>
2
2009-10-20T16:06:40Z
[ "python", "datetime" ]
Adding and subtracting dates without Standard Library
1,595,704
<p>I am working in a limited environment developing a Python script.</p> <p>My issue is I must be able to accomplish datetime addition and subtraction.</p> <p>For example, I get the following string:</p> <pre><code>"09/10/20,09:59:47-16" </code></pre> <p>Which is formatted as year/month/day,hour:minute:second-ms.</p> <p>How would I go about adding 30 seconds to this number in Python? I can't use anything more than basic addition and subtraction and string parsing functions.</p>
0
2009-10-20T16:02:46Z
1,595,739
<p>The easiest way to perform date arithmetic is to not actually perform that arithmetic on the dates, but to do it to a simpler quantity.</p> <p>Typically, that simpler quantity is the number of seconds since a certain epoch. Jan 1, 1970 works out nicely. Knowing the epoch, and the number of days in each month, and which years are leap years, you can convert from this "number of seconds" representation to a date string pretty easily (if not slowly in the naive version).</p> <p>You will also need to convert the date string back to the simpler representation. This is again, not too hard.</p> <p>Once you have those two functions, arithmetic is simple. Just add or subtract the amount of time to/from your "number of seconds" representation. Then convert back to a date string.</p> <p>With all that said, I hope this is a homework assignment, because you absolutely should not be writing your own date handling functions in production code.</p>
2
2009-10-20T16:09:10Z
[ "python", "datetime" ]
Adding and subtracting dates without Standard Library
1,595,704
<p>I am working in a limited environment developing a Python script.</p> <p>My issue is I must be able to accomplish datetime addition and subtraction.</p> <p>For example, I get the following string:</p> <pre><code>"09/10/20,09:59:47-16" </code></pre> <p>Which is formatted as year/month/day,hour:minute:second-ms.</p> <p>How would I go about adding 30 seconds to this number in Python? I can't use anything more than basic addition and subtraction and string parsing functions.</p>
0
2009-10-20T16:02:46Z
1,596,247
<p>Here's my solution to the problem:</p> <pre><code>year = int(s[0:2]) month = int(s[3:5]) day = int(s[6:8]) hour = int(s[9:11]) minute = int(s[12:14]) second = int(s[15:17]) amount_to_add = 30 second = second + amount_to_add days_in_month = 30 if(month == 1 or month == 3 or month == 5 or month == 7 or month == 8 or month == 10 or month == 12): days_in_month = 31 if(month == 2): days_in_month = 28 if ((((year%4 == 0) and (year%100 != 0)) or (year%400 == 0)) and month == 2): days_in_month = 29 if(second &gt;= 60): minute = minute + second/60 second = second%60 if(minute &gt;= 60): hour = hour + minute/60 minute = minute%60 if(hour &gt;= 24): day = day + hour/24 hour = hour%24 if(day &gt; days_in_month): month = month + day/days_in_month day = day%days_in_month if(month &gt; 12): year = year + month/12 month = month%12 </code></pre> <p>Kind of a kludge but it gets the job done.</p>
0
2009-10-20T17:49:25Z
[ "python", "datetime" ]
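One way to make the carry logic described in the answers concrete is a small pure-string routine. This is a sketch rather than any answer's actual code, and it makes two assumptions: the years are 20yy, and the trailing "-16" fragment passes through unchanged:

```python
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]


def is_leap(year):
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0


def month_days(year, month):
    if month == 2 and is_leap(year):
        return 29
    return DAYS_IN_MONTH[month - 1]


def add_seconds(s, extra):
    # s looks like "yy/mm/dd,HH:MM:SS-xx"; the "-xx" tail is carried as-is.
    date_part, time_part = s.split(',')
    year, month, day = [int(x) for x in date_part.split('/')]
    clock, tail = time_part.split('-')
    hour, minute, second = [int(x) for x in clock.split(':')]
    second += extra
    # Carry seconds -> minutes -> hours -> days with floor division.
    minute, second = minute + second // 60, second % 60
    hour, minute = hour + minute // 60, minute % 60
    day, hour = day + hour // 24, hour % 24
    while day > month_days(2000 + year, month):  # assumes years are 20yy
        day -= month_days(2000 + year, month)
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return '%02d/%02d/%02d,%02d:%02d:%02d-%s' % (
        year, month, day, hour, minute, second, tail)


print(add_seconds("09/10/20,09:59:47-16", 30))  # -> 09/10/20,10:00:17-16
```

The while loop (instead of a single modulo) keeps the day carry correct across months of different lengths, including a year-end rollover.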
Spawning more than one thread in Python causes RuntimeError
1,595,772
<p>I'm trying to add multithreading to a Python app, and thus started with some toy examples:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread.start() thread.join() </code></pre> <p>This works beautifully, but as soon as I try to start a second thread, I get a RuntimeError:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é')) thread.start() thread2.start() thread.join() thread2.join() </code></pre> <p>As others seem to have no problem running this code, let me add that I am on Windows 7 x64 Pro with Python 2.6.3 32bits (if that matters).</p>
0
2009-10-20T16:13:53Z
1,595,790
<pre><code>thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é')) </code></pre> <p>Are you declaring the file as UTF-8?-----------------------------------------------------^</p>
1
2009-10-20T16:16:39Z
[ "python", "multithreading", "runtime-error" ]
Spawning more than one thread in Python causes RuntimeError
1,595,772
<p>I'm trying to add multithreading to a Python app, and thus started with some toy examples:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread.start() thread.join() </code></pre> <p>This works beautifully, but as soon as I try to start a second thread, I get a RuntimeError:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é')) thread.start() thread2.start() thread.join() thread2.join() </code></pre> <p>As others seem to have no problem running this code, let me add that I am on Windows 7 x64 Pro with Python 2.6.3 32bits (if that matters).</p>
0
2009-10-20T16:13:53Z
1,595,814
<p>Can you post the exact error you get?</p> <p>Runs fine for me (after replacing the <code>é</code> character with an <code>e</code>):</p> <pre><code>In thread args areIn thread asdfargs are jkle1234 3763763e </code></pre> <p>If I leave the original script you posted and save the file as UTF-8 with BOM on Windows:</p> <pre><code>In thread args areIn thread asdfargs are jkle1234 3763763├⌐ </code></pre> <p>Saving the code you posted as ASCII results in a SyntaxError:</p> <blockquote> <p>SyntaxError: Non-ASCII character '\xe9' in file threadtest.py on line 8, but no encoding declared; see <a href="http://www.python.org/peps/pep-0263.html" rel="nofollow">http://www.python.org/peps/pep-0263.html</a> for details</p> </blockquote> <p>Environment information:</p> <blockquote> <p>C:\python -V<br /> Python 2.6.2<br /> C:\cmd<br /> Microsoft Windows XP [Version 5.1.2600]<br /> (C) Copyright 1985-2001 Microsoft Corp.</p> </blockquote>
1
2009-10-20T16:20:37Z
[ "python", "multithreading", "runtime-error" ]
Spawning more than one thread in Python causes RuntimeError
1,595,772
<p>I'm trying to add multithreading to a Python app, and thus started with some toy examples:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread.start() thread.join() </code></pre> <p>This works beautifully, but as soon as I try to start a second thread, I get a RuntimeError:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é')) thread.start() thread2.start() thread.join() thread2.join() </code></pre> <p>As others seem to have no problem running this code, let me add that I am on Windows 7 x64 Pro with Python 2.6.3 32bits (if that matters).</p>
0
2009-10-20T16:13:53Z
1,690,410
<p>As said in the comments, I think that the problem comes from IDLE itself, and not from my code. Thanks for your help anyway!</p> <p>I upvoted your answers but will be accepting mine, as there is no real solution to this problem.</p>
0
2009-11-06T21:00:01Z
[ "python", "multithreading", "runtime-error" ]
Spawning more than one thread in Python causes RuntimeError
1,595,772
<p>I'm trying to add multithreading to a Python app, and thus started with some toy examples:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread.start() thread.join() </code></pre> <p>This works beautifully, but as soon as I try to start a second thread, I get a RuntimeError:</p> <pre><code>import threading def myfunc(arg1, arg2): print 'In thread' print 'args are', arg1, arg2 thread = threading.Thread(target=myfunc, args=('asdf', 'jkle')) thread2 = threading.Thread(target=myfunc, args=('1234', '3763763é')) thread.start() thread2.start() thread.join() thread2.join() </code></pre> <p>As others seem to have no problem running this code, let me add that I am on Windows 7 x64 Pro with Python 2.6.3 32bits (if that matters).</p>
0
2009-10-20T16:13:53Z
3,466,269
<p>Maybe it is because you have the same filename or project name, like "threading" or "Thread", under some directory and you have run it once since this boot-up.</p>
0
2010-08-12T09:24:33Z
[ "python", "multithreading", "runtime-error" ]
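For what it's worth, the snippet from the question does run two threads cleanly outside IDLE. The hypothetical variant below collects results into a list instead of printing, replaces the é argument with an ASCII string, and uses print as a function so it runs on Python 2.6+ and 3 alike:

```python
from __future__ import print_function
import threading

results = []
lock = threading.Lock()


def myfunc(arg1, arg2):
    # Append instead of printing so the outcome can be checked afterwards.
    with lock:
        results.append((arg1, arg2))


t1 = threading.Thread(target=myfunc, args=('asdf', 'jkle'))
t2 = threading.Thread(target=myfunc, args=('1234', '3763763'))
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(results))  # -> [('1234', '3763763'), ('asdf', 'jkle')]
```

If this runs without a RuntimeError from a plain interpreter, that supports the accepted diagnosis that the environment (IDLE), not the threading code, was at fault.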
Read a single registry key value on a remote machine
1,595,806
<p>I am having a very tough time achieving this one seemingly very simple goal...</p> <p>I have to gather the value of a single registry key on several machines for the sake of auditing whether the machines scanned need to be patched with newer versions of software. I am only permitted to use Python 3 as per our company policy (which is on drugs, but what can I do).</p> <p>I have been looking into using the winreg module to connect to the remote machine (using credentials, we are on a domain) but I am confronted time and again with</p> <blockquote> <p>TypeError: The object is not a PyHKEY object (or a number of other issues.)</p> </blockquote> <p>This seems like a very common need and I've been surprised at the difficulty I have had finding any examples for Python 3 that I can use to figure out what I'm doing wrong.</p> <p>Any assistance that anyone would be kind enough to give would be greatly appreciated. Thanks in advance.</p>
2
2009-10-20T16:19:17Z
1,611,880
<p>Can you show the code you are writing? Have you opened the key? Many people run into problems because they have not opened it. This is just a guess; hope it works (on Python 3 the module is <code>winreg</code>, not <code>_winreg</code>):</p> <pre><code>key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r'SYSTEM\CurrentControlSet\Enum\Root') </code></pre>
1
2009-10-23T07:33:38Z
[ "python", "windows", "python-3.x", "winreg", "remote-registry" ]
Django forms: making a disabled field persist between validations
1,596,054
<p>At some point I need to display a <code>"disabled"</code> (greyed out by the <code>disabled="disabled"</code> attribute) input of type <code>"select"</code>. As specified in the standard (xhtml and html4), inputs of type <code>"select"</code> cannot have the <code>"readonly"</code> attribute. Note that this is for presentation purposes only, the actual value must end up in the POST. So here is what I do (quoting a part of the form declaration in django):</p> <pre><code>from django import forms _choices = ['to be', 'not to be'] class SomeForm(forms.Form): field = forms.ChoiceField(choices=[(item, item) for item in _choices], widget=forms.HiddenInput()) # the real field mock_field = forms.ChoiceField(required=False, # doesn't get submitted choices=[(item, item) for item in _choices], label="The question", widget=forms.Select(attrs={'disabled':'disabled'})) </code></pre> <p>Then it is initialized like this:</p> <pre><code>initial_val = 'to be' form = SomeForm(initial={'field':initial_val, 'mock_field':initial_val}) </code></pre> <p>And all is well. Well, until the form gets validated and one of the other fields fails the validation. When this happens, the form is reloaded and the values are preserved, but not that of the "mock_field" - it never got submitted (it is disabled). So it is not preserved. While this doesn't affect the data integrity, it is still not so good presentation-wise.</p> <p>Is there any way to preserve that field, with as little hackery as possible? The form is a part of a <code>django.contrib.formtools.FormWizard</code> and the initial values (and some fields) are generated dynamically. Basically, there is a lot of stuff going on already; it'd be great if it was possible not to overcomplicate things.</p>
6
2009-10-20T17:11:09Z
1,599,190
<p>Browsers don't POST disabled fields.</p> <p>You can try to copy <code>field</code>'s initial value to <code>mock_field</code> in your Form's <code>__init__</code>:</p> <pre><code>def __init__(self, *args, **kwargs): super(SomeForm, self).__init__(*args, **kwargs) mock_initial = self.fields['field'].initial self.fields['mock_field'].initial = mock_initial </code></pre> <p>Code is not tested. Normally you would be concerned about <code>form.data</code> as well, but in this case it won't be different from <code>initial</code>.</p>
3
2009-10-21T07:07:06Z
[ "python", "django", "django-forms" ]
Django forms: making a disabled field persist between validations
1,596,054
<p>At some point I need to display a <code>"disabled"</code> (greyed out by the <code>disabled="disabled"</code> attribute) input of type <code>"select"</code>. As specified in the standard (xhtml and html4), inputs of type <code>"select"</code> cannot have the <code>"readonly"</code> attribute. Note that this is for presentation purposes only, the actual value must end up in the POST. So here is what I do (quoting a part of the form declaration in django):</p> <pre><code>from django import forms _choices = ['to be', 'not to be'] class SomeForm(forms.Form): field = forms.ChoiceField(choices=[(item, item) for item in _choices], widget=forms.HiddenInput()) # the real field mock_field = forms.ChoiceField(required=False, # doesn't get submitted choices=[(item, item) for item in _choices], label="The question", widget=forms.Select(attrs={'disabled':'disabled'})) </code></pre> <p>Then it is initialized like this:</p> <pre><code>initial_val = 'to be' form = SomeForm(initial={'field':initial_val, 'mock_field':initial_val}) </code></pre> <p>And all is well. Well, until the form gets validated and one of the other fields fails the validation. When this happens, the form is reloaded and the values are preserved, but not that of the "mock_field" - it never got submitted (it is disabled). So it is not preserved. While this doesn't affect the data integrity, it is still not so good presentation-wise.</p> <p>Is there any way to preserve that field, with as little hackery as possible? The form is a part of a <code>django.contrib.formtools.FormWizard</code> and the initial values (and some fields) are generated dynamically. Basically, there is a lot of stuff going on already; it'd be great if it was possible not to overcomplicate things.</p>
6
2009-10-20T17:11:09Z
1,635,901
<p>Well, this will be the first time I answer my own question, but I've found a solution and (while it certainly is a hack) it works.</p> <p>Instead of getting the initial value from the form instance (<code>self.fields['whatever'].initial</code> seems to be <code>None</code> inside the constructor), I am getting the value from the keyword argument "initial". And then I set it as the only choice for the "mock" field. Like this:</p> <pre><code>from django import forms _choices = ['to be', 'not to be'] class SomeForm(forms.Form): field = forms.ChoiceField(choices=[(item, item) for item in _choices], widget=forms.HiddenInput()) # the real field mock_field = forms.ChoiceField(required=False, # doesn't get submitted choices=[(item, item) for item in _choices], label="The question", widget=forms.Select(attrs={'disabled':'disabled'})) def __init__(self, *args, **kwargs): super(SomeForm, self).__init__(*args, **kwargs) mock_initial = kwargs['initial']['field'] self.fields['mock_field'].choices = [(mock_initial, mock_initial),] </code></pre> <p>This probably needs some error handling. Obviously, this will not work if the initial value is not provided for the actual <code>field</code>.</p>
1
2009-10-28T08:44:21Z
[ "python", "django", "django-forms" ]
Python not sorting Unicode correctly
1,596,091
<pre><code>data = [unicode('č', "cp1250"), unicode('d', "cp1250"), unicode('a', "cp1250")] data.sort(key=unicode.lower) for x in range(0,len(data)): print data[x].encode("cp1250") </code></pre> <p>and I get:</p> <pre> a d č </pre> <p>It should be:</p> <pre> a č d </pre> <p>The Slovenian alphabet goes: a b c č d e f g.....</p> <p>I'm using WIN XP (Active code page: 852 - Slovenia). Can you help me?</p>
2
2009-10-20T17:18:46Z
1,596,154
<p>See the <code>locale</code> module for language-aware sorting. Especially the <code>strcoll</code> and <code>strxfrm</code> functions.</p>
1
2009-10-20T17:32:16Z
[ "python" ]
Python not sorting Unicode correctly
1,596,091
<pre><code>data = [unicode('č', "cp1250"), unicode('d', "cp1250"), unicode('a', "cp1250")] data.sort(key=unicode.lower) for x in range(0,len(data)): print data[x].encode("cp1250") </code></pre> <p>and I get:</p> <pre> a d č </pre> <p>It should be:</p> <pre> a č d </pre> <p>The Slovenian alphabet goes: a b c č d e f g.....</p> <p>I'm using WIN XP (Active code page: 852 - Slovenia). Can you help me?</p>
2
2009-10-20T17:18:46Z
1,716,705
<p>I solved this problem and now have a working program:</p> <pre><code>import locale locale.setlocale(locale.LC_ALL, 'slovenian') data = ['č', 'ab', 'aa', 'a', 'd', 'ć', 'B', 'c'] data.sort(key=locale.strxfrm) print "Sorted..." for x in range(0,len(data)): print data[x] </code></pre>
2
2009-11-11T17:17:25Z
[ "python" ]
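When `locale.setlocale` fails because the locale name differs per platform ('slovenian' on Windows, 'sl_SI.UTF-8' on most Unixes), a locale-free fallback is to rank characters against an explicit alphabet. The list below is a simplified sketch of the Slovenian ordering, not an authoritative collation table:

```python
# -*- coding: utf-8 -*-

# Simplified Slovenian ordering; each letter maps to its sort rank.
ALPHABET = ['a', 'b', 'c', u'č', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k',
            'l', 'm', 'n', 'o', 'p', 'r', 's', u'š', 't', 'u', 'v', 'z', u'ž']
RANK = {letter: i for i, letter in enumerate(ALPHABET)}


def slovene_key(word):
    # Characters outside the table sort after the known alphabet.
    return [RANK.get(ch, len(ALPHABET)) for ch in word.lower()]


data = [u'č', u'd', u'a']
print(sorted(data, key=slovene_key))  # -> ['a', 'č', 'd']
```

The key returns a list of ranks, so multi-character words compare position by position, just like `locale.strxfrm` output would.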
Can I execute an SQL Server DTS package from a Python script?
1,596,270
<p>I currently have a number of Python scripts that help prep a staging area for testing. One thing that the scripts do not handle is executing DTS packages on MS SQL Server. Is there a way to execute these packages using Python?</p>
2
2009-10-20T17:53:08Z
1,596,281
<p>Is running the DTS package from the command line an option? If so, here is an example of that.</p> <p><a href="http://www.mssqltips.com/tip.asp?tip=1007" rel="nofollow">http://www.mssqltips.com/tip.asp?tip=1007</a></p>
1
2009-10-20T17:56:15Z
[ "python", "sql-server" ]
Can I execute an SQL Server DTS package from a Python script?
1,596,270
<p>I currently have a number of Python scripts that help prep a staging area for testing. One thing that the scripts do not handle is executing DTS packages on MS SQL Server. Is there a way to execute these packages using Python?</p>
2
2009-10-20T17:53:08Z
1,596,450
<p>The answer is yes. As mentioned by lansinwd, you'd want to use the command line tool <a href="http://msdn.microsoft.com/en-us/library/aa224467%28SQL.80%29.aspx" rel="nofollow">DTSRun</a>. SQL Server tools will need to be installed on the machine executing the Python script. I'm not sure what percentage or which packages would be needed but the <a href="http://msdn.microsoft.com/en-us/library/aa224467%28SQL.80%29.aspx" rel="nofollow">MSDN page on DTSRun</a> should help answer that, if needed.</p> <p>A basic command line example is this: </p> <pre><code>DTSRun /S "Server[\Instance]" /N "DTS_Package_Name" /E </code></pre> <p>To <a href="http://docs.python.org/library/os.html#process-management" rel="nofollow">run this from Python</a> check out: <a href="http://docs.python.org/library/os.html#process-management" rel="nofollow">http://docs.python.org/library/os.html#process-management</a></p> <p>From the web page: </p> <blockquote> <p>These functions may be used to create and manage processes.</p> <p>The various exec*() functions take a list of arguments for the new program loaded into the process</p> </blockquote>
1
2009-10-20T18:25:48Z
[ "python", "sql-server" ]
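Tying the two answers together, the DTSRun invocation can be driven from Python with the `subprocess` module. In this sketch the server and package names are made up, and the actual call is left commented out because DTSRun only exists where the SQL Server client tools are installed:

```python
import subprocess


def build_dtsrun_command(server, package, trusted=True):
    # /S = server, /N = package name, /E = Windows (trusted) authentication;
    # use /U and /P instead of /E for SQL authentication.
    cmd = ['DTSRun', '/S', server, '/N', package]
    if trusted:
        cmd.append('/E')
    return cmd


cmd = build_dtsrun_command('MYSERVER\\INST1', 'Nightly_Load')
print(cmd)  # -> ['DTSRun', '/S', 'MYSERVER\\INST1', '/N', 'Nightly_Load', '/E']

# On a machine with the SQL Server client tools installed:
# subprocess.call(cmd)
```

Passing the arguments as a list (rather than one shell string) avoids quoting problems with server names containing backslashes or spaces.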
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,596,399
<p>Cast foo to a <a href="http://docs.python.org/library/stdtypes.html#set">set</a>, if you don't care about element order. </p>
12
2009-10-20T18:14:26Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,596,400
<p><code>list(</code><a href="http://docs.python.org/library/stdtypes.html#set"><code>set</code></a><code>(foo))</code> if you are using Python 2.5 or greater, but that doesn't maintain order.</p>
20
2009-10-20T18:14:27Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,596,435
<p>If you care about order, a readable way is the following:</p> <pre><code>def filter_unique(a_list): characters = set() result = [] for c in a_list: if c not in characters: characters.add(c) result.append(c) return result </code></pre> <p>Depending on your requirements for speed, maintainability, and space consumption, you could find the above unsuitable. In that case, specify your requirements and we can try to do better :-)</p>
2
2009-10-20T18:21:41Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,596,471
<pre><code>&gt;&gt;&gt; bar = [] &gt;&gt;&gt; for i in foo: if i not in bar: bar.append(i) &gt;&gt;&gt; bar ['a', 'b', 'c', 'd'] </code></pre> <p>This would be the most straightforward way of removing duplicates from the list while preserving the order as much as possible (even though "order" here is an inherently vague concept).</p>
3
2009-10-20T18:29:51Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,598,114
<p>If you write a function to do this, I would use a generator; it is a natural fit in this case.</p> <pre>def unique(iterable): yielded = set() for item in iterable: if item not in yielded: yield item yielded.add(item) </pre>
2
2009-10-21T00:33:40Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,598,161
<p>Since there isn't an order-preserving answer with a list comprehension, I propose the following:</p> <pre><code>&gt;&gt;&gt; temp = set() &gt;&gt;&gt; [c for c in foo if c not in temp and (temp.add(c) or True)] ['a', 'b', 'c', 'd'] </code></pre> <p>which could also be written as</p> <pre><code>&gt;&gt;&gt; temp = set() &gt;&gt;&gt; filter(lambda c: c not in temp and (temp.add(c) or True), foo) ['a', 'b', 'c', 'd'] </code></pre> <p>Depending on how many elements are in <code>foo</code>, you might have faster results through repeated hash lookups instead of repeated iterative searches through a temporary list.</p> <p><code>c not in temp</code> verifies that <code>temp</code> does not have an item <code>c</code>; and the <code>or True</code> part forces <code>c</code> to be emitted to the output list when the item is added to the set.</p>
5
2009-10-21T00:47:04Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance:</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,598,187
<p>Inspired by <a href="http://stackoverflow.com/questions/1596390/filtering-lists-in-python/1596435#1596435">Francesco's answer</a>, rather than making our own <code>filter()</code>-type function, let's make the builtin do some work for us:</p> <pre><code>def unique(a, s=set()): if a not in s: s.add(a) return True return False </code></pre> <p>Usage:</p> <pre><code>uniq = filter(unique, orig) </code></pre> <p>This may perform faster or slower than an answer that implements all of the work in pure Python. Benchmark and see. Of course, this only works once, but it demonstrates the concept. The ideal solution is, of course, to use a class:</p> <pre><code>class Unique(set): def __call__(self, a): if a not in self: self.add(a) return True return False </code></pre> <p>Now we can use it as much as we want:</p> <pre><code>uniq = filter(Unique(), orig) </code></pre> <p>Once again, we may (or may not) have thrown performance out the window - the gains of using a built-in function may be offset by the overhead of a class. I just thought it was an interesting idea.</p>
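A self-contained sketch of the class-based variant above in action; the `list()` call matters on Python 3, where `filter()` returns a lazy iterator:

```python
class Unique(set):
    # A stateful predicate: returns True only the first time a value is seen,
    # so filter() keeps just the first occurrence of each element.
    def __call__(self, a):
        if a not in self:
            self.add(a)
            return True
        return False

orig = ['a', 'b', 'c', 'a', 'b', 'd', 'a', 'd']
uniq = list(filter(Unique(), orig))
print(uniq)  # ['a', 'b', 'c', 'd']
```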
1
2009-10-21T00:55:38Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance:</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
1,598,668
<p>This is what you want if you need a sorted list at the end:</p> <pre><code>&gt;&gt;&gt; foo = ['a','b','c','a','b','d','a','d'] &gt;&gt;&gt; bar = sorted(set(foo)) &gt;&gt;&gt; bar ['a', 'b', 'c', 'd'] </code></pre>
1
2009-10-21T04:06:16Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance:</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
13,634,235
<pre><code>import numpy as np np.unique(foo) </code></pre> <p>Note that <code>np.unique</code> returns a sorted array, so the order of first occurrence in the original list is not preserved.</p>
0
2012-11-29T20:32:22Z
[ "python", "list", "unique" ]
filtering lists in python
1,596,390
<p>I want to filter repeated elements in my list, for instance:</p> <pre><code>foo = ['a','b','c','a','b','d','a','d'] </code></pre> <p>I am only interested in:</p> <pre><code>['a','b','c','d'] </code></pre> <p>What would be an efficient way to achieve this? Cheers</p>
4
2009-10-20T18:12:40Z
23,282,658
<p>You could do a sort of ugly list comprehension hack.</p> <pre><code>[l[i] for i in range(len(l)) if l.index(l[i]) == i] </code></pre>
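Worth noting: every `l.index()` call scans the list from the start, so this approach is O(n^2) overall and best kept for short lists. A quick check:

```python
l = ['a', 'b', 'c', 'a', 'b', 'd', 'a', 'd']
# l.index(l[i]) returns the index of the *first* occurrence of l[i],
# so the condition keeps an element only at its first position.
result = [l[i] for i in range(len(l)) if l.index(l[i]) == i]
print(result)  # ['a', 'b', 'c', 'd']
```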
0
2014-04-25T01:23:29Z
[ "python", "list", "unique" ]
Which Python client library should I use for CouchDB?
1,596,440
<p>I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?</p> <p>This would be easier if there was any overlap on the OSes we use. FreeBSD only has <a href="http://code.google.com/p/py-simplecouchdb/">py-simplecouchdb</a> already available in its ports collection, but that library's project website says to use <a href="http://couchdbkit.org">CouchDBKit</a> instead. Neither of those comes with Ubuntu, which only ships with <a href="http://pypi.python.org/pypi/CouchDB">CouchDB</a>. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).</p> <p>For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using <a href="http://www.lindaspaces.com/products/NWS_overview.html">NetWorkSpaces</a> in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.</p>
14
2009-10-20T18:23:05Z
1,597,169
<p>I have been using <a href="http://code.google.com/p/couchdb-python/">couchdb-python</a> with quite a lot of success and as far as I know the desktopcouch folks use it in Ubuntu. The prerequisites are very basic and you should have no problems:</p> <ul> <li>httplib2</li> <li>simplejson or cjson</li> <li>Python</li> <li>CouchDB 0.9.x (earlier or later versions are unlikely to work as the interface is still changing)</li> </ul> <p>For me some of the advantages are:</p> <ul> <li>Pythonic interface. You can work with the database as if it were a dict.</li> <li>Interface for design documents.</li> <li>A CouchDB view server that allows writing view functions in Python.</li> </ul> <p>It also provides a couple of command-line tools:</p> <ul> <li>couchdb-dump: Writes a snapshot of a CouchDB database.</li> <li>couchdb-load: Reads a MIME multipart file as generated by couchdb-dump and loads all the documents, attachments, and design documents into a CouchDB database.</li> <li>couchdb-replicate: Can be used as an update-notification script to trigger replication between databases when data is changed.</li> </ul>
5
2009-10-20T20:41:02Z
[ "python", "couchdb" ]
Which Python client library should I use for CouchDB?
1,596,440
<p>I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?</p> <p>This would be easier if there was any overlap on the OSes we use. FreeBSD only has <a href="http://code.google.com/p/py-simplecouchdb/">py-simplecouchdb</a> already available in its ports collection, but that library's project website says to use <a href="http://couchdbkit.org">CouchDBKit</a> instead. Neither of those comes with Ubuntu, which only ships with <a href="http://pypi.python.org/pypi/CouchDB">CouchDB</a>. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).</p> <p>For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using <a href="http://www.lindaspaces.com/products/NWS_overview.html">NetWorkSpaces</a> in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.</p>
14
2009-10-20T18:23:05Z
1,597,325
<p>Considering the task you are trying to solve (distributed task processing), you should consider using one of the many tools designed for message passing rather than using a database. See for instance <a href="http://stackoverflow.com/questions/1516960/anatomy-of-a-distributed-system-in-php">this SO question on running multiple tasks over many machines</a>.</p> <p>If you really want a simple, casual message-passing system, I recommend you shift your focus to <a href="http://www.morbidq.com/trac/wiki/RestQ" rel="nofollow">MorbidQ</a>. As you get more serious, use <a href="http://www.rabbitmq.com/" rel="nofollow">RabbitMQ</a> or <a href="http://activemq.apache.org/" rel="nofollow">ActiveMQ</a>. This way you reduce the latency in your system and avoid having many clients polling a database (and thus hammering that computer).</p> <p>I've found that <a href="http://blog.gridspy.co.nz/2009/09/database-meet-realtime-data-logging.html" rel="nofollow">avoiding databases is a good idea</a> (That's my blog) - and I have an <a href="http://your.gridspy.co.nz/powertech" rel="nofollow">end-to-end live data system running using MorbidQ</a> here</p>
0
2009-10-20T21:10:03Z
[ "python", "couchdb" ]
Which Python client library should I use for CouchDB?
1,596,440
<p>I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?</p> <p>This would be easier if there was any overlap on the OSes we use. FreeBSD only has <a href="http://code.google.com/p/py-simplecouchdb/">py-simplecouchdb</a> already available in its ports collection, but that library's project website says to use <a href="http://couchdbkit.org">CouchDBKit</a> instead. Neither of those comes with Ubuntu, which only ships with <a href="http://pypi.python.org/pypi/CouchDB">CouchDB</a>. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).</p> <p>For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using <a href="http://www.lindaspaces.com/products/NWS_overview.html">NetWorkSpaces</a> in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.</p>
14
2009-10-20T18:23:05Z
1,721,072
<p>If you're still considering CouchDB then I'll recommend Couchdbkit (<a href="http://www.couchdbkit.org" rel="nofollow">http://www.couchdbkit.org</a>). It's simple enough to get the hang of quickly, and it runs fine on my machine running Karmic Koala. Prior to that I tried couchdb-python, but some bugs with httplib (maybe ironed out by now) were giving me errors (duplicate documents, etc.); Couchdbkit got me up and running without any problems so far.</p>
2
2009-11-12T09:41:58Z
[ "python", "couchdb" ]
Which Python client library should I use for CouchDB?
1,596,440
<p>I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?</p> <p>This would be easier if there was any overlap on the OSes we use. FreeBSD only has <a href="http://code.google.com/p/py-simplecouchdb/">py-simplecouchdb</a> already available in its ports collection, but that library's project website says to use <a href="http://couchdbkit.org">CouchDBKit</a> instead. Neither of those comes with Ubuntu, which only ships with <a href="http://pypi.python.org/pypi/CouchDB">CouchDB</a>. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).</p> <p>For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using <a href="http://www.lindaspaces.com/products/NWS_overview.html">NetWorkSpaces</a> in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.</p>
14
2009-10-20T18:23:05Z
15,859,496
<h1>spycouch</h1> <p>Simple Python API for CouchDB</p> <p>A Python library for easily managing CouchDB.</p> <p>Unlike the libraries ordinarily available on the web, it works with the latest CouchDB version, 1.2.1.</p> <h2>Functionality</h2> <blockquote> <p>Create a new database on the server</p> <p>Delete a database from the server</p> <p>List databases on the server</p> <p>Get database information</p> <p>Database compaction</p> <p>Create a map view</p> <p>Query a map view</p> <p>List documents in a DB</p> <p>Get a document from a DB</p> <p>Save a document to a DB</p> <p>Delete a document from a DB</p> <p>Edit a document</p> </blockquote> <p>spycouch is on GitHub: <a href="https://github.com/cernyjan/repository" rel="nofollow">https://github.com/cernyjan/repository</a></p>
1
2013-04-07T06:12:21Z
[ "python", "couchdb" ]
Which Python client library should I use for CouchDB?
1,596,440
<p>I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?</p> <p>This would be easier if there was any overlap on the OSes we use. FreeBSD only has <a href="http://code.google.com/p/py-simplecouchdb/">py-simplecouchdb</a> already available in its ports collection, but that library's project website says to use <a href="http://couchdbkit.org">CouchDBKit</a> instead. Neither of those comes with Ubuntu, which only ships with <a href="http://pypi.python.org/pypi/CouchDB">CouchDB</a>. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).</p> <p>For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using <a href="http://www.lindaspaces.com/products/NWS_overview.html">NetWorkSpaces</a> in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.</p>
14
2009-10-20T18:23:05Z
17,788,369
<p>I have written a couchdb client library built on python-requests (which is in most distributions). We use this library in production. </p> <p><a href="https://github.com/adamlofts/couchdb-requests" rel="nofollow">https://github.com/adamlofts/couchdb-requests</a></p> <blockquote> <p>Robust CouchDB Python interface using python-requests.</p> </blockquote> <p>Goals:</p> <ul> <li>Only one way to do something</li> <li>Fast and stable (connection pooled)</li> <li>Explicit is better than implicit. Buffer sizes, connection pool size.</li> <li>Specify query parameters, no **params in query functions</li> </ul>
0
2013-07-22T13:04:41Z
[ "python", "couchdb" ]
Which Python client library should I use for CouchDB?
1,596,440
<p>I'm starting to experiment with CouchDB because it looks like the perfect solution for certain problems we have. Given that all work will be on a brand new project with no legacy dependencies, which client library would you suggest that I use, and why?</p> <p>This would be easier if there was any overlap on the OSes we use. FreeBSD only has <a href="http://code.google.com/p/py-simplecouchdb/">py-simplecouchdb</a> already available in its ports collection, but that library's project website says to use <a href="http://couchdbkit.org">CouchDBKit</a> instead. Neither of those comes with Ubuntu, which only ships with <a href="http://pypi.python.org/pypi/CouchDB">CouchDB</a>. Since those two OSes don't have any libraries in common, I'll probably be installing something from source (and hopefully submitting packages to the Ubuntu and FreeBSD folks if I have time).</p> <p>For those interested, I'd like to use CouchDB as a convenient intermediate storage place for data passed between various services - think of a message bus system but with less formality. For example, we have daemons that download and parse web pages, then send interesting bits to other daemons for further processing. A lot of those objects are ill-defined until runtime ("here's some HTML, plus a set of metadata, and some actions to run on it"). Rather than serialize it to an ad-hoc local network protocol or stick it in PostgreSQL, I'd much rather use something designed for the purpose. We're currently using <a href="http://www.lindaspaces.com/products/NWS_overview.html">NetWorkSpaces</a> in this role, but it doesn't have nearly the breadth of support or the user community of CouchDB.</p>
14
2009-10-20T18:23:05Z
28,433,350
<p>After skimming through the docs of many CouchDB Python libraries, I chose <strong>pycouchdb</strong>.</p> <p>Everything I needed was quick to grasp from the documentation: <a href="https://py-couchdb.readthedocs.org/en/latest/" rel="nofollow">https://py-couchdb.readthedocs.org/en/latest/</a>, and it works like a charm.</p> <p>Also, it works well with Python 3.</p>
0
2015-02-10T13:52:28Z
[ "python", "couchdb" ]
django urls without a trailing slash do not redirect
1,596,552
<p>I've got two applications located on two separate computers. On computer A, in the <code>urls.py</code> file I have a line like the following: </p> <pre><code>(r'^cast/$', 'mySite.simulate.views.cast') </code></pre> <p>And that url will work for both <code>mySite.com/cast/</code> and <code>mySite.com/cast</code>. But on computer B I have a similar url written out like:</p> <pre><code>(r'^login/$', 'mySite.myUser.views.login') </code></pre> <p>For some reason on computer B the url <code>mySite.com/login/</code> will work but <code>mySite.com/login</code> will hang and won't redirect back to <code>mySite.com/login/</code> like it will on computer A. Is there something I missed? Both <code>urls.py</code> files look identical to me.</p>
42
2009-10-20T18:41:56Z
1,596,600
<p>check your <code>APPEND_SLASH</code> setting in the settings.py file</p> <p><a href="http://docs.djangoproject.com/en/dev/ref/settings/#append-slash">more info in the django docs</a></p>
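For reference, a minimal sketch of the setting involved (assuming a Django 1.x-era <code>settings.py</code>). <code>APPEND_SLASH</code> defaults to <code>True</code> and only takes effect when <code>CommonMiddleware</code> is enabled:

```python
# settings.py (sketch)
APPEND_SLASH = True  # the default; redirects /login to /login/ when no pattern matches

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',  # APPEND_SLASH is applied here
    # ... other middleware ...
)
```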
58
2009-10-20T18:49:39Z
[ "python", "django", "django-urls" ]
django urls without a trailing slash do not redirect
1,596,552
<p>I've got two applications located on two separate computers. On computer A, in the <code>urls.py</code> file I have a line like the following: </p> <pre><code>(r'^cast/$', 'mySite.simulate.views.cast') </code></pre> <p>And that url will work for both <code>mySite.com/cast/</code> and <code>mySite.com/cast</code>. But on computer B I have a similar url written out like:</p> <pre><code>(r'^login/$', 'mySite.myUser.views.login') </code></pre> <p>For some reason on computer B the url <code>mySite.com/login/</code> will work but <code>mySite.com/login</code> will hang and won't redirect back to <code>mySite.com/login/</code> like it will on computer A. Is there something I missed? Both <code>urls.py</code> files look identical to me.</p>
42
2009-10-20T18:41:56Z
11,690,144
<p>Or you can write your urls like this:</p> <pre><code>(r'^login/?$', 'mySite.myUser.views.login') </code></pre> <p>The question mark after the trailing slash makes it optional in the regexp. Use it if for some reason you don't want to use the APPEND_SLASH setting.</p>
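A quick standalone check (plain `re`, outside Django) that `/?` makes the trailing slash optional in this pattern:

```python
import re

pattern = re.compile(r'^login/?$')
for path in ('login', 'login/'):
    print(path, bool(pattern.match(path)))  # both match
```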
100
2012-07-27T14:44:14Z
[ "python", "django", "django-urls" ]
django urls without a trailing slash do not redirect
1,596,552
<p>I've got two applications located on two separate computers. On computer A, in the <code>urls.py</code> file I have a line like the following: </p> <pre><code>(r'^cast/$', 'mySite.simulate.views.cast') </code></pre> <p>And that url will work for both <code>mySite.com/cast/</code> and <code>mySite.com/cast</code>. But on computer B I have a similar url written out like:</p> <pre><code>(r'^login/$', 'mySite.myUser.views.login') </code></pre> <p>For some reason on computer B the url <code>mySite.com/login/</code> will work but <code>mySite.com/login</code> will hang and won't redirect back to <code>mySite.com/login/</code> like it will on computer A. Is there something I missed? Both <code>urls.py</code> files look identical to me.</p>
42
2009-10-20T18:41:56Z
30,736,341
<p>I've had the same problem. In my case it was a stale leftover from some old version in urls.py, from before staticfiles:</p> <pre><code>url(r'^%s(?P&lt;path&gt;.*)$' % settings.MEDIA_URL.lstrip('/'), 'django.views.static.serve', kwargs={'document_root': settings.MEDIA_ROOT}), </code></pre> <p>MEDIA_URL was empty, so this pattern matched everything.</p>
0
2015-06-09T15:23:42Z
[ "python", "django", "django-urls" ]
django urls without a trailing slash do not redirect
1,596,552
<p>I've got two applications located on two separate computers. On computer A, in the <code>urls.py</code> file I have a line like the following: </p> <pre><code>(r'^cast/$', 'mySite.simulate.views.cast') </code></pre> <p>And that url will work for both <code>mySite.com/cast/</code> and <code>mySite.com/cast</code>. But on computer B I have a similar url written out like:</p> <pre><code>(r'^login/$', 'mySite.myUser.views.login') </code></pre> <p>For some reason on computer B the url <code>mySite.com/login/</code> will work but <code>mySite.com/login</code> will hang and won't redirect back to <code>mySite.com/login/</code> like it will on computer A. Is there something I missed? Both <code>urls.py</code> files look identical to me.</p>
42
2009-10-20T18:41:56Z
32,078,961
<p>I've had the same problem too. My solution was to put <code>(|/)</code> before the end anchor of my regular expression.</p> <p><code>url(r'^artists/(?P[\d]+)(|/)$', ArtistDetailView.as_view()),</code></p>
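The capture-group name in the answer's snippet appears to have been eaten by HTML escaping; with a hypothetical group name `pk` stood in, a quick `re` check shows that `(|/)` matches either nothing or a single trailing slash:

```python
import re

# 'pk' is a hypothetical stand-in for the group name lost in the answer above.
pattern = re.compile(r'^artists/(?P<pk>\d+)(|/)$')
for path in ('artists/42', 'artists/42/'):
    print(path, bool(pattern.match(path)))  # both match
```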
1
2015-08-18T17:30:07Z
[ "python", "django", "django-urls" ]
django urls without a trailing slash do not redirect
1,596,552
<p>I've got two applications located on two separate computers. On computer A, in the <code>urls.py</code> file I have a line like the following: </p> <pre><code>(r'^cast/$', 'mySite.simulate.views.cast') </code></pre> <p>And that url will work for both <code>mySite.com/cast/</code> and <code>mySite.com/cast</code>. But on computer B I have a similar url written out like:</p> <pre><code>(r'^login/$', 'mySite.myUser.views.login') </code></pre> <p>For some reason on computer B the url <code>mySite.com/login/</code> will work but <code>mySite.com/login</code> will hang and won't redirect back to <code>mySite.com/login/</code> like it will on computer A. Is there something I missed? Both <code>urls.py</code> files look identical to me.</p>
42
2009-10-20T18:41:56Z
37,603,563
<p>This improves on @Michael Gendin's answer. His answer serves the identical page with two separate URLs. It would be better to have <code>login</code> automatically redirect to <code>login/</code>, and then serve the latter as the main page:</p> <pre><code>from django.conf.urls import patterns from django.views.generic import RedirectView urlpatterns = patterns('', # Redirect login to login/ (r'^login$', RedirectView.as_view(url = '/login/')), # Handle the page with the slash. (r'^login/', "views.my_handler"), ) </code></pre>
0
2016-06-02T23:11:40Z
[ "python", "django", "django-urls" ]
XML Parsing with Python and minidom
1,596,829
<p>I'm using Python (minidom) to parse an XML file and print a hierarchical structure that looks something like this (indentation is used here to show the significant hierarchical relationship):</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported </code></pre> <p>Instead, the program iterates multiple times over the nodes and produces the following, printing duplicate nodes. (Looking at the node list at each iteration, it's obvious why it does this but I can't seem to find a way to get the node list I'm looking for.)</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported Basic Features About This Software Platforms Supported Platforms Supported </code></pre> <p>Here is the XML source file:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;DOCMAP&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;My Document&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Overview&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Basic Features&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;About This Software&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Platforms Supported&lt;/Title&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/DOCMAP&gt; </code></pre> <p>Here is the Python program:</p> <pre><code>import xml.dom.minidom from xml.dom.minidom import Node dom = xml.dom.minidom.parse("test.xml") Topic=dom.getElementsByTagName('Topic') i = 0 for node in Topic: alist=node.getElementsByTagName('Title') for a in alist: Title= a.firstChild.data print Title </code></pre> <p>I could fix the problem by not nesting 'Topic' elements, by changing the lower-level topic names to something like 'SubTopic1' and 'SubTopic2'. But I want to take advantage of XML's built-in hierarchical structuring without needing different element names; it seems that I should be able to nest 'Topic' elements and that there should be some way to know which level 'Topic' I'm currently looking at.</p> <p>I've tried a number of different XPath functions without much success.</p>
12
2009-10-20T19:36:07Z
1,597,645
<p><code>getElementsByTagName</code> is recursive: you'll get <em>all</em> descendants with a matching tagName. Because your Topics contain other Topics that also have Titles, the call will get the lower-down Titles many times.</p> <p>If you want only the matching direct children, and you don't have XPath available, you can write a simple filter, e.g.:</p> <pre><code>def getChildrenByTagName(node, tagName): for child in node.childNodes: if child.nodeType==child.ELEMENT_NODE and (tagName=='*' or child.tagName==tagName): yield child for topic in document.getElementsByTagName('Topic'): title= list(getChildrenByTagName(topic, 'Title'))[0] # or just getChildrenByTagName(topic, 'Title').next() print title.firstChild.data </code></pre>
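A self-contained run of this direct-children approach against the question's document structure (using `parseString`, so no file is needed); each title now comes out exactly once:

```python
import xml.dom.minidom

XML = """<DOCMAP>
<Topic><Title>My Document</Title></Topic>
<Topic><Title>Overview</Title>
  <Topic><Title>Basic Features</Title></Topic>
  <Topic><Title>About This Software</Title>
    <Topic><Title>Platforms Supported</Title></Topic>
  </Topic>
</Topic>
</DOCMAP>"""

def getChildrenByTagName(node, tagName):
    # Yield only *direct* element children with a matching tag name, unlike
    # getElementsByTagName, which searches the whole subtree.
    for child in node.childNodes:
        if child.nodeType == child.ELEMENT_NODE and (tagName == '*' or child.tagName == tagName):
            yield child

dom = xml.dom.minidom.parseString(XML)
titles = [title.firstChild.data
          for topic in dom.getElementsByTagName('Topic')
          for title in getChildrenByTagName(topic, 'Title')]
print(titles)  # each title once, in document order
```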
6
2009-10-20T22:17:38Z
[ "python", "xml", "minidom" ]
XML Parsing with Python and minidom
1,596,829
<p>I'm using Python (minidom) to parse an XML file and print a hierarchical structure that looks something like this (indentation is used here to show the significant hierarchical relationship):</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported </code></pre> <p>Instead, the program iterates multiple times over the nodes and produces the following, printing duplicate nodes. (Looking at the node list at each iteration, it's obvious why it does this but I can't seem to find a way to get the node list I'm looking for.)</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported Basic Features About This Software Platforms Supported Platforms Supported </code></pre> <p>Here is the XML source file:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;DOCMAP&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;My Document&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Overview&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Basic Features&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;About This Software&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Platforms Supported&lt;/Title&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/DOCMAP&gt; </code></pre> <p>Here is the Python program:</p> <pre><code>import xml.dom.minidom from xml.dom.minidom import Node dom = xml.dom.minidom.parse("test.xml") Topic=dom.getElementsByTagName('Topic') i = 0 for node in Topic: alist=node.getElementsByTagName('Title') for a in alist: Title= a.firstChild.data print Title </code></pre> <p>I could fix the problem by not nesting 'Topic' elements, by changing the lower-level topic names to something like 'SubTopic1' and 'SubTopic2'. But I want to take advantage of XML's built-in hierarchical structuring without needing different element names; it seems that I should be able to nest 'Topic' elements and that there should be some way to know which level 'Topic' I'm currently looking at.</p> <p>I've tried a number of different XPath functions without much success.</p>
12
2009-10-20T19:36:07Z
1,598,016
<p>Let me put that comment here ...</p> <p>Thanks for the attempt. It didn't work but it gave me some ideas. The following works (the same general idea; FWIW, the nodeType is ELEMENT_NODE):</p> <pre><code>import xml.dom.minidom from xml.dom.minidom import Node dom = xml.dom.minidom.parse("docmap.xml") def getChildrenByTitle(node): for child in node.childNodes: if child.localName=='Title': yield child Topic=dom.getElementsByTagName('Topic') for node in Topic: alist=getChildrenByTitle(node) for a in alist: # Title= a.firstChild.data Title= a.childNodes[0].nodeValue print Title </code></pre>
7
2009-10-21T00:04:10Z
[ "python", "xml", "minidom" ]
XML Parsing with Python and minidom
1,596,829
<p>I'm using Python (minidom) to parse an XML file and print a hierarchical structure that looks something like this (indentation is used here to show the significant hierarchical relationship):</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported </code></pre> <p>Instead, the program iterates multiple times over the nodes and produces the following, printing duplicate nodes. (Looking at the node list at each iteration, it's obvious why it does this but I can't seem to find a way to get the node list I'm looking for.)</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported Basic Features About This Software Platforms Supported Platforms Supported </code></pre> <p>Here is the XML source file:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;DOCMAP&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;My Document&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Overview&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Basic Features&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;About This Software&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Platforms Supported&lt;/Title&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/DOCMAP&gt; </code></pre> <p>Here is the Python program:</p> <pre><code>import xml.dom.minidom from xml.dom.minidom import Node dom = xml.dom.minidom.parse("test.xml") Topic=dom.getElementsByTagName('Topic') i = 0 for node in Topic: alist=node.getElementsByTagName('Title') for a in alist: Title= a.firstChild.data print Title </code></pre> <p>I could fix the problem by not nesting 'Topic' elements, by changing the lower-level topic names to something like 'SubTopic1' and 'SubTopic2'. But I want to take advantage of XML's built-in hierarchical structuring without needing different element names; it seems that I should be able to nest 'Topic' elements and that there should be some way to know which level 'Topic' I'm currently looking at.</p> <p>I've tried a number of different XPath functions without much success.</p>
12
2009-10-20T19:36:07Z
1,602,792
<p>You could use the following generator to run through the tree and get titles with indentation levels:</p> <pre><code>def f(elem, level=-1): if elem.nodeName == "Title": yield elem.childNodes[0].nodeValue, level elif elem.nodeType == elem.ELEMENT_NODE: for child in elem.childNodes: for e, l in f(child, level + 1): yield e, l </code></pre> <p>If you test it with your file (start from <code>doc.documentElement</code> rather than <code>doc</code> itself: the document node is not an element, so the generator would yield nothing for it):</p> <pre><code>import xml.dom.minidom as minidom doc = minidom.parse("test.xml") list(f(doc.documentElement)) </code></pre> <p>you will get a list with the following tuples:</p> <pre><code>(u'My Document', 1), (u'Overview', 1), (u'Basic Features', 2), (u'About This Software', 2), (u'Platforms Supported', 3) </code></pre> <p>It is only a basic idea to be fine-tuned, of course. If you just want spaces at the beginning you can code that directly in the generator, though with the level you have more flexibility. You could also detect the first level automatically (here it's just a poor job of initializing the level to -1...).</p>
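A self-contained sketch of this generator driven from the root element (`parseString` instead of a file), which also shows how the level can drive indentation:

```python
import xml.dom.minidom

XML = """<DOCMAP>
<Topic><Title>My Document</Title></Topic>
<Topic><Title>Overview</Title>
  <Topic><Title>Basic Features</Title></Topic>
  <Topic><Title>About This Software</Title>
    <Topic><Title>Platforms Supported</Title></Topic>
  </Topic>
</Topic>
</DOCMAP>"""

def f(elem, level=-1):
    # Yield (title text, nesting level) for every Title element in the subtree.
    if elem.nodeName == "Title":
        yield elem.childNodes[0].nodeValue, level
    elif elem.nodeType == elem.ELEMENT_NODE:
        for child in elem.childNodes:
            for e, l in f(child, level + 1):
                yield e, l

doc = xml.dom.minidom.parseString(XML)
# Start from the root *element*: the document node itself is not an element.
pairs = list(f(doc.documentElement))
for title, level in pairs:
    print("  " * level + title)
```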
3
2009-10-21T18:45:23Z
[ "python", "xml", "minidom" ]