title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags listlengths 1 5 |
|---|---|---|---|---|---|---|---|---|---|
ImportError: DLL load failed : - when trying to import psycopg2 library | 1,306,367 | <pre>
>>> import psycopg2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\site-packages\psycopg2\__init__.py", line 60, in <module>
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: DLL load failed: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.
</pre>
<p>I get this error when trying to import <a href="http://stickpeople.com/projects/python/win-psycopg">psycopg2</a>. I've searched for days and found no solutions.
I've tried installing the Visual C++ 2008 Package, but I still get the same error.</p>
| 12 | 2009-08-20T13:54:40Z | 1,306,396 | <p>According to this <a href="http://groups.google.com/group/django-users/browse%5Fthread/thread/e1b628f0f75896f0">thread</a> you need to install an earlier version since there were problems with the latest build.</p>
<blockquote>
<p>Simply install an earlier version, (2.0.10 works great), even with
PostgreSQL 8.4.x series. </p>
</blockquote>
| 11 | 2009-08-20T14:00:08Z | [
"python",
"django",
"postgresql",
"psycopg2"
] |
ImportError: DLL load failed : - when trying to import psycopg2 library | 1,306,367 | <pre>
>>> import psycopg2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\site-packages\psycopg2\__init__.py", line 60, in <module>
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: DLL load failed: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.
</pre>
<p>I get this error when trying to import <a href="http://stickpeople.com/projects/python/win-psycopg">psycopg2</a>. I've searched for days and found no solutions.
I've tried installing the Visual C++ 2008 Package, but I still get the same error.</p>
| 12 | 2009-08-20T13:54:40Z | 5,789,963 | <p>You can also try installing win-psycopg from <a href="http://www.stickpeople.com/projects/python/win-psycopg/">here</a></p>
| 7 | 2011-04-26T11:59:55Z | [
"python",
"django",
"postgresql",
"psycopg2"
] |
ImportError: DLL load failed : - when trying to import psycopg2 library | 1,306,367 | <pre>
>>> import psycopg2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\site-packages\psycopg2\__init__.py", line 60, in <module>
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: DLL load failed: The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail.
</pre>
<p>I get this error when trying to import <a href="http://stickpeople.com/projects/python/win-psycopg">psycopg2</a>. I've searched for days and found no solutions.
I've tried installing the Visual C++ 2008 Package, but I still get the same error.</p>
 | 12 | 2009-08-20T13:54:40Z | 20,821,686 | <p>On Windows, make sure your path includes the Postgres bin directory. On my machine it's c:\Programs\PostgreSQL\9.3\bin.</p>
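<p>If the directory is missing from PATH, it can also be prepended at runtime before the import — a sketch (the path below is illustrative, substitute your actual install directory):</p>

```python
import os

# Illustrative location -- substitute your actual PostgreSQL bin directory.
pg_bin = r"C:\Program Files\PostgreSQL\9.3\bin"

# Prepend to PATH for this process so Windows can locate libpq.dll and its
# dependencies before psycopg2 is imported.
os.environ["PATH"] = pg_bin + os.pathsep + os.environ.get("PATH", "")
```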
| 11 | 2013-12-29T02:58:20Z | [
"python",
"django",
"postgresql",
"psycopg2"
] |
Calculating a SHA hash with a string + secret key in python | 1,306,550 | <p>The Amazon Product API now requires a signature with every request, which I'm trying to generate using Python.</p>
<p>The step I get hung up on is this one:</p>
<p>"Calculate an RFC 2104-compliant HMAC with the SHA256 hash algorithm using the string above with our "dummy" Secret Access Key: 1234567890. For more information about this step, see documentation and code samples for your programming language." </p>
<p>Given a string and a secret key (in this case 1234567890) how do I calculate this hash using Python?</p>
<p>----------- UPDATE -------------</p>
<p>The first solution using HMAC.new looks correct; however, I'm getting a different result than they are.</p>
<p><a href="http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html">http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html</a></p>
<p>According to Amazon's example when you hash the secret key 1234567890 and the following string</p>
<pre><code>GET
webservices.amazon.com
/onca/xml
AWSAccessKeyId=00000000000000000000&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=ItemAttributes%2COffers%2CImages%2CReviews&Service=AWSECommerceService&Timestamp=2009-01-01T12%3A00%3A00Z&Version=2009-01-06
</code></pre>
<p>You should get the following signature: <code>'Nace+U3Az4OhN7tISqgs1vdLBHBEijWcBeCqL5xN9xg='</code></p>
<p>I am getting this: <code>'411a59403c9f58b4a434c9c6a14ef6e363acc1d1bb2c6faf9adc30e20898c83b'</code></p>
| 26 | 2009-08-20T14:23:36Z | 1,306,571 | <p>From <a href="http://docs.python.org/library/hashlib.html#module-hashlib" rel="nofollow">http://docs.python.org/library/hashlib.html#module-hashlib</a> (modified a bit):</p>
<pre><code>import hashlib

secretKey = "1234567890"
givenString = "..."  # put the string to be signed here

m = hashlib.sha256()
m.update(givenString + secretKey)  # Python 2 str; on Python 3, encode to bytes first
m.digest()
</code></pre>
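<p>Note that the snippet above computes a plain SHA-256 over the concatenation, which is not an RFC 2104 HMAC — a plausible reason for the mismatch described in the question's update. A sketch of the difference (the message here is illustrative):</p>

```python
import hashlib
import hmac

key = b"1234567890"
msg = b"some string to sign"

# Plain hash of message + key, as in the snippet above:
plain = hashlib.sha256(msg + key).hexdigest()

# RFC 2104 HMAC-SHA256, which Amazon's signing scheme asks for:
mac = hmac.new(key, msg, hashlib.sha256).hexdigest()

# The two constructions produce different digests.
assert plain != mac
```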
| 3 | 2009-08-20T14:27:23Z | [
"python",
"hash",
"sha256"
] |
Calculating a SHA hash with a string + secret key in python | 1,306,550 | <p>The Amazon Product API now requires a signature with every request, which I'm trying to generate using Python.</p>
<p>The step I get hung up on is this one:</p>
<p>"Calculate an RFC 2104-compliant HMAC with the SHA256 hash algorithm using the string above with our "dummy" Secret Access Key: 1234567890. For more information about this step, see documentation and code samples for your programming language." </p>
<p>Given a string and a secret key (in this case 1234567890) how do I calculate this hash using Python?</p>
<p>----------- UPDATE -------------</p>
<p>The first solution using HMAC.new looks correct; however, I'm getting a different result than they are.</p>
<p><a href="http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html">http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html</a></p>
<p>According to Amazon's example when you hash the secret key 1234567890 and the following string</p>
<pre><code>GET
webservices.amazon.com
/onca/xml
AWSAccessKeyId=00000000000000000000&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=ItemAttributes%2COffers%2CImages%2CReviews&Service=AWSECommerceService&Timestamp=2009-01-01T12%3A00%3A00Z&Version=2009-01-06
</code></pre>
<p>You should get the following signature: <code>'Nace+U3Az4OhN7tISqgs1vdLBHBEijWcBeCqL5xN9xg='</code></p>
<p>I am getting this: <code>'411a59403c9f58b4a434c9c6a14ef6e363acc1d1bb2c6faf9adc30e20898c83b'</code></p>
| 26 | 2009-08-20T14:23:36Z | 1,306,575 | <pre><code>import hmac
import hashlib
import base64
dig = hmac.new(b'1234567890', msg=your_bytes_string, digestmod=hashlib.sha256).digest()
base64.b64encode(dig).decode() # py3k-mode
'Nace+U3Az4OhN7tISqgs1vdLBHBEijWcBeCqL5xN9xg='
</code></pre>
| 54 | 2009-08-20T14:27:33Z | [
"python",
"hash",
"sha256"
] |
Calculating a SHA hash with a string + secret key in python | 1,306,550 | <p>The Amazon Product API now requires a signature with every request, which I'm trying to generate using Python.</p>
<p>The step I get hung up on is this one:</p>
<p>"Calculate an RFC 2104-compliant HMAC with the SHA256 hash algorithm using the string above with our "dummy" Secret Access Key: 1234567890. For more information about this step, see documentation and code samples for your programming language." </p>
<p>Given a string and a secret key (in this case 1234567890) how do I calculate this hash using Python?</p>
<p>----------- UPDATE -------------</p>
<p>The first solution using HMAC.new looks correct; however, I'm getting a different result than they are.</p>
<p><a href="http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html">http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html</a></p>
<p>According to Amazon's example when you hash the secret key 1234567890 and the following string</p>
<pre><code>GET
webservices.amazon.com
/onca/xml
AWSAccessKeyId=00000000000000000000&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=ItemAttributes%2COffers%2CImages%2CReviews&Service=AWSECommerceService&Timestamp=2009-01-01T12%3A00%3A00Z&Version=2009-01-06
</code></pre>
<p>You should get the following signature: <code>'Nace+U3Az4OhN7tISqgs1vdLBHBEijWcBeCqL5xN9xg='</code></p>
<p>I am getting this: <code>'411a59403c9f58b4a434c9c6a14ef6e363acc1d1bb2c6faf9adc30e20898c83b'</code></p>
| 26 | 2009-08-20T14:23:36Z | 1,307,439 | <pre><code>>>> import hmac
>>> import hashlib
>>> import base64
>>> s = """GET
... webservices.amazon.com
... /onca/xml
... AWSAccessKeyId=00000000000000000000&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=ItemAttributes%2COffers%2CImages%2CReviews&Service=AWSECommerceService&Timestamp=2009-01-01T12%3A00%3A00Z&Version=2009-01-06"""
>>> base64.b64encode(hmac.new("1234567890", msg=s, digestmod=hashlib.sha256).digest())
'Nace+U3Az4OhN7tISqgs1vdLBHBEijWcBeCqL5xN9xg='
</code></pre>
| 9 | 2009-08-20T16:41:26Z | [
"python",
"hash",
"sha256"
] |
Calculating a SHA hash with a string + secret key in python | 1,306,550 | <p>The Amazon Product API now requires a signature with every request, which I'm trying to generate using Python.</p>
<p>The step I get hung up on is this one:</p>
<p>"Calculate an RFC 2104-compliant HMAC with the SHA256 hash algorithm using the string above with our "dummy" Secret Access Key: 1234567890. For more information about this step, see documentation and code samples for your programming language." </p>
<p>Given a string and a secret key (in this case 1234567890) how do I calculate this hash using Python?</p>
<p>----------- UPDATE -------------</p>
<p>The first solution using HMAC.new looks correct; however, I'm getting a different result than they are.</p>
<p><a href="http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html">http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?rest-signature.html</a></p>
<p>According to Amazon's example when you hash the secret key 1234567890 and the following string</p>
<pre><code>GET
webservices.amazon.com
/onca/xml
AWSAccessKeyId=00000000000000000000&ItemId=0679722769&Operation=ItemLookup&ResponseGroup=ItemAttributes%2COffers%2CImages%2CReviews&Service=AWSECommerceService&Timestamp=2009-01-01T12%3A00%3A00Z&Version=2009-01-06
</code></pre>
<p>You should get the following signature: <code>'Nace+U3Az4OhN7tISqgs1vdLBHBEijWcBeCqL5xN9xg='</code></p>
<p>I am getting this: <code>'411a59403c9f58b4a434c9c6a14ef6e363acc1d1bb2c6faf9adc30e20898c83b'</code></p>
| 26 | 2009-08-20T14:23:36Z | 12,165,954 | <pre><code>import hmac
import hashlib
import base64
digest = hmac.new(secret, msg=thing_to_hash, digestmod=hashlib.sha256).digest()
signature = base64.b64encode(digest).decode()
</code></pre>
<p>I know this sounds silly, but make sure you don't have a trailing space on your secret by accident.</p>
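<p>A quick illustration of why that matters — one stray byte in the key changes the digest completely (the key and message here are made up):</p>

```python
import hashlib
import hmac

msg = b"message"
clean = hmac.new(b"secret", msg, hashlib.sha256).hexdigest()
stray = hmac.new(b"secret ", msg, hashlib.sha256).hexdigest()  # trailing space in key

# The signatures no longer match.
assert clean != stray
```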
| 3 | 2012-08-28T19:13:36Z | [
"python",
"hash",
"sha256"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 1,306,653 | <p>You can't add a list to a set because lists are mutable, meaning that you can change the contents of the list after adding it to the set.</p>
<p>You can however add tuples to the set, because you cannot change the contents of a tuple:</p>
<pre><code>>>> a.add(('f', 'g'))
>>> print a
set(['a', 'c', 'b', 'e', 'd', ('f', 'g')])
</code></pre>
<p><hr /></p>
<p><strong>Edit</strong>: some explanation: The documentation defines a <code>set</code> as <em>an unordered collection of distinct hashable objects.</em> The objects have to be hashable so that finding, adding and removing elements can be done faster than looking at each individual element every time you perform these operations. The specific algorithms used are explained in the <a href="http://en.wikipedia.org/wiki/Hash%5Ftree">Wikipedia article</a>. Python's hashing algorithms are explained on <a href="http://effbot.org/zone/python-hash.htm">effbot.org</a> and Python's <code>__hash__</code> function in the <a href="http://docs.python.org/reference/datamodel.html#object.%5F%5Fhash%5F%5F">Python reference</a>.</p>
<p>Some facts:</p>
<ul>
<li><strong>Set elements</strong> as well as <strong>dictionary keys</strong> have to be hashable</li>
<li>Some unhashable datatypes:
<ul>
<li><code>list</code>: use <code>tuple</code> instead</li>
<li><code>set</code>: use <code>frozenset</code> instead</li>
<li><code>dict</code>: has no official counterpart, but there are some
<a href="http://stackoverflow.com/questions/1151658/python-hashable-dicts">recipes</a></li>
</ul></li>
<li>Object instances are hashable by default with each instance having a unique hash. You can override this behavior as explained in the python reference.</li>
</ul>
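<p>A quick sketch of the substitutions listed above:</p>

```python
s = set()

s.add(tuple(['f', 'g']))      # list -> tuple
s.add(frozenset(['x', 'y']))  # set  -> frozenset

assert ('f', 'g') in s
assert frozenset(['y', 'x']) in s  # frozensets compare by contents
```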
| 100 | 2009-08-20T14:39:51Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 1,306,663 | <pre><code>>>> a = set('abcde')
>>> l = ['f', 'g']
>>> a |= set(l)
>>> a
set(['a', 'c', 'b', 'e', 'd', 'g', 'f'])
</code></pre>
<p>The union operator is much faster than add anyway.</p>
<p>edit: If you want the list itself and not its members, then you must use a tuple, unfortunately. Set members must be hashable.</p>
| 246 | 2009-08-20T14:41:46Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
 | 99 | 2009-08-20T14:35:42Z | 1,306,673 | <p><strong>List objects are unhashable.</strong> You might want to turn them into tuples though.</p>
| 7 | 2009-08-20T14:43:17Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 1,306,677 | <p>You'll want to use tuples, which are hashable (you can't hash a mutable object like a list).</p>
<pre><code>>>> a = set("abcde")
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> t = ('f', 'g')
>>> a.add(t)
>>> a
set(['a', 'c', 'b', 'e', 'd', ('f', 'g')])
</code></pre>
| 3 | 2009-08-20T14:45:08Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 1,306,692 | <p>You want to add a tuple, not a list:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> t = tuple(l)
>>> t
('f', 'g')
>>> a.add(t)
>>> a
set(['a', 'c', 'b', 'e', 'd', ('f', 'g')])
</code></pre>
<p>If you have a list, you can convert to the tuple, as shown above. A tuple is immutable, so it can be added to the set.</p>
| 4 | 2009-08-20T14:47:02Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 1,306,696 | <p>Sets can't have mutable (changeable) elements/members. A list, being mutable, cannot be a member of a set.</p>
<p>As sets are mutable, you cannot have a set of sets!
You can have a set of frozensets though.</p>
<p>(The same kind of "mutability requirement" applies to the keys of a dict.)</p>
<p>Other answers have already given you code, I hope this gives a bit of insight.
I'm hoping Alex Martelli will answer with even more details.</p>
| 3 | 2009-08-20T14:47:53Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
 | 99 | 2009-08-20T14:35:42Z | 5,409,309 | <p>I found I needed to do something similar today. The algorithm knew when it was creating a new list that needed to be added to the set, but not when it had finished operating on the list.</p>
<p>Anyway, the behaviour I wanted was for set to use <code>id</code> rather than <code>hash</code>. As such I found <code>mydict[id(mylist)] = mylist</code> instead of <code>myset.add(mylist)</code> to offer the behaviour I wanted.</p>
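<p>A sketch of that identity-keyed workaround (names are illustrative):</p>

```python
mylist = ['f', 'g']
mydict = {}

# Key by identity rather than by hash; the mutable list may keep changing.
# Note: an id() key is only meaningful while the object is alive.
mydict[id(mylist)] = mylist
mylist.append('h')

assert mydict[id(mylist)] is mylist
```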
| 2 | 2011-03-23T17:33:48Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 10,461,650 | <p>Please notice the function <code>set.update()</code>. The documentation says:</p>
<blockquote>
<p>Update a set with the union of itself and others.</p>
</blockquote>
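<p>For example:</p>

```python
a = set('abc')
a.update(['f', 'g'])  # accepts any iterable, including a list

assert a == set(['a', 'b', 'c', 'f', 'g'])
```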
| 7 | 2012-05-05T12:01:53Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 18,399,318 | <p>Hopefully this helps:</p>
<pre><code>>>> seta = set('1234')
>>> listb = ['a','b','c']
>>> seta.union(listb)
set(['a', 'c', 'b', '1', '3', '2', '4'])
>>> seta
set(['1', '3', '2', '4'])
>>> seta = seta.union(listb)
>>> seta
set(['a', 'c', 'b', '1', '3', '2', '4'])
</code></pre>
| 25 | 2013-08-23T09:24:30Z | [
"python",
"list",
"set"
] |
Python: Add list to set? | 1,306,631 | <p>Tested on Python 2.6 interpreter:</p>
<pre><code>>>> a=set('abcde')
>>> a
set(['a', 'c', 'b', 'e', 'd'])
>>> l=['f','g']
>>> l
['f', 'g']
>>> a.add(l)
Traceback (most recent call last):
File "<pyshell#35>", line 1, in <module>
a.add(l)
TypeError: list objects are unhashable
</code></pre>
<p>I think that I can't add the list to the set because there's no way Python can tell if I have added the same list twice. Is there a workaround?</p>
<p><strong>EDIT: I want to add the list itself, not its elements.</strong> </p>
| 99 | 2009-08-20T14:35:42Z | 21,873,769 | <p>Here is how I usually do it:</p>
<pre><code>def add_list_to_set(my_list, my_set):
    for each in my_list:
        my_set.add(each)
    return my_set
</code></pre>
| 2 | 2014-02-19T07:26:42Z | [
"python",
"list",
"set"
] |
Mapping a nested list with List Comprehension in Python? | 1,306,670 | <p>I have the following code which I use to map a nested list in Python to produce a list with the same structure. </p>
<pre><code>>>> nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
>>> [map(str.upper, x) for x in nested_list]
[['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
</code></pre>
<p>Can this be done with list comprehension alone (without using the map function)?</p>
| 7 | 2009-08-20T14:42:39Z | 1,306,682 | <p>For nested lists you can use nested list comprehensions:</p>
<pre><code>nested_list = [[s.upper() for s in xs] for xs in nested_list]
</code></pre>
<p>Personally I find <code>map</code> to be cleaner in this situation, even though I almost always prefer list comprehensions. So it's really your call, since either will work.</p>
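<p>For example, with the question's input:</p>

```python
nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
result = [[s.upper() for s in xs] for xs in nested_list]

assert result == [['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
```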
| 7 | 2009-08-20T14:45:39Z | [
"python",
"list",
"list-comprehension",
"nested-lists",
"map-function"
] |
Mapping a nested list with List Comprehension in Python? | 1,306,670 | <p>I have the following code which I use to map a nested list in Python to produce a list with the same structure. </p>
<pre><code>>>> nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
>>> [map(str.upper, x) for x in nested_list]
[['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
</code></pre>
<p>Can this be done with list comprehension alone (without using the map function)?</p>
| 7 | 2009-08-20T14:42:39Z | 1,306,693 | <p>Map is certainly a much cleaner way of doing what you want. You can nest the list comprehensions though, maybe that's what you're after?</p>
<pre><code>[[ix.upper() for ix in x] for x in nested_list]
</code></pre>
| 1 | 2009-08-20T14:47:07Z | [
"python",
"list",
"list-comprehension",
"nested-lists",
"map-function"
] |
Mapping a nested list with List Comprehension in Python? | 1,306,670 | <p>I have the following code which I use to map a nested list in Python to produce a list with the same structure. </p>
<pre><code>>>> nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
>>> [map(str.upper, x) for x in nested_list]
[['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
</code></pre>
<p>Can this be done with list comprehension alone (without using the map function)?</p>
| 7 | 2009-08-20T14:42:39Z | 1,309,044 | <p>Other posters have given the answer, but whenever I'm having trouble wrapping my head around a functional construct, I swallow my pride and spell it out longhand with explicitly non-optimal methods and/or objects. You said you wanted to end up with a generator, so:</p>
<pre><code>for xs in n_l:
    def doUpper(l):
        for x in l:
            yield x.upper()
    yield doUpper(xs)

for xs in n_l:
    yield (x.upper() for x in xs)

((x.upper() for x in xs) for xs in n_l)
</code></pre>
<p>Sometimes it's cleaner to keep one of the longhand versions. For me, map and reduce sometimes make it more obvious, but Python idioms might be more obvious for others.</p>
| 1 | 2009-08-20T21:50:24Z | [
"python",
"list",
"list-comprehension",
"nested-lists",
"map-function"
] |
Mapping a nested list with List Comprehension in Python? | 1,306,670 | <p>I have the following code which I use to map a nested list in Python to produce a list with the same structure. </p>
<pre><code>>>> nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
>>> [map(str.upper, x) for x in nested_list]
[['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
</code></pre>
<p>Can this be done with list comprehension alone (without using the map function)?</p>
| 7 | 2009-08-20T14:42:39Z | 15,419,254 | <p>Remember the Zen of Python:</p>
<blockquote>
<p>There is generally more than one -- and probably <em>several</em> -- obvious ways to do it.**</p>
</blockquote>
<p>** Note: Edited for accuracy.</p>
<p>Anyway, I prefer map.</p>
<pre><code>from functools import partial
nested_list = map( partial(map, str.upper), nested_list )
</code></pre>
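<p>On Python 3, <code>map</code> returns a lazy iterator, so a list-producing version of the same idea needs explicit <code>list()</code> calls — a sketch:</p>

```python
from functools import partial

nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
upper_all = partial(map, str.upper)

# list() is needed on Python 3 because map is lazy there.
result = [list(xs) for xs in map(upper_all, nested_list)]

assert result == [['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
```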
| 1 | 2013-03-14T20:12:07Z | [
"python",
"list",
"list-comprehension",
"nested-lists",
"map-function"
] |
Mapping a nested list with List Comprehension in Python? | 1,306,670 | <p>I have the following code which I use to map a nested list in Python to produce a list with the same structure. </p>
<pre><code>>>> nested_list = [['Hello', 'World'], ['Goodbye', 'World']]
>>> [map(str.upper, x) for x in nested_list]
[['HELLO', 'WORLD'], ['GOODBYE', 'WORLD']]
</code></pre>
<p>Can this be done with list comprehension alone (without using the map function)?</p>
 | 7 | 2009-08-20T14:42:39Z | 26,133,679 | <p>Here is a solution for a <strong><em>nested list of arbitrary depth</em></strong>:</p>
<pre><code>def map_nlist(nlist, fun=lambda x: x*2):
    new_list = []
    for i in range(len(nlist)):
        if isinstance(nlist[i], list):
            new_list += [map_nlist(nlist[i], fun)]
        else:
            new_list += [fun(nlist[i])]
    return new_list
</code></pre>
<p>If you want to upper-case all your list elements, just type:</p>
<pre><code>In [26]: nested_list = [['Hello', 'World'], ['Goodbye', [['World']]]]
In [27]: map_nlist(nested_list,fun=str.upper)
Out[27]: [['HELLO', 'WORLD'], ['GOODBYE', [['WORLD']]]]
</code></pre>
<p>And more important, this recursive function can do more than this!</p>
<p>I am new to python, feel free to discuss!</p>
| 1 | 2014-10-01T03:31:47Z | [
"python",
"list",
"list-comprehension",
"nested-lists",
"map-function"
] |
Python __str__ versus __unicode__ | 1,307,014 | <p>Is there a Python convention for when you should implement <code>__str__()</code> versus <code>__unicode__()</code>? I've seen classes override <code>__unicode__()</code> more frequently than <code>__str__()</code>, but it doesn't appear to be consistent. Are there specific rules for when it is better to implement one versus the other? Is it necessary/good practice to implement both?</p>
| 148 | 2009-08-20T15:34:07Z | 1,307,177 | <p>With the world getting smaller, chances are that any string you encounter will contain Unicode eventually. So for any new apps, you should at least provide <code>__unicode__()</code>. Whether you also override <code>__str__()</code> is then just a matter of taste.</p>
| 8 | 2009-08-20T16:00:08Z | [
"python",
"string",
"unicode",
"conventions"
] |
Python __str__ versus __unicode__ | 1,307,014 | <p>Is there a Python convention for when you should implement <code>__str__()</code> versus <code>__unicode__()</code>? I've seen classes override <code>__unicode__()</code> more frequently than <code>__str__()</code>, but it doesn't appear to be consistent. Are there specific rules for when it is better to implement one versus the other? Is it necessary/good practice to implement both?</p>
| 148 | 2009-08-20T15:34:07Z | 1,307,209 | <p>If I didn't especially care about micro-optimizing stringification for a given class I'd always implement <code>__unicode__</code> only, as it's more general. When I do care about such minute performance issues (which is the exception, not the rule), having <code>__str__</code> only (when I can prove there never will be non-ASCII characters in the stringified output) or both (when both are possible), might help.</p>
<p>These I think are solid principles, but in practice it's very common to KNOW there will be nothing but ASCII characters without doing effort to prove it (e.g. the stringified form only has digits, punctuation, and maybe a short ASCII name;-) in which case it's quite typical to move on directly to the "just <code>__str__</code>" approach (but if a programming team I worked with proposed a local guideline to avoid that, I'd be +1 on the proposal, as it's easy to err in these matters AND "premature optimization is the root of all evil in programming";-).</p>
| 14 | 2009-08-20T16:04:22Z | [
"python",
"string",
"unicode",
"conventions"
] |
Python __str__ versus __unicode__ | 1,307,014 | <p>Is there a Python convention for when you should implement <code>__str__()</code> versus <code>__unicode__()</code>? I've seen classes override <code>__unicode__()</code> more frequently than <code>__str__()</code>, but it doesn't appear to be consistent. Are there specific rules for when it is better to implement one versus the other? Is it necessary/good practice to implement both?</p>
| 148 | 2009-08-20T15:34:07Z | 1,307,210 | <p><code>__str__()</code> is the old method -- it returns bytes. <code>__unicode__()</code> is the new, preferred method -- it returns characters. The names are a bit confusing, but in 2.x we're stuck with them for compatibility reasons. Generally, you should put all your string formatting in <code>__unicode__()</code>, and create a stub <code>__str__()</code> method:</p>
<pre><code>def __str__(self):
    return unicode(self).encode('utf-8')
</code></pre>
<p>In 3.0, <code>str</code> contains characters, so the same methods are named <code>__bytes__()</code> and <code>__str__()</code>. These behave as expected.</p>
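<p>A small Python 3 sketch of that pattern:</p>

```python
class Greeting:
    def __str__(self):
        return 'caf\u00e9'                # text (str)

    def __bytes__(self):
        return str(self).encode('utf-8')  # bytes

g = Greeting()
assert str(g) == 'caf\u00e9'
assert bytes(g) == b'caf\xc3\xa9'
```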
| 186 | 2009-08-20T16:04:34Z | [
"python",
"string",
"unicode",
"conventions"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
    si = [val for val in v1 if val in v2]
    n = len(si)
    if n==0: return 0.0
    sum1 = 0.0
    sum2 = 0.0
    sum1_sq = 0.0
    sum2_sq = 0.0
    p_sum = 0.0
    for v in si:
        val_1 = v1[v]
        val_2 = v2[v]
        sum1+=val_1
        sum2+=val_2
        sum1_sq+=pow(val_1,2)
        sum2_sq+=pow(val_2,2)
        p_sum+=val_1*val_2
    # Calculate Pearson score
    num = p_sum-(sum1*sum2/n)
    temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
    if temp < 0.0:
        temp = -temp
    den = sqrt(temp)
    if den==0: return 1.0
    r = num/den
    return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,307,102 | <p>If the inputs to any of your math functions are fairly constrained, you can use a lookup table instead of the math function. This can earn you some performance (speed) at the cost of extra memory to store the table.</p>
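As a sketch of the idea (the table size and the integer-input assumption are mine, not the answer's):

```python
import math

# Assumption: sqrt inputs are small non-negative ints; 1000 is an arbitrary cutoff
_SQRT_TABLE = [math.sqrt(i) for i in range(1000)]

def table_sqrt(n):
    # O(1) list lookup inside the table; fall back to math.sqrt outside it
    if 0 <= n < len(_SQRT_TABLE):
        return _SQRT_TABLE[n]
    return math.sqrt(n)

print(table_sqrt(9))  # 3.0
```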
| 0 | 2009-08-20T15:47:28Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,307,113 | <p>I'm not sure if this holds in Python, but calculating a sqrt is a processor-intensive operation.</p>
<p>You might go for a fast approximation such as <a href="http://en.wikipedia.org/wiki/Methods%5Fof%5Fcomputing%5Fsquare%5Froots" rel="nofollow">Newton's method</a>.</p>
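For illustration only (this is a generic Babylonian/Newton iteration, not a claim that it beats math.sqrt in CPython):

```python
def newton_sqrt(x, tol=1e-12):
    """Newton's method for sqrt(x); assumes x >= 0."""
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    while True:
        better = 0.5 * (guess + x / guess)
        # Relative tolerance, so large inputs also terminate
        if abs(better - guess) < tol * better:
            return better
        guess = better

print(newton_sqrt(2.0))  # ~1.4142135623730951
```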
| 0 | 2009-08-20T15:49:11Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,307,161 | <p>The real speed increase would be gained by moving to numpy or scipy. Short of that, there are microoptimizations: e.g. <code>x*x</code> is faster than <code>pow(x,2)</code>; you could extract the values at the same time as the keys by doing, instead of:</p>
<pre><code>si = [val for val in v1 if val in v2]
</code></pre>
<p>something like</p>
<pre><code>vs = [ (v1[val],v2[val]) for val in v1 if val in v2]
</code></pre>
<p>and then</p>
<pre><code>sum1 = sum(x for x, y in vs)
</code></pre>
<p>and so on; whether each of these brings a time advantage needs microbenchmarking. Depending on how you're using these coefficients, returning the square would save you a sqrt (that's a similar idea to using squares of distances between points, in geometry, rather than the distances themselves, and for the same reason -- it saves you a sqrt; which makes sense because the coefficient IS a distance, kinda...;-)</p>
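Pulling those micro-optimisations together — a sketch only (it assumes float values, so that / is true division as in Python 3, and mirrors the question's return 1.0 on a zero denominator):

```python
from math import sqrt

def pearson_opt(v1, v2):
    # Pair the values while intersecting the keys, as suggested above
    vs = [(v1[k], v2[k]) for k in v1 if k in v2]
    n = len(vs)
    if n == 0:
        return 0.0
    sum1 = sum(x for x, _ in vs)
    sum2 = sum(y for _, y in vs)
    sum1_sq = sum(x * x for x, _ in vs)  # x*x rather than pow(x, 2)
    sum2_sq = sum(y * y for _, y in vs)
    p_sum = sum(x * y for x, y in vs)
    num = p_sum - sum1 * sum2 / n
    temp = max((sum1_sq - sum1 * sum1 / n) * (sum2_sq - sum2 * sum2 / n), 0.0)
    if temp:
        return num / sqrt(temp)
    return 1.0
```

Returning the squared coefficient (num*num/temp) instead would save the sqrt entirely, at the cost of losing the sign.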
| 4 | 2009-08-20T15:57:29Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,307,233 | <p>I'd suggest changing:</p>
<pre><code>[val for val in v1 if val in v2]
</code></pre>
<p>to</p>
<pre><code>set(v1) & set(v2)
</code></pre>
<p>do</p>
<pre><code>if not n: return 0.0 # and similar for den
</code></pre>
<p>instead of</p>
<pre><code>if n == 0: return 0.0
</code></pre>
<p>and it's worth replacing the last 6 lines with:</p>
<pre><code>try:
return num / sqrt(abs(temp))
except ZeroDivisionError:
return 1.0
</code></pre>
| 1 | 2009-08-20T16:08:11Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,309,956 | <p>Since it looks like you're doing quite a bit of numeric computation you should give <strong><a href="http://psyco.sourceforge.net/" rel="nofollow">Psyco</a></strong> a shot. It's a JIT compiler that analyzes running code and optimizes certain operations. Install it, then at the top of your file put:</p>
<pre><code>try:
import psyco
psyco.full()
except ImportError:
pass
</code></pre>
<p>This will enable Psyco's JIT and should speed up your code somewhat, for free :) (actually not, it takes up more memory)</p>
| 1 | 2009-08-21T03:24:43Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,312,266 | <p>I'll post what I've got so far as an answer, to differentiate it from the question. This is a combination of some of the techniques described above that seem to have given the best improvements so far.</p>
<pre><code>def pearson(v1,v2):
vs = [(v1[val],v2[val]) for val in v1 if val in v2]
n = len(vs)
if n==0: return 0.0
sum1,sum2,sum1_sq,sum2_sq,p_sum = 0.0, 0.0, 0.0, 0.0, 0.0
for v1,v2 in vs:
sum1+=v1
sum2+=v2
sum1_sq+=v1*v1
sum2_sq+=v2*v2
p_sum+=v1*v2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = max((sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n),0)
if temp:
return num / sqrt(temp)
return 1.0
</code></pre>
<p>Edit: It looks like psyco gives a 15% improvement for this version, which isn't massive but is enough to justify its use.</p>
| 0 | 2009-08-21T14:23:36Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 1,313,774 | <p>If you can use scipy, you could use the pearson function: <a href="http://www.scipy.org/doc/api%5Fdocs/SciPy.stats.stats.html#pearsonr" rel="nofollow">http://www.scipy.org/doc/api%5Fdocs/SciPy.stats.stats.html#pearsonr</a></p>
<p>Or you could copy/paste the code (it has a liberal license) from <a href="http://svn.scipy.org/svn/scipy/trunk/scipy/stats/stats.py" rel="nofollow">http://svn.scipy.org/svn/scipy/trunk/scipy/stats/stats.py</a> (search for <code>def pearson()</code>).
In the code <code>np</code> is just numpy (the code does <code>import numpy as np</code>).</p>
| 2 | 2009-08-21T19:19:26Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Pearson Similarity Score, how can I optimise this further? | 1,307,016 | <p>I have an implementation of Pearson's similarity score for comparing two dictionaries of values. More time is spent in this method than anywhere else (potentially many millions of calls), so this is clearly the critical method to optimise.</p>
<p>Even the slightest optimisation could have a big impact on my code, so I'm keen to explore even the smallest improvements.</p>
<p>Here's what I have so far:</p>
<pre><code>def simple_pearson(v1,v2):
si = [val for val in v1 if val in v2]
n = len(si)
if n==0: return 0.0
sum1 = 0.0
sum2 = 0.0
sum1_sq = 0.0
sum2_sq = 0.0
p_sum = 0.0
for v in si:
val_1 = v1[v]
val_2 = v2[v]
sum1+=val_1
sum2+=val_2
sum1_sq+=pow(val_1,2)
sum2_sq+=pow(val_2,2)
p_sum+=val_1*val_2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = (sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n)
if temp < 0.0:
temp = -temp
den = sqrt(temp)
if den==0: return 1.0
r = num/den
return r
</code></pre>
| 2 | 2009-08-20T15:34:27Z | 2,280,469 | <p>Scipy is the fastest!</p>
<p>I have done some tests with the code above and also with a version I found on my comp; see below for the results and the code:</p>
<pre>
pearson 14.7597990757
sim_pearson 15.6806837987
scipy:pearsonr 0.451986019188
</pre>
<pre>
try:
import psyco
psyco.full()
except ImportError:
pass
from math import sqrt
def sim_pearson(set1, set2):
si={}
for item in set1:
if item in set2:
si[item] = 1
#number of elements
n = len(si)
#if none common, return 0 similarity
if n == 0: return 0
#add up all the preferences
sum1 = sum([set1[item] for item in si])
sum2 = sum([set2[item] for item in si])
#sum up the squares
sum_sq1 = sum([pow(set1[item], 2) for item in si])
sum_sq2 = sum([pow(set2[item], 2) for item in si])
#sum up the products
sum_p = sum([set1[item] * set2[item] for item in si])
nom = sum_p - ((sum1 * sum2) / n )
den = sqrt( (sum_sq1 - (sum1)**2 / n) * (sum_sq2 - (sum2)**2 / n) )
if den==0: return 0
return nom/den
# from http://stackoverflow.com/questions/1307016/pearson-similarity-score-how-can-i-optimise-this-further
def pearson(v1, v2):
vs = [(v1[val],v2[val]) for val in v1 if val in v2]
n = len(vs)
if n==0: return 0.0
sum1,sum2,sum1_sq,sum2_sq,p_sum = 0.0, 0.0, 0.0, 0.0, 0.0
for v1,v2 in vs:
sum1+=v1
sum2+=v2
sum1_sq+=v1*v1
sum2_sq+=v2*v2
p_sum+=v1*v2
# Calculate Pearson score
num = p_sum-(sum1*sum2/n)
temp = max((sum1_sq-pow(sum1,2)/n) * (sum2_sq-pow(sum2,2)/n),0)
if temp:
return num / sqrt(temp)
return 1.0
if __name__ == "__main__":
import timeit
tsetup = """
from random import randrange
from __main__ import pearson, sim_pearson
from scipy.stats import pearsonr
v1 = [randrange(0,1000) for x in range(1000)]
v2 = [randrange(0,1000) for x in range(1000)]
#gc.enable()
"""
t1 = timeit.Timer(stmt="pearson(v1,v2)", setup=tsetup)
t2 = timeit.Timer(stmt="sim_pearson(v1,v2)", setup=tsetup)
t3 = timeit.Timer(stmt="pearsonr(v1,v2)", setup=tsetup)
tt = 1000
print 'pearson', t1.timeit(tt)
print 'sim_pearson', t2.timeit(tt)
print 'scipy:pearsonr', t3.timeit(tt)
</pre>
| 2 | 2010-02-17T12:22:13Z | [
"python",
"optimization",
"similarity",
"pearson"
] |
Dynamic list slicing | 1,307,019 | <p>Good day code knights,</p>
<p>I have a tricky problem that I cannot see a simple solution for. And the history of humankind states that there is a simple solution for everything (excluding buying presents).</p>
<p>Here is the problem:</p>
<p>I need an algorithm that takes multidimensional lists and a filter dictionary, processes them and returns lists based on the filters.</p>
<p>For example:</p>
<pre><code>Bathymetry ('x', 'y')=(182, 149) #notation for (dimensions)=(size)
Chl ('time', 'z', 'y', 'x')=(4, 31, 149, 182)
filters {'x':(0,20), 'y':(3), 'z':(1,2), time:()} #no filter stands for all values
</code></pre>
<p>Would return:</p>
<pre><code>readFrom.variables['Bathymetry'][0:21, 3]
readFrom.variables['Chl'][:, 1:3, 3, 0:21]
</code></pre>
<p>I was thinking about a for loop for the dimensions, reading the filters from the filter list but I cannot get my head around actually passing the attributes to the slicing machine.</p>
<p>Any help much appreciated.</p>
| 0 | 2009-08-20T15:34:45Z | 1,307,112 | <p>Something like the following should work:</p>
<pre><code>def doit(nam, filters):
alldims = []
for dimname in getDimNames(nam):
filt = filters.get(dimname, ())
howmany = len(filt)
if howmany == 0:
sliciflt = slice()
elif howmany == 1:
sliciflt = filt[0]
elif howmany in (2, 3):
sliciflt = slice(*filt)
else:
raise RuntimeError("%d items in slice for dim %r (%r)!"
% (howmany, dimname, nam))
alldims.append(sliciflt)
return readFrom.variables[nam][tuple(alldims)]
</code></pre>
| 2 | 2009-08-20T15:48:57Z | [
"python",
"algorithm",
"list",
"multidimensional-array"
] |
Dynamic list slicing | 1,307,019 | <p>Good day code knights,</p>
<p>I have a tricky problem that I cannot see a simple solution for. And the history of humankind states that there is a simple solution for everything (excluding buying presents).</p>
<p>Here is the problem:</p>
<p>I need an algorithm that takes multidimensional lists and a filter dictionary, processes them and returns lists based on the filters.</p>
<p>For example:</p>
<pre><code>Bathymetry ('x', 'y')=(182, 149) #notation for (dimensions)=(size)
Chl ('time', 'z', 'y', 'x')=(4, 31, 149, 182)
filters {'x':(0,20), 'y':(3), 'z':(1,2), time:()} #no filter stands for all values
</code></pre>
<p>Would return:</p>
<pre><code>readFrom.variables['Bathymetry'][0:21, 3]
readFrom.variables['Chl'][:, 1:3, 3, 0:21]
</code></pre>
<p>I was thinking about a for loop for the dimensions, reading the filters from the filter list but I cannot get my head around actually passing the attributes to the slicing machine.</p>
<p>Any help much appreciated.</p>
| 0 | 2009-08-20T15:34:45Z | 1,307,117 | <p>I'm not sure I understood your question. But I think the <code>slice</code> object is what you are looking for:</p>
<p>First, instead of an empty tuple, use None to include all values in time:</p>
<pre><code>filters= {'x':(0,20), 'y':(3), 'z':(1,2), 'time':None}
</code></pre>
<p>Then build a slice dictionary like this:</p>
<pre><code>d = dict(
(k, slice(*v) if isinstance(v, tuple) else slice(v))
for k, v in filters.iteritems()
)
</code></pre>
<p>Here is the output:</p>
<pre><code>{
'y': slice(None, 3, None),
'x': slice(0, 20, None),
'z': slice(1, 2, None),
'time': slice(None, None, None)
}
</code></pre>
<p>Then you can use the slice objects to extract the range from the list</p>
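A quick sketch of that last step on a plain 1-D list (with a NumPy array you would index with a tuple of these slices, one per dimension; items() here is the Python 3 spelling of iteritems()):

```python
filters = {'x': (0, 20), 'y': 3, 'time': None}

# Same conversion as above: tuples -> slice(start, stop), scalars -> slice(stop)
d = dict(
    (k, slice(*v) if isinstance(v, tuple) else slice(v))
    for k, v in filters.items()
)

row = list(range(30))
print(row[d['x']])     # elements 0..19
print(row[d['time']])  # slice(None) keeps everything
```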
| 3 | 2009-08-20T15:50:34Z | [
"python",
"algorithm",
"list",
"multidimensional-array"
] |
How do I programmatically pull lists/arrays of (itunes urls to) apps in the iphone app store? | 1,307,322 | <p>I'd like to know how to programmatically pull lists of apps from the iphone app store. I'd code this in python (via the google app engine) or in an iphone app. My goal would be to select maybe 5 of them and present them to the user (for instance a top 5 kind of thing, or advanced filtering or queries).</p>
| 0 | 2009-08-20T16:21:38Z | 1,307,656 | <p>Unfortunately the only API that seems to be around for Apple's app store is a commercial offering from ABTO; nobody seems to have developed a free one. I'm afraid you'll have to resort to "screen scraping" -- urlget things, use beautifulsoup or the like for interpreting the HTML you get, and be ready to fix breakages whenever Apple tweaks their formats &c. It seems Apple has no interest in making such a thing available to developers (although as far as I can tell they're not actively fighting against it either; they appear to just not care).</p>
| 1 | 2009-08-20T17:21:58Z | [
"python",
"iphone",
"arrays",
"google-app-engine",
"app-store"
] |
Python test framework with support of non-fatal failures | 1,307,367 | <p>I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework.
In py.test or nose I can't see something like the EXPECT macros I know from google testing framework.
I'd like to make several assertions in one test while not aborting the test at the first failure.
Am I missing something in these frameworks or does this not work?
Does anybody have suggestions for python test frameworks usable for automated system tests?</p>
| 4 | 2009-08-20T16:28:38Z | 1,307,443 | <p>nose will only abort on the first failure if you pass the <code>-x</code> option at the command line.</p>
<p>test.py:</p>
<pre><code>def test1():
assert False
def test2():
assert False
</code></pre>
<p>without -x option:</p>
<pre>
C:\temp\py>C:\Python26\Scripts\nosetests.exe test.py
FF
======================================================================
FAIL: test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
self.test(*self.arg)
File "C:\temp\py\test.py", line 2, in test1
assert False
AssertionError
======================================================================
FAIL: test.test2
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
self.test(*self.arg)
File "C:\temp\py\test.py", line 5, in test2
assert False
AssertionError
----------------------------------------------------------------------
Ran 2 tests in 0.031s
FAILED (failures=2)
</pre>
<p>with -x option:</p>
<pre>
C:\temp\py>C:\Python26\Scripts\nosetests.exe test.py -x
F
======================================================================
FAIL: test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
self.test(*self.arg)
File "C:\temp\py\test.py", line 2, in test1
assert False
AssertionError
----------------------------------------------------------------------
Ran 1 test in 0.047s
FAILED (failures=1)
</pre>
<p>You might want to consider reviewing <a href="http://somethingaboutorange.com/mrl/projects/nose/0.11.1/testing.html" rel="nofollow">the nose documentation</a>.</p>
| -1 | 2009-08-20T16:41:49Z | [
"python",
"testing",
"assert",
"nose"
] |
Python test framework with support of non-fatal failures | 1,307,367 | <p>I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework.
In py.test or nose I can't see something like the EXPECT macros I know from google testing framework.
I'd like to make several assertions in one test while not aborting the test at the first failure.
Am I missing something in these frameworks or does this not work?
Does anybody have suggestions for python test frameworks usable for automated system tests?</p>
| 4 | 2009-08-20T16:28:38Z | 1,307,672 | <p>I was wanting something similar for functional testing that I'm doing using nose. I eventually came up with this:</p>
<pre><code>def raw_print(str, *args):
out_str = str % args
sys.stdout.write(out_str)
class DeferredAsserter(object):
def __init__(self):
self.broken = False
def assert_equal(self, expected, actual):
outstr = '%s == %s...' % (expected, actual)
raw_print(outstr)
try:
assert expected == actual
except AssertionError:
raw_print('FAILED\n\n')
self.broken = True
except Exception, e:
raw_print('ERROR\n')
traceback.print_exc()
self.broken = True
else:
raw_print('PASSED\n\n')
def invoke(self):
assert not self.broken
</code></pre>
<p>In other words, it's printing out strings indicating if a test passed or failed. At the end of the test, you call the invoke method which actually does the <em>real</em> assertion. It's definitely not preferable, but I haven't seen a Python testing framework that can handle this kind of testing. Nor have I gotten around to figuring out how to write a nose plugin to do this kind of thing. :-/</p>
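A stripped-down sketch of the same idea (names and details are mine), showing how failures accumulate and only invoke() actually raises:

```python
class SoftAsserter(object):
    """Minimal variant: record failures, raise a single assert at the end."""
    def __init__(self):
        self.failures = []

    def assert_equal(self, expected, actual):
        if expected != actual:
            self.failures.append('%r != %r' % (expected, actual))

    def invoke(self):
        assert not self.failures, '; '.join(self.failures)

soft = SoftAsserter()
soft.assert_equal(1, 1)      # passes silently
soft.assert_equal(2, 3)      # recorded, but execution continues
soft.assert_equal('a', 'b')  # also recorded
try:
    soft.invoke()            # only now does the test actually fail
    raised = False
except AssertionError:
    raised = True
print(raised, len(soft.failures))  # True 2
```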
| 2 | 2009-08-20T17:25:31Z | [
"python",
"testing",
"assert",
"nose"
] |
Python test framework with support of non-fatal failures | 1,307,367 | <p>I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework.
In py.test or nose I can't see something like the EXPECT macros I know from google testing framework.
I'd like to make several assertions in one test while not aborting the test at the first failure.
Am I missing something in these frameworks or does this not work?
Does anybody have suggestions for python test frameworks usable for automated system tests?</p>
| 4 | 2009-08-20T16:28:38Z | 1,307,693 | <p>You asked for suggestions so I'll suggest <a href="http://code.google.com/p/robotframework/" rel="nofollow">robot framework</a>. </p>
| 1 | 2009-08-20T17:28:36Z | [
"python",
"testing",
"assert",
"nose"
] |
Python test framework with support of non-fatal failures | 1,307,367 | <p>I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework.
In py.test or nose I can't see something like the EXPECT macros I know from google testing framework.
I'd like to make several assertions in one test while not aborting the test at the first failure.
Am I missing something in these frameworks or does this not work?
Does anybody have suggestions for python test frameworks usable for automated system tests?</p>
| 4 | 2009-08-20T16:28:38Z | 1,308,837 | <p>Oddly enough it sounds like you're looking for something like my <a href="http://bitbucket.org/jimd/claft/" rel="nofollow"><code>claft</code></a> (command line and filter tester). Something like it but far more mature.</p>
<p><code>claft</code> is (so far) just a toy I wrote to help students with programming exercises. The idea is to provide the exercises with simple configuration files that represent the program's requirements in terms which are reasonably human readable (and declarative rather than programmatic) while also being suitable for automated testing.</p>
<p><code>claft</code> runs all the defined tests, supplying arguments and inputs to each, checking return codes, and matching output (<code>stdout</code>) and error messages (<code>stderr</code>) against regular expression patterns. It collects all the failures in a list and prints the whole list at the end of each suite.</p>
<p>It does NOT yet do arbitrary dialogs of input/output sequences. So far it just feeds data in then reads all data/errors out. It also doesn't implement timeouts and, in fact, doesn't even capture failed execute attempts. (I did say it's just a toy, so far, didn't I?). I also haven't yet implemented support for Setup, Teardown, and External Check scripts (though I have plans to do so).</p>
<p>Bryan's suggestion of the "robot framework" might be better for your needs; though a quick glance through it suggests that it's considerably more involved than I want for my purposes. (I need to keep things simple enough that students new to programming can focus on their exercises and not spend lots of time fighting with setting up their test harness).</p>
<p>You're welcome to look at <code>claft</code> and use it or derive your own solution therefrom (it's BSD licensed). Obviously you'd be welcome to contribute back. (It's on [bitbucket]:(http://www.bitbucket.org/) so you can use Mercurial to clone, and fork your own repository ... and submit a "pull request" if you ever want me to look at merging your changes back into my repo).</p>
<p>Then again perhaps I'm misreading your question.</p>
| 1 | 2009-08-20T21:10:09Z | [
"python",
"testing",
"assert",
"nose"
] |
Python test framework with support of non-fatal failures | 1,307,367 | <p>I'm evaluating "test frameworks" for automated system tests; so far I'm looking for a python framework.
In py.test or nose I can't see something like the EXPECT macros I know from google testing framework.
I'd like to make several assertions in one test while not aborting the test at the first failure.
Am I missing something in these frameworks or does this not work?
Does anybody have suggestions for python test frameworks usable for automated system tests?</p>
| 4 | 2009-08-20T16:28:38Z | 1,314,470 | <p>Why not (in <code>unittest</code>, but this should work in any framework):</p>
<pre><code>class multiTests(MyTestCase):
def testMulti(self, tests):
tests( a == b )
tests( frobnicate())
...
</code></pre>
<p>assuming you implemented MyTestCase so that a function is wrapped into</p>
<pre><code>testlist = []
x.testMulti(testlist.append)
assert all(testlist)
</code></pre>
| 0 | 2009-08-21T22:26:04Z | [
"python",
"testing",
"assert",
"nose"
] |
Python question regarding a server listener | 1,307,371 | <p>I wrote a plug-in for the jetbrains tool teamcity. It is pretty much just a server listener that listens for a build being triggered and outputs some text files with information about different builds, like what triggered it, how many changes there were, etc. After I finished that I wrote a python script that could input info into teamcity while the server is running and kick off a build. I would like to be able to get the artifacts for that build after the build is run, but the problem is I don't know how long it takes each build to run. Sometimes it is 30 sec, other times 30 minutes. </p>
<p>So I am kicking off the build with this line in python.</p>
<pre><code> urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>')
</code></pre>
<p>After the build runs (some indeterminate amount of time) I would like to use this line to get my text file. </p>
<pre><code>urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>')
</code></pre>
<p>Again the problem is I don't know how long to wait before executing the second line. Usually in Java I would do a second thread of sorts that sleeps for a certain amount of time and waits for the build to be done. I am not sure how to do this in python. So if anyone has an idea of either how to do this in python OR a better way to do this I would appreciate it. If I need to explain myself better please let me know.</p>
<p>Grant-</p>
| 0 | 2009-08-20T16:29:03Z | 1,307,578 | <p>Python actually has a <a href="http://docs.python.org/library/threading.html" rel="nofollow">threading system</a> that is fairly similar to Java, so you should be able to use that without much trouble.</p>
<p>But if all you need to do is wait for some predetermined amount of time, use</p>
<pre><code>import time
time.sleep(300) # sleep for 300 seconds
</code></pre>
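A minimal sketch of the Java-style worker thread (the body is a stand-in — it is not real TeamCity polling code):

```python
import threading
import time

result = {}

def wait_for_build():
    # Stand-in for polling the TeamCity artifact URL until the build finishes
    time.sleep(0.1)
    result['artifact'] = 'ourfile.txt'

worker = threading.Thread(target=wait_for_build)
worker.start()
# ... the main script could do other work here ...
worker.join()  # block until the worker finishes, like Thread.join() in Java
print(result['artifact'])  # ourfile.txt
```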
| 0 | 2009-08-20T17:06:40Z | [
"python",
"plugins",
"multithreading",
"teamcity",
"sleep"
] |
Python question regarding a server listener | 1,307,371 | <p>I wrote a plug-in for the jetbrains tool teamcity. It is pretty much just a server listener that listens for a build being triggered and outputs some text files with information about different builds, like what triggered it, how many changes there were, etc. After I finished that I wrote a python script that could input info into teamcity while the server is running and kick off a build. I would like to be able to get the artifacts for that build after the build is run, but the problem is I don't know how long it takes each build to run. Sometimes it is 30 sec, other times 30 minutes. </p>
<p>So I am kicking off the build with this line in python.</p>
<pre><code> urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>')
</code></pre>
<p>After the build runs (some indeterminate amount of time) I would like to use this line to get my text file. </p>
<pre><code>urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>')
</code></pre>
<p>Again the problem is I don't know how long to wait before executing the second line. Usually in Java I would do a second thread of sorts that sleeps for a certain amount of time and waits for the build to be done. I am not sure how to do this in python. So if anyone has an idea of either how to do this in python OR a better way to do this I would appreciate it. If I need to explain myself better please let me know.</p>
<p>Grant-</p>
| 0 | 2009-08-20T16:29:03Z | 1,307,783 | <p>Unless you get notified by having the build server contact <em>you</em>, the only way to do it is to poll. You can either spawn a thread as indicated in other comments, or just have your main script sleep and poll.</p>
<p>Something like:</p>
<pre><code>wait=True
while wait:
url=urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>')
if url.getcode()!=404:
wait=False
else:
time.sleep(60)
</code></pre>
<p>As an alternative, you could use <a href="http://www.cherrypy.org/" rel="nofollow">CherryPy</a>. Then, when the build is done, you could have curl or wget connect to the listening CherryPy server and trigger your app to download the url.</p>
<p>You could also use xmlrpclib to do something similar.</p>
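<p>If you want the same sleep-and-poll idea with an overall timeout instead of looping forever, it can be factored into a small helper. This is a generic sketch: the check function and the artifact URL are placeholders for whatever your TeamCity endpoint actually returns.</p>

```python
import time

def poll_until(check, timeout=1800, interval=30):
    """Call check() until it returns a truthy value or until
    timeout seconds have elapsed; return the value or None."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    return None

# Usage sketch (hypothetical artifact URL, Python 2 urllib):
#   def artifact_ready():
#       url = urllib.urlopen(artifact_url)
#       return url if url.getcode() != 404 else None
#   f = poll_until(artifact_ready)
```

<p>Calling it from the main script keeps things simple; moving the call into a <code>threading.Thread</code> works the same way if you need the script to do other work while waiting.</p>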
| 2 | 2009-08-20T17:47:37Z | [
"python",
"plugins",
"multithreading",
"teamcity",
"sleep"
] |
Python MYSQL update statement | 1,307,378 | <p>I'm trying to get this Python MYSQL update statement correct(With Variables):</p>
<pre><code>cursor.execute ("UPDATE tblTableName SET Year=%s" % Year ", Month=%s" % Month ", Day=%s" % Day ", Hour=%s" % Hour ", Minute=%s" Minute "WHERE Server=%s " % ServerID)
</code></pre>
<p>Any ideas where I'm going wrong?</p>
| 19 | 2009-08-20T16:30:31Z | 1,307,400 | <p>It <a href="http://mysql-python.sourceforge.net/MySQLdb.html">should be</a>:</p>
<pre><code>cursor.execute ("""
UPDATE tblTableName
SET Year=%s, Month=%s, Day=%s, Hour=%s, Minute=%s
WHERE Server=%s
""", (Year, Month, Day, Hour, Minute, ServerID))
</code></pre>
<p>You can <strong>also</strong> do it with basic string manipulation,</p>
<pre><code>cursor.execute ("UPDATE tblTableName SET Year=%s, Month=%s, Day=%s, Hour=%s, Minute=%s WHERE Server='%s' " % (Year, Month, Day, Hour, Minute, ServerID))
</code></pre>
<p>but <strong><a href="http://bobby-tables.com/">this way is discouraged because it leaves you open for SQL Injection</a></strong>. As it's so easy (and similar) to do it the <em>right way</em><sup>tm</sup>. Do it correctly. </p>
<p>The only thing you should be careful about is that some database backends don't follow the same placeholder convention (SQLite comes to mind).</p>
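<p>To see parameter binding in action without a MySQL server, here is the same kind of parametrized <code>UPDATE</code> against the standard-library <code>sqlite3</code> module — note it uses <code>?</code> placeholders rather than <code>%s</code>, which is exactly the backend difference mentioned above (table and values are made up for the demo):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTableName (Server TEXT, Year INT, Month INT)")
conn.execute("INSERT INTO tblTableName VALUES ('srv1', 2008, 1)")

# Values are passed separately from the SQL text, so they are never
# interpolated into the statement (no injection risk).
conn.execute(
    "UPDATE tblTableName SET Year=?, Month=? WHERE Server=?",
    (2009, 8, "srv1"),
)
conn.commit()

row = conn.execute(
    "SELECT Year, Month FROM tblTableName WHERE Server=?", ("srv1",)
).fetchone()
print(row)  # (2009, 8)
```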
| 41 | 2009-08-20T16:33:46Z | [
"python",
"mysql",
"mysql-python",
"sql-parametrized-query"
] |
Python MYSQL update statement | 1,307,378 | <p>I'm trying to get this Python MYSQL update statement correct(With Variables):</p>
<pre><code>cursor.execute ("UPDATE tblTableName SET Year=%s" % Year ", Month=%s" % Month ", Day=%s" % Day ", Hour=%s" % Hour ", Minute=%s" Minute "WHERE Server=%s " % ServerID)
</code></pre>
<p>Any ideas where I'm going wrong?</p>
| 19 | 2009-08-20T16:30:31Z | 1,307,413 | <p>You've got the syntax all wrong:</p>
<pre><code>cursor.execute ("""
UPDATE tblTableName
SET Year=%s, Month=%s, Day=%s, Hour=%s, Minute=%s
WHERE Server=%s
""", (Year, Month, Day, Hour, Minute, ServerID))
</code></pre>
<p>For more, <a href="http://mysql-python.sourceforge.net/MySQLdb.html">read the documentation</a>.</p>
| 32 | 2009-08-20T16:35:46Z | [
"python",
"mysql",
"mysql-python",
"sql-parametrized-query"
] |
Python MYSQL update statement | 1,307,378 | <p>I'm trying to get this Python MYSQL update statement correct(With Variables):</p>
<pre><code>cursor.execute ("UPDATE tblTableName SET Year=%s" % Year ", Month=%s" % Month ", Day=%s" % Day ", Hour=%s" % Hour ", Minute=%s" Minute "WHERE Server=%s " % ServerID)
</code></pre>
<p>Any ideas where I'm going wrong?</p>
| 19 | 2009-08-20T16:30:31Z | 38,802,132 | <p>This is the right way!</p>
<pre><code>import MySQLdb
if __name__ == '__main__':
connect = MySQLdb.connect(host="localhost", port=3306,
user="xxx", passwd="xxx", db='xxx', charset='utf8')
cursor = connect.cursor()
cursor.execute("""
UPDATE tblTableName
SET Year=%s, Month=%s, Day=%s, Hour=%s, Minute=%s
WHERE Server=%s
""", (Year, Month, Day, Hour, Minute, ServerID))
connect.commit()
connect.close()
</code></pre>
<p>P.S. Don't forget <code>connect.commit()</code>, or it won't work.</p>
| 1 | 2016-08-06T08:20:53Z | [
"python",
"mysql",
"mysql-python",
"sql-parametrized-query"
] |
Python MYSQL update statement | 1,307,378 | <p>I'm trying to get this Python MYSQL update statement correct(With Variables):</p>
<pre><code>cursor.execute ("UPDATE tblTableName SET Year=%s" % Year ", Month=%s" % Month ", Day=%s" % Day ", Hour=%s" % Hour ", Minute=%s" Minute "WHERE Server=%s " % ServerID)
</code></pre>
<p>Any ideas where I'm going wrong?</p>
| 19 | 2009-08-20T16:30:31Z | 39,004,054 | <p>Neither of them worked for me for some reason.</p>
<p>I figured out that, for some reason, Python wasn't reading <code>%s</code>. So use <code>(?)</code> instead of <code>%s</code> in your SQL code.</p>
<p>And finally this worked for me.</p>
<pre><code> cursor.execute ("update tablename set columnName = (?) where ID = (?) ",("test4","4"))
cursor.commit()
</code></pre>
| 0 | 2016-08-17T18:40:34Z | [
"python",
"mysql",
"mysql-python",
"sql-parametrized-query"
] |
Django many-to-many relations, and through | 1,307,548 | <p>I want to store which user invited another user to a group... but Django is telling me this is ambiguous and against the rules (which makes sense). </p>
<blockquote>
<p>groups.group: Intermediary model
Group_to_Member has more than one
foreign key to User, which is
ambiguous and is not permitted.</p>
</blockquote>
<p>So how do I do this correctly? Maybe a generic relation might work, but that seems a bit convoluted... Here's how I was approaching it (with unrelated bits removed):</p>
<pre><code>from django.contrib.auth.models import User
class UserGroup(models.Model):
members = models.ManyToManyField(User, through='Group_to_Member')
class UserGroup_to_Member(models.Model):
group = models.ForeignKey(UserGroup)
member = models.ForeignKey(User)
invited_by = models.ForeignKey(User, related_name="group_invited_users")
</code></pre>
<h2>Solution</h2>
<p>Ok so I did a little combination of the answers you guys provided (Thanks!) and things I found on the internet plus my own admittedly meager python-fu:</p>
<pre><code>from django.contrib.auth.models import User
class UserGroup(models.Model):
# notice there is no member object here
... other model data
def add_member(self, **kwargs):
g2m = UserGroup_to_Member(group = self, **kwargs)
g2m.save()
def remove_member(self, member):
g2m = UserGroup_to_Member.objects.get(group=self, member=member)
g2m.delete()
# This is not elegant at all, help please? I'm pretty sure it isn't
# as bad on the database as it looks though.
def get_members(self):
g2ms = UserGroup_to_Member.objects.filter(group=self)
member_ids = [g2m.member.id for g2m in g2ms]
members = User.objects.none()
for id in member_ids:
members = members | User.objects.get(id=id)
return members
class UserGroup_to_Member(models.Model):
group = models.ForeignKey(UserGroup)
member = models.ForeignKey(User)
invited_by = models.ForeignKey(User, related_name="group_invited_users")
</code></pre>
| 10 | 2009-08-20T17:01:55Z | 1,307,689 | <p>You have to manage it yourself:</p>
<pre><code>class MyGroup(models.Model):
name = models.CharField(max_length=100)
class Membership(models.Model):
group = models.ForeignKey(MyGroup)
member = models.ForeignKey(User)
invited_by = models.ForeignKey(User, related_name='invited_set')
</code></pre>
<p>Then instead of <code>group.members.all()</code> you do <code>group.membership_set.all()</code>.</p>
<p>Also, I wouldn't use 'Group' as your model name, as Django already has a Group object.</p>
| 8 | 2009-08-20T17:27:49Z | [
"python",
"django",
"foreign-keys",
"many-to-many"
] |
Django many-to-many relations, and through | 1,307,548 | <p>I want to store which user invited another user to a group... but Django is telling me this is ambiguous and against the rules (which makes sense). </p>
<blockquote>
<p>groups.group: Intermediary model
Group_to_Member has more than one
foreign key to User, which is
ambiguous and is not permitted.</p>
</blockquote>
<p>So how do I do this correctly? Maybe a generic relation might work, but that seems a bit convoluted... Here's how I was approaching it (with unrelated bits removed):</p>
<pre><code>from django.contrib.auth.models import User
class UserGroup(models.Model):
members = models.ManyToManyField(User, through='Group_to_Member')
class UserGroup_to_Member(models.Model):
group = models.ForeignKey(UserGroup)
member = models.ForeignKey(User)
invited_by = models.ForeignKey(User, related_name="group_invited_users")
</code></pre>
<h2>Solution</h2>
<p>Ok so I did a little combination of the answers you guys provided (Thanks!) and things I found on the internet plus my own admittedly meager python-fu:</p>
<pre><code>from django.contrib.auth.models import User
class UserGroup(models.Model):
# notice there is no member object here
... other model data
def add_member(self, **kwargs):
g2m = UserGroup_to_Member(group = self, **kwargs)
g2m.save()
def remove_member(self, member):
g2m = UserGroup_to_Member.objects.get(group=self, member=member)
g2m.delete()
# This is not elegant at all, help please? I'm pretty sure it isn't
# as bad on the database as it looks though.
def get_members(self):
g2ms = UserGroup_to_Member.objects.filter(group=self)
member_ids = [g2m.member.id for g2m in g2ms]
members = User.objects.none()
for id in member_ids:
members = members | User.objects.get(id=id)
return members
class UserGroup_to_Member(models.Model):
group = models.ForeignKey(UserGroup)
member = models.ForeignKey(User)
invited_by = models.ForeignKey(User, related_name="group_invited_users")
</code></pre>
| 10 | 2009-08-20T17:01:55Z | 22,821,162 | <p>It is possible if you are using Django 1.7. </p>
<p>From the docs: <a href="https://docs.djangoproject.com/en/1.7/topics/db/models/#extra-fields-on-many-to-many-relationships" rel="nofollow">https://docs.djangoproject.com/en/1.7/topics/db/models/#extra-fields-on-many-to-many-relationships</a></p>
<blockquote>
<p>In Django 1.6 and earlier, intermediate models containing more than
one foreign key to any of the models involved in the many-to-many
relationship used to be prohibited.</p>
</blockquote>
| 0 | 2014-04-02T19:27:52Z | [
"python",
"django",
"foreign-keys",
"many-to-many"
] |
Python - MYSQL - Select leading zeros | 1,308,038 | <p>When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed.</p>
<p>Is there any way to pull all of the number i.e. including the leading zero?</p>
| 0 | 2009-08-20T18:36:39Z | 1,308,060 | <p>There's almost certainly something in either your query, your table definition, or an ORM you're using that thinks the column is numeric and is converting the results to integers. You'll have to define the column as a string (everywhere!) if you want to preserve leading zeroes.</p>
<p>Edit: <code>ZEROFILL</code> on the server isn't going to cut it. Python treats integer columns as Python integers, and those don't have leading zeroes, period. You'll either have to change the column type to <code>VARCHAR</code>, use something like <code>"%02d" % val</code> in Python, or put a <code>CAST(my_column AS VARCHAR)</code> in the query.</p>
| 3 | 2009-08-20T18:40:11Z | [
"python",
"mysql"
] |
Python - MYSQL - Select leading zeros | 1,308,038 | <p>When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed.</p>
<p>Is there any way to pull all of the number i.e. including the leading zero?</p>
| 0 | 2009-08-20T18:36:39Z | 1,308,068 | <p>What data type is the column? If you want to preserve leading zeros, you should probably use a non-numeric column type such as <code>VARCHAR</code>.</p>
<p>To format numbers with leading zeros in Python, use a format specifier:</p>
<pre><code>>>> print "%02d" % (1,)
01
</code></pre>
| 0 | 2009-08-20T18:41:01Z | [
"python",
"mysql"
] |
Python - MYSQL - Select leading zeros | 1,308,038 | <p>When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed.</p>
<p>Is there any way to pull all of the number i.e. including the leading zero?</p>
| 0 | 2009-08-20T18:36:39Z | 1,308,102 | <p>MySQL supports this in the column definition:</p>
<pre><code>CREATE TABLE MyTable (
i TINYINT(2) ZEROFILL
);
INSERT INTO MyTable (i) VALUES (9);
SELECT * FROM MyTable;
+------+
| i |
+------+
| 09 |
+------+
</code></pre>
| 3 | 2009-08-20T18:47:26Z | [
"python",
"mysql"
] |
Python - MYSQL - Select leading zeros | 1,308,038 | <p>When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed.</p>
<p>Is there any way to pull all of the number i.e. including the leading zero?</p>
| 0 | 2009-08-20T18:36:39Z | 1,308,477 | <p>You could run each query on that column through a function such as:</p>
<pre><code>if len(str(number)) == 1: number = "0" + str(number)
</code></pre>
<p>This would cast the number to a string but as far as I know, Python doesn't allow leading zeroes in its number datatypes. Someone correct me if I'm wrong.</p>
<p>Edit: John Millikin's answer does the same thing as this but is more efficient. Thanks for teaching me about format specifiers!</p>
| 0 | 2009-08-20T20:01:09Z | [
"python",
"mysql"
] |
Python - MYSQL - Select leading zeros | 1,308,038 | <p>When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed.</p>
<p>Is there any way to pull all of the number i.e. including the leading zero?</p>
| 0 | 2009-08-20T18:36:39Z | 12,950,072 | <p>Best option will be using <code>CAST</code> as suggested by <code>Eevee</code> but you have to use <code>CAST ( ur_field AS CHAR(2) )</code> as casting to <code>varchar</code> has this problem as given in this post</p>
<p><a href="http://stackoverflow.com/questions/1873085/how-to-convert-from-varbinary-to-char-varchar-in-mysql">How to convert from varbinary to char/varchar in mysql</a></p>
| 1 | 2012-10-18T08:18:00Z | [
"python",
"mysql"
] |
Calling Python from Objective-C | 1,308,079 | <p>I'm developing a Python/ObjC application and I need to call some methods in my Python classes from ObjC.
I've tried several things with no success. </p>
<ul>
<li>How can I call a Python method from Objective-C?</li>
<li>My Python classes are being instantiated in Interface Builder. How can I call a method from that instance?</li>
</ul>
| 8 | 2009-08-20T18:42:48Z | 1,308,469 | <p>Use PyObjC.</p>
<p>It is included with Leopard & later.</p>
<pre><code>>>> from Foundation import *
>>> a = NSArray.arrayWithObjects_("a", "b", "c", None)
>>> a
(
a,
b,
c
)
>>> a[1]
'b'
>>> a.objectAtIndex_(1)
'b'
>>> type(a)
<objective-c class NSCFArray at 0x7fff708bc178>
</code></pre>
<p>It even works with iPython:</p>
<pre><code>In [1]: from Foundation import *
In [2]: a = NSBundle.allFrameworks()
In [3]: ?a
Type: NSCFArray
Base Class: <objective-c class NSCFArray at 0x1002adf40>
</code></pre>
<p>`</p>
<p>To call from Objective-C into Python, the easiest way is to:</p>
<ul>
<li><p>declare an abstract superclass in Objective-C that contains the API you want to call</p></li>
<li><p>create stub implementations of the methods in the class's @implementation</p></li>
<li><p>subclass the class in Python and provide concrete implementations</p></li>
<li><p>create a factory method on the abstract superclass that creates concrete subclass instances</p></li>
</ul>
<p>I.e.</p>
<pre><code>@interface Abstract : NSObject
- (unsigned int) foo: (NSString *) aBar;
+ newConcrete;
@end
@implementation Abstract
- (unsigned int) foo: (NSString *) aBar { return 42; }
+ newConcrete { return [[NSClassFromString(@"MyConcrete") new] autorelease]; }
@end
.....
class MyConcrete(Abstract):
    def foo_(self, s): return s.length()
.....
x = [Abstract newConcrete];
[x foo: @"bar"];
</code></pre>
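<p>The key move in the Objective-C snippet is resolving the concrete class by name at runtime (<code>NSClassFromString</code>), so the abstract layer never references the Python side directly. The same shape in plain Python, with a registry standing in for the Objective-C runtime (all names here are illustrative):</p>

```python
class Abstract(object):
    registry = {}

    @classmethod
    def register(cls, subclass):
        # Record the subclass under its name, like the ObjC runtime does.
        cls.registry[subclass.__name__] = subclass
        return subclass

    @classmethod
    def new_concrete(cls, name):
        # Analogous to NSClassFromString(name) followed by new/init.
        return cls.registry[name]()

    def foo(self, s):
        return 42  # stub implementation on the abstract class

@Abstract.register
class MyConcrete(Abstract):
    def foo(self, s):
        return len(s)  # concrete override

x = Abstract.new_concrete("MyConcrete")
print(x.foo("bar"))  # 3
```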
| 17 | 2009-08-20T19:58:17Z | [
"python",
"objective-c",
"cocoa"
] |
In Django how to show a list of objects by year | 1,308,169 | <p>I have these models:</p>
<pre><code>class Year(models.Model):
name = models.CharField(max_length=15)
date = models.DateField()
class Period(models.Model):
name = models.CharField(max_length=15)
date = models.DateField()
class Notice(models.Model):
year = models.ForeignKey(Year)
period = models.ForeignKey(Period, blank=True, null=True)
text = models.TextField()
order = models.PositiveSmallIntegerField(default=1)
</code></pre>
<p>And in my view I would like to show all the Notices ordered by year and period but I don't want to repeat the year and/or the period for a notice if they are the same as the previous. I would like this kind of result:</p>
<blockquote>
<p><b>1932-1940</b><br />
<i>Mid-summer</i> </p>
<ul>
<li><p>Text Lorem ipsum from notice 1 ...</p></li>
<li><p>Text Lorem ipsum from notice 2 ...</p></li>
</ul>
<p><i>September</i> </p>
<ul>
<li>Text Lorem ipsum from notice 3 ...</li>
</ul>
<p><b>1950</b><br />
<i>January</i></p>
<ul>
<li>Text Lorem ipsum from notice 4 ... </li>
</ul>
<p>etc.</p>
</blockquote>
<p>I found a solution by looping over all the rows to built nested lists like this:</p>
<pre><code>years = [('1932-1940', [
('Mid-summer', [Notice1, Notice2]),
('September', [Notice3])
]),
('1950', [
('January', [Notice4])
])
]
</code></pre>
<p>Here's the code in the view:</p>
<pre><code>years = []
year = []
period = []
prev_year = ''
prev_period = ''
for notice in Notice.objects.all():
if notice.year != prev_year:
prev_year = notice.year
year = []
years.append((prev_year.name, year))
prev_period = ''
    if notice.period != prev_period:
        prev_period = notice.period
period = []
if prev_period:
name = prev_period.name
else:
name = None
year.append((name, period))
period.append(notice)
</code></pre>
<p>But this is slow and inelegant. What's the good way to do this ?</p>
<p>If I could set some variables inside the template I could only iterate over all the notices and only print the year and period by checking if they are the same as the previous one. But it's not possible to set some some temp variables in templates.</p>
| 1 | 2009-08-20T18:59:43Z | 1,308,261 | <p>Luckily Django has some built-in template tags that will help you. Probably the main one you want is <a href="http://docs.djangoproject.com/en/dev/ref/templates/builtins/#regroup" rel="nofollow">regroup</a>:</p>
<pre><code>{% regroup notices by year as year_list %}
{% for year in year_list %}
  <h2>{{ year.grouper }}</h2>
<ul>
{% for notice in year.list %}
<li>{{ notice.text }}</li>
{% endfor %}
</ul>
{% endfor %}
</code></pre>
<p>There's also <code>{% ifchanged %}</code>, which can help with looping over lists when one value stays the same.</p>
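<p>If you prefer to build the grouping in the view rather than the template, <code>itertools.groupby</code> does the same job as <code>{% regroup %}</code>, as long as the input is already sorted by the grouping key. A sketch with plain objects standing in for model instances:</p>

```python
from itertools import groupby
from operator import attrgetter

class FakeNotice(object):
    """Stand-in for the Notice model, just for the demo."""
    def __init__(self, year, text):
        self.year = year
        self.text = text

notices = [FakeNotice("1932-1940", "n1"), FakeNotice("1932-1940", "n2"),
           FakeNotice("1950", "n3")]

# groupby only merges *adjacent* items, so sort by the key first
# (with a queryset you would use .order_by('year') instead).
notices.sort(key=attrgetter("year"))
year_list = [(year, list(group))
             for year, group in groupby(notices, key=attrgetter("year"))]

for year, group in year_list:
    print(year, [n.text for n in group])
```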
| 2 | 2009-08-20T19:14:18Z | [
"python",
"django"
] |
In Django how to show a list of objects by year | 1,308,169 | <p>I have these models:</p>
<pre><code>class Year(models.Model):
name = models.CharField(max_length=15)
date = models.DateField()
class Period(models.Model):
name = models.CharField(max_length=15)
date = models.DateField()
class Notice(models.Model):
year = models.ForeignKey(Year)
period = models.ForeignKey(Period, blank=True, null=True)
text = models.TextField()
order = models.PositiveSmallIntegerField(default=1)
</code></pre>
<p>And in my view I would like to show all the Notices ordered by year and period but I don't want to repeat the year and/or the period for a notice if they are the same as the previous. I would like this kind of result:</p>
<blockquote>
<p><b>1932-1940</b><br />
<i>Mid-summer</i> </p>
<ul>
<li><p>Text Lorem ipsum from notice 1 ...</p></li>
<li><p>Text Lorem ipsum from notice 2 ...</p></li>
</ul>
<p><i>September</i> </p>
<ul>
<li>Text Lorem ipsum from notice 3 ...</li>
</ul>
<p><b>1950</b><br />
<i>January</i></p>
<ul>
<li>Text Lorem ipsum from notice 4 ... </li>
</ul>
<p>etc.</p>
</blockquote>
<p>I found a solution by looping over all the rows to built nested lists like this:</p>
<pre><code>years = [('1932-1940', [
('Mid-summer', [Notice1, Notice2]),
('September', [Notice3])
]),
('1950', [
('January', [Notice4])
])
]
</code></pre>
<p>Here's the code in the view:</p>
<pre><code>years = []
year = []
period = []
prev_year = ''
prev_period = ''
for notice in Notice.objects.all():
if notice.year != prev_year:
prev_year = notice.year
year = []
years.append((prev_year.name, year))
prev_period = ''
    if notice.period != prev_period:
        prev_period = notice.period
period = []
if prev_period:
name = prev_period.name
else:
name = None
year.append((name, period))
period.append(notice)
</code></pre>
<p>But this is slow and inelegant. What's the good way to do this ?</p>
<p>If I could set some variables inside the template I could only iterate over all the notices and only print the year and period by checking if they are the same as the previous one. But it's not possible to set some some temp variables in templates.</p>
| 1 | 2009-08-20T18:59:43Z | 1,308,291 | <p>Your problem would not actually be that hard if not for the unfortunate denormalization you have going on. I'm going to answer your question ignoring the <code>Year</code> class, because I don't understand how that logic relates to the period logic.</p>
<p>Most simply, in the template, put:</p>
<pre><code>{% for period in periods %}
    {{ period.name }}
    {% for notice in period.notice_set.all %}
        {{ notice.text }}
{% endfor %}
{% endfor %}
</code></pre>
<p>Now that completely leaves out ordering, so if you like, you could define in your <code>Period</code> model:</p>
<pre><code>def order_notices(self):
return self.notice_set.order_by('order')
</code></pre>
<p>Then use</p>
<pre><code>{% for period in periods %}
    {{ period.name }}
    {% for notice in period.order_notices %}
        {{ notice.text }}
{% endfor %}
{% endfor %}
</code></pre>
<p>If you must use years, I strongly suggest defining a method in the <code>Year</code> model of the form</p>
<pre><code>def ordered_periods(self):
... #your logic goes here
    ... # should return an iterable (list or queryset) of periods
</code></pre>
<p>Then in your template:</p>
<pre><code>{% for year in years %}
    {{ year.name }}
    {% for period in year.ordered_periods %}
        {{ period.name }}
        {% for notice in period.order_notices %}
            {{ notice.text }}
        {% endfor %}
    {% endfor %}
{% endfor %}
</code></pre>
<p>EDIT:
Please keep in mind that the secret to all this success is that the template can only call methods that don't take any arguments (except self).</p>
| 1 | 2009-08-20T19:19:48Z | [
"python",
"django"
] |
Do any Python ORMs (SQLAlchemy?) work with Google App Engine? | 1,308,376 | <p>I'd like to use the Python version of App Engine but rather than write my code specifically for the Google Data Store, I'd like to create my models with a generic Python ORM that could be attached to Big Table, or, if I prefer, a regular database at some later time. Is there any Python ORM such as SQLAlchemy that would allow this?</p>
| 10 | 2009-08-20T19:39:35Z | 1,309,598 | <p>Technically this wouldn't be called an ORM (Object Relational Mapper) but a DAL (Database Abstraction Layer). The ORM part is not really interesting for AppEngine as the API already takes care of the object mapping and does some simple relational mapping (see RelationProperty).</p>
<p>Also realize that a DAL will never let you switch between AppEngine's datastore and a "normal" SQL database like MySQL, because they work very differently. It might let you switch between different key-value stores, like Redis, Mongo or Tokyo Cabinet. But as they all have such very different characteristics, I would really think twice before using one.</p>
<p>Lastly, the DAL traditionally sits on top of the DB interface, but with AppEngine's api you can implement your own "stubs" that basically let you use other storage backends on their api. The people at Mongo wrote <a href="http://github.com/mongodb/mongo-appengine-connector/tree/master">one</a> for MongoDB which is very nice. And the dev_appserver comes with a filesystem-based one.</p>
<p>And now to the answer: yes there is one! It's part of <a href="http://web2py.com">web.py</a>. I haven't really tried if for the reasons above, so I can't really say if it's good.</p>
<p>PS. I know Ruby has a nice DAL project for keyvalue stores in the works too, but I can't find it now... Maybe nice to port to Python at some point.</p>
| 7 | 2009-08-21T00:55:34Z | [
"python",
"google-app-engine",
"sqlalchemy",
"orm"
] |
Do any Python ORMs (SQLAlchemy?) work with Google App Engine? | 1,308,376 | <p>I'd like to use the Python version of App Engine but rather than write my code specifically for the Google Data Store, I'd like to create my models with a generic Python ORM that could be attached to Big Table, or, if I prefer, a regular database at some later time. Is there any Python ORM such as SQLAlchemy that would allow this?</p>
| 10 | 2009-08-20T19:39:35Z | 11,325,656 | <p>Nowadays they do since Google has launched Cloud SQL</p>
| 3 | 2012-07-04T08:53:46Z | [
"python",
"google-app-engine",
"sqlalchemy",
"orm"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is all of the code that I'm writing results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over leaving it basically an empty re-named file. What's left in the 'upload_to' path is 2 files, one that is the actual image, and one that is the name of the image,but is empty, and of course the ImageField path is set to the empty file that Django try to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 1,309,172 | <p>This might not be the answer you are looking for, but you can use a <code>CharField</code> to store the path of the file instead of an <code>ImageField</code>. That way you can programmatically associate the uploaded image with the field without recreating the file.</p>
| 1 | 2009-08-20T22:26:14Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is all of the code that I'm writing results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over leaving it basically an empty re-named file. What's left in the 'upload_to' path is 2 files, one that is the actual image, and one that is the name of the image,but is empty, and of course the ImageField path is set to the empty file that Django try to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 1,309,682 | <p>I have some code that fetches an image off the web and stores it in a model. The important bits are:</p>
<pre><code>from django.core.files import File # you need this somewhere
import urllib
# The following actually resides in a method of my model
result = urllib.urlretrieve(image_url) # image_url is a URL to an image
# self.photo is the ImageField
self.photo.save(
os.path.basename(self.url),
File(open(result[0]))
)
self.save()
</code></pre>
<p>That's a bit confusing because it's pulled out of my model and a bit out of context, but the important parts are:</p>
<ul>
<li>The image pulled from the web is <em>not</em> stored in the upload_to folder, it is instead stored as a tempfile by urllib.urlretrieve() and later discarded.</li>
<li>The ImageField.save() method takes a filename (the os.path.basename bit) and a django.core.files.File object.</li>
</ul>
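<p>The <code>os.path.basename</code> call is just deriving a storage filename from the last path component of the URL, which is easy to check in isolation (the URL is made up; a URL with a query string would need <code>urlparse</code> first):</p>

```python
import os.path

url = "http://example.com/images/photo.jpg"  # hypothetical URL
filename = os.path.basename(url)
print(filename)  # photo.jpg
```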
<p>Let me know if you have questions or need clarification.</p>
<p>Edit: for the sake of clarity, here is the model (minus any required import statements):</p>
<pre><code>class CachedImage(models.Model):
url = models.CharField(max_length=255, unique=True)
photo = models.ImageField(upload_to=photo_path, blank=True)
def cache(self):
"""Store image locally if we have a URL"""
if self.url and not self.photo:
result = urllib.urlretrieve(self.url)
self.photo.save(
os.path.basename(self.url),
File(open(result[0]))
)
self.save()
</code></pre>
| 121 | 2009-08-21T01:32:50Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is all of the code that I'm writing results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over leaving it basically an empty re-named file. What's left in the 'upload_to' path is 2 files, one that is the actual image, and one that is the name of the image,but is empty, and of course the ImageField path is set to the empty file that Django try to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 3,323,465 | <p>Just a little remark. tvon answer works but, if you're working on windows, you probably want to open() the file with 'rb'. Like this:</p>
<pre><code>class CachedImage(models.Model):
    url = models.CharField(max_length=255, unique=True)
    photo = models.ImageField(upload_to=photo_path, blank=True)

    def cache(self):
        """Store image locally if we have a URL"""
        if self.url and not self.photo:
            result = urllib.urlretrieve(self.url)
            self.photo.save(
                os.path.basename(self.url),
                File(open(result[0], 'rb'))
            )
            self.save()
</code></pre>
<p>or you'll get your file truncated at the first 0x1A byte. </p>
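To see why binary mode matters regardless of platform or Django, here is a small self-contained sketch (Python 3 syntax, not from the original answer): the PNG file signature itself contains a \r\n pair and the 0x1A byte, so text-mode I/O corrupts it on any platform through universal-newline translation, and on Windows 0x1A is additionally treated as end-of-file.

```python
import os
import tempfile

# The PNG signature deliberately contains b"\r\n" and the DOS EOF byte 0x1A.
payload = b"\x89PNG\r\n\x1a\n rest-of-image"

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(payload)

with open(path, "rb") as f:                      # binary mode: bytes survive intact
    assert f.read() == payload

with open(path, "r", encoding="latin-1") as f:   # text mode mangles the data:
    text = f.read()                              # universal newlines turn \r\n into \n
assert text.encode("latin-1") != payload

os.remove(path)
```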
| 30 | 2010-07-24T01:37:20Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 4,230,210 | <p>What I did was to create my own storage that will just not save the file to the disk:</p>
<pre><code>from django.core.files import File
from django.core.files.storage import FileSystemStorage

class CustomStorage(FileSystemStorage):
    def _open(self, name, mode='rb'):
        return File(open(self.path(name), mode))

    def _save(self, name, content):
        # Here, you should implement how the file is to be saved,
        # e.g. on other machines. In our case, we just return the
        # name and disable any kind of save.
        return name

    def get_available_name(self, name):
        return name
</code></pre>
<p>Then, in my models, for my ImageField, I've used the new custom storage:</p>
<pre><code>from custom_storage import CustomStorage

custom_store = CustomStorage()

class Image(models.Model):
    thumb = models.ImageField(storage=custom_store, upload_to='/some/path')
</code></pre>
| 8 | 2010-11-19T23:08:23Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 5,256,094 | <p>If you want to just "set" the actual filename, without incurring the overhead of loading and re-saving the file (!!), or resorting to using a charfield (!!!), you might want to try something like this --</p>
<pre><code>model_instance.myfile = model_instance.myfile.field.attr_class(model_instance, model_instance.myfile.field, 'my-filename.jpg')
</code></pre>
<p>This will light up your model_instance.myfile.url and all the rest of them just as if you'd actually uploaded the file.</p>
<p>Like @t-stone says, what we really want is to be able to set instance.myfile.path = 'my-filename.jpg', but Django doesn't currently support that.</p>
| 6 | 2011-03-10T06:06:30Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 5,772,151 | <p>You can try:</p>
<pre><code>model.ImageField.path = os.path.join('/Upload', generated_image_path)
</code></pre>
| 0 | 2011-04-24T17:44:09Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 6,566,011 | <p>Here is a method that works well and also lets you convert the file to a given format (avoiding the "cannot write mode P as JPEG" error):</p>
<pre><code>import urllib2

from django.core.files.base import ContentFile
from PIL import Image
from StringIO import StringIO

def download_image(name, image, url):
    input_file = StringIO(urllib2.urlopen(url).read())
    output_file = StringIO()
    img = Image.open(input_file)
    if img.mode != "RGB":
        img = img.convert("RGB")
    img.save(output_file, "JPEG")
    image.save(name + ".jpg", ContentFile(output_file.getvalue()), save=False)
</code></pre>
<p>where image is the Django ImageField, e.g. your_model_instance.image.
Here is a usage example:</p>
<pre><code>p = ProfilePhoto(user=user)
download_image(str(user.id), p.image, image_url)
p.save()
</code></pre>
<p>Hope this helps</p>
| 10 | 2011-07-03T22:47:20Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 12,917,845 | <p>Super easy if the model hasn't been created yet:</p>
<p><strong>First</strong>, copy your image file to the upload path (assumed = <strong>'path/'</strong> in the following snippet).</p>
<p><strong>Second</strong>, use something like:</p>
<pre><code>class Layout(models.Model):
    image = models.ImageField('img', upload_to='path/')

layout = Layout()
layout.image = "path/image.png"
layout.save()
</code></pre>
<p>Tested and working in Django 1.4; it might also work for an existing model.</p>
| 47 | 2012-10-16T15:11:28Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 14,306,661 | <pre><code>class tweet_photos(models.Model):
    upload_path = 'absolute path'
    image = models.ImageField(upload_to=upload_path)
    image_url = models.URLField(null=True, blank=True)

    def save(self, *args, **kwargs):
        if self.image_url:
            import urllib, os
            from urlparse import urlparse
            file_save_dir = self.upload_path
            filename = urlparse(self.image_url).path.split('/')[-1]
            urllib.urlretrieve(self.image_url, os.path.join(file_save_dir, filename))
            self.image = os.path.join(file_save_dir, filename)
            self.image_url = ''
        super(tweet_photos, self).save(*args, **kwargs)
</code></pre>
| 0 | 2013-01-13T18:22:31Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 19,037,233 | <p>OK, if all you need to do is associate the already existing image file path with the ImageField, then this solution may be helpful:</p>
<pre><code>from django.core.files.base import ContentFile

with open('/path/to/already/existing/file', 'rb') as f:
    data = f.read()

# obj.image is the ImageField
obj.image.save('imgfilename.jpg', ContentFile(data))
</code></pre>
<p>Well, to be honest, the already existing image file will not itself be associated with the ImageField; instead, a copy of the file will be created in the upload_to dir as 'imgfilename.jpg', and that copy will be associated with the ImageField.</p>
| 4 | 2013-09-26T19:41:29Z | [
"python",
"django",
"django-models"
] |
Programmatically saving image to Django ImageField | 1,308,386 | <p>Ok, I've tried about near everything and I cannot get this to work.</p>
<ul>
<li>I have a Django model with an ImageField on it</li>
<li>I have code that downloads an image via HTTP (tested and works)</li>
<li>The image is saved directly into the 'upload_to' folder (the upload_to being the one that is set on the ImageField)</li>
<li>All I need to do is associate the already existing image file path with the ImageField</li>
</ul>
<p>I've written this code about 6 different ways.</p>
<p>The problem I'm running into is that all of the code I write results in the following behavior:
(1) Django will make a 2nd file, (2) rename the new file, adding an _ to the end of the file name, then (3) not transfer any of the data over, leaving it basically an empty renamed file. What's left in the 'upload_to' path is 2 files: one that is the actual image, and one that has the name of the image but is empty, and of course the ImageField path is set to the empty file that Django tried to create.</p>
<p>In case that was unclear, I'll try to illustrate:</p>
<pre><code>## Image generation code runs....
/Upload
generated_image.jpg 4kb
## Attempt to set the ImageField path...
/Upload
generated_image.jpg 4kb
generated_image_.jpg 0kb
ImageField.Path = /Upload/generated_image_.jpg
</code></pre>
<p>How can I do this without having Django try to re-store the file? What I'd really like is something to this effect...</p>
<pre><code>model.ImageField.path = generated_image_path
</code></pre>
<p>...but of course that doesn't work.</p>
<p>And yes I've gone through the other questions here like <a href="http://stackoverflow.com/questions/811167/how-to-manually-assign-imagefield-in-django">this one</a> as well as the django doc on <a href="http://docs.djangoproject.com/en/dev/ref/files/file/#django.core.files.File.save">File</a></p>
<p><strong>UPDATE</strong>
After further testing, it only does this behavior when running under Apache on Windows Server. While running under the 'runserver' on XP it does not execute this behavior. </p>
<p>I am stumped.</p>
<p>Here is the code which runs successfully on XP...</p>
<pre><code>f = open(thumb_path, 'r')
model.thumbnail = File(f)
model.save()
</code></pre>
| 139 | 2009-08-20T19:42:39Z | 39,525,265 | <pre><code>class Pin(models.Model):
    """Pin Class"""

    image_link = models.CharField(max_length=255, null=True, blank=True)
    image = models.ImageField(upload_to='images/', blank=True)
    title = models.CharField(max_length=255, null=True, blank=True)
    source_name = models.CharField(max_length=255, null=True, blank=True)
    source_link = models.CharField(max_length=255, null=True, blank=True)
    description = models.TextField(null=True, blank=True)
    tags = models.ForeignKey(Tag, blank=True, null=True)

    def __unicode__(self):
        """Unicode class."""
        return unicode(self.image_link)

    def save(self, *args, **kwargs):
        """Store image locally if we have a URL"""
        if self.image_link and not self.image:
            result = urllib.urlretrieve(self.image_link)
            self.image.save(os.path.basename(self.image_link),
                            File(open(result[0], 'rb')), save=False)
        super(Pin, self).save(*args, **kwargs)
</code></pre>
| 0 | 2016-09-16T06:45:15Z | [
"python",
"django",
"django-models"
] |
How to catch 404 error in urllib.urlretrieve | 1,308,542 | <p>Background: I am using <a href="http://docs.activestate.com/activepython/2.6/python/library/urllib.html"><code>urllib.urlretrieve</code></a>, as opposed to any other function in the <code>urllib*</code> modules, because of the hook function support (see <code>reporthook</code> below) .. which is used to display a textual progress bar. This is Python >=2.6.</p>
<pre><code>>>> urllib.urlretrieve(url[, filename[, reporthook[, data]]])
</code></pre>
<p>However, <code>urlretrieve</code> is so dumb that it leaves no way to detect the status of the HTTP request (eg: was it 404 or 200?).</p>
<pre><code>>>> fn, h = urllib.urlretrieve('http://google.com/foo/bar')
>>> h.items()
[('date', 'Thu, 20 Aug 2009 20:07:40 GMT'),
('expires', '-1'),
('content-type', 'text/html; charset=ISO-8859-1'),
('server', 'gws'),
('cache-control', 'private, max-age=0')]
>>> h.status
''
>>>
</code></pre>
<p>What is the best known way to download a remote HTTP file with hook-like support (to show a progress bar) and decent HTTP error handling?</p>
| 21 | 2009-08-20T20:14:39Z | 1,308,846 | <p>Check out <code>urllib.urlretrieve</code>'s complete code:</p>
<pre><code>def urlretrieve(url, filename=None, reporthook=None, data=None):
    global _urlopener
    if not _urlopener:
        _urlopener = FancyURLopener()
    return _urlopener.retrieve(url, filename, reporthook, data)
</code></pre>
<p>In other words, you can use <a href="http://docs.python.org/library/urllib.html#urllib.FancyURLopener">urllib.FancyURLopener</a> (it's part of the public urllib API). You can override <code>http_error_default</code> to detect 404s:</p>
<pre><code>class MyURLopener(urllib.FancyURLopener):
    def http_error_default(self, url, fp, errcode, errmsg, headers):
        # handle errors the way you'd like to, e.g.:
        raise IOError('HTTP error %s: %s' % (errcode, errmsg))

fn, h = MyURLopener().retrieve(url, reporthook=my_report_hook)
</code></pre>
| 27 | 2009-08-20T21:11:37Z | [
"python",
"http",
"url",
"urllib"
] |
How to catch 404 error in urllib.urlretrieve | 1,308,542 | <p>Background: I am using <a href="http://docs.activestate.com/activepython/2.6/python/library/urllib.html"><code>urllib.urlretrieve</code></a>, as opposed to any other function in the <code>urllib*</code> modules, because of the hook function support (see <code>reporthook</code> below) .. which is used to display a textual progress bar. This is Python >=2.6.</p>
<pre><code>>>> urllib.urlretrieve(url[, filename[, reporthook[, data]]])
</code></pre>
<p>However, <code>urlretrieve</code> is so dumb that it leaves no way to detect the status of the HTTP request (eg: was it 404 or 200?).</p>
<pre><code>>>> fn, h = urllib.urlretrieve('http://google.com/foo/bar')
>>> h.items()
[('date', 'Thu, 20 Aug 2009 20:07:40 GMT'),
('expires', '-1'),
('content-type', 'text/html; charset=ISO-8859-1'),
('server', 'gws'),
('cache-control', 'private, max-age=0')]
>>> h.status
''
>>>
</code></pre>
<p>What is the best known way to download a remote HTTP file with hook-like support (to show a progress bar) and decent HTTP error handling?</p>
| 21 | 2009-08-20T20:14:39Z | 1,308,858 | <p>The URL Opener object's "retrieve" method supports the reporthook and throws an exception on 404.</p>
<p><a href="http://docs.python.org/library/urllib.html#url-opener-objects" rel="nofollow">http://docs.python.org/library/urllib.html#url-opener-objects</a></p>
| 2 | 2009-08-20T21:13:46Z | [
"python",
"http",
"url",
"urllib"
] |
How to catch 404 error in urllib.urlretrieve | 1,308,542 | <p>Background: I am using <a href="http://docs.activestate.com/activepython/2.6/python/library/urllib.html"><code>urllib.urlretrieve</code></a>, as opposed to any other function in the <code>urllib*</code> modules, because of the hook function support (see <code>reporthook</code> below) .. which is used to display a textual progress bar. This is Python >=2.6.</p>
<pre><code>>>> urllib.urlretrieve(url[, filename[, reporthook[, data]]])
</code></pre>
<p>However, <code>urlretrieve</code> is so dumb that it leaves no way to detect the status of the HTTP request (eg: was it 404 or 200?).</p>
<pre><code>>>> fn, h = urllib.urlretrieve('http://google.com/foo/bar')
>>> h.items()
[('date', 'Thu, 20 Aug 2009 20:07:40 GMT'),
('expires', '-1'),
('content-type', 'text/html; charset=ISO-8859-1'),
('server', 'gws'),
('cache-control', 'private, max-age=0')]
>>> h.status
''
>>>
</code></pre>
<p>What is the best known way to download a remote HTTP file with hook-like support (to show a progress bar) and decent HTTP error handling?</p>
| 21 | 2009-08-20T20:14:39Z | 2,202,866 | <p>You should use:</p>
<pre><code>import urllib2

try:
    resp = urllib2.urlopen("http://www.google.com/this-gives-a-404/")
except urllib2.URLError, e:
    if not hasattr(e, "code"):
        raise
    resp = e

print "Gave", resp.code, resp.msg
print "=" * 80
print resp.read(80)
</code></pre>
<p><em>Edit:</em> The rationale here is that unless you expect the exceptional state, it is an exception when it happens, and you probably didn't even plan for it; so instead of letting your code keep running after a failure, the default behavior is, quite sensibly, to stop execution.</p>
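In Python 3 the same pattern moves to urllib.request/urllib.error, and the HTTPError raised for a 404 is itself a readable response object. The sketch below (an illustration, not the original answer's code) raises the error by hand instead of hitting the network, just to show the handling shape:

```python
import io
import urllib.error

def fetch(opener):
    try:
        resp = opener()
    except urllib.error.HTTPError as e:
        resp = e  # an HTTPError doubles as a response: it has .code and .read()
    return resp.code, resp.read()

def fake_404():
    # Stand-in for urllib.request.urlopen hitting a missing page.
    raise urllib.error.HTTPError(
        "http://example.invalid/missing", 404, "Not Found",
        hdrs=None, fp=io.BytesIO(b"<html>404</html>"))

code, body = fetch(fake_404)
```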
| 14 | 2010-02-04T20:17:57Z | [
"python",
"http",
"url",
"urllib"
] |
Embedding Windows Python in Cygwin/GCC C++ program | 1,308,558 | <p>I am currently working on a Cygwin/GCC application written in C++. The application requires embedding of python to run plug-ins, I've successfully embedded using the Cygwin python libraries and was able to run simple python files as part of the program. However, the python files now require the use of a windows GUI framework (wxPython), and so I need to be able to embed the Windows Python environment, otherwise I cannot use the framework in the python files. In an attempt to do this, I created libpython25.a using step 2 of <a href="http://sebsauvage.net/python/mingw.html" rel="nofollow" title="these instructions">these instructions</a>. I then used the library/header files of the windows installation to compile it. However, when I run it the program crashes with some strange debugger output (debug info is on, strangely enough). </p>
<pre><code>gdb: unknown target exception 0xc0000008 at 0x77139a13
Program received signal ?, Unknown signal.
[Switching to thread 2216.0x119c]
0x77139a13 in ntdll!RtlLockMemoryZone () from /cygdrive/c/Windows/system32/ntdll.dll
(gdb) where
#0 0x77139a13 in ntdll!RtlLockMemoryZone () from /cygdrive/c/Windows/system32/ntdll.dll
#1 0x030c1c7c in ?? ()
#2 0x030c1c80 in ?? ()
#3 0x1e0d0e80 in python25!_PyTime_DoubleToTimet ()
from /cygdrive/c/Windows/SysWOW64/python25.dll
#4  0x00000000 in ?? ()
</code></pre>
<p>If anyone has done this successfully, I would greatly appreciate the help. Is embedding Windows python in a Cygwin/GCC program possible? If not what are my other options? (Right now I can only think of moving over to VC++ but this would be pretty drastic, also I do not want to use X11 for the GUI).</p>
| 1 | 2009-08-20T20:17:15Z | 1,308,764 | <p>Not a direct answer, but you could split the system into 2 processes - the Cygwin one (Python & C++, no wxPython) and the win32 one (Python & wxPython) and communicate between them with <a href="http://rpyc.wikidot.com/" rel="nofollow">RPyC</a>, XML-RPC, etc.</p>
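<p>A minimal sketch of the win32-side half over XML-RPC from the standard library (the <code>run_plugin</code> entry point and the loopback address are assumptions; the server class lives in the <code>SimpleXMLRPCServer</code> module on Python 2 and in <code>xmlrpc.server</code> on Python 3):</p>

```python
try:
    from xmlrpc.server import SimpleXMLRPCServer       # Python 3
except ImportError:
    from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2

def run_plugin(name, argument):
    # This process owns wxPython, so the plug-in can build its GUI here;
    # the Cygwin process only ever sees the serialized return value.
    return "ran %s(%r)" % (name, argument)

# Port 0 picks any free port for illustration; agree on a fixed one in practice.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(run_plugin)
# server.serve_forever()  # start handling requests from the Cygwin side
```

<p>The Cygwin/C++ process would then call it through any XML-RPC client (e.g. <code>xmlrpclib.ServerProxy</code>) instead of embedding the win32 Python directly.</p>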
| 0 | 2009-08-20T20:57:03Z | [
"c++",
"python",
"windows",
"cygwin",
"embedding"
] |
Embedding Windows Python in Cygwin/GCC C++ program | 1,308,558 | <p>I am currently working on a Cygwin/GCC application written in C++. The application requires embedding of python to run plug-ins, I've successfully embedded using the Cygwin python libraries and was able to run simple python files as part of the program. However, the python files now require the use of a windows GUI framework (wxPython), and so I need to be able to embed the Windows Python environment, otherwise I cannot use the framework in the python files. In an attempt to do this, I created libpython25.a using step 2 of <a href="http://sebsauvage.net/python/mingw.html" rel="nofollow" title="these instructions">these instructions</a>. I then used the library/header files of the windows installation to compile it. However, when I run it the program crashes with some strange debugger output (debug info is on, strangely enough). </p>
<pre><code>gdb: unknown target exception 0xc0000008 at 0x77139a13
Program received signal ?, Unknown signal.
[Switching to thread 2216.0x119c]
0x77139a13 in ntdll!RtlLockMemoryZone () from /cygdrive/c/Windows/system32/ntdll.dll
(gdb) where
#0 0x77139a13 in ntdll!RtlLockMemoryZone () from /cygdrive/c/Windows/system32/ntdll.dll
#1 0x030c1c7c in ?? ()
#2 0x030c1c80 in ?? ()
#3 0x1e0d0e80 in python25!_PyTime_DoubleToTimet ()
from /cygdrive/c/Windows/SysWOW64/python25.dll
#4  0x00000000 in ?? ()
</code></pre>
<p>If anyone has done this successfully, I would greatly appreciate the help. Is embedding Windows python in a Cygwin/GCC program possible? If not what are my other options? (Right now I can only think of moving over to VC++ but this would be pretty drastic, also I do not want to use X11 for the GUI).</p>
| 1 | 2009-08-20T20:17:15Z | 5,630,081 | <p>It looks like you have a 32 bit / 64 bit mismatch. </p>
<p>You are running code on a 64bit machine (because there is a SysWow64 folder), but my guess is that your python25.dll is 32 bit. What is confusing is that "system32" contains 64 bit DLLs.</p>
<p>Also, I don't think debug is on; you only see the public symbols.</p>
| 0 | 2011-04-12T03:28:12Z | [
"c++",
"python",
"windows",
"cygwin",
"embedding"
] |
Is it possible to peek at the data in a urllib2 response? | 1,308,584 | <p>I need to detect character encoding in HTTP responses. To do this I look at the headers, then if it's not set in the content-type header I have to peek at the response and look for a "<code><meta http-equiv='content-type'></code>" header. I'd like to be able to write a function that looks and works something like this:</p>
<pre><code>response = urllib2.urlopen("http://www.example.com/")
encoding = detect_html_encoding(response)
...
page_text = response.read()
</code></pre>
<p>However, if I do response.read() in my "detect_html_encoding" method, then the subsequent response.read() after the call to my function will fail.</p>
<p>Is there an easy way to peek at the response and/or rewind after a read?</p>
| 1 | 2009-08-20T20:20:30Z | 1,308,636 | <ol>
<li>If it's in the HTTP headers (not the document itself) you could use <code>response.info()</code> to detect the encoding</li>
<li><p>If you want to parse the HTML, save the response data:</p>
<pre><code>page_text = response.read()
encoding = detect_html_encoding(response, page_text)
</code></pre></li>
</ol>
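<p>For example, the charset parameter can be pulled out of the <code>Content-Type</code> value returned by <code>response.info()</code> with a few lines (a sketch; the standard library's <code>cgi.parse_header</code> does this more robustly):</p>

```python
def charset_from_content_type(value):
    """Return the charset parameter of a Content-Type header value,
    e.g. 'text/html; charset=ISO-8859-1' -> 'iso-8859-1', or None."""
    for part in value.split(";")[1:]:
        name, _, param = part.strip().partition("=")
        if name.lower() == "charset":
            return param.strip().strip('"').lower() or None
    return None
```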
| 0 | 2009-08-20T20:30:44Z | [
"python",
"html",
"http",
"encoding",
"urllib2"
] |
Is it possible to peek at the data in a urllib2 response? | 1,308,584 | <p>I need to detect character encoding in HTTP responses. To do this I look at the headers, then if it's not set in the content-type header I have to peek at the response and look for a "<code><meta http-equiv='content-type'></code>" header. I'd like to be able to write a function that looks and works something like this:</p>
<pre><code>response = urllib2.urlopen("http://www.example.com/")
encoding = detect_html_encoding(response)
...
page_text = response.read()
</code></pre>
<p>However, if I do response.read() in my "detect_html_encoding" method, then the subsequent response.read() after the call to my function will fail.</p>
<p>Is there an easy way to peek at the response and/or rewind after a read?</p>
| 1 | 2009-08-20T20:20:30Z | 1,309,764 | <pre><code>def detectit(response):
# try headers &c, then, worst case...:
content = response.read()
response.read = lambda: content
# now detect based on content
</code></pre>
<p>The trick of course is ensuring that <code>response.read()</code> WILL return the same thing again if needed... that's why we assign that <code>lambda</code> to it if necessary, i.e., if we already needed to extract the content -- that ensures the same content can be extracted again (and again, and again, ...;-).</p>
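<p>The same trick in isolation, with a hypothetical stand-in for the real response object (a genuine urllib2 response is socket-backed and can only be consumed once):</p>

```python
class FakeResponse(object):
    """Stand-in for a urllib2 response whose read() works only once."""
    def __init__(self, data):
        self._data, self._spent = data, False

    def read(self):
        if self._spent:
            raise ValueError("response already consumed")
        self._spent = True
        return self._data

def make_rereadable(response):
    # Consume the body once, then shadow read() on the instance so
    # later callers get the cached content instead of a drained stream.
    content = response.read()
    response.read = lambda: content
    return response

resp = make_rereadable(FakeResponse("<html>...</html>"))
assert resp.read() == resp.read() == "<html>...</html>"
```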
| 4 | 2009-08-21T02:05:26Z | [
"python",
"html",
"http",
"encoding",
"urllib2"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 1,308,621 | <p>You can attach a message to an <code>assert</code>:</p>
<pre><code>assert 6-(3*2), "always fails"
</code></pre>
<p>The message can also be built dynamically:</p>
<pre><code>assert x != 0, "x is not equal to zero (%d)" % x
</code></pre>
<p>See <a href="http://docs.python.org/reference/simple%5Fstmts.html#the-assert-statement">The <code>assert</code> statement</a> in the Python documentation for more information.</p>
| 5 | 2009-08-20T20:28:04Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 1,308,629 | <p>Add a message to your assertion, which will be displayed if the assertion fails:</p>
<pre><code>$ python -c "assert 6-(3*2), '6-(3*2)'"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AssertionError: 6-(3*2)
</code></pre>
<p>The only way I can think of to provide this automatically would be to contain the assertion in a procedure call, and then inspect the stack to get the source code for that line. The additional call would, unfortunately, introduce overhead into the test and would not be disabled with <code>-O</code>.</p>
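<p>A sketch of that approach (the helper is hypothetical; note that, unlike a plain <code>assert</code>, a regular function call is not stripped by <code>-O</code>):</p>

```python
import inspect

def check(condition):
    """Assert-like helper: on failure, report the source text of the
    calling line, so the failing expression names itself."""
    if not condition:
        caller = inspect.stack()[1]
        context = caller[4]  # source lines around the call, if available
        line = context[0].strip() if context else "<source unavailable>"
        raise AssertionError(line)
```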
| 0 | 2009-08-20T20:29:13Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 1,308,835 | <p><a href="http://somethingaboutorange.com/mrl/projects/nose/0.11.1/plugins/failuredetail.html">The nose testing suite applies introspection to asserts</a>. </p>
<p>However, AFAICT, you have to call <em>their</em> asserts to get the introspection:</p>
<pre><code>import nose
def test1():
nose.tools.assert_equal(6, 5+2)
</code></pre>
<p>results in</p>
<pre>
C:\temp\py>C:\Python26\Scripts\nosetests.exe -d test.py
F
======================================================================
FAIL: test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
self.test(*self.arg)
File "C:\temp\py\test.py", line 3, in test1
nose.tools.assert_equal(6, 5+2)
AssertionError: 6 != 7
>> raise self.failureException, \
(None or '%r != %r' % (6, 7))
</pre>
<p>Notice the AssertionError there. When my line was just <code>assert 6 == 5+2</code>, I would get:</p>
<pre>
C:\temp\py>C:\Python26\Scripts\nosetests.exe -d test.py
F
======================================================================
FAIL: test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python26\lib\site-packages\nose-0.11.1-py2.6.egg\nose\case.py", line
183, in runTest
self.test(*self.arg)
File "C:\temp\py\test.py", line 2, in test1
assert 6 == 5 + 2
AssertionError:
>> assert 6 == 5 + 2
</pre>
<p>Also, I'm not sure offhand if their asserts are skipped with <code>-O</code>, but that would be a very quick check.</p>
| 4 | 2009-08-20T21:09:49Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 1,309,039 | <p>As <a href="http://stackoverflow.com/questions/1308607/python-assert-improved-introspection-of-failure/1308835#1308835">@Mark Rushakoff said</a> <code>nose</code> can evaluate failed asserts. It works on the standard <code>assert</code> too.</p>
<pre><code># test_error_reporting.py
def test():
a,b,c = 6, 2, 3
assert a - b*c
</code></pre>
<p><code>nosetests</code>' help:</p>
<pre><code>$ nosetests --help|grep -B2 assert
-d, --detailed-errors, --failure-detail
Add detail to error output by attempting to evaluate
failed asserts [NOSE_DETAILED_ERRORS]
</code></pre>
<p>Example:</p>
<pre><code>$ nosetests -d
F
======================================================================
FAIL: test_error_reporting.test
----------------------------------------------------------------------
Traceback (most recent call last):
File "..snip../site-packages/nose/case.py", line 183, in runTest
self.test(*self.arg)
File "..snip../test_error_reporting.py", line 3, in test
assert a - b*c
AssertionError:
6,2,3 = 6, 2, 3
>> assert 6 - 2*3
----------------------------------------------------------------------
Ran 1 test in 0.089s
FAILED (failures=1)
</code></pre>
| 7 | 2009-08-20T21:49:36Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 1,309,812 | <p>Install your own function as <code>sys.excepthook</code> -- see <a href="http://docs.python.org/library/sys.html#sys.excepthook">the docs</a>. Your function, if the second argument is <code>AssertionError</code>, can introspect to your heart's content; in particular, through the third argument, the traceback, it can get the frame and exact spot in which the assert failed, getting the failing expression through the source or bytecode, the value of all relevant variables, etc. Module <a href="http://docs.python.org/library/inspect.html#module-inspect">inspect</a> helps.</p>
<p>Doing it in full generality is quite a piece of work, but depending on what constraints you're willing to accept in how you write your <code>assert</code>s it can be lightened substantially (e.g. restricting them to only local or global variables makes introspection easier than if nonlocal variables of a closure could be involved, and so forth).</p>
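<p>A bare-bones version of such a hook (a sketch: it only dumps the locals of the innermost frame; a fuller version would use <code>inspect</code> to recover the source line and show just the names it references):</p>

```python
import sys
import traceback

def innermost_frame(tb):
    # Walk the traceback to the frame where the exception was raised.
    while tb.tb_next is not None:
        tb = tb.tb_next
    return tb.tb_frame

def verbose_excepthook(exc_type, exc_value, tb):
    traceback.print_exception(exc_type, exc_value, tb)
    frame = innermost_frame(tb)
    sys.stderr.write("Locals at the failing frame:\n")
    for name, value in sorted(frame.f_locals.items()):
        sys.stderr.write("  %s = %r\n" % (name, value))

sys.excepthook = verbose_excepthook
```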
| 10 | 2009-08-21T02:25:37Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 1,314,458 | <p>It sounds like what you really want to do is to set up a debugger breakpoint just before the <code>assert</code> and inspect from your favorite debugger as much as you like.</p>
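<p>One way to arrange that without editing the code for every run (a sketch; the function and the <code>DEBUG_ASSERTS</code> environment switch are made up for illustration):</p>

```python
import os
import pdb

def checked_difference(a, b, c):
    if os.environ.get("DEBUG_ASSERTS"):  # opt-in, so normal runs never pause
        pdb.set_trace()                  # inspect a, b and c interactively here
    assert a - b * c
    return a - b * c
```

<p>Run with <code>DEBUG_ASSERTS=1</code> in the environment to stop in the debugger right before the assert fires.</p>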
| 0 | 2009-08-21T22:21:06Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
Python assert -- improved introspection of failure? | 1,308,607 | <p>This is a rather useless assertion error; it does not tell the values of the expression involved (assume constants used are actually variable names): </p>
<pre><code>$ python -c "assert 6-(3*2)"
[...]
AssertionError
</code></pre>
<p>Is there a better <code>assert</code> implementation in Python that is more fancy? It must not introduce additional overhead over execution (except when assert fails) .. and must turn off if <code>-O</code> flag is used.</p>
<p><strong>Edit</strong>: I know about assert's second argument as a string. I don't want to write one .. as that is encoded in the expression that is being asserted. DRY (Don't Repeat Yourself).</p>
| 10 | 2009-08-20T20:25:23Z | 35,864,193 | <p>I coded a replacement for <code>sys.excepthook</code> (which is called for any unhandled exception) which is a bit fancier than the standard one. It will analyze the line where the exception occurred and print all variables which are referred to in this line (it does not print all local variables because that might be too much noise - also, maybe the important var is global or so).</p>
<p>I called it py_better_exchook (perfect name) and it's <a href="https://github.com/albertz/py_better_exchook" rel="nofollow">here</a>.</p>
<p>Example file:</p>
<pre><code>a = 6
def test():
unrelated_var = 43
b,c = 2, 3
assert a - b*c
import better_exchook
better_exchook.install()
test()
</code></pre>
<p>Output:</p>
<pre><code>$ python test_error_reporting.py
EXCEPTION
Traceback (most recent call last):
File "test_error_reporting.py", line 12, in <module>
line: test()
locals:
test = <local> <function test at 0x7fd91b1a05f0>
File "test_error_reporting.py", line 7, in test
line: assert a - b*c
locals:
a = <global> 6
b = <local> 2
c = <local> 3
AssertionError
</code></pre>
| 0 | 2016-03-08T09:57:37Z | [
"python",
"debugging",
"assert",
"syntactic-sugar"
] |
How can I add a Widget or a Region to a Status Icon in PyGTK | 1,308,679 | <p>This is my first question on StackOverflow, so I will try to explain myself the best I can.</p>
<p>I made a small app trying to emulate the Windows Procrastination Killer application, using pygtk and pygame for the sound alerts.</p>
<p>Here is a video of my little app running <a href="http://www.youtube.com/watch?v=FmE-QPA9p-8" rel="nofollow">http://www.youtube.com/watch?v=FmE-QPA9p-8</a></p>
<p>My issue is that I want to get a widget in the tray icon area, and not just the plain icon. Something like an icon and a label, to make a counter, or at least extend the icon size to put more information in the status icon.</p>
<p>So my Questions would be:</p>
<ol>
<li>How can I resize the status icon? For example, to show an icon 44x22 pixels</li>
<li>How can I add a Widget, Region, or something else instead of the status icon?</li>
</ol>
<p>Here is the code that I use to get the status icon.</p>
<pre><code>self.status_icon = gtk.StatusIcon()
self.status_icon.set_from_file(STATUS_ICON_FILE)
self.status_icon.set_tooltip("Switch, a procastination killer app")
self.status_icon.connect("activate", self.on_toggle_status_trayicon)
self.status_icon.connect("popup-menu", lambda i, b, a: self.status_menu.popup(
None, None, gtk.status_icon_position_menu, b, a, self.status_icon))
</code></pre>
<p>I am packaging the app for Ubuntu as soon as I find a name :), which brings me to my third question.</p>
<p>3: How do I name my app?</p>
| 0 | 2009-08-20T20:39:15Z | 1,308,693 | <p>GTK+ doesn't support arbitrary widgets in the notification area, because these don't work well in Windows. You probably want to write a panel applet instead -- <a href="http://www.pygtk.org/articles/applets%5Farturogf/" rel="nofollow">here's a tutorial for panel applets in PyGTK</a>.</p>
| 2 | 2009-08-20T20:42:49Z | [
"python",
"gtk",
"pygtk",
"trayicon",
"tray"
] |
Transparency in PNGs with reportlab 2.3 | 1,308,710 | <p>I have two PNGs that I am trying to combine into a PDF using ReportLab 2.3 on Python 2.5. When I use canvas.drawImage(ImageReader) to write either PNG onto the canvas and save, the transparency comes out black. If I use PIL (1.1.6) to generate a new Image, then paste() either PNG onto the PIL Image, it composites just fine. I've double checked in Gimp and both images have working alpha channels and are being saved correctly. I'm not receiving an error and there doesn't seem to be anything my google-fu can turn up. </p>
<p>Has anybody out there composited a transparent PNG onto a ReportLab canvas, with the transparency working properly? Thanks!</p>
| 15 | 2009-08-20T20:45:23Z | 1,311,056 | <p>ReportLab uses PIL for managing images. Currently, PIL trunk has a patch applied to support transparent PNGs, but you will have to wait for a 1.1.6 release if you need a stable package.</p>
| 0 | 2009-08-21T10:00:56Z | [
"python",
"python-imaging-library",
"reportlab"
] |
Transparency in PNGs with reportlab 2.3 | 1,308,710 | <p>I have two PNGs that I am trying to combine into a PDF using ReportLab 2.3 on Python 2.5. When I use canvas.drawImage(ImageReader) to write either PNG onto the canvas and save, the transparency comes out black. If I use PIL (1.1.6) to generate a new Image, then paste() either PNG onto the PIL Image, it composites just fine. I've double checked in Gimp and both images have working alpha channels and are being saved correctly. I'm not receiving an error and there doesn't seem to be anything my google-fu can turn up. </p>
<p>Has anybody out there composited a transparent PNG onto a ReportLab canvas, with the transparency working properly? Thanks!</p>
| 15 | 2009-08-20T20:45:23Z | 1,625,350 | <p>Passing the <strong>mask parameter</strong> with a value of 'auto' to <code>drawImage</code> fixes this for me.</p>
<pre><code>drawImage(......., mask='auto')
</code></pre>
<p><a href="http://www.reportlab.com/apis/reportlab/dev/pdfgen.html#reportlab.pdfgen.canvas.Canvas.drawImage">More information on the drawImage-function</a></p>
| 36 | 2009-10-26T15:01:40Z | [
"python",
"python-imaging-library",
"reportlab"
] |
Transparency in PNGs with reportlab 2.3 | 1,308,710 | <p>I have two PNGs that I am trying to combine into a PDF using ReportLab 2.3 on Python 2.5. When I use canvas.drawImage(ImageReader) to write either PNG onto the canvas and save, the transparency comes out black. If I use PIL (1.1.6) to generate a new Image, then paste() either PNG onto the PIL Image, it composites just fine. I've double checked in Gimp and both images have working alpha channels and are being saved correctly. I'm not receiving an error and there doesn't seem to be anything my google-fu can turn up. </p>
<p>Has anybody out there composited a transparent PNG onto a ReportLab canvas, with the transparency working properly? Thanks!</p>
| 15 | 2009-08-20T20:45:23Z | 28,385,327 | <p>I've found that <code>mask='auto'</code> has stopped working for me with reportlab 3.1.8. In the docs it says to pass the values that you want masked out. So what works for me now is <code>mask=[0, 2, 0, 2, 0, 2, ]</code>. Basically it looks like this: <code>mask=[red_start, red_end, green_start, green_end, blue_start, blue_end]</code>.</p>
<blockquote>
<p>The mask parameter lets you create transparent images. It takes 6
numbers and defines the range of RGB values which will be masked out
or treated as transparent. For example with [0,2,40,42,136,139], it
will mask out any pixels with a Red value from 0 or 1, Green from 40
or 41 and Blue of 136, 137 or 138 (on a scale of 0-255). It's
currently your job to know which color is the 'transparent' or
background one.</p>
</blockquote>
<p>UPDATE: That masks out anything that is <code>rgb(0, 0, 0)</code> or <code>rgb(1, 1, 1)</code>, which obviously might not be the right solution. My problem was people uploading PNG images with a gray color space. So I still need to figure out a way to detect the color space of the image, and only apply that mask to gray-space images.</p>
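<p>If you build such a mask by hand, a tiny helper keeps the six-value layout straight (hypothetical, not part of ReportLab; widen <code>spread</code> if anti-aliased edge pixels survive):</p>

```python
def color_mask(red, green, blue, spread=1):
    """Six-value drawImage mask around one background colour:
    [r_lo, r_hi, g_lo, g_hi, b_lo, b_hi] on a 0-255 scale."""
    def band(value):
        return [max(value - spread, 0), min(value + spread, 255)]
    return band(red) + band(green) + band(blue)

# e.g. mask out a near-white background:
# canvas.drawImage(path, x, y, mask=color_mask(255, 255, 255))
```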
| 1 | 2015-02-07T17:43:38Z | [
"python",
"python-imaging-library",
"reportlab"
] |
cURL: https through a proxy | 1,308,760 | <p>I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue?</p>
| 1 | 2009-08-20T20:56:12Z | 1,308,768 | <p>No problem, as long as the proxy server supports the CONNECT method.</p>
| 0 | 2009-08-20T20:58:08Z | [
"php",
"python",
"curl",
"https",
"urllib2"
] |
cURL: https through a proxy | 1,308,760 | <p>I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue?</p>
| 1 | 2009-08-20T20:56:12Z | 1,314,698 | <p>I find testing with command-line curl a big help before moving to PHP/cURL.</p>
<p>For example, with the command line, unless you've configured certificates, you'll need the <code>-k</code> switch. And to go through a proxy, it's the <code>-x <proxyhost[:port]></code> switch.</p>
<p>I believe the <code>-k</code> equivalent is</p>
<pre><code>curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
</code></pre>
<p>I believe the <code>-x</code> equivalent is</p>
<pre><code>curl_setopt($curl, CURLOPT_PROXY, '<proxyhost[:port]>');
</code></pre>
<p><hr /></p>
<blockquote>
<p><em>DISCLAIMER: I have not tested any of
this. If you give more information
about what you've tried, it might be
helpful.</em></p>
</blockquote>
| 2 | 2009-08-21T23:56:17Z | [
"php",
"python",
"curl",
"https",
"urllib2"
] |
Simulate multiple IP addresses for testing | 1,308,879 | <p>I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software.</p>
<p>How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NICs? For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections.</p>
<p>Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.</p>
| 4 | 2009-08-20T21:17:02Z | 1,308,897 | <p>Normally you just listen on 0.0.0.0. This is an alias for <em>all</em> IP addresses.</p>
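<p>A sketch of what the simulator's listener could look like (the per-address dispatch is an assumption based on the question; nothing here is specific to the motor protocol):</p>

```python
import socket

# Bind the wildcard address: connections to ANY local IP -- including
# aliases added to the NIC -- all land on this one listener.  Port 0
# picks a free port for illustration; the devices in the question use 1111.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 0))
listener.listen(5)

def serve(sock):
    while True:
        conn, peer = sock.accept()
        # getsockname() on the accepted socket reveals WHICH local address
        # the client dialled, so each address can simulate a different server.
        local_ip = conn.getsockname()[0]
        conn.sendall(("simulated motor server at %s\n" % local_ip).encode("ascii"))
        conn.close()
```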
| 2 | 2009-08-20T21:20:28Z | [
".net",
"python",
"networking",
"sockets"
] |
Simulate multiple IP addresses for testing | 1,308,879 | <p>I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software.</p>
<p>How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NICs? For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections.</p>
<p>Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.</p>
| 4 | 2009-08-20T21:17:02Z | 1,308,972 | <p>You should set up a virtual network adapter. They are called TAP/TUN devices. If you are using Windows, you can easily set up some dummy addresses with something like this:</p>
<p><a href="http://www.ntkernel.com/w&p.php?id=32">http://www.ntkernel.com/w&p.php?id=32</a></p>
<p>Good luck!</p>
| 5 | 2009-08-20T21:33:30Z | [
".net",
"python",
"networking",
"sockets"
] |
Simulate multiple IP addresses for testing | 1,308,879 | <p>I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software.</p>
<p>How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NICs? For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections.</p>
<p>Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.</p>
| 4 | 2009-08-20T21:17:02Z | 1,309,096 | <p>A. Consider using Bonjour (zeroconf) for service discovery.</p>
<p>B. You can assign 1 or more IP addresses to the same NIC:</p>
<p>On XP, Start -> Control Panel -> Network Connections and select properties on your NIC (usually 'Local Area Connection').</p>
<p>Scroll down to Internet Protocol (TCP/IP), select it and click on [Properties].</p>
<p>If you are using DHCP, you will need to get a static base IP from your IT department.
Otherwise, click on [Advanced] and under 'IP Addresses' click [Add..]
Enter the IP information for the additional IP you want to add.</p>
<p>Repeat for each additional IP address.</p>
<p>C. Consider using VMWare, as you can configure multiple systems and virtual IPs within a single, logical, network of "computers".</p>
<p>-- sky</p>
| 3 | 2009-08-20T22:03:32Z | [
".net",
"python",
"networking",
"sockets"
] |
Running function 5 seconds after pygtk widget is shown | 1,309,006 | <p>How can I run a function 5 seconds after a PyGTK widget is shown?</p>
| 11 | 2009-08-20T21:39:46Z | 1,309,257 | <p>You can use <strong><a href="http://library.gnome.org/devel/pygobject/stable/glib-functions.html#function-glib--timeout-add">glib.timeout_add(<em>interval</em>, <em>callback</em>, <em>...</em>)</a></strong> to periodically call a function.</p>
<p>If the function returns <em>True</em> then it will be called again after the interval; if the function returns <em>False</em> then it will not be called again.</p>
<p>Here is a short example of adding a timeout after a widget's <em>show</em> event:</p>
<pre><code>import pygtk
pygtk.require('2.0')
import gtk
import glib
def timer_cb():
print "5 seconds elapsed."
return False
def show_cb(widget, data=None):
glib.timeout_add(5000, timer_cb)
def destroy_cb(widget, data=None):
gtk.main_quit()
def main():
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.connect("show", show_cb)
window.connect("destroy", destroy_cb)
window.show()
gtk.main()
if __name__ == "__main__":
main()
</code></pre>
| 14 | 2009-08-20T22:49:06Z | [
"python",
"function",
"time",
"pygtk"
] |
Running function 5 seconds after pygtk widget is shown | 1,309,006 | <p>How can I run a function 5 seconds after a PyGTK widget is shown?</p>
| 11 | 2009-08-20T21:39:46Z | 1,309,404 | <p>If the time is not critical to be exact to the tenth of a second, use</p>
<pre><code>glib.timeout_add_seconds(5, ..)
</code></pre>
<p>else as above.</p>
<p><code>timeout_add_seconds</code> allows the system to align timeouts to other events, in the long run reducing CPU wakeups (especially if the timeout is recurring) and <a href="http://gould.cx/ted/blog/Saving%5Fthe%5Fworld%5Fone%5F%5Fw%5Fat%5Fa%5Ftime">saving energy for the planet</a>(!)</p>
| 9 | 2009-08-20T23:30:22Z | [
"python",
"function",
"time",
"pygtk"
] |
Fast string to integer conversion in Python | 1,309,123 | <p>A simple problem, really: you have one billion (1e+9) unsigned 32-bit integers stored as decimal ASCII strings in a TSV (tab-separated values) file. Conversion using <code>int()</code> is horribly slow compared to other tools working on the same dataset. Why? And more importantly: how to make it faster?</p>
<p>Therefore the question: what is the fastest way possible to convert a string to an integer, in Python?</p>
<p>What I'm really thinking about is some semi-hidden Python functionality that could be (ab)used for this purpose, not unlike Guido's use of <code>array.array</code> in his <a href="http://www.python.org/doc/essays/list2str/" rel="nofollow">"Optimization Anecdote"</a>.</p>
<p><strong>Sample data</strong> (with tabs expanded to spaces)</p>
<pre><code>38262904 "pfv" 2002-11-15T00:37:20+00:00
12311231 "tnealzref" 2008-01-21T20:46:51+00:00
26783384 "hayb" 2004-02-14T20:43:45+00:00
812874 "qevzasdfvnp" 2005-01-11T00:29:46+00:00
22312733 "bdumtddyasb" 2009-01-17T20:41:04+00:00
</code></pre>
<p>The time it takes reading the data is irrelevant here, processing the data is the bottleneck.</p>
<p><strong>Microbenchmarks</strong></p>
<p>All of the following are interpreted languages. The host machine is running 64-bit Linux.</p>
<p>Python 2.6.2 with IPython 0.9.1, ~214k conversions per second (100%):</p>
<pre><code>In [1]: strings = map(str, range(int(1e7)))
In [2]: %timeit map(int, strings);
10 loops, best of 3: 4.68 s per loop
</code></pre>
<p>REBOL 3.0 Version 2.100.76.4.2, ~231kcps (108%):</p>
<pre><code>>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [map str strings [to integer! str]]
== 0:00:04.328675
</code></pre>
<p>REBOL 2.7.6.4.2 (15-Mar-2008), ~523kcps (261%):</p>
<p>As John noted in the comments, this version does <em>not</em> build a list of converted integers, so the speed-ratio given is relative to Python's 4.99s runtime of <code>for str in strings: int(str)</code>.</p>
<pre><code>>> delta-time: func [c /local t] [t: now/time/precise do c now/time/precise - t]
>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [foreach str strings [to integer! str]]
== 0:00:01.913193
</code></pre>
<p>KDB+ 2.6t 2009.04.15, ~2016kcps (944%):</p>
<pre><code>q)strings:string til "i"$1e7
q)\t "I"$strings
496
</code></pre>
| 3 | 2009-08-20T22:11:33Z | 1,309,132 | <p>I might suggest that for raw speed, Python isn't the right tool for this task. A hand-coded C implementation will beat Python easily.</p>
| 5 | 2009-08-20T22:15:05Z | [
"python",
"performance",
"optimization"
] |
Fast string to integer conversion in Python | 1,309,123 | <p>A simple problem, really: you have one billion (1e+9) unsigned 32-bit integers stored as decimal ASCII strings in a TSV (tab-separated values) file. Conversion using <code>int()</code> is horribly slow compared to other tools working on the same dataset. Why? And more importantly: how to make it faster?</p>
<p>Therefore the question: what is the fastest way possible to convert a string to an integer, in Python?</p>
<p>What I'm really thinking about is some semi-hidden Python functionality that could be (ab)used for this purpose, not unlike Guido's use of <code>array.array</code> in his <a href="http://www.python.org/doc/essays/list2str/" rel="nofollow">"Optimization Anecdote"</a>.</p>
<p><strong>Sample data</strong> (with tabs expanded to spaces)</p>
<pre><code>38262904 "pfv" 2002-11-15T00:37:20+00:00
12311231 "tnealzref" 2008-01-21T20:46:51+00:00
26783384 "hayb" 2004-02-14T20:43:45+00:00
812874 "qevzasdfvnp" 2005-01-11T00:29:46+00:00
22312733 "bdumtddyasb" 2009-01-17T20:41:04+00:00
</code></pre>
<p>The time it takes reading the data is irrelevant here, processing the data is the bottleneck.</p>
<p><strong>Microbenchmarks</strong></p>
<p>All of the following are interpreted languages. The host machine is running 64-bit Linux.</p>
<p>Python 2.6.2 with IPython 0.9.1, ~214k conversions per second (100%):</p>
<pre><code>In [1]: strings = map(str, range(int(1e7)))
In [2]: %timeit map(int, strings);
10 loops, best of 3: 4.68 s per loop
</code></pre>
<p>REBOL 3.0 Version 2.100.76.4.2, ~231kcps (108%):</p>
<pre><code>>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [map str strings [to integer! str]]
== 0:00:04.328675
</code></pre>
<p>REBOL 2.7.6.4.2 (15-Mar-2008), ~523kcps (261%):</p>
<p>As John noted in the comments, this version does <em>not</em> build a list of converted integers, so the speed-ratio given is relative to Python's 4.99s runtime of <code>for str in strings: int(str)</code>.</p>
<pre><code>>> delta-time: func [c /local t] [t: now/time/precise do c now/time/precise - t]
>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [foreach str strings [to integer! str]]
== 0:00:01.913193
</code></pre>
<p>KDB+ 2.6t 2009.04.15, ~2016kcps (944%):</p>
<pre><code>q)strings:string til "i"$1e7
q)\t "I"$strings
496
</code></pre>
| 3 | 2009-08-20T22:11:33Z | 1,309,181 | <p>Agree with Greg; Python, as an interpreted language, is generally slow. You could try compiling the source code on-the-fly with the <a href="http://psyco.sourceforge.net/" rel="nofollow">Psyco library</a> or coding the app in a lower level language such C/C++.</p>
| 0 | 2009-08-20T22:30:12Z | [
"python",
"performance",
"optimization"
] |
Fast string to integer conversion in Python | 1,309,123 | <p>A simple problem, really: you have one billion (1e+9) unsigned 32-bit integers stored as decimal ASCII strings in a TSV (tab-separated values) file. Conversion using <code>int()</code> is horribly slow compared to other tools working on the same dataset. Why? And more importantly: how to make it faster?</p>
<p>Therefore the question: what is the fastest way possible to convert a string to an integer, in Python?</p>
<p>What I'm really thinking about is some semi-hidden Python functionality that could be (ab)used for this purpose, not unlike Guido's use of <code>array.array</code> in his <a href="http://www.python.org/doc/essays/list2str/" rel="nofollow">"Optimization Anecdote"</a>.</p>
<p><strong>Sample data</strong> (with tabs expanded to spaces)</p>
<pre><code>38262904 "pfv" 2002-11-15T00:37:20+00:00
12311231 "tnealzref" 2008-01-21T20:46:51+00:00
26783384 "hayb" 2004-02-14T20:43:45+00:00
812874 "qevzasdfvnp" 2005-01-11T00:29:46+00:00
22312733 "bdumtddyasb" 2009-01-17T20:41:04+00:00
</code></pre>
<p>The time it takes reading the data is irrelevant here, processing the data is the bottleneck.</p>
<p><strong>Microbenchmarks</strong></p>
<p>All of the following are interpreted languages. The host machine is running 64-bit Linux.</p>
<p>Python 2.6.2 with IPython 0.9.1, ~214k conversions per second (100%):</p>
<pre><code>In [1]: strings = map(str, range(int(1e7)))
In [2]: %timeit map(int, strings);
10 loops, best of 3: 4.68 s per loop
</code></pre>
<p>REBOL 3.0 Version 2.100.76.4.2, ~231kcps (108%):</p>
<pre><code>>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [map str strings [to integer! str]]
== 0:00:04.328675
</code></pre>
<p>REBOL 2.7.6.4.2 (15-Mar-2008), ~523kcps (261%):</p>
<p>As John noted in the comments, this version does <em>not</em> build a list of converted integers, so the speed-ratio given is relative to Python's 4.99s runtime of <code>for str in strings: int(str)</code>.</p>
<pre><code>>> delta-time: func [c /local t] [t: now/time/precise do c now/time/precise - t]
>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [foreach str strings [to integer! str]]
== 0:00:01.913193
</code></pre>
<p>KDB+ 2.6t 2009.04.15, ~2016kcps (944%):</p>
<pre><code>q)strings:string til "i"$1e7
q)\t "I"$strings
496
</code></pre>
| 3 | 2009-08-20T22:11:33Z | 1,309,572 | <p>It may not be an option for you, but I would look real hard at using a binary file rather than text. Does it change often? If not, you could pre-process it.</p>
| 0 | 2009-08-21T00:46:17Z | [
"python",
"performance",
"optimization"
] |
Fast string to integer conversion in Python | 1,309,123 | <p>A simple problem, really: you have one billion (1e+9) unsigned 32-bit integers stored as decimal ASCII strings in a TSV (tab-separated values) file. Conversion using <code>int()</code> is horribly slow compared to other tools working on the same dataset. Why? And more importantly: how to make it faster?</p>
<p>Therefore the question: what is the fastest way possible to convert a string to an integer, in Python?</p>
<p>What I'm really thinking about is some semi-hidden Python functionality that could be (ab)used for this purpose, not unlike Guido's use of <code>array.array</code> in his <a href="http://www.python.org/doc/essays/list2str/" rel="nofollow">"Optimization Anecdote"</a>.</p>
<p><strong>Sample data</strong> (with tabs expanded to spaces)</p>
<pre><code>38262904 "pfv" 2002-11-15T00:37:20+00:00
12311231 "tnealzref" 2008-01-21T20:46:51+00:00
26783384 "hayb" 2004-02-14T20:43:45+00:00
812874 "qevzasdfvnp" 2005-01-11T00:29:46+00:00
22312733 "bdumtddyasb" 2009-01-17T20:41:04+00:00
</code></pre>
<p>The time it takes reading the data is irrelevant here, processing the data is the bottleneck.</p>
<p><strong>Microbenchmarks</strong></p>
<p>All of the following are interpreted languages. The host machine is running 64-bit Linux.</p>
<p>Python 2.6.2 with IPython 0.9.1, ~214k conversions per second (100%):</p>
<pre><code>In [1]: strings = map(str, range(int(1e7)))
In [2]: %timeit map(int, strings);
10 loops, best of 3: 4.68 s per loop
</code></pre>
<p>REBOL 3.0 Version 2.100.76.4.2, ~231kcps (108%):</p>
<pre><code>>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [map str strings [to integer! str]]
== 0:00:04.328675
</code></pre>
<p>REBOL 2.7.6.4.2 (15-Mar-2008), ~523kcps (261%):</p>
<p>As John noted in the comments, this version does <em>not</em> build a list of converted integers, so the speed-ratio given is relative to Python's 4.99s runtime of <code>for str in strings: int(str)</code>.</p>
<pre><code>>> delta-time: func [c /local t] [t: now/time/precise do c now/time/precise - t]
>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [foreach str strings [to integer! str]]
== 0:00:01.913193
</code></pre>
<p>KDB+ 2.6t 2009.04.15, ~2016kcps (944%):</p>
<pre><code>q)strings:string til "i"$1e7
q)\t "I"$strings
496
</code></pre>
| 3 | 2009-08-20T22:11:33Z | 1,310,262 | <p>You will get some percentage of speed by ensuring only "local" variables are used in your tightest of loops. The <code>int</code> function is a global, so looking it up will be more expensive than a local.</p>
<p>Do you really need all billion numbers in memory at all times. Consider using some iterators to give you only a few values at a time A billion numbers will take a bit of storage. Appending these to a list, one at a time, is going to require several large reallocations. </p>
<p>Get your looping out of Python entirely if possible. The map function here can be your friend. I'm not sure how your data is stored. If it is a single number per line, you could reduce the code to</p>
<pre><code>values = map(int, open("numberfile.txt"))
</code></pre>
<p>If there are multiple values per line that are white space separated, dig into the itertools to keep the looping code out of Python. This version has the added benefit of creating a number iterator, so you can spool only one or several numbers out of the file at a time, instead of one billion in one shot.</p>
<pre><code>import itertools

numfile = open("numberfile.txt")
valIter = itertools.imap(int, itertools.chain.from_iterable(itertools.imap(str.split, numfile)))
</code></pre>
| 3 | 2009-08-21T05:53:31Z | [
"python",
"performance",
"optimization"
] |
Fast string to integer conversion in Python | 1,309,123 | <p>A simple problem, really: you have one billion (1e+9) unsigned 32-bit integers stored as decimal ASCII strings in a TSV (tab-separated values) file. Conversion using <code>int()</code> is horribly slow compared to other tools working on the same dataset. Why? And more importantly: how to make it faster?</p>
<p>Therefore the question: what is the fastest way possible to convert a string to an integer, in Python?</p>
<p>What I'm really thinking about is some semi-hidden Python functionality that could be (ab)used for this purpose, not unlike Guido's use of <code>array.array</code> in his <a href="http://www.python.org/doc/essays/list2str/" rel="nofollow">"Optimization Anecdote"</a>.</p>
<p><strong>Sample data</strong> (with tabs expanded to spaces)</p>
<pre><code>38262904 "pfv" 2002-11-15T00:37:20+00:00
12311231 "tnealzref" 2008-01-21T20:46:51+00:00
26783384 "hayb" 2004-02-14T20:43:45+00:00
812874 "qevzasdfvnp" 2005-01-11T00:29:46+00:00
22312733 "bdumtddyasb" 2009-01-17T20:41:04+00:00
</code></pre>
<p>The time it takes reading the data is irrelevant here, processing the data is the bottleneck.</p>
<p><strong>Microbenchmarks</strong></p>
<p>All of the following are interpreted languages. The host machine is running 64-bit Linux.</p>
<p>Python 2.6.2 with IPython 0.9.1, ~214k conversions per second (100%):</p>
<pre><code>In [1]: strings = map(str, range(int(1e7)))
In [2]: %timeit map(int, strings);
10 loops, best of 3: 4.68 s per loop
</code></pre>
<p>REBOL 3.0 Version 2.100.76.4.2, ~231kcps (108%):</p>
<pre><code>>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [map str strings [to integer! str]]
== 0:00:04.328675
</code></pre>
<p>REBOL 2.7.6.4.2 (15-Mar-2008), ~523kcps (261%):</p>
<p>As John noted in the comments, this version does <em>not</em> build a list of converted integers, so the speed-ratio given is relative to Python's 4.99s runtime of <code>for str in strings: int(str)</code>.</p>
<pre><code>>> delta-time: func [c /local t] [t: now/time/precise do c now/time/precise - t]
>> strings: array n: to-integer 1e7 repeat i n [poke strings i mold (i - 1)]
== "9999999"
>> delta-time [foreach str strings [to integer! str]]
== 0:00:01.913193
</code></pre>
<p>KDB+ 2.6t 2009.04.15, ~2016kcps (944%):</p>
<pre><code>q)strings:string til "i"$1e7
q)\t "I"$strings
496
</code></pre>
| 3 | 2009-08-20T22:11:33Z | 1,310,375 | <p>As others have said you could code up your own C module to do the parsing/conversion for you. Then you could simply import that and call on it. You might be able to use Pyrex or its Cython derivative to generate your C from your Python (by adding a few type constraining hints to the Python).</p>
<p>You can read more about <a href="http://cython.org/" rel="nofollow">Cython</a> and see if that will help.</p>
<p>Another question that comes to mind though ... what are you going to be doing with these billion integers? Is it possible that you might load them as strings, search for them as strings and perform a lazy conversion as necessary? Or could you parallelize the conversion and the other computations using <code>threading</code> or <code>multiprocessing</code> modules and Queues? (Have one or more threads/processes performing the conversion and feeding a Queue from which your processing engine fetches them). In other words would a producer/consumer design alleviate the problem?</p>
| 1 | 2009-08-21T06:56:00Z | [
"python",
"performance",
"optimization"
] |