| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Signals registered more than once in django1.1 testserver | 1,149,317 | <p>I've defined a signal handler function in my models.py file. At the bottom of that file, I use <code>signals.post_save.connect(myhandler, sender=myclass)</code> as recommended in the docs at <a href="http://docs.djangoproject.com/en/dev/topics/signals/" rel="nofollow">http://docs.djangoproject.com/en/dev/topics/signals/</a>.</p>
<p>However, when I run the test server, simple print-statement debugging shows that the models.py file gets imported twice and (as far as I can tell), this causes my signal handler to get registered twice. This means that every action is handled twice, which is obviously not the intended behaviour.</p>
<p>The first import seems to occur during the model checking phase, and the second happens right when the model itself is needed during the first request handled by the server.</p>
<p>Should I be registering my signal handlers elsewhere? Is this a bug in the 1.1 test server? Am I missing something else?</p>
| 1 | 2009-07-19T05:59:53Z | 1,149,336 | <p>The signature for the <code>connect</code> method is </p>
<pre><code>def connect(self, receiver, sender=None, weak=True, dispatch_uid=None)
</code></pre>
<p>where the <code>dispatch_uid</code> parameter is an identifier used to uniquely identify a particular instance of a receiver. This will usually be a string, though it may be anything hashable. If receivers have a <code>dispatch_uid</code> attribute, the receiver will not be added if another receiver already exists with that <code>dispatch_uid</code>.</p>
<p>So, you could specify a <code>dispatch_uid</code> in your <code>connect</code> call to see if that eliminates the problem.</p>
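The deduplication semantics can be illustrated with a toy dispatcher (this is an illustrative sketch, not Django's actual implementation; in Django itself you would simply pass <code>dispatch_uid</code> to <code>signals.post_save.connect(myhandler, sender=myclass, dispatch_uid="myapp.myhandler")</code>):

```python
# Toy illustration (NOT Django) of how a dispatch_uid-style key prevents
# the same handler from being registered twice.
class Signal:
    def __init__(self):
        self._receivers = {}  # dispatch_uid -> receiver

    def connect(self, receiver, dispatch_uid=None):
        # Fall back to the receiver's identity when no uid is given.
        key = dispatch_uid if dispatch_uid is not None else id(receiver)
        if key not in self._receivers:  # a duplicate uid is silently ignored
            self._receivers[key] = receiver

    def send(self, **kwargs):
        return [r(**kwargs) for r in self._receivers.values()]

post_save = Signal()

def myhandler(instance=None):
    return "handled %s" % instance

# Registered twice (e.g. because models.py was imported twice),
# but only one copy is kept:
post_save.connect(myhandler, dispatch_uid="myapp.myhandler")
post_save.connect(myhandler, dispatch_uid="myapp.myhandler")
print(len(post_save.send(instance="obj")))  # 1
```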
| 4 | 2009-07-19T06:14:06Z | [
"python",
"django",
"django-models",
"django-signals"
] |
Exploring and decompiling python bytecode | 1,149,513 | <p>Let's say I have:</p>
<pre><code>>>> def test(a):
>>> print a
</code></pre>
<p>Now, I want to see what <code>test</code> looks like in its compiled form.</p>
<pre><code>>>> test.func_code.co_code
'|\x00\x00GHd\x00\x00S'
</code></pre>
<p>I can get the disassembled form using the <strong>dis</strong> module:</p>
<pre><code>>>> import dis
>>> dis.dis(test)
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
<p>Is there an open-source and maintained decompiler I could use to turn the bytecode back into readable Python code?</p>
<p><strong>Update: thanks for suggesting decompyle, but it's outdated (Python 2.3) and no one maintains it anymore. Is there anything for Python 2.5 or later?</strong></p>
| 12 | 2009-07-19T08:59:47Z | 1,149,523 | <p><a href="http://sourceforge.net/projects/decompyle/" rel="nofollow">decompyle</a></p>
<blockquote>
<p>Decompyle is a python disassembler and
decompiler which converts Python
byte-code (.pyc or .pyo) back into
equivalent Python source. Verification
of the produced code (re-compiled) is
available as well.</p>
</blockquote>
| 3 | 2009-07-19T09:06:57Z | [
"python",
"decompiling"
] |
Exploring and decompiling python bytecode | 1,149,513 | <p>Let's say I have:</p>
<pre><code>>>> def test(a):
>>> print a
</code></pre>
<p>Now, I want to see what <code>test</code> looks like in its compiled form.</p>
<pre><code>>>> test.func_code.co_code
'|\x00\x00GHd\x00\x00S'
</code></pre>
<p>I can get the disassembled form using the <strong>dis</strong> module:</p>
<pre><code>>>> import dis
>>> dis.dis(test)
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
<p>Is there an open-source and maintained decompiler I could use to turn the bytecode back into readable Python code?</p>
<p><strong>Update: thanks for suggesting decompyle, but it's outdated (Python 2.3) and no one maintains it anymore. Is there anything for Python 2.5 or later?</strong></p>
| 12 | 2009-07-19T08:59:47Z | 1,149,590 | <p><strong>UnPyc</strong></p>
<p><a href="http://sourceforge.net/projects/unpyc/" rel="nofollow">http://sourceforge.net/projects/unpyc/</a></p>
<p>It is a maintained fork of the old decompyle updated to work with 2.5 and 2.6.</p>
| 7 | 2009-07-19T10:00:58Z | [
"python",
"decompiling"
] |
Exploring and decompiling python bytecode | 1,149,513 | <p>Let's say I have:</p>
<pre><code>>>> def test(a):
>>> print a
</code></pre>
<p>Now, I want to see what <code>test</code> looks like in its compiled form.</p>
<pre><code>>>> test.func_code.co_code
'|\x00\x00GHd\x00\x00S'
</code></pre>
<p>I can get the disassembled form using the <strong>dis</strong> module:</p>
<pre><code>>>> import dis
>>> dis.dis(test)
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
<p>Is there an open-source and maintained decompiler I could use to turn the bytecode back into readable Python code?</p>
<p><strong>Update: thanks for suggesting decompyle, but it's outdated (Python 2.3) and no one maintains it anymore. Is there anything for Python 2.5 or later?</strong></p>
| 12 | 2009-07-19T08:59:47Z | 11,102,744 | <p>Get uncompyle2 from GitHub! :)</p>
| 6 | 2012-06-19T14:06:44Z | [
"python",
"decompiling"
] |
Exploring and decompiling python bytecode | 1,149,513 | <p>Let's say I have:</p>
<pre><code>>>> def test(a):
>>> print a
</code></pre>
<p>Now, I want to see what <code>test</code> looks like in its compiled form.</p>
<pre><code>>>> test.func_code.co_code
'|\x00\x00GHd\x00\x00S'
</code></pre>
<p>I can get the disassembled form using the <strong>dis</strong> module:</p>
<pre><code>>>> import dis
>>> dis.dis(test)
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
<p>Is there an open-source and maintained decompiler I could use to turn the bytecode back into readable Python code?</p>
<p><strong>Update: thanks for suggesting decompyle, but it's outdated (Python 2.3) and no one maintains it anymore. Is there anything for Python 2.5 or later?</strong></p>
| 12 | 2009-07-19T08:59:47Z | 19,268,774 | <p>Uncompyle2 worked for me with Python 2.7.</p>
<p><a href="https://github.com/wibiti/uncompyle2" rel="nofollow">https://github.com/wibiti/uncompyle2</a></p>
<p>A quick guide to using uncompyle2: install it, and then</p>
<pre><code>>>> import uncompyle2
>>> with open("decompiled.py","wb") as f:
... uncompyle2.uncompyle_file("compiled.pyc",f)
</code></pre>
<p>It will write the recovered source code to decompiled.py.</p>
| 2 | 2013-10-09T09:56:57Z | [
"python",
"decompiling"
] |
Exploring and decompiling python bytecode | 1,149,513 | <p>Let's say I have:</p>
<pre><code>>>> def test(a):
>>> print a
</code></pre>
<p>Now, I want to see what <code>test</code> looks like in its compiled form.</p>
<pre><code>>>> test.func_code.co_code
'|\x00\x00GHd\x00\x00S'
</code></pre>
<p>I can get the disassembled form using the <strong>dis</strong> module:</p>
<pre><code>>>> import dis
>>> dis.dis(test)
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
<p>Is there an open-source and maintained decompiler I could use to turn the bytecode back into readable Python code?</p>
<p><strong>Update: thanks for suggesting decompyle, but it's outdated (Python 2.3) and no one maintains it anymore. Is there anything for Python 2.5 or later?</strong></p>
| 12 | 2009-07-19T08:59:47Z | 21,366,088 | <p>In addition to what DevC wrote:</p>
<ol>
<li><p>Uncompyle2 works with Python 2.7</p></li>
<li><p>with Uncompyle2, you can also decompile from the command line:</p>
<pre><code>$ uncompyle2 compiled.pyc >> source.uncompyle2.py
</code></pre></li>
<li><p>to install Uncompyle2, do</p>
<pre><code>$ git clone https://github.com/wibiti/uncompyle2
$ cd uncompyle2
$ sudo ./setup.py install
</code></pre></li>
</ol>
| 1 | 2014-01-26T16:30:29Z | [
"python",
"decompiling"
] |
Exploring and decompiling python bytecode | 1,149,513 | <p>Let's say I have:</p>
<pre><code>>>> def test(a):
>>> print a
</code></pre>
<p>Now, I want to see what <code>test</code> looks like in its compiled form.</p>
<pre><code>>>> test.func_code.co_code
'|\x00\x00GHd\x00\x00S'
</code></pre>
<p>I can get the disassembled form using the <strong>dis</strong> module:</p>
<pre><code>>>> import dis
>>> dis.dis(test)
2 0 LOAD_FAST 0 (a)
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
</code></pre>
<p>Is there an open-source and maintained decompiler I could use to turn the bytecode back into readable Python code?</p>
<p><strong>Update: thanks for suggesting decompyle, but it's outdated (Python 2.3) and no one maintains it anymore. Is there anything for Python 2.5 or later?</strong></p>
| 12 | 2009-07-19T08:59:47Z | 37,440,699 | <p>There is also now <a href="https://pypi.python.org/pypi?name=uncompyle6&:action=display" rel="nofollow">uncompyle6</a> which is written in Python and <a href="https://github.com/zrax/pycdc" rel="nofollow">pycdc</a> which is written in C++. </p>
<p>Both of these handle several versions of Python bytecode including Python 2 versions and Python 3 versions. </p>
| 0 | 2016-05-25T14:50:35Z | [
"python",
"decompiling"
] |
python and ruby - for what to use it? | 1,149,581 | <p>I'm thinking about learning Ruby and Python a little, and it occurred to me to ask: what are Ruby and Python good for? When should I use Ruby and when Python, and what is each of them not suited for? :)</p>
<p>What should I do in these languages?</p>
<p>thanks</p>
| 8 | 2009-07-19T09:56:34Z | 1,149,595 | <p>They are mostly good for rapid prototyping, quick development, dynamic programs, web applications and scripts. They're general-purpose languages, so you can use them for pretty much everything you want. You'll have shorter development times (compared to, say, Java or C++), but worse performance and less static error-checking.</p>
<p>You can also develop desktop apps on them, but there may be some minor complications on shipping (since you'll usually have to ship the interpreter too).</p>
<p>You shouldn't do performance-critical code or heavy computations in them - if you need those, write them in a faster language (like C) and make a binding for the code. I believe Python is better for this than Ruby, but I could be wrong. (OTOH, Ruby has stronger metaprogramming.)</p>
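As a hedged sketch of that "write the hot part in a faster language and bind it" approach, here is a minimal <code>ctypes</code> example that calls <code>sqrt</code> from the system C math library (a real project would compile and bind its own C code instead; the library lookup assumes a standard Unix-like environment):

```python
# Bind sqrt() from the C math library via ctypes, as a stand-in for
# "make the heavy computation in C and call it from Python".
from ctypes import CDLL, c_double
from ctypes.util import find_library

libm = CDLL(find_library("m"))   # the system C math library
libm.sqrt.argtypes = [c_double]  # declare the C signature explicitly
libm.sqrt.restype = c_double

print(libm.sqrt(9.0))  # 3.0
```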
| 11 | 2009-07-19T10:08:03Z | [
"python",
"ruby"
] |
python and ruby - for what to use it? | 1,149,581 | <p>I'm thinking about learning Ruby and Python a little, and it occurred to me to ask: what are Ruby and Python good for? When should I use Ruby and when Python, and what is each of them not suited for? :)</p>
<p>What should I do in these languages?</p>
<p>thanks</p>
| 8 | 2009-07-19T09:56:34Z | 1,149,596 | <p>If you want to know what people actually use them for, check out <a href="http://pypi.python.org/pypi">Python Package Index</a>, <a href="http://rubyforge.org/">RubyForge</a>, and search <a href="http://web.sourceforge.com/">SourceForge</a> or even StackOverflow.</p>
<p>As shylent says, you can easily get into holy wars about what they <em>should</em> be used for. Both Ruby and Python are popular especially for prototyping, but you can also build production software like <a href="http://rubyforge.org/">Ruby on Rails</a>, <a href="http://www.zope.org/">Zope</a>, and <a href="http://mercurial.selenic.com/wiki/">Mercurial</a>.</p>
<p>What one would not use them for is code that is performance-critical (most isn't) or close to the metal.</p>
| 6 | 2009-07-19T10:08:46Z | [
"python",
"ruby"
] |
python and ruby - for what to use it? | 1,149,581 | <p>I'm thinking about learning Ruby and Python a little, and it occurred to me to ask: what are Ruby and Python good for? When should I use Ruby and when Python, and what is each of them not suited for? :)</p>
<p>What should I do in these languages?</p>
<p>thanks</p>
| 8 | 2009-07-19T09:56:34Z | 1,149,661 | <p>They are good for everything.</p>
<p>Ruby has an edge for munging text files awk/perl style. That's slightly easier in Ruby.
For the rest, I think Python has a strong edge, and that is TOTALLY subjective. See <a href="http://stackoverflow.com/questions/1113611/what-does-ruby-have-that-python-doesnt-and-vice-versa">http://stackoverflow.com/questions/1113611/what-does-ruby-have-that-python-doesnt-and-vice-versa</a> and the follow-up blog post <a href="http://regebro.wordpress.com/2009/07/12/python-vs-ruby/" rel="nofollow">http://regebro.wordpress.com/2009/07/12/python-vs-ruby/</a>.</p>
<p>I use Python for every programming-related thing I need to do, and will do that until there is a complete shift in programming paradigm that kicks OO development into the stone age.</p>
| 2 | 2009-07-19T11:03:32Z | [
"python",
"ruby"
] |
python and ruby - for what to use it? | 1,149,581 | <p>I'm thinking about learning Ruby and Python a little, and it occurred to me to ask: what are Ruby and Python good for? When should I use Ruby and when Python, and what is each of them not suited for? :)</p>
<p>What should I do in these languages?</p>
<p>thanks</p>
| 8 | 2009-07-19T09:56:34Z | 1,149,723 | <p>To avoid the holy war, and maybe to give another perspective, I say (without requesting more information about which part of programming the questioner thinks is fun):</p>
<p>Learn python first! </p>
<p>If you haven't done any scripting language yet, I would recommend Python.
The core of Python is somewhat cleaner than the core of Ruby, and if you learn the basic core of scripting with Python first, you will more or less learn Ruby as a bonus.</p>
<p>You will (because you use python) write code that looks very clean and has good indentation
right from the beginning. </p>
<p>The difficulty in deciding what to learn lies in what you will actually try to solve!</p>
<p>If you are looking for a new production language to solve X, the answer gets more complicated.
Is X part of the language core? Was the language in fact invented to solve X?</p>
<p>If the question was: what single programming language should I master and eventually reach Nirvana with? My answer is, I don't have a clue!
(CLisp, Scheme48, Erlang or Haskell should probably have been on my final list, though)</p>
<p>PS.
I know that this isn't a spot-on answer to the very simplified question in the post:
what can Ruby do that Python can't, or what can Python do that Ruby can't?</p>
<p>The point is that when you set out to learn something, you usually have a hidden agenda, so you try to solve your favorite problem in any language again and again.</p>
<p>If you really are out to learn without an agenda, I think that Python in its most basic form is a clean and crisp place to start, and you should be able to use the same style when using Ruby. </p>
<p>DISCLAIMER: I prefer Ruby in a production (commercial) setup over Python. I prefer Ruby over Python on Windows. I prefer Ruby over Python for the things I do at home. I do that because the things I really like to solve are more fun to solve in Ruby than in Python. My programming style/habits tend to fit better in Ruby. </p>
| 1 | 2009-07-19T11:43:42Z | [
"python",
"ruby"
] |
Use Google AppEngine datastore outside of AppEngine project | 1,149,639 | <p>For my little framework <a href="http://code.google.com/p/pyxer/">Pyxer</a> I would like to be able to use the Google AppEngine datastores outside of AppEngine projects as well, because I'm now used to this ORM pattern, and for quick little hacks this is nice. I cannot use Google AppEngine for all of my projects because of its limitations on file size and number of files.</p>
<p>A great alternative would be a project that provides an ORM with the same naming as the AppEngine datastore. I also like the GQL approach very much, since it is a nice combination of ORM and SQL patterns.</p>
<p>Any ideas where or how I might find such a solution? Thanks.</p>
| 5 | 2009-07-19T10:48:19Z | 1,150,756 | <p>Nick Johnson, from the app engine team himself, has a <a href="http://blog.notdot.net/2009/04/Announcing-BDBDatastore-a-replacement-datastore-for-App-Engine" rel="nofollow">blog posting</a> listing some of the alternatives, including his BDBdatastore.</p>
<p>However, that assumes you want to use exactly the same ORM that you use now in app engine. There are tons of ORM options in general out there, though I am not familiar with the state of the art in Python. <a href="http://stackoverflow.com/questions/53428/what-are-some-good-python-orm-solutions">This</a> question does seem to address the issue though.</p>
| 5 | 2009-07-19T20:11:30Z | [
"python",
"sql",
"google-app-engine",
"orm"
] |
Use Google AppEngine datastore outside of AppEngine project | 1,149,639 | <p>For my little framework <a href="http://code.google.com/p/pyxer/">Pyxer</a> I would like to be able to use the Google AppEngine datastores outside of AppEngine projects as well, because I'm now used to this ORM pattern, and for quick little hacks this is nice. I cannot use Google AppEngine for all of my projects because of its limitations on file size and number of files.</p>
<p>A great alternative would be a project that provides an ORM with the same naming as the AppEngine datastore. I also like the GQL approach very much, since it is a nice combination of ORM and SQL patterns.</p>
<p>Any ideas where or how I might find such a solution? Thanks.</p>
| 5 | 2009-07-19T10:48:19Z | 1,830,581 | <p>You might also want to look at <a href="http://code.google.com/p/appscale/" rel="nofollow">AppScale</a>, which is "a platform that allows users to deploy and host their own Google App Engine applications". </p>
<p>It's probably overkill for your purposes, but definitely something to look into.</p>
| 4 | 2009-12-02T04:03:57Z | [
"python",
"sql",
"google-app-engine",
"orm"
] |
Use Google AppEngine datastore outside of AppEngine project | 1,149,639 | <p>For my little framework <a href="http://code.google.com/p/pyxer/">Pyxer</a> I would like to be able to use the Google AppEngine datastores outside of AppEngine projects as well, because I'm now used to this ORM pattern, and for quick little hacks this is nice. I cannot use Google AppEngine for all of my projects because of its limitations on file size and number of files.</p>
<p>A great alternative would be a project that provides an ORM with the same naming as the AppEngine datastore. I also like the GQL approach very much, since it is a nice combination of ORM and SQL patterns.</p>
<p>Any ideas where or how I might find such a solution? Thanks.</p>
| 5 | 2009-07-19T10:48:19Z | 2,437,624 | <p>There is also the Remote API which the bulkloader tool uses to upload or download data into/from the Datastore.</p>
<p>Maybe it could be used to let applications that are not hosted on AppEngine still use the Datastore there.</p>
| 0 | 2010-03-13T07:22:27Z | [
"python",
"sql",
"google-app-engine",
"orm"
] |
Pythonic Swap? | 1,149,802 | <p>I found that I have to perform a swap in Python, and I wrote something like this.</p>
<pre><code>arr[first], arr[second] = arr[second], arr[first]
</code></pre>
<p>I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?</p>
<p><strong>EDIT:</strong>
I think another example will show my doubts:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>Is this the only available solution for a swap in Python?
I searched a lot but didn't find a nice answer...</p>
| 13 | 2009-07-19T12:29:16Z | 1,149,804 | <pre><code>a, b = b, a
</code></pre>
<p><a href="http://love-python.blogspot.com/2008/02/swap-values-python-way.html" rel="nofollow">Is a perfectly Pythonic idiom.</a> It is short and readable, as long as your variable names are short enough.</p>
| 15 | 2009-07-19T12:31:08Z | [
"python",
"swap"
] |
Pythonic Swap? | 1,149,802 | <p>I found that I have to perform a swap in Python, and I wrote something like this.</p>
<pre><code>arr[first], arr[second] = arr[second], arr[first]
</code></pre>
<p>I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?</p>
<p><strong>EDIT:</strong>
I think another example will show my doubts:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>Is this the only available solution for a swap in Python?
I searched a lot but didn't find a nice answer...</p>
| 13 | 2009-07-19T12:29:16Z | 1,149,860 | <p>It's difficult to imagine how it could be made more elegant: using a hypothetical built-in function ... <code>swap_sequence_elements(arr, first, second)</code> elegant? maybe, but this is in YAGGI territory -- you aren't going to get it ;-) -- and the function call overhead would/should put you off implementing it yourself.</p>
<p>What you have is much more elegant than the alternative in-line way:</p>
<pre><code>temp = arr[first]
arr[first] = arr[second]
arr[second] = temp
</code></pre>
<p>and (bonus!) is faster too (on the not unreasonable assumption that a bytecode <code>ROT_TWO</code> is faster than a <code>LOAD_FAST</code> plus a <code>STORE_FAST</code>).</p>
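For the curious, that bytecode claim can be checked with the <code>dis</code> module (shown here with the Python 3 API; opcode names vary by CPython version: older releases emit <code>ROT_TWO</code> for the swap, 3.11 and later emit <code>SWAP</code>):

```python
# Disassemble the tuple-swap idiom and look for the swap opcode.
import dis
import io

buf = io.StringIO()
dis.dis(compile("a, b = b, a", "<swap>", "exec"), file=buf)
listing = buf.getvalue()

# Prints True if either swap opcode appears in the listing.
print("ROT_TWO" in listing or "SWAP" in listing)
```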
| 1 | 2009-07-19T13:05:52Z | [
"python",
"swap"
] |
Pythonic Swap? | 1,149,802 | <p>I found that I have to perform a swap in Python, and I wrote something like this.</p>
<pre><code>arr[first], arr[second] = arr[second], arr[first]
</code></pre>
<p>I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?</p>
<p><strong>EDIT:</strong>
I think another example will show my doubts:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>Is this the only available solution for a swap in Python?
I searched a lot but didn't find a nice answer...</p>
| 13 | 2009-07-19T12:29:16Z | 1,149,884 | <p><code>a, b = b, a</code> is about as short as you'll get; it's only three characters (aside from the variable names). It's about as Pythonic as it gets.</p>
<p>One alternative is the usual use-a-temp-variable:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>..becomes..</p>
<pre><code>temp = self.memberlist[someindexB]
self.memberlist[someindexB] = self.memberlist[someindexA]
self.memberlist[someindexA] = temp
</code></pre>
<p>..which I think is messier and less "obvious"</p>
<p>Another way, which is maybe a bit more readable with long variable names:</p>
<pre><code>a, b = self.memberlist[someindexA], self.memberlist[someindexB]
self.memberlist[someindexA], self.memberlist[someindexB] = b, a
</code></pre>
| 1 | 2009-07-19T13:17:42Z | [
"python",
"swap"
] |
Pythonic Swap? | 1,149,802 | <p>I found that I have to perform a swap in Python, and I wrote something like this.</p>
<pre><code>arr[first], arr[second] = arr[second], arr[first]
</code></pre>
<p>I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?</p>
<p><strong>EDIT:</strong>
I think another example will show my doubts:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>Is this the only available solution for a swap in Python?
I searched a lot but didn't find a nice answer...</p>
| 13 | 2009-07-19T12:29:16Z | 1,150,272 | <p>The one thing I might change in your example code: if you're going to use some long name such as <code>self.memberlist</code> over an over again, it's often more readable to alias ("assign") it to a shorter name first. So for example instead of the long, hard-to-read:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>you could code:</p>
<pre><code>L = self.memberlist
L[someindexA], L[someindexB] = L[someindexB], L[someindexA]
</code></pre>
<p>Remember that Python works by-reference so L refers to exactly the same object as <code>self.memberlist</code>, NOT a copy (by the same token, the assignment is extremely fast no matter how long the list may be, because it's not copied anyway -- it's just one more reference).</p>
<p>I don't think any further complication is warranted, though of course some fancy ones might easily be conceived, such as (for a, b "normal" indices <code>>=0</code>):</p>
<pre><code>def slicer(a, b):
return slice(a, b+cmp(b,a), b-a), slice(b, a+cmp(a,b), a-b)
back, forth = slicer(someindexA, someindexB)
self.memberlist[back] = self.memberlist[forth]
</code></pre>
<p>I think figuring out these kinds of "advanced" uses is a nice conceit, useful mental exercise, and good fun -- I recommend that interested readers, once the general idea is clear, focus on the role of those <code>+cmp</code> and how they make things work for the three possibilities (a>b, a<b, a==b) [[not for negative indices, though -- why not, and how would slicer need to change to fix this?]]. But using such a fancy approach in production code would generally be overkill and quite unwarranted, making things murkier and harder to maintain than the simple and straightforward approach.</p>
<p>Remember, <a href="http://www.python.org/dev/peps/pep-0020/" rel="nofollow">simple is better than complex</a>!</p>
| 12 | 2009-07-19T16:36:43Z | [
"python",
"swap"
] |
Pythonic Swap? | 1,149,802 | <p>I found that I have to perform a swap in Python, and I wrote something like this.</p>
<pre><code>arr[first], arr[second] = arr[second], arr[first]
</code></pre>
<p>I suppose this is not so Pythonic. Does somebody know how to do a swap in Python more elegantly?</p>
<p><strong>EDIT:</strong>
I think another example will show my doubts:</p>
<pre><code>self.memberlist[someindexA], self.memberlist[someindexB] = self.memberlist[someindexB], self.memberlist[someindexA]
</code></pre>
<p>Is this the only available solution for a swap in Python?
I searched a lot but didn't find a nice answer...</p>
| 13 | 2009-07-19T12:29:16Z | 1,151,911 | <p>I suppose you could take advantage of the step argument of slice notation to do something like this:</p>
<pre><code>myarr[:2] = myarr[:2][::-1]
</code></pre>
<p>I'm not sure this is clearer or more pythonic though...</p>
| -1 | 2009-07-20T06:01:25Z | [
"python",
"swap"
] |
$_SERVER vs. WSGI environ parameter | 1,149,881 | <p>I'm designing a site. It is in a very early stage, and I have to make a decision whether or not to use a SingleSignOn service provided by the server. (it's a campus site, and more and more sites are using SSO here, so generally it's a good idea).
The target platform is most probably going to be django via mod_wsgi. However, any documentation provided with this service features php code. This method heavily relies on using custom <code>$_SERVER['HTTPsomething']</code> variables. Unfortunately, right now I don't have access to this environment. </p>
<p>(How) can I access these custom variables in Django? According to the <a href="http://www.python.org/dev/peps/pep-0333/#environ-variables" rel="nofollow">WSGI</a> documentation, the environ variable should contain as many variables as possible. Can I be sure that I can access them?</p>
| 1 | 2009-07-19T13:16:11Z | 1,149,924 | <p>Well, $_SERVER is PHP. You are likely to be able to access the same variables via WSGI as well, but to be sure you need to figure out exactly how the SSO works, so you know what creates these variables (probably Apache) and that you can access them.</p>
<p>Or, you can get yourself access and try it out. :)</p>
| 0 | 2009-07-19T13:45:11Z | [
"php",
"python",
"django",
"environment-variables"
] |
$_SERVER vs. WSGI environ parameter | 1,149,881 | <p>I'm designing a site. It is in a very early stage, and I have to make a decision whether or not to use a SingleSignOn service provided by the server. (it's a campus site, and more and more sites are using SSO here, so generally it's a good idea).
The target platform is most probably going to be django via mod_wsgi. However, any documentation provided with this service features php code. This method heavily relies on using custom <code>$_SERVER['HTTPsomething']</code> variables. Unfortunately, right now I don't have access to this environment. </p>
<p>(How) can I access these custom variables in Django? According to the <a href="http://www.python.org/dev/peps/pep-0333/#environ-variables" rel="nofollow">WSGI</a> documentation, the environ variable should contain as many variables as possible. Can I be sure that I can access them?</p>
| 1 | 2009-07-19T13:16:11Z | 1,150,166 | <p>In Django, the server environment variables are provided as dictionary members of the <code>META</code> attribute on the <code>request</code> object - so in your view, you can always access them via <code>request.META['foo']</code> where foo is the name of the variable.</p>
<p>An easy way to see what is available is to create a view containing <code>assert False</code> to trigger an error. As long as you're running with <code>DEBUG=True</code>, you'll see a nice error page containing lots of information about the server status, including a full list of all the <code>request</code> attributes. </p>
| 5 | 2009-07-19T15:53:10Z | [
"php",
"python",
"django",
"environment-variables"
] |
$_SERVER vs. WSGI environ parameter | 1,149,881 | <p>I'm designing a site. It is in a very early stage, and I have to make a decision whether or not to use a SingleSignOn service provided by the server. (it's a campus site, and more and more sites are using SSO here, so generally it's a good idea).
The target platform is most probably going to be django via mod_wsgi. However, any documentation provided with this service features php code. This method heavily relies on using custom <code>$_SERVER['HTTPsomething']</code> variables. Unfortunately, right now I don't have access to this environment. </p>
<p>(How) can I access these custom variables in Django? According to the <a href="http://www.python.org/dev/peps/pep-0333/#environ-variables" rel="nofollow">WSGI</a> documentation, the environ variable should contain as many variables as possible. Can I be sure that I can access them?</p>
| 1 | 2009-07-19T13:16:11Z | 1,151,129 | <p>To determine the set of variables passed in the raw WSGI environment, before Django does anything to them, put the following code in the WSGI script file in place of your Django stuff.</p>
<pre><code>import StringIO
def application(environ, start_response):
headers = []
headers.append(('Content-type', 'text/plain'))
start_response('200 OK', headers)
input = environ['wsgi.input']
output = StringIO.StringIO()
keys = environ.keys()
keys.sort()
for key in keys:
print >> output, '%s: %s' % (key, repr(environ[key]))
print >> output
length = int(environ.get('CONTENT_LENGTH', '0'))
output.write(input.read(length))
return [output.getvalue()]
</code></pre>
<p>It will display back to the browser the set of key/value pairs.</p>
<p>Finding out how the SSO mechanism works is important. If it does the sensible thing, you will find that it sets the REMOTE_USER and possibly AUTH_TYPE variables. If REMOTE_USER is set, it is an indicator that the user named in the variable has been authenticated by some higher-level authentication mechanism in Apache. These variables would normally be set for HTTP Basic and Digest authentication, but to work with as many systems as possible, an SSO mechanism should also use them.</p>
<p>If they are set, then there is a Django feature, described at:</p>
<p><a href="http://docs.djangoproject.com/en/dev/howto/auth-remote-user/" rel="nofollow">http://docs.djangoproject.com/en/dev/howto/auth-remote-user/</a></p>
<p>which can then be used to have Django accept authentication done at a higher level.</p>
<p>Even if the SSO mechanism doesn't use REMOTE_USER, but instead uses custom headers, you can use a custom WSGI wrapper around the whole Django application to translate any custom headers to a suitable REMOTE_USER value which Django can then make use of.</p>
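A minimal sketch of such a wrapper (the header name <code>HTTP_X_SSO_USER</code> is made up for illustration; substitute whatever key your SSO actually puts in the environ):

```python
# Hypothetical WSGI middleware that copies a custom SSO variable into
# REMOTE_USER so downstream code (e.g. Django's RemoteUserMiddleware)
# can treat the user as already authenticated.
def sso_to_remote_user(app, header="HTTP_X_SSO_USER"):
    def wrapper(environ, start_response):
        if header in environ and "REMOTE_USER" not in environ:
            environ["REMOTE_USER"] = environ[header]
        return app(environ, start_response)
    return wrapper

def demo_app(environ, start_response):
    # Echo whichever user the middleware established.
    start_response("200 OK", [("Content-type", "text/plain")])
    return [environ.get("REMOTE_USER", "anonymous").encode()]

app = sso_to_remote_user(demo_app)
```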
| 3 | 2009-07-19T23:02:33Z | [
"php",
"python",
"django",
"environment-variables"
] |
cleaning up when using exceptions and files in python | 1,149,983 | <p>I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand its spirit to get the most out of it (and so far it's hard)...</p>
<p>My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...</p>
<p>Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to</p>
<ul>
<li>the opening itself (in which case
there is no need to close the not
opened file)</li>
<li>the parsing (in which
case the file needs to be closed)</li>
</ul>
<p>The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open itself (see the comments in the code below)...</p>
<p>This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols, parsing the results of dir() is painful, and such a statement is hardly readable...</p>
<p>Most other languages allow for variable definitions which could save the day here... but in python, everything seems to be implicit...</p>
<p>Normally, one would just declare a file_desc variable, then use a separate try/catch block for every task... one for opening, one for parsing and the last one for closing... no need to nest them... but here I don't know a way to declare the variable, so I'm stuck right at the beginning of the problem!</p>
<p>so what is the spirit of python here ???</p>
<ul>
<li>split the opening/parsing in two different methods ? How ?</li>
<li>use some kind of nested try/except clauses ??? How ?</li>
<li>maybe there is a way to declare the file_desc variable and then there would be no need for the extra checking... is it at all possible ??? desirable ???</li>
<li>what about the close() statement ??? what if it raises an error ?</li>
</ul>
<p>Thanks for your hints... here is the sample code:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    file_desc = open(file_name, 'r')
    # read the file...
    while True:
        current_line = file_desc.readline()
        if not current_line: break
        print current_line.rstrip("\n")
    # lets simulate some parsing error...
    raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
    file_desc.close()
# finally:
#     if 'file_desc' in dir() and not file_desc.closed:
#         file_desc.close()

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 3 | 2009-07-19T14:12:25Z | 1,149,997 | <p>The easiest way to deal with this is to use the fact that file objects in Python 2.5+ are <a href="http://www.python.org/doc/2.5.2/lib/typecontextmanager.html">context managers</a>. You can use the <a href="http://www.python.org/dev/peps/pep-0343/"><code>with</code></a> statement to enter a context; the context manager's <code>__exit__</code> method is automatically called when exiting this <code>with</code> scope. The file object's context management automatically closes the file then.</p>
<pre><code>try:
    with file("hello.txt") as input_file:
        for line in input_file:
            if "hello" not in line:
                raise ValueError("Every line must contain 'hello'!")
except IOError:
    print "Damnit, couldn't open the file."
except:
    raise
else:
    print "Everything went fine!"
</code></pre>
<p>The open hello.txt handle will automatically be closed, and exceptions from within the with scope are propagated outside.</p>
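<p>The same mechanism works for your own classes. A minimal sketch (the class name is made up for illustration) shows that <code>__exit__</code> runs even when the body of the <code>with</code> block raises, which is exactly why the file handle above is guaranteed to be closed:</p>

```python
class tracked_resource(object):
    """A made-up resource that records whether it was cleaned up,
    mimicking what file objects do in their own __exit__."""
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.closed = True   # cleanup runs here, no matter what happened
        return False         # returning False re-raises any pending exception

res = tracked_resource()
try:
    with res:
        raise ValueError("simulated parsing error")
except ValueError:
    pass
# res.closed is now True even though the with body raised
```
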
| 6 | 2009-07-19T14:20:15Z | [
"python",
"exception",
"file-io"
] |
cleaning up when using exceptions and files in python | 1,149,983 | <p>I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand its spirit to get the most out of it (and so far it's hard)...</p>
<p>My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...</p>
<p>Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to</p>
<ul>
<li>the opening itself (in which case
there is no need to close the not
opened file)</li>
<li>the parsing (in which
case the file needs to be closed)</li>
</ul>
<p>The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open itself (see the comments in the code below)...</p>
<p>This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols, parsing the results of dir() is painful, and such a statement is hardly readable...</p>
<p>Most other languages allow for variable definitions which could save the day here... but in python, everything seems to be implicit...</p>
<p>Normally, one would just declare a file_desc variable, then use a separate try/catch block for every task... one for opening, one for parsing and the last one for closing... no need to nest them... but here I don't know a way to declare the variable, so I'm stuck right at the beginning of the problem!</p>
<p>so what is the spirit of python here ???</p>
<ul>
<li>split the opening/parsing in two different methods ? How ?</li>
<li>use some kind of nested try/except clauses ??? How ?</li>
<li>maybe there is a way to declare the file_desc variable and then there would be no need for the extra checking... is it at all possible ??? desirable ???</li>
<li>what about the close() statement ??? what if it raises an error ?</li>
</ul>
<p>Thanks for your hints... here is the sample code:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    file_desc = open(file_name, 'r')
    # read the file...
    while True:
        current_line = file_desc.readline()
        if not current_line: break
        print current_line.rstrip("\n")
    # lets simulate some parsing error...
    raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
    file_desc.close()
# finally:
#     if 'file_desc' in dir() and not file_desc.closed:
#         file_desc.close()

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 3 | 2009-07-19T14:12:25Z | 1,150,000 | <p>As of Python 2.5, there's a <code>with</code> command that simplifies some of what you're fighting with. Read more about it <a href="http://effbot.org/zone/python-with-statement.htm" rel="nofollow">here</a>. Here's a transformed version of your code:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    # the try wraps the open as well, so an IOError from open() is caught too
    with open(file_name, 'r') as file_desc:
        # read the file...
        while True:
            current_line = file_desc.readline()
            if not current_line: break
            print current_line.rstrip("\n")
        # lets simulate some parsing error...
        raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 1 | 2009-07-19T14:21:45Z | [
"python",
"exception",
"file-io"
] |
cleaning up when using exceptions and files in python | 1,149,983 | <p>I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand its spirit to get the most out of it (and so far it's hard)...</p>
<p>My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...</p>
<p>Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to</p>
<ul>
<li>the opening itself (in which case
there is no need to close the not
opened file)</li>
<li>the parsing (in which
case the file needs to be closed)</li>
</ul>
<p>The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open itself (see the comments in the code below)...</p>
<p>This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols, parsing the results of dir() is painful, and such a statement is hardly readable...</p>
<p>Most other languages allow for variable definitions which could save the day here... but in python, everything seems to be implicit...</p>
<p>Normally, one would just declare a file_desc variable, then use a separate try/catch block for every task... one for opening, one for parsing and the last one for closing... no need to nest them... but here I don't know a way to declare the variable, so I'm stuck right at the beginning of the problem!</p>
<p>so what is the spirit of python here ???</p>
<ul>
<li>split the opening/parsing in two different methods ? How ?</li>
<li>use some kind of nested try/except clauses ??? How ?</li>
<li>maybe there is a way to declare the file_desc variable and then there would be no need for the extra checking... is it at all possible ??? desirable ???</li>
<li>what about the close() statement ??? what if it raises an error ?</li>
</ul>
<p>Thanks for your hints... here is the sample code:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    file_desc = open(file_name, 'r')
    # read the file...
    while True:
        current_line = file_desc.readline()
        if not current_line: break
        print current_line.rstrip("\n")
    # lets simulate some parsing error...
    raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
    file_desc.close()
# finally:
#     if 'file_desc' in dir() and not file_desc.closed:
#         file_desc.close()

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 3 | 2009-07-19T14:12:25Z | 1,150,020 | <p>OK, I'm an ass.
Edit: and BTW, many thanks to those who already answered while I was posting this.</p>
<p>The code below does the trick.
You must create a nested block with the 'with ... as' statement to make sure the file is cleaned up:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    #
    # THIS IS PYTHON'S SPIRIT... no else/finally
    #
    with open(file_name, 'r') as file_desc:
        # read the file...
        while True:
            current_line = file_desc.readline()
            if not current_line: break
            print current_line.rstrip("\n")
        raise FormatError("oops... the file format is wrong...")
        print "will never get here"
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 0 | 2009-07-19T14:28:07Z | [
"python",
"exception",
"file-io"
] |
cleaning up when using exceptions and files in python | 1,149,983 | <p>I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand its spirit to get the most out of it (and so far it's hard)...</p>
<p>My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...</p>
<p>Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to</p>
<ul>
<li>the opening itself (in which case
there is no need to close the not
opened file)</li>
<li>the parsing (in which
case the file needs to be closed)</li>
</ul>
<p>The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open itself (see the comments in the code below)...</p>
<p>This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols, parsing the results of dir() is painful, and such a statement is hardly readable...</p>
<p>Most other languages allow for variable definitions which could save the day here... but in python, everything seems to be implicit...</p>
<p>Normally, one would just declare a file_desc variable, then use a separate try/catch block for every task... one for opening, one for parsing and the last one for closing... no need to nest them... but here I don't know a way to declare the variable, so I'm stuck right at the beginning of the problem!</p>
<p>so what is the spirit of python here ???</p>
<ul>
<li>split the opening/parsing in two different methods ? How ?</li>
<li>use some kind of nested try/except clauses ??? How ?</li>
<li>maybe there is a way to declare the file_desc variable and then there would be no need for the extra checking... is it at all possible ??? desirable ???</li>
<li>what about the close() statement ??? what if it raises an error ?</li>
</ul>
<p>Thanks for your hints... here is the sample code:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    file_desc = open(file_name, 'r')
    # read the file...
    while True:
        current_line = file_desc.readline()
        if not current_line: break
        print current_line.rstrip("\n")
    # lets simulate some parsing error...
    raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
    file_desc.close()
# finally:
#     if 'file_desc' in dir() and not file_desc.closed:
#         file_desc.close()

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 3 | 2009-07-19T14:12:25Z | 1,150,123 | <p>To my knowledge, close() will practically never raise an error for a file opened for reading; for writable files it can raise IOError if flushing buffered data to disk fails.</p>
<p>In fact, the file handle will be closed when garbage collected, so you don't have to do it explicitly in Python. Although it's still good programming to do so, obviously.</p>
| 0 | 2009-07-19T15:30:13Z | [
"python",
"exception",
"file-io"
] |
cleaning up when using exceptions and files in python | 1,149,983 | <p>I've been learning Python for a couple of days now and am struggling with its 'spirit'.
I'm coming from the C/C++/Java/Perl school, and I understand that Python is not C (at all); that's why I'm trying to understand its spirit to get the most out of it (and so far it's hard)...</p>
<p>My question is especially focused on exception handling and cleaning:
The code at the end of this post is meant to simulate a fairly common case of file opening/parsing where you need to close the file in case of an error...</p>
<p>Most samples I have seen use the 'else' clause of a try statement to close the file... which made sense to me until I realized that the error might be due to</p>
<ul>
<li>the opening itself (in which case
there is no need to close the not
opened file)</li>
<li>the parsing (in which
case the file needs to be closed)</li>
</ul>
<p>The trap here is that if you use the 'else' clause of a try block, then the file never gets closed if the error happens during parsing!
On the other hand, using the 'finally' clause results in an extra check, because the file_desc variable may not exist if the error happened during the open itself (see the comments in the code below)...</p>
<p>This extra check is inefficient and ugly, because any reasonable program may contain hundreds of symbols, parsing the results of dir() is painful, and such a statement is hardly readable...</p>
<p>Most other languages allow for variable definitions which could save the day here... but in python, everything seems to be implicit...</p>
<p>Normally, one would just declare a file_desc variable, then use a separate try/catch block for every task... one for opening, one for parsing and the last one for closing... no need to nest them... but here I don't know a way to declare the variable, so I'm stuck right at the beginning of the problem!</p>
<p>so what is the spirit of python here ???</p>
<ul>
<li>split the opening/parsing in two different methods ? How ?</li>
<li>use some kind of nested try/except clauses ??? How ?</li>
<li>maybe there is a way to declare the file_desc variable and then there would be no need for the extra checking... is it at all possible ??? desirable ???</li>
<li>what about the close() statement ??? what if it raises an error ?</li>
</ul>
<p>Thanks for your hints... here is the sample code:</p>
<pre><code>class FormatError(Exception):
    def __init__(self, message):
        self.strerror = message
    def __str__(self):
        return repr(self.strerror)

file_name = raw_input("Input a filename please: ")
try:
    file_desc = open(file_name, 'r')
    # read the file...
    while True:
        current_line = file_desc.readline()
        if not current_line: break
        print current_line.rstrip("\n")
    # lets simulate some parsing error...
    raise FormatError("oops... the file format is wrong...")
except FormatError as format_error:
    print "The file {0} is invalid: {1}".format(file_name, format_error.strerror)
except IOError as io_error:
    print "The file {0} could not be read: {1}".format(file_name, io_error.strerror)
else:
    file_desc.close()
# finally:
#     if 'file_desc' in dir() and not file_desc.closed:
#         file_desc.close()

if 'file_desc' in dir():
    print "The file exists and closed={0}".format(file_desc.closed)
else:
    print "The file has never been defined..."
</code></pre>
| 3 | 2009-07-19T14:12:25Z | 1,150,504 | <p>Just a note: you can always declare a variable, and then it would become something like this:</p>
<pre><code>file_desc = None
try:
    file_desc = open(file_name, 'r')
except IOError, err:
    pass
finally:
    if file_desc:
        file_desc.close()
</code></pre>
<p>Of course, if you are using a newer version of Python, the construct using context manager is way better; however, I wanted to point out how you can generically deal with exceptions and variable scope in Python.</p>
| 1 | 2009-07-19T18:19:35Z | [
"python",
"exception",
"file-io"
] |
Can I create a Python extension module in D (instead of C) | 1,150,093 | <p>I hear D is link-compatible with C. I'd like to use D to create an extension module for Python. Am I overlooking some reason why it's never going to work?</p>
| 14 | 2009-07-19T15:10:18Z | 1,150,107 | <p>Wait? Something like this <a href="http://pyd.dsource.org/">http://pyd.dsource.org/</a></p>
| 14 | 2009-07-19T15:21:32Z | [
"python",
"module",
"d"
] |
Can I create a Python extension module in D (instead of C) | 1,150,093 | <p>I hear D is link-compatible with C. I'd like to use D to create an extension module for Python. Am I overlooking some reason why it's never going to work?</p>
| 14 | 2009-07-19T15:10:18Z | 1,315,336 | <p>Sounds easy and people here who say it's just up to the C API don't know how difficult it is to integrate the Boehm GC used by D within Python. PyD looks like a typical concept proof where people haven't realized the real world problems.</p>
| 2 | 2009-08-22T06:45:00Z | [
"python",
"module",
"d"
] |
Generating random sentences from custom text in Python's NLTK? | 1,150,144 | <p>I'm having trouble with the NLTK under Python, specifically the .generate() method.</p>
<blockquote>
<p>generate(self, length=100)</p>
<p>Print random text, generated using a trigram language model.</p>
<p>Parameters:</p>
<pre><code> * length (int) - The length of text to generate (default=100)
</code></pre>
</blockquote>
<p>Here is a simplified version of what I am attempting.</p>
<pre><code>import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
</code></pre>
<p>This will <em>always</em> generate</p>
<pre><code>Building ngram index...
The quick brown
None
</code></pre>
<p>As opposed to building a random phrase out of the words.</p>
<p>Here is my output when I do</p>
<pre><code>print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
</code></pre>
<p>Again starting out with the same text, but then varying it. I've also tried using the first chapter from Orwell's 1984. Again that <strong>always</strong> starts with the first 3 tokens (one of which is a space in this case) and <em>then</em> goes on to randomly generate text.</p>
<p>What am I doing wrong here?</p>
| 5 | 2009-07-19T15:41:58Z | 1,150,154 | <p>Maybe you can shuffle the tokens array randomly before generating a sentence.</p>
| -1 | 2009-07-19T15:47:09Z | [
"python",
"random",
"nltk"
] |
Generating random sentences from custom text in Python's NLTK? | 1,150,144 | <p>I'm having trouble with the NLTK under Python, specifically the .generate() method.</p>
<blockquote>
<p>generate(self, length=100)</p>
<p>Print random text, generated using a trigram language model.</p>
<p>Parameters:</p>
<pre><code> * length (int) - The length of text to generate (default=100)
</code></pre>
</blockquote>
<p>Here is a simplified version of what I am attempting.</p>
<pre><code>import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
</code></pre>
<p>This will <em>always</em> generate</p>
<pre><code>Building ngram index...
The quick brown
None
</code></pre>
<p>As opposed to building a random phrase out of the words.</p>
<p>Here is my output when I do</p>
<pre><code>print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
</code></pre>
<p>Again starting out with the same text, but then varying it. I've also tried using the first chapter from Orwell's 1984. Again that <strong>always</strong> starts with the first 3 tokens (one of which is a space in this case) and <em>then</em> goes on to randomly generate text.</p>
<p>What am I doing wrong here?</p>
| 5 | 2009-07-19T15:41:58Z | 1,150,200 | <p>Are you sure that using <code>word_tokenize</code> is the right approach? </p>
<p><a href="http://groups.google.com/group/nltk-users/browse%5Fthread/thread/dc97124e8e8d17cb" rel="nofollow">This Google groups page</a> has the example:</p>
<pre><code>>>> import nltk
>>> text = nltk.Text(nltk.corpus.brown.words()) # Get text from brown
>>> text.generate()
</code></pre>
<p>But I've never used nltk, so I can't say whether that works the way you want.</p>
| -1 | 2009-07-19T16:07:12Z | [
"python",
"random",
"nltk"
] |
Generating random sentences from custom text in Python's NLTK? | 1,150,144 | <p>I'm having trouble with the NLTK under Python, specifically the .generate() method.</p>
<blockquote>
<p>generate(self, length=100)</p>
<p>Print random text, generated using a trigram language model.</p>
<p>Parameters:</p>
<pre><code> * length (int) - The length of text to generate (default=100)
</code></pre>
</blockquote>
<p>Here is a simplified version of what I am attempting.</p>
<pre><code>import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
</code></pre>
<p>This will <em>always</em> generate</p>
<pre><code>Building ngram index...
The quick brown
None
</code></pre>
<p>As opposed to building a random phrase out of the words.</p>
<p>Here is my output when I do</p>
<pre><code>print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
</code></pre>
<p>Again starting out with the same text, but then varying it. I've also tried using the first chapter from Orwell's 1984. Again that <strong>always</strong> starts with the first 3 tokens (one of which is a space in this case) and <em>then</em> goes on to randomly generate text.</p>
<p>What am I doing wrong here?</p>
| 5 | 2009-07-19T15:41:58Z | 1,150,268 | <p>Your sample corpus is most likely too small. I don't know exactly how NLTK builds its trigram model, but it is common practice to handle the beginning and end of sentences somehow. Since there is only one sentence beginning in your corpus, this might be the reason why every generated sentence has the same beginning.</p>
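<p>A tiny illustration of this point: with a single short sentence, almost every trigram context has exactly one possible continuation, so generation from it is (nearly) deterministic:</p>

```python
# Build a trigram table from the one-sentence corpus in the question.
words = 'The quick brown fox jumps over the lazy dog'.split()
trigrams = {}
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    trigrams.setdefault((w1, w2), []).append(w3)

# In this corpus every (w1, w2) context has exactly one continuation,
# which is why the generated text can never vary much.
unique = all(len(continuations) == 1 for continuations in trigrams.values())
```
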
| 1 | 2009-07-19T16:35:15Z | [
"python",
"random",
"nltk"
] |
Generating random sentences from custom text in Python's NLTK? | 1,150,144 | <p>I'm having trouble with the NLTK under Python, specifically the .generate() method.</p>
<blockquote>
<p>generate(self, length=100)</p>
<p>Print random text, generated using a trigram language model.</p>
<p>Parameters:</p>
<pre><code> * length (int) - The length of text to generate (default=100)
</code></pre>
</blockquote>
<p>Here is a simplified version of what I am attempting.</p>
<pre><code>import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
</code></pre>
<p>This will <em>always</em> generate</p>
<pre><code>Building ngram index...
The quick brown
None
</code></pre>
<p>As opposed to building a random phrase out of the words.</p>
<p>Here is my output when I do</p>
<pre><code>print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
</code></pre>
<p>Again starting out with the same text, but then varying it. I've also tried using the first chapter from Orwell's 1984. Again that <strong>always</strong> starts with the first 3 tokens (one of which is a space in this case) and <em>then</em> goes on to randomly generate text.</p>
<p>What am I doing wrong here?</p>
| 5 | 2009-07-19T15:41:58Z | 1,155,231 | <p>To generate random text, you need to use <a href="http://mathworld.wolfram.com/MarkovChain.html">Markov chains</a>.</p>
<p>Code to do that, <a href="http://gist.github.com/131679">from here</a>:</p>
<pre><code>import random

class Markov(object):

    def __init__(self, open_file):
        self.cache = {}
        self.open_file = open_file
        self.words = self.file_to_words()
        self.word_size = len(self.words)
        self.database()

    def file_to_words(self):
        self.open_file.seek(0)
        data = self.open_file.read()
        words = data.split()
        return words

    def triples(self):
        """ Generates triples from the given data string. So if our string were
        "What a lovely day", we'd generate (What, a, lovely) and then
        (a, lovely, day).
        """
        if len(self.words) < 3:
            return
        for i in range(len(self.words) - 2):
            yield (self.words[i], self.words[i+1], self.words[i+2])

    def database(self):
        for w1, w2, w3 in self.triples():
            key = (w1, w2)
            if key in self.cache:
                self.cache[key].append(w3)
            else:
                self.cache[key] = [w3]

    def generate_markov_text(self, size=25):
        seed = random.randint(0, self.word_size-3)
        seed_word, next_word = self.words[seed], self.words[seed+1]
        w1, w2 = seed_word, next_word
        gen_words = []
        for i in xrange(size):
            gen_words.append(w1)
            w1, w2 = w2, random.choice(self.cache[(w1, w2)])
        gen_words.append(w2)
        return ' '.join(gen_words)
<p>Explaination: <a href="http://uswaretech.com/blog/2009/06/pseudo-random-text-markov-chains-python/">
Generating pseudo random text with Markov chains using Python</a></p>
| 5 | 2009-07-20T18:48:31Z | [
"python",
"random",
"nltk"
] |
Generating random sentences from custom text in Python's NLTK? | 1,150,144 | <p>I'm having trouble with the NLTK under Python, specifically the .generate() method.</p>
<blockquote>
<p>generate(self, length=100)</p>
<p>Print random text, generated using a trigram language model.</p>
<p>Parameters:</p>
<pre><code> * length (int) - The length of text to generate (default=100)
</code></pre>
</blockquote>
<p>Here is a simplified version of what I am attempting.</p>
<pre><code>import nltk
words = 'The quick brown fox jumps over the lazy dog'
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate(3)
</code></pre>
<p>This will <em>always</em> generate</p>
<pre><code>Building ngram index...
The quick brown
None
</code></pre>
<p>As opposed to building a random phrase out of the words.</p>
<p>Here is my output when I do</p>
<pre><code>print text.generate()
Building ngram index...
The quick brown fox jumps over the lazy dog fox jumps over the lazy
dog dog The quick brown fox jumps over the lazy dog dog brown fox
jumps over the lazy dog over the lazy dog The quick brown fox jumps
over the lazy dog fox jumps over the lazy dog lazy dog The quick brown
fox jumps over the lazy dog the lazy dog The quick brown fox jumps
over the lazy dog jumps over the lazy dog over the lazy dog brown fox
jumps over the lazy dog quick brown fox jumps over the lazy dog The
None
</code></pre>
<p>Again starting out with the same text, but then varying it. I've also tried using the first chapter from Orwell's 1984. Again that <strong>always</strong> starts with the first 3 tokens (one of which is a space in this case) and <em>then</em> goes on to randomly generate text.</p>
<p>What am I doing wrong here?</p>
| 5 | 2009-07-19T15:41:58Z | 1,481,492 | <p>You should be "training" the Markov model with multiple sequences, so that you accurately sample the starting state probabilities as well (called "pi" in Markov-speak). If you use a single sequence then you will always start in the same state.</p>
<p>In the case of Orwell's 1984 you would want to use sentence tokenization first (NLTK is very good at it), then word tokenization (yielding a list of lists of tokens, not just a single list of tokens) and then feed each sentence separately to the Markov model. This will allow it to properly model sequence starts, instead of being stuck on a single way to start every sequence.</p>
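<p>To make the start-state ("pi") point concrete, here is a minimal self-contained sketch in plain Python 3 (no NLTK; the hard-coded sentence list stands in for what <code>nltk.sent_tokenize</code> plus <code>word_tokenize</code> would give you). Each sentence contributes its own initial bigram, and generation samples a start from that pool instead of always beginning at the corpus's first tokens:</p>

```python
import random

def build_model(sentences):
    """Build a trigram table plus the pool of sentence-initial bigrams."""
    starts, table = [], {}
    for sent in sentences:
        words = sent.split()
        if len(words) < 3:
            continue
        starts.append((words[0], words[1]))      # record the start state
        for w1, w2, w3 in zip(words, words[1:], words[2:]):
            table.setdefault((w1, w2), []).append(w3)
    return starts, table

def generate(starts, table, size=8, seed=None):
    rng = random.Random(seed)
    w1, w2 = rng.choice(starts)                  # sample the "pi" distribution
    out = [w1, w2]
    while len(out) < size and (w1, w2) in table:
        w1, w2 = w2, rng.choice(table[(w1, w2)])
        out.append(w2)
    return ' '.join(out)

corpus = ["the quick brown fox jumps over the lazy dog",
          "a lazy dog sleeps in the warm sun",
          "the warm sun rises over the quiet town"]
starts, table = build_model(corpus)
print(generate(starts, table, seed=1))
```

<p>Because several sentences contribute start states, repeated calls do not all begin with the same tokens, which is exactly the behaviour missing from <code>Text.generate()</code> trained on a single sequence.</p>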
| 7 | 2009-09-26T15:50:57Z | [
"python",
"random",
"nltk"
] |
Source interface with Python and urllib2 | 1,150,332 | <p>How do i set the source IP/interface with Python and urllib2?</p>
| 29 | 2009-07-19T17:05:12Z | 1,150,408 | <p>This seems to work.</p>
<pre><code>import urllib2, httplib, socket

class BindableHTTPConnection(httplib.HTTPConnection):
    def connect(self):
        """Connect to the host and port specified in __init__."""
        self.sock = socket.socket()
        self.sock.bind((self.source_ip, 0))
        if isinstance(self.timeout, float):
            self.sock.settimeout(self.timeout)
        self.sock.connect((self.host, self.port))

def BindableHTTPConnectionFactory(source_ip):
    def _get(host, port=None, strict=None, timeout=0):
        bhc = BindableHTTPConnection(host, port=port, strict=strict, timeout=timeout)
        bhc.source_ip = source_ip
        return bhc
    return _get

class BindableHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(BindableHTTPConnectionFactory('127.0.0.1'), req)

opener = urllib2.build_opener(BindableHTTPHandler)
opener.open("http://google.com/").read()  # Will fail, 127.0.0.1 can't reach google.com.
</code></pre>
<p>You'll need to figure out some way to parameterize "127.0.0.1" there, though.</p>
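<p>The essential move in the subclass above is calling <code>bind()</code> before <code>connect()</code>. That step can be illustrated on its own, with no HTTP involved (port 0 asks the OS for any free local port):</p>

```python
import socket

# Bind an outgoing socket to a specific local (source) address before
# connecting; the OS will route its traffic from that interface.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('127.0.0.1', 0))

source_ip, source_port = sock.getsockname()
print(source_ip, source_port)
sock.close()
```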
| 22 | 2009-07-19T17:36:44Z | [
"python",
"urllib2"
] |
Source interface with Python and urllib2 | 1,150,332 | <p>How do i set the source IP/interface with Python and urllib2?</p>
| 29 | 2009-07-19T17:05:12Z | 1,150,423 | <p>Unfortunately the stack of standard library modules in use (urllib2, httplib, socket) is somewhat badly designed for the purpose -- at the key point in the operation, <code>HTTPConnection.connect</code> (in httplib) delegates to <code>socket.create_connection</code>, which in turn gives you no "hook" whatsoever between the creation of the socket instance <code>sock</code> and the <code>sock.connect</code> call, for you to insert the <code>sock.bind</code> just before <code>sock.connect</code> that is what you need to set the source IP (I'm evangelizing widely for NOT designing abstractions in such an airtight, excessively-encapsulated way -- I'll be speaking about that at OSCON this Thursday under the title "Zen and the Art of Abstraction Maintenance" -- but here your problem is how to deal with a stack of abstractions that WERE designed this way, sigh).</p>
<p>When you're facing such problems you only have two not-so-good solutions: either copy, paste and edit the misdesigned code into which you need to place a "hook" that the original designer didn't cater for; or, "monkey-patch" that code. Neither is GOOD, but both can work, so at least let's be thankful that we have such options (by using an open-source and dynamic language). In this case, I think I'd go for monkey-patching (which is bad, but copy and paste coding is even worse) -- a code fragment such as:</p>
<pre><code>import socket

true_socket = socket.socket
def bound_socket(*a, **k):
    sock = true_socket(*a, **k)
    sock.bind((sourceIP, 0))
    return sock
socket.socket = bound_socket
</code></pre>
<p>Depending on your exact needs (do you need all sockets to be bound to the same source IP, or...?) you could simply run this before using <code>urllib2</code> normally, or (in more complex ways of course) run it at need just for those outgoing sockets you DO need to bind in a certain way (then each time restore <code>socket.socket = true_socket</code> to get out of the way for future sockets yet to be created). The second alternative adds its own complications to orchestrate properly, so I'm waiting for you to clarify whether you do need such complications before explaining them all.</p>
<p>AKX's good answer is a variant on the "copy / paste / edit" alternative so I don't need to expand much on that -- note however that it doesn't exactly reproduce <code>socket.create_connection</code> in its <code>connect</code> method, see the source <a href="http://svn.python.org/view/python/trunk/Lib/socket.py?revision=73145&view=markup">here</a> (at the very end of the page) and decide what other functionality of the <code>create_connection</code> function you may want to embody in your copied/pasted/edited version if you decide to go that route.</p>
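<p>The second alternative mentioned above, patching only temporarily and restoring <code>socket.socket</code> afterwards, can be packaged as a context manager so the undo step cannot be forgotten. A sketch (Python 3 syntax; binding to 127.0.0.1 keeps it runnable without a network):</p>

```python
import socket
from contextlib import contextmanager

@contextmanager
def bound_sockets(source_ip):
    """Temporarily make every newly created socket bind to source_ip."""
    true_socket = socket.socket
    def bound_socket(*a, **k):
        sock = true_socket(*a, **k)
        sock.bind((source_ip, 0))
        return sock
    socket.socket = bound_socket
    try:
        yield
    finally:
        socket.socket = true_socket   # always undo the monkey-patch

with bound_sockets('127.0.0.1'):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    bound_ip = s.getsockname()[0]     # already bound to the source IP
    s.close()
```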
| 44 | 2009-07-19T17:43:41Z | [
"python",
"urllib2"
] |
Source interface with Python and urllib2 | 1,150,332 | <p>How do i set the source IP/interface with Python and urllib2?</p>
| 29 | 2009-07-19T17:05:12Z | 3,318,747 | <p>I thought I'd follow up with a slightly better version of the monkey patch. If you need to be able to set different port options on some of the sockets or are using something like SSL that subclasses socket, the following code works a bit better.</p>
<pre><code>_ip_address = None

def bind_outgoing_sockets_to_ip(ip_address):
    """This binds all python sockets to the passed in ip address"""
    global _ip_address
    _ip_address = ip_address

import socket
from socket import socket as s

class bound_socket(s):
    def connect(self, *args, **kwargs):
        if self.family == socket.AF_INET:
            if self.getsockname()[0] == "0.0.0.0" and _ip_address:
                self.bind((_ip_address, 0))
        s.connect(self, *args, **kwargs)

socket.socket = bound_socket
</code></pre>
<p>You have to only bind the socket on connect if you need to run something like a webserver in the same process that needs to bind to a different ip address.</p>
| 1 | 2010-07-23T13:54:02Z | [
"python",
"urllib2"
] |
Source interface with Python and urllib2 | 1,150,332 | <p>How do i set the source IP/interface with Python and urllib2?</p>
| 29 | 2009-07-19T17:05:12Z | 9,980,681 | <p>Reasoning that I should monkey-patch at the highest level available, here's an alternative to Alex's answer which patches <code>httplib</code> instead of <code>socket</code>, taking advantage of <code>httplib.HTTPSConnection.__init__()</code>'s <code>source_address</code> keyword argument (which is not exposed by <code>urllib2</code>, AFAICT). Tested and working on Python 2.7.2.</p>
<pre><code>import httplib

HTTPSConnection_real = httplib.HTTPSConnection

class HTTPSConnection_monkey(HTTPSConnection_real):
    def __init__(*a, **kw):
        HTTPSConnection_real.__init__(*a, source_address=(SOURCE_IP, 0), **kw)

httplib.HTTPSConnection = HTTPSConnection_monkey
</code></pre>
| 1 | 2012-04-02T17:08:15Z | [
"python",
"urllib2"
] |
Source interface with Python and urllib2 | 1,150,332 | <p>How do i set the source IP/interface with Python and urllib2?</p>
| 29 | 2009-07-19T17:05:12Z | 14,669,175 | <p>Here's a further refinement that makes use of <a href="http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection" rel="nofollow">HTTPConnection's source_address argument</a> (introduced in Python 2.7):</p>
<pre><code>import functools
import httplib
import urllib2

class BoundHTTPHandler(urllib2.HTTPHandler):

    def __init__(self, source_address=None, debuglevel=0):
        urllib2.HTTPHandler.__init__(self, debuglevel)
        self.http_class = functools.partial(httplib.HTTPConnection,
                                            source_address=source_address)

    def http_open(self, req):
        return self.do_open(self.http_class, req)
</code></pre>
<p>This gives us a custom <a href="http://docs.python.org/2/library/urllib2.html#urllib2.HTTPHandler" rel="nofollow">urllib2.HTTPHandler</a> implementation that is source_address aware. We can add it to a new <a href="http://docs.python.org/2/library/urllib2.html#urllib2.OpenerDirector" rel="nofollow">urllib2.OpenerDirector</a> and install it as the default opener (for future <a href="http://docs.python.org/2/library/urllib2.html#urllib2.urlopen" rel="nofollow">urlopen()</a> calls) with the following code:</p>
<pre><code>handler = BoundHTTPHandler(source_address=("192.168.1.10", 0))
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
</code></pre>
| 9 | 2013-02-03T03:16:40Z | [
"python",
"urllib2"
] |
Source interface with Python and urllib2 | 1,150,332 | <p>How do i set the source IP/interface with Python and urllib2?</p>
| 29 | 2009-07-19T17:05:12Z | 17,170,328 | <p>As of Python 2.7 httplib.HTTPConnection had source_address added to it, allowing you to provide an IP port pair to bind to.</p>
<p>See: <a href="http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection" rel="nofollow">http://docs.python.org/2/library/httplib.html#httplib.HTTPConnection</a> </p>
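<p>For reference, the same keyword argument survives in Python 3, where <code>httplib</code> became <code>http.client</code>. Constructing the connection object does not open a socket, so this runs without any network access (the hostname is just a placeholder):</p>

```python
import http.client

# source_address=(ip, 0) binds outgoing traffic to that local IP,
# with port 0 meaning "any free local port".
conn = http.client.HTTPConnection('example.com',
                                  source_address=('127.0.0.1', 0))
print(conn.source_address)
```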
| 1 | 2013-06-18T13:35:17Z | [
"python",
"urllib2"
] |
Attribute Cache in Django - What's the point? | 1,150,368 | <p>I was just looking over EveryBlock's source code and I noticed this code in the alerts/models.py code:</p>
<pre><code>    def _get_user(self):
        if not hasattr(self, '_user_cache'):
            from ebpub.accounts.models import User
            try:
                self._user_cache = User.objects.get(id=self.user_id)
            except User.DoesNotExist:
                self._user_cache = None
        return self._user_cache
    user = property(_get_user)
</code></pre>
<p>I've noticed this pattern around a bunch, but I don't quite understand the use. Is the whole idea to make sure that when accessing the FK on self (self = alert object), that you only grab the user object once from the db? Why wouldn't you just rely upon the db caching and django's ForeignKey() field? I noticed that the model definition only holds the user id and not a foreign key field:</p>
<pre><code>class EmailAlert(models.Model):
    user_id = models.IntegerField()
    ...
</code></pre>
<p>Any insights would be appreciated.</p>
| 2 | 2009-07-19T17:22:49Z | 1,150,421 | <p>I don't know why this is an IntegerField; it looks like it definitely should be a ForeignKey(User) field--you lose things like select_related() here and other things because of that, too.</p>
<p>As to the caching, many databases don't cache results--they (or rather, the OS) will cache the data on disk needed to get the result, so looking it up a second time should be faster than the first, but it'll still take work.</p>
<p>It also still takes a database round-trip to look it up. In my experience, with Django, doing an item lookup can take around 0.5 to 1ms, for an SQL command to a local Postgresql server plus sometimes nontrivial overhead of QuerySet. 1ms is a lot if you don't need it--do that a few times and you can turn a 30ms request into a 35ms request.</p>
<p>If your SQL server isn't local and you actually have network round-trips to deal with, the numbers get bigger.</p>
<p>Finally, people generally expect accessing a property to be fast; when they're complex enough to cause SQL queries, caching the result is generally a good idea.</p>
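<p>The hand-rolled <code>_user_cache</code> pattern generalizes to a reusable descriptor; Django later shipped essentially this as <code>django.utils.functional.cached_property</code>. A minimal self-contained version (Python 3 syntax; the <code>lookups</code> list just makes the single-lookup behaviour observable in place of a real DB query):</p>

```python
class cached_property(object):
    """Compute the wrapped method once per instance, then cache the result."""
    def __init__(self, func):
        self.func = func
    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        # Storing under the same name shadows this non-data descriptor,
        # so future accesses read the instance dict directly.
        value = instance.__dict__[self.func.__name__] = self.func(instance)
        return value

lookups = []

class EmailAlert(object):
    user_id = 42
    @cached_property
    def user(self):
        lookups.append(self.user_id)   # stands in for the DB round-trip
        return 'User #%d' % self.user_id

alert = EmailAlert()
print(alert.user, alert.user)          # second access never touches the "db"
```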
| 2 | 2009-07-19T17:42:51Z | [
"python",
"django",
"django-models"
] |
Attribute Cache in Django - What's the point? | 1,150,368 | <p>I was just looking over EveryBlock's source code and I noticed this code in the alerts/models.py code:</p>
<pre><code>    def _get_user(self):
        if not hasattr(self, '_user_cache'):
            from ebpub.accounts.models import User
            try:
                self._user_cache = User.objects.get(id=self.user_id)
            except User.DoesNotExist:
                self._user_cache = None
        return self._user_cache
    user = property(_get_user)
</code></pre>
<p>I've noticed this pattern around a bunch, but I don't quite understand the use. Is the whole idea to make sure that when accessing the FK on self (self = alert object), that you only grab the user object once from the db? Why wouldn't you just rely upon the db caching and django's ForeignKey() field? I noticed that the model definition only holds the user id and not a foreign key field:</p>
<pre><code>class EmailAlert(models.Model):
    user_id = models.IntegerField()
    ...
</code></pre>
<p>Any insights would be appreciated.</p>
| 2 | 2009-07-19T17:22:49Z | 1,150,444 | <p>Although databases do cache things internally, there's still an overhead in going back to the db every time you want to check the value of a related field - setting up the query within Django, the network latency in connecting to the db and returning the data over the network, instantiating the object in Django, etc. If you know the data hasn't changed in the meantime - and within the context of a single web request you probably don't care if it has - it makes much more sense to get the data once and cache it, rather than querying it every single time.</p>
<p>One of the applications I work on has an extremely complex home page containing a huge amount of data. Previously it was carrying out over 400 db queries to render. I've refactored it now so it 'only' uses 80, using very similar techniques to the one you've posted, and you'd better believe that it gives a massive performance boost.</p>
| 2 | 2009-07-19T17:52:03Z | [
"python",
"django",
"django-models"
] |
Compile the Python interpreter statically? | 1,150,373 | <p>I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. <code>libc.a</code> not <code>libc.so</code>).</p>
<p>I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using <code>Freeze.py</code>, but is there an alternative so that it can be done in one step?</p>
| 34 | 2009-07-19T17:23:56Z | 1,150,451 | <p>Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit <code>Modules/Setup</code> to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).</p>
<p>You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.</p>
| 6 | 2009-07-19T17:54:36Z | [
"c++",
"python",
"c",
"compilation"
] |
Compile the Python interpreter statically? | 1,150,373 | <p>I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. <code>libc.a</code> not <code>libc.so</code>).</p>
<p>I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using <code>Freeze.py</code>, but is there an alternative so that it can be done in one step?</p>
| 34 | 2009-07-19T17:23:56Z | 1,151,005 | <p>You can try with <a href="http://statifier.sourceforge.net/" rel="nofollow">ELF STATIFIER</a>. I've used it before and it works fairly well. I just had problems with it in a couple of cases and then I had to use another similar program called <a href="http://www.magicermine.com/features.html" rel="nofollow">Ermine</a>. Unfortunately this one is a commercial program. </p>
| 3 | 2009-07-19T22:00:25Z | [
"c++",
"python",
"c",
"compilation"
] |
Compile the Python interpreter statically? | 1,150,373 | <p>I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. <code>libc.a</code> not <code>libc.so</code>).</p>
<p>I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using <code>Freeze.py</code>, but is there an alternative so that it can be done in one step?</p>
| 34 | 2009-07-19T17:23:56Z | 1,155,092 | <p>I found this (mainly concerning static compilation of Python modules):</p>
<ul>
<li><a href="http://bytes.com/groups/python/23235-build-static-python-executable-linux">http://bytes.com/groups/python/23235-build-static-python-executable-linux</a></li>
</ul>
<p>Which describes a file used for configuration located here: </p>
<pre><code><Python_Source>/Modules/Setup
</code></pre>
<p>If this file isn't present, it can be created by copying:</p>
<pre><code><Python_Source>/Modules/Setup.dist
</code></pre>
<p>The <code>Setup</code> file has tons of documentation in it and the <code>README</code> included with the source offers lots of good compilation information as well.</p>
<p>I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.</p>
<h2>Update</h2>
<p>To get a pure-static python executable, you must also configure as follows:</p>
<pre><code>./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
</code></pre>
<p>Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured <code>Modules/Setup</code> correctly and need to:</p>
<p>a) add a single line (near the top) like this:</p>
<pre><code>*static*
</code></pre>
<p>(that's asterisk/star the word "static" and asterisk with no spaces)</p>
<p>b) uncomment all modules that you want to be available statically (such as math, array, etc...)</p>
<p>You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.</p>
<p>It may also be helpful to run make as follows:</p>
<pre><code>make 2>&1 | grep 'renaming'
</code></pre>
<p>This will show all modules that are failing to compile due to being statically linked.</p>
| 26 | 2009-07-20T18:21:42Z | [
"c++",
"python",
"c",
"compilation"
] |
Compile the Python interpreter statically? | 1,150,373 | <p>I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. <code>libc.a</code> not <code>libc.so</code>).</p>
<p>I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using <code>Freeze.py</code>, but is there an alternative so that it can be done in one step?</p>
| 34 | 2009-07-19T17:23:56Z | 27,769,640 | <p><a href="https://github.com/davidsansome/python-cmake-buildsystem" rel="nofollow">CPython CMake Buildsystem</a> offers an alternative way to build Python, using <a href="http://www.cmake.org" rel="nofollow">CMake</a>.</p>
<p>It can build python lib statically, and include in that lib all the modules you want. Just set CMake's options</p>
<pre><code>BUILD_SHARED OFF
BUILD_STATIC ON
</code></pre>
<p>and set the <code>BUILTIN_<extension></code> you want to <code>ON</code>.</p>
| 4 | 2015-01-04T19:49:26Z | [
"c++",
"python",
"c",
"compilation"
] |
How to escape a hash (#) char in python? | 1,150,581 | <p>I'm using pyodbc to query an AS400 (unfortunately), and some column names have hashes in them! Here is a small example:</p>
<pre><code>self.cursor.execute('select LPPLNM, LPPDR# from BSYDTAD.LADWJLFU')
for row in self.cursor:
    p = Patient()
    p.last = row.LPPLNM
    p.pcp = row.LPPDR#
</code></pre>
<p>I get errors like this obviously:</p>
<pre><code> AttributeError: 'pyodbc.Row' object has no attribute 'LPPDR'
</code></pre>
<p>Is there some way to escape this? Seems doubtful that a hash is even allowed in a var name. I just picked up python today, so forgive me if the answer is common knowledge.</p>
<p>Thanks, Pete</p>
| 3 | 2009-07-19T18:52:16Z | 1,150,586 | <p>Use the <code>getattr</code> function</p>
<pre><code>p.pcp = getattr(row, "LPPDR#")
</code></pre>
<p>This is, in general, the way that you deal with attributes which aren't legal Python identifiers. For example, you can say</p>
<pre><code>setattr(p, "&)(@#$@!!~%&", "Hello World!")
print getattr(p, "&)(@#$@!!~%&") # prints "Hello World!"
</code></pre>
<p>Also, as JG suggests, you can give your columns an alias, such as by saying</p>
<pre><code>SELECT LPPDR# AS LPPDR ...
</code></pre>
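<p>A self-contained demonstration of why the string-based access works where attribute syntax cannot (the class and values here are illustrative, not the actual AS400 schema):</p>

```python
class Row(object):
    pass

row = Row()
setattr(row, 'LPPDR#', 'Dr. Example')   # '#' is illegal in a plain identifier

# row.LPPDR# would parse as row.LPPDR followed by a comment;
# getattr takes the name as a string, so the hash is harmless.
print(getattr(row, 'LPPDR#'))
```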
| 7 | 2009-07-19T18:54:25Z | [
"python",
"odbc",
"escaping",
"pyodbc"
] |
How to escape a hash (#) char in python? | 1,150,581 | <p>I'm using pyodbc to query an AS400 (unfortunately), and some column names have hashes in them! Here is a small example:</p>
<pre><code>self.cursor.execute('select LPPLNM, LPPDR# from BSYDTAD.LADWJLFU')
for row in self.cursor:
    p = Patient()
    p.last = row.LPPLNM
    p.pcp = row.LPPDR#
</code></pre>
<p>I get errors like this obviously:</p>
<pre><code> AttributeError: 'pyodbc.Row' object has no attribute 'LPPDR'
</code></pre>
<p>Is there some way to escape this? Seems doubtful that a hash is even allowed in a var name. I just picked up python today, so forgive me if the answer is common knowledge.</p>
<p>Thanks, Pete</p>
| 3 | 2009-07-19T18:52:16Z | 1,150,588 | <p>You can try to give the column an alias, i.e.:</p>
<pre><code> self.cursor.execute('select LPPLNM, LPPDR# as LPPDR from BSYDTAD.LADWJLFU')
</code></pre>
| 5 | 2009-07-19T18:54:48Z | [
"python",
"odbc",
"escaping",
"pyodbc"
] |
How to escape a hash (#) char in python? | 1,150,581 | <p>I'm using pyodbc to query an AS400 (unfortunately), and some column names have hashes in them! Here is a small example:</p>
<pre><code>self.cursor.execute('select LPPLNM, LPPDR# from BSYDTAD.LADWJLFU')
for row in self.cursor:
    p = Patient()
    p.last = row.LPPLNM
    p.pcp = row.LPPDR#
</code></pre>
<p>I get errors like this obviously:</p>
<pre><code> AttributeError: 'pyodbc.Row' object has no attribute 'LPPDR'
</code></pre>
<p>Is there some way to escape this? Seems doubtful that a hash is even allowed in a var name. I just picked up python today, so forgive me if the answer is common knowledge.</p>
<p>Thanks, Pete</p>
| 3 | 2009-07-19T18:52:16Z | 1,150,604 | <p>Iterating <code>self.cursor</code> yields rows that support tuple-style indexing, so this would also work:</p>
<pre><code>for row in self.cursor:
    p = Patient()
    p.last = row[0]
    p.pcp = row[1]
</code></pre>
<p>But I prefer the other answers :-)</p>
| 2 | 2009-07-19T18:59:32Z | [
"python",
"odbc",
"escaping",
"pyodbc"
] |
How to escape a hash (#) char in python? | 1,150,581 | <p>I'm using pyodbc to query an AS400 (unfortunately), and some column names have hashes in them! Here is a small example:</p>
<pre><code>self.cursor.execute('select LPPLNM, LPPDR# from BSYDTAD.LADWJLFU')
for row in self.cursor:
    p = Patient()
    p.last = row.LPPLNM
    p.pcp = row.LPPDR#
</code></pre>
<p>I get errors like this obviously:</p>
<pre><code> AttributeError: 'pyodbc.Row' object has no attribute 'LPPDR'
</code></pre>
<p>Is there some way to escape this? Seems doubtful that a hash is even allowed in a var name. I just picked up python today, so forgive me if the answer is common knowledge.</p>
<p>Thanks, Pete</p>
| 3 | 2009-07-19T18:52:16Z | 1,151,211 | <p>The question has been answered, but this is just another alternative (based on Adam Bernier's answer + tuple unpacking) which I think is the cleanest:</p>
<pre><code>for row in self.cursor:
    p = Patient()
    p.last, p.pcp = row
</code></pre>
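<p>The unpacking only requires the row to be a two-element sequence, which is easy to check in isolation (Python 3 syntax; the <code>Patient</code> class and values are stand-ins):</p>

```python
class Patient(object):
    pass

row = ('Smith', 'Dr. Jones')   # what a two-column fetch might yield

p = Patient()
p.last, p.pcp = row            # one assignment instead of two indexed ones
print(p.last, p.pcp)
```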
| 1 | 2009-07-19T23:47:49Z | [
"python",
"odbc",
"escaping",
"pyodbc"
] |
Python 2.5 socket._fileobject is what in Python 3.1? | 1,150,653 | <p>I'm porting some code that runs on Python 2.5 to Python 3.1. A couple of classes subclass the socket._fileobject:</p>
<pre><code>class X(socket._fileobject):
    ....
</code></pre>
<p>Is there an equivalent to socket._fileobject in Python 3.1? A quick scan of the source code doesn't turn up anything useful. Thanks!</p>
| 1 | 2009-07-19T19:23:53Z | 1,150,774 | <p>Python 3 uses SocketIO instead of _fileobject in the makefile() method, so that's probably the way to go. </p>
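<p>A quick way to see the Python 3 layering, using a <code>socketpair</code> so no real network is involved: <code>makefile()</code> returns an <code>io</code> buffer whose <code>raw</code> attribute is the <code>socket.SocketIO</code> instance.</p>

```python
import socket

# socketpair gives two connected sockets without any real networking.
a, b = socket.socketpair()

writer = a.makefile('wb')   # buffered writer over socket.SocketIO
reader = b.makefile('rb')   # buffered reader over socket.SocketIO
raw_type = type(reader.raw).__name__

writer.write(b'hello\n')
writer.flush()
line = reader.readline()
print(raw_type, line)

for obj in (writer, reader, a, b):
    obj.close()
```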
| 2 | 2009-07-19T20:19:14Z | [
"python",
"python-3.x"
] |
A question regarding string instance uniqueness in python | 1,150,765 | <p>I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:</p>
<pre><code>A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
</code></pre>
<p>prints:</p>
<pre>
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
</pre>
<p>I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "<code>+</code>" in C for example implies a function call to <code>__add__</code>. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome. </p>
<p>So, what is the special treatment that A-D undergo, making them come out as the same instance?</p>
| 3 | 2009-07-19T20:15:45Z | 1,150,791 | <p>I believe short strings that can be evaluated at compile time, will be interned automatically. In the last examples, the result can't be evaluated at compile time because <code>str</code> or <code>join</code> might be redefined.</p>
| 1 | 2009-07-19T20:24:28Z | [
"python",
"string",
"instance",
"uniqueidentifier"
] |
A question regarding string instance uniqueness in python | 1,150,765 | <p>I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:</p>
<pre><code>A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
</code></pre>
<p>prints:</p>
<pre>
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
</pre>
<p>I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "<code>+</code>" in C for example implies a function call to <code>__add__</code>. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome. </p>
<p>So, what is the special treatment that A-D undergo, making them come out as the same instance?</p>
| 3 | 2009-07-19T20:15:45Z | 1,150,860 | <p>Python is allowed to inline string constants; A,B,C,D are actually the same literals (if Python sees a constant expression, it treats it as a constant).</p>
<p><code>str</code> is actually a class, so <code>str(whatever)</code> is calling this class' constructor, which should yield a fresh object. This explains E,F,G (note that each of these has separate identity).</p>
<p>As for H, I am not sure, but I'd go for explanation that this expression is too complicated for Python to figure out it's actually a constant, so it computes a new string.</p>
| 4 | 2009-07-19T20:47:48Z | [
"python",
"string",
"instance",
"uniqueidentifier"
] |
A question regarding string instance uniqueness in python | 1,150,765 | <p>I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:</p>
<pre><code>A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
</code></pre>
<p>prints:</p>
<pre>
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
</pre>
<p>I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "<code>+</code>" in C for example implies a function call to <code>__add__</code>. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome. </p>
<p>So, what is the special treatment that A-D undergo, making them come out as the same instance?</p>
| 3 | 2009-07-19T20:15:45Z | 1,150,943 | <p>In terms of language specification, any compliant Python compiler and runtime is fully allowed, for any instance of an immutable type, to make a new instance OR find an existing instance of the same type that's equal to the required value and use a new reference to that same instance. This means it's always incorrect to use <code>is</code> or by-id comparison among immutables, and any minor release may tweak or change strategy in this matter to enhance optimization.</p>
<p>In terms of implementations, the tradeoff are pretty clear: trying to reuse an existing instance may mean time spent (perhaps wasted) trying to find such an instance, but if the attempt succeeds then some memory is saved (as well as the time to allocate and later free the memory bits needed to hold a new instance).</p>
<p>How to solve those implementation tradeoffs is not entirely obvious -- if you can identify heuristics that indicate that finding a suitable existing instance is likely and the search (even if it fails) will be fast, then you may want to attempt the search-and-reuse when the heuristics suggest it, but skip it otherwise.</p>
<p>In your observations you seem to have found a particular dot-release implementation that performs a modicum of peephole optimization when that's entirely safe, fast, and simple, so the assignments A to D all boil down to exactly the same as A (but E to F don't, as they involve named functions or methods that the optimizer's authors may reasonably have considered not 100% safe to assume semantics for -- and low-ROI if that was done -- so they're not peephole-optimized).</p>
<p>Thus, A to D reusing the same instance boils down to A and B doing so (as C and D get peephole-optimized to exactly the same construct).</p>
<p>That reuse, in turn, clearly suggests compiler tactics/optimizer heuristics whereby identical literal constants of an immutable type in the same function's local namespace are collapsed to references to just one instance in the function's <code>.func_code.co_consts</code> (to use current CPython's terminology for attributes of functions and code objects) -- reasonable tactics and heuristics, as reuse of the same immutable constant literal within one function is somewhat frequent, AND the price is only paid once (at compile time) while the advantage is accrued many times (every time the function runs, maybe within loops etc etc).</p>
<p>(It so happens that these specific tactics and heuristics, given their clearly-positive tradeoffs, have been pervasive in all recent versions of CPython, and, I believe, IronPython, Jython, and PyPy as well;-).</p>
<p>This is a somewhat worthy and interesting area of study if you're planning to write compilers, runtime environments, peephole optimizers, etc etc, for Python itself or similar languages. I guess that deep study of the internals (ideally of many different correct implementations, of course, so as not to fixate on the quirks of a specific one -- good thing Python currently enjoys at least 4 separate production-worthy implementations, not to mention several versions of each!) can also help, indirectly, make one a better Python programmer -- but it's particularly important to focus on what's <em>guaranteed</em> by the language itself, which is somewhat less than what you'll find in common among separate implementations, because the parts that "just happen" to be in common right now (without being <em>required</em> to be so by the language specs) may perfectly well change under you at the next point release of one or another implementation and, if your production code was mistakenly relying on such details, that might cause nasty surprises;-). Plus -- it's hardly ever necessary, or even particularly helpful, to rely on such variable implementation details rather than on language-mandated behavior (unless you're coding something like an optimizer, debugger, profiler, or the like, of course;-).</p>
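<p>The <code>co_consts</code> collapsing described here is easy to observe directly. A minimal sketch against a current CPython, where <code>f.__code__</code> is the modern spelling of <code>f.func_code</code>; the exact contents of the constants tuple are implementation-specific:</p>

```python
def f():
    a = "10000"
    b = "10000"
    return a is b

# CPython's compiler collapses duplicate literal constants within one
# function into a single co_consts entry, so both names end up bound
# to the very same string object.
print(f.__code__.co_consts)
print(f())  # True on CPython
```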
| 10 | 2009-07-19T21:24:36Z | [
"python",
"string",
"instance",
"uniqueidentifier"
] |
A question regarding string instance uniqueness in python | 1,150,765 | <p>I was trying to figure out which integers python only instantiates once (-6 to 256 it seems), and in the process stumbled on some string behaviour I can't see the pattern in. Sometimes, equal strings created in different ways share the same id, sometimes not. This code:</p>
<pre><code>A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = str(100) + "00"
H = "0".join(("10","00"))
for obj in (A,B,C,D,E,F,G,H):
print obj, id(obj), obj is A
</code></pre>
<p>prints:</p>
<pre>
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959776 True
10000 4959456 False
10000 4959488 False
10000 4959520 False
10000 4959680 False
</pre>
<p>I don't even see the pattern - save for the fact that the first four don't have an explicit function call - but surely that can't be it, since the "<code>+</code>" in C for example implies a function call to <strong>add</strong>. I especially don't understand why C and G are different, seeing as that implies that the ids of the components of the addition are more important than the outcome. </p>
<p>So, what is the special treatment that A-D undergo, making them come out as the same instance?</p>
| 3 | 2009-07-19T20:15:45Z | 1,151,200 | <p>In answer to S.Lott's suggestion of examining the byte code:</p>
<pre><code>import dis
def moo():
A = "10000"
B = "10000"
C = "100" + "00"
D = "%i"%10000
E = str(10000)
F = str(10000)
G = "1000"+str(0)
H = "0".join(("10","00"))
I = str("10000")
for obj in (A,B,C,D,E,F,G,H, I):
print obj, id(obj), obj is A
moo()
print dis.dis(moo)
</code></pre>
<p>yields:</p>
<pre><code>10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 4968128 True
10000 2840928 False
10000 2840896 False
10000 2840864 False
10000 2840832 False
10000 4968128 True
4 0 LOAD_CONST 1 ('10000')
3 STORE_FAST 0 (A)
5 6 LOAD_CONST 1 ('10000')
9 STORE_FAST 1 (B)
6 12 LOAD_CONST 10 ('10000')
15 STORE_FAST 2 (C)
7 18 LOAD_CONST 11 ('10000')
21 STORE_FAST 3 (D)
8 24 LOAD_GLOBAL 0 (str)
27 LOAD_CONST 5 (10000)
30 CALL_FUNCTION 1
33 STORE_FAST 4 (E)
9 36 LOAD_GLOBAL 0 (str)
39 LOAD_CONST 5 (10000)
42 CALL_FUNCTION 1
45 STORE_FAST 5 (F)
10 48 LOAD_CONST 6 ('1000')
51 LOAD_GLOBAL 0 (str)
54 LOAD_CONST 7 (0)
57 CALL_FUNCTION 1
60 BINARY_ADD
61 STORE_FAST 6 (G)
11 64 LOAD_CONST 8 ('0')
67 LOAD_ATTR 1 (join)
70 LOAD_CONST 12 (('10', '00'))
73 CALL_FUNCTION 1
76 STORE_FAST 7 (H)
12 79 LOAD_GLOBAL 0 (str)
82 LOAD_CONST 1 ('10000')
85 CALL_FUNCTION 1
88 STORE_FAST 8 (I)
14 91 SETUP_LOOP 66 (to 160)
94 LOAD_FAST 0 (A)
97 LOAD_FAST 1 (B)
100 LOAD_FAST 2 (C)
103 LOAD_FAST 3 (D)
106 LOAD_FAST 4 (E)
109 LOAD_FAST 5 (F)
112 LOAD_FAST 6 (G)
115 LOAD_FAST 7 (H)
118 LOAD_FAST 8 (I)
121 BUILD_TUPLE 9
124 GET_ITER
>> 125 FOR_ITER 31 (to 159)
128 STORE_FAST 9 (obj)
15 131 LOAD_FAST 9 (obj)
134 PRINT_ITEM
135 LOAD_GLOBAL 2 (id)
138 LOAD_FAST 9 (obj)
141 CALL_FUNCTION 1
144 PRINT_ITEM
145 LOAD_FAST 9 (obj)
148 LOAD_FAST 0 (A)
151 COMPARE_OP 8 (is)
154 PRINT_ITEM
155 PRINT_NEWLINE
156 JUMP_ABSOLUTE 125
>> 159 POP_BLOCK
>> 160 LOAD_CONST 0 (None)
163 RETURN_VALUE
</code></pre>
<p>So it would seem that indeed the compiler understands A-D to mean the same thing, and so it saves memory by only generating the string once (as suggested by Alex, Maciej and Greg). (Added case <code>I</code> seems to just be str() realising it's trying to make a string from a string, and just passing it through.)</p>
<p>Thanks everyone, that's a lot clearer now. </p>
| 1 | 2009-07-19T23:42:25Z | [
"python",
"string",
"instance",
"uniqueidentifier"
] |
Emulate processing with python? | 1,150,897 | <p>I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options?</p>
| 4 | 2009-07-19T21:00:28Z | 1,150,922 | <p>You could get pretty close to processing with vpython:</p>
<p><a href="http://vpython.org/" rel="nofollow">http://vpython.org/</a></p>
<p>The primitives are very easy to work with, and it is adept at animation. </p>
<p>I am not sure what kind of pixel manipulation you are looking for, but there may be something for that as well.</p>
| 1 | 2009-07-19T21:12:30Z | [
"python",
"pygame",
"processing"
] |
Emulate processing with python? | 1,150,897 | <p>I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options?</p>
| 4 | 2009-07-19T21:00:28Z | 1,150,923 | <p>"Similar to processing except in python" screams "NodeBox" to me. NodeBox is OSX-only, and I don't know if it allows pixel-level manipulation, but much of its command set was derived directly from processing. You can find it at <a href="http://nodebox.net/" rel="nofollow">the NodeBox site</a>.</p>
| 2 | 2009-07-19T21:12:39Z | [
"python",
"pygame",
"processing"
] |
Emulate processing with python? | 1,150,897 | <p>I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options?</p>
| 4 | 2009-07-19T21:00:28Z | 1,150,930 | <p>There's a fairly recent C++ library, <a href="http://www.sfml-dev.org/" rel="nofollow">SFML</a>, which is a good alternative to SDL.
Thus its Python bindings should be a good alternative to Pygame.</p>
| 0 | 2009-07-19T21:17:18Z | [
"python",
"pygame",
"processing"
] |
Emulate processing with python? | 1,150,897 | <p>I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options?</p>
| 4 | 2009-07-19T21:00:28Z | 1,151,320 | <p>I prefer pyglet to pygame (but I'm not sure exactly what your needs will be):</p>
<ul>
<li><a href="http://www.pyglet.org/" rel="nofollow">http://www.pyglet.org/</a></li>
</ul>
<p>If you need a 3d engine:</p>
<ul>
<li><a href="http://www.panda3d.org/" rel="nofollow">http://www.panda3d.org/</a></li>
<li><a href="http://www.pysoy.org/" rel="nofollow">http://www.pysoy.org/</a></li>
</ul>
<p>Someone's already mentioned Shoebot, which is probably the closest in spirit to Processing:</p>
<ul>
<li><a href="http://tinkerhouse.net/shoebot/" rel="nofollow">http://tinkerhouse.net/shoebot/</a></li>
</ul>
| 1 | 2009-07-20T00:49:33Z | [
"python",
"pygame",
"processing"
] |
Emulate processing with python? | 1,150,897 | <p>I'm looking for a basic programmatic animation framework similar to processing except in python. That is, something that allows pixel manipulation, has basic drawing/color primitives, and is geared towards animation. Is pygame pretty much the best bet or are there other options?</p>
| 4 | 2009-07-19T21:00:28Z | 1,605,005 | <p>Well, this is as close as it gets: <a href="http://code.google.com/p/pyprocessing/" rel="nofollow">http://code.google.com/p/pyprocessing/</a></p>
| 2 | 2009-10-22T04:19:54Z | [
"python",
"pygame",
"processing"
] |
map raw sql to django orm | 1,150,898 | <p>Is there a way to simplify this working code?
This code gets, for an object, all the different vote types (there are about 20 possible) and counts each type.
I prefer not to write raw SQL but to use the ORM. It is a little trickier because I use a generic foreign key in the model.</p>
<pre><code>def get_object_votes(self, obj):
"""
Get a dictionary mapping vote to votecount
"""
ctype = ContentType.objects.get_for_model(obj)
cursor = connection.cursor()
cursor.execute("""
SELECT v.vote , COUNT(*)
FROM votes v
WHERE %d = v.object_id AND %d = v.content_type_id
GROUP BY 1
ORDER BY 1 """ % ( obj.id, ctype.id )
)
votes = {}
for row in cursor.fetchall():
votes[row[0]] = row[1]
return votes
</code></pre>
<p>The models im using</p>
<pre><code>class Vote(models.Model):
user = models.ForeignKey(User)
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
payload = generic.GenericForeignKey('content_type', 'object_id')
vote = models.IntegerField(choices = possible_votes.items() )
class Issue(models.Model):
title = models.CharField( blank=True, max_length=200)
</code></pre>
| 1 | 2009-07-19T21:01:23Z | 1,150,915 | <p>Yes, definitely use the ORM. What you should really be doing is this in your model:</p>
<pre><code>class Obj(models.Model):
#whatever the object has
class Vote(models.Model):
obj = models.ForeignKey(Obj) #this ties a vote to its object
</code></pre>
<p>Then to get all of the votes from an object, have these Django calls be in one of your view functions:</p>
<pre><code>obj = Obj.objects.get(id=#the id)
votes = obj.vote_set.all()
</code></pre>
<p>From there it's straightforward to count them: <code>len(votes)</code> works, or more efficiently <code>obj.vote_set.count()</code>, which does the counting in the database.</p>
<p>I recommend reading about many-to-one relationships from the documentation; it's quite handy.</p>
<p><a href="http://www.djangoproject.com/documentation/models/many_to_one/" rel="nofollow">http://www.djangoproject.com/documentation/models/many_to_one/</a></p>
| 0 | 2009-07-19T21:09:31Z | [
"python",
"sql",
"django",
"orm",
"django-models"
] |
map raw sql to django orm | 1,150,898 | <p>Is there a way to simplify this working code?
This code gets, for an object, all the different vote types (there are about 20 possible) and counts each type.
I prefer not to write raw SQL but to use the ORM. It is a little trickier because I use a generic foreign key in the model.</p>
<pre><code>def get_object_votes(self, obj):
"""
Get a dictionary mapping vote to votecount
"""
ctype = ContentType.objects.get_for_model(obj)
cursor = connection.cursor()
cursor.execute("""
SELECT v.vote , COUNT(*)
FROM votes v
WHERE %d = v.object_id AND %d = v.content_type_id
GROUP BY 1
ORDER BY 1 """ % ( obj.id, ctype.id )
)
votes = {}
for row in cursor.fetchall():
votes[row[0]] = row[1]
return votes
</code></pre>
<p>The models im using</p>
<pre><code>class Vote(models.Model):
user = models.ForeignKey(User)
content_type = models.ForeignKey(ContentType)
object_id = models.PositiveIntegerField()
payload = generic.GenericForeignKey('content_type', 'object_id')
vote = models.IntegerField(choices = possible_votes.items() )
class Issue(models.Model):
title = models.CharField( blank=True, max_length=200)
</code></pre>
| 1 | 2009-07-19T21:01:23Z | 1,151,508 | <p>The code Below did the trick for me!</p>
<pre><code>def get_object_votes(self, obj, all=False):
"""
Get a dictionary mapping vote to votecount
"""
object_id = obj._get_pk_val()
ctype = ContentType.objects.get_for_model(obj)
queryset = self.filter(content_type=ctype, object_id=object_id)
if not all:
queryset = queryset.filter(is_archived=False) # only pick active votes
queryset = queryset.values('vote')
queryset = queryset.annotate(vcount=Count("vote")).order_by()
votes = {}
for count in queryset:
votes[count['vote']] = count['vcount']
return votes
</code></pre>
| 1 | 2009-07-20T02:26:47Z | [
"python",
"sql",
"django",
"orm",
"django-models"
] |
python saving unicode into file | 1,150,994 | <p>I'm having some trouble figuring out how to save Unicode into a file in Python. I have the following code, and if I run it in a script test.py, it should create a new file called priceinfo.txt and write what's in price_info to the file. But I do not see the file; can anyone enlighten me on what could be the problem?</p>
<p>Thanks a lot!</p>
<pre><code>price_info = u'it costs \u20ac 5'
f = codecs.open('priceinfo.txt','wb','utf-8')
f.write(price_info)
f.close()
</code></pre>
| -1 | 2009-07-19T21:55:31Z | 1,151,017 | <p>Assuming no error messages from the program (which would be the result of forgetting to import the codecs module), are you sure you're looking in the right place? That code writes priceinfo.txt in the current working directory (IOW are you sure that you're looking inside the working directory?)</p>
| 1 | 2009-07-19T22:06:37Z | [
"python",
"file",
"unicode"
] |
python saving unicode into file | 1,150,994 | <p>I'm having some trouble figuring out how to save Unicode into a file in Python. I have the following code, and if I run it in a script test.py, it should create a new file called priceinfo.txt and write what's in price_info to the file. But I do not see the file; can anyone enlighten me on what could be the problem?</p>
<p>Thanks a lot!</p>
<pre><code>price_info = u'it costs \u20ac 5'
f = codecs.open('priceinfo.txt','wb','utf-8')
f.write(price_info)
f.close()
</code></pre>
| -1 | 2009-07-19T21:55:31Z | 1,151,018 | <p>I can think of several reasons:</p>
<ol>
<li>the file gets created, but in a different directory. Be certain what the working
directory of the script is.</li>
<li>you don't have permission to create the file, in the directory where you want to create it.</li>
<li>you have some error in your Python script, and it does not get executed at all.</li>
</ol>
<p>To find out which one it is, run the script in a command window, and check for any error output that you get.</p>
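<p>A quick way to rule out reasons 1 and 3 at once is to build an absolute path explicitly and read the file back (a sketch; <code>io.open</code> behaves like <code>codecs.open</code> for this purpose):</p>

```python
import io
import os

price_info = u'it costs \u20ac 5'
path = os.path.join(os.getcwd(), 'priceinfo.txt')  # absolute path, no ambiguity
print(path)  # confirm this is the directory you are checking

with io.open(path, 'w', encoding='utf-8') as f:
    f.write(price_info)

# Read it back to confirm both the location and the UTF-8 round-trip
with io.open(path, 'r', encoding='utf-8') as f:
    print(f.read() == price_info)  # True
```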
| 3 | 2009-07-19T22:07:55Z | [
"python",
"file",
"unicode"
] |
Multiple regression in Python | 1,151,088 | <p>I am currently using scipy's linregress function for single regression. I am unable to find if the same library, or another, is able to do multiple regression, that is, one dependent variable and more than one independent variable. I'd like to avoid R if possible. If you're wondering, I am doing FX market analysis with the goal of replicating one currency pair with multiple other currency pairs. Anyone help? Thanks,</p>
<p>Thomas</p>
| 4 | 2009-07-19T22:41:02Z | 1,151,111 | <p>I'm not sure if this is what you need, but the <a href="http://mdp-toolkit.sourceforge.net/" rel="nofollow">Modular toolkit for Data Processing (MDP)</a> libray recently implemented <a href="http://mdp-toolkit.sourceforge.net/node_list.html#mdp.nodes.LinearRegressionNode" rel="nofollow">multivariate linear regression</a>. It is under LGPL license.</p>
| 2 | 2009-07-19T22:53:31Z | [
"python",
"math",
"scipy",
"regression"
] |
Multiple regression in Python | 1,151,088 | <p>I am currently using scipy's linregress function for single regression. I am unable to find if the same library, or another, is able to do multiple regression, that is, one dependent variable and more than one independent variable. I'd like to avoid R if possible. If you're wondering, I am doing FX market analysis with the goal of replicating one currency pair with multiple other currency pairs. Anyone help? Thanks,</p>
<p>Thomas</p>
| 4 | 2009-07-19T22:41:02Z | 1,152,136 | <p>Use the OLS class [<a href="http://www.scipy.org/Cookbook/OLS">http://www.scipy.org/Cookbook/OLS</a>] from the SciPy cookbook.</p>
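<p>If pulling in the cookbook recipe is more than you need, plain NumPy can already fit one dependent variable against several independents via least squares (a sketch; the two regressor series here are made-up stand-ins for the currency data):</p>

```python
import numpy as np

# Two made-up regressor series plus an intercept column
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
X = np.column_stack([np.ones_like(x1), x1, x2])

# Dependent series, constructed here as y = 1 + 2*x1 + 0.5*x2
y = 1.0 + 2.0 * x1 + 0.5 * x2

# Ordinary least squares fit of y on both regressors at once
beta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # recovers approximately [1.  2.  0.5]
```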
| 9 | 2009-07-20T07:33:51Z | [
"python",
"math",
"scipy",
"regression"
] |
Accessing Reuters data in Python | 1,151,103 | <p>I am currently successfully downloading live Bloomberg market prices, as well as historical series, using the service's COM API and win32com. Does anyone have any experience doing the same for Reuters live and historical data into Python?</p>
<p>I know that live feeds are available for both services in Excel, so Reuters must also have an API that I can access. Only problem is while Bloomberg support is excellent and describes its API in depth, for this type of query Reuters hasn't been able to get back to me for 2 months! Instead keep trying to sell me their email subscription service which is NOT what I need!!</p>
<p>Anyway rant over any help much appreciated. </p>
| 3 | 2009-07-19T22:48:43Z | 1,152,290 | <p>Reuters seems to charge for their financial data feeds; here is an overview page of their offerings: <a href="http://thomsonreuters.com/products%5Fservices/financial/financial%5Fproducts/event%5Fdriven%5Ftrading/data%5Ffeeds" rel="nofollow">Reuters data feeds</a></p>
| 1 | 2009-07-20T08:22:46Z | [
"python",
"finance",
"reuters"
] |
Accessing Reuters data in Python | 1,151,103 | <p>I am currently successfully downloading live Bloomberg market prices, as well as historical series, using the service's COM API and win32com. Does anyone have any experience doing the same for Reuters live and historical data into Python?</p>
<p>I know that live feeds are available for both services in Excel, so Reuters must also have an API that I can access. Only problem is while Bloomberg support is excellent and describes its API in depth, for this type of query Reuters hasn't been able to get back to me for 2 months! Instead keep trying to sell me their email subscription service which is NOT what I need!!</p>
<p>Anyway rant over any help much appreciated. </p>
| 3 | 2009-07-19T22:48:43Z | 1,166,228 | <p>I have some experience with their APIs. </p>
<p>Reuters also has complete documentation in their <a href="https://customers.reuters.com/developer/" rel="nofollow">Customer Zone website</a>. More info on their APIs can be found there. They have their APIs available in Java, C++, and COM, so I believe there are many possibilities for Python code to interoperate with these.</p>
<p>Take a look at <a href="https://customers.reuters.com/developer/Kits/SFC/CPlusPlus/SFC%5FCPlusPlus%5FKeyFeatures%5FBenefits.aspx" rel="nofollow">SFC C++ Time Series Subscription</a></p>
| 3 | 2009-07-22T15:45:23Z | [
"python",
"finance",
"reuters"
] |
Accessing Reuters data in Python | 1,151,103 | <p>I am currently successfully downloading live Bloomberg market prices, as well as historical series, using the service's COM API and win32com. Does anyone have any experience doing the same for Reuters live and historical data into Python?</p>
<p>I know that live feeds are available for both services in Excel, so Reuters must also have an API that I can access. Only problem is while Bloomberg support is excellent and describes its API in depth, for this type of query Reuters hasn't been able to get back to me for 2 months! Instead keep trying to sell me their email subscription service which is NOT what I need!!</p>
<p>Anyway rant over any help much appreciated. </p>
| 3 | 2009-07-19T22:48:43Z | 15,900,170 | <p>Check out <a href="http://devcartel.com" rel="nofollow">http://devcartel.com</a>; they have PyRFA, a Reuters market data API for Python.</p>
| 3 | 2013-04-09T11:04:06Z | [
"python",
"finance",
"reuters"
] |
Accessing Reuters data in Python | 1,151,103 | <p>I am currently successfully downloading live Bloomberg market prices, as well as historical series, using the service's COM API and win32com. Does anyone have any experience doing the same for Reuters live and historical data into Python?</p>
<p>I know that live feeds are available for both services in Excel, so Reuters must also have an API that I can access. Only problem is while Bloomberg support is excellent and describes its API in depth, for this type of query Reuters hasn't been able to get back to me for 2 months! Instead keep trying to sell me their email subscription service which is NOT what I need!!</p>
<p>Anyway rant over any help much appreciated. </p>
| 3 | 2009-07-19T22:48:43Z | 26,461,205 | <p>There's a SOAP API, provided under the Thomson Reuters Dataworks Enterprise (formerly Datastream) subscription. It is not free, though, and it does not come with Thomson Reuters Eikon - you'll need to pay extra for the data streaming/storage license.</p>
<p>If you have this subscription, then pydatastream (<a href="https://github.com/vfilimonov/pydatastream" rel="nofollow">https://github.com/vfilimonov/pydatastream</a>) will allow you to get the data directly to python in pandas.DataFrame format (cross-platform).</p>
| 1 | 2014-10-20T08:17:48Z | [
"python",
"finance",
"reuters"
] |
How to get Python syntax highlighting for Visual Studio? | 1,151,207 | <p>Visual Studio 2008 is great as text editor, but it lacks Python syntax highlighting, can I get this as an add-on? Where can I find it?</p>
| 3 | 2009-07-19T23:46:13Z | 1,151,218 | <p>Have you considered installing IronPython and using that to edit your work? </p>
<ul>
<li><a href="http://www.codeplex.com/IronPythonStudio" rel="nofollow">http://www.codeplex.com/IronPythonStudio</a></li>
</ul>
| 2 | 2009-07-19T23:49:08Z | [
"python",
"visual-studio",
"visual-studio-2008",
"syntax-highlighting"
] |
Equivalent of NotImplementedError for fields in Python | 1,151,212 | <p>In Python 2.x when you want to mark a method as abstract, you can define it like so:</p>
<pre><code>class Base:
def foo(self):
raise NotImplementedError("Subclasses should implement this!")
</code></pre>
<p>Then if you forget to override it, you get a nice reminder exception. Is there an equivalent way to mark a field as abstract? Or is stating it in the class docstring all you can do?</p>
<p>At first I thought I could set the field to NotImplemented, but when I looked up what it's actually for (rich comparisons) it seemed abusive.</p>
| 32 | 2009-07-19T23:48:42Z | 1,151,260 | <p>Yes, you can. Use the <code>@property</code> decorator. For instance, if you have a field called "example" then can't you do something like this:</p>
<pre><code>class Base(object):
@property
def example(self):
raise NotImplementedError("Subclasses should implement this!")
</code></pre>
<p>Running the following produces a <code>NotImplementedError</code> just like you want.</p>
<pre><code>b = Base()
print b.example
</code></pre>
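<p>For completeness, a subclass then satisfies the "abstract field" simply by shadowing the property with a plain class attribute (a sketch):</p>

```python
class Base(object):
    @property
    def example(self):
        raise NotImplementedError("Subclasses should implement this!")

class Concrete(Base):
    example = "implemented"  # a plain class attribute shadows the property

print(Concrete().example)  # implemented

try:
    Base().example
except NotImplementedError as exc:
    print("still abstract:", exc)
```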
| 34 | 2009-07-20T00:08:11Z | [
"python",
"abstract-class"
] |
Equivalent of NotImplementedError for fields in Python | 1,151,212 | <p>In Python 2.x when you want to mark a method as abstract, you can define it like so:</p>
<pre><code>class Base:
def foo(self):
raise NotImplementedError("Subclasses should implement this!")
</code></pre>
<p>Then if you forget to override it, you get a nice reminder exception. Is there an equivalent way to mark a field as abstract? Or is stating it in the class docstring all you can do?</p>
<p>At first I thought I could set the field to NotImplemented, but when I looked up what it's actually for (rich comparisons) it seemed abusive.</p>
| 32 | 2009-07-19T23:48:42Z | 1,151,261 | <pre><code>def require_abstract_fields(obj, cls):
abstract_fields = getattr(cls, "abstract_fields", None)
if abstract_fields is None:
return
for field in abstract_fields:
if not hasattr(obj, field):
raise RuntimeError, "object %s failed to define %s" % (obj, field)
class a(object):
abstract_fields = ("x", )
def __init__(self):
require_abstract_fields(self, a)
class b(a):
abstract_fields = ("y", )
x = 5
def __init__(self):
require_abstract_fields(self, b)
super(b, self).__init__()
b()
a()
</code></pre>
<p>Note the passing of the class type into <code>require_abstract_fields</code>, so if multiple inherited classes use this, they don't all validate the most-derived-class's fields. You might be able to automate this with a metaclass, but I didn't dig into that. Defining a field to None is accepted.</p>
| 1 | 2009-07-20T00:08:46Z | [
"python",
"abstract-class"
] |
Equivalent of NotImplementedError for fields in Python | 1,151,212 | <p>In Python 2.x when you want to mark a method as abstract, you can define it like so:</p>
<pre><code>class Base:
def foo(self):
raise NotImplementedError("Subclasses should implement this!")
</code></pre>
<p>Then if you forget to override it, you get a nice reminder exception. Is there an equivalent way to mark a field as abstract? Or is stating it in the class docstring all you can do?</p>
<p>At first I thought I could set the field to NotImplemented, but when I looked up what it's actually for (rich comparisons) it seemed abusive.</p>
| 32 | 2009-07-19T23:48:42Z | 1,151,275 | <p>Alternate answer:</p>
<pre><code>@property
def NotImplementedField(self):
raise NotImplementedError
class a(object):
x = NotImplementedField
class b(a):
# x = 5
pass
b().x
a().x
</code></pre>
<p>This is like Evan's, but concise and cheap--you'll only get a single instance of NotImplementedField.</p>
| 23 | 2009-07-20T00:15:49Z | [
"python",
"abstract-class"
] |
Equivalent of NotImplementedError for fields in Python | 1,151,212 | <p>In Python 2.x when you want to mark a method as abstract, you can define it like so:</p>
<pre><code>class Base:
def foo(self):
raise NotImplementedError("Subclasses should implement this!")
</code></pre>
<p>Then if you forget to override it, you get a nice reminder exception. Is there an equivalent way to mark a field as abstract? Or is stating it in the class docstring all you can do?</p>
<p>At first I thought I could set the field to NotImplemented, but when I looked up what it's actually for (rich comparisons) it seemed abusive.</p>
| 32 | 2009-07-19T23:48:42Z | 34,535,240 | <p>And here is my solution:</p>
<pre class="lang-py prettyprint-override"><code>def not_implemented_method(func):
from functools import wraps
from inspect import getargspec, formatargspec
@wraps(func)
def wrapper(self, *args, **kwargs):
c = self.__class__.__name__
m = func.__name__
a = formatargspec(*getargspec(func))
raise NotImplementedError('\'%s\' object does not implement the method \'%s%s\'' % (c, m, a))
return wrapper
def not_implemented_property(func):
from functools import wraps
from inspect import getargspec, formatargspec
@wraps(func)
def wrapper(self, *args, **kwargs):
c = self.__class__.__name__
m = func.__name__
raise NotImplementedError('\'%s\' object does not implement the property \'%s\'' % (c, m))
return property(wrapper, wrapper, wrapper)
</code></pre>
<p>It can be used as</p>
<pre class="lang-py prettyprint-override"><code>class AbstractBase(object):
@not_implemented_method
def test(self):
pass
@not_implemented_property
def value(self):
pass
class Implementation(AbstractBase):
value = None
def __init__(self):
self.value = 42
def test(self):
return True
</code></pre>
| 0 | 2015-12-30T18:39:11Z | [
"python",
"abstract-class"
] |
Mechanize not being installed by easy_install? | 1,151,256 | <p>I am in the process of migrating from an old Win2K machine to a new and much more powerful Vista 64 bit PC. Most of the migration has gone fairly smoothly - but I did find that I needed to reinstall ALL of my Python related tools.</p>
<p>I've downloaded the mechanize-0.1.11.tar.gz file and ran easy_install to install it. This produced C:\Python25\Lib\site-packages\mechanize-0.1.11-py2.5.egg.</p>
<p>I then ran a python script to test it, and it worked fine under the interpreter. But, when I ran py2exe to compile the script, I get a message that mechanize cannot be found.</p>
<p>I then moved the egg to a new folder, used easy_install to install it - and got every indication that it did install.</p>
<p>But, I still get the same message when trying to use py2exe - that mechanize does not exist!</p>
<p>I did a search for "mechanize" of the entire disk, and get only the 2 egg files as a result. What files should be produced by the install - and where should I expect them to be located?</p>
<p>Obviously, I'm missing something here...any suggestions?</p>
<p>Also, perhaps related, the python I am running is the 32 bit 2.5.4 version...which is what I had before and wanted to get everything working properly prior to installing the 64 bit version - plus, I don't see some of the tools (easy_install & py2exe) which seem to support the 64 bit versions. Is that part of the problem, do I need to install & run the 64-bit version - and will that be a problem for those who run 32-bit PC's when they run my scripts?</p>
| 2 | 2009-07-20T00:06:27Z | 1,151,280 | <p>There is a <a href="http://www.py2exe.org/index.cgi/ExeWithEggs" rel="nofollow">note on the py2exe site</a> that it does not work if the source is in egg format:</p>
<blockquote>
<p>py2exe does not currently (as of
0.6.5) work out of the box if some of your program's dependencies are in
.egg form.</p>
<p>If your program does not itself use
setuptools facilities (eg,
pkg_resources), then all you need to
do is make sure the dependencies are
installed on your system in unzipped
form, rather than in a zipped .egg.</p>
<p><strong>One way to achieve this is to use the</strong>
<strong>--always-unzip option to easy_install</strong>.</p>
</blockquote>
<p>Which version are you running? The latest version listed at pypi.python.org is version 0.6.9 but there is no indication I can find if the problem with eggs is fixed in this release.</p>
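<p>One quick diagnostic before reinstalling anything: import the dependency and look at where it actually loads from. A path containing <code>.egg</code> means it is installed as a zipped egg, which py2exe's module finder will miss (a sketch; a stdlib module stands in for mechanize here, since mechanize may not be installed):</p>

```python
import json  # stand-in for mechanize, which may not be installed here

# Where does the package actually load from?  A path containing '.egg'
# indicates a zipped egg, which py2exe cannot scan for modules.
print(json.__file__)
print('.egg' in json.__file__)  # False for this stdlib module
```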
| 2 | 2009-07-20T00:22:28Z | [
"python",
"mechanize",
"easy-install"
] |
Mechanize not being installed by easy_install? | 1,151,256 | <p>I am in the process of migrating from an old Win2K machine to a new and much more powerful Vista 64 bit PC. Most of the migration has gone fairly smoothly - but I did find that I needed to reinstall ALL of my Python related tools.</p>
<p>I've downloaded the mechanize-0.1.11.tar.gz file and ran easy_install to install it. This produced C:\Python25\Lib\site-packages\mechanize-0.1.11-py2.5.egg.</p>
<p>I then ran a python script to test it, and it worked fine under the interpreter. But, when I ran py2exe to compile the script, I get a message that mechanize cannot be found.</p>
<p>I then moved the egg to a new folder, used easy_install to install it - and got every indication that it did install.</p>
<p>But, I still get the same message when trying to use py2exe - that mechanize does not exist!</p>
<p>I did a search for "mechanize" of the entire disk, and get only the 2 egg files as a result. What files should be produced by the install - and where should I expect them to be located?</p>
<p>Obviously, I'm missing something here...any suggestions?</p>
<p>Also, perhaps related, the python I am running is the 32 bit 2.5.4 version...which is what I had before and wanted to get everything working properly prior to installing the 64 bit version - plus, I don't see some of the tools (easy_install & py2exe) which seem to support the 64 bit versions. Is that part of the problem, do I need to install & run the 64-bit version - and will that be a problem for those who run 32-bit PC's when they run my scripts?</p>
| 2 | 2009-07-20T00:06:27Z | 8,323,200 | <p>As other users suggested as above... I hereby summarize the steps I need to make Mechanize and BeautifulSoup work with py2exe.</p>
<p><strong>Converting .py Files to Windows .exe</strong></p>
<p>Follow instructions in here: <a href="http://www.py2exe.org/index.cgi/Tutorial" rel="nofollow">py2exe Tutorial</a></p>
<p>STEP 1</p>
<p>Download py2exe from here: <a href="http://sourceforge.net/projects/py2exe/files/" rel="nofollow">http://sourceforge.net/projects/py2exe/files/</a>
(I am using Python 2.7)</p>
<p>I installed 0.6.9 for Python 2.7</p>
<p><strong>py2exe-0.6.9.win32-py2.7.exe (201KB)</strong></p>
<p>Install it</p>
<p>STEP 2</p>
<p>Try a hello-world file to make sure everything works, as given in</p>
<p><a href="http://www.py2exe.org/index.cgi/Tutorial" rel="nofollow">http://www.py2exe.org/index.cgi/Tutorial</a></p>
<ul>
<li>Python setup.py install (step 2 on web tutorial) </li>
<li>Then use a setup.py (step 3 on web tutorial).</li>
</ul>
<p>See Issues below for any problems with Modules (under this folder: C:\Python27\Lib\site-packages)</p>
<p>STEP 3</p>
<p>Test the executable file.. in the dist directory.</p>
<p>In summary, when you have problems with modules, check the site-packages directory (here: C:\Python27\Lib\site-packages) and see whether the full unpacked package is there rather than just a zipped .egg file.
py2exe cannot make use of a zipped .egg file on its own (a layman's understanding).</p>
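To tell whether a package under site-packages is a zipped .egg archive (which py2exe would miss) or a fully unpacked directory, a quick check like this can help. This is my own sketch; the helper names are not part of py2exe or setuptools:

```python
import os
import zipfile

def is_zipped_egg(path):
    """True if 'path' is a zipped .egg archive rather than an unpacked directory."""
    return os.path.isfile(path) and zipfile.is_zipfile(path)

def find_zipped_eggs(site_packages):
    """List .egg entries in 'site_packages' that are zip archives, not directories."""
    return [name for name in os.listdir(site_packages)
            if name.endswith('.egg')
            and is_zipped_egg(os.path.join(site_packages, name))]
```

Any package this reports as a zipped egg would need to be reinstalled with `easy_install --always-unzip` before py2exe can pick it up.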
<p><strong>Issues:</strong></p>
<p>The Mechanize module was not found by py2exe. This was because my first installation of mechanize on my local machine left only a zipped .egg file (mechanize-0.2.5-py2.7.egg.OLD, 324KB). I needed to install the full, unzipped mechanize like this:</p>
<pre><code>easy_install --always-unzip <library_name>
</code></pre>
<p>I did that, and this time mechanize was installed in a folder named mechanize-0.2.5-py2.7.egg (1.1MB).</p>
<p>Similarly, BeautifulSoup was originally just a 69KB .egg file (beautifulsoup-3.2.0-py2.7.egg); after installing with</p>
<pre><code>easy_install --always-unzip BeautifulSoup
</code></pre>
<p>it was installed in a folder named beautifulsoup-3.2.0-py2.7.egg (229KB).</p>
<p>Some instructions in here: <a href="http://www.daniweb.com/software-development/python/threads/204941" rel="nofollow">http://www.daniweb.com/software-development/python/threads/204941</a></p>
| 0 | 2011-11-30T08:48:35Z | [
"python",
"mechanize",
"easy-install"
] |
Error on connecting to Oracle from py2exe'd program: Unable to acquire Oracle environment handle | 1,151,557 | <p>My python program (Python 2.6) works fine when I run it using the Python interpreter, it connects to the Oracle database (10g XE) without error. However, when I compile it using py2exe, the executable version fails with "Unable to acquire Oracle environment handle" at the call to cx_Oracle.connect().</p>
<p>I've tried the following with no joy:</p>
<ul>
<li>Oracle instant client 10g and 11g</li>
<li>Oracle XE Client</li>
<li>reinstall cx_Oracle-5.0.2-10g.win32-py2.6.msi</li>
<li>setting <code>ORACLE_HOME</code> as well as PATH</li>
<li>another computer with just an Oracle client and the exe</li>
<li>various options for building the exe (no compression and/or using zip file)</li>
</ul>
<p>My testcase:</p>
<p>testora.py:</p>
<pre><code>import cx_Oracle
import decimal # needed for py2exe to compile this correctly
def testora():
"""testora
>>> testora.testora()
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
"""
orcl = cx_Oracle.connect('scott/tiger@localhost:1521/orcl')
print orcl
curs = orcl.cursor()
result = curs.execute('SELECT * FROM DUAL')
for (dummy,) in result:
print dummy
if __name__ == '__main__':
testora()
</code></pre>
<p>build_testora.py:</p>
<pre><code>from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(
options = {'py2exe': {
'bundle_files': 2,
'compressed': True
}},
console = [{'script': "testora.py"}],
zipfile = None
)
</code></pre>
<p>Results:</p>
<pre><code>C:\Python26\working>python testora.py
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
C:\Python26\working>python build_testora.py py2exe
C:\Python26\lib\site-packages\py2exe\build_exe.py:16: DeprecationWarning: the se
ts module is deprecated
import sets
running py2exe
creating C:\Python26\working\build
creating C:\Python26\working\build\bdist.win32
creating C:\Python26\working\build\bdist.win32\winexe
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6
creating C:\Python26\working\build\bdist.win32\winexe\bundle-2.6
creating C:\Python26\working\build\bdist.win32\winexe\temp
*** searching for required modules ***
*** parsing results ***
*** finding dlls needed ***
*** create binaries ***
*** byte compile python files ***
byte-compiling C:\Python26\lib\StringIO.py to StringIO.pyc
byte-compiling C:\Python26\lib\UserDict.py to UserDict.pyc
byte-compiling C:\Python26\lib\__future__.py to __future__.pyc
byte-compiling C:\Python26\lib\_abcoll.py to _abcoll.pyc
byte-compiling C:\Python26\lib\_strptime.py to _strptime.pyc
byte-compiling C:\Python26\lib\_threading_local.py to _threading_local.pyc
byte-compiling C:\Python26\lib\abc.py to abc.pyc
byte-compiling C:\Python26\lib\atexit.py to atexit.pyc
byte-compiling C:\Python26\lib\base64.py to base64.pyc
byte-compiling C:\Python26\lib\bdb.py to bdb.pyc
byte-compiling C:\Python26\lib\bisect.py to bisect.pyc
byte-compiling C:\Python26\lib\calendar.py to calendar.pyc
byte-compiling C:\Python26\lib\cmd.py to cmd.pyc
byte-compiling C:\Python26\lib\codecs.py to codecs.pyc
byte-compiling C:\Python26\lib\collections.py to collections.pyc
byte-compiling C:\Python26\lib\copy.py to copy.pyc
byte-compiling C:\Python26\lib\copy_reg.py to copy_reg.pyc
byte-compiling C:\Python26\lib\decimal.py to decimal.pyc
byte-compiling C:\Python26\lib\difflib.py to difflib.pyc
byte-compiling C:\Python26\lib\dis.py to dis.pyc
byte-compiling C:\Python26\lib\doctest.py to doctest.pyc
byte-compiling C:\Python26\lib\dummy_thread.py to dummy_thread.pyc
byte-compiling C:\Python26\lib\encodings\__init__.py to encodings\__init__.pyc
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6\encodings
byte-compiling C:\Python26\lib\encodings\aliases.py to encodings\aliases.pyc
byte-compiling C:\Python26\lib\encodings\ascii.py to encodings\ascii.pyc
byte-compiling C:\Python26\lib\encodings\base64_codec.py to encodings\base64_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\big5.py to encodings\big5.pyc
byte-compiling C:\Python26\lib\encodings\big5hkscs.py to encodings\big5hkscs.pyc
byte-compiling C:\Python26\lib\encodings\bz2_codec.py to encodings\bz2_codec.pyc
byte-compiling C:\Python26\lib\encodings\charmap.py to encodings\charmap.pyc
byte-compiling C:\Python26\lib\encodings\cp037.py to encodings\cp037.pyc
byte-compiling C:\Python26\lib\encodings\cp1006.py to encodings\cp1006.pyc
byte-compiling C:\Python26\lib\encodings\cp1026.py to encodings\cp1026.pyc
byte-compiling C:\Python26\lib\encodings\cp1140.py to encodings\cp1140.pyc
byte-compiling C:\Python26\lib\encodings\cp1250.py to encodings\cp1250.pyc
byte-compiling C:\Python26\lib\encodings\cp1251.py to encodings\cp1251.pyc
byte-compiling C:\Python26\lib\encodings\cp1252.py to encodings\cp1252.pyc
byte-compiling C:\Python26\lib\encodings\cp1253.py to encodings\cp1253.pyc
byte-compiling C:\Python26\lib\encodings\cp1254.py to encodings\cp1254.pyc
byte-compiling C:\Python26\lib\encodings\cp1255.py to encodings\cp1255.pyc
byte-compiling C:\Python26\lib\encodings\cp1256.py to encodings\cp1256.pyc
byte-compiling C:\Python26\lib\encodings\cp1257.py to encodings\cp1257.pyc
byte-compiling C:\Python26\lib\encodings\cp1258.py to encodings\cp1258.pyc
byte-compiling C:\Python26\lib\encodings\cp424.py to encodings\cp424.pyc
byte-compiling C:\Python26\lib\encodings\cp437.py to encodings\cp437.pyc
byte-compiling C:\Python26\lib\encodings\cp500.py to encodings\cp500.pyc
byte-compiling C:\Python26\lib\encodings\cp737.py to encodings\cp737.pyc
byte-compiling C:\Python26\lib\encodings\cp775.py to encodings\cp775.pyc
byte-compiling C:\Python26\lib\encodings\cp850.py to encodings\cp850.pyc
byte-compiling C:\Python26\lib\encodings\cp852.py to encodings\cp852.pyc
byte-compiling C:\Python26\lib\encodings\cp855.py to encodings\cp855.pyc
byte-compiling C:\Python26\lib\encodings\cp856.py to encodings\cp856.pyc
byte-compiling C:\Python26\lib\encodings\cp857.py to encodings\cp857.pyc
byte-compiling C:\Python26\lib\encodings\cp860.py to encodings\cp860.pyc
byte-compiling C:\Python26\lib\encodings\cp861.py to encodings\cp861.pyc
byte-compiling C:\Python26\lib\encodings\cp862.py to encodings\cp862.pyc
byte-compiling C:\Python26\lib\encodings\cp863.py to encodings\cp863.pyc
byte-compiling C:\Python26\lib\encodings\cp864.py to encodings\cp864.pyc
byte-compiling C:\Python26\lib\encodings\cp865.py to encodings\cp865.pyc
byte-compiling C:\Python26\lib\encodings\cp866.py to encodings\cp866.pyc
byte-compiling C:\Python26\lib\encodings\cp869.py to encodings\cp869.pyc
byte-compiling C:\Python26\lib\encodings\cp874.py to encodings\cp874.pyc
byte-compiling C:\Python26\lib\encodings\cp875.py to encodings\cp875.pyc
byte-compiling C:\Python26\lib\encodings\cp932.py to encodings\cp932.pyc
byte-compiling C:\Python26\lib\encodings\cp949.py to encodings\cp949.pyc
byte-compiling C:\Python26\lib\encodings\cp950.py to encodings\cp950.pyc
byte-compiling C:\Python26\lib\encodings\euc_jis_2004.py to encodings\euc_jis_20
04.pyc
byte-compiling C:\Python26\lib\encodings\euc_jisx0213.py to encodings\euc_jisx02
13.pyc
byte-compiling C:\Python26\lib\encodings\euc_jp.py to encodings\euc_jp.pyc
byte-compiling C:\Python26\lib\encodings\euc_kr.py to encodings\euc_kr.pyc
byte-compiling C:\Python26\lib\encodings\gb18030.py to encodings\gb18030.pyc
byte-compiling C:\Python26\lib\encodings\gb2312.py to encodings\gb2312.pyc
byte-compiling C:\Python26\lib\encodings\gbk.py to encodings\gbk.pyc
byte-compiling C:\Python26\lib\encodings\hex_codec.py to encodings\hex_codec.pyc
byte-compiling C:\Python26\lib\encodings\hp_roman8.py to encodings\hp_roman8.pyc
byte-compiling C:\Python26\lib\encodings\hz.py to encodings\hz.pyc
byte-compiling C:\Python26\lib\encodings\idna.py to encodings\idna.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp.py to encodings\iso2022_jp.p
yc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_1.py to encodings\iso2022_jp
_1.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2.py to encodings\iso2022_jp
_2.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2004.py to encodings\iso2022
_jp_2004.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_3.py to encodings\iso2022_jp
_3.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_ext.py to encodings\iso2022_
jp_ext.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_kr.py to encodings\iso2022_kr.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_1.py to encodings\iso8859_1.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_10.py to encodings\iso8859_10.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_11.py to encodings\iso8859_11.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_13.py to encodings\iso8859_13.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_14.py to encodings\iso8859_14.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_15.py to encodings\iso8859_15.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_16.py to encodings\iso8859_16.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_2.py to encodings\iso8859_2.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_3.py to encodings\iso8859_3.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_4.py to encodings\iso8859_4.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_5.py to encodings\iso8859_5.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_6.py to encodings\iso8859_6.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_7.py to encodings\iso8859_7.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_8.py to encodings\iso8859_8.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_9.py to encodings\iso8859_9.pyc
byte-compiling C:\Python26\lib\encodings\johab.py to encodings\johab.pyc
byte-compiling C:\Python26\lib\encodings\koi8_r.py to encodings\koi8_r.pyc
byte-compiling C:\Python26\lib\encodings\koi8_u.py to encodings\koi8_u.pyc
byte-compiling C:\Python26\lib\encodings\latin_1.py to encodings\latin_1.pyc
byte-compiling C:\Python26\lib\encodings\mac_arabic.py to encodings\mac_arabic.p
yc
byte-compiling C:\Python26\lib\encodings\mac_centeuro.py to encodings\mac_centeu
ro.pyc
byte-compiling C:\Python26\lib\encodings\mac_croatian.py to encodings\mac_croati
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_cyrillic.py to encodings\mac_cyrill
ic.pyc
byte-compiling C:\Python26\lib\encodings\mac_farsi.py to encodings\mac_farsi.pyc
byte-compiling C:\Python26\lib\encodings\mac_greek.py to encodings\mac_greek.pyc
byte-compiling C:\Python26\lib\encodings\mac_iceland.py to encodings\mac_iceland
.pyc
byte-compiling C:\Python26\lib\encodings\mac_latin2.py to encodings\mac_latin2.p
yc
byte-compiling C:\Python26\lib\encodings\mac_roman.py to encodings\mac_roman.pyc
byte-compiling C:\Python26\lib\encodings\mac_romanian.py to encodings\mac_romani
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_turkish.py to encodings\mac_turkish
.pyc
byte-compiling C:\Python26\lib\encodings\mbcs.py to encodings\mbcs.pyc
byte-compiling C:\Python26\lib\encodings\palmos.py to encodings\palmos.pyc
byte-compiling C:\Python26\lib\encodings\ptcp154.py to encodings\ptcp154.pyc
byte-compiling C:\Python26\lib\encodings\punycode.py to encodings\punycode.pyc
byte-compiling C:\Python26\lib\encodings\quopri_codec.py to encodings\quopri_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\raw_unicode_escape.py to encodings\raw_
unicode_escape.pyc
byte-compiling C:\Python26\lib\encodings\rot_13.py to encodings\rot_13.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis.py to encodings\shift_jis.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis_2004.py to encodings\shift_ji
s_2004.pyc
byte-compiling C:\Python26\lib\encodings\shift_jisx0213.py to encodings\shift_ji
sx0213.pyc
byte-compiling C:\Python26\lib\encodings\string_escape.py to encodings\string_es
cape.pyc
byte-compiling C:\Python26\lib\encodings\tis_620.py to encodings\tis_620.pyc
byte-compiling C:\Python26\lib\encodings\undefined.py to encodings\undefined.pyc
byte-compiling C:\Python26\lib\encodings\unicode_escape.py to encodings\unicode_
escape.pyc
byte-compiling C:\Python26\lib\encodings\unicode_internal.py to encodings\unicod
e_internal.pyc
byte-compiling C:\Python26\lib\encodings\utf_16.py to encodings\utf_16.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_be.py to encodings\utf_16_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_le.py to encodings\utf_16_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_32.py to encodings\utf_32.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_be.py to encodings\utf_32_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_le.py to encodings\utf_32_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_7.py to encodings\utf_7.pyc
byte-compiling C:\Python26\lib\encodings\utf_8.py to encodings\utf_8.pyc
byte-compiling C:\Python26\lib\encodings\utf_8_sig.py to encodings\utf_8_sig.pyc
byte-compiling C:\Python26\lib\encodings\uu_codec.py to encodings\uu_codec.pyc
byte-compiling C:\Python26\lib\encodings\zlib_codec.py to encodings\zlib_codec.p
yc
byte-compiling C:\Python26\lib\functools.py to functools.pyc
byte-compiling C:\Python26\lib\genericpath.py to genericpath.pyc
byte-compiling C:\Python26\lib\getopt.py to getopt.pyc
byte-compiling C:\Python26\lib\gettext.py to gettext.pyc
byte-compiling C:\Python26\lib\heapq.py to heapq.pyc
byte-compiling C:\Python26\lib\inspect.py to inspect.pyc
byte-compiling C:\Python26\lib\keyword.py to keyword.pyc
byte-compiling C:\Python26\lib\linecache.py to linecache.pyc
byte-compiling C:\Python26\lib\locale.py to locale.pyc
byte-compiling C:\Python26\lib\ntpath.py to ntpath.pyc
byte-compiling C:\Python26\lib\numbers.py to numbers.pyc
byte-compiling C:\Python26\lib\opcode.py to opcode.pyc
byte-compiling C:\Python26\lib\optparse.py to optparse.pyc
byte-compiling C:\Python26\lib\os.py to os.pyc
byte-compiling C:\Python26\lib\os2emxpath.py to os2emxpath.pyc
byte-compiling C:\Python26\lib\pdb.py to pdb.pyc
byte-compiling C:\Python26\lib\pickle.py to pickle.pyc
byte-compiling C:\Python26\lib\posixpath.py to posixpath.pyc
byte-compiling C:\Python26\lib\pprint.py to pprint.pyc
byte-compiling C:\Python26\lib\quopri.py to quopri.pyc
byte-compiling C:\Python26\lib\random.py to random.pyc
byte-compiling C:\Python26\lib\re.py to re.pyc
byte-compiling C:\Python26\lib\repr.py to repr.pyc
byte-compiling C:\Python26\lib\shlex.py to shlex.pyc
byte-compiling C:\Python26\lib\site-packages\zipextimporter.py to zipextimporter
.pyc
byte-compiling C:\Python26\lib\sre.py to sre.pyc
byte-compiling C:\Python26\lib\sre_compile.py to sre_compile.pyc
byte-compiling C:\Python26\lib\sre_constants.py to sre_constants.pyc
byte-compiling C:\Python26\lib\sre_parse.py to sre_parse.pyc
byte-compiling C:\Python26\lib\stat.py to stat.pyc
byte-compiling C:\Python26\lib\string.py to string.pyc
byte-compiling C:\Python26\lib\stringprep.py to stringprep.pyc
byte-compiling C:\Python26\lib\struct.py to struct.pyc
byte-compiling C:\Python26\lib\subprocess.py to subprocess.pyc
byte-compiling C:\Python26\lib\tempfile.py to tempfile.pyc
byte-compiling C:\Python26\lib\textwrap.py to textwrap.pyc
byte-compiling C:\Python26\lib\threading.py to threading.pyc
byte-compiling C:\Python26\lib\token.py to token.pyc
byte-compiling C:\Python26\lib\tokenize.py to tokenize.pyc
byte-compiling C:\Python26\lib\traceback.py to traceback.pyc
byte-compiling C:\Python26\lib\types.py to types.pyc
byte-compiling C:\Python26\lib\unittest.py to unittest.pyc
byte-compiling C:\Python26\lib\warnings.py to warnings.pyc
*** copy extensions ***
copying C:\Python26\DLLs\bz2.pyd -> C:\Python26\working\build\bdist.win32\winexe
\collect-2.6
copying C:\Python26\DLLs\select.pyd -> C:\Python26\working\build\bdist.win32\win
exe\collect-2.6
copying C:\Python26\DLLs\unicodedata.pyd -> C:\Python26\working\build\bdist.win3
2\winexe\collect-2.6
copying C:\Python26\lib\site-packages\cx_Oracle.pyd -> C:\Python26\working\build
\bdist.win32\winexe\collect-2.6
*** copy dlls ***
copying C:\Oracle\XEClient\bin\OCI.dll -> C:\Python26\working\build\bdist.win32\
winexe\collect-2.6
copying C:\Python26\lib\site-packages\py2exe\run.exe -> C:\Python26\working\dist
\testora.exe
*** binary dependencies ***
Your executable(s) also depend on these dlls which are not included,
you may or may not need to distribute them.
Make sure you have the license if you distribute any of them, and
make sure you don't distribute files belonging to the operating system.
USER32.dll - C:\WINDOWS\system32\USER32.dll
SHELL32.dll - C:\WINDOWS\system32\SHELL32.dll
WSOCK32.dll - C:\WINDOWS\system32\WSOCK32.dll
ADVAPI32.dll - C:\WINDOWS\system32\ADVAPI32.dll
msvcrt.dll - C:\WINDOWS\system32\msvcrt.dll
KERNEL32.dll - C:\WINDOWS\system32\KERNEL32.dll
C:\Python26\working\dist>testora
Traceback (most recent call last):
File "testora.py", line 19, in <module>
File "testora.py", line 11, in testora
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
</code></pre>
| 5 | 2009-07-20T02:57:13Z | 1,151,577 | <p>Did you make sure to exclude the OCI.dll when you built with py2exe? If the version of the DLL on your machine is incompatible with the client version on another machine you test it on (I noticed you tried a 11g client but 10g on your machine), then this configuration will not work (I forget the actual error message though).</p>
| 8 | 2009-07-20T03:10:38Z | [
"python",
"oracle",
"py2exe",
"cx-oracle"
] |
Error on connecting to Oracle from py2exe'd program: Unable to acquire Oracle environment handle | 1,151,557 | <p>My python program (Python 2.6) works fine when I run it using the Python interpreter, it connects to the Oracle database (10g XE) without error. However, when I compile it using py2exe, the executable version fails with "Unable to acquire Oracle environment handle" at the call to cx_Oracle.connect().</p>
<p>I've tried the following with no joy:</p>
<ul>
<li>Oracle instant client 10g and 11g</li>
<li>Oracle XE Client</li>
<li>reinstall cx_Oracle-5.0.2-10g.win32-py2.6.msi</li>
<li>setting <code>ORACLE_HOME</code> as well as PATH</li>
<li>another computer with just an Oracle client and the exe</li>
<li>various options for building the exe (no compression and/or using zip file)</li>
</ul>
<p>My testcase:</p>
<p>testora.py:</p>
<pre><code>import cx_Oracle
import decimal # needed for py2exe to compile this correctly
def testora():
"""testora
>>> testora.testora()
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
"""
orcl = cx_Oracle.connect('scott/tiger@localhost:1521/orcl')
print orcl
curs = orcl.cursor()
result = curs.execute('SELECT * FROM DUAL')
for (dummy,) in result:
print dummy
if __name__ == '__main__':
testora()
</code></pre>
<p>build_testora.py:</p>
<pre><code>from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(
options = {'py2exe': {
'bundle_files': 2,
'compressed': True
}},
console = [{'script': "testora.py"}],
zipfile = None
)
</code></pre>
<p>Results:</p>
<pre><code>C:\Python26\working>python testora.py
<cx_Oracle.Connection to scott@localhost:1521/orcl>
X
C:\Python26\working>python build_testora.py py2exe
C:\Python26\lib\site-packages\py2exe\build_exe.py:16: DeprecationWarning: the se
ts module is deprecated
import sets
running py2exe
creating C:\Python26\working\build
creating C:\Python26\working\build\bdist.win32
creating C:\Python26\working\build\bdist.win32\winexe
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6
creating C:\Python26\working\build\bdist.win32\winexe\bundle-2.6
creating C:\Python26\working\build\bdist.win32\winexe\temp
*** searching for required modules ***
*** parsing results ***
*** finding dlls needed ***
*** create binaries ***
*** byte compile python files ***
byte-compiling C:\Python26\lib\StringIO.py to StringIO.pyc
byte-compiling C:\Python26\lib\UserDict.py to UserDict.pyc
byte-compiling C:\Python26\lib\__future__.py to __future__.pyc
byte-compiling C:\Python26\lib\_abcoll.py to _abcoll.pyc
byte-compiling C:\Python26\lib\_strptime.py to _strptime.pyc
byte-compiling C:\Python26\lib\_threading_local.py to _threading_local.pyc
byte-compiling C:\Python26\lib\abc.py to abc.pyc
byte-compiling C:\Python26\lib\atexit.py to atexit.pyc
byte-compiling C:\Python26\lib\base64.py to base64.pyc
byte-compiling C:\Python26\lib\bdb.py to bdb.pyc
byte-compiling C:\Python26\lib\bisect.py to bisect.pyc
byte-compiling C:\Python26\lib\calendar.py to calendar.pyc
byte-compiling C:\Python26\lib\cmd.py to cmd.pyc
byte-compiling C:\Python26\lib\codecs.py to codecs.pyc
byte-compiling C:\Python26\lib\collections.py to collections.pyc
byte-compiling C:\Python26\lib\copy.py to copy.pyc
byte-compiling C:\Python26\lib\copy_reg.py to copy_reg.pyc
byte-compiling C:\Python26\lib\decimal.py to decimal.pyc
byte-compiling C:\Python26\lib\difflib.py to difflib.pyc
byte-compiling C:\Python26\lib\dis.py to dis.pyc
byte-compiling C:\Python26\lib\doctest.py to doctest.pyc
byte-compiling C:\Python26\lib\dummy_thread.py to dummy_thread.pyc
byte-compiling C:\Python26\lib\encodings\__init__.py to encodings\__init__.pyc
creating C:\Python26\working\build\bdist.win32\winexe\collect-2.6\encodings
byte-compiling C:\Python26\lib\encodings\aliases.py to encodings\aliases.pyc
byte-compiling C:\Python26\lib\encodings\ascii.py to encodings\ascii.pyc
byte-compiling C:\Python26\lib\encodings\base64_codec.py to encodings\base64_cod
ec.pyc
byte-compiling C:\Python26\lib\encodings\big5.py to encodings\big5.pyc
byte-compiling C:\Python26\lib\encodings\big5hkscs.py to encodings\big5hkscs.pyc
byte-compiling C:\Python26\lib\encodings\bz2_codec.py to encodings\bz2_codec.pyc
byte-compiling C:\Python26\lib\encodings\charmap.py to encodings\charmap.pyc
byte-compiling C:\Python26\lib\encodings\cp037.py to encodings\cp037.pyc
byte-compiling C:\Python26\lib\encodings\cp1006.py to encodings\cp1006.pyc
byte-compiling C:\Python26\lib\encodings\cp1026.py to encodings\cp1026.pyc
byte-compiling C:\Python26\lib\encodings\cp1140.py to encodings\cp1140.pyc
byte-compiling C:\Python26\lib\encodings\cp1250.py to encodings\cp1250.pyc
byte-compiling C:\Python26\lib\encodings\cp1251.py to encodings\cp1251.pyc
byte-compiling C:\Python26\lib\encodings\cp1252.py to encodings\cp1252.pyc
byte-compiling C:\Python26\lib\encodings\cp1253.py to encodings\cp1253.pyc
byte-compiling C:\Python26\lib\encodings\cp1254.py to encodings\cp1254.pyc
byte-compiling C:\Python26\lib\encodings\cp1255.py to encodings\cp1255.pyc
byte-compiling C:\Python26\lib\encodings\cp1256.py to encodings\cp1256.pyc
byte-compiling C:\Python26\lib\encodings\cp1257.py to encodings\cp1257.pyc
byte-compiling C:\Python26\lib\encodings\cp1258.py to encodings\cp1258.pyc
byte-compiling C:\Python26\lib\encodings\cp424.py to encodings\cp424.pyc
byte-compiling C:\Python26\lib\encodings\cp437.py to encodings\cp437.pyc
byte-compiling C:\Python26\lib\encodings\cp500.py to encodings\cp500.pyc
byte-compiling C:\Python26\lib\encodings\cp737.py to encodings\cp737.pyc
byte-compiling C:\Python26\lib\encodings\cp775.py to encodings\cp775.pyc
byte-compiling C:\Python26\lib\encodings\cp850.py to encodings\cp850.pyc
byte-compiling C:\Python26\lib\encodings\cp852.py to encodings\cp852.pyc
byte-compiling C:\Python26\lib\encodings\cp855.py to encodings\cp855.pyc
byte-compiling C:\Python26\lib\encodings\cp856.py to encodings\cp856.pyc
byte-compiling C:\Python26\lib\encodings\cp857.py to encodings\cp857.pyc
byte-compiling C:\Python26\lib\encodings\cp860.py to encodings\cp860.pyc
byte-compiling C:\Python26\lib\encodings\cp861.py to encodings\cp861.pyc
byte-compiling C:\Python26\lib\encodings\cp862.py to encodings\cp862.pyc
byte-compiling C:\Python26\lib\encodings\cp863.py to encodings\cp863.pyc
byte-compiling C:\Python26\lib\encodings\cp864.py to encodings\cp864.pyc
byte-compiling C:\Python26\lib\encodings\cp865.py to encodings\cp865.pyc
byte-compiling C:\Python26\lib\encodings\cp866.py to encodings\cp866.pyc
byte-compiling C:\Python26\lib\encodings\cp869.py to encodings\cp869.pyc
byte-compiling C:\Python26\lib\encodings\cp874.py to encodings\cp874.pyc
byte-compiling C:\Python26\lib\encodings\cp875.py to encodings\cp875.pyc
byte-compiling C:\Python26\lib\encodings\cp932.py to encodings\cp932.pyc
byte-compiling C:\Python26\lib\encodings\cp949.py to encodings\cp949.pyc
byte-compiling C:\Python26\lib\encodings\cp950.py to encodings\cp950.pyc
byte-compiling C:\Python26\lib\encodings\euc_jis_2004.py to encodings\euc_jis_20
04.pyc
byte-compiling C:\Python26\lib\encodings\euc_jisx0213.py to encodings\euc_jisx02
13.pyc
byte-compiling C:\Python26\lib\encodings\euc_jp.py to encodings\euc_jp.pyc
byte-compiling C:\Python26\lib\encodings\euc_kr.py to encodings\euc_kr.pyc
byte-compiling C:\Python26\lib\encodings\gb18030.py to encodings\gb18030.pyc
byte-compiling C:\Python26\lib\encodings\gb2312.py to encodings\gb2312.pyc
byte-compiling C:\Python26\lib\encodings\gbk.py to encodings\gbk.pyc
byte-compiling C:\Python26\lib\encodings\hex_codec.py to encodings\hex_codec.pyc
byte-compiling C:\Python26\lib\encodings\hp_roman8.py to encodings\hp_roman8.pyc
byte-compiling C:\Python26\lib\encodings\hz.py to encodings\hz.pyc
byte-compiling C:\Python26\lib\encodings\idna.py to encodings\idna.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp.py to encodings\iso2022_jp.p
yc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_1.py to encodings\iso2022_jp
_1.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2.py to encodings\iso2022_jp
_2.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_2004.py to encodings\iso2022
_jp_2004.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_3.py to encodings\iso2022_jp
_3.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_jp_ext.py to encodings\iso2022_
jp_ext.pyc
byte-compiling C:\Python26\lib\encodings\iso2022_kr.py to encodings\iso2022_kr.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_1.py to encodings\iso8859_1.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_10.py to encodings\iso8859_10.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_11.py to encodings\iso8859_11.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_13.py to encodings\iso8859_13.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_14.py to encodings\iso8859_14.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_15.py to encodings\iso8859_15.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_16.py to encodings\iso8859_16.p
yc
byte-compiling C:\Python26\lib\encodings\iso8859_2.py to encodings\iso8859_2.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_3.py to encodings\iso8859_3.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_4.py to encodings\iso8859_4.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_5.py to encodings\iso8859_5.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_6.py to encodings\iso8859_6.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_7.py to encodings\iso8859_7.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_8.py to encodings\iso8859_8.pyc
byte-compiling C:\Python26\lib\encodings\iso8859_9.py to encodings\iso8859_9.pyc
byte-compiling C:\Python26\lib\encodings\johab.py to encodings\johab.pyc
byte-compiling C:\Python26\lib\encodings\koi8_r.py to encodings\koi8_r.pyc
byte-compiling C:\Python26\lib\encodings\koi8_u.py to encodings\koi8_u.pyc
byte-compiling C:\Python26\lib\encodings\latin_1.py to encodings\latin_1.pyc
byte-compiling C:\Python26\lib\encodings\mac_arabic.py to encodings\mac_arabic.p
yc
byte-compiling C:\Python26\lib\encodings\mac_centeuro.py to encodings\mac_centeu
ro.pyc
byte-compiling C:\Python26\lib\encodings\mac_croatian.py to encodings\mac_croati
an.pyc
byte-compiling C:\Python26\lib\encodings\mac_cyrillic.py to encodings\mac_cyrill
ic.pyc
byte-compiling C:\Python26\lib\encodings\mac_farsi.py to encodings\mac_farsi.pyc
byte-compiling C:\Python26\lib\encodings\mac_greek.py to encodings\mac_greek.pyc
byte-compiling C:\Python26\lib\encodings\mac_iceland.py to encodings\mac_iceland.pyc
byte-compiling C:\Python26\lib\encodings\mac_latin2.py to encodings\mac_latin2.pyc
byte-compiling C:\Python26\lib\encodings\mac_roman.py to encodings\mac_roman.pyc
byte-compiling C:\Python26\lib\encodings\mac_romanian.py to encodings\mac_romanian.pyc
byte-compiling C:\Python26\lib\encodings\mac_turkish.py to encodings\mac_turkish.pyc
byte-compiling C:\Python26\lib\encodings\mbcs.py to encodings\mbcs.pyc
byte-compiling C:\Python26\lib\encodings\palmos.py to encodings\palmos.pyc
byte-compiling C:\Python26\lib\encodings\ptcp154.py to encodings\ptcp154.pyc
byte-compiling C:\Python26\lib\encodings\punycode.py to encodings\punycode.pyc
byte-compiling C:\Python26\lib\encodings\quopri_codec.py to encodings\quopri_codec.pyc
byte-compiling C:\Python26\lib\encodings\raw_unicode_escape.py to encodings\raw_unicode_escape.pyc
byte-compiling C:\Python26\lib\encodings\rot_13.py to encodings\rot_13.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis.py to encodings\shift_jis.pyc
byte-compiling C:\Python26\lib\encodings\shift_jis_2004.py to encodings\shift_jis_2004.pyc
byte-compiling C:\Python26\lib\encodings\shift_jisx0213.py to encodings\shift_jisx0213.pyc
byte-compiling C:\Python26\lib\encodings\string_escape.py to encodings\string_escape.pyc
byte-compiling C:\Python26\lib\encodings\tis_620.py to encodings\tis_620.pyc
byte-compiling C:\Python26\lib\encodings\undefined.py to encodings\undefined.pyc
byte-compiling C:\Python26\lib\encodings\unicode_escape.py to encodings\unicode_escape.pyc
byte-compiling C:\Python26\lib\encodings\unicode_internal.py to encodings\unicode_internal.pyc
byte-compiling C:\Python26\lib\encodings\utf_16.py to encodings\utf_16.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_be.py to encodings\utf_16_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_16_le.py to encodings\utf_16_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_32.py to encodings\utf_32.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_be.py to encodings\utf_32_be.pyc
byte-compiling C:\Python26\lib\encodings\utf_32_le.py to encodings\utf_32_le.pyc
byte-compiling C:\Python26\lib\encodings\utf_7.py to encodings\utf_7.pyc
byte-compiling C:\Python26\lib\encodings\utf_8.py to encodings\utf_8.pyc
byte-compiling C:\Python26\lib\encodings\utf_8_sig.py to encodings\utf_8_sig.pyc
byte-compiling C:\Python26\lib\encodings\uu_codec.py to encodings\uu_codec.pyc
byte-compiling C:\Python26\lib\encodings\zlib_codec.py to encodings\zlib_codec.pyc
byte-compiling C:\Python26\lib\functools.py to functools.pyc
byte-compiling C:\Python26\lib\genericpath.py to genericpath.pyc
byte-compiling C:\Python26\lib\getopt.py to getopt.pyc
byte-compiling C:\Python26\lib\gettext.py to gettext.pyc
byte-compiling C:\Python26\lib\heapq.py to heapq.pyc
byte-compiling C:\Python26\lib\inspect.py to inspect.pyc
byte-compiling C:\Python26\lib\keyword.py to keyword.pyc
byte-compiling C:\Python26\lib\linecache.py to linecache.pyc
byte-compiling C:\Python26\lib\locale.py to locale.pyc
byte-compiling C:\Python26\lib\ntpath.py to ntpath.pyc
byte-compiling C:\Python26\lib\numbers.py to numbers.pyc
byte-compiling C:\Python26\lib\opcode.py to opcode.pyc
byte-compiling C:\Python26\lib\optparse.py to optparse.pyc
byte-compiling C:\Python26\lib\os.py to os.pyc
byte-compiling C:\Python26\lib\os2emxpath.py to os2emxpath.pyc
byte-compiling C:\Python26\lib\pdb.py to pdb.pyc
byte-compiling C:\Python26\lib\pickle.py to pickle.pyc
byte-compiling C:\Python26\lib\posixpath.py to posixpath.pyc
byte-compiling C:\Python26\lib\pprint.py to pprint.pyc
byte-compiling C:\Python26\lib\quopri.py to quopri.pyc
byte-compiling C:\Python26\lib\random.py to random.pyc
byte-compiling C:\Python26\lib\re.py to re.pyc
byte-compiling C:\Python26\lib\repr.py to repr.pyc
byte-compiling C:\Python26\lib\shlex.py to shlex.pyc
byte-compiling C:\Python26\lib\site-packages\zipextimporter.py to zipextimporter.pyc
byte-compiling C:\Python26\lib\sre.py to sre.pyc
byte-compiling C:\Python26\lib\sre_compile.py to sre_compile.pyc
byte-compiling C:\Python26\lib\sre_constants.py to sre_constants.pyc
byte-compiling C:\Python26\lib\sre_parse.py to sre_parse.pyc
byte-compiling C:\Python26\lib\stat.py to stat.pyc
byte-compiling C:\Python26\lib\string.py to string.pyc
byte-compiling C:\Python26\lib\stringprep.py to stringprep.pyc
byte-compiling C:\Python26\lib\struct.py to struct.pyc
byte-compiling C:\Python26\lib\subprocess.py to subprocess.pyc
byte-compiling C:\Python26\lib\tempfile.py to tempfile.pyc
byte-compiling C:\Python26\lib\textwrap.py to textwrap.pyc
byte-compiling C:\Python26\lib\threading.py to threading.pyc
byte-compiling C:\Python26\lib\token.py to token.pyc
byte-compiling C:\Python26\lib\tokenize.py to tokenize.pyc
byte-compiling C:\Python26\lib\traceback.py to traceback.pyc
byte-compiling C:\Python26\lib\types.py to types.pyc
byte-compiling C:\Python26\lib\unittest.py to unittest.pyc
byte-compiling C:\Python26\lib\warnings.py to warnings.pyc
*** copy extensions ***
copying C:\Python26\DLLs\bz2.pyd -> C:\Python26\working\build\bdist.win32\winexe\collect-2.6
copying C:\Python26\DLLs\select.pyd -> C:\Python26\working\build\bdist.win32\winexe\collect-2.6
copying C:\Python26\DLLs\unicodedata.pyd -> C:\Python26\working\build\bdist.win32\winexe\collect-2.6
copying C:\Python26\lib\site-packages\cx_Oracle.pyd -> C:\Python26\working\build\bdist.win32\winexe\collect-2.6
*** copy dlls ***
copying C:\Oracle\XEClient\bin\OCI.dll -> C:\Python26\working\build\bdist.win32\winexe\collect-2.6
copying C:\Python26\lib\site-packages\py2exe\run.exe -> C:\Python26\working\dist\testora.exe
*** binary dependencies ***
Your executable(s) also depend on these dlls which are not included,
you may or may not need to distribute them.
Make sure you have the license if you distribute any of them, and
make sure you don't distribute files belonging to the operating system.
USER32.dll - C:\WINDOWS\system32\USER32.dll
SHELL32.dll - C:\WINDOWS\system32\SHELL32.dll
WSOCK32.dll - C:\WINDOWS\system32\WSOCK32.dll
ADVAPI32.dll - C:\WINDOWS\system32\ADVAPI32.dll
msvcrt.dll - C:\WINDOWS\system32\msvcrt.dll
KERNEL32.dll - C:\WINDOWS\system32\KERNEL32.dll
C:\Python26\working\dist>testora
Traceback (most recent call last):
File "testora.py", line 19, in <module>
File "testora.py", line 11, in testora
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
</code></pre>
| 5 | 2009-07-20T02:57:13Z | 1,151,591 | <p>Revised build_testora.py, for future reference:</p>
<pre><code>from distutils.core import setup
import py2exe, sys

sys.argv.append('py2exe')

setup(
    options = {'py2exe': {
        'bundle_files': 2,            # bundle everything except the Python DLL
        'compressed': True,
        # Exclude oci.dll so the exe loads it from the installed Oracle
        # client at runtime; bundling a copy is what caused the
        # "Unable to acquire Oracle environment handle" error above.
        'dll_excludes': ["oci.dll"]
    }},
    console = [{'script': "testora.py"}],
    zipfile = None
)
</code></pre>
| 2 | 2009-07-20T03:21:46Z | [
"python",
"oracle",
"py2exe",
"cx-oracle"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. Now, the Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys to other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course, it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 1,151,686 | <p>Hashables should be immutable -- not enforcing this but TRUSTING you not to mutate a dict after its first use as a key, the following approach would work:</p>
<pre><code>class hashabledict(dict):
def __key(self):
return tuple((k,self[k]) for k in sorted(self))
def __hash__(self):
return hash(self.__key())
def __eq__(self, other):
return self.__key() == other.__key()
</code></pre>
<p>If you DO need to mutate your dicts and STILL want to use them as keys, complexity explodes hundredfolds -- not to say it can't be done, but I'll wait until a VERY specific indication before I get into THAT incredible morass!-)</p>
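As a quick sanity check on the class above (hypothetical values, mirroring the cache layout from the question):

```python
class hashabledict(dict):
    def __key(self):
        return tuple((k, self[k]) for k in sorted(self))
    def __hash__(self):
        return hash(self.__key())
    def __eq__(self, other):
        return self.__key() == other.__key()

# Two separately built dicts with equal contents hash and compare
# equal, so either one retrieves the cached parse result.
rule1 = hashabledict(foo="bar")
rule2 = hashabledict(foo="bar")
cache = {(rule1, "baz"): "quux"}
result = cache[(rule2, "baz")]
print(result)  # quux
```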
| 39 | 2009-07-20T04:18:04Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. Now, the Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys to other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course, it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 1,151,705 | <p>Here is the easy way to make a hashable dictionary. Just remember not to mutate it after embedding it in another dictionary, for obvious reasons.</p>
<pre><code>class hashabledict(dict):
def __hash__(self):
return hash(tuple(sorted(self.items())))
</code></pre>
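A short illustration (with hypothetical values) of why mutating a key after it has been embedded breaks lookups: the entry was stored under the old hash, so a probe with the new hash finds nothing.

```python
class hashabledict(dict):
    def __hash__(self):
        return hash(tuple(sorted(self.items())))

d = {}
key = hashabledict(a=1)
d[key] = "cached"
assert d[key] == "cached"   # fine before mutation

key["a"] = 2                # mutating the key changes its hash
lookup_failed = False
try:
    d[key]                  # probes with the new hash; stored under the old one
except KeyError:
    lookup_failed = True
print(lookup_failed)  # True
```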
| 34 | 2009-07-20T04:30:24Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. Now, the Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys to other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course, it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 2,705,638 | <p>A reasonably clean, straightforward implementation is</p>
<pre><code>import collections
class FrozenDict(collections.Mapping):
"""Don't forget the docstrings!!"""
def __init__(self, *args, **kwargs):
self._d = dict(*args, **kwargs)
def __iter__(self):
return iter(self._d)
def __len__(self):
return len(self._d)
def __getitem__(self, key):
return self._d[key]
def __hash__(self):
return hash(tuple(sorted(self._d.iteritems())))
</code></pre>
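This answer is written for Python 2 (`collections.Mapping`, `iteritems`). A sketch of the same idea in Python 3 syntax might look like this (not part of the original answer; note that `collections.abc.Mapping` supplies `__eq__` for free):

```python
import collections.abc

class FrozenDict(collections.abc.Mapping):
    """Hashable, read-only mapping -- a Python 3 port of the class above."""
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)
    def __iter__(self):
        return iter(self._d)
    def __len__(self):
        return len(self._d)
    def __getitem__(self, key):
        return self._d[key]
    def __hash__(self):
        # frozenset is order-insensitive, so no sorting is needed
        return hash(frozenset(self._d.items()))

fd = FrozenDict(foo="bar")
cache = {(fd, "baz"): "quux"}
print(cache[(FrozenDict(foo="bar"), "baz")])  # quux
```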
| 8 | 2010-04-24T18:24:55Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. Now, the Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys to other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course, it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 3,161,538 | <p>You might also want to add these two methods to get the v2 pickling protocol working with hashdict instances. Otherwise, cPickle will try to use <code>hashdict.__setitem__</code>, resulting in a TypeError. Interestingly, with the other two protocol versions your code works just fine.</p>
<pre><code>def __setstate__(self, objstate):
for k,v in objstate.items():
dict.__setitem__(self,k,v)
def __reduce__(self):
return (hashdict, (), dict(self),)
</code></pre>
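For example (Python 3 shown here; the original answer refers to Python 2's cPickle), a protocol-2 round trip with a minimal hashdict carrying these two hooks. Without `__reduce__`, protocol 2 would restore the items through the blocked `__setitem__`:

```python
import pickle

class hashdict(dict):
    """Minimal sketch of the question's hashdict plus the two pickle hooks."""
    def __hash__(self):
        return hash(tuple(sorted(self.items())))
    def __setitem__(self, key, value):
        raise TypeError("hashdict does not support item assignment")
    def __setstate__(self, objstate):
        for k, v in objstate.items():
            dict.__setitem__(self, k, v)   # bypass the blocked __setitem__
    def __reduce__(self):
        return (hashdict, (), dict(self))

h = hashdict({"apples": 1, "bananas": 2})
h2 = pickle.loads(pickle.dumps(h, protocol=2))
print(h2 == h, hash(h2) == hash(h))  # True True
```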
| 2 | 2010-07-01T20:47:00Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. Now, the Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl'dr: How do I get <code>dict</code>s that can be used as keys to other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code> but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 6,014,481 | <p>The given answers are okay, but they could be improved by using <code>frozenset(...)</code> instead of <code>tuple(sorted(...))</code> to generate the hash:</p>
<pre><code>>>> import timeit
>>> timeit.timeit('hash(tuple(sorted(d.iteritems())))', "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')")
4.7758948802947998
>>> timeit.timeit('hash(frozenset(d.iteritems()))', "d = dict(a=3, b='4', c=2345, asdfsdkjfew=0.23424, x='sadfsadfadfsaf')")
1.8153600692749023
</code></pre>
<p>The performance advantage depends on the content of the dictionary, but in most cases I've tested, hashing with <code>frozenset</code> is at least 2 times faster (mainly because it does not need to sort).</p>
| 18 | 2011-05-16T07:50:01Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an Algol-like language (as opposed to the syntax-free Lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. The Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys in other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 10,130,656 | <p>If you don't put numbers in the dictionary and you never lose the variables containing your dictionaries, you can do this:</p>
<p><code>cache[id(rule)] = "whatever"</code></p>
<p>since id() is unique for every dictionary</p>
<p>EDIT:</p>
<p>Oh sorry, yeah in that case what the other guys said would be better. I think you could also serialize your dictionaries as a string, like </p>
<p><code>cache[ 'foo:bar' ] = 'baz'</code> </p>
<p>If you need to recover your dictionaries from the keys though, then you'd have to do something uglier like </p>
<p><code>cache[ 'foo:bar' ] = ( {'foo':'bar'}, 'baz' )</code></p>
<p>I guess the advantage of this is that you wouldn't have to write as much code.</p>
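<p>(A hedged sketch of the serialize-to-string idea above, using <code>json.dumps(..., sort_keys=True)</code> for a canonical form. This only works for JSON-serializable dicts with string keys; the helper name is illustrative.)</p>

```python
import json

def dict_key(d):
    # Canonical string form: sorting the keys makes insertion order
    # irrelevant, so equal dicts always yield the same key string.
    return json.dumps(d, sort_keys=True)

cache = {}
cache[(dict_key({"foo": "bar"}), "baz")] = "quux"
print(cache[(dict_key({"foo": "bar"}), "baz")])                  # quux
print(dict_key({"a": 1, "b": 2}) == dict_key({"b": 2, "a": 1}))  # True
```

As noted above, if you also need to recover the original dict from the key, you must store it alongside the value.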
| -2 | 2012-04-12T19:46:22Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an algol-like language (as opposed to the syntax-free lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. The Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys in other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 12,029,459 | <p>I keep coming back to this topic... Here's another variation. I'm uneasy with subclassing <code>dict</code> to add a <code>__hash__</code> method; there's virtually no escape from the problem that dicts are mutable, and trusting that they won't change seems like a weak idea. So I've instead looked at building a mapping based on a builtin type that is itself immutable. Although <code>tuple</code> is an obvious choice, accessing values in it implies a sort and a bisect; not a problem, but it doesn't seem to leverage much of the power of the type it's built on.</p>
<p>What if you jam key, value pairs into a <code>frozenset</code>? What would that require, how would it work?</p>
<p>Part 1: you need a way of encoding the 'items' in such a way that a frozenset will treat them mainly by their keys; I'll make a little subclass for that.</p>
<pre><code>import collections
class pair(collections.namedtuple('pair_base', 'key value')):
def __hash__(self):
return hash((self.key, None))
def __eq__(self, other):
if type(self) != type(other):
return NotImplemented
return self.key == other.key
def __repr__(self):
return repr((self.key, self.value))
</code></pre>
<p>That alone puts you in spitting distance of an immutable mapping:</p>
<pre><code>>>> frozenset(pair(k, v) for k, v in enumerate('abcd'))
frozenset([(0, 'a'), (2, 'c'), (1, 'b'), (3, 'd')])
>>> pairs = frozenset(pair(k, v) for k, v in enumerate('abcd'))
>>> pair(2, None) in pairs
True
>>> pair(5, None) in pairs
False
>>> goal = frozenset((pair(2, None),))
>>> pairs & goal
frozenset([(2, None)])
</code></pre>
<p><em>D'oh!</em> Unfortunately, when you use the set operators on elements that are equal but not the same object, which one ends up in the return value is <em>undefined</em>, so we'll have to go through some more gyrations.</p>
<pre><code>>>> pairs - (pairs - goal)
frozenset([(2, 'c')])
>>> iter(pairs - (pairs - goal)).next().value
'c'
</code></pre>
<p>However, looking values up in this way is cumbersome, and worse, creates lots of intermediate sets; that won't do! We'll create a 'fake' key-value pair to get around it:</p>
<pre><code>class Thief(object):
def __init__(self, key):
self.key = key
def __hash__(self):
return hash(pair(self.key, None))
def __eq__(self, other):
self.value = other.value
return pair(self.key, None) == other
</code></pre>
<p>Which results in the less problematic:</p>
<pre><code>>>> thief = Thief(2)
>>> thief in pairs
True
>>> thief.value
'c'
</code></pre>
<p>That's all the deep magic; the rest is wrapping it all up into something that has an <em>interface</em> like a dict. Since we're subclassing from <code>frozenset</code>, which has a very different interface, there's quite a lot of methods; we get a little help from <code>collections.Mapping</code>, but most of the work is overriding the <code>frozenset</code> methods for versions that work like dicts, instead:</p>
<pre><code>class FrozenDict(frozenset, collections.Mapping):
def __new__(cls, seq=()):
return frozenset.__new__(cls, (pair(k, v) for k, v in seq))
def __getitem__(self, key):
thief = Thief(key)
if frozenset.__contains__(self, thief):
return thief.value
raise KeyError(key)
def __eq__(self, other):
if not isinstance(other, FrozenDict):
return dict(self.iteritems()) == other
if len(self) != len(other):
return False
for key, value in self.iteritems():
try:
if value != other[key]:
return False
except KeyError:
return False
return True
def __hash__(self):
return hash(frozenset(self.iteritems()))
def get(self, key, default=None):
thief = Thief(key)
if frozenset.__contains__(self, thief):
return thief.value
return default
def __iter__(self):
for item in frozenset.__iter__(self):
yield item.key
def iteritems(self):
for item in frozenset.__iter__(self):
yield (item.key, item.value)
def iterkeys(self):
for item in frozenset.__iter__(self):
yield item.key
def itervalues(self):
for item in frozenset.__iter__(self):
yield item.value
def __contains__(self, key):
return frozenset.__contains__(self, pair(key, None))
has_key = __contains__
def __repr__(self):
return type(self).__name__ + (', '.join(repr(item) for item in self.iteritems())).join('()')
@classmethod
def fromkeys(cls, keys, value=None):
return cls((key, value) for key in keys)
</code></pre>
<p>which, ultimately, does answer my own question:</p>
<pre><code>>>> myDict = {}
>>> myDict[FrozenDict(enumerate('ab'))] = 5
>>> FrozenDict(enumerate('ab')) in myDict
True
>>> FrozenDict(enumerate('bc')) in myDict
False
>>> FrozenDict(enumerate('ab', 3)) in myDict
False
>>> myDict[FrozenDict(enumerate('ab'))]
5
</code></pre>
| 3 | 2012-08-19T19:47:43Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an algol-like language (as opposed to the syntax-free lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. The Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys in other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 16,162,138 | <p>All that is needed to make dictionaries usable for your purpose is to add a __hash__ method:</p>
<pre><code>class Hashabledict(dict):
def __hash__(self):
return hash(frozenset(self))
</code></pre>
<p>Note, the <em>frozenset</em> conversion will work for all dictionaries (i.e. it doesn't require the keys to be sortable). Likewise, there is no restriction on the dictionary values.</p>
<p>If there are many dictionaries with identical keys but with distinct values, it is necessary to have the hash take the values into account. The fastest way to do that is:</p>
<pre><code>class Hashabledict(dict):
def __hash__(self):
return hash((frozenset(self), frozenset(self.itervalues())))
</code></pre>
<p>This is quicker than <code>frozenset(self.iteritems())</code> for two reasons. First, the <code>frozenset(self)</code> step reuses the hash values stored in the dictionary, saving unnecessary calls to <code>hash(key)</code>. Second, using <em>itervalues</em> accesses the values directly and avoids the many memory allocator calls used by <em>items</em> to form many new key/value tuples in memory every time you do a lookup.</p>
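<p>(A minimal Python 3 adaptation of the value-aware class above — <code>itervalues()</code> became <code>values()</code>; values must still be hashable for this variant.)</p>

```python
class Hashabledict(dict):
    def __hash__(self):
        # keys plus values, so same-keyed dicts with different values
        # get different hashes; Python 3 spelling of the idea above
        return hash((frozenset(self), frozenset(self.values())))

d1 = Hashabledict(a=1, b=2)
d2 = Hashabledict(b=2, a=1)    # equal dict, different insertion order
lookup = {d1: "found"}
print(lookup[d2])              # found
```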
| 16 | 2013-04-23T06:07:31Z | [
"python"
] |
Python hashable dicts | 1,151,658 | <p>As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an algol-like language (as opposed to the syntax-free lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (<em>EDIT</em>: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine)</p>
<p>The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyways) doesn't help.</p>
<pre><code>>>> cache = {}
>>> rule = {"foo":"bar"}
>>> cache[(rule, "baz")] = "quux"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'dict'
>>>
</code></pre>
<p>I guess it has to be tuples all the way down. The Python standard library provides approximately what I'd need: <code>collections.namedtuple</code> has a very different syntax, but <em>can</em> be used as a key. Continuing from the session above:</p>
<pre><code>>>> from collections import namedtuple
>>> Rule = namedtuple("Rule",rule.keys())
>>> cache[(Rule(**rule), "baz")] = "quux"
>>> cache
{(Rule(foo='bar'), 'baz'): 'quux'}
</code></pre>
<p>Ok. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. </p>
<p>Edit: An additional problem with <code>namedtuple</code>s is that they are strictly positional. Two tuples that look like they should be different can in fact be the same: </p>
<pre><code>>>> you = namedtuple("foo",["bar","baz"])
>>> me = namedtuple("foo",["bar","quux"])
>>> you(bar=1,baz=2) == me(bar=1,quux=2)
True
>>> bob = namedtuple("foo",["baz","bar"])
>>> you(bar=1,baz=2) == bob(bar=1,baz=2)
False
</code></pre>
<p>tl;dr: How do I get <code>dict</code>s that can be used as keys in other <code>dict</code>s?</p>
<p>Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling <code>dict.__setitem__(instance, key, value)</code>, but we're all adults here.</p>
<pre><code>class hashdict(dict):
"""
hashable dict implementation, suitable for use as a key into
other dicts.
>>> h1 = hashdict({"apples": 1, "bananas":2})
>>> h2 = hashdict({"bananas": 3, "mangoes": 5})
>>> h1+h2
hashdict(apples=1, bananas=3, mangoes=5)
>>> d1 = {}
>>> d1[h1] = "salad"
>>> d1[h1]
'salad'
>>> d1[h2]
Traceback (most recent call last):
...
KeyError: hashdict(bananas=3, mangoes=5)
based on answers from
http://stackoverflow.com/questions/1151658/python-hashable-dicts
"""
def __key(self):
return tuple(sorted(self.items()))
def __repr__(self):
return "{0}({1})".format(self.__class__.__name__,
", ".join("{0}={1}".format(
str(i[0]),repr(i[1])) for i in self.__key()))
def __hash__(self):
return hash(self.__key())
def __setitem__(self, key, value):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def __delitem__(self, key):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def clear(self):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def pop(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def popitem(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def setdefault(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
def update(self, *args, **kwargs):
raise TypeError("{0} does not support item assignment"
.format(self.__class__.__name__))
# update is not ok because it mutates the object
# __add__ is ok because it creates a new object
# while the new object is under construction, it's ok to mutate it
def __add__(self, right):
result = hashdict(self)
dict.update(result, right)
return result
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
| 47 | 2009-07-20T04:04:30Z | 29,672,632 | <p>The accepted answer by @Unknown, as well as the answer by @AlexMartelli work perfectly fine, but only under the following constraints:</p>
<ol>
<li>The dictionary's values must be hashable. For example, <code>hash(hashabledict({'a':[1,2]}))</code> will raise <code>TypeError</code>.</li>
<li>Keys must support comparison operations. For example, <code>hash(hashabledict({'a':'a', 1:1}))</code> will raise <code>TypeError</code>.</li>
<li>The comparison operator on keys must impose a total ordering. For example, if the two keys in a dictionary are <code>frozenset((1,2,3))</code> and <code>frozenset((4,5,6))</code>, they compare unequal in both directions. Therefore, sorting the items of a dictionary with such keys can result in an arbitrary order, which violates the rule that equal objects must have the same hash value.</li>
</ol>
<p>The much faster answer by @ObenSonne lifts the constraints 2 and 3, but is still bound by constraint 1 (values must be hashable). </p>
<p>The still faster answer by @RaymondHettinger lifts all 3 constraints because it does not include <code>.values()</code> in the hash calculation. However, its performance is good only if:</p>
<ol start="4">
<li>Most of the (non-equal) dictionaries that need to be hashed do not have identical <code>.keys()</code>.</li>
</ol>
<p>If this condition isn't satisfied, the hash function will still be valid, but may cause too many collisions. For example, in the extreme case where all the dictionaries are generated from a website template (field names as keys, user input as values), the keys will always be the same, and the hash function will return the same value for all the inputs. As a result, a hashtable that relies on such a hash function will become as slow as a list when retrieving an item (<code>O(N)</code> instead of <code>O(1)</code>).</p>
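<p>(The collision scenario can be illustrated directly: with a keys-only hash, dictionaries sharing a key set always collide, even though they compare unequal. The helper name is illustrative.)</p>

```python
def keys_only_hash(d):
    # Hashes only the key set, as in the keys-only approach discussed above
    return hash(frozenset(d))

# Template-style data: identical field names, different user input
form_a = {"name": "alice", "age": 30}
form_b = {"name": "bob", "age": 25}

# Different contents, identical keys -> identical hash (a collision):
print(keys_only_hash(form_a) == keys_only_hash(form_b))  # True
print(form_a == form_b)                                  # False
```

The hash is still valid (equal dicts get equal hashes); it is only the collision rate that degrades when key sets repeat.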
<p>I think the following solution will work reasonably well even if all 4 constraints I listed above are violated. It has an additional advantage that it can hash not only dictionaries, but any containers, even if they have nested mutable containers.</p>
<p>I'd much appreciate any feedback on this, since I only tested this lightly so far.</p>
<pre><code># python 3.4
import collections
import operator
import sys
import itertools
import reprlib
# a wrapper to make an object hashable, while preserving equality
class AutoHash:
# for each known container type, we can optionally provide a tuple
# specifying: type, transform, aggregator
# even immutable types need to be included, since their items
# may make them unhashable
# transformation may be used to enforce the desired iteration
# the result of a transformation must be an iterable
# default: no change; for dictionaries, we use .items() to see values
# usually transformation choice only affects efficiency, not correctness
# aggregator is the function that combines all items into one object
# default: frozenset; for ordered containers, we can use tuple
# aggregator choice affects both efficiency and correctness
# e.g., using a tuple aggregator for a set is incorrect,
# since identical sets may end up with different hash values
# frozenset is safe since at worst it just causes more collisions
# unfortunately, no collections.ABC class is available that helps
# distinguish ordered from unordered containers
# so we need to just list them out manually as needed
type_info = collections.namedtuple(
'type_info',
'type transformation aggregator')
ident = lambda x: x
# order matters; first match is used to handle a datatype
known_types = (
# dict also handles defaultdict
type_info(dict, lambda d: d.items(), frozenset),
# no need to include set and frozenset, since they are fine with defaults
type_info(collections.OrderedDict, ident, tuple),
type_info(list, ident, tuple),
type_info(tuple, ident, tuple),
type_info(collections.deque, ident, tuple),
type_info(collections.Iterable, ident, frozenset) # other iterables
)
# hash_func can be set to replace the built-in hash function
# cache can be turned on; if it is, cycles will be detected,
# otherwise cycles in a data structure will cause failure
def __init__(self, data, hash_func=hash, cache=False, verbose=False):
self._data=data
self.hash_func=hash_func
self.verbose=verbose
self.cache=cache
# cache objects' hashes for performance and to deal with cycles
if self.cache:
self.seen={}
def hash_ex(self, o):
# note: isinstance(o, Hashable) won't check inner types
try:
if self.verbose:
print(type(o),
reprlib.repr(o),
self.hash_func(o),
file=sys.stderr)
return self.hash_func(o)
except TypeError:
pass
# we let built-in hash decide if the hash value is worth caching
# so we don't cache the built-in hash results
if self.cache and id(o) in self.seen:
return self.seen[id(o)][0] # found in cache
# check if o can be handled by decomposing it into components
for typ, transformation, aggregator in AutoHash.known_types:
if isinstance(o, typ):
# another option is:
# result = reduce(operator.xor, map(_hash_ex, handler(o)))
# but collisions are more likely with xor than with frozenset
# e.g. hash_ex([1,2,3,4])==0 with xor
try:
# try to frozenset the actual components, it's faster
h = self.hash_func(aggregator(transformation(o)))
except TypeError:
# components not hashable with built-in;
# apply our extended hash function to them
h = self.hash_func(aggregator(map(self.hash_ex, transformation(o))))
if self.cache:
# storing the object too, otherwise memory location will be reused
self.seen[id(o)] = (h, o)
if self.verbose:
print(type(o), reprlib.repr(o), h, file=sys.stderr)
return h
raise TypeError('Object {} of type {} not hashable'.format(repr(o), type(o)))
def __hash__(self):
return self.hash_ex(self._data)
def __eq__(self, other):
# short circuit to save time
if self is other:
return True
# 1) type(self) a proper subclass of type(other) => self.__eq__ will be called first
# 2) any other situation => lhs.__eq__ will be called first
# case 1. one side is a subclass of the other, and AutoHash.__eq__ is not overridden in either
# => the subclass instance's __eq__ is called first, and we should compare self._data and other._data
# case 2. neither side is a subclass of the other; self is lhs
# => we can't compare to another type; we should let the other side decide what to do, return NotImplemented
# case 3. neither side is a subclass of the other; self is rhs
# => we can't compare to another type, and the other side already tried and failed;
# we should return False, but NotImplemented will have the same effect
# any other case: we won't reach the __eq__ code in this class, no need to worry about it
if isinstance(self, type(other)): # identifies case 1
return self._data == other._data
else: # identifies cases 2 and 3
return NotImplemented
d1 = {'a':[1,2], 2:{3:4}}
print(hash(AutoHash(d1, cache=True, verbose=True)))
d = AutoHash(dict(a=1, b=2, c=3, d=[4,5,6,7], e='a string of chars'),cache=True, verbose=True)
print(hash(d))
</code></pre>
| 2 | 2015-04-16T10:57:14Z | [
"python"
] |
tkMessageBox | 1,151,770 | <p>Can anybody help me out in how to activate 'close' button of askquestion() of tkMessageBox??</p>
| 1 | 2009-07-20T05:00:42Z | 1,151,900 | <p>By 'activate', do you mean make it so the user can close the message box by clicking the close ('X') button?</p>
<p>I do not think it is possible using <code>tkMessageBox</code>. I guess your best bet is to implement a dialog box with this functionality yourself.</p>
<p>BTW: What should <code>askquestion()</code> return when the user closes the dialog box?</p>
| 0 | 2009-07-20T05:54:48Z | [
"python",
"tkmessagebox"
] |
How can I perform a ping or traceroute using native python? | 1,151,771 | <p>I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands so I'd prefer a native python solution.</p>
| 4 | 2009-07-20T05:01:59Z | 1,151,782 | <p>Running interpreters as root is often frowned upon on security grounds (and of course you DO need root permission to access the "raw" socket required by the ICMP specs for ping and traceroute!), but if you have no problem with that it's not hard -- e.g., <a href="http://mail.python.org/pipermail/python-list/2005-December/355365.html" rel="nofollow">this post</a> gives a workable ping, and Jeremy Hylton's old <a href="http://www.python.org/~jeremy/python.html" rel="nofollow">page</a> has still-usable underlying code for ICMP (both ping and traceroute), though it's written for very old Python versions and needs a little facelift to shine with modern ones -- but the concepts ARE all there, in both the URLs I gave you!</p>
| 4 | 2009-07-20T05:06:49Z | [
"python",
"ping",
"traceroute"
] |
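Both implementations the answer above points to come down to building raw ICMP packets by hand, and the fiddly piece that trips people up is the Internet checksum (RFC 1071). That piece can be written and verified without any privileges -- only *sending* the packet needs a root-owned raw socket. A sketch of the checksum alone (not taken from either linked implementation):

```python
import struct

def internet_checksum(data):
    """RFC 1071 one's-complement checksum over a byte string."""
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b'\x00'
    total = 0
    for i in range(0, len(data), 2):  # sum successive 16-bit big-endian words
        total += (data[i] << 8) + data[i + 1]
    total = (total >> 16) + (total & 0xFFFF)   # fold the carries back in
    total += total >> 16
    return (~total) & 0xFFFF

# ICMP Echo Request header: type 8, code 0, checksum placeholder 0,
# identifier 1, sequence number 1.
header = struct.pack('!BBHHH', 8, 0, 0, 1, 1)
print(hex(internet_checksum(header)))  # 0xf7fd
```

A handy self-check: recomputing the checksum over a packet that already carries its correct checksum yields 0.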
How can I perform a ping or traceroute using native python? | 1,151,771 | <p>I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands, so I'd prefer a native Python solution.</p>
| 4 | 2009-07-20T05:01:59Z | 1,152,165 | <p>You might want to check out the <a href="http://www.secdev.org/projects/scapy/" rel="nofollow">scapy</a> package. It's the Swiss Army knife of network tools for Python.</p>
| 0 | 2009-07-20T07:42:16Z | [
"python",
"ping",
"traceroute"
] |
How can I perform a ping or traceroute using native python? | 1,151,771 | <p>I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands, so I'd prefer a native Python solution.</p>
| 4 | 2009-07-20T05:01:59Z | 2,974,474 | <p>Ping (ICMP Echo Request/Echo Reply) is a standard part of the ICMP protocol.</p>
<p>Traceroute uses features of ICMP and IP to determine a path via Time To Live (TTL) values. Using TTL values, you can do traceroutes over a variety of protocols as long as IP/ICMP work, because it is the ICMP Time Exceeded messages that tell you about each hop in the path.</p>
<p>If you send a datagram to a port where no listener is available, by ICMP protocol rules the host is supposed to send back an ICMP Port Unreachable message; that is how the classic UDP traceroute knows when it has reached the destination.</p>
| 0 | 2010-06-04T13:16:41Z | [
"python",
"ping",
"traceroute"
] |
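The TTL mechanism described in the answer above is the one part you can exercise from ordinary Python without root: any plain UDP socket lets you cap the hop count via setsockopt. A minimal sketch (reading the resulting ICMP Time Exceeded replies is the part that still needs a privileged raw socket):

```python
import socket

# An ordinary (non-raw) UDP socket -- no root required.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Cap outgoing datagrams at 3 router hops; the third hop should drop
# the packet and, per the rules above, send back ICMP Time Exceeded.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 3)

# Verify the option took effect.
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))  # 3
s.close()
```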
How can I perform a ping or traceroute using native python? | 1,151,771 | <p>I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands, so I'd prefer a native Python solution.</p>
| 4 | 2009-07-20T05:01:59Z | 7,018,928 | <p>If you don't mind using an external module and not using UDP or TCP, <a href="http://www.secdev.org/projects/scapy/" rel="nofollow">scapy</a> is an easy solution:</p>
<pre><code>from scapy.all import *
target = ["192.168.1.254"]
result, unans = traceroute(target,l4=UDP(sport=RandShort())/DNS(qd=DNSQR(qname="www.google.com")))
</code></pre>
<p>Or you can use the tcp version</p>
<pre><code>from scapy.all import *
target = ["192.168.1.254"]
result, unans = traceroute(target,maxttl=32)
</code></pre>
<p>Please note you will have to run scapy as root in order to be able to perform these tasks or you will get:</p>
<pre><code>socket.error: [Errno 1] Operation not permitted
</code></pre>
| 7 | 2011-08-10T22:57:19Z | [
"python",
"ping",
"traceroute"
] |
How can I perform a ping or traceroute using native python? | 1,151,771 | <p>I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands, so I'd prefer a native Python solution.</p>
| 4 | 2009-07-20T05:01:59Z | 15,654,552 | <p>I wrote a simple tcptraceroute in Python which does not need root privileges: <a href="http://www.thomas-guettler.de/scripts/tcptraceroute.py.txt" rel="nofollow">http://www.thomas-guettler.de/scripts/tcptraceroute.py.txt</a></p>
<p>It can't display the IP addresses of the intermediate hops, but it is still sometimes useful, since you can guess where the blocking firewall is: either at the beginning or at the end of the route.</p>
| 0 | 2013-03-27T08:33:47Z | [
"python",
"ping",
"traceroute"
] |
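The trick behind such a no-root tcptraceroute also gives you a no-root TCP "ping": timing an ordinary TCP handshake needs no special privileges at all. A rough sketch (the host and port in the comment are placeholders, and this measures handshake time, not ICMP round-trip time):

```python
import socket
import time

def tcp_ping(host, port, timeout=2.0):
    """Return the TCP handshake time in seconds, or None if it failed."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    start = time.monotonic()
    try:
        s.connect((host, port))
        return time.monotonic() - start
    except OSError:          # refused, timed out, unreachable, ...
        return None
    finally:
        s.close()

# Example (hypothetical target):
#   rtt = tcp_ping('www.example.com', 80)
```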
How can I perform a ping or traceroute using native python? | 1,151,771 | <p>I would like to be able to perform a ping and traceroute from within Python without having to execute the corresponding shell commands, so I'd prefer a native Python solution.</p>
| 4 | 2009-07-20T05:01:59Z | 31,644,021 | <p>The <a href="https://github.com/hardikvasa/webb" rel="nofollow">Webb Library</a> is very handy for performing all kinds of web-related tasks, and ping and traceroute can be done easily through it. Just include the URL you want to traceroute to:</p>
<pre><code>import webb
webb.traceroute("your-web-page-url")
</code></pre>
<p>If you wish to store the traceroute log to a text file automatically, use the following command:</p>
<pre><code>webb.traceroute("your-web-page-url",'file-name.txt')
</code></pre>
<p>Similarly, the IP address of a URL (server) can be obtained with the following lines of code:</p>
<pre><code>print(webb.get_ip("your-web-page-url"))
</code></pre>
<p>Hope it helps!</p>
| 3 | 2015-07-27T02:15:35Z | [
"python",
"ping",
"traceroute"
] |
Where and how is django Model objects attribute defined? | 1,151,879 | <p>I'm trying to get my head around the Django ORM. I've been reading the django.db.models.base.py source code but still couldn't understand how the Model.objects attribute in our Model class gets defined. Does anybody know how Django adds that objects attribute to our Model class?</p>
<p>Thanks in advance</p>
| 1 | 2009-07-20T05:46:19Z | 1,151,977 | <p>The Django ORM makes heavy use of Python metaclasses. From <a href="http://en.wikipedia.org/wiki/Metaclass" rel="nofollow">Wikipedia</a>:</p>
<blockquote>
<p>In object-oriented programming, a metaclass is a class whose instances are classes. Just as an ordinary class defines the behavior of certain objects, a metaclass defines the behavior of certain classes and their instances.</p>
</blockquote>
<p>Here's a blog post that describes how metaclasses are used in the Django ORM: <a href="http://lazypython.blogspot.com/2008/11/how-heck-do-django-models-work.html" rel="nofollow">How the Heck do Django Models Work</a></p>
| 3 | 2009-07-20T06:35:28Z | [
"python",
"django"
] |
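The mechanics can be seen in miniature without Django: a metaclass runs at the moment the class statement executes, which is exactly when Django's ModelBase gets its chance to attach the default manager. A toy sketch (not Django's actual code; the string attribute below merely stands in for a Manager instance):

```python
class ManagerAttacher(type):
    """Toy metaclass: runs at class-creation time, like Django's ModelBase."""
    def __new__(mcs, name, bases, attrs):
        cls = super(ManagerAttacher, mcs).__new__(mcs, name, bases, attrs)
        # Attach a default 'objects' attribute to every class created
        # through this metaclass -- roughly how each model gets a Manager.
        cls.objects = 'default manager for %s' % name
        return cls

# Calling the metaclass directly works on any Python version; the 2.x
# spelling inside a class body would be `__metaclass__ = ManagerAttacher`.
Article = ManagerAttacher('Article', (object,), {})
print(Article.objects)  # default manager for Article
```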
How can I perform a ping or traceroute in python, accessing the output as it is produced? | 1,151,897 | <p>Earlier, I asked this question:</p>
<p><a href="http://stackoverflow.com/questions/1151771/how-can-i-perform-a-ping-or-traceroute-using-native-python">How can I perform a ping or traceroute using native python?</a></p>
<p>However, because Python is not running as root it doesn't have the ability to open the raw ICMP sockets needed to perform the ping/traceroute in native Python.</p>
<p>This brings me back to using the system's ping/traceroute shell commands. This question has a couple of examples using the <code>subprocess</code> module which seem to work well:</p>
<p><a href="http://stackoverflow.com/questions/316866/ping-a-site-in-python">Ping a site in Python?</a></p>
<p>I still have one more requirement though: I need to be able to access the output as it is produced (e.g. for a long-running traceroute).</p>
<p>The examples above all run the shell command and then only give you access to the complete output once the command has completed. Is there a way to access the command output as it is produced?</p>
<p><strong>Edit:</strong> Based on Alex Martelli's answer, here's what worked:</p>
<pre><code>import pexpect
child = pexpect.spawn('ping -c 5 www.google.com')
while 1:
line = child.readline()
if not line: break
print line,
</code></pre>
| 1 | 2009-07-20T05:53:54Z | 1,151,929 | <p><a href="http://pexpect.sourceforge.net/pexpect.html" rel="nofollow">pexpect</a> is what I'd reach for, "by default", for any requirement such as yours -- there are other similar modules, but pexpect is almost invariably the richest, most stable, and most mature one. The one case where I'd bother looking for alternatives would be if I had to run correctly under Windows too (where ping and traceroute may have their own problems anyway) -- let us know if that's the case for you, and we'll see what can be arranged!-)</p>
| 5 | 2009-07-20T06:05:33Z | [
"python",
"ping",
"traceroute"
] |
How can I perform a ping or traceroute in python, accessing the output as it is produced? | 1,151,897 | <p>Earlier, I asked this question:</p>
<p><a href="http://stackoverflow.com/questions/1151771/how-can-i-perform-a-ping-or-traceroute-using-native-python">How can I perform a ping or traceroute using native python?</a></p>
<p>However, because Python is not running as root it doesn't have the ability to open the raw ICMP sockets needed to perform the ping/traceroute in native Python.</p>
<p>This brings me back to using the system's ping/traceroute shell commands. This question has a couple of examples using the <code>subprocess</code> module which seem to work well:</p>
<p><a href="http://stackoverflow.com/questions/316866/ping-a-site-in-python">Ping a site in Python?</a></p>
<p>I still have one more requirement though: I need to be able to access the output as it is produced (e.g. for a long-running traceroute).</p>
<p>The examples above all run the shell command and then only give you access to the complete output once the command has completed. Is there a way to access the command output as it is produced?</p>
<p><strong>Edit:</strong> Based on Alex Martelli's answer, here's what worked:</p>
<pre><code>import pexpect
child = pexpect.spawn('ping -c 5 www.google.com')
while 1:
line = child.readline()
if not line: break
print line,
</code></pre>
| 1 | 2009-07-20T05:53:54Z | 1,151,938 | <p>You should read the documentation for the <a href="http://docs.python.org/library/subprocess.html" rel="nofollow">subprocess</a> module; it describes how to run an external process and access its output in real time.</p>
<p>Basically, you do</p>
<pre><code>from subprocess import Popen, PIPE
p = Popen(['tracert', host], stdout=PIPE)
while True:
line = p.stdout.readline()
if not line:
break
# Do stuff with line
</code></pre>
<p>Actually, the answers in the SO question you linked to are very close to what you need. <a href="http://stackoverflow.com/questions/316866/ping-a-site-in-python/318142#318142">Corey Goldberg's answer</a> uses a pipe and <code>readline</code>, but since it runs ping with <code>-n 1</code> it doesn't last long enough to make a difference.</p>
| 3 | 2009-07-20T06:08:39Z | [
"python",
"ping",
"traceroute"
] |
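The readline loop above can also be written as direct iteration over the pipe, which is a convenient way to collect lines as they arrive. A self-contained sketch (a tiny Python child stands in for ping/traceroute so the example runs anywhere):

```python
import subprocess
import sys

# Stand-in child that emits two lines; in real use this would be
# something like ['ping', '-c', '5', 'www.google.com'].
cmd = [sys.executable, '-c', "print('line one'); print('line two')"]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        universal_newlines=True, bufsize=1)
lines = []
for line in proc.stdout:          # yields each line as the child emits it
    lines.append(line.rstrip('\n'))
proc.wait()
print(lines)  # ['line one', 'line two']
```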
How can I perform a ping or traceroute in python, accessing the output as it is produced? | 1,151,897 | <p>Earlier, I asked this question:</p>
<p><a href="http://stackoverflow.com/questions/1151771/how-can-i-perform-a-ping-or-traceroute-using-native-python">How can I perform a ping or traceroute using native python?</a></p>
<p>However, because Python is not running as root it doesn't have the ability to open the raw ICMP sockets needed to perform the ping/traceroute in native Python.</p>
<p>This brings me back to using the system's ping/traceroute shell commands. This question has a couple of examples using the <code>subprocess</code> module which seem to work well:</p>
<p><a href="http://stackoverflow.com/questions/316866/ping-a-site-in-python">Ping a site in Python?</a></p>
<p>I still have one more requirement though: I need to be able to access the output as it is produced (e.g. for a long-running traceroute).</p>
<p>The examples above all run the shell command and then only give you access to the complete output once the command has completed. Is there a way to access the command output as it is produced?</p>
<p><strong>Edit:</strong> Based on Alex Martelli's answer, here's what worked:</p>
<pre><code>import pexpect
child = pexpect.spawn('ping -c 5 www.google.com')
while 1:
line = child.readline()
if not line: break
print line,
</code></pre>
| 1 | 2009-07-20T05:53:54Z | 1,151,973 | <p>You can create a tty pair for the subprocess and run inside of that. According to the C standard (C99 7.19.3) the only time stdout is line buffered (as opposed to fully buffered which is what you say you don't want) is when it's a terminal. (or the child called setvbuf() obviously).</p>
<p>Check out os.openpty().</p>
<p>Untested code:</p>
<pre><code>import os

master, slave = os.openpty()
pid = os.fork()
if pid == 0:
    # Child: hook stdin/stdout/stderr up to the pty slave so the
    # program sees a terminal and line-buffers its output.
    os.close(master)
    os.dup2(slave, 0)
    os.dup2(slave, 1)
    os.dup2(slave, 2)
    os.execv("/usr/sbin/traceroute", ("traceroute", "4.2.2.1"))
    # FIXME: log error somewhere
    os._exit(1)  # only reached if execv fails; os.exit() does not exist
os.close(slave)
while True:
    try:
        d = os.read(master, 1024)  # os.read() requires a byte count
    except OSError:  # Linux ptys raise EIO once the child exits
        break
    if len(d) == 0:
        break
    print d
os.waitpid(pid, 0)
</code></pre>
<p>Note that having the child process (just after fork()) call setvbuf() will <strong>not</strong> work, since setvbuf() is a libc function and not a syscall. It just changes the state of the current process output, which will be overwritten on the exec call when the new binary is loaded.</p>
| 1 | 2009-07-20T06:32:52Z | [
"python",
"ping",
"traceroute"
] |
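The same pty idea can be written with the standard pty module plus subprocess, which keeps the fork/exec plumbing out of your hands. A sketch ('echo' stands in for ping/traceroute so it finishes instantly):

```python
import os
import pty
import subprocess

master, slave = pty.openpty()
# The child gets the pty slave as stdout, so it believes it is writing
# to a terminal and therefore line-buffers its output.
proc = subprocess.Popen(['echo', 'hello from a pty'],
                        stdout=slave, stderr=slave, close_fds=True)
os.close(slave)                    # parent keeps only the master end

chunks = []
while True:
    try:
        data = os.read(master, 1024)
    except OSError:                # Linux ptys raise EIO at end-of-file
        break
    if not data:
        break
    chunks.append(data)
proc.wait()
os.close(master)
print(b''.join(chunks))            # note the pty's \r\n line endings
```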
How can I perform a ping or traceroute in python, accessing the output as it is produced? | 1,151,897 | <p>Earlier, I asked this question:</p>
<p><a href="http://stackoverflow.com/questions/1151771/how-can-i-perform-a-ping-or-traceroute-using-native-python">How can I perform a ping or traceroute using native python?</a></p>
<p>However, because Python is not running as root it doesn't have the ability to open the raw ICMP sockets needed to perform the ping/traceroute in native Python.</p>
<p>This brings me back to using the system's ping/traceroute shell commands. This question has a couple of examples using the <code>subprocess</code> module which seem to work well:</p>
<p><a href="http://stackoverflow.com/questions/316866/ping-a-site-in-python">Ping a site in Python?</a></p>
<p>I still have one more requirement though: I need to be able to access the output as it is produced (e.g. for a long-running traceroute).</p>
<p>The examples above all run the shell command and then only give you access to the complete output once the command has completed. Is there a way to access the command output as it is produced?</p>
<p><strong>Edit:</strong> Based on Alex Martelli's answer, here's what worked:</p>
<pre><code>import pexpect
child = pexpect.spawn('ping -c 5 www.google.com')
while 1:
line = child.readline()
if not line: break
print line,
</code></pre>
| 1 | 2009-07-20T05:53:54Z | 19,930,626 | <p>Here is another approach:</p>
<pre><code># const_output.py
import sys
from subprocess import Popen

if len(sys.argv) < 2:
    print 'Usage: const_output.py "command to watch"'
    sys.exit(1)

cmd_line = sys.argv[1:]
# No stdout=PIPE here: the child inherits this terminal, so its output
# appears immediately, line by line, as it is produced.
p = Popen(cmd_line)
p.communicate()  # just waits for the child to finish
</code></pre>
<p>Example usage:</p>
<p>traceroute:</p>
<pre><code>> python const_output.py traceroute 10.0.0.38
traceroute to 10.0.0.38 (10.0.0.38), 30 hops max, 60 byte packets
1 10.0.0.38 (10.0.0.38) 0.106 ms 0.023 ms 0.021 ms
</code></pre>
<p>ping:</p>
<pre><code>> python const_output.py ping 10.0.0.38
PING 10.0.0.38 (10.0.0.38) 56(84) bytes of data.
64 bytes from 10.0.0.38: icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from 10.0.0.38: icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from 10.0.0.38: icmp_seq=3 ttl=64 time=0.076 ms
64 bytes from 10.0.0.38: icmp_seq=4 ttl=64 time=0.073 ms
</code></pre>
<p>top:</p>
<pre><code>> python const_output.py top
# you will see the top output
</code></pre>
| 1 | 2013-11-12T13:35:23Z | [
"python",
"ping",
"traceroute"
] |