title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list (length 1 to 5)
How to match search strings to content in python
1,103,685
<p>Usually when we search, we have a list of stories, we provide a search string, and expect back a list of results where the given search string matches the story.</p> <p>What I am looking to do is the opposite: given a list of search strings and one story, find out which search strings match that story.</p> <p>Now this could be done with re, but in this case I want to use complex search queries as supported by Solr. Full details of the <a href="http://lucene.apache.org/java/2_4_0/queryparsersyntax.html" rel="nofollow">query syntax here</a>. Note: I won't use boost.</p> <p>Basically I want some pointers for the doesitmatch function in the sample code below.</p> <pre><code>def doesitmatch(contents, searchstring): """ returns result of searching contents for searchstring (True or False) """ ??????? ??????? story = "big chunk of story 200 to 1000 words long" searchstrings = ['sajal' , 'sajal AND "is a jerk"' , 'sajal kayan' , 'sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python))' , 'bangkok'] matches = [[searchstr] for searchstr in searchstrings if doesitmatch(story, searchstr) ] </code></pre> <p><strong>Edit:</strong> Additionally, I would also be interested to know whether any module exists to convert a Lucene query like the one below into a regex:</p> <pre><code>sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python) OR "is a jerk") </code></pre>
0
2009-07-09T12:54:23Z
1,104,172
<p>Here's a suggestion in pseudocode. I'm assuming you store a story identifier with the search terms in the index, so that you can retrieve it with the search results.</p> <pre><code>def search_strings_matching(story_id_to_match, search_strings): result = set() for s in search_strings: result_story_ids = query_index(s) # query_index returns an id iterable if story_id_to_match in result_story_ids: result.add(s) return result </code></pre>
0
2009-07-09T14:15:58Z
[ "python", "search", "lucene", "solr" ]
How to match search strings to content in python
1,103,685
<p>Usually when we search, we have a list of stories, we provide a search string, and expect back a list of results where the given search string matches the story.</p> <p>What I am looking to do is the opposite: given a list of search strings and one story, find out which search strings match that story.</p> <p>Now this could be done with re, but in this case I want to use complex search queries as supported by Solr. Full details of the <a href="http://lucene.apache.org/java/2_4_0/queryparsersyntax.html" rel="nofollow">query syntax here</a>. Note: I won't use boost.</p> <p>Basically I want some pointers for the doesitmatch function in the sample code below.</p> <pre><code>def doesitmatch(contents, searchstring): """ returns result of searching contents for searchstring (True or False) """ ??????? ??????? story = "big chunk of story 200 to 1000 words long" searchstrings = ['sajal' , 'sajal AND "is a jerk"' , 'sajal kayan' , 'sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python))' , 'bangkok'] matches = [[searchstr] for searchstr in searchstrings if doesitmatch(story, searchstr) ] </code></pre> <p><strong>Edit:</strong> Additionally, I would also be interested to know whether any module exists to convert a Lucene query like the one below into a regex:</p> <pre><code>sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python) OR "is a jerk") </code></pre>
0
2009-07-09T12:54:23Z
1,105,551
<p>After extensive googling, I realized that what I am looking to do is a boolean search.</p> <p>Found the code that makes regexes boolean-aware: <a href="http://code.activestate.com/recipes/252526/" rel="nofollow">http://code.activestate.com/recipes/252526/</a></p> <p>The issue looks solved for now.</p>
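The recipe linked above handles full boolean parsing. For flat queries without nesting, a much cruder stdlib-only sketch of the `doesitmatch` function from the question could look like this (the single-operator simplification is my assumption, not part of the recipe, and nothing here implements the real Lucene/Solr syntax):

```python
import re

def doesitmatch(contents, searchstring):
    """Crude sketch: supports bare terms, "quoted phrases", and one flat
    AND/OR combination -- no nesting, no boost.  This is a hypothetical
    simplification, not the full Lucene query syntax."""
    text = contents.lower()
    # keep quoted phrases together while splitting on whitespace
    tokens = re.findall(r'"[^"]+"|\S+', searchstring)
    op = 'OR' if 'OR' in tokens else 'AND'
    terms = [t.strip('"').lower() for t in tokens if t not in ('AND', 'OR')]
    hits = [term in text for term in terms]
    return all(hits) if op == 'AND' else any(hits)

story = "sajal is a webmaster in bangkok"
print(doesitmatch(story, 'sajal AND bangkok'))      # True
print(doesitmatch(story, 'paris OR bangkok'))       # True
print(doesitmatch(story, 'sajal AND "is a jerk"'))  # False
```

For anything with nested parentheses, a real parser (like the linked recipe) is needed.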
1
2009-07-09T18:01:33Z
[ "python", "search", "lucene", "solr" ]
How to match search strings to content in python
1,103,685
<p>Usually when we search, we have a list of stories, we provide a search string, and expect back a list of results where the given search string matches the story.</p> <p>What I am looking to do is the opposite: given a list of search strings and one story, find out which search strings match that story.</p> <p>Now this could be done with re, but in this case I want to use complex search queries as supported by Solr. Full details of the <a href="http://lucene.apache.org/java/2_4_0/queryparsersyntax.html" rel="nofollow">query syntax here</a>. Note: I won't use boost.</p> <p>Basically I want some pointers for the doesitmatch function in the sample code below.</p> <pre><code>def doesitmatch(contents, searchstring): """ returns result of searching contents for searchstring (True or False) """ ??????? ??????? story = "big chunk of story 200 to 1000 words long" searchstrings = ['sajal' , 'sajal AND "is a jerk"' , 'sajal kayan' , 'sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python))' , 'bangkok'] matches = [[searchstr] for searchstr in searchstrings if doesitmatch(story, searchstr) ] </code></pre> <p><strong>Edit:</strong> Additionally, I would also be interested to know whether any module exists to convert a Lucene query like the one below into a regex:</p> <pre><code>sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python) OR "is a jerk") </code></pre>
0
2009-07-09T12:54:23Z
1,109,852
<p>This is probably less interesting to you now, since you've already solved your problem, but what you're describing sounds like <a href="http://en.wikipedia.org/wiki/Prospective%5Fsearch" rel="nofollow">Prospective Search</a>, which is what you call it when you have the query first and you want to match it against documents as they come along.</p> <p>Lucene's <a href="http://lucene.apache.org/java/2%5F4%5F0/api/org/apache/lucene/index/memory/MemoryIndex.html" rel="nofollow">MemoryIndex</a> is a class that was designed specifically for something like this, and in your case it might be efficient enough to run many queries against a single document.</p> <p>This has nothing to do with Python, though. You'd probably be better off writing something like this in Java.</p>
0
2009-07-10T14:25:31Z
[ "python", "search", "lucene", "solr" ]
How to match search strings to content in python
1,103,685
<p>Usually when we search, we have a list of stories, we provide a search string, and expect back a list of results where the given search string matches the story.</p> <p>What I am looking to do is the opposite: given a list of search strings and one story, find out which search strings match that story.</p> <p>Now this could be done with re, but in this case I want to use complex search queries as supported by Solr. Full details of the <a href="http://lucene.apache.org/java/2_4_0/queryparsersyntax.html" rel="nofollow">query syntax here</a>. Note: I won't use boost.</p> <p>Basically I want some pointers for the doesitmatch function in the sample code below.</p> <pre><code>def doesitmatch(contents, searchstring): """ returns result of searching contents for searchstring (True or False) """ ??????? ??????? story = "big chunk of story 200 to 1000 words long" searchstrings = ['sajal' , 'sajal AND "is a jerk"' , 'sajal kayan' , 'sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python))' , 'bangkok'] matches = [[searchstr] for searchstr in searchstrings if doesitmatch(story, searchstr) ] </code></pre> <p><strong>Edit:</strong> Additionally, I would also be interested to know whether any module exists to convert a Lucene query like the one below into a regex:</p> <pre><code>sajal AND (kayan OR bangkok OR Thailand OR ( webmaster AND python) OR "is a jerk") </code></pre>
0
2009-07-09T12:54:23Z
7,152,733
<p>If you are writing Python on AppEngine, you can use the AppEngine Prospective Search Service to achieve exactly what you are trying to do here. See: <a href="http://code.google.com/appengine/docs/python/prospectivesearch/overview.html" rel="nofollow">http://code.google.com/appengine/docs/python/prospectivesearch/overview.html</a></p>
0
2011-08-22T19:44:27Z
[ "python", "search", "lucene", "solr" ]
Upload a potentially huge textfile to a plain WSGI-server in Python
1,103,940
<p>I need to upload a potentially huge plain-text file to a very simple wsgi-app without eating up all available memory on the server. How do I accomplish that? I want to use standard python modules and avoid third-party modules if possible.</p>
1
2009-07-09T13:36:57Z
1,104,001
<p>Since Python knows how to <a href="http://docs.python.org/library/archiving.html" rel="nofollow">deal with gzip and zip files</a>, I would suggest you compress your file (using a zip- or gzip-compliant application like 7-Zip) on your client and then upload it to your server.</p> <p>Even better: write a script that automatically compresses your file and uploads it. This is possible with the standard library.</p>
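For the compression step itself, a minimal stdlib-only sketch (the source file here is a throwaway temp file standing in for the big text file):

```python
import gzip
import os
import shutil
import tempfile

# Create a throwaway source file standing in for the big text file.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b'hello world ' * 1000)
src.close()

# Compress it with gzip before uploading -- standard library only.
dst = src.name + '.gz'
with open(src.name, 'rb') as f_in, gzip.open(dst, 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out)

print(os.path.getsize(dst) < os.path.getsize(src.name))  # True: repetitive text compresses well
```

`shutil.copyfileobj` streams in blocks, so even huge files are compressed without being loaded into memory at once.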
0
2009-07-09T13:47:35Z
[ "python", "file", "upload", "wsgi" ]
Upload a potentially huge textfile to a plain WSGI-server in Python
1,103,940
<p>I need to upload a potentially huge plain-text file to a very simple wsgi-app without eating up all available memory on the server. How do I accomplish that? I want to use standard python modules and avoid third-party modules if possible.</p>
1
2009-07-09T13:36:57Z
1,104,012
<p><code>wsgi.input</code> should be a file-like stream object. You can read from it in blocks and write those blocks directly to disk. That shouldn't use up any significant memory.</p> <p>Or maybe I misunderstood the question?</p>
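A minimal sketch of that block-by-block approach inside a WSGI app (the 64 KiB block size and the temp-file destination are arbitrary choices for the demo):

```python
import io
import os
import tempfile

def application(environ, start_response):
    # Stream the request body to disk in fixed-size blocks so memory use
    # stays constant no matter how large the upload is.
    length = int(environ.get('CONTENT_LENGTH') or 0)
    stream = environ['wsgi.input']
    fd, path = tempfile.mkstemp()
    remaining = length
    with os.fdopen(fd, 'wb') as out:
        while remaining > 0:
            chunk = stream.read(min(64 * 1024, remaining))
            if not chunk:          # client closed the connection early
                break
            out.write(chunk)
            remaining -= len(chunk)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('saved %d bytes to %s' % (length - remaining, path)).encode()]

# Quick demonstration with a fake 200 kB request body:
body = b'x' * 200000
environ = {'CONTENT_LENGTH': str(len(body)), 'wsgi.input': io.BytesIO(body)}
response = application(environ, lambda status, headers: None)
print(response[0][:18])
```

Never read more than `CONTENT_LENGTH` bytes: per the WSGI spec, reading past the end of the request body is undefined behavior on some servers.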
3
2009-07-09T13:49:23Z
[ "python", "file", "upload", "wsgi" ]
Upload a potentially huge textfile to a plain WSGI-server in Python
1,103,940
<p>I need to upload a potentially huge plain-text file to a very simple wsgi-app without eating up all available memory on the server. How do I accomplish that? I want to use standard python modules and avoid third-party modules if possible.</p>
1
2009-07-09T13:36:57Z
1,209,507
<p>If you use the cgi module to parse the input (which most frameworks use, e.g., Pylons, WebOb, CherryPy) then it will automatically save the uploaded file to a temporary file, and not load it into memory.</p>
2
2009-07-30T21:32:04Z
[ "python", "file", "upload", "wsgi" ]
Run (remote) php script from (local) python script
1,104,064
<p>How do I make Python (local) run a PHP script <strong>on a remote server</strong>?</p> <p>I don't want to process its output with the Python script or anything, just execute it and then quit Python (while the PHP script keeps working and doing its job).</p> <p>edit: What I'm trying to achieve:</p> <ul> <li>the Python script connects to the FTP server and uploads the PHP script (I already have this part of the code)</li> <li>it runs the PHP script (that's the part of the code I'm asking about)</li> <li>the Python script continues to do something else</li> <li>the Python script quits (but the PHP script probably still hasn't finished its work, so I don't want it to end when Python exits)</li> <li>after the Python script quits, the PHP script still continues its task</li> </ul> <p>(I <em>don't</em> plan to do anything with the PHP output in Python - Python just has to upload the PHP script and make it start working)</p> <p>Hope I'm clearer now. Sorry if my question wasn't specific enough.</p> <p>another edit: Also please note that I don't have shell access on the remote server. I have only FTP and a control panel (cPanel); I'm trying to use FTP for this.</p>
5
2009-07-09T13:58:59Z
1,104,084
<pre><code>os.system("php yourscript.php") </code></pre> <p>Another alternative would be:</p> <pre><code># returns the new process' id; spawnl needs the executable's path followed by # the argument list, starting with argv[0] (adjust /usr/bin/php to your setup) os.spawnl(os.P_NOWAIT, "/usr/bin/php", "php", "yourscript.php") </code></pre> <p>You can check all the os module documentation <a href="http://docs.python.org/library/os.html" rel="nofollow">here</a>.</p>
5
2009-07-09T14:01:03Z
[ "php", "python" ]
Run (remote) php script from (local) python script
1,104,064
<p>How do I make Python (local) run a PHP script <strong>on a remote server</strong>?</p> <p>I don't want to process its output with the Python script or anything, just execute it and then quit Python (while the PHP script keeps working and doing its job).</p> <p>edit: What I'm trying to achieve:</p> <ul> <li>the Python script connects to the FTP server and uploads the PHP script (I already have this part of the code)</li> <li>it runs the PHP script (that's the part of the code I'm asking about)</li> <li>the Python script continues to do something else</li> <li>the Python script quits (but the PHP script probably still hasn't finished its work, so I don't want it to end when Python exits)</li> <li>after the Python script quits, the PHP script still continues its task</li> </ul> <p>(I <em>don't</em> plan to do anything with the PHP output in Python - Python just has to upload the PHP script and make it start working)</p> <p>Hope I'm clearer now. Sorry if my question wasn't specific enough.</p> <p>another edit: Also please note that I don't have shell access on the remote server. I have only FTP and a control panel (cPanel); I'm trying to use FTP for this.</p>
5
2009-07-09T13:58:59Z
1,104,391
<p>I'll paraphrase the answer to <a href="http://stackoverflow.com/questions/1060436/how-do-i-include-a-php-script-in-python">http://stackoverflow.com/questions/1060436/how-do-i-include-a-php-script-in-python</a>.</p> <pre><code>import subprocess def php(script_path): return subprocess.Popen(['php', script_path]) </code></pre>
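The same fire-and-forget pattern works with any interpreter. Here is a self-contained sketch substituting `sys.executable` for `php`, since PHP may not be installed locally:

```python
import subprocess
import sys

# Popen returns immediately without waiting for the child, so the parent can
# go on doing other work (here we do collect the output, just to show it ran;
# a trivial Python one-liner stands in for the PHP script).
p = subprocess.Popen([sys.executable, '-c', 'print("child done")'],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.decode().strip())  # child done
```

Note that this only helps when the script should run on the same machine; for the remote-server case in the question, an HTTP request to the uploaded script (as in the urllib2 answer below) is the usual route.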
0
2009-07-09T14:50:31Z
[ "php", "python" ]
Run (remote) php script from (local) python script
1,104,064
<p>How do I make Python (local) run a PHP script <strong>on a remote server</strong>?</p> <p>I don't want to process its output with the Python script or anything, just execute it and then quit Python (while the PHP script keeps working and doing its job).</p> <p>edit: What I'm trying to achieve:</p> <ul> <li>the Python script connects to the FTP server and uploads the PHP script (I already have this part of the code)</li> <li>it runs the PHP script (that's the part of the code I'm asking about)</li> <li>the Python script continues to do something else</li> <li>the Python script quits (but the PHP script probably still hasn't finished its work, so I don't want it to end when Python exits)</li> <li>after the Python script quits, the PHP script still continues its task</li> </ul> <p>(I <em>don't</em> plan to do anything with the PHP output in Python - Python just has to upload the PHP script and make it start working)</p> <p>Hope I'm clearer now. Sorry if my question wasn't specific enough.</p> <p>another edit: Also please note that I don't have shell access on the remote server. I have only FTP and a control panel (cPanel); I'm trying to use FTP for this.</p>
5
2009-07-09T13:58:59Z
1,105,790
<p>If Python is on a different physical machine than the PHP script, I'd make sure the PHP script is web-accessible and use <a href="http://docs.python.org/library/urllib2.html" rel="nofollow">urllib2</a> to call that URL:</p> <pre><code>import urllib2 urllib2.urlopen("http://remotehost.com/myscript.php") </code></pre>
4
2009-07-09T18:46:36Z
[ "php", "python" ]
Combining module files in Python
1,104,066
<p>Is there a way to put together Python files, akin to JAR in Java? I need a way of packaging set of Python classes and functions, but unlike a standard module, I'd like it to be in one file.</p>
6
2009-07-09T13:59:06Z
1,104,080
<p>Take a look at Python Eggs: <a href="http://peak.telecommunity.com/DevCenter/PythonEggs">http://peak.telecommunity.com/DevCenter/PythonEggs</a></p> <p>Or, you can use regular zips: <a href="http://docs.python.org/library/zipimport.html">http://docs.python.org/library/zipimport.html</a></p>
7
2009-07-09T14:00:31Z
[ "python", "packaging" ]
Combining module files in Python
1,104,066
<p>Is there a way to put together Python files, akin to JAR in Java? I need a way of packaging set of Python classes and functions, but unlike a standard module, I'd like it to be in one file.</p>
6
2009-07-09T13:59:06Z
1,104,081
<p>The simplest approach is to just use <code>zip</code>. A <code>jar</code> file in Java is a zipfile containing some metadata such as a manifest; but you don't necessarily need the metadata -- Python can import from inside a zipfile as long as you place that zipfile on <code>sys.path</code>, just as you would do for any directory. In the zipfile you can have the sources (.py files), but then Python will have to compile them on the fly each time a process first imports them; or you can have the bytecode files (.pyc or .pyo), but then you're limited to a specific release of Python and to either the absence (for .pyc) or presence (for .pyo) of the flag -O (or -OO).</p> <p>As other answers indicated, there are formats such as <code>.egg</code> that enrich the zipfile with metadata in Python as well, like Java's <code>.jar</code>, but whether in a particular use case that gives you extra value over a plain zipfile is a decision for you to make.</p>
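A small demonstration of the zipfile-on-sys.path mechanism described above (the module name `hello` and the zip path are made up for the demo):

```python
import os
import sys
import tempfile
import zipfile

# Build a zip containing a tiny module, then import from it simply by
# putting the zip on sys.path, exactly as you would add a directory.
tmpdir = tempfile.mkdtemp()
zpath = os.path.join(tmpdir, 'bundle.zip')
with zipfile.ZipFile(zpath, 'w') as z:
    z.writestr('hello.py', 'def greet():\n    return "hi from the zip"\n')

sys.path.insert(0, zpath)
import hello  # resolved from inside bundle.zip via zipimport

print(hello.greet())  # hi from the zip
```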
3
2009-07-09T14:00:55Z
[ "python", "packaging" ]
Combining module files in Python
1,104,066
<p>Is there a way to put together Python files, akin to JAR in Java? I need a way of packaging set of Python classes and functions, but unlike a standard module, I'd like it to be in one file.</p>
6
2009-07-09T13:59:06Z
1,104,089
<p>You can create zip files containing Python code and import from zip files using <a href="http://docs.python.org/library/zipimport.html" rel="nofollow">zipimport</a>. A system such as <a href="http://www.pyinstaller.org/" rel="nofollow">PyInstaller</a> (cross-platform) or <a href="http://www.py2exe.org/" rel="nofollow">py2exe</a> (Windows) will do all this for you.</p>
2
2009-07-09T14:01:42Z
[ "python", "packaging" ]
Combining module files in Python
1,104,066
<p>Is there a way to put together Python files, akin to JAR in Java? I need a way of packaging set of Python classes and functions, but unlike a standard module, I'd like it to be in one file.</p>
6
2009-07-09T13:59:06Z
1,104,102
<p>Read this <a href="http://www.python.org/dev/peps/pep-0273/" rel="nofollow">PEP</a> for information.<br> Also, <a href="http://docs.python.org/library/zipimport.html" rel="nofollow">Import modules from Zip</a>.</p>
0
2009-07-09T14:03:32Z
[ "python", "packaging" ]
Combining module files in Python
1,104,066
<p>Is there a way to put together Python files, akin to JAR in Java? I need a way of packaging set of Python classes and functions, but unlike a standard module, I'd like it to be in one file.</p>
6
2009-07-09T13:59:06Z
6,611,335
<p>After looking for a solution to the same problem, I ended up writing a simple tool which combines multiple .py files into one: <a href="http://pagekite.net/wiki/Floss/PyBreeder/">PyBreeder</a></p> <p>It will only work with pure-Python modules and may require some trial-and-error to get the order of modules right, but it is quite handy whenever you want to deploy a script with some dependencies as a single .py. Comments/patches/critique are very welcome!</p>
8
2011-07-07T13:27:07Z
[ "python", "packaging" ]
Scrolling QGraphicsView programmatically
1,104,304
<p>I want to implement a scroll/pan feature on a QGraphicsView in my (Py)Qt application. It's supposed to work like this: the user presses the middle mouse button, and the view scrolls as the user moves the mouse (this is quite a common feature).<br /> I tried using the scroll() method inherited from QWidget. However, this somehow moves the view instead - scrollbars and all. See picture.<br /> So, given that this is not the way I'm supposed to do this, how should I? Or is it the correct way, but I'm doing something else wrong? The code I use:</p> <pre><code> def __init__(self): ... self.ui.imageArea.mousePressEvent=self.evImagePress self.ui.imageArea.mouseMoveEvent=self.evMouseMove self.scrollOnMove=False self.scrollOrigin=[] ... def evImagePress(self, event): if event.button() == Qt.LeftButton: self.evImageLeftClick(event) if event.button() == Qt.MidButton: self.scrollOnMove=not self.scrollOnMove if self.scrollOnMove: self.scrollOrigin=[event.x(), event.y()] ... def evMouseMove(self, event): if self.scrollOnMove: self.ui.imageArea.scroll(event.x()-self.scrollOrigin[0], event.y()-self.scrollOrigin[1]) </code></pre> <p>It works as I expect, except for the whole move-the-widget business.</p> <p><img src="http://img55.imageshack.us/img55/3222/scrollfail.jpg" alt="Fails to scroll" /></p>
1
2009-07-09T14:39:07Z
1,105,111
<p>I haven't done this myself but this is from the <a href="http://doc.trolltech.com/4.5/qgraphicsview.html" rel="nofollow"><code>QGraphicsView</code></a> documentation</p> <blockquote> <p>... When the scene is larger than the scroll bars' values, you can choose to use translate() to navigate the scene instead.</p> </blockquote> <p>By using <code>scroll</code> you are moving the widget, <a href="http://doc.trolltech.com/4.5/qgraphicsview.html#translate" rel="nofollow"><code>translate</code></a> should achieve what you are looking for, moving the contents of the <code>QGraphicsScene</code> underneath the view</p>
3
2009-07-09T16:45:58Z
[ "python", "qt" ]
Scrolling QGraphicsView programmatically
1,104,304
<p>I want to implement a scroll/pan feature on a QGraphicsView in my (Py)Qt application. It's supposed to work like this: the user presses the middle mouse button, and the view scrolls as the user moves the mouse (this is quite a common feature).<br /> I tried using the scroll() method inherited from QWidget. However, this somehow moves the view instead - scrollbars and all. See picture.<br /> So, given that this is not the way I'm supposed to do this, how should I? Or is it the correct way, but I'm doing something else wrong? The code I use:</p> <pre><code> def __init__(self): ... self.ui.imageArea.mousePressEvent=self.evImagePress self.ui.imageArea.mouseMoveEvent=self.evMouseMove self.scrollOnMove=False self.scrollOrigin=[] ... def evImagePress(self, event): if event.button() == Qt.LeftButton: self.evImageLeftClick(event) if event.button() == Qt.MidButton: self.scrollOnMove=not self.scrollOnMove if self.scrollOnMove: self.scrollOrigin=[event.x(), event.y()] ... def evMouseMove(self, event): if self.scrollOnMove: self.ui.imageArea.scroll(event.x()-self.scrollOrigin[0], event.y()-self.scrollOrigin[1]) </code></pre> <p>It works as I expect, except for the whole move-the-widget business.</p> <p><img src="http://img55.imageshack.us/img55/3222/scrollfail.jpg" alt="Fails to scroll" /></p>
1
2009-07-09T14:39:07Z
1,108,477
<p>You can set the QGraphicsScene's area that will be displayed by the QGraphicsView with the method <a href="http://doc.trolltech.com/4.5/qgraphicsview.html#sceneRect-prop" rel="nofollow">QGraphicsView::setSceneRect()</a>. So when you press the button and move the mouse, you can change the center of the displayed part of the scene and achieve your goal.</p>
0
2009-07-10T08:54:08Z
[ "python", "qt" ]
Scrolling QGraphicsView programmatically
1,104,304
<p>I want to implement a scroll/pan feature on a QGraphicsView in my (Py)Qt application. It's supposed to work like this: the user presses the middle mouse button, and the view scrolls as the user moves the mouse (this is quite a common feature).<br /> I tried using the scroll() method inherited from QWidget. However, this somehow moves the view instead - scrollbars and all. See picture.<br /> So, given that this is not the way I'm supposed to do this, how should I? Or is it the correct way, but I'm doing something else wrong? The code I use:</p> <pre><code> def __init__(self): ... self.ui.imageArea.mousePressEvent=self.evImagePress self.ui.imageArea.mouseMoveEvent=self.evMouseMove self.scrollOnMove=False self.scrollOrigin=[] ... def evImagePress(self, event): if event.button() == Qt.LeftButton: self.evImageLeftClick(event) if event.button() == Qt.MidButton: self.scrollOnMove=not self.scrollOnMove if self.scrollOnMove: self.scrollOrigin=[event.x(), event.y()] ... def evMouseMove(self, event): if self.scrollOnMove: self.ui.imageArea.scroll(event.x()-self.scrollOrigin[0], event.y()-self.scrollOrigin[1]) </code></pre> <p>It works as I expect, except for the whole move-the-widget business.</p> <p><img src="http://img55.imageshack.us/img55/3222/scrollfail.jpg" alt="Fails to scroll" /></p>
1
2009-07-09T14:39:07Z
3,697,290
<p>My addition to the translate() method: it works great unless you scale the scene. If you do, you'll notice that the image is not in sync with your mouse movements. That's where mapToScene() comes to help. You should map your points from mouse events to scene coordinates, and then pass the mapped difference to translate() - voilà, your scene follows your mouse with great precision.</p> <p>For example:</p> <pre><code>// map both view points into scene coordinates and translate by the difference QPointF delta = mapToScene(event-&gt;pos()) - mapToScene(previous_point); translate(delta.x(), delta.y()); </code></pre>
4
2010-09-13T00:39:29Z
[ "python", "qt" ]
Scrolling QGraphicsView programmatically
1,104,304
<p>I want to implement a scroll/pan feature on a QGraphicsView in my (Py)Qt application. It's supposed to work like this: the user presses the middle mouse button, and the view scrolls as the user moves the mouse (this is quite a common feature).<br /> I tried using the scroll() method inherited from QWidget. However, this somehow moves the view instead - scrollbars and all. See picture.<br /> So, given that this is not the way I'm supposed to do this, how should I? Or is it the correct way, but I'm doing something else wrong? The code I use:</p> <pre><code> def __init__(self): ... self.ui.imageArea.mousePressEvent=self.evImagePress self.ui.imageArea.mouseMoveEvent=self.evMouseMove self.scrollOnMove=False self.scrollOrigin=[] ... def evImagePress(self, event): if event.button() == Qt.LeftButton: self.evImageLeftClick(event) if event.button() == Qt.MidButton: self.scrollOnMove=not self.scrollOnMove if self.scrollOnMove: self.scrollOrigin=[event.x(), event.y()] ... def evMouseMove(self, event): if self.scrollOnMove: self.ui.imageArea.scroll(event.x()-self.scrollOrigin[0], event.y()-self.scrollOrigin[1]) </code></pre> <p>It works as I expect, except for the whole move-the-widget business.</p> <p><img src="http://img55.imageshack.us/img55/3222/scrollfail.jpg" alt="Fails to scroll" /></p>
1
2009-07-09T14:39:07Z
28,180,049
<p>The answer given by denis is correct to get translate() to work. The comment by PF4Public is also valid: this can screw up scaling. My workaround is different from PF4Public's -- instead of mapToScene I preserve the anchor and restore it after a translation:</p> <pre><code>previousAnchor = view.transformationAnchor() # have to set this for view.translate() to work view.setTransformationAnchor(QGraphicsView.NoAnchor) view.translate(x_diff, y_diff) # have to reset the anchor or scaling (zoom) stops working view.setTransformationAnchor(previousAnchor) </code></pre>
0
2015-01-27T21:09:45Z
[ "python", "qt" ]
Twisted sometimes throws (seemingly incomplete) 'maximum recursion depth exceeded' RuntimeError
1,104,587
<p>Because the Twisted <code>getPage</code> function doesn't give me access to headers, I had to write my own <code>getPageWithHeaders</code> function.</p> <pre><code>def getPageWithHeaders(contextFactory=None, *args, **kwargs): try: return _makeGetterFactory(url, HTTPClientFactory, contextFactory=contextFactory, *args, **kwargs) except: traceback.print_exc() </code></pre> <p>This is exactly the same as the normal <code>getPage</code> function, except that I added the try/except block and return the factory object instead of returning the factory.deferred</p> <p>For some reason, I sometimes get a maximum recursion depth exceeded error here. It happens consistently a few times out of 700, usually on different sites each time. Can anyone shed any light on this? I'm not clear why or how this could be happening, and the Twisted codebase is large enough that I don't even know where to look.</p> <p>EDIT: Here's the traceback I get, which seems bizarrely incomplete:</p> <pre><code>Traceback (most recent call last): File "C:\keep-alive\utility\background.py", line 70, in getPageWithHeaders factory = _makeGetterFactory(url, HTTPClientFactory, timeout=60 , contextFactory=context, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 449, in _makeGetterFactory factory = factoryFactory(url, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 248, in __init__ self.headers = InsensitiveDict(headers) RuntimeError: maximum recursion depth exceeded </code></pre> <p>This is the entire traceback, which clearly isn't long enough to have exceeded our max recursion depth. Is there something else I need to do in order to get the full stack? I've never had this problem before; typically when I do something like</p> <pre><code>def f(): return f() try: f() except: traceback.print_exc() </code></pre> <p>then I get the kind of "maximum recursion depth exceeded" stack that you'd expect, with a ton of references to <code>f()</code></p>
2
2009-07-09T15:21:58Z
1,104,617
<p>You should look at the traceback you're getting together with the exception -- that will tell you what function(s) is/are recursing too deeply, "below" <code>_makeGetterFactory</code>. Most likely you'll find that your own <code>getPageWithHeaders</code> is involved in the recursion, exactly because instead of properly returning a deferred it tries to return a factory that's not ready yet. What happens if you <em>do</em> go back to returning the deferred?</p>
1
2009-07-09T15:25:59Z
[ "python", "twisted" ]
Twisted sometimes throws (seemingly incomplete) 'maximum recursion depth exceeded' RuntimeError
1,104,587
<p>Because the Twisted <code>getPage</code> function doesn't give me access to headers, I had to write my own <code>getPageWithHeaders</code> function.</p> <pre><code>def getPageWithHeaders(contextFactory=None, *args, **kwargs): try: return _makeGetterFactory(url, HTTPClientFactory, contextFactory=contextFactory, *args, **kwargs) except: traceback.print_exc() </code></pre> <p>This is exactly the same as the normal <code>getPage</code> function, except that I added the try/except block and return the factory object instead of returning the factory.deferred</p> <p>For some reason, I sometimes get a maximum recursion depth exceeded error here. It happens consistently a few times out of 700, usually on different sites each time. Can anyone shed any light on this? I'm not clear why or how this could be happening, and the Twisted codebase is large enough that I don't even know where to look.</p> <p>EDIT: Here's the traceback I get, which seems bizarrely incomplete:</p> <pre><code>Traceback (most recent call last): File "C:\keep-alive\utility\background.py", line 70, in getPageWithHeaders factory = _makeGetterFactory(url, HTTPClientFactory, timeout=60 , contextFactory=context, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 449, in _makeGetterFactory factory = factoryFactory(url, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 248, in __init__ self.headers = InsensitiveDict(headers) RuntimeError: maximum recursion depth exceeded </code></pre> <p>This is the entire traceback, which clearly isn't long enough to have exceeded our max recursion depth. Is there something else I need to do in order to get the full stack? I've never had this problem before; typically when I do something like</p> <pre><code>def f(): return f() try: f() except: traceback.print_exc() </code></pre> <p>then I get the kind of "maximum recursion depth exceeded" stack that you'd expect, with a ton of references to <code>f()</code></p>
2
2009-07-09T15:21:58Z
1,104,648
<p>The URL opener is likely following an un-ending series of 301 or 302 redirects.</p>
-1
2009-07-09T15:29:49Z
[ "python", "twisted" ]
Twisted sometimes throws (seemingly incomplete) 'maximum recursion depth exceeded' RuntimeError
1,104,587
<p>Because the Twisted <code>getPage</code> function doesn't give me access to headers, I had to write my own <code>getPageWithHeaders</code> function.</p> <pre><code>def getPageWithHeaders(contextFactory=None, *args, **kwargs): try: return _makeGetterFactory(url, HTTPClientFactory, contextFactory=contextFactory, *args, **kwargs) except: traceback.print_exc() </code></pre> <p>This is exactly the same as the normal <code>getPage</code> function, except that I added the try/except block and return the factory object instead of returning the factory.deferred</p> <p>For some reason, I sometimes get a maximum recursion depth exceeded error here. It happens consistently a few times out of 700, usually on different sites each time. Can anyone shed any light on this? I'm not clear why or how this could be happening, and the Twisted codebase is large enough that I don't even know where to look.</p> <p>EDIT: Here's the traceback I get, which seems bizarrely incomplete:</p> <pre><code>Traceback (most recent call last): File "C:\keep-alive\utility\background.py", line 70, in getPageWithHeaders factory = _makeGetterFactory(url, HTTPClientFactory, timeout=60 , contextFactory=context, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 449, in _makeGetterFactory factory = factoryFactory(url, *args, **kwargs) File "c:\Python26\lib\site-packages\twisted\web\client.py", line 248, in __init__ self.headers = InsensitiveDict(headers) RuntimeError: maximum recursion depth exceeded </code></pre> <p>This is the entire traceback, which clearly isn't long enough to have exceeded our max recursion depth. Is there something else I need to do in order to get the full stack? I've never had this problem before; typically when I do something like</p> <pre><code>def f(): return f() try: f() except: traceback.print_exc() </code></pre> <p>then I get the kind of "maximum recursion depth exceeded" stack that you'd expect, with a ton of references to <code>f()</code></p>
2
2009-07-09T15:21:58Z
1,108,849
<p>The specific traceback that you're looking at is a bit mystifying. You could try <code>traceback.print_stack</code> rather than <code>traceback.print_exc</code> to get a look at the <em>entire</em> stack above the problematic code, rather than just the stack going back to where the exception is caught.</p> <p>Without seeing more of your traceback I can't be certain, but you <em>may</em> be running into <a href="http://twistedmatrix.com/trac/ticket/411" rel="nofollow">the problem where Deferreds will raise a recursion limit exception if you chain too many of them together</a>.</p> <p>If you turn on Deferred debugging (<code>from twisted.internet.defer import setDebugging; setDebugging(True)</code>) you may get more useful tracebacks in some cases, but please be aware that this may also slow down your server quite a bit.</p>
2
2009-07-10T10:38:43Z
[ "python", "twisted" ]
Lengthy single line strings in Python without going over maximum line length
1,104,762
<p>How can I break a long one-liner string in my code and keep the string indented with the rest of the code? <a href="http://www.python.org/dev/peps/pep-0008/" rel="nofollow" title="PEP-8">PEP 8</a> doesn't have any example for this case.</p> <p>Correct output but strangely indented:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p>Bad output, but looks better in code:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p><hr /></p> <p>Wow, lots of fast answers. Thanks!</p>
1
2009-07-09T15:46:39Z
1,104,789
<p>Adjacent strings are concatenated at compile time:</p> <pre><code>if True: print ("this is the first line of a very long string" " this is the second line") </code></pre> <p>Output:</p> <pre><code>this is the first line of a very long string this is the second line </code></pre>
27
2009-07-09T15:49:47Z
[ "python", "string" ]
Lengthy single line strings in Python without going over maximum line length
1,104,762
<p>How can I break a long one-liner string in my code and keep the string indented with the rest of the code? <a href="http://www.python.org/dev/peps/pep-0008/" rel="nofollow" title="PEP-8">PEP 8</a> doesn't have any example for this case.</p> <p>Correct output but strangely indented:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p>Bad output, but looks better in code:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p><hr /></p> <p>Wow, lots of fast answers. Thanks!</p>
1
2009-07-09T15:46:39Z
1,104,790
<p>You can use a trailing backslash to join separate strings like this:</p> <pre><code>if True: print "long test long test long test long test long " \ "test long test long test long test long test long test" </code></pre>
2
2009-07-09T15:49:56Z
[ "python", "string" ]
Lengthy single line strings in Python without going over maximum line length
1,104,762
<p>How can I break a long one-liner string in my code and keep the string indented with the rest of the code? <a href="http://www.python.org/dev/peps/pep-0008/" rel="nofollow" title="PEP-8">PEP 8</a> doesn't have any example for this case.</p> <p>Correct output but strangely indented:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p>Bad output, but looks better in code:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p><hr /></p> <p>Wow, lots of fast answers. Thanks!</p>
1
2009-07-09T15:46:39Z
1,104,793
<pre><code>if True: print "long test long test long test long test long "\ "test long test long test long test long test long test" </code></pre>
6
2009-07-09T15:50:07Z
[ "python", "string" ]
Lengthy single line strings in Python without going over maximum line length
1,104,762
<p>How can I break a long one-liner string in my code and keep the string indented with the rest of the code? <a href="http://www.python.org/dev/peps/pep-0008/" rel="nofollow" title="PEP-8">PEP 8</a> doesn't have any example for this case.</p> <p>Correct output but strangely indented:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p>Bad output, but looks better in code:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p><hr /></p> <p>Wow, lots of fast answers. Thanks!</p>
1
2009-07-09T15:46:39Z
1,104,796
<pre><code>if True: print "long test long test long test " + \ "long test long test long test " + \ "long test long test long test " </code></pre> <p>And so on.</p>
-6
2009-07-09T15:50:54Z
[ "python", "string" ]
Lengthy single line strings in Python without going over maximum line length
1,104,762
<p>How can I break a long one-liner string in my code and keep the string indented with the rest of the code? <a href="http://www.python.org/dev/peps/pep-0008/" rel="nofollow" title="PEP-8">PEP 8</a> doesn't have any example for this case.</p> <p>Correct output but strangely indented:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p>Bad output, but looks better in code:</p> <pre><code>if True: print "long test long test long test long test long \ test long test long test long test long test long test" &gt;&gt;&gt; long test long test long test long test long test long test long test long test long test long test </code></pre> <p><hr /></p> <p>Wow, lots of fast answers. Thanks!</p>
1
2009-07-09T15:46:39Z
1,107,873
<p>Why isn't anyone recommending triple quotes?</p> <pre><code>print """ blah blah blah ..............""" </code></pre>
0
2009-07-10T05:34:02Z
[ "python", "string" ]
Python tell when an ftp transfer sits on completion
1,105,014
<p>I have to download some files from an FTP server. Seems prosaic enough. However, the way this server behaves is if the file is very large, the connection will just hang when the download ostensibly completes.</p> <p>How can I handle this gracefully using ftplib in python?</p> <p>Sample python code:</p> <pre><code>from ftplib import FTP ... ftp = FTP(host) ftp.login(login, passwd) files=ftp.nlst() ftp.set_debuglevel(2) for fname in files: ret_status = ftp.retrbinary('RETR ' + fname, open(fname, 'wb').write) </code></pre> <p>debug output from the above:</p> <pre><code>*cmd* 'TYPE I' *put* 'TYPE I\r\n' *get* '200 Type set to I.\r\n' *resp* '200 Type set to I.' *cmd* 'PASV' *put* 'PASV\r\n' *get* '227 Entering Passive Mode (0,0,0,0,10,52).\r\n' *resp* '227 Entering Passive Mode (0,0,0,0,10,52).' *cmd* 'RETR some_file' *put* 'RETR some_file\r\n' *get* '125 Data connection already open; Transfer starting.\r\n' *resp* '125 Data connection already open; Transfer starting.' [just sits there indefinitely] </code></pre> <p>This is what it looks like when I attempt the same download using curl -v:</p> <pre><code>* About to connect() to some_server port 21 (#0) * Trying some_ip... connected * Connected to some_server (some_ip) port 21 (#0) &lt; 220 Microsoft FTP Service &gt; USER some_user &lt; 331 Password required for some_user. &gt; PASS some_password &lt; 230 User some_user logged in. &gt; PWD &lt; 257 "/some_dir" is current directory. * Entry path is '/some_dir' &gt; EPSV * Connect data stream passively &lt; 500 'EPSV': command not understood * disabling EPSV usage &gt; PASV &lt; 227 Entering Passive Mode (0,0,0,0,11,116). * Trying some_ip... connected * Connecting to some_ip (some_ip) port 2932 &gt; TYPE I &lt; 200 Type set to I. &gt; SIZE some_file &lt; 213 229376897 &gt; RETR some_file &lt; 125 Data connection already open; Transfer starting. 
* Maxdownload = -1 * Getting file with size: 229376897 { [data not shown] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 218M 100 218M 0 0 182k 0 0:20:28 0:20:28 --:--:-- 0* FTP response timeout * control connection looks dead 100 218M 100 218M 0 0 182k 0 0:20:29 0:20:29 --:--:-- 0* Connection #0 to host some_server left intact curl: (28) FTP response timeout * Closing connection #0 </code></pre> <p>wget output is kind of interesting as well, it notices the connection is dead, then attempts to re-download the file which only confirms that it is already finished:</p> <pre><code>--2009-07-09 11:32:23-- ftp://some_server/some_file =&gt; `some_file' Resolving some_server... 0.0.0.0 Connecting to some_server|0.0.0.0|:21... connected. Logging in as some_user ... Logged in! ==&gt; SYST ... done. ==&gt; PWD ... done. ==&gt; TYPE I ... done. ==&gt; CWD not needed. ==&gt; SIZE some_file ... 229376897 ==&gt; PASV ... done. ==&gt; RETR some_file ... done. Length: 229376897 (219M) 100%[==========================================================&gt;] 229,376,897 387K/s in 18m 54s 2009-07-09 11:51:17 (198 KB/s) - Control connection closed. Retrying. --2009-07-09 12:06:18-- ftp://some_server/some_file (try: 2) =&gt; `some_file' Connecting to some_server|0.0.0.0|:21... connected. Logging in as some_user ... Logged in! ==&gt; SYST ... done. ==&gt; PWD ... done. ==&gt; TYPE I ... done. ==&gt; CWD not needed. ==&gt; SIZE some_file ... 229376897 ==&gt; PASV ... done. ==&gt; REST 229376897 ... done. ==&gt; RETR some_file ... done. Length: 229376897 (219M), 0 (0) remaining 100%[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++] 229,376,897 --.-K/s in 0s 2009-07-09 12:06:18 (0.00 B/s) - `some_file' saved [229376897] </code></pre>
2
2009-07-09T16:30:56Z
1,106,842
<p>I've never used ftplib, but perhaps you could do:</p> <ol> <li>Get the name and size of the file you want.</li> <li>Start a new daemonic thread to download the file.</li> <li>In the main thread, check every few seconds whether the file size on disk equals the target size.</li> <li>When it does, wait a few seconds to give the connection a chance to close nicely, and then exit the program.</li> </ol>
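A minimal sketch of the steps above (illustrative only — `download_with_watchdog` and `fetch` are made-up names, and `fetch` stands in for the real ftplib `retrbinary` call):

```python
import os
import threading
import time

def download_with_watchdog(fetch, dest_path, expected_size, poll=0.05, grace=0.1):
    """Run fetch(dest_path) in a daemon thread and poll the on-disk size.

    Returns True once the file reaches expected_size, even if the worker
    thread is still blocked on a dead control connection.
    """
    worker = threading.Thread(target=fetch, args=(dest_path,))
    worker.daemon = True  # a hung transfer won't keep the process alive
    worker.start()
    while not (os.path.exists(dest_path)
               and os.path.getsize(dest_path) >= expected_size):
        time.sleep(poll)
    time.sleep(grace)  # give the connection a chance to close nicely
    return True
```

With ftplib, `fetch` could simply open the destination file and call `ftp.retrbinary('RETR ' + fname, f.write)`; the main thread then never needs to wait for that call to return.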
0
2009-07-09T22:49:44Z
[ "python", "ftp", "network-programming" ]
Python tell when an ftp transfer sits on completion
1,105,014
<p>I have to download some files from an FTP server. Seems prosaic enough. However, the way this server behaves is if the file is very large, the connection will just hang when the download ostensibly completes.</p> <p>How can I handle this gracefully using ftplib in python?</p> <p>Sample python code:</p> <pre><code>from ftplib import FTP ... ftp = FTP(host) ftp.login(login, passwd) files=ftp.nlst() ftp.set_debuglevel(2) for fname in files: ret_status = ftp.retrbinary('RETR ' + fname, open(fname, 'wb').write) </code></pre> <p>debug output from the above:</p> <pre><code>*cmd* 'TYPE I' *put* 'TYPE I\r\n' *get* '200 Type set to I.\r\n' *resp* '200 Type set to I.' *cmd* 'PASV' *put* 'PASV\r\n' *get* '227 Entering Passive Mode (0,0,0,0,10,52).\r\n' *resp* '227 Entering Passive Mode (0,0,0,0,10,52).' *cmd* 'RETR some_file' *put* 'RETR some_file\r\n' *get* '125 Data connection already open; Transfer starting.\r\n' *resp* '125 Data connection already open; Transfer starting.' [just sits there indefinitely] </code></pre> <p>This is what it looks like when I attempt the same download using curl -v:</p> <pre><code>* About to connect() to some_server port 21 (#0) * Trying some_ip... connected * Connected to some_server (some_ip) port 21 (#0) &lt; 220 Microsoft FTP Service &gt; USER some_user &lt; 331 Password required for some_user. &gt; PASS some_password &lt; 230 User some_user logged in. &gt; PWD &lt; 257 "/some_dir" is current directory. * Entry path is '/some_dir' &gt; EPSV * Connect data stream passively &lt; 500 'EPSV': command not understood * disabling EPSV usage &gt; PASV &lt; 227 Entering Passive Mode (0,0,0,0,11,116). * Trying some_ip... connected * Connecting to some_ip (some_ip) port 2932 &gt; TYPE I &lt; 200 Type set to I. &gt; SIZE some_file &lt; 213 229376897 &gt; RETR some_file &lt; 125 Data connection already open; Transfer starting. 
* Maxdownload = -1 * Getting file with size: 229376897 { [data not shown] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 218M 100 218M 0 0 182k 0 0:20:28 0:20:28 --:--:-- 0* FTP response timeout * control connection looks dead 100 218M 100 218M 0 0 182k 0 0:20:29 0:20:29 --:--:-- 0* Connection #0 to host some_server left intact curl: (28) FTP response timeout * Closing connection #0 </code></pre> <p>wget output is kind of interesting as well, it notices the connection is dead, then attempts to re-download the file which only confirms that it is already finished:</p> <pre><code>--2009-07-09 11:32:23-- ftp://some_server/some_file =&gt; `some_file' Resolving some_server... 0.0.0.0 Connecting to some_server|0.0.0.0|:21... connected. Logging in as some_user ... Logged in! ==&gt; SYST ... done. ==&gt; PWD ... done. ==&gt; TYPE I ... done. ==&gt; CWD not needed. ==&gt; SIZE some_file ... 229376897 ==&gt; PASV ... done. ==&gt; RETR some_file ... done. Length: 229376897 (219M) 100%[==========================================================&gt;] 229,376,897 387K/s in 18m 54s 2009-07-09 11:51:17 (198 KB/s) - Control connection closed. Retrying. --2009-07-09 12:06:18-- ftp://some_server/some_file (try: 2) =&gt; `some_file' Connecting to some_server|0.0.0.0|:21... connected. Logging in as some_user ... Logged in! ==&gt; SYST ... done. ==&gt; PWD ... done. ==&gt; TYPE I ... done. ==&gt; CWD not needed. ==&gt; SIZE some_file ... 229376897 ==&gt; PASV ... done. ==&gt; REST 229376897 ... done. ==&gt; RETR some_file ... done. Length: 229376897 (219M), 0 (0) remaining 100%[+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++] 229,376,897 --.-K/s in 0s 2009-07-09 12:06:18 (0.00 B/s) - `some_file' saved [229376897] </code></pre>
2
2009-07-09T16:30:56Z
1,940,568
<p>I think some debugging could be useful. Could you fold the class below into your code? (I didn't do it myself because I know this version works, and didn't want to risk making an error. You should be able to just put the class at the top of your file and replace the body of the loop with what I've written after #LOOP BODY)</p> <pre><code>class CounterFile(): def __init__(self, file, maxsize): self.file = file self.count = 0 self.maxsize = maxsize def write(self, bytes): self.count += len(bytes) print "total %d bytes / %d"%(self.count, self.maxsize) if self.count == self.maxsize: print " Should be complete" self.file.write(bytes) from ftplib import FTP ftp = FTP('ftp.gimp.org') ftp.login('ftp', 'thouis@gmail.com') ftp.set_debuglevel(2) ftp.cwd('/pub/gimp/v2.6/') fname = 'gimp-2.6.2.tar.bz2' # LOOP BODY sz = ftp.size(fname) if sz is None: print "Could not get size!" sz = 0 ret_status = ftp.retrbinary('RETR ' + fname, CounterFile(open(fname, 'wb'), sz).write) </code></pre>
0
2009-12-21T14:58:02Z
[ "python", "ftp", "network-programming" ]
Pythonic way to get some rows of a matrix
1,105,101
<p>I was thinking about some code that I wrote a few years ago in Python; at some point it had to get just some elements, by index, of a list of lists.</p> <p>I remember I did something like this:</p> <pre><code>def getRows(m, row_indices): tmp = [] for i in row_indices: tmp.append(m[i]) return tmp </code></pre> <p>Now that I've learnt a little bit more since then, I'd use a list comprehension like this:</p> <pre><code>[m[i] for i in row_indices] </code></pre> <p>But I'm still wondering if there's an even more pythonic way to do it. Any ideas?</p> <p>I would also like to know about alternatives with numpy or any other array libraries.</p>
1
2009-07-09T16:44:37Z
1,105,120
<p>It's the clean and obvious way. So, I'd say it doesn't get more Pythonic than that.</p>
4
2009-07-09T16:47:34Z
[ "list", "coding-style", "filtering", "python" ]
Pythonic way to get some rows of a matrix
1,105,101
<p>I was thinking about some code that I wrote a few years ago in Python; at some point it had to get just some elements, by index, of a list of lists.</p> <p>I remember I did something like this:</p> <pre><code>def getRows(m, row_indices): tmp = [] for i in row_indices: tmp.append(m[i]) return tmp </code></pre> <p>Now that I've learnt a little bit more since then, I'd use a list comprehension like this:</p> <pre><code>[m[i] for i in row_indices] </code></pre> <p>But I'm still wondering if there's an even more pythonic way to do it. Any ideas?</p> <p>I would also like to know about alternatives with numpy or any other array libraries.</p>
1
2009-07-09T16:44:37Z
1,105,177
<p>It's worth looking at <a href="http://www.scipy.org/Tentative%5FNumPy%5FTutorial" rel="nofollow">NumPy</a> for its slicing syntax. Scroll down in the linked page until you get to "Indexing, Slicing and Iterating".</p>
4
2009-07-09T16:58:03Z
[ "list", "coding-style", "filtering", "python" ]
Pythonic way to get some rows of a matrix
1,105,101
<p>I was thinking about some code that I wrote a few years ago in Python; at some point it had to get just some elements, by index, of a list of lists.</p> <p>I remember I did something like this:</p> <pre><code>def getRows(m, row_indices): tmp = [] for i in row_indices: tmp.append(m[i]) return tmp </code></pre> <p>Now that I've learnt a little bit more since then, I'd use a list comprehension like this:</p> <pre><code>[m[i] for i in row_indices] </code></pre> <p>But I'm still wondering if there's an even more pythonic way to do it. Any ideas?</p> <p>I would also like to know about alternatives with numpy or any other array libraries.</p>
1
2009-07-09T16:44:37Z
1,106,175
<p>As Curt said, it seems that Numpy is a good tool for this. Here's an example,</p> <pre><code>from numpy import * a = arange(16).reshape((4,4)) b = a[:, [1,2]] c = a[[1,2], :] print a print b print c </code></pre> <p>gives</p> <pre><code>[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]] [[ 1 2] [ 5 6] [ 9 10] [13 14]] [[ 4 5 6 7] [ 8 9 10 11]] </code></pre>
2
2009-07-09T20:06:33Z
[ "list", "coding-style", "filtering", "python" ]
How to exclude U+2028 from line separators in Python when reading file?
1,105,106
<p>I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (<a href="http://www.fileformat.info/info/unicode/char/2028/index.htm" rel="nofollow">http://www.fileformat.info/info/unicode/char/2028/index.htm</a>). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you!</p>
3
2009-07-09T16:44:49Z
1,105,207
<p>If you use Python 3.0 (note that I don't, so I can't test), according to the <a href="http://docs.python.org/3.0/library/functions.html#open" rel="nofollow">documentation</a> you can pass an optional <code>newline</code> parameter to <code>open</code> to specify which line separator to use. However, the documentation doesn't mention U+2028 at all (it only mentions <code>\r</code>, <code>\n</code>, and <code>\r\n</code> as line separators), so it's actually a surprise to me that this even occurs (although I can confirm this even with Python 2.6).</p>
0
2009-07-09T17:03:54Z
[ "python", "utf-8", "readline", "separator" ]
How to exclude U+2028 from line separators in Python when reading file?
1,105,106
<p>I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (<a href="http://www.fileformat.info/info/unicode/char/2028/index.htm" rel="nofollow">http://www.fileformat.info/info/unicode/char/2028/index.htm</a>). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you!</p>
3
2009-07-09T16:44:49Z
1,105,563
<p>I couldn't reproduce that behavior but here's a naive solution that just merges readline results until they don't end with U+2028.</p> <pre><code>#!/usr/bin/env python from __future__ import with_statement def my_readlines(f): buf = u"" for line in f.readlines(): uline = line.decode('utf8') buf += uline if uline[-1] != u'\u2028': yield buf buf = u"" if buf: yield buf with open("in.txt", "rb") as fin: for l in my_readlines(fin): print l </code></pre>
2
2009-07-09T18:04:17Z
[ "python", "utf-8", "readline", "separator" ]
How to exclude U+2028 from line separators in Python when reading file?
1,105,106
<p>I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (<a href="http://www.fileformat.info/info/unicode/char/2028/index.htm" rel="nofollow">http://www.fileformat.info/info/unicode/char/2028/index.htm</a>). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you!</p>
3
2009-07-09T16:44:49Z
1,106,449
<p>I can't duplicate this behaviour in python 2.5, 2.6 or 3.0 on mac os x - U+2028 is always treated as non-endline. Could you go into more detail about where you see this error?</p> <p>That said, here is a subclass of the "file" class that might do what you want:</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- class MyFile (file): def __init__(self, *arg, **kwarg): file.__init__(self, *arg, **kwarg) self.EOF = False def next(self, catchEOF = False): if self.EOF: raise StopIteration("End of file") try: nextLine = file.next(self) except StopIteration: self.EOF = True if not catchEOF: raise return "" if nextLine.decode("utf8")[-1] == u'\u2028': return nextLine+self.next(catchEOF = True) else: return nextLine A = MyFile("someUnicode.txt") for line in A: print line.strip("\n").decode("utf8") </code></pre>
1
2009-07-09T21:04:52Z
[ "python", "utf-8", "readline", "separator" ]
How to exclude U+2028 from line separators in Python when reading file?
1,105,106
<p>I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (<a href="http://www.fileformat.info/info/unicode/char/2028/index.htm" rel="nofollow">http://www.fileformat.info/info/unicode/char/2028/index.htm</a>). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you!</p>
3
2009-07-09T16:44:49Z
1,106,760
<p>Thanks to everyone for answering. I think I know why you might not have been able to replicate this.I just realized that it happens if I decode the file when opening, as in:</p> <pre><code>f = codecs.open(filename, encoding='utf-8') for line in f: print line </code></pre> <p>The lines are not separated on u2028, if I open the file first and then decode individual lines:</p> <pre><code>f = open(filename) for line in f: print line.decode("utf8") </code></pre> <p>(I'm using Python 2.6 on Windows. The file was originally UTF16LE and then it was converted into UTF8).</p> <p>This is very interesting, I guess I won't be using codecs.open much from now on :-).</p>
1
2009-07-09T22:24:58Z
[ "python", "utf-8", "readline", "separator" ]
How to exclude U+2028 from line separators in Python when reading file?
1,105,106
<p>I have a file in UTF-8, where some lines contain the U+2028 Line Separator character (<a href="http://www.fileformat.info/info/unicode/char/2028/index.htm" rel="nofollow">http://www.fileformat.info/info/unicode/char/2028/index.htm</a>). I don't want it to be treated as a line break when I read lines from the file. Is there a way to exclude it from separators when I iterate over the file or use readlines()? (Besides reading the entire file into a string and then splitting by \n.) Thank you!</p>
3
2009-07-09T16:44:49Z
1,107,249
<p>The codecs module is doing the RIGHT thing. U+2028 is named "LINE SEPARATOR" with the comment "may be used to represent this semantic unambiguously". So treating it as a line separator is sensible.</p> <p>Presumably the creator would not have put the U+2028 characters there without good reason ... does the file have u"\n" as well? Why do you want lines not to be split on U+2028?</p>
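The Unicode-aware behaviour is easy to see in isolation: `str.splitlines()` follows the Unicode definition of a line boundary (which includes U+2028), while splitting on "\n" alone does not. A quick illustration:

```python
# splitlines() honours Unicode line boundaries, including U+2028 (LINE
# SEPARATOR); a plain split on "\n" leaves U+2028 embedded in the text.
s = u"first\u2028second\nthird"
print(s.splitlines())  # ['first', 'second', 'third']
print(s.split("\n"))   # ['first\u2028second', 'third']
```

This is the same distinction the questioner ran into: a reader that treats the data as Unicode text (like `codecs.open`) may honour U+2028 as a line break, while a byte-oriented reader splitting only on \n will not.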
0
2009-07-10T01:15:31Z
[ "python", "utf-8", "readline", "separator" ]
storing uploaded photos and documents - filesystem vs database blob
1,105,429
<p><strong>My specific situation</strong></p> <p>Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. </p> <p>For photos, there will be thumbnails of each.</p> <p><strong>My question</strong></p> <p>My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. </p> <p>Should I store the images inside the database or the file system, or does it not matter? Do I need to be caching anything?</p> <p>Thanks in advance!</p>
10
2009-07-09T17:39:58Z
1,105,444
<p>File system. No contest. The data has to go through a lot more layers when you store it in the db.</p> <p>Edit on caching: If you want to cache the file while the user uploads it to ensure the operation finishes as soon as possible, dumping it straight to disk (i.e. file system) is about as quick as it gets. As long as the files aren't too big and you don't have too many concurrent users, you can 'cache' the file in memory, return to the user, then save to disk. To be honest, I wouldn't bother.</p> <p>If you are making the files available on the web after they have been uploaded and want to cache to improve the performance, file system is still the best option. You'll get caching for free (may have to adjust a setting or two) from your web server. You won't get this if the files are in the database.</p> <p>After all that it sounds like you should never store files in the database. Not the case; you just need a good reason to do so.</p>
9
2009-07-09T17:42:19Z
[ "python", "postgresql", "storage", "photos", "photo-management" ]
storing uploaded photos and documents - filesystem vs database blob
1,105,429
<p><strong>My specific situation</strong></p> <p>Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. </p> <p>For photos, there will be thumbnails of each.</p> <p><strong>My question</strong></p> <p>My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. </p> <p>Should I store the images inside the database or the file system, or does it not matter? Do I need to be caching anything?</p> <p>Thanks in advance!</p>
10
2009-07-09T17:39:58Z
1,105,453
<p>While there are exceptions to everything, the general case is that storing images in the file system is your best bet. You can easily provide caching services to the images, you don't need to worry about additional code to handle image processing, and you can easily do maintenance on the images if needed through standard image editing methods.</p> <p>It sounds like your business model fits nicely into this scenario.</p>
10
2009-07-09T17:43:25Z
[ "python", "postgresql", "storage", "photos", "photo-management" ]
storing uploaded photos and documents - filesystem vs database blob
1,105,429
<p><strong>My specific situation</strong></p> <p>Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. </p> <p>For photos, there will be thumbnails of each.</p> <p><strong>My question</strong></p> <p>My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. </p> <p>Should I store the images inside the database or the file system, or does it not matter? Do I need to be caching anything?</p> <p>Thanks in advance!</p>
10
2009-07-09T17:39:58Z
1,105,479
<p>Definitely store your images on the filesystem. One concern that folks don't consider enough when considering these types of things is bloat; cramming images as binary blobs into your database is a really quick way to bloat your DB way up. With a large database comes higher hardware requirements, more difficult replication and backup requirements, etc. Sticking your images on a filesystem means you can back them up / replicate them with many existing tools easily and simply. Storage space is far easier to increase on filesystem than in database, as well.</p>
3
2009-07-09T17:48:56Z
[ "python", "postgresql", "storage", "photos", "photo-management" ]
storing uploaded photos and documents - filesystem vs database blob
1,105,429
<p><strong>My specific situation</strong></p> <p>Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. </p> <p>For photos, there will be thumbnails of each.</p> <p><strong>My question</strong></p> <p>My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. </p> <p>Should I store the images inside the database or the file system, or does it not matter? Do I need to be caching anything?</p> <p>Thanks in advance!</p>
10
2009-07-09T17:39:58Z
1,105,534
<p>A DB <em>might</em> be faster than a filesystem on some operations, but loading a well-identified chunk of data hundreds of KB in size is not one of them.</p> <p>Also, a good frontend webserver (like nginx) is way faster than any webapp layer you'd have to write to read the blob from the DB. In some tests nginx is roughly on par with memcached for raw data serving of medium-sized files (like big HTML pages or medium-sized images).</p> <p>Go FS. No contest.</p>
1
2009-07-09T17:58:11Z
[ "python", "postgresql", "storage", "photos", "photo-management" ]
storing uploaded photos and documents - filesystem vs database blob
1,105,429
<p><strong>My specific situation</strong></p> <p>Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. </p> <p>For photos, there will be thumbnails of each.</p> <p><strong>My question</strong></p> <p>My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. </p> <p>Should I store the images inside the database or the file system, or does it not matter? Do I need to be caching anything?</p> <p>Thanks in advance!</p>
10
2009-07-09T17:39:58Z
1,106,705
<p>Maybe on a slight tangent, but in <a href="http://www.mysqlconf.com/mysql2009/public/schedule/detail/8232" rel="nofollow">this</a> video from the MySQL Conference, the presenter talks about how the website <a href="http://www.smugmug.com/" rel="nofollow">smugmug</a> uses MySQL and various other technologies for superior performance. I think the video builds upon some of the answers posted here, but also suggest ways of improving website performance outside the scope of the DB.</p>
1
2009-07-09T22:12:14Z
[ "python", "postgresql", "storage", "photos", "photo-management" ]
storing uploaded photos and documents - filesystem vs database blob
1,105,429
<p><strong>My specific situation</strong></p> <p>Property management web site where users can upload photos and lease documents. For every apartment unit, there might be 4 photos, so there won't be an overwhelming number of photos in the system. </p> <p>For photos, there will be thumbnails of each.</p> <p><strong>My question</strong></p> <p>My #1 priority is performance. For the end user, I want to load pages and show the image as fast as possible. </p> <p>Should I store the images inside the database, or file system, or doesn't matter? Do I need to be caching anything?</p> <p>Thanks in advance!</p>
10
2009-07-09T17:39:58Z
14,577,062
<p><em>Comment on Sheepy's answer.</em></p> <p>In general, storing files in SQL is better when the file size is less than 256 kilobytes, and worse when it is greater than 1 megabyte. Between 256 and 1024 kilobytes it depends on several factors. Read <a href="http://research.microsoft.com/apps/pubs/default.aspx?id=64525" rel="nofollow">this</a> to learn more about the reasons to use SQL or file systems.</p>
2
2013-01-29T06:40:03Z
[ "python", "postgresql", "storage", "photos", "photo-management" ]
Starting an INDIVIDUAL instance of a subclass from asynchat
1,105,814
<p>So the situation I have is that I have loaded more than one class that I've made that subclasses from <code>asynchat</code>, but I only want one of them to run. Of course, this doesn't work out when I call <code>asyncore.loop()</code> as they all begin. Is there any way to make only one of them begin running?</p> <p><strong>edit:</strong> I think it has something to do with the <code>map</code> parameter that can be passed to <code>asyncore.loop</code> but I can't get it working.</p> <p><strong>edit2:</strong> I got it. Basically I did the following:</p> <pre><code>asyncore.loop(map=my_instance._map) </code></pre>
0
2009-07-09T18:51:53Z
1,319,238
<p>For all who were curious, I figured it out. If you pass your instance's <code>_map</code> to <code>loop()</code> it seems to only start the single instance.</p> <p>Example:</p> <pre><code>my_asyncore_obj = SomeAsyncoreObj()
asyncore.loop(map=my_asyncore_obj._map)
</code></pre>
0
2009-08-23T18:42:45Z
[ "python", "asyncore" ]
Parse annotations from a pdf
1,106,098
<p>I want a python function that takes a pdf and returns a list of the text of the note annotations in the document. I have looked at python-poppler (<a href="https://code.launchpad.net/~poppler-python/poppler-python/trunk">https://code.launchpad.net/~poppler-python/poppler-python/trunk</a>) but I can not figure out how to get it to give me anything useful.</p> <p>I found the <code>get_annot_mapping</code> method and modified the demo program provided to call it via <code>self.current_page.get_annot_mapping()</code>, but I have no idea what to do with an AnnotMapping object. It seems to not be fully implemented, providing only the copy method.</p> <p>If there are any other libraries that provide this function, that's fine as well.</p>
16
2009-07-09T19:52:08Z
1,107,909
<p>I haven't ever used this, nor wanted this kind of feature, but I found <a href="http://www.unixuser.org/~euske/python/pdfminer/index.html" rel="nofollow">PDFMiner</a> - this link has information about basic usage; maybe this is what you are looking for?</p>
1
2009-07-10T05:50:55Z
[ "python", "pdf" ]
Parse annotations from a pdf
1,106,098
<p>I want a python function that takes a pdf and returns a list of the text of the note annotations in the document. I have looked at python-poppler (<a href="https://code.launchpad.net/~poppler-python/poppler-python/trunk">https://code.launchpad.net/~poppler-python/poppler-python/trunk</a>) but I can not figure out how to get it to give me anything useful.</p> <p>I found the <code>get_annot_mapping</code> method and modified the demo program provided to call it via <code>self.current_page.get_annot_mapping()</code>, but I have no idea what to do with an AnnotMapping object. It seems to not be fully implemented, providing only the copy method.</p> <p>If there are any other libraries that provide this function, that's fine as well.</p>
16
2009-07-09T19:52:08Z
1,116,901
<p>Turns out the bindings were incomplete. It is now fixed. <a href="https://bugs.launchpad.net/poppler-python/+bug/397850" rel="nofollow">https://bugs.launchpad.net/poppler-python/+bug/397850</a></p>
3
2009-07-12T20:57:11Z
[ "python", "pdf" ]
Parse annotations from a pdf
1,106,098
<p>I want a python function that takes a pdf and returns a list of the text of the note annotations in the document. I have looked at python-poppler (<a href="https://code.launchpad.net/~poppler-python/poppler-python/trunk">https://code.launchpad.net/~poppler-python/poppler-python/trunk</a>) but I can not figure out how to get it to give me anything useful.</p> <p>I found the <code>get_annot_mapping</code> method and modified the demo program provided to call it via <code>self.current_page.get_annot_mapping()</code>, but I have no idea what to do with an AnnotMapping object. It seems to not be fully implemented, providing only the copy method.</p> <p>If there are any other libraries that provide this function, that's fine as well.</p>
16
2009-07-09T19:52:08Z
12,502,560
<p>Just in case somebody is looking for some working code. Here is a script I use.</p> <pre><code>import poppler
import sys
import urllib
import os

def main():
    input_filename = sys.argv[1]
    # http://blog.hartwork.org/?p=612
    document = poppler.document_new_from_file('file://%s' % \
        urllib.pathname2url(os.path.abspath(input_filename)), None)
    n_pages = document.get_n_pages()
    all_annots = 0
    for i in range(n_pages):
        page = document.get_page(i)
        annot_mappings = page.get_annot_mapping()
        num_annots = len(annot_mappings)
        if num_annots &gt; 0:
            for annot_mapping in annot_mappings:
                if annot_mapping.annot.get_annot_type().value_name != 'POPPLER_ANNOT_LINK':
                    all_annots += 1
                    print 'page: {0:3}, {1:10}, type: {2:10}, content: {3}'.format(i+1, annot_mapping.annot.get_modified(), annot_mapping.annot.get_annot_type().value_nick, annot_mapping.annot.get_contents())
    if all_annots &gt; 0:
        print str(all_annots) + " annotation(s) found"
    else:
        print "no annotations found"

if __name__ == "__main__":
    main()
</code></pre>
10
2012-09-19T20:40:13Z
[ "python", "pdf" ]
improving Boyer-Moore string search
1,106,112
<p>I've been playing around with the Boyer-Moore string search algorithm and, starting with a base code set from Shriphani Palakodety, I created 2 additional versions (v2 and v3) - each making some modifications such as removing the len() function from the loop and then refactoring the while/if conditions. From v1 to v2 I see about a 10%-15% improvement and from v1 to v3 a 25%-30% improvement (significant). </p> <p>My question is: does anyone have any additional mods that would improve performance even more (if you can submit as a v4) - keeping the base 'algorithm' true to Boyer-Moore.</p> <p><hr /></p> <pre><code>#!/usr/bin/env python
#original Boyer-Moore implementor (v1): Shriphani Palakodety
import time

bcs = {} #the table

def goodSuffixShift(key):
    for i in xrange(len(key)-1, -1, -1):
        if key[i] not in bcs.keys():
            bcs[key[i]] = len(key)-i-1

#---------------------- v1 ----------------------
def searchv1(text, key):
    #base from Shriphani Palakodety fixed for single char
    i = len(key)-1
    index = len(key)-1
    j = i
    while True:
        if i &lt; 0:
            return j + 1
        elif j &gt; len(text):
            return "not found"
        elif text[j] != key[i] and text[j] not in bcs.keys():
            j += len(key)
            i = index
        elif text[j] != key[i] and text[j] in bcs.keys():
            j += bcs[text[j]]
            i = index
        else:
            j -= 1
            i -= 1

#---------------------- v2 ----------------------
def searchv2(text, key):
    #removed string len functions from loop
    len_text = len(text)
    len_key = len(key)
    i = len_key-1
    index = len_key-1
    j = i
    while True:
        if i &lt; 0:
            return j + 1
        elif j &gt; len_text:
            return "not found"
        elif text[j] != key[i] and text[j] not in bcs.keys():
            j += len_key
            i = index
        elif text[j] != key[i] and text[j] in bcs.keys():
            j += bcs[text[j]]
            i = index
        else:
            j -= 1
            i -= 1

#---------------------- v3 ----------------------
def searchv3(text, key):
    #from v2 plus modified 3rd if condition - breaking down the comparison for efficiency,
    #modified the while loop to include the first if condition (opposite of it)
    len_text = len(text)
    len_key = len(key)
    i = len_key-1
    index = len_key-1
    j = i
    while i &gt;= 0 and j &lt;= len_text:
        if text[j] != key[i]:
            if text[j] not in bcs.keys():
                j += len_key
                i = index
            else:
                j += bcs[text[j]]
                i = index
        else:
            j -= 1
            i -= 1
    if j &gt; len_text:
        return "not found"
    else:
        return j + 1

key_list = ["POWER", "HOUSE", "COMP", "SCIENCE", "SHRIPHANI", "BRUAH", "A", "H"]
text = "SHRIPHANI IS A COMPUTER SCIENCE POWERHOUSE"

t1 = time.clock()
for key in key_list:
    goodSuffixShift(key)
    #print searchv1(text, key)
    searchv1(text, key)
    bcs = {}
t2 = time.clock()
print 'v1 took %0.5f ms' % ((t2-t1)*1000.0)

t1 = time.clock()
for key in key_list:
    goodSuffixShift(key)
    #print searchv2(text, key)
    searchv2(text, key)
    bcs = {}
t2 = time.clock()
print 'v2 took %0.5f ms' % ((t2-t1)*1000.0)

t1 = time.clock()
for key in key_list:
    goodSuffixShift(key)
    #print searchv3(text, key)
    searchv3(text, key)
    bcs = {}
t2 = time.clock()
print 'v3 took %0.5f ms' % ((t2-t1)*1000.0)
</code></pre>
2
2009-07-09T19:54:07Z
1,107,298
<p>Using "in bcs.keys()" is creating a list and then doing an O(N) search of the list -- just use "in bcs". </p> <p>Do the goodSuffixShift(key) thing inside the search function. Two benefits: the caller has only one API to use, and you avoid having bcs as a global (horrid ** 2). </p> <p>Your indentation is incorrect in several places.</p> <p><strong>Update</strong> </p> <p>This is not the Boyer-Moore algorithm (which uses TWO lookup tables). It looks more like the Boyer-Moore-Horspool algorithm, which uses only the first BM table.</p> <p>A probable speedup: add the line 'bcsget = bcs.get' after setting up the bcs dict. Then replace:</p> <pre><code>if text[j] != key[i]:
    if text[j] not in bcs.keys():
        j += len_key
        i = index
    else:
        j += bcs[text[j]]
        i = index
</code></pre> <p>with:</p> <pre><code>if text[j] != key[i]:
    j += bcsget(text[j], len_key)
    i = index
</code></pre> <p><strong>Update 2 -- back to basics, like getting the code correct before you optimise</strong> </p> <p>Version 1 has some bugs which you have carried forward into versions 2 and 3. Some suggestions: </p> <p>Change the not-found response from "not found" to -1. This makes it compatible with text.find(key), which you can use to check your results.</p> <p>Get some more text values e.g. "R" * 20, "X" * 20, and "XXXSCIENCEYYY" for use with your existing key values.</p> <p>Lash up a test harness, something like this:</p> <pre><code>func_list = [searchv1, searchv2, searchv3]

def test():
    for text in text_list:
        print '==== text is', repr(text)
        for func in func_list:
            for key in key_list:
                expected = text.find(key)
                try:
                    result = func(text, key)
                except Exception, e:
                    print "EXCEPTION: %r expected:%d func:%s key:%r" % (e, expected, func.__name__, key)
                    continue
                if result != expected:
                    print "ERROR actual:%d expected:%d func:%s key:%r" % (result, expected, func.__name__, key)
</code></pre> <p>Run that, fix the errors in v1, carry those fixes forward, run the tests again until they're all OK. 
Then you can tidy up your timing harness along the same lines, and see what the performance is. Then you can report back here, and I'll give you my idea of what a searchv4 function should look like ;-)</p>
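Since the answer identifies the code as really being Boyer-Moore-Horspool (one table), here is a minimal sketch of that algorithm written from its standard description — not code taken from the answer — in modern Python, returning -1 on failure as suggested so that results can be checked directly against <code>str.find</code>:

```python
def horspool(text, key):
    """Boyer-Moore-Horspool search: index of first match in text, or -1."""
    m, n = len(key), len(text)
    if m == 0:
        return 0  # str.find also reports the empty key at index 0
    # Bad-character table: for each char of key (except its last position),
    # how far the window may jump when its final character is that char.
    shift = {key[i]: m - 1 - i for i in range(m - 1)}
    j = 0
    while j <= n - m:
        if text[j:j + m] == key:
            return j
        # Shift by the table entry for the char under the window's last
        # position; a char absent from the key allows a full jump of m.
        j += shift.get(text[j + m - 1], m)
    return -1
```

This is the testable baseline the answer asks for: every result can be cross-checked against `text.find(key)` before any timing is done.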
3
2009-07-10T01:37:30Z
[ "python", "performance" ]
How are these type of python decorators written?
1,106,223
<p>I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax:</p> <pre><code>@max_execs(5)
def my_method(*a, **k):
    # do something here
    pass
</code></pre> <p>I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a <code>__call__</code> method.</p> <p>The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.</p>
6
2009-07-09T20:16:56Z
1,106,242
<p>I know you said you didn't want a class, but unfortunately that's the only way I can think of how to do it off the top of my head.</p> <pre><code>class mymethodwrapper:
    def __init__(self):
        self.maxcalls = 0
    def mymethod(self):
        self.maxcalls += 1
        if self.maxcalls &gt; 5:
            return
        #rest of your code
        print "Code fired!"
</code></pre> <p>Fire it up like this:</p> <pre><code>a = mymethodwrapper()
for x in range(1000):
    a.mymethod()
</code></pre> <p>The output would be:</p> <pre><code>&gt;&gt;&gt; Code fired!
&gt;&gt;&gt; Code fired!
&gt;&gt;&gt; Code fired!
&gt;&gt;&gt; Code fired!
&gt;&gt;&gt; Code fired!
</code></pre>
0
2009-07-09T20:24:46Z
[ "python", "language-features", "decorator" ]
How are these type of python decorators written?
1,106,223
<p>I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax:</p> <pre><code>@max_execs(5)
def my_method(*a, **k):
    # do something here
    pass
</code></pre> <p>I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a <code>__call__</code> method.</p> <p>The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.</p>
6
2009-07-09T20:16:56Z
1,106,244
<p>A decorator is merely a callable that transforms a function into something else. In your case, <code>max_execs(5)</code> must be a callable that transforms a function into another callable object that will count and forward the calls.</p> <pre><code>class helper:
    def __init__(self, i, fn):
        self.i = i
        self.fn = fn
    def __call__(self, *args, **kwargs):
        if self.i &gt; 0:
            self.i = self.i - 1
            return self.fn(*args, **kwargs)

class max_execs:
    def __init__(self, i):
        self.i = i
    def __call__(self, fn):
        return helper(self.i, fn)
</code></pre> <p>I don't see why you would want to limit yourself to a function (and not a class). But if you really want to...</p> <pre><code>def max_execs(n):
    return lambda fn, i=n: helper(i, fn)
</code></pre>
4
2009-07-09T20:24:53Z
[ "python", "language-features", "decorator" ]
How are these type of python decorators written?
1,106,223
<p>I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax:</p> <pre><code>@max_execs(5)
def my_method(*a, **k):
    # do something here
    pass
</code></pre> <p>I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a <code>__call__</code> method.</p> <p>The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.</p>
6
2009-07-09T20:16:56Z
1,106,255
<p>There are two ways of doing it. The object-oriented way is to make a class:</p> <pre><code>class max_execs:
    def __init__(self, max_executions):
        self.max_executions = max_executions
        self.executions = 0

    def __call__(self, func):
        @wraps(func)
        def maybe(*args, **kwargs):
            if self.executions &lt; self.max_executions:
                self.executions += 1
                return func(*args, **kwargs)
            else:
                print "fail"
        return maybe
</code></pre> <p>See <a href="http://stackoverflow.com/questions/308999/what-does-functools-wraps-do">this question</a> for an explanation of <code>wraps</code>.</p> <p>I prefer the above OOP approach for this kind of decorator, since you've basically got a private count variable tracking the number of executions. However, the other approach is to use a closure, such as</p> <pre><code>def max_execs(max_executions):
    executions = [0]

    def actual_decorator(func):
        @wraps(func)
        def maybe(*args, **kwargs):
            if executions[0] &lt; max_executions:
                executions[0] += 1
                return func(*args, **kwargs)
            else:
                print "fail"
        return maybe

    return actual_decorator
</code></pre> <p>This involves three functions. The <code>max_execs</code> function is given a parameter for the number of executions and returns a decorator that will restrict you to that many calls. That function, the <code>actual_decorator</code>, does the same thing as our <code>__call__</code> method in the OOP example. The only weirdness is that since we don't have a class with private variables, we need to mutate the <code>executions</code> variable which is in the outer scope of our closure. Python 3.0 supports this with the <code>nonlocal</code> statement, but in Python 2.6 or earlier, we need to wrap our executions count in a list so that it can be mutated.</p>
3
2009-07-09T20:27:09Z
[ "python", "language-features", "decorator" ]
How are these type of python decorators written?
1,106,223
<p>I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax:</p> <pre><code>@max_execs(5)
def my_method(*a, **k):
    # do something here
    pass
</code></pre> <p>I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a <code>__call__</code> method.</p> <p>The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.</p>
6
2009-07-09T20:16:56Z
1,106,289
<p>This is what I whipped up. It doesn't use a class, but it does use function attributes:</p> <pre><code>def max_execs(n=5):
    def decorator(fn):
        fn.max = n
        fn.called = 0
        def wrapped(*args, **kwargs):
            fn.called += 1
            if fn.called &lt;= fn.max:
                return fn(*args, **kwargs)
            else:
                # Replace with your own exception, or something
                # else that you want to happen when the limit
                # is reached
                raise RuntimeError("max executions exceeded")
        return wrapped
    return decorator
</code></pre> <p><code>max_execs</code> returns a function called <code>decorator</code>, which in turn returns <code>wrapped</code>. <code>decorator</code> stores the max execs and current number of execs in two function attributes, which then get checked in <code>wrapped</code>.</p> <p><strong>Translation:</strong> When using the decorator like this:</p> <pre><code>@max_execs(5)
def f():
    print "hi!"
</code></pre> <p>You're basically doing something like this:</p> <pre><code>f = max_execs(5)(f)
</code></pre>
12
2009-07-09T20:32:25Z
[ "python", "language-features", "decorator" ]
How are these type of python decorators written?
1,106,223
<p>I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax:</p> <pre><code>@max_execs(5)
def my_method(*a, **k):
    # do something here
    pass
</code></pre> <p>I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a <code>__call__</code> method.</p> <p>The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.</p>
6
2009-07-09T20:16:56Z
1,106,349
<p>Without relying on state in a class, you have to save the state (count) on the function itself:</p> <pre><code>def max_execs(count):
    def new_meth(meth):
        meth.count = count
        def new(*a, **k):
            print meth.count
            meth.count -= 1
            if meth.count &gt;= 0:
                return meth(*a, **k)
        return new
    return new_meth

@max_execs(5)
def f():
    print "invoked"

[f() for _ in range(10)]
</code></pre> <p>It gives:</p> <pre><code>5
invoked
4
invoked
3
invoked
2
invoked
1
invoked
0
-1
-2
-3
-4
</code></pre>
2
2009-07-09T20:46:08Z
[ "python", "language-features", "decorator" ]
How are these type of python decorators written?
1,106,223
<p>I'd like to write a decorator that would limit the number of times a function can be executed, something along the following syntax:</p> <pre><code>@max_execs(5)
def my_method(*a, **k):
    # do something here
    pass
</code></pre> <p>I think it's possible to write this type of decorator, but I don't know how. I think a function won't be this decorator's first argument, right? I'd like a "plain decorator" implementation, not some class with a <code>__call__</code> method.</p> <p>The reason for this is to learn how they are written. Please explain the syntax, and how that decorator works.</p>
6
2009-07-09T20:16:56Z
1,106,423
<p>This method does not modify function internals; instead it wraps the function into a callable object.</p> <p>Using a class slows down execution by ~20% vs using the patched function!</p> <pre><code>def max_execs(n=1):
    class limit_wrapper:
        def __init__(self, fn, max):
            self.calls_left = max
            self.fn = fn
        def __call__(self, *a, **kw):
            if self.calls_left &gt; 0:
                self.calls_left -= 1
                return self.fn(*a, **kw)
            raise Exception("max num of calls is %d" % n)
    def decorator(fn):
        return limit_wrapper(fn, n)
    return decorator

@max_execs(2)
def fun():
    print "called"
</code></pre>
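For completeness, here is a compact restatement of the same limit-N decorator pattern in modern Python 3 — a sketch, not any single answer's code — using `functools.wraps` and a mutable cell for the counter, and raising once the limit is hit:

```python
import functools

def max_execs(n=5):
    """Decorator factory: allow at most n calls to the wrapped function."""
    def decorator(fn):
        calls = [0]  # mutable cell so wrapped() can update the count
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            if calls[0] >= n:
                raise RuntimeError("max executions exceeded")
            calls[0] += 1
            return fn(*args, **kwargs)
        return wrapped
    return decorator

@max_execs(2)
def greet():
    return "hi"
```

On Python 3 the `calls = [0]` cell could also be a plain counter with `nonlocal`; the list form is kept here because it mirrors the closure variants discussed in this thread.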
1
2009-07-09T21:01:00Z
[ "python", "language-features", "decorator" ]
Python: better way to open lots of sockets
1,106,433
<p>I have the following program to open lots of sockets, and hold them open to stress test one of our servers. There are several problems with this. I think it could be a lot more efficient than a recursive call, and it's really still opening sockets in a serial fashion rather than parallel fashion. I realize there are tools like ab that could probably simulate what I'm trying to do, but I'm hoping to increase my python knowledge. Is this something I should be rewriting as either multi-threaded or multi-process?</p> <pre><code>#!/usr/bin/env python

import socket, time, sys
sys.setrecursionlimit(2000)

def open_socket(counter):
    sockname = "s" + str(counter)
    port = counter + 3000
    sockname = socket.socket()
    sockname.bind(('localhost', port))
    sockname.listen(1)
    if counter == 2:
        time.sleep(20)
    elif counter &gt; 2:
        counter = counter - 1
        open_socket(counter)

open_socket(1500)
</code></pre>
1
2009-07-09T21:02:03Z
1,106,504
<p>Well, it's already multi-process - put a sleep in before calling open_socket, and run, say, 500 of them from a shell:</p> <pre><code>for i in `seq 500`; do ./yourprogram &amp; done
</code></pre> <p>You're not actually connecting to anything though - it seems you're setting up server sockets? If you need to connect to something you surely should test it with parallelism (multiple threads, or run many processes as shown above, or use asynchronous connects). <a href="http://docs.python.org/library/asyncore.html" rel="nofollow" title="Python asyncore">This</a> should be of interest to read.</p>
0
2009-07-09T21:16:24Z
[ "python", "sockets" ]
Python: better way to open lots of sockets
1,106,433
<p>I have the following program to open lots of sockets, and hold them open to stress test one of our servers. There are several problems with this. I think it could be a lot more efficient than a recursive call, and it's really still opening sockets in a serial fashion rather than parallel fashion. I realize there are tools like ab that could probably simulate what I'm trying to do, but I'm hoping to increase my python knowledge. Is this something I should be rewriting as either multi-threaded or multi-process?</p> <pre><code>#!/usr/bin/env python

import socket, time, sys
sys.setrecursionlimit(2000)

def open_socket(counter):
    sockname = "s" + str(counter)
    port = counter + 3000
    sockname = socket.socket()
    sockname.bind(('localhost', port))
    sockname.listen(1)
    if counter == 2:
        time.sleep(20)
    elif counter &gt; 2:
        counter = counter - 1
        open_socket(counter)

open_socket(1500)
</code></pre>
1
2009-07-09T21:02:03Z
1,106,521
<p>You can try using Twisted for this. It greatly simplifies networking in Python. Their site has <a href="http://twistedmatrix.com/projects/core/documentation/howto/index.html" rel="nofollow">some tutorials</a> to get you started.</p> <p>However, you could easily see using Python as overkill for this task. A faster option to hack up would be to just open 1500 instances of nc:</p> <pre><code>for i in {3000..4500}; do nc -l -p $i &amp; done
</code></pre>
2
2009-07-09T21:18:44Z
[ "python", "sockets" ]
Python: better way to open lots of sockets
1,106,433
<p>I have the following program to open lots of sockets, and hold them open to stress test one of our servers. There are several problems with this. I think it could be a lot more efficient than a recursive call, and it's really still opening sockets in a serial fashion rather than parallel fashion. I realize there are tools like ab that could probably simulate what I'm trying to do, but I'm hoping to increase my python knowledge. Is this something I should be rewriting as either multi-threaded or multi-process?</p> <pre><code>#!/usr/bin/env python

import socket, time, sys
sys.setrecursionlimit(2000)

def open_socket(counter):
    sockname = "s" + str(counter)
    port = counter + 3000
    sockname = socket.socket()
    sockname.bind(('localhost', port))
    sockname.listen(1)
    if counter == 2:
        time.sleep(20)
    elif counter &gt; 2:
        counter = counter - 1
        open_socket(counter)

open_socket(1500)
</code></pre>
1
2009-07-09T21:02:03Z
1,106,537
<p>I was puzzled why you would use recursion instead of a simple loop. My guess is that with a simple loop, you would have overwritten the variable sockname again and again, so that Python's garbage collection would actually close the previous socket after you created the next one. The solution is to store them all in a list, to prevent Python from garbage-collecting them:</p> <pre><code>def open_socket(counter):
    sockets = []
    for i in range(counter):
        s = socket.socket()
        s.bind(('localhost', i+3000))
        s.listen(1)
        sockets.append(s)
    time.sleep(20)
</code></pre> <p>Also notice that in your code, the first assignment to sockname is completely redundant, as it is overwritten by the second assignment.</p>
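The keep-references fix can be sketched as a self-contained, testable function. One deviation from the original, chosen for illustration only: it binds to port 0 so the OS picks free ephemeral ports, instead of hard-coding 3000+i:

```python
import socket

def open_listeners(count):
    """Open `count` listening sockets and keep references to them in a
    list, so garbage collection cannot close them while we hold them."""
    sockets = []
    for _ in range(count):
        s = socket.socket()
        s.bind(('localhost', 0))  # port 0: let the OS choose a free port
        s.listen(1)
        sockets.append(s)
    return sockets  # caller keeps this list alive, then closes each socket
```

Returning the list (rather than sleeping inside the function) lets the caller decide how long to hold the sockets open and close them deterministically afterwards.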
4
2009-07-09T21:22:30Z
[ "python", "sockets" ]
How does os.path map to posixpath.pyc and not os/path.py?
1,106,455
<p>What is the underlying mechanism in Python that handles such "aliases"?</p> <pre><code>&gt;&gt;&gt; import os.path &gt;&gt;&gt; os.path.__file__ '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/posixpath.pyc' </code></pre>
3
2009-07-09T21:05:59Z
1,106,464
<p>Perhaps os uses import as?</p> <pre><code>import posixpath as path </code></pre>
0
2009-07-09T21:08:31Z
[ "python", "import", "path", "module", "alias" ]
How does os.path map to posixpath.pyc and not os/path.py?
1,106,455
<p>What is the underlying mechanism in Python that handles such "aliases"?</p> <pre><code>&gt;&gt;&gt; import os.path &gt;&gt;&gt; os.path.__file__ '/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/posixpath.pyc' </code></pre>
3
2009-07-09T21:05:59Z
1,106,498
<p>Taken from os.py on CPython 2.6:</p> <pre><code>sys.modules['os.path'] = path

from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,
    devnull)
</code></pre> <p><code>path</code> is defined earlier as the platform-specific module:</p> <pre><code>if 'posix' in _names:
    name = 'posix'
    linesep = '\n'
    from posix import *
    try:
        from posix import _exit
    except ImportError:
        pass
    import posixpath as path
    import posix
    __all__.extend(_get_exports_list(posix))
    del posix

elif 'nt' in _names:
    # ...
</code></pre>
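The `sys.modules['os.path'] = path` registration trick can be reproduced in miniature to show why `import os.path` resolves to whatever object is registered under that key. Everything named `mypkg` below is made up for this sketch, not a real module:

```python
import sys
import types

# Build a fake package and a platform-specific "implementation" module.
pkg = types.ModuleType("mypkg")
impl = types.ModuleType("mypkg_posix_like")
impl.join = lambda *parts: "/".join(parts)

# Alias it, just as os.py does with sys.modules['os.path'] = path.
pkg.path = impl
sys.modules["mypkg"] = pkg
sys.modules["mypkg.path"] = impl

import mypkg.path  # the import system finds both keys in sys.modules
print(mypkg.path.join("a", "b"))  # prints a/b
```

Because the import machinery checks `sys.modules` first, `import mypkg.path` never touches the filesystem here — which is exactly why `os.path.__file__` reports `posixpath` rather than an `os/path.py` file.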
6
2009-07-09T21:15:30Z
[ "python", "import", "path", "module", "alias" ]
installing easy_install for Python 2.6.2 (missing?)
1,106,574
<p>I am not a python user, I'm just trying to get couchdb-dump up and running and it's in an "egg" file which I guess needs easy_install. I have Python 2.6.2 running on my computer but it seems to know nothing about easy_install or setuptools... help! What can I do to fix this??? </p> <p><strong>edit:</strong> you may note from the <a href="http://pypi.python.org/pypi/setuptools#windows" rel="nofollow">setuptools</a> page that there are Windows .exe installers for 2.3, 2.4, and 2.5, but not 2.6. What the heck?!?!</p> <p>argh, this is a <a href="http://stackoverflow.com/questions/309412/how-to-setup-setuptools-for-python-2-6-on-windows">duplicate question</a>, sorry.</p> <p>p.s. <a href="http://stackoverflow.com/questions/309412/how-to-setup-setuptools-for-python-2-6-on-windows/425318#425318">this solution</a> is the one that seemed simplest and it worked for me. </p>
0
2009-07-09T21:31:36Z
1,106,613
<p>I don't like the whole easy_install thing either.</p> <p>But the solution is to download the source, untar it, and type </p> <pre><code>python setup.py install </code></pre>
7
2009-07-09T21:43:29Z
[ "python", "easy-install" ]
installing easy_install for Python 2.6.2 (missing?)
1,106,574
<p>I am not a python user, I'm just trying to get couchdb-dump up and running and it's in an "egg" file which I guess needs easy_install. I have Python 2.6.2 running on my computer but it seems to know nothing about easy_install or setuptools... help! What can I do to fix this??? </p> <p><strong>edit:</strong> you may note from the <a href="http://pypi.python.org/pypi/setuptools#windows" rel="nofollow">setuptools</a> page that there are Windows .exe installers for 2.3, 2.4, and 2.5, but not 2.6. What the heck?!?!</p> <p>argh, this is a <a href="http://stackoverflow.com/questions/309412/how-to-setup-setuptools-for-python-2-6-on-windows">duplicate question</a>, sorry.</p> <p>p.s. <a href="http://stackoverflow.com/questions/309412/how-to-setup-setuptools-for-python-2-6-on-windows/425318#425318">this solution</a> is the one that seemed simplest and it worked for me. </p>
0
2009-07-09T21:31:36Z
1,107,267
<p>For installing setuptools for 2.6 download "ez_setup.py" from:</p> <p><a href="http://svn.python.org/projects/sandbox/branches/setuptools-0.6/#egg=setuptools-dev06" rel="nofollow">http://svn.python.org/projects/sandbox/branches/setuptools-0.6/#egg=setuptools-dev06</a></p> <p>And run it. setuptools should be installed. This will place easy_install in your python26/Scripts directory, make sure this is in your PATH, and then you should be able to use easy_install.</p>
0
2009-07-10T01:22:34Z
[ "python", "easy-install" ]
installing easy_install for Python 2.6.2 (missing?)
1,106,574
<p>I am not a python user, I'm just trying to get couchdb-dump up and running and it's in an "egg" file which I guess needs easy_install. I have Python 2.6.2 running on my computer but it seems to know nothing about easy_install or setuptools... help! What can I do to fix this??? </p> <p><strong>edit:</strong> you may note from the <a href="http://pypi.python.org/pypi/setuptools#windows" rel="nofollow">setuptools</a> page that there are Windows .exe installers for 2.3, 2.4, and 2.5, but not 2.6. What the heck?!?!</p> <p>argh, this is a <a href="http://stackoverflow.com/questions/309412/how-to-setup-setuptools-for-python-2-6-on-windows">duplicate question</a>, sorry.</p> <p>p.s. <a href="http://stackoverflow.com/questions/309412/how-to-setup-setuptools-for-python-2-6-on-windows/425318#425318">this solution</a> is the one that seemed simplest and it worked for me. </p>
0
2009-07-09T21:31:36Z
6,418,280
<p><a href="http://pypi.python.org/pypi/setuptools" rel="nofollow">http://pypi.python.org/pypi/setuptools</a><br> ... has been updated and has windows installers for Python 2.6 and 2.7</p> <p>(note: if you need 64-bit windows installer: <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a>)</p>
1
2011-06-20T22:23:00Z
[ "python", "easy-install" ]
Do I have any obligations if I upload an egg to the CheeseShop?
1,106,759
<p>Suppose I'd like to upload some eggs on the Cheese Shop. Do I have any obligation? Am I required to provide a license? Am I required to provide tests? Will I have any obligations to the users of this egg ( if any ) ?</p> <p>I haven't really released anything as open source 'till now, and I'd like to know the process.</p>
14
2009-07-09T22:24:51Z
1,106,782
<p>You will need to license the code. Despite what some people may think, the authors of content actually need to grant the license on their own. The Cheese Shop can't grant a license to other people to use the content until you've granted it as the copyright owner.</p>
3
2009-07-09T22:31:20Z
[ "python", "egg", "pypi" ]
Do I have any obligations if I upload an egg to the CheeseShop?
1,106,759
<p>Suppose I'd like to upload some eggs on the Cheese Shop. Do I have any obligation? Am I required to provide a license? Am I required to provide tests? Will I have any obligations to the users of this egg ( if any ) ?</p> <p>I haven't really released anything as open source 'till now, and I'd like to know the process.</p>
14
2009-07-09T22:24:51Z
1,106,807
<p>See <a href="http://wiki.python.org/moin/CheeseShopTutorial" rel="nofollow">CheeseShopTutorial</a> and <a href="http://docs.python.org/distutils/setupscript.html" rel="nofollow">Writing the Setup Script</a>.</p>
4
2009-07-09T22:38:10Z
[ "python", "egg", "pypi" ]
Do I have any obligations if I upload an egg to the CheeseShop?
1,106,759
<p>Suppose I'd like to upload some eggs on the Cheese Shop. Do I have any obligation? Am I required to provide a license? Am I required to provide tests? Will I have any obligations to the users of this egg ( if any ) ?</p> <p>I haven't really released anything as open source 'till now, and I'd like to know the process.</p>
14
2009-07-09T22:24:51Z
1,108,038
<ol> <li><p>You have an obligation to register the package with a useful description. Nothing is more frustrating than finding a package that <em>may</em> be good, but you don't know, because there is no description.</p> <p>Typical example of a lazy developer: <a href="http://pypi.python.org/pypi/gevent/0.9.1">http://pypi.python.org/pypi/gevent/0.9.1</a></p> <p>Better: <a href="http://pypi.python.org/pypi/itty/0.6.0">http://pypi.python.org/pypi/itty/0.6.0</a></p> <p>Fantastic (even a changelog!): <a href="http://pypi.python.org/pypi/jarn.mkrelease/2.0b2">http://pypi.python.org/pypi/jarn.mkrelease/2.0b2</a></p></li> <li><p>On the CheeseShop you can also choose to just register the package but not upload the code; instead you can provide your own download URL. <em>DO NOT DO THAT!</em> It means that your software becomes unavailable when the CheeseShop is down <em>or</em> when your server is down. It means that if you want to install a system that uses your software, the chances that it will fail because a server is down somewhere double. And with a big system, when you have five different servers involved... Always upload the package to the CheeseShop as well as registering it!</p></li> <li><p>You also have the obligation not to remove the egg (except under exceptional circumstances), as people who start to depend on a specific version of your software will fail if you remove that version.</p> <p>If you don't want to support the software anymore, upload a new version with a big fat "THIS IS NO LONGER SUPPORTED SOFTWARE" or something on top of the description.</p> <p>And don't upload development versions, like "0.1dev-r73183".</p></li> <li><p>And although you may not have an "obligation" to license your software, you kinda have to, or the upload becomes pointless. If you are unsure, go with GPL.</p></li> </ol> <p>That's it as far as I'm concerned. Sorry about the ranting. ;-)</p>
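Point 1 above boils down to filling in the metadata fields that the package index renders. A minimal sketch — every name, URL, and value here is a hypothetical placeholder, not a real project:

```python
# Hypothetical setup.py metadata -- all values below are placeholders.
# The description/long_description fields are what the index page renders,
# so leaving them empty produces the "lazy developer" pages criticised above.
metadata = dict(
    name="example-package",
    version="0.1.0",                      # a real release, not "0.1dev-r73183"
    description="One-line summary shown in index search listings",
    long_description="A few paragraphs on what the package does,\n"
                     "how to install it, and a short changelog.",
    url="http://example.com/example-package",
    license="GPL",                        # pick one; an unlicensed upload is pointless
)

# In a real setup.py this dict would be passed straight to
# distutils/setuptools:  setup(**metadata)
print(sorted(metadata))
```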
9
2009-07-10T06:43:17Z
[ "python", "egg", "pypi" ]
Find functions explicitly defined in a module (python)
1,106,840
<p>Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this:</p> <pre><code>from datetime import date, datetime def test(): return "This is a real method" </code></pre> <p>Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see:</p> <p>['date', 'datetime', 'test']</p> <p>Is there any way to exclude imports? Or another way to find out what's defined in a module?</p>
18
2009-07-09T22:49:03Z
1,106,856
<p>Every class in python has a <code>__module__</code> attribute. You can use its value to perform filtering. Take a look at <a href="http://diveintopython.net/file_handling/more_on_modules.html" rel="nofollow">example 6.14 in dive into python</a></p>
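A minimal sketch of that <code>__module__</code> check (the module and function names below are made up for the demo):

```python
import types

def defined_here(module):
    """Names of functions defined in `module` itself, skipping imports."""
    return sorted(
        name
        for name, obj in vars(module).items()
        if isinstance(obj, types.FunctionType)
        and obj.__module__ == module.__name__
    )

# Build a throwaway module that mirrors the question's example:
mod = types.ModuleType("demo")
exec("from os.path import join\ndef test():\n    return 'real'", vars(mod))

print(defined_here(mod))  # ['test'] -- the imported 'join' is filtered out
```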
1
2009-07-09T22:53:53Z
[ "python", "introspection" ]
Find functions explicitly defined in a module (python)
1,106,840
<p>Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this:</p> <pre><code>from datetime import date, datetime def test(): return "This is a real method" </code></pre> <p>Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see:</p> <p>['date', 'datetime', 'test']</p> <p>Is there any way to exclude imports? Or another way to find out what's defined in a module?</p>
18
2009-07-09T22:49:03Z
1,106,871
<p>the python <a href="http://docs.python.org/3.0/library/inspect.html" rel="nofollow">inspect</a> module is probably what you're looking for here.</p> <pre><code>import inspect if inspect.ismethod(methodInQuestion): pass # It's a method </code></pre>
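One caveat, since the question is about module-level callables: <code>inspect.ismethod</code> only matches bound methods, so for names defined at module level <code>inspect.isfunction</code> is the predicate to use. A quick demo:

```python
import inspect

def func():                  # a plain module-level function
    pass

class C(object):
    def method(self):
        pass

print(inspect.isfunction(func))        # True
print(inspect.ismethod(func))          # False -- not a bound method
print(inspect.ismethod(C().method))    # True
print(inspect.isfunction(C().method))  # False
```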
0
2009-07-09T22:56:42Z
[ "python", "introspection" ]
Find functions explicitly defined in a module (python)
1,106,840
<p>Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this:</p> <pre><code>from datetime import date, datetime def test(): return "This is a real method" </code></pre> <p>Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see:</p> <p>['date', 'datetime', 'test']</p> <p>Is there any way to exclude imports? Or another way to find out what's defined in a module?</p>
18
2009-07-09T22:49:03Z
1,106,875
<p>How about the following:</p> <pre><code>grep ^def my_module.py </code></pre>
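The same textual heuristic can be done from Python if no shell is at hand. Note this scans source text, so like <code>grep</code> it only sees unindented <code>def</code> lines and misses anything defined dynamically:

```python
import re

def top_level_defs(source):
    """Names on unindented 'def' lines -- the Python analogue of `grep ^def`."""
    return re.findall(r"^def\s+(\w+)", source, re.MULTILINE)

src = '''from datetime import date, datetime

def test():
    def inner():      # indented, so not reported
        pass
    return "This is a real method"
'''
print(top_level_defs(src))  # ['test']
```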
4
2009-07-09T22:59:22Z
[ "python", "introspection" ]
Find functions explicitly defined in a module (python)
1,106,840
<p>Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this:</p> <pre><code>from datetime import date, datetime def test(): return "This is a real method" </code></pre> <p>Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see:</p> <p>['date', 'datetime', 'test']</p> <p>Is there any way to exclude imports? Or another way to find out what's defined in a module?</p>
18
2009-07-09T22:49:03Z
1,107,010
<p>You can check <code>__module__</code> attribute of the function in question. I say "function" because a method belongs to a class usually ;-).</p> <p>BTW, a class actually also has <code>__module__</code> attribute.</p>
2
2009-07-09T23:47:26Z
[ "python", "introspection" ]
Find functions explicitly defined in a module (python)
1,106,840
<p>Ok I know you can use the dir() method to list everything in a module, but is there any way to see only the functions that are defined in that module? For example, assume my module looks like this:</p> <pre><code>from datetime import date, datetime def test(): return "This is a real method" </code></pre> <p>Even if i use inspect() to filter out the builtins, I'm still left with anything that was imported. E.g I'll see:</p> <p>['date', 'datetime', 'test']</p> <p>Is there any way to exclude imports? Or another way to find out what's defined in a module?</p>
18
2009-07-09T22:49:03Z
1,107,150
<p>Are you looking for something like this?</p> <pre><code>import sys, inspect def is_mod_function(mod, func): return inspect.isfunction(func) and inspect.getmodule(func) == mod def list_functions(mod): return [func.__name__ for func in mod.__dict__.itervalues() if is_mod_function(mod, func)] print 'functions in current module:\n', list_functions(sys.modules[__name__]) print 'functions in inspect module:\n', list_functions(inspect) </code></pre> <p>EDIT: Changed variable names from 'meth' to 'func' to avoid confusion (we're dealing with functions, not methods, here).</p>
21
2009-07-10T00:36:47Z
[ "python", "introspection" ]
Is 'if element in aList' possible with Django templates?
1,106,849
<p>Does something like the python</p> <pre><code>if "a" in ["a", "b", "c"]: pass </code></pre> <p>exist in Django templates? </p> <p>If not, is there an easy way to implement it?</p>
2
2009-07-09T22:50:54Z
1,106,928
<p>This is something you usually do in your view functions.</p> <pre><code>aList = ["a", "b", "c"] listAndFlags = [ (item, item in aList) for item in someQuerySet ] </code></pre> <p>Now you have a simple list of (item, flag) pairs that you can pass to the template and display:</p> <pre><code>{% for item, flag in listAndFlags %} &lt;tr&gt;&lt;td class="{{flag}}"&gt;{{item}}&lt;/td&gt;&lt;/tr&gt; {% endfor %} </code></pre>
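Stripped of the Django specifics, the pairing trick is plain Python (the sample data below is made up):

```python
aList = ["a", "b", "c"]
items = ["a", "x", "c"]   # stand-in for the queryset

# Tag each item with whether it appears in aList:
listAndFlags = [(item, item in aList) for item in items]
print(listAndFlags)  # [('a', True), ('x', False), ('c', True)]
```

For a large <code>aList</code>, converting it to a <code>set</code> first makes each membership test constant-time instead of a linear scan.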
2
2009-07-09T23:17:23Z
[ "python", "django", "django-templates" ]
Is 'if element in aList' possible with Django templates?
1,106,849
<p>Does something like the python</p> <pre><code>if "a" in ["a", "b", "c"]: pass </code></pre> <p>exist in Django templates? </p> <p>If not, is there an easy way to implement it?</p>
2
2009-07-09T22:50:54Z
1,106,960
<p>Not directly; there is no <code>if x in iterable</code> template tag included.</p> <p>This is not typically something needed inside the templates themselves. Without more context about the surrounding problem, a good answer cannot be given. We can guess that you either want to pass a nested list, as in the other answer, or you really just need to do more of the calculation in the view and pass a single list (testing for emptiness if you don't want it to do anything).</p> <p>Hope this helps.</p>
1
2009-07-09T23:28:56Z
[ "python", "django", "django-templates" ]
Python: StopIteration exception and list comprehensions
1,106,903
<p>I'd like to read at most 20 lines from a csv file:</p> <pre><code>rows = [csvreader.next() for i in range(20)] </code></pre> <p>Works fine if the file has 20 or more rows, fails with a StopIteration exception otherwise.</p> <p>Is there an elegant way to deal with an iterator that could throw a StopIteration exception in a list comprehension or should I use a regular for loop?</p>
8
2009-07-09T23:09:56Z
1,106,921
<p>You can use <a href="http://docs.python.org/library/itertools.html#itertools.islice"><code>itertools.islice</code></a>. It is the iterator version of list slicing. If the iterator has less than 20 elements, it will return all elements.</p> <pre><code>import itertools rows = list(itertools.islice(csvreader, 20)) </code></pre>
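A quick check of the short-input behaviour, with a plain iterator standing in for the csv reader:

```python
import itertools

short_reader = iter(range(5))            # pretend the file has only 5 rows
rows = list(itertools.islice(short_reader, 20))
print(rows)         # [0, 1, 2, 3, 4] -- no StopIteration leaks out

long_reader = iter(range(100))
capped = list(itertools.islice(long_reader, 20))
print(len(capped))  # 20 -- never more than requested
```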
10
2009-07-09T23:15:50Z
[ "python", "iterator", "list-comprehension", "stopiteration" ]
Python: StopIteration exception and list comprehensions
1,106,903
<p>I'd like to read at most 20 lines from a csv file:</p> <pre><code>rows = [csvreader.next() for i in range(20)] </code></pre> <p>Works fine if the file has 20 or more rows, fails with a StopIteration exception otherwise.</p> <p>Is there an elegant way to deal with an iterator that could throw a StopIteration exception in a list comprehension or should I use a regular for loop?</p>
8
2009-07-09T23:09:56Z
1,106,952
<p>If for whatever reason you also need to keep track of the line number, I'd recommend:</p> <pre><code>rows = zip(xrange(20), csvreader) </code></pre> <p>If not, you can strip the numbers out afterwards... or, better, pick a more suitable option from the start :-)</p>
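This works because <code>zip</code> stops as soon as the shorter input is exhausted. A sketch with a fake reader (in Python 3, <code>range</code> replaces <code>xrange</code> and <code>zip</code> is already lazy):

```python
reader = iter(["row%d" % i for i in range(5)])   # only 5 rows available

numbered = list(zip(range(20), reader))          # stops after 5 pairs
print(numbered[0])        # (0, 'row0')
print(len(numbered))      # 5

# Dropping the counters afterwards, if only the rows are wanted:
rows = [row for _, row in numbered]
print(rows)               # ['row0', 'row1', 'row2', 'row3', 'row4']
```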
-1
2009-07-09T23:26:24Z
[ "python", "iterator", "list-comprehension", "stopiteration" ]
Python: StopIteration exception and list comprehensions
1,106,903
<p>I'd like to read at most 20 lines from a csv file:</p> <pre><code>rows = [csvreader.next() for i in range(20)] </code></pre> <p>Works fine if the file has 20 or more rows, fails with a StopIteration exception otherwise.</p> <p>Is there an elegant way to deal with an iterator that could throw a StopIteration exception in a list comprehension or should I use a regular for loop?</p>
8
2009-07-09T23:09:56Z
1,247,653
<p><a href="http://docs.python.org/library/itertools.html#itertools.izip" rel="nofollow"><code>itertools.izip</code></a> (<a href="http://doc.astro-wise.org/itertools.html#islice" rel="nofollow">2</a>) provides a way to easily make list comprehensions work, but <code>islice</code> looks to be the way to go in this case.</p> <pre><code>from itertools import izip [row for (row,i) in izip(csvreader, range(20))] </code></pre>
0
2009-08-08T01:00:55Z
[ "python", "iterator", "list-comprehension", "stopiteration" ]
How to model an object with references to arbitrary number of arbitrary field types? (django orm)
1,107,024
<p>I'd like to define a set of model/objects which allow for one to represent the relationship: field_set has many fields where fields are django.db.model field objects (IPAddressField, FilePathField etc). </p> <p>My goals is to have a ORM model which supports the following type of 'api'.</p> <p>From a controller view lets say:</p> <pre><code># Desired api below def homepage(request): from mymodels.models import ProfileGroup, FieldSet, Field group = ProfileGroup() group.name = 'Profile Information' group.save() geographicFieldSet = FieldSet() # Bind this 'field set' to the 'profile group' geographicFieldSet.profilegroup = group address_field = Field() address_field.name = 'Street Address' address_field.f = models.CharField(max_length=100) # Bind this field to the geo field set address_field.fieldset = geographicFieldSet town_field = Field() town_field.name = 'Town / City' town_field.f = models.CharField(max_length=100) # Bind this field to the geo field set town_field.fieldset = geographicFieldSet demographicFieldSet = FieldSet() demographicFieldSet.profilegroup = group age_field = Field() age_field.name = 'Age' age_field.f = models.IntegerField() # Bind this field to the demo field set age_field.fieldset = demographicFieldSet # Define a 'weight_field' here similar to 'age' above. for obj in [geographicFieldSet, town_field, address_field, demographicFieldSet, age_field, weight_field]: obj.save() # Id also add some methods to these model objects so that they # know how to render themselves on the page... return render_to_response('page.templ', {'profile_group':group}) </code></pre> <p>Essentially I want to support 'logically grouped fields' since I see myself supporting many 'field sets' of different types thus my desire for a meaningful abstraction.</p> <p>Id like to define this model so that I can define a group of fields where the # of fields is arbitrary as is the field type. 
So I may have a field group 'Geographic' which includes the fields 'State' (CharField w/ choices), 'Town' (TextField) etc. </p> <p>Heres what Ive come up with so far:</p> <pre><code>class ProfileGroup(models.Model): name = models.CharField(max_length=200) # FieldSets have many Fields class FieldSet(models.Model): name = models.CharField(max_length=200) profilegroup = models.ForeignKey(ProfileGroup) class Field(models.Model): f = models.Field() fieldset = models.ForeignKey(FieldSet) </code></pre> <p>Though using these models produces an error in the shell and ultimately doesnt allow me to store arbitrary fields. </p> <pre><code>In [1]: from splink.profile_accumulator.models import Field, FieldSet, ProfileGroup In [2]: import django.db In [3]: profile_group = ProfileGroup() In [4]: profile_group.name = 'profile group name' In [5]: profile_group.save() In [6]: field_set = FieldSet() In [7]: field_set.name = 'field set name' In [8]: field_set.profilegroup = profile_group In [9]: field_set.save() In [10]: field = Field() In [11]: field.name = 'field name' In [12]: field.f = django.db.models.FileField() In [13]: field.save() --------------------------------------------------------------------------- ProgrammingError Traceback (most recent call last) /var/www/splinkpage.com/splinkpage.pinax/splink/&lt;ipython console&gt; in &lt;module&gt;() /usr/lib/pymodules/python2.5/django/db/models/base.pyc in save(self, force_insert, force_update) 309 raise ValueError("Cannot force both insert and updating in " 310 "model saving.") --&gt; 311 self.save_base(force_insert=force_insert, force_update=force_update) 312 313 save.alters_data = True /usr/lib/pymodules/python2.5/django/db/models/base.pyc in save_base(self, raw, cls, force_insert, force_update) 381 if values: 382 # Create a new record. --&gt; 383 result = manager._insert(values, return_id=update_pk) 384 else: 385 # Create a new record with defaults for everything. 
/usr/lib/pymodules/python2.5/django/db/models/manager.pyc in _insert(self, values, **kwargs) 136 137 def _insert(self, values, **kwargs): --&gt; 138 return insert_query(self.model, values, **kwargs) 139 140 def _update(self, values, **kwargs): /usr/lib/pymodules/python2.5/django/db/models/query.pyc in insert_query(model, values, return_id, raw_values) 890 part of the public API. 891 """ 892 query = sql.InsertQuery(model, connection) 893 query.insert_values(values, raw_values) --&gt; 894 return query.execute_sql(return_id) /usr/lib/pymodules/python2.5/django/db/models/sql/subqueries.pyc in execute_sql(self, return_id) 307 308 def execute_sql(self, return_id=False): --&gt; 309 cursor = super(InsertQuery, self).execute_sql(None) 310 if return_id: 311 return self.connection.ops.last_insert_id(cursor, /usr/lib/pymodules/python2.5/django/db/models/sql/query.pyc in execute_sql(self, result_type) 1732 1733 cursor = self.connection.cursor() -&gt; 1734 cursor.execute(sql, params) 1735 1736 if not result_type: /usr/lib/pymodules/python2.5/django/db/backends/util.pyc in execute(self, sql, params) 17 start = time() 18 try: ---&gt; 19 return self.cursor.execute(sql, params) 20 finally: 21 stop = time() ProgrammingError: can't adapt </code></pre> <p>So Im wondering if this is totally the wrong approach or if I need to use django's model classes a bit differently to get what I want.</p>
2
2009-07-09T23:52:39Z
1,107,105
<p>In SQL there is no such thing as a table with a variable number of columns or variable-type columns. Also, Django does not modify database layout at run time - i.e. does not call <code>ALTER TABLE</code> statements - as far as I know.</p> <p>In Django, data models must be completely defined before you run your application.</p> <p>You might find <a href="http://www.djangoproject.com/documentation/models/many%5Fto%5Fone/" rel="nofollow" title="many to one">this doc page</a> relevant for use of "many-to-one" relationships.</p> <p>Example:</p> <pre><code>#profile class ProfileGroup(models.Model): ... #fieldset class FieldSet(models.Model): profile = models.ForeignKey(ProfileGroup) #field 1 class Address(models.Model): owner = models.ForeignKey(FieldSet) #the foreign key above adds a reverse accessor address_set to FieldSet #more fields like street address, zip code, etc #field 2 class Interest(models.Model): owner = models.ForeignKey(FieldSet) description = models.CharField(max_length=100) #etc. </code></pre> <p>Populate and access fields:</p> <pre><code>f = FieldSet() f.save() #save first so related objects can point at it f.interest_set.create(description='ping pong') f.address_set.create(street='... ', zip='... ') addresses = f.address_set.all() interests = f.interest_set.all() #there are other methods to work with sets </code></pre> <p>Sets in this case emulate variable fields on FieldSet. In the database, however, interest and address data is stored in separate tables, with foreign key links to FieldSet.</p> <p>If you want to access all that with one accessor function, you could write something that wraps calls to all the related sets.</p> <p>Even though you can't modify a model on the fly, you can add new models and then issue</p> <pre><code>manage.py syncdb </code></pre> <p>This will create new tables in the db. However, you won't be able to modify fields in existing tables with 'syncdb' - Django doesn't do it. You'll have to enter SQL commands manually for that. (<strong>Supposedly the web2py platform handles this automatically</strong>, but unfortunately web2py is not well documented yet; it might be a cut above Django in terms of API quality and is worth taking a look at.)</p>
0
2009-07-10T00:21:06Z
[ "python", "django", "orm" ]
How to model an object with references to arbitrary number of arbitrary field types? (django orm)
1,107,024
<p>I'd like to define a set of model/objects which allow for one to represent the relationship: field_set has many fields where fields are django.db.model field objects (IPAddressField, FilePathField etc). </p> <p>My goals is to have a ORM model which supports the following type of 'api'.</p> <p>From a controller view lets say:</p> <pre><code># Desired api below def homepage(request): from mymodels.models import ProfileGroup, FieldSet, Field group = ProfileGroup() group.name = 'Profile Information' group.save() geographicFieldSet = FieldSet() # Bind this 'field set' to the 'profile group' geographicFieldSet.profilegroup = group address_field = Field() address_field.name = 'Street Address' address_field.f = models.CharField(max_length=100) # Bind this field to the geo field set address_field.fieldset = geographicFieldSet town_field = Field() town_field.name = 'Town / City' town_field.f = models.CharField(max_length=100) # Bind this field to the geo field set town_field.fieldset = geographicFieldSet demographicFieldSet = FieldSet() demographicFieldSet.profilegroup = group age_field = Field() age_field.name = 'Age' age_field.f = models.IntegerField() # Bind this field to the demo field set age_field.fieldset = demographicFieldSet # Define a 'weight_field' here similar to 'age' above. for obj in [geographicFieldSet, town_field, address_field, demographicFieldSet, age_field, weight_field]: obj.save() # Id also add some methods to these model objects so that they # know how to render themselves on the page... return render_to_response('page.templ', {'profile_group':group}) </code></pre> <p>Essentially I want to support 'logically grouped fields' since I see myself supporting many 'field sets' of different types thus my desire for a meaningful abstraction.</p> <p>Id like to define this model so that I can define a group of fields where the # of fields is arbitrary as is the field type. 
So I may have a field group 'Geographic' which includes the fields 'State' (CharField w/ choices), 'Town' (TextField) etc. </p> <p>Heres what Ive come up with so far:</p> <pre><code>class ProfileGroup(models.Model): name = models.CharField(max_length=200) # FieldSets have many Fields class FieldSet(models.Model): name = models.CharField(max_length=200) profilegroup = models.ForeignKey(ProfileGroup) class Field(models.Model): f = models.Field() fieldset = models.ForeignKey(FieldSet) </code></pre> <p>Though using these models produces an error in the shell and ultimately doesnt allow me to store arbitrary fields. </p> <pre><code>In [1]: from splink.profile_accumulator.models import Field, FieldSet, ProfileGroup In [2]: import django.db In [3]: profile_group = ProfileGroup() In [4]: profile_group.name = 'profile group name' In [5]: profile_group.save() In [6]: field_set = FieldSet() In [7]: field_set.name = 'field set name' In [8]: field_set.profilegroup = profile_group In [9]: field_set.save() In [10]: field = Field() In [11]: field.name = 'field name' In [12]: field.f = django.db.models.FileField() In [13]: field.save() --------------------------------------------------------------------------- ProgrammingError Traceback (most recent call last) /var/www/splinkpage.com/splinkpage.pinax/splink/&lt;ipython console&gt; in &lt;module&gt;() /usr/lib/pymodules/python2.5/django/db/models/base.pyc in save(self, force_insert, force_update) 309 raise ValueError("Cannot force both insert and updating in " 310 "model saving.") --&gt; 311 self.save_base(force_insert=force_insert, force_update=force_update) 312 313 save.alters_data = True /usr/lib/pymodules/python2.5/django/db/models/base.pyc in save_base(self, raw, cls, force_insert, force_update) 381 if values: 382 # Create a new record. --&gt; 383 result = manager._insert(values, return_id=update_pk) 384 else: 385 # Create a new record with defaults for everything. 
/usr/lib/pymodules/python2.5/django/db/models/manager.pyc in _insert(self, values, **kwargs) 136 137 def _insert(self, values, **kwargs): --&gt; 138 return insert_query(self.model, values, **kwargs) 139 140 def _update(self, values, **kwargs): /usr/lib/pymodules/python2.5/django/db/models/query.pyc in insert_query(model, values, return_id, raw_values) 890 part of the public API. 891 """ 892 query = sql.InsertQuery(model, connection) 893 query.insert_values(values, raw_values) --&gt; 894 return query.execute_sql(return_id) /usr/lib/pymodules/python2.5/django/db/models/sql/subqueries.pyc in execute_sql(self, return_id) 307 308 def execute_sql(self, return_id=False): --&gt; 309 cursor = super(InsertQuery, self).execute_sql(None) 310 if return_id: 311 return self.connection.ops.last_insert_id(cursor, /usr/lib/pymodules/python2.5/django/db/models/sql/query.pyc in execute_sql(self, result_type) 1732 1733 cursor = self.connection.cursor() -&gt; 1734 cursor.execute(sql, params) 1735 1736 if not result_type: /usr/lib/pymodules/python2.5/django/db/backends/util.pyc in execute(self, sql, params) 17 start = time() 18 try: ---&gt; 19 return self.cursor.execute(sql, params) 20 finally: 21 stop = time() ProgrammingError: can't adapt </code></pre> <p>So Im wondering if this is totally the wrong approach or if I need to use django's model classes a bit differently to get what I want.</p>
2
2009-07-09T23:52:39Z
1,107,166
<p>I see several problems with the code. First, with this class definition:</p> <pre><code>class Field(models.Model): f = models.Field() fieldset = models.ForeignKey(FieldSet) </code></pre> <p>The class models.Field is not supposed to be used directly for a field definition. It is a base class for all field types in Django, so it lacks the specifics of a particular field type needed to be useful.</p> <p>The second problem is with the following line:</p> <pre><code>In [12]: field.f = django.db.models.FileField() </code></pre> <p>When you assign to attribute <code>f</code> of your <code>Field</code> instance, you are supposed to give a specific value to be saved to the database. For example, if you had used <code>CharField</code> for the <code>Field.f</code> definition, you would assign a string here. <code>models.Field</code> has no specific assignable values, though. You are trying to assign something that is clearly not possible to save to the DB: a <code>models.FileField</code> definition.</p> <p>So, Django has a hard time "adapting" the value you are assigning to the field attribute (hence the <code>can't adapt</code> error) for two reasons. First, there are no values defined for the <code>models.Field</code> type to assign, as it is an "abstract class", or a base class for specific field type definitions. Second, you cannot assign a "field definition" to an attribute and hope that it is going to be saved to a DB.</p> <p>I understand your confusion. You are basically trying to do an impossible thing, both from the DB's and Django's points of view.</p> <p>I suppose there could be a solution for your design problem, though. If you describe what you are trying to achieve in detail, somebody could probably give you a hint.</p>
1
2009-07-10T00:42:12Z
[ "python", "django", "orm" ]
How to model an object with references to arbitrary number of arbitrary field types? (django orm)
1,107,024
<p>I'd like to define a set of model/objects which allow for one to represent the relationship: field_set has many fields where fields are django.db.model field objects (IPAddressField, FilePathField etc). </p> <p>My goals is to have a ORM model which supports the following type of 'api'.</p> <p>From a controller view lets say:</p> <pre><code># Desired api below def homepage(request): from mymodels.models import ProfileGroup, FieldSet, Field group = ProfileGroup() group.name = 'Profile Information' group.save() geographicFieldSet = FieldSet() # Bind this 'field set' to the 'profile group' geographicFieldSet.profilegroup = group address_field = Field() address_field.name = 'Street Address' address_field.f = models.CharField(max_length=100) # Bind this field to the geo field set address_field.fieldset = geographicFieldSet town_field = Field() town_field.name = 'Town / City' town_field.f = models.CharField(max_length=100) # Bind this field to the geo field set town_field.fieldset = geographicFieldSet demographicFieldSet = FieldSet() demographicFieldSet.profilegroup = group age_field = Field() age_field.name = 'Age' age_field.f = models.IntegerField() # Bind this field to the demo field set age_field.fieldset = demographicFieldSet # Define a 'weight_field' here similar to 'age' above. for obj in [geographicFieldSet, town_field, address_field, demographicFieldSet, age_field, weight_field]: obj.save() # Id also add some methods to these model objects so that they # know how to render themselves on the page... return render_to_response('page.templ', {'profile_group':group}) </code></pre> <p>Essentially I want to support 'logically grouped fields' since I see myself supporting many 'field sets' of different types thus my desire for a meaningful abstraction.</p> <p>Id like to define this model so that I can define a group of fields where the # of fields is arbitrary as is the field type. 
So I may have a field group 'Geographic' which includes the fields 'State' (CharField w/ choices), 'Town' (TextField) etc. </p> <p>Heres what Ive come up with so far:</p> <pre><code>class ProfileGroup(models.Model): name = models.CharField(max_length=200) # FieldSets have many Fields class FieldSet(models.Model): name = models.CharField(max_length=200) profilegroup = models.ForeignKey(ProfileGroup) class Field(models.Model): f = models.Field() fieldset = models.ForeignKey(FieldSet) </code></pre> <p>Though using these models produces an error in the shell and ultimately doesnt allow me to store arbitrary fields. </p> <pre><code>In [1]: from splink.profile_accumulator.models import Field, FieldSet, ProfileGroup In [2]: import django.db In [3]: profile_group = ProfileGroup() In [4]: profile_group.name = 'profile group name' In [5]: profile_group.save() In [6]: field_set = FieldSet() In [7]: field_set.name = 'field set name' In [8]: field_set.profilegroup = profile_group In [9]: field_set.save() In [10]: field = Field() In [11]: field.name = 'field name' In [12]: field.f = django.db.models.FileField() In [13]: field.save() --------------------------------------------------------------------------- ProgrammingError Traceback (most recent call last) /var/www/splinkpage.com/splinkpage.pinax/splink/&lt;ipython console&gt; in &lt;module&gt;() /usr/lib/pymodules/python2.5/django/db/models/base.pyc in save(self, force_insert, force_update) 309 raise ValueError("Cannot force both insert and updating in " 310 "model saving.") --&gt; 311 self.save_base(force_insert=force_insert, force_update=force_update) 312 313 save.alters_data = True /usr/lib/pymodules/python2.5/django/db/models/base.pyc in save_base(self, raw, cls, force_insert, force_update) 381 if values: 382 # Create a new record. --&gt; 383 result = manager._insert(values, return_id=update_pk) 384 else: 385 # Create a new record with defaults for everything. 
/usr/lib/pymodules/python2.5/django/db/models/manager.pyc in _insert(self, values, **kwargs) 136 137 def _insert(self, values, **kwargs): --&gt; 138 return insert_query(self.model, values, **kwargs) 139 140 def _update(self, values, **kwargs): /usr/lib/pymodules/python2.5/django/db/models/query.pyc in insert_query(model, values, return_id, raw_values) 890 part of the public API. 891 """ 892 query = sql.InsertQuery(model, connection) 893 query.insert_values(values, raw_values) --&gt; 894 return query.execute_sql(return_id) /usr/lib/pymodules/python2.5/django/db/models/sql/subqueries.pyc in execute_sql(self, return_id) 307 308 def execute_sql(self, return_id=False): --&gt; 309 cursor = super(InsertQuery, self).execute_sql(None) 310 if return_id: 311 return self.connection.ops.last_insert_id(cursor, /usr/lib/pymodules/python2.5/django/db/models/sql/query.pyc in execute_sql(self, result_type) 1732 1733 cursor = self.connection.cursor() -&gt; 1734 cursor.execute(sql, params) 1735 1736 if not result_type: /usr/lib/pymodules/python2.5/django/db/backends/util.pyc in execute(self, sql, params) 17 start = time() 18 try: ---&gt; 19 return self.cursor.execute(sql, params) 20 finally: 21 stop = time() ProgrammingError: can't adapt </code></pre> <p>So Im wondering if this is totally the wrong approach or if I need to use django's model classes a bit differently to get what I want.</p>
2
2009-07-09T23:52:39Z
23,850,110
<p>Why not make something like this?</p> <pre><code>class Info(models.Model): info_type = models.ForeignKey('InfoType', blank=False, null=False, default='') info_int = models.IntegerField(null=True, blank=True) info_img = models.ImageField(upload_to='info',null=True, blank=True) info_date = models.DateTimeField(null=True, blank=True) info_text = models.TextField(null=True, blank=True) info_bool = models.BooleanField(null=True, blank=True) info_char = models.CharField(max_length=128,null=True, blank=True) info_dec = models.DecimalField(max_digits=20, decimal_places=12, null=True, blank=True) info_float = models.FloatField(null=True, blank=True) parent_info = models.ForeignKey('self', blank=True, null=True) class InfoType(models.Model): type = models.CharField(max_length=64, blank=False, null=False, default='') info_field = models.CharField(max_length=32, blank=False, null=False, default='') </code></pre> <p>So depending on which type we choose, we know in which field to find the value.</p>
0
2014-05-24T22:07:16Z
[ "python", "django", "orm" ]
Python Lambda Problems
1,107,210
<p>What's going on here? I'm trying to create a list of functions:</p> <pre><code>def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) </code></pre> <p>This isn't doing what I expect. I would expect the list to act like this:</p> <pre><code>funcs[3](3) = 9 funcs[0](5) = 0 </code></pre> <p>But all the functions in the list seem to be identical, and be setting the fixed value to be 9:</p> <pre><code>funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 </code></pre> <p>Any ideas?</p>
12
2009-07-10T01:01:25Z
1,107,232
<p>All the lambdas end up bound to the last value of <code>i</code>. See this question for a longer answer:</p> <p><a href="http://stackoverflow.com/questions/452610/how-do-i-create-a-list-of-python-lambdas-in-a-list-comprehension-for-loop">http://stackoverflow.com/questions/452610/how-do-i-create-a-list-of-python-lambdas-in-a-list-comprehension-for-loop</a></p>
1
2009-07-10T01:10:21Z
[ "python", "lambda" ]
Python Lambda Problems
1,107,210
<p>What's going on here? I'm trying to create a list of functions:</p> <pre><code>def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) </code></pre> <p>This isn't doing what I expect. I would expect the list to act like this:</p> <pre><code>funcs[3](3) = 9 funcs[0](5) = 0 </code></pre> <p>But all the functions in the list seem to be identical, and be setting the fixed value to be 9:</p> <pre><code>funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 </code></pre> <p>Any ideas?</p>
12
2009-07-10T01:01:25Z
1,107,234
<p>Considering the final value of <code>i == 9</code></p> <p>Like any good python function, it's going to use the value of the variable in the scope it was defined. Perhaps <code>lambda: varname</code> (being that it is a language construct) binds to the name, not the value, and evaluates that name at runtime?</p> <p>Similar to:</p> <pre><code>i = 9 def foo(): print i i = 10 foo() </code></pre> <p><sup>I'd be quite interested in finding out of my answer is correct</sup></p>
1
2009-07-10T01:10:41Z
[ "python", "lambda" ]
Python Lambda Problems
1,107,210
<p>What's going on here? I'm trying to create a list of functions:</p> <pre><code>def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) </code></pre> <p>This isn't doing what I expect. I would expect the list to act like this:</p> <pre><code>funcs[3](3) = 9 funcs[0](5) = 0 </code></pre> <p>But all the functions in the list seem to be identical, and be setting the fixed value to be 9:</p> <pre><code>funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 </code></pre> <p>Any ideas?</p>
12
2009-07-10T01:01:25Z
1,107,243
<p>Lambdas in Python are closures: the variables you reference aren't evaluated until the lambda itself is called. At that time <code>i</code> is 9 regardless, because your iteration is finished.</p> <p>The behavior you're looking for can be achieved with <code>functools.partial</code>:</p> <pre><code>import functools def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(functools.partial(f,i)) </code></pre>
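For illustration, the `functools.partial` approach can be assembled into a self-contained snippet and checked against the numbers the question expected:

```python
import functools

def f(a, b):
    return a * b

# functools.partial binds the *current* value of i at append time,
# so each stored function keeps its own first argument.
funcs = [functools.partial(f, i) for i in range(10)]

print(funcs[3](3))  # 9
print(funcs[0](5))  # 0
```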
15
2009-07-10T01:13:24Z
[ "python", "lambda" ]
Python Lambda Problems
1,107,210
<p>What's going on here? I'm trying to create a list of functions:</p> <pre><code>def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) </code></pre> <p>This isn't doing what I expect. I would expect the list to act like this:</p> <pre><code>funcs[3](3) = 9 funcs[0](5) = 0 </code></pre> <p>But all the functions in the list seem to be identical, and be setting the fixed value to be 9:</p> <pre><code>funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 </code></pre> <p>Any ideas?</p>
12
2009-07-10T01:01:25Z
1,107,260
<p>There's only one <code>i</code>, shared by every lambda, contrary to what you might think. This is a common mistake. </p> <p>One way to get what you want is:</p> <pre><code>for i in range(0,10): funcs.append(lambda x, i=i: f(i, x)) </code></pre> <p>Now you're creating a default parameter <code>i</code> in each lambda closure and binding to it the current <em>value</em> of the looping variable <code>i</code>.</p>
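A runnable check of the default-parameter trick, using the question's own `f`:

```python
def f(a, b):
    return a * b

funcs = []
for i in range(10):
    # The default value i=i is evaluated *now*, freezing the loop
    # variable's current value inside each lambda.
    funcs.append(lambda x, i=i: f(i, x))

print(funcs[3](3))  # 9
```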
8
2009-07-10T01:18:54Z
[ "python", "lambda" ]
Python Lambda Problems
1,107,210
<p>What's going on here? I'm trying to create a list of functions:</p> <pre><code>def f(a,b): return a*b funcs = [] for i in range(0,10): funcs.append(lambda x:f(i,x)) </code></pre> <p>This isn't doing what I expect. I would expect the list to act like this:</p> <pre><code>funcs[3](3) = 9 funcs[0](5) = 0 </code></pre> <p>But all the functions in the list seem to be identical, and be setting the fixed value to be 9:</p> <pre><code>funcs[3](3) = 27 funcs[3](1) = 9 funcs[2](6) = 54 </code></pre> <p>Any ideas?</p>
12
2009-07-10T01:01:25Z
1,107,333
<p>Yep, the usual "scoping problem" (actually a binding-later-than-you want problem, but it's often called by that name). You've already gotten the two best (because simplest) answers -- the "fake default" <code>i=i</code> solution, and <code>functools.partial</code>, so I'm only giving the third one of the classic three, the "factory lambda":</p> <pre><code>for i in range(0,10): funcs.append((lambda i: lambda x: f(i, x))(i)) </code></pre> <p>Personally I'd go with <code>i=i</code> if there's no risk of the functions in <code>funcs</code> being accidentally called with 2 parameters instead of just 1, but the factory function approach is worth considering when you need something a little bit richer than just pre-binding one arg.</p>
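The factory-lambda variant can likewise be verified directly; the outer lambda is invoked immediately, so each inner lambda closes over its own `i`:

```python
def f(a, b):
    return a * b

funcs = []
for i in range(10):
    # Calling the outer lambda right away creates a fresh scope
    # per iteration for the inner lambda to close over.
    funcs.append((lambda i: lambda x: f(i, x))(i))

print(funcs[2](6))  # 12
```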
11
2009-07-10T01:56:49Z
[ "python", "lambda" ]
Platform-independent version of /var/lib and ~/.config
1,107,213
<p>We see that programs like <code>apt-get</code> store information in several places:</p> <pre><code>/var/cache/apt &lt;- cache /var/lib/apt &lt;- keyrings, package db, states, locks, mirrors /etc/apt &lt;- configuration file ~/.aptitude/config &lt;- user configuration file </code></pre> <p>So we see four kinds of paths here:</p> <ol> <li>Cache path</li> <li>Data path</li> <li>System-wide configuration</li> <li>User configuration</li> </ol> <p>Perhaps (1) can be made part of (2) for simplicity sake. Can anyone think of ways to get such appropriate paths in platform-independent way? Is there a library that does this, or does one have to invent this wheel?</p>
1
2009-07-10T01:02:05Z
1,107,869
<p>Do you mean something like <a href="http://pypi.python.org/pypi/virtualenv" rel="nofollow">virtualenv</a>?</p>
-1
2009-07-10T05:32:31Z
[ "python", "configuration", "path", "cross-platform" ]
Platform-independent version of /var/lib and ~/.config
1,107,213
<p>We see that programs like <code>apt-get</code> store information in several places:</p> <pre><code>/var/cache/apt &lt;- cache /var/lib/apt &lt;- keyrings, package db, states, locks, mirrors /etc/apt &lt;- configuration file ~/.aptitude/config &lt;- user configuration file </code></pre> <p>So we see four kinds of paths here:</p> <ol> <li>Cache path</li> <li>Data path</li> <li>System-wide configuration</li> <li>User configuration</li> </ol> <p>Perhaps (1) can be made part of (2) for simplicity sake. Can anyone think of ways to get such appropriate paths in platform-independent way? Is there a library that does this, or does one have to invent this wheel?</p>
1
2009-07-10T01:02:05Z
1,127,382
<p>For Linux, check out the <a href="http://www.pathname.com/fhs/" rel="nofollow">Filesystem Hierarchy Standard</a> (but be aware that these standards are for software that is part of a distribution; software installed locally should not interfere with the distribution's package management and should stay in /usr/local/ and /var/local/).</p> <p>If you want to be truly cross-platform, IMO the best way would be to leave these things configurable for the packager, defaulting to running in the current directory (so that users without administrative privileges can simply unpack and run the program). This way, people packaging for a particular OS/distribution will set sensible values for a system-wide installation, and users will be able to use it locally without administrative rights on the machine.</p>
1
2009-07-14T19:09:03Z
[ "python", "configuration", "path", "cross-platform" ]
looking for a more pythonic way to access the database
1,107,297
<p>I have a bunch of python methods that follow this pattern:</p> <pre><code>def delete_session(guid): conn = get_conn() cur = conn.cursor() cur.execute("delete from sessions where guid=%s", guid) conn.commit() conn.close() </code></pre> <p>Is there a more pythonic way to execute raw sql. The 2 lines at the beginning and end of every method are starting to bother me.</p> <p>I'm not looking for an orm, I want to stick with raw sql.</p>
4
2009-07-10T01:37:14Z
1,107,303
<p>You could write a context manager and use the with statement. For example, see this blog post:</p> <p><a href="http://jessenoller.com/2009/02/03/get-with-the-program-as-contextmanager-completely-different/" rel="nofollow">http://jessenoller.com/2009/02/03/get-with-the-program-as-contextmanager-completely-different/</a></p> <p>Also the python documentation has a sample that pretty much matches your needs. See section 8.1 on this page, in particular the snippet that begins:</p> <pre><code>db_connection = DatabaseConnection() with db_connection as cursor: cursor.execute('insert into ...') cursor.execute('delete from ...') # ... more operations ... </code></pre> <ul> <li><a href="https://docs.python.org/2.5/whatsnew/pep-343.html" rel="nofollow">https://docs.python.org/2.5/whatsnew/pep-343.html</a></li> </ul>
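To make the docs' snippet concrete, here is one hedged sketch of such a class. The in-memory SQLite connection stands in for whatever connection factory the real code uses, and committing on clean exit versus rolling back on exception is one reasonable policy, not the only one:

```python
import sqlite3

class DatabaseConnection(object):
    """Hand out a cursor on entry; commit on clean exit, roll back on error."""

    def __init__(self):
        # Stand-in for the real connection factory (assumption).
        self.conn = sqlite3.connect(":memory:")

    def __enter__(self):
        self.cursor = self.conn.cursor()
        return self.cursor

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self.conn.commit()
        else:
            self.conn.rollback()
        self.cursor.close()
        return False  # never swallow exceptions

db_connection = DatabaseConnection()
with db_connection as cursor:
    cursor.execute("create table sessions (guid text)")
    cursor.execute("insert into sessions values (?)", ("abc",))
```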
8
2009-07-10T01:42:16Z
[ "python", "mysql" ]
looking for a more pythonic way to access the database
1,107,297
<p>I have a bunch of python methods that follow this pattern:</p> <pre><code>def delete_session(guid): conn = get_conn() cur = conn.cursor() cur.execute("delete from sessions where guid=%s", guid) conn.commit() conn.close() </code></pre> <p>Is there a more pythonic way to execute raw sql. The 2 lines at the beginning and end of every method are starting to bother me.</p> <p>I'm not looking for an orm, I want to stick with raw sql.</p>
4
2009-07-10T01:37:14Z
1,107,314
<p>Careful with that <code>execute</code>: the second argument needs to be <code>[guid]</code> (a list with just one item). As for your question, I normally just use a class encapsulating connection and cursor, but it looks like you may prefer to use an <em>execution context</em> object whose <code>__enter__</code> method gives you a cursor while <code>__exit__</code> commits or rolls back depending on whether the termination was normal or by exception; this would make your code</p> <pre><code>def delete_session(): with get_cursor() as cur: cur.execute(etc etc) </code></pre> <p>If you like this style, let us know and I'll show you how to write <code>get_cursor</code>. Others will no doubt propose a decorator instead, so you'd write:</p> <pre><code>@withcursor def delete_session(cur): cur.execute(etc etc) </code></pre> <p>but I think this makes commit/rollback, among other issues, a bit murkier. Still, if <em>this</em> is your preference, again let us know and I can show you how to write that form, too.</p>
3
2009-07-10T01:46:46Z
[ "python", "mysql" ]
looking for a more pythonic way to access the database
1,107,297
<p>I have a bunch of python methods that follow this pattern:</p> <pre><code>def delete_session(guid): conn = get_conn() cur = conn.cursor() cur.execute("delete from sessions where guid=%s", guid) conn.commit() conn.close() </code></pre> <p>Is there a more pythonic way to execute raw sql. The 2 lines at the beginning and end of every method are starting to bother me.</p> <p>I'm not looking for an orm, I want to stick with raw sql.</p>
4
2009-07-10T01:37:14Z
1,107,315
<p>It doesn't have to be more pythonic, just more structured:</p> <pre><code>def execSql(statement, args=None): conn = get_conn() cur = conn.cursor() cur.execute(statement, args) conn.commit() conn.close() def delete_session(guid): # Pass guid as a parameter so the driver escapes it, # instead of interpolating it into the SQL string. execSql("delete from sessions where guid=%s", (guid,)) </code></pre>
0
2009-07-10T01:47:48Z
[ "python", "mysql" ]
looking for a more pythonic way to access the database
1,107,297
<p>I have a bunch of python methods that follow this pattern:</p> <pre><code>def delete_session(guid): conn = get_conn() cur = conn.cursor() cur.execute("delete from sessions where guid=%s", guid) conn.commit() conn.close() </code></pre> <p>Is there a more pythonic way to execute raw sql. The 2 lines at the beginning and end of every method are starting to bother me.</p> <p>I'm not looking for an orm, I want to stick with raw sql.</p>
4
2009-07-10T01:37:14Z
1,107,316
<p>A decorator?</p> <pre><code>class SqlExec: def __init__(self, f): self.f = f def __call__(self, *args): conn = get_conn() cur = conn.cursor() # The wrapped function returns (statement, params) so the # driver handles escaping instead of string interpolation. cur.execute(*self.f(*args)) conn.commit() conn.close() @SqlExec def delete_session(guid): return "delete from sessions where guid=%s", (guid,) </code></pre>
0
2009-07-10T01:48:22Z
[ "python", "mysql" ]
looking for a more pythonic way to access the database
1,107,297
<p>I have a bunch of python methods that follow this pattern:</p> <pre><code>def delete_session(guid): conn = get_conn() cur = conn.cursor() cur.execute("delete from sessions where guid=%s", guid) conn.commit() conn.close() </code></pre> <p>Is there a more pythonic way to execute raw sql. The 2 lines at the beginning and end of every method are starting to bother me.</p> <p>I'm not looking for an orm, I want to stick with raw sql.</p>
4
2009-07-10T01:37:14Z
1,107,440
<p>"I have a bunch of python methods that follow this pattern:"</p> <p>This is confusing.</p> <p>Either you have a bunch of functions, or you have a bunch of methods of a class.</p> <p><strong>Bunch of Functions</strong>.</p> <p>Do this instead.</p> <pre><code>class SQLFunction( object ): def __init__( self, connection ): self.connection = connection def __call__( self, args=None ): self.cursor = self.connection.cursor() self.run( args ) self.connection.commit() self.cursor.close() class DeleteSession( SQLFunction ): def run( self, args ): self.cursor.execute( "statement" ) delete_session = DeleteSession( connection ) </code></pre> <p>Your function declarations are two lines longer, but essentially the same. You can call <code>delete_session( args )</code> because it's a callable object. The rest of your program should remain unchanged.</p> <p><strong>Bunch of Methods in One Class</strong>.</p> <pre><code>class SomeClass( object ): def __init__( self, connection ): self.connection = connection def sql_execute( self, statement, args=None ): self.cursor = self.connection.cursor() self.cursor.execute( statement, args if args is not None else [] ) self.connection.commit() self.cursor.close() def delete_session( self ): self.sql_execute( "statement" ) </code></pre> <p>All your methods can look like <code>delete_session</code> and make use of a common <code>sql_execute</code> method. </p>
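A runnable variant of the "bunch of methods in one class" pattern; the `SessionStore` name, the in-memory SQLite connection, and the concrete SQL are illustrative stand-ins for the placeholder `"statement"`:

```python
import sqlite3

class SessionStore(object):
    """Each public method delegates to a shared sql_execute helper."""

    def __init__(self, connection):
        self.connection = connection

    def sql_execute(self, statement, args=None):
        self.cursor = self.connection.cursor()
        self.cursor.execute(statement, args if args is not None else [])
        self.connection.commit()
        self.cursor.close()

    def delete_session(self, guid):
        # SQLite placeholder style; MySQLdb would use %s here.
        self.sql_execute("delete from sessions where guid = ?", [guid])

store = SessionStore(sqlite3.connect(":memory:"))
store.sql_execute("create table sessions (guid text)")
store.sql_execute("insert into sessions values (?)", ["abc"])
store.delete_session("abc")
```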
3
2009-07-10T02:47:12Z
[ "python", "mysql" ]
looking for a more pythonic way to access the database
1,107,297
<p>I have a bunch of python methods that follow this pattern:</p> <pre><code>def delete_session(guid): conn = get_conn() cur = conn.cursor() cur.execute("delete from sessions where guid=%s", guid) conn.commit() conn.close() </code></pre> <p>Is there a more pythonic way to execute raw sql. The 2 lines at the beginning and end of every method are starting to bother me.</p> <p>I'm not looking for an orm, I want to stick with raw sql.</p>
4
2009-07-10T01:37:14Z
39,772,336
<p>According to the <a href="https://docs.python.org/2/library/sqlite3.html#using-shortcut-methods" rel="nofollow">docs</a>, if you were using SQLite3, you wouldn't even <em>need</em> a <code>Cursor</code> which, as the docs say, is "often superfluous".</p> <p>Instead you can use the shortcut methods <code>execute</code> <code>executemany</code> and <code>executescript</code> directly on the connection object:</p> <pre><code>import sqlite3 persons = [ ("Hugo", "Boss"), ("Calvin", "Klein") ] con = sqlite3.connect(":memory:") # Create the table con.execute("create table person(firstname, lastname)") # Fill the table con.executemany("insert into person(firstname, lastname) values (?, ?)", persons) # Print the table contents for row in con.execute("select firstname, lastname from person"): print row print "I just deleted", con.execute("delete from person").rowcount, "rows" </code></pre>
0
2016-09-29T13:55:34Z
[ "python", "mysql" ]
hex to string formatting conversion in python
1,107,331
<p>I used to generate a random string in the following way (now I've switched to <a href="http://stackoverflow.com/questions/785058/random-strings-in-python-2-6-is-this-ok/785086#785086" rel="nofollow" title="this method">this method</a>).</p> <pre><code>key = '%016x' % random.getrandbits(128) </code></pre> <p>The key generated this way is most often a 32-character string, but once I got 31 chars.</p> <p>This is what I don't get: <strong>why is it 32 chars, not 16</strong>? Doesn't one hex digit take one character to print? </p> <p>So if I ask for <code>%016x</code>, shouldn't one expect sixteen chars with possible leading zeroes?</p> <p><strong>Why is the string length not always the same?</strong></p> <h1>Test case</h1> <pre><code>import random import collections stats = collections.defaultdict(int) for i in range(1000000): key = '%016x' % random.getrandbits(128) length = len(key) stats[length] += 1 for key in stats: print key, ' ', stats[key] </code></pre> <p>Prints:</p> <pre><code>32 937911 27 1 28 9 29 221 30 3735 31 58123 </code></pre>
2
2009-07-10T01:56:20Z
1,107,341
<p>Yes, but the format you're using doesn't truncate -- you generate 128 random bits, which require (usually) 32 hex digits to show, and the <code>%016</code> means AT LEAST 16 hex digits, but doesn't just throw away the extra ones you need to show all of that 128-bit number. Why not generate just 64 random bits if that's what you actually need? Less work for the random generator AND no formatting problems.</p> <p>To satisfy your side curiosity, the length is occasionally 31 digits because 1 time in 16 the top 4 bits will all be 0; actually 1 time in 256 all the top 8 bits will be 0 so you'll get only 30 digits, etc. You've only asked for 16 digits, so the formatting will give the least number that's >= 16 and doesn't require the truncation you have not asked for.</p>
5
2009-07-10T01:59:50Z
[ "python", "string" ]
hex to string formatting conversion in python
1,107,331
<p>I used to generate a random string in the following way (now I've switched to <a href="http://stackoverflow.com/questions/785058/random-strings-in-python-2-6-is-this-ok/785086#785086" rel="nofollow" title="this method">this method</a>).</p> <pre><code>key = '%016x' % random.getrandbits(128) </code></pre> <p>The key generated this way is most often a 32-character string, but once I got 31 chars.</p> <p>This is what I don't get: <strong>why is it 32 chars, not 16</strong>? Doesn't one hex digit take one character to print? </p> <p>So if I ask for <code>%016x</code>, shouldn't one expect sixteen chars with possible leading zeroes?</p> <p><strong>Why is the string length not always the same?</strong></p> <h1>Test case</h1> <pre><code>import random import collections stats = collections.defaultdict(int) for i in range(1000000): key = '%016x' % random.getrandbits(128) length = len(key) stats[length] += 1 for key in stats: print key, ' ', stats[key] </code></pre> <p>Prints:</p> <pre><code>32 937911 27 1 28 9 29 221 30 3735 31 58123 </code></pre>
2
2009-07-10T01:56:20Z
1,107,348
<p>Each hex character from 0 to F carries 4 bits of information, or half a byte. 128 bits is 16 bytes, and since it takes two hex characters to print a byte, you get 32 characters. Your format string should thus be <code>'%032x'</code>, which will always generate a 32-character string, never shorter.</p> <pre><code>jkugelman$ cat rand.py #!/usr/bin/env python import random import collections stats = collections.defaultdict(int) for i in range(1000000): key = '%032x' % random.getrandbits(128) length = len(key) stats[length] += 1 for key in stats: print key, ' ', stats[key] jkugelman$ python rand.py 32 1000000 </code></pre>
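The padding/no-truncation behavior is easy to check directly: `%032x` pads short numbers up to 32 digits, and (like every `%x` conversion) never truncates long ones, so 128 random bits always print as exactly 32 characters:

```python
import random

random.seed(42)  # deterministic demo
lengths = {len('%032x' % random.getrandbits(128)) for _ in range(10000)}
print(lengths)  # only the length 32 ever appears

# Padding vs. no truncation in isolation:
print(len('%016x' % (2 ** 120)))  # 31 chars: '1' followed by 30 zeros
print(len('%016x' % 255))         # 16 chars: padded with leading zeros
```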
3
2009-07-10T02:01:41Z
[ "python", "string" ]
Noob components design question
1,107,368
<p><strong>Updated question, see below</strong></p> <p>I'm starting a new project and I would like to experiment with components based architecture (I chose <a href="http://peak.telecommunity.com/PyProtocols.html" rel="nofollow">PyProtocols</a>). It's a little program to display and interract with realtime graphics.</p> <p>I started by designing the user input components:</p> <ul> <li><strong>IInputDevice</strong> - e.g. a mouse, keyboard, etc... An InputDevice may have one or more output channels: <ul> <li><strong>IOutput</strong> - an output channel containing a single value (e.g. the value of a MIDI slider)</li> <li><strong>ISequenceOutput</strong> - an output channel containing a sequence of values (e.g. 2 integers representing mouse position)</li> <li><strong>IDictOutput</strong> - an output channel containing named values (e.g. the state of each key of the keyboard, indexed by keyboard symbols)</li> </ul></li> </ul> <p>Now I would like to define interfaces to filter those outputs (smooth, jitter, invert, etc...). </p> <p>My first approach was to create an InputFilter interface, that had different filter methods for each kind of output channel it was connected to... But the introduction in PyProtocols documentation clearly says that the whole interface and adapters thing is about avoiding type checking !</p> <p>So my guess is that my InputFilter interfaces should look like this:</p> <ul> <li><strong>IInputFilter</strong> - filters IOutput</li> <li><strong>ISequenceInputFilter</strong> - filters ISequenceOutput</li> <li><strong>IDictInputFilter</strong> - filters IDictOutput</li> </ul> <p>Then I could have a connect() method in the I*Ouptut interfaces, that could magically adapt my filters and use the one appropriate for the type of output.</p> <p>I tried to implement that, and it kind of works:</p> <pre><code>class InputFilter(object): """ Basic InputFilter implementation. 
""" advise( instancesProvide=[IInputFilter], ) def __init__(self): self.parameters = {} def connect(self, src): self.src = src def read(self): return self.src.read() class InvertInputFilter(InputFilter): """ A filter inverting single values. """ def read(self): return -self.src.read() class InvertSequenceInputFilter(InputFilter): """ A filter inverting sequences of values. """ advise( instancesProvide=[ISequenceInputFilter], asAdapterForProtocols=[IInputFilter], ) def __init__(self, ob): self.ob = ob def read(self): res = [] for value in self.src.read(): res.append(-value) return res </code></pre> <p>Now I can adapt my filters to the type of output:</p> <pre><code>filter = InvertInputFilter() single_filter = IInputFilter(filter) # noop sequence_filter = ISequenceInputFilter(filter) # creates an InvertSequenceInputFilter instance </code></pre> <p>single_filter and sequence_filter have the correct behaviors and produce single and sequence data types. Now if I define a new InputFilter type on the same model, I get errors like this:</p> <pre><code>TypeError: ('Ambiguous adapter choice', &lt;class 'InvertSequenceInputFilter'&gt;, &lt;class 'SomeOtherSequenceInputFilter'&gt;, 1, 1) </code></pre> <p>I must be doing something terribly wrong, is my design even correct ? 
Or maybe am I missing the point on how to implement my InputFilterS ?</p> <p><strong>Update 2</strong></p> <p>I understand I was expecting a little too much magic here, adapters don't type check the objects they are adapting and just look at the interface they provide, which now sounds normal to me (remember I'm new to these concepts !).</p> <p>So I came up with a new design (stripped to the bare minimum and omitted the dict interfaces):</p> <pre><code>class IInputFilter(Interface): def read(): pass def connect(src): pass class ISingleInputFilter(Interface): def read_single(): pass class ISequenceInputFilter(Interface): def read_sequence(): pass </code></pre> <p>So IInputFilter is now a sort of generic component, the one that is actually used, ISingleInputFilter and ISequenceInputFilter provide the specialized implementations. Now I can write adapters from the specialized to the generic interfaces:</p> <pre><code>class SingleInputFilterAsInputFilter(object): advise( instancesProvide=[IInputFilter], asAdapterForProtocols=[ISingleInputFilter], ) def __init__(self, ob): self.read = ob.read_single class SequenceInputFilterAsInputFilter(object): advise( instancesProvide=[IInputFilter], asAdapterForProtocols=[ISequenceInputFilter], ) def __init__(self, ob): self.read = ob.read_sequence </code></pre> <p>Now I write my InvertInputFilter like this:</p> <pre><code>class InvertInputFilter(object): advise( instancesProvide=[ ISingleInputFilter, ISequenceInputFilter ] ) def read_single(self): # Return single value inverted def read_sequence(self): # Return sequence of inverted values </code></pre> <p>And to use it with the various output types I would do:</p> <pre><code>filter = InvertInputFilter() single_filter = SingleInputFilterAsInputFilter(filter) sequence_filter = SequenceInputFilterAsInputFilter(filter) </code></pre> <p>But, again, this fails miserably with the same kind of error, and this time it's triggered directly by the InvertInputFilter definition:</p> 
<pre><code>TypeError: ('Ambiguous adapter choice', &lt;class 'SingleInputFilterAsInputFilter'&gt;, &lt;class 'SequenceInputFilterAsInputFilter'&gt;, 2, 2) </code></pre> <p>(the error disappears as soon as I put exactly one interface in the class's instancesProvide clause) </p> <p><strong>Update 3</strong></p> <p>After some discussion on the PEAK mailing list, it seems that this last error is due to a design flaw in PyProtocols, which does some extra checks at declaration time. I rewrote everything with zope.interface and it works perfectly.</p>
3
2009-07-10T02:11:05Z
1,107,971
<p>I haven't used PyProtocols, only the Zope Component Architecture, but they are similar enough for these principles to be the same.</p> <p>Your error is that you have two adapters that can adapt the same thing. You have both an averaging filter and an inverting filter. When you then ask for the filter, both are found, and you get the "ambiguous adapter" error.</p> <p>You can handle this by having different interfaces for averaging filters and inverting filters, but it's getting silly. In the Zope component architecture you would typically handle this case with named adapters. Each adapter gets a name, by default ''. In this case you would give the adapters names like "averaging" and "inverting", and you'd look them up with that name, so you know whether you get the averaging or the inverting filter.</p> <p>For the more general question, whether the design makes sense or not, it's hard to tell. Having three different kinds of outputs and three different kinds of filters doesn't seem like a good idea. Perhaps you could make the sequence and dict outputs into composites of the single-value output, so that each output value gets its own object and can be filtered independently. That would make more sense to me.</p>
1
2009-07-10T06:21:25Z
[ "python", "interface", "protocols" ]
Manually logging out a user, after a site update in Django
1,107,598
<p>I have a website, which will be frequently updated. Sometimes changes happen to User specific models and are linked to sessions.</p> <p>After I update my site, I want the user to log out and log back in. So I would log out the user right then. If he logs back in, he will see the latest updates to the site.</p> <p>How do I do it?</p>
1
2009-07-10T03:52:15Z
1,107,640
<p>You could just reset your session table. This would log out every user. Of course, depending on what you're doing with sessions, it could have other implications (like emptying a shopping cart, for example).</p> <pre><code>python manage.py reset sessions </code></pre> <p>Or in raw SQL (note Django's session table is named <code>django_session</code>, singular):</p> <pre><code>DELETE FROM django_session </code></pre>
10
2009-07-10T04:04:59Z
[ "python", "django", "authentication", "logout" ]
Manually logging out a user, after a site update in Django
1,107,598
<p>I have a website, which will be frequently updated. Sometimes changes happen to User specific models and are linked to sessions.</p> <p>After I update my site, I want the user to log out and log back in. So I would log out the user right then. If he logs back in, he will see the latest updates to the site.</p> <p>How do I do it?</p>
1
2009-07-10T03:52:15Z
1,108,781
<p>See this: <a href="http://docs.djangoproject.com/en/dev/topics/auth/#how-to-log-a-user-out" rel="nofollow">http://docs.djangoproject.com/en/dev/topics/auth/#how-to-log-a-user-out</a></p> <p>That seems to cover it.</p>
-1
2009-07-10T10:18:27Z
[ "python", "django", "authentication", "logout" ]
Assigning an IronPython list to a .NET array
1,107,789
<p>I have a list comprehension operating on the elements of a .NET array, like so:</p> <pre><code>obj.arr = [f(x) for x in obj.arr] </code></pre> <p>However, the assignment back to <code>obj.arr</code> fails.</p> <p>Is it possible to convert a list to a .NET array in IronPython?</p>
3
2009-07-10T05:05:16Z
1,107,828
<p>Try this:</p> <pre><code>obj.arr = Array[T]([f(x) for x in obj.arr]) </code></pre> <p>replacing <code>T</code> with the type of the array elements.</p> <p>Alternatively:</p> <pre><code>obj.arr = tuple([f(x) for x in obj.arr]) </code></pre>
5
2009-07-10T05:17:39Z
[ "python", "ironpython" ]
Assigning an IronPython list to a .NET array
1,107,789
<p>I have a list comprehension operating on the elements of a .NET array, like so:</p> <pre><code>obj.arr = [f(x) for x in obj.arr] </code></pre> <p>However, the assignment back to <code>obj.arr</code> fails.</p> <p>Is it possible to convert a list to a .NET array in IronPython?</p>
3
2009-07-10T05:05:16Z
1,107,830
<p>Arrays have to be typed as far as I know. This works for me:</p> <pre><code>from System import Array

num_list = [n for n in range(10)]
num_arr = Array[int](num_list)
</code></pre> <p>Similarly for strings and other types.</p>
2
2009-07-10T05:18:28Z
[ "python", "ironpython" ]
python long running daemon job processor
1,107,826
<p>I want to write a long-running process (a Linux daemon) that serves two purposes:</p> <ul> <li>responds to REST web requests</li> <li>executes jobs which can be scheduled</li> </ul> <p>I originally had it working as a simple program that would run through the jobs and do the updates, which I then cron’d, but now I have the added REST requirement and would also like to change the frequency of some jobs, but not others (let’s say all jobs have different frequencies).</p> <p>I have zero experience writing long-running processes, especially ones that do things on their own rather than responding to requests.</p> <p>My basic plan is to run the REST part in a separate thread/process, and I figured I’d run the jobs part separately.</p> <p>I’m wondering if there exist any patterns, specifically in Python (I’ve looked and haven’t really found any examples of what I want to do), or if anyone has any suggestions on where to begin with transitioning my project to meet these new requirements. I’ve seen a few projects that touch on scheduling, but I’m really looking for real-world user experience / suggestions here. What works / doesn’t work for you?</p>
2
2009-07-10T05:16:44Z
1,107,859
<p>I usually use <code>cron</code> for scheduling. As for REST, you can use one of the many, many web frameworks out there, but just running SimpleHTTPServer should be enough.</p> <p>You can schedule the REST service startup with <code>cron</code>'s <code>@reboot</code>:</p> <pre><code>@reboot (cd /path/to/my/app &amp;&amp; nohup python myserver.py &amp;)
</code></pre>
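For the "different frequency per job" requirement without cron, a minimal in-process scheduler loop can be sketched with nothing but the standard library. This is a sketch under stated assumptions: the job names, intervals, and the `max_ticks` testing hook are all invented for illustration; a real daemon would pass `max_ticks=None` and loop forever.

```python
import time

def run_scheduler(jobs, tick=0.5, max_ticks=None):
    """Run each job whenever its own interval has elapsed.

    jobs      -- dict mapping a zero-argument callable to its interval in seconds
    tick      -- how long to sleep between checks
    max_ticks -- bound the loop for testing; None means run forever
    """
    next_run = {job: 0.0 for job in jobs}  # 0.0 means "run immediately"
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        now = time.time()
        for job, interval in jobs.items():
            if now >= next_run[job]:
                job()
                next_run[job] = now + interval
        time.sleep(tick)
        ticks += 1
```

Changing a job's frequency then amounts to updating its entry in the `jobs` dict, which the REST thread could do at runtime.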
0
2009-07-10T05:29:23Z
[ "python", "web-services", "scheduling", "long-running-processes" ]
python long running daemon job processor
1,107,826
<p>I want to write a long running process (linux daemon) that serves two purposes:</p> <ul> <li>responds to REST web requests</li> <li>executes jobs which can be scheduled </li> </ul> <p>I originally had it working as a simple program that would run through runs and do the updates which I then cron’d, but now I have the added REST requirement, and would also like to change the frequency of some jobs, but not others (let’s say all jobs have different frequencies). </p> <p>I have 0 experience writing long running processes, especially ones that do things on their own, rather than responding to requests.</p> <p>My basic plan is to run the REST part in a separate thread/process, and figured I’d run the jobs part separately.</p> <p>I’m wondering if there exists any patterns, specifically python, (I’ve looked and haven’t really found any examples of what I want to do) or if anyone has any suggestions on where to begin with transitioning my project to meet these new requirements. I’ve seen a few projects that touch on scheduling, but I’m really looking for real world user experience / suggestions here. What works / doesn’t work for you?</p>
2
2009-07-10T05:16:44Z
1,107,871
<p>One option is to simply choose a lightweight WSGI server from this list:</p> <ul> <li><a href="http://wsgi.org/wsgi/Servers" rel="nofollow">http://wsgi.org/wsgi/Servers</a></li> </ul> <p>and let it do the work of a long-running process that serves requests. (I would recommend <a href="http://pypi.python.org/pypi/Spawning/0.7" rel="nofollow">Spawning</a>.) Your code can then concentrate on the REST API, handling requests through the well-defined WSGI interface, and scheduling jobs.</p> <p>There are at least a couple of scheduling libraries you could use, but I don't know much about them:</p> <ul> <li><a href="http://sourceforge.net/projects/pycron/" rel="nofollow">http://sourceforge.net/projects/pycron/</a></li> <li><a href="http://code.google.com/p/scheduler-py/" rel="nofollow">http://code.google.com/p/scheduler-py/</a></li> </ul>
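To make the WSGI side concrete, here is a minimal application sketch; the route and the payload are invented for illustration, and only the standard WSGI callable interface is assumed. Any server from the list above can host it.

```python
import json

def application(environ, start_response):
    """Tiny REST-ish WSGI app: GET /jobs lists jobs, anything else is a 404."""
    if environ.get("PATH_INFO") == "/jobs":
        status = "200 OK"
        body = json.dumps({"jobs": ["update_feeds", "rebuild_index"]}).encode("utf-8")
    else:
        status = "404 Not Found"
        body = b'{"error": "not found"}'
    start_response(status, [("Content-Type", "application/json"),
                            ("Content-Length", str(len(body)))])
    return [body]
```

Because WSGI apps are plain callables, they are easy to exercise in tests with a fake `environ`, and the job scheduler can run in a background thread of the same process.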
1
2009-07-10T05:33:28Z
[ "python", "web-services", "scheduling", "long-running-processes" ]