title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
Why would traceback.extract_stack() return [] when there is definitely a call stack? | 1,252,823 | <p>I have a class that calls</p>
<pre><code>traceback.extract_stack()
</code></pre>
<p>in its <code>__init__()</code>, but whenever I do that, the value of <code>traceback.extract_stack()</code> is <code>[]</code>.</p>
<p>What are some reasons that this could be the case?
Is there another way to get a traceback that will be more reliable?</p>
<p>I think the problem is that the code is running in Pylons. Here is some code for a controller action:</p>
<pre><code>def test_tb(self):
import traceback
return a.lib.htmlencode(traceback.extract_stack())
</code></pre>
<p>It generates a webpage that is just</p>
<pre><code>[]
</code></pre>
<p>So, I don't think it has anything to do with being in the constructor of an object or anything like that. Could it have to do with an incompatibility between some kinds of threading and the traceback module or something like that?</p>
| 0 | 2009-08-10T01:43:15Z | 1,252,831 | <p>Looking at the code for the traceback module, one possibility is that you've got <code>sys.tracebacklimit</code> set to zero, though that seems like a long shot...</p>
| 0 | 2009-08-10T01:49:11Z | [
"python",
"stack-trace"
] |
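That guess is easy to check: `traceback.extract_stack()` honours `sys.tracebacklimit` when no explicit limit is passed, and a limit of zero yields an empty list. A minimal sketch (run under a current CPython; the 2.x line this question targets behaves the same way):

```python
import sys
import traceback

def probe():
    # With no explicit limit, extract_stack() falls back to sys.tracebacklimit.
    return traceback.extract_stack()

normal = probe()
sys.tracebacklimit = 0       # the misconfiguration this answer suspects
limited = probe()
del sys.tracebacklimit       # restore the default (no limit)

print(len(normal) > 0, limited == [])   # True True
```

If `limited` comes back empty while `normal` does not, the zero limit alone reproduces the symptom.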
Why would traceback.extract_stack() return [] when there is definitely a call stack? | 1,252,823 | <p>I have a class that calls</p>
<pre><code>traceback.extract_stack()
</code></pre>
<p>in its <code>__init__()</code>, but whenever I do that, the value of <code>traceback.extract_stack()</code> is <code>[]</code>.</p>
<p>What are some reasons that this could be the case?
Is there another way to get a traceback that will be more reliable?</p>
<p>I think the problem is that the code is running in Pylons. Here is some code for a controller action:</p>
<pre><code>def test_tb(self):
import traceback
return a.lib.htmlencode(traceback.extract_stack())
</code></pre>
<p>It generates a webpage that is just</p>
<pre><code>[]
</code></pre>
<p>So, I don't think it has anything to do with being in the constructor of an object or anything like that. Could it have to do with an incompatibility between some kinds of threading and the traceback module or something like that?</p>
| 0 | 2009-08-10T01:43:15Z | 1,252,853 | <p>The following shows <code>traceback.extract_stack()</code> working when called from a class's <code>__init__</code> method. Please post your code showing that it doesn't work. Include the Python version. Don't type from memory; use copy/paste as I have done.</p>
<pre><code>Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import traceback as tb
>>> tb.extract_stack()
[('<stdin>', 1, '<module>', None)]
>>> def func():
... print tb.extract_stack()
...
>>> func()
[('<stdin>', 1, '<module>', None), ('<stdin>', 2, 'func', None)]
>>> class Klass(object):
... def __init__(self):
... print tb.extract_stack()
...
>>> k = Klass()
[('<stdin>', 1, '<module>', None), ('<stdin>', 3, '__init__', None)]
>>>
</code></pre>
<p><strong>UPDATE</strong> Instead of looking at <code>return a.lib.htmlencode(traceback.extract_stack())</code> and wondering, tap into the pipeline:</p>
<p>(1) do <code>tb_stack = repr(traceback.extract_stack())</code> and write the result to your logfile for checking</p>
<p>(2) do <code>return a.lib.htmlencode(some_known_constant_data)</code> and check that the known data shows up correctly where you expect it to show up.</p>
| 1 | 2009-08-10T01:59:31Z | [
"python",
"stack-trace"
] |
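The interpreter session above is a Python 2 prompt; the same experiment as a Python 3 script, capturing the stack from inside `__init__`, shows there is nothing special about constructors (class and method names mirror the session):

```python
import traceback

class Klass:
    def __init__(self):
        # Record the call stack at construction time.
        self.stack = traceback.extract_stack()

k = Klass()
print(k.stack[-1].name)   # '__init__' is the innermost recorded frame
print(len(k.stack) >= 2)  # True: at least the caller's frame plus __init__
```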
Why would traceback.extract_stack() return [] when there is definitely a call stack? | 1,252,823 | <p>I have a class that calls</p>
<pre><code>traceback.extract_stack()
</code></pre>
<p>in its <code>__init__()</code>, but whenever I do that, the value of <code>traceback.extract_stack()</code> is <code>[]</code>.</p>
<p>What are some reasons that this could be the case?
Is there another way to get a traceback that will be more reliable?</p>
<p>I think the problem is that the code is running in Pylons. Here is some code for a controller action:</p>
<pre><code>def test_tb(self):
import traceback
return a.lib.htmlencode(traceback.extract_stack())
</code></pre>
<p>It generates a webpage that is just</p>
<pre><code>[]
</code></pre>
<p>So, I don't think it has anything to do with being in the constructor of an object or anything like that. Could it have to do with an incompatibility between some kinds of threading and the traceback module or something like that?</p>
| 0 | 2009-08-10T01:43:15Z | 1,257,132 | <p>The reason turned out to be that someone had turned on Psyco on the project, and Psyco doesn't play nice with the traceback module.</p>
| 0 | 2009-08-10T20:49:27Z | [
"python",
"stack-trace"
] |
How to include and use .eggs/pkg_resources within a project directory targeting python 2.5.1 | 1,252,910 | <p>I have python .egg files that are stored in a location relative to some .py code. The problem is, I am targeting Python 2.5.1 computers which require that my project be self-contained in a folder (hundreds of thousands of OLPC XO 8.2.1 release laptops running Sugar). This means I cannot just run ./ez_install to perform a system-wide setuptools/pkg_resources installation.</p>
<p>Example directory structure:</p>
<pre><code>My Application/
My Application/library1.egg
My Application/libs/library2.egg
My Application/test.py
</code></pre>
<p>I am wondering how best to import and use library1 and library2 from within test.py with no pkg_resources system-wide installation. Is my best option simply to unzip the .egg files?</p>
<p>Thanks for any tips.</p>
| 3 | 2009-08-10T02:40:14Z | 1,252,942 | <p><strong>Include pkg_resources.py in the <code>lib/</code> directory.</strong></p>
<p>Add at the top of test.py...</p>
<pre><code> import sys
sys.path.append("lib/")
import pkg_resources
</code></pre>
<p>and then you can...</p>
<pre><code> sys.path.append("library1.egg")
sys.path.append("libs/library2.egg")
import library1
import library2
</code></pre>
| 0 | 2009-08-10T03:01:53Z | [
"python",
"egg",
"python-2.5",
"olpc"
] |
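The `sys.path.append(...)` lines work because CPython's built-in zipimport support treats a `.egg` file as an importable zip archive, no setuptools required. A self-contained sketch with a throwaway stand-in egg (the module name `library1` is borrowed from the question; its contents are invented):

```python
import os
import sys
import tempfile
import zipfile

# Build a stand-in egg: just a zip archive containing one module.
tmpdir = tempfile.mkdtemp()
egg_path = os.path.join(tmpdir, "library1.egg")
with zipfile.ZipFile(egg_path, "w") as zf:
    zf.writestr("library1.py", "VERSION = '1.0'\n")

# Same idea as sys.path.append("library1.egg") in the answer above.
sys.path.append(egg_path)
import library1

print(library1.VERSION)   # 1.0
```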
How to include and use .eggs/pkg_resources within a project directory targeting python 2.5.1 | 1,252,910 | <p>I have python .egg files that are stored in a location relative to some .py code. The problem is, I am targeting Python 2.5.1 computers which require that my project be self-contained in a folder (hundreds of thousands of OLPC XO 8.2.1 release laptops running Sugar). This means I cannot just run ./ez_install to perform a system-wide setuptools/pkg_resources installation.</p>
<p>Example directory structure:</p>
<pre><code>My Application/
My Application/library1.egg
My Application/libs/library2.egg
My Application/test.py
</code></pre>
<p>I am wondering how best to import and use library1 and library2 from within test.py with no pkg_resources system-wide installation. Is my best option simply to unzip the .egg files?</p>
<p>Thanks for any tips.</p>
| 3 | 2009-08-10T02:40:14Z | 2,898,409 | <p>If you want to be able to use pkg_resources, just copy pkg_resources.py alongside your application's main script. It's designed to be able to be used this way as a standalone runtime.</p>
| 1 | 2010-05-24T16:14:06Z | [
"python",
"egg",
"python-2.5",
"olpc"
] |
Distributing my python scripts as jars with jython? | 1,252,965 | <p>I have been a python programmer for almost 2 years and I am used to writing small scripts to automate some repetitive tasks I had to do at the office. Now, apparently my colleagues noticed this and they want those scripts too.</p>
<p>Some of them have Macs, some Windows; I made these on Windows. I investigated the possibility of using py2exe or even py2app to make native executables of my script, but they never satisfied me.</p>
<p>I came to know that all of them have a JVM on their systems, so can I give them one single executable jar file of my script using something like Jython, maybe?</p>
<p>How feasible is this... I mean, I had no idea how to write scripts for Jython, nor did I care about it when I wrote them... what kind of problems will it give?</p>
| 50 | 2009-08-10T03:11:51Z | 1,253,267 | <p>The 'jythonc' command should be able to compile your .py source into JVM bytecode, which should make it portable to any Java install. Or so I read at: <a href="http://hell.org.ua/Docs/oreilly/other2/python/0596001886%5Fpythonian-chp-25-sect-3.html" rel="nofollow">http://hell.org.ua/Docs/oreilly/other2/python/0596001886_pythonian-chp-25-sect-3.html</a></p>
| 1 | 2009-08-10T05:47:00Z | [
"python",
"jython",
"distribution",
"executable-jar"
] |
Distributing my python scripts as jars with jython? | 1,252,965 | <p>I have been a python programmer for almost 2 years and I am used to writing small scripts to automate some repetitive tasks I had to do at the office. Now, apparently my colleagues noticed this and they want those scripts too.</p>
<p>Some of them have Macs, some Windows; I made these on Windows. I investigated the possibility of using py2exe or even py2app to make native executables of my script, but they never satisfied me.</p>
<p>I came to know that all of them have a JVM on their systems, so can I give them one single executable jar file of my script using something like Jython, maybe?</p>
<p>How feasible is this... I mean, I had no idea how to write scripts for Jython, nor did I care about it when I wrote them... what kind of problems will it give?</p>
| 50 | 2009-08-10T03:11:51Z | 1,255,113 | <p>The best current techniques for distributing your Python files in a jar are detailed in this article on Jython's wiki: <a href="http://wiki.python.org/jython/JythonFaq/DistributingJythonScripts">http://wiki.python.org/jython/JythonFaq/DistributingJythonScripts</a></p>
<p>For your case, I think you would want to take the jython.jar file that you get when you install Jython and zip the Jython Lib directory into it, then zip your .py files in, and then add a <code>__run__.py</code> file with your startup logic (this file is treated specially by Jython and will be the file executed when you call the jar with "java -jar").</p>
<p>This process is definitely more complicated than it ought to be, and so we (the Jython developers) need to come up with a nice tool that will automate these tasks, but for now these are the best methods. Below I'm copying the recipe at the bottom of the above article (modified slightly to fit your problem description) to give you a sense of the solution.</p>
<p>Create the basic jar:</p>
<pre><code>$ cd $JYTHON_HOME
$ cp jython.jar jythonlib.jar
$ zip -r jythonlib.jar Lib
</code></pre>
<p>Add other modules to the jar:</p>
<pre><code>$ cd $MY_APP_DIRECTORY
$ cp $JYTHON_HOME/jythonlib.jar myapp.jar
$ zip myapp.jar Lib/showobjs.py
# Add path to additional jar file.
$ jar ufm myapp.jar othermanifest.mf
</code></pre>
<p>Add the <code>__run__.py</code> module:</p>
<pre><code># Copy or rename your start-up script, removing the "__name__ == '__main__'" check.
$ cp mymainscript.py __run__.py
# Add your start-up script (__run__.py) to the jar.
$ zip myapp.jar __run__.py
# Add path to main jar to the CLASSPATH environment variable.
$ export CLASSPATH=/path/to/my/app/myapp.jar:$CLASSPATH
</code></pre>
<p>On MS Windows, that last line, setting the CLASSPATH environment variable, would look something like this:</p>
<pre><code>set CLASSPATH=C:\path\to\my\app\myapp.jar;%CLASSPATH%
</code></pre>
<p>Or, again on MS Windows, use the Control Panel and the System properties to set the CLASSPATH environment variable.</p>
<p>Run the application:</p>
<pre><code>$ java -jar myapp.jar mymainscript.py arg1 arg2
</code></pre>
<p>Or, if you have added your start-up script to the jar, use one of the following:</p>
<pre><code>$ java org.python.util.jython -jar myapp.jar arg1 arg2
$ java -cp myapp.jar org.python.util.jython -jar myapp.jar arg1 arg2
$ java -jar myapp.jar -jar myapp.jar arg1 arg2
</code></pre>
<p>The double -jar is kind of annoying, so if you want to avoid that and get the more pleasing:</p>
<pre><code>$ java -jar myapp.jar arg1
</code></pre>
<p>You'll have to do a bit more work until we get something like this into a future Jython [Update: JarRunner is part of Jython 2.5.1]. Here is some Java code that looks for the <code>__run__.py</code> automatically, and runs it. Note that this is my first try at this class. Let me know if it needs improvement!</p>
<pre><code>package org.python.util;
import org.python.core.imp;
import org.python.core.PySystemState;
public class JarRunner {
public static void run(String[] args) {
final String runner = "__run__";
String[] argv = new String[args.length + 1];
argv[0] = runner;
System.arraycopy(args, 0, argv, 1, args.length);
PySystemState.initialize(PySystemState.getBaseProperties(), null, argv);
imp.load(runner);
}
public static void main(String[] args) {
run(args);
}
}
</code></pre>
<p>I put this code into the org.python.util package, since that's where it would go if we decide to include it in a future Jython. To compile it, you'll need to put jython.jar (or your myapp.jar) into the classpath like:</p>
<pre><code>$ javac -classpath myapp.jar org/python/util/JarRunner.java
</code></pre>
<p>Then you'll need to add JarRunner.class to your jar (the class file will need to be in org/python/util/JarRunner.class); calling jar on the "org" directory will get the whole path into your jar.</p>
<pre><code>$ jar uf myapp.jar org
</code></pre>
<p>Add this to a file that you will use to update the manifest, a good name is manifest.txt:</p>
<pre><code>Main-Class: org.python.util.JarRunner
</code></pre>
<p>Then update the jar's manifest:</p>
<pre><code>$ jar ufm myapp.jar manifest.txt
</code></pre>
<p>Now you should be able to run your app like this:</p>
<pre><code>$ java -jar myapp.jar
</code></pre>
| 57 | 2009-08-10T14:12:55Z | [
"python",
"jython",
"distribution",
"executable-jar"
] |
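For what it's worth, the shell recipe above can also be scripted. A hedged sketch using Python's `zipfile` module, with a tiny placeholder archive standing in for the real `jython.jar` (file names and contents are illustrative only):

```python
import os
import shutil
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()

# Placeholder for $JYTHON_HOME/jython.jar; the real jar holds Jython's classes.
base_jar = os.path.join(tmpdir, "jython.jar")
with zipfile.ZipFile(base_jar, "w") as zf:
    zf.writestr("Lib/showobjs.py", "# library module\n")

# cp jython.jar myapp.jar, then zip the start-up script in as __run__.py.
app_jar = os.path.join(tmpdir, "myapp.jar")
shutil.copy(base_jar, app_jar)
with zipfile.ZipFile(app_jar, "a") as zf:
    zf.writestr("__run__.py", "print('app started')\n")

print(sorted(zipfile.ZipFile(app_jar).namelist()))
```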
Distributing my python scripts as jars with jython? | 1,252,965 | <p>I have been a python programmer for almost 2 years and I am used to writing small scripts to automate some repetitive tasks I had to do at the office. Now, apparently my colleagues noticed this and they want those scripts too.</p>
<p>Some of them have Macs, some Windows; I made these on Windows. I investigated the possibility of using py2exe or even py2app to make native executables of my script, but they never satisfied me.</p>
<p>I came to know that all of them have a JVM on their systems, so can I give them one single executable jar file of my script using something like Jython, maybe?</p>
<p>How feasible is this... I mean, I had no idea how to write scripts for Jython, nor did I care about it when I wrote them... what kind of problems will it give?</p>
| 50 | 2009-08-10T03:11:51Z | 14,900,677 | <p>I experienced a similar issue: I wanted to be able to create simple command-line calls for my Jython apps, not require the user to go through the Jython installation process, and be able to have the Jython scripts append library dependencies to sys.path at runtime so as to include core Java code.</p>
<pre><code># append Java library elements to path
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "..", "lib", "poi-3.8-20120326.jar"))
</code></pre>
<p>When running the 'jython' launcher explicitly on the command line, on Unix systems, it just runs a big shell script to properly form a java command line call. This jython launcher seems to depend on reaching back to a core install of jython, and by some magic allows .jar files added to sys.path at runtime from within my .py scripts to be handled properly. You can see what the call is, without actually executing it, like this:</p>
<pre><code>jython --print run_form.py
java -Xmx512m -Xss1024k -Dfile.encoding=UTF-8 -classpath /Applications/jython2.5.2/jython.jar: -Dpython.home=/Applications/jython2.5.2 -Dpython.executable=/Applications/jython2.5.2/bin/jython org.python.util.jython run_form.py
</code></pre>
<p>But it's still just firing up a JVM and running a class file. So my goal was to be able to make this java call to a standalone jython.jar present in my distribution's lib directory so users would not need to do any additional installation steps to start using my .py scripted utilities.</p>
<pre><code>java -Xmx512m -Xss1024k -classpath ../../lib/jython.jar org.python.util.jython run_form.py
</code></pre>
<p>Trouble is that the behavior is enough different that I would get responses like this:</p>
<pre><code> File "run_form.py", line 14, in <module>
import xls_mgr
File "/Users/test/Eclipse/workspace/test_code/py/test/xls_mgr.py", line 17, in <module>
import org.apache.poi.hssf.extractor as xls_extractor
ImportError: No module named apache
</code></pre>
<p>Now you might say that I should just add the jar files to the -classpath, which in fact I tried, but I would get the same result.</p>
<p>The suggestion of bundling all of your .class files in a jython.jar did not sound appealing to me at all. It would be a mess and would bind the Java/Python hybrid application too tightly to the jython distribution. So that idea was not going to fly. Finally, after lots of searching, I ran across bug #1776 at jython.org, which has been listed as critical for a year and a half, but I don't see that the latest updates to jython incorporate a fix. Still, if you're having problems with having jython include your separate jar files, you should read this.</p>
<p><a href="http://bugs.jython.org/issue1776" rel="nofollow">http://bugs.jython.org/issue1776</a></p>
<p>In there, you will find the temporary workaround for this. In my case, I took the Apache POI jar file and unjar'ed it into its own separate lib directory and then modified the sys.path entry to point to the directory instead of the jar:</p>
<pre><code>sys.path.append('/Users/test/Eclipse/workspace/test_code/lib/poi_lib')
</code></pre>
<p>Now, when I run jython by way of java, referencing my local jython.jar, the utility runs just peachy. Now I can create simple scripts or batch files to make a seamless command line experience for my .py utilities, which the user can run without any additional installation steps.</p>
| 2 | 2013-02-15T18:04:39Z | [
"python",
"jython",
"distribution",
"executable-jar"
] |
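Stripped of the Jython specifics, the workaround in that bug report amounts to putting an unpacked <em>directory</em> on sys.path instead of an archive. A plain-CPython sketch (a throwaway module stands in for the unjarred POI classes):

```python
import os
import sys
import tempfile

lib_dir = tempfile.mkdtemp()   # stands in for .../lib/poi_lib
with open(os.path.join(lib_dir, "poi_stub.py"), "w") as f:
    f.write("NAME = 'poi'\n")

sys.path.append(lib_dir)       # append the directory, not the jar
import poi_stub

print(poi_stub.NAME)   # poi
```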
Distributing my python scripts as jars with jython? | 1,252,965 | <p>I have been a python programmer for almost 2 years and I am used to writing small scripts to automate some repetitive tasks I had to do at the office. Now, apparently my colleagues noticed this and they want those scripts too.</p>
<p>Some of them have Macs, some Windows; I made these on Windows. I investigated the possibility of using py2exe or even py2app to make native executables of my script, but they never satisfied me.</p>
<p>I came to know that all of them have a JVM on their systems, so can I give them one single executable jar file of my script using something like Jython, maybe?</p>
<p>How feasible is this... I mean, I had no idea how to write scripts for Jython, nor did I care about it when I wrote them... what kind of problems will it give?</p>
| 50 | 2009-08-10T03:11:51Z | 29,523,138 | <p>I know this question is rather old, but nonetheless I thought it worth pointing out that for distributing your Python scripts in a way that doesn't require a native Python installation, you could also try <a href="http://nuitka.net/" rel="nofollow">Nuitka</a>, which basically translates your Python code to C++ code, which is then compiled to a true native binary.</p>
| 0 | 2015-04-08T19:00:18Z | [
"python",
"jython",
"distribution",
"executable-jar"
] |
Google Web Toolkit like application in Django | 1,253,056 | <p>I'm trying to develop an application that would be perfect for GWT; however, I am using this app as a learning example for Django. Is there some precedent for this type of application in Django?</p>
| 3 | 2009-08-10T04:00:23Z | 1,253,082 | <p><a href="http://pyjs.org/">Pyjamas</a> is sort of like GWT which is written with Python. From there you can make it work with your django code.</p>
| 5 | 2009-08-10T04:10:27Z | [
"python",
"django",
"gwt"
] |
Google Web Toolkit like application in Django | 1,253,056 | <p>I'm trying to develop an application that would be perfect for GWT; however, I am using this app as a learning example for Django. Is there some precedent for this type of application in Django?</p>
| 3 | 2009-08-10T04:00:23Z | 1,258,305 | <p>Lots of people have done this by writing their UI in GWT and having it issue ajax calls back to their python backend. There are basically two ways to go about it. First, you can simply use JSON to communicate between the frontend and the backend. That's the approach you will find here (<a href="http://palantar.blogspot.com/2006/06/agad-tutorial-ish-sort-of-post.html" rel="nofollow">http://palantar.blogspot.com/2006/06/agad-tutorial-ish-sort-of-post.html</a>). Second, some people want to use GWT's RPC system to talk to python backends. This is a little more involved, but some people have created tools (for example, <a href="http://code.google.com/p/python-gwt-rpc/" rel="nofollow">http://code.google.com/p/python-gwt-rpc/</a>).</p>
<p>To be honest, most successful projects just use JSON to communicate between GWT and the python server. GWT's RPC is pretty advanced in that it is able to serialize arbitrary java object graphs to and from the client. It's a tricky problem to get right and I'm pretty doubtful that any of the python tools have it right.</p>
| 3 | 2009-08-11T03:10:00Z | [
"python",
"django",
"gwt"
] |
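On the Python side, the JSON bridge described above needs nothing beyond the standard library. A minimal sketch (the function name and payload shape are invented; a real Django view would wrap the string in an HttpResponse with content_type="application/json"):

```python
import json

def gwt_endpoint(payload):
    # The GWT (or Pyjamas) client parses whatever JSON string we emit here.
    result = {"status": "ok", "echo": payload}
    return json.dumps(result)

reply = gwt_endpoint({"n": 2})
print(reply)
```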
Why does subprocess.Popen() with shell=True work differently on Linux vs Windows? | 1,253,122 | <p>When using <code>subprocess.Popen(args, shell=True)</code> to run "<code>gcc --version</code>" (just as an example), on Windows we get this:</p>
<pre><code>>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
</code></pre>
<p>So it's nicely printing out the version as I expect. But on Linux we get this:</p>
<pre><code>>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
</code></pre>
<p>Because gcc hasn't received the <code>--version</code> option.</p>
<p>The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, <em>"If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments."</em> IMHO the Windows way is better, because it allows you to treat <code>Popen(arglist)</code> calls the same as <code>Popen(arglist, shell=True)</code> ones.</p>
<p><strong>Why the difference between Windows and Linux here?</strong></p>
| 19 | 2009-08-10T04:39:58Z | 1,253,161 | <p>From the subprocess.py source:</p>
<blockquote>
<p>On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a sequence,
the first item specifies the command string, and any additional items
will be treated as additional shell arguments.</p>
<p>On Windows: the Popen class uses CreateProcess() to execute the child
program, which operates on strings. If args is a sequence, it will be
converted to a string using the list2cmdline method. Please note that
not all MS Windows applications interpret the command line the same
way: The list2cmdline is designed for applications using the same
rules as the MS C runtime.</p>
</blockquote>
<p>That doesn't answer why, just clarifies that you are seeing the expected behavior.</p>
<p>The "why" is probably that on UNIX-like systems, command arguments are actually passed through to applications (using the <code>exec*</code> family of calls) as an array of strings. In other words, the calling process decides what goes into EACH command line argument. Whereas when you tell it to use a shell, the calling process actually only gets the chance to pass a single command line argument to the shell to execute: The entire command line that you want executed, executable name and arguments, as a single string.</p>
<p>But on Windows, the entire command line (according to the above documentation) is passed as a single string to the child process. If you look at the <a href="http://msdn.microsoft.com/en-us/library/ms682425%28VS.85%29.aspx" rel="nofollow">CreateProcess</a> API documentation, you will notice that it expects all of the command line arguments to be concatenated together into a big string (hence the call to <code>list2cmdline</code>).</p>
<p>Plus there is the fact that on UNIX-like systems there actually <em>is</em> a shell that can do useful things, so I suspect that the other reason for the difference is that on Windows, <code>shell=True</code> does nothing, which is why it is working the way you are seeing. The only way to make the two systems act identically would be for it to simply drop all of the command line arguments when <code>shell=True</code> on Windows.</p>
| 5 | 2009-08-10T04:55:52Z | [
"python",
"shell",
"subprocess",
"popen"
] |
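The Unix behaviour described above is easy to observe with the modern `subprocess.run` (a sketch, skipped on Windows, where the list would instead be joined into a single command line):

```python
import subprocess
import sys

if sys.platform != "win32":
    # Only the first list item is the command string; "extra" becomes the
    # shell's $0, not an argument of echo -- mirroring the gcc example.
    result = subprocess.run(["echo hello", "extra"], shell=True,
                            capture_output=True, text=True)
    print(result.stdout.strip())   # hello
```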
Why does subprocess.Popen() with shell=True work differently on Linux vs Windows? | 1,253,122 | <p>When using <code>subprocess.Popen(args, shell=True)</code> to run "<code>gcc --version</code>" (just as an example), on Windows we get this:</p>
<pre><code>>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
</code></pre>
<p>So it's nicely printing out the version as I expect. But on Linux we get this:</p>
<pre><code>>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
</code></pre>
<p>Because gcc hasn't received the <code>--version</code> option.</p>
<p>The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, <em>"If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments."</em> IMHO the Windows way is better, because it allows you to treat <code>Popen(arglist)</code> calls the same as <code>Popen(arglist, shell=True)</code> ones.</p>
<p><strong>Why the difference between Windows and Linux here?</strong></p>
| 19 | 2009-08-10T04:39:58Z | 1,254,322 | <p>Actually on Windows, it does use <code>cmd.exe</code> when <code>shell=True</code> - it prepends <code>cmd.exe /c </code> (it actually looks up the <code>COMSPEC</code> environment variable but defaults to <code>cmd.exe</code> if not present) to the shell arguments. (On Windows 95/98 it uses the intermediate <code>w9xpopen</code> program to actually launch the command).</p>
<p>So the strange implementation is actually the <code>UNIX</code> one, which does the following (where each space separates a different argument):</p>
<pre><code>/bin/sh -c gcc --version
</code></pre>
<p>It looks like the correct implementation (at least on Linux) would be:</p>
<pre><code>/bin/sh -c "gcc --version" gcc --version
</code></pre>
<p>Since this would set the command string from the quoted parameters, and pass the other parameters successfully.</p>
<p>From the <code>sh</code> man page section for <code>-c</code>:</p>
<blockquote>
<p><code>Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.</code></p>
</blockquote>
<p>This patch seems to fairly simply do the trick:</p>
<pre><code>--- subprocess.py.orig  2009-04-19 04:43:42.000000000 +0200
+++ subprocess.py       2009-08-10 13:08:48.000000000 +0200
@@ -990,7 +990,7 @@
             args = list(args)

         if shell:
-            args = ["/bin/sh", "-c"] + args
+            args = ["/bin/sh", "-c"] + [" ".join(args)] + args

         if executable is None:
             executable = args[0]
</code></pre>
| 14 | 2009-08-10T11:09:49Z | [
"python",
"shell",
"subprocess",
"popen"
] |
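The `$0`/`$1` assignment quoted from the `sh` man page can be checked directly (Unix only; `echo` stands in for `gcc`):

```python
import subprocess
import sys

if sys.platform != "win32":
    # The operand after the -c command string becomes $0, the next one $1.
    result = subprocess.run(
        ["/bin/sh", "-c", 'echo "zero=$0 one=$1"', "gcc", "--version"],
        capture_output=True, text=True)
    print(result.stdout.strip())   # zero=gcc one=--version
```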
Why does subprocess.Popen() with shell=True work differently on Linux vs Windows? | 1,253,122 | <p>When using <code>subprocess.Popen(args, shell=True)</code> to run "<code>gcc --version</code>" (just as an example), on Windows we get this:</p>
<pre><code>>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc (GCC) 3.4.5 (mingw-vista special r3) ...
</code></pre>
<p>So it's nicely printing out the version as I expect. But on Linux we get this:</p>
<pre><code>>>> from subprocess import Popen
>>> Popen(['gcc', '--version'], shell=True)
gcc: no input files
</code></pre>
<p>Because gcc hasn't received the <code>--version</code> option.</p>
<p>The docs don't specify exactly what should happen to the args under Windows, but it does say, on Unix, <em>"If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments."</em> IMHO the Windows way is better, because it allows you to treat <code>Popen(arglist)</code> calls the same as <code>Popen(arglist, shell=True)</code> ones.</p>
<p><strong>Why the difference between Windows and Linux here?</strong></p>
| 19 | 2009-08-10T04:39:58Z | 27,404,401 | <p>The reason for the UNIX behaviour of <code>shell=True</code> is to do with quoting. When we write a shell command, it will be split at spaces, so we have to quote some arguments:</p>
<pre><code>cp "My File" "New Location"
</code></pre>
<p>This leads to problems when our arguments <em>contain</em> quotes, which requires escaping:</p>
<pre><code>grep -r "\"hello\"" .
</code></pre>
<p>Sometimes we can get <a href="http://en.wikipedia.org/wiki/Leaning_toothpick_syndrome" rel="nofollow">awful situations</a> where <code>\</code> must be escaped too!</p>
<p>Of course, the real problem is that we're trying to use <em>one</em> string to specify <em>multiple</em> strings. When calling system commands, most programming languages avoid this by allowing us to send multiple strings in the first place, hence:</p>
<pre><code>Popen(['cp', 'My File', 'New Location'])
Popen(['grep', '-r', '"hello"'])
</code></pre>
<p>Sometimes it can be nice to run "raw" shell commands; for example, if we're copy-pasting something from a shell script or a Web site, and we don't want to convert all of the horrible escaping manually. That's why the <code>shell=True</code> option exists:</p>
<pre><code>Popen(['cp "My File" "New Location"'], shell=True)
Popen(['grep -r "\"hello\"" .'], shell=True)
</code></pre>
<p>I'm not familiar with Windows so I don't know how or why it behaves differently.</p>
| 0 | 2014-12-10T15:11:01Z | [
"python",
"shell",
"subprocess",
"popen"
] |
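The point about escaping can be demonstrated end to end: with the list form, a filename containing a space and a pattern containing quotes are passed verbatim, with no shell and no escaping (Unix-flavoured sketch using `grep`; the file name and contents are invented):

```python
import os
import subprocess
import sys
import tempfile

if sys.platform != "win32":
    d = tempfile.mkdtemp()
    path = os.path.join(d, "My File.txt")   # space in the name, no quoting needed
    with open(path, "w") as f:
        f.write('say "hello" now\n')

    # Pattern and path are passed verbatim as separate arguments.
    result = subprocess.run(["grep", '"hello"', path],
                            capture_output=True, text=True)
    print(result.returncode)   # 0: the quoted pattern matched
```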
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 1,253,540 | <p>The most simple way is probably <code>inspect.getsource(object)</code> (see the <a href="http://docs.python.org/library/inspect.html#retrieving-source-code">inspect module</a>) which returns a String with the source code for a function or a method.</p>
| 9 | 2009-08-10T07:29:42Z | [
"python",
"function",
"pickle"
] |
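A sketch of that approach end to end. One caveat the answer leaves implicit: `inspect.getsource` reads the <em>file</em> the function was defined in, so a function typed at a bare interpreter prompt has no retrievable source. Here the sender-side function is written to a temporary module first (all names are illustrative):

```python
import inspect
import os
import sys
import tempfile

# Sender side: define the function in a real module file on disk.
mod_dir = tempfile.mkdtemp()
with open(os.path.join(mod_dir, "mathmod.py"), "w") as f:
    f.write("def square(x):\n    return x * x\n")
sys.path.insert(0, mod_dir)
import mathmod

wire = inspect.getsource(mathmod.square)   # the string you would transmit

# Receiver side: rebuild the function by executing the received source.
ns = {}
exec(wire, ns)
print(ns["square"](4))   # 16
```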
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 1,253,579 | <p><a href="http://pythonhosted.org/Pyro4/">Pyro</a> is able to <a href="http://packages.python.org/Pyro/7-features.html#mobile">do this for you</a>.</p>
| 13 | 2009-08-10T07:45:33Z | [
"python",
"function",
"pickle"
] |
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 1,253,813 | <p>You could serialise the function bytecode and then reconstruct it on the receiving end. The <a href="https://docs.python.org/3/library/marshal.html">marshal</a> module can be used to serialise code objects, which can then be reassembled into a function, i.e.:</p>
<pre><code>import marshal
def foo(x): return x*x
code_string = marshal.dumps(foo.func_code)
</code></pre>
<p>Then in the remote process (after transferring code_string):</p>
<pre><code>import marshal, types
code = marshal.loads(code_string)
func = types.FunctionType(code, globals(), "some_func_name")
func(10) # gives 100
</code></pre>
<p>A few caveats:</p>
<ul>
<li><p>marshal's format (any python bytecode for that matter) may not be compatible between major python versions.</p></li>

<li><p>Will only work for the CPython implementation.</p></li>
<li><p>If the function references globals (including imported modules, other functions etc) that you need to pick up, you'll need to serialise these too, or recreate them on the remote side. My example just gives it the remote process's global namespace.</p></li>
<li><p>You'll probably need to do a bit more to support more complex cases, like closures or generator functions.</p></li>
</ul>
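<p>A single-process round trip of the marshal approach above (note: <code>func_code</code> in the snippet is the Python 2 spelling of <code>__code__</code>; the function here is just an example):</p>

```python
import marshal
import types

def square(x):
    return x * x

# Serialise only the code object; globals are supplied at rebuild time.
code_string = marshal.dumps(square.__code__)

code = marshal.loads(code_string)
rebuilt = types.FunctionType(code, globals(), 'square')
assert rebuilt(10) == 100
```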
| 91 | 2009-08-10T08:58:22Z | [
"python",
"function",
"pickle"
] |
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 1,253,895 | <p>It all depends on whether you generate the function at runtime or not:</p>
<p>If you do, <code>inspect.getsource(object)</code> won't work for dynamically generated functions, as it reads the object's source from the <code>.py</code> file; only functions defined in a source file can be retrieved that way.</p>
<p>And if your functions are placed in files anyway, why not give the receiver access to them and only pass module and function names around?</p>
<p>The only solution for dynamically created functions that I can think of is to construct the function as a string before transmission, transmit the source, and then <code>exec()</code> it on the receiver side (a function definition is a statement, so <code>eval()</code> alone won't do).</p>
<p>Edit: the <code>marshal</code> solution also looks pretty smart; I didn't know you could serialize something other than built-ins.</p>
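<p>A hypothetical sketch of the source-string approach (the <code>greet</code> function and the <code>source</code> variable are invented for illustration; note that a function definition is a statement, so the receiving side needs <code>exec</code> rather than <code>eval</code>):</p>

```python
# 'source' stands in for the string received over the network.
source = (
    "def greet(name):\n"
    "    return 'Hello, %s' % name\n"
)

namespace = {}
exec(source, namespace)   # define the function on the receiving side
func = namespace['greet']
assert func('world') == 'Hello, world'
```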
| 5 | 2009-08-10T09:22:44Z | [
"python",
"function",
"pickle"
] |
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 1,416,937 | <p>The basic functions used for this module cover your query, plus you get the best compression over the wire; see the instructive source code:</p>
<p>y_serial.py module :: warehouse Python objects with SQLite</p>
<p>"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."</p>
<p><a href="http://yserial.sourceforge.net" rel="nofollow">http://yserial.sourceforge.net</a></p>
| 1 | 2009-09-13T05:35:44Z | [
"python",
"function",
"pickle"
] |
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 16,900,634 | <p>The <code>cloud</code> package (pip install cloud) can pickle arbitrary code, including dependencies. See <a href="http://stackoverflow.com/a/16891169/1264797">http://stackoverflow.com/a/16891169/1264797</a>.</p>
| 2 | 2013-06-03T15:41:56Z | [
"python",
"function",
"pickle"
] |
Is there an easy way to pickle a python function (or otherwise serialize its code)? | 1,253,528 | <p>I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?</p>
<p>I would ideally like to have a pair of functions similar to these:</p>
<pre><code>def transmit(func):
obj = pickle.dumps(func)
[send obj across the network]
def receive():
[receive obj from the network]
func = pickle.loads(s)
func()
</code></pre>
| 68 | 2009-08-10T07:25:47Z | 20,417,344 | <p>Check out <a href="https://pypi.python.org/pypi/dill">Dill</a>, which extends Python's pickle library to support a greater variety of types, including functions:</p>
<pre><code>>>> import dill as pickle
>>> def f(x): return x + 1
...
>>> g = pickle.dumps(f)
>>> f(1)
2
>>> pickle.loads(g)(1)
2
</code></pre>
<p>It also supports references to objects in the function's closure:</p>
<pre><code>>>> def plusTwo(x): return f(f(x))
...
>>> pickle.loads(pickle.dumps(plusTwo))(1)
3
</code></pre>
| 15 | 2013-12-06T06:20:22Z | [
"python",
"function",
"pickle"
] |
get_allowed_auths() in paramiko for authentication types | 1,253,870 | <p>I am trying to get supported authentication types/methods from a running SSH server in Python.</p>
<p>I found the method get_allowed_auths() in the ServerInterface class in Paramiko, but I can't tell whether it is usable in a simple client-like snippet of code (I am writing something that accomplishes ONLY this task).</p>
<p>Can anyone suggest some links with examples, other than the distribution documentation?
Maybe any other ideas for doing this?</p>
<p>Thanks.</p>
| 3 | 2009-08-10T09:15:19Z | 1,257,769 | <p>You can try to authenticate using no authentication, which should always fail, but the server will then send back the auth types that can continue. There is an <code>auth_none()</code> method provided by <code>paramiko.Transport</code> to do this.</p>
<pre><code>import paramiko
import socket
s = socket.socket()
s.connect(('localhost', 22))
t = paramiko.Transport(s)
t.connect()
try:
t.auth_none('')
except paramiko.BadAuthenticationType, err:
print err.allowed_types
</code></pre>
| 4 | 2009-08-10T23:22:38Z | [
"python",
"authentication",
"ssh",
"paramiko"
] |
Suggestion Needed - Networking in Python - A good idea? | 1,253,905 | <p>I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.</p>
<p>Although the python socket module seems sufficient and mature, I want to check whether it has limitations that could become a problem at a later stage of development.</p>
<p>What do you think of the python socket module :</p>
<ol>
<li>Is it reliable and fast enough for production quality software ?</li>
<li>Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ?</li>
</ol>
<p>Thanks in advance,</p>
<p>Paul</p>
| 4 | 2009-08-10T09:26:22Z | 1,253,945 | <p>To answer #1, I know that among other things, EVE Online (the MMO) uses a variant of Python for their server code.</p>
| 1 | 2009-08-10T09:37:03Z | [
"python",
"network-programming"
] |
Suggestion Needed - Networking in Python - A good idea? | 1,253,905 | <p>I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.</p>
<p>Although the python socket module seems sufficient and mature, I want to check whether it has limitations that could become a problem at a later stage of development.</p>
<p>What do you think of the python socket module :</p>
<ol>
<li>Is it reliable and fast enough for production quality software ?</li>
<li>Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ?</li>
</ol>
<p>Thanks in advance,</p>
<p>Paul</p>
| 4 | 2009-08-10T09:26:22Z | 1,254,092 | <p>The python that EVE online uses is StacklessPython (<a href="http://www.stackless.com/" rel="nofollow">http://www.stackless.com/</a>), and as far as i understand they use it for how it implements threading through using tasklets and whatnot. But since python itself can handle stuff like MMO with 40k people online i think it can do anything.</p>
<p>This bad answer and not really an answer to your question, rather addition to previous answer.</p>
<p>Alan.</p>
| 1 | 2009-08-10T10:15:36Z | [
"python",
"network-programming"
] |
Suggestion Needed - Networking in Python - A good idea? | 1,253,905 | <p>I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.</p>
<p>Although the python socket module seems sufficient and mature, I want to check whether it has limitations that could become a problem at a later stage of development.</p>
<p>What do you think of the python socket module :</p>
<ol>
<li>Is it reliable and fast enough for production quality software ?</li>
<li>Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ?</li>
</ol>
<p>Thanks in advance,</p>
<p>Paul</p>
| 4 | 2009-08-10T09:26:22Z | 1,254,288 | <p>Python is a mature language that can do almost anything that you can do in C/C++ (even direct memory access if you really want to hurt yourself).</p>
<p>You'll find that you can write beautiful code in it in a very short time, that this code is readable from the start and that it will stay readable (you will still know what it does even after returning one year later).</p>
<p>The drawback of Python is that <em>your</em> code will be somewhat slow. "Somewhat" as in "might be too slow for certain cases". So the usual approach is to write as much as possible in Python because it will make your app maintainable. Eventually, you might run into speed issues. That would be the time to consider to rewrite a part of your app in C.</p>
<p>The main advantages of this approach are:</p>
<ol>
<li>You already have a running application. Translating the code from Python to C is much simpler than writing it from scratch.</li>
<li>You already have a running application. After the translation of a small part of Python to C, you just have to test that small part and you can use the rest of the app (that didn't change) to do it.</li>
<li>You don't pay a price upfront. If Python is fast enough for you, you'll never have to do the optional optimization.</li>
<li>Python is much, much more expressive than C. A single line of Python can do the work of many lines of C.</li>
</ol>
| 3 | 2009-08-10T11:04:14Z | [
"python",
"network-programming"
] |
Suggestion Needed - Networking in Python - A good idea? | 1,253,905 | <p>I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.</p>
<p>Although the python socket module seems sufficient and mature, I want to check whether it has limitations that could become a problem at a later stage of development.</p>
<p>What do you think of the python socket module :</p>
<ol>
<li>Is it reliable and fast enough for production quality software ?</li>
<li>Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ?</li>
</ol>
<p>Thanks in advance,</p>
<p>Paul</p>
| 4 | 2009-08-10T09:26:22Z | 1,255,979 | <p>Check out <a href="http://twistedmatrix.com/trac/">Twisted</a>, a Python engine for Networking. Has built-in support for TCP, UDP, SSL/TLS, multicast, Unix sockets, a large number of protocols (including HTTP, NNTP, IMAP, SSH, IRC, FTP, and others)</p>
| 9 | 2009-08-10T16:52:04Z | [
"python",
"network-programming"
] |
How to add a custom implicit conversion to a C++ type in Boost::Python? | 1,254,013 | <p>In my C++ code I have a class Foo with many methods taking a Bar type variable as an argument:</p>
<pre><code>class Foo {
public:
void do_this(Bar b);
void do_that(Bar b);
/* ... */
};
</code></pre>
<p>Bar has a number of constructors that create a new object from many common types such as int, std::string, float, etc:</p>
<pre><code>class Bar {
public:
Bar(int i);
Bar(float f);
Bar(std::string s);
/* ... */
};
</code></pre>
<p>I wrapped this with Boost::Python and I'm now able to call my Foo methods using Python literals directly, as they get implicitly converted to Bar objects.</p>
<pre><code>f = Foo()
f.do_this(5)
f.do_that("hello")
</code></pre>
<p>Now, I would like to be able to use also other Python types, such as tuples, like this:</p>
<pre><code>f.do_that((1,2,3))
</code></pre>
<p>But I don't want to touch the original Bar definition, and I don't want to pollute my C++ library with Boost::Python stuff. I want to write a wrapper function in my binding code, but I just can't understand if this is possible and what is the correct way for doing this.</p>
<p>In other words: can I register a factory function to be used as an automatic conversion in Python?</p>
| 2 | 2009-08-10T09:55:13Z | 1,254,045 | <p>Create some type TupleCollector with an overloaded <code>operator,(int)</code> so you can write</p>
<pre><code>f.do_that((TupleCollector(), 1, 2, 3))
</code></pre>
<p>Finally, create a conversion between TupleCollector and the expected target type.</p>
| 0 | 2009-08-10T10:03:26Z | [
"c++",
"python",
"boost-python"
] |
How to add a custom implicit conversion to a C++ type in Boost::Python? | 1,254,013 | <p>In my C++ code I have a class Foo with many methods taking a Bar type variable as an argument:</p>
<pre><code>class Foo {
public:
void do_this(Bar b);
void do_that(Bar b);
/* ... */
};
</code></pre>
<p>Bar has a number of constructors that create a new object from many common types such as int, std::string, float, etc:</p>
<pre><code>class Bar {
public:
Bar(int i);
Bar(float f);
Bar(std::string s);
/* ... */
};
</code></pre>
<p>I wrapped this with Boost::Python and I'm now able to call my Foo methods using Python literals directly, as they get implicitly converted to Bar objects.</p>
<pre><code>f = Foo()
f.do_this(5)
f.do_that("hello")
</code></pre>
<p>Now, I would like to be able to use also other Python types, such as tuples, like this:</p>
<pre><code>f.do_that((1,2,3))
</code></pre>
<p>But I don't want to touch the original Bar definition, and I don't want to pollute my C++ library with Boost::Python stuff. I want to write a wrapper function in my binding code, but I just can't understand if this is possible and what is the correct way for doing this.</p>
<p>In other words: can I register a factory function to be used as an automatic conversion in Python?</p>
| 2 | 2009-08-10T09:55:13Z | 1,887,414 | <p>You can export a <code>do_that</code> overload that takes a <code>boost::python::object</code> parameter, check whether the parameter is a Python tuple, extract the data, and pass it on to the object.</p>
| -1 | 2009-12-11T11:21:35Z | [
"c++",
"python",
"boost-python"
] |
How to add a custom implicit conversion to a C++ type in Boost::Python? | 1,254,013 | <p>In my C++ code I have a class Foo with many methods taking a Bar type variable as an argument:</p>
<pre><code>class Foo {
public:
void do_this(Bar b);
void do_that(Bar b);
/* ... */
};
</code></pre>
<p>Bar has a number of constructors that create a new object from many common types such as int, std::string, float, etc:</p>
<pre><code>class Bar {
public:
Bar(int i);
Bar(float f);
Bar(std::string s);
/* ... */
};
</code></pre>
<p>I wrapped this with Boost::Python and I'm now able to call my Foo methods using Python literals directly, as they get implicitly converted to Bar objects.</p>
<pre><code>f = Foo()
f.do_this(5)
f.do_that("hello")
</code></pre>
<p>Now, I would like to be able to use also other Python types, such as tuples, like this:</p>
<pre><code>f.do_that((1,2,3))
</code></pre>
<p>But I don't want to touch the original Bar definition, and I don't want to pollute my C++ library with Boost::Python stuff. I want to write a wrapper function in my binding code, but I just can't understand if this is possible and what is the correct way for doing this.</p>
<p>In other words: can I register a factory function to be used as an automatic conversion in Python?</p>
| 2 | 2009-08-10T09:55:13Z | 4,137,806 | <p>Subclass Bar near the wrapper code and give your subclass a ctor that takes a bp::object (or a more specific python type)</p>
<pre><code>struct Bar_wrapper:Bar,bp::wrapper<Bar>
{
Bar_wrapper(bp::object arg)
{
//code to build a Bar_wrapper Here
}
};
</code></pre>
<p>Then export a Bar_wrapper to python instead of a Bar, and call it a Bar as the python name:</p>
<pre><code>class<Bar_wrapper>("Bar")
...
.def(init<bp::object>())
...
</code></pre>
| 2 | 2010-11-09T19:46:57Z | [
"c++",
"python",
"boost-python"
] |
How to add a custom implicit conversion to a C++ type in Boost::Python? | 1,254,013 | <p>In my C++ code I have a class Foo with many methods taking a Bar type variable as an argument:</p>
<pre><code>class Foo {
public:
void do_this(Bar b);
void do_that(Bar b);
/* ... */
};
</code></pre>
<p>Bar has a number of constructors that create a new object from many common types such as int, std::string, float, etc:</p>
<pre><code>class Bar {
public:
Bar(int i);
Bar(float f);
Bar(std::string s);
/* ... */
};
</code></pre>
<p>I wrapped this with Boost::Python and I'm now able to call my Foo methods using Python literals directly, as they get implicitly converted to Bar objects.</p>
<pre><code>f = Foo()
f.do_this(5)
f.do_that("hello")
</code></pre>
<p>Now, I would like to be able to use also other Python types, such as tuples, like this:</p>
<pre><code>f.do_that((1,2,3))
</code></pre>
<p>But I don't want to touch the original Bar definition, and I don't want to pollute my C++ library with Boost::Python stuff. I want to write a wrapper function in my binding code, but I just can't understand if this is possible and what is the correct way for doing this.</p>
<p>In other words: can I register a factory function to be used as an automatic conversion in Python?</p>
| 2 | 2009-08-10T09:55:13Z | 7,401,595 | <p>Add a new constructor <code>template <typename T> Bar(T)</code>
in your header and implement as <code>template <>
Bar::Bar(Tupple) {}</code></p>
| 1 | 2011-09-13T12:07:38Z | [
"c++",
"python",
"boost-python"
] |
How to add a custom implicit conversion to a C++ type in Boost::Python? | 1,254,013 | <p>In my C++ code I have a class Foo with many methods taking a Bar type variable as an argument:</p>
<pre><code>class Foo {
public:
void do_this(Bar b);
void do_that(Bar b);
/* ... */
};
</code></pre>
<p>Bar has a number of constructors that create a new object from many common types such as int, std::string, float, etc:</p>
<pre><code>class Bar {
public:
Bar(int i);
Bar(float f);
Bar(std::string s);
/* ... */
};
</code></pre>
<p>I wrapped this with Boost::Python and I'm now able to call my Foo methods using Python literals directly, as they get implicitly converted to Bar objects.</p>
<pre><code>f = Foo()
f.do_this(5)
f.do_that("hello")
</code></pre>
<p>Now, I would like to be able to use also other Python types, such as tuples, like this:</p>
<pre><code>f.do_that((1,2,3))
</code></pre>
<p>But I don't want to touch the original Bar definition, and I don't want to pollute my C++ library with Boost::Python stuff. I want to write a wrapper function in my binding code, but I just can't understand if this is possible and what is the correct way for doing this.</p>
<p>In other words: can I register a factory function to be used as an automatic conversion in Python?</p>
| 2 | 2009-08-10T09:55:13Z | 7,402,828 | <p>You can make a static factory method and then expose it as one of the Python constructors for the class. Just make a converter that will take any Python object and you're free to do whatever you please.</p>
<pre><code>using namespace boost::python;
Bar* CreateBar(object obj)
{
    // Do your thing here; make_constructor expects a pointer
    return new Bar(/* ... */);
}
// ..................
class_<Bar>("Bar")
// .....................
.def("__init__", make_constructor(&CreateBar))
//.............
;
</code></pre>
| 0 | 2011-09-13T13:38:43Z | [
"c++",
"python",
"boost-python"
] |
How to add a custom implicit conversion to a C++ type in Boost::Python? | 1,254,013 | <p>In my C++ code I have a class Foo with many methods taking a Bar type variable as an argument:</p>
<pre><code>class Foo {
public:
void do_this(Bar b);
void do_that(Bar b);
/* ... */
};
</code></pre>
<p>Bar has a number of constructors that create a new object from many common types such as int, std::string, float, etc:</p>
<pre><code>class Bar {
public:
Bar(int i);
Bar(float f);
Bar(std::string s);
/* ... */
};
</code></pre>
<p>I wrapped this with Boost::Python and I'm now able to call my Foo methods using Python literals directly, as they get implicitly converted to Bar objects.</p>
<pre><code>f = Foo()
f.do_this(5)
f.do_that("hello")
</code></pre>
<p>Now, I would like to be able to use also other Python types, such as tuples, like this:</p>
<pre><code>f.do_that((1,2,3))
</code></pre>
<p>But I don't want to touch the original Bar definition, and I don't want to pollute my C++ library with Boost::Python stuff. I want to write a wrapper function in my binding code, but I just can't understand if this is possible and what is the correct way for doing this.</p>
<p>In other words: can I register a factory function to be used as an automatic conversion in Python?</p>
| 2 | 2009-08-10T09:55:13Z | 7,403,293 | <p>You can register a from-python converter which will construct a <code>Bar</code> instance from an arbitrary object. See <a href="http://misspent.wordpress.com/2009/09/27/how-to-write-boost-python-converters/" rel="nofollow">here</a> and an example of my own (converts either a <code>(Vector3,Quaternion)</code> tuple or a 7*double-tuple to a 3d transformation <code>Se3</code>) <a href="http://bazaar.launchpad.net/~eudoxos/+junk/tr2/view/head:/py/wrapper/customConverters.cpp#L76" rel="nofollow">here</a>.</p>
<p>Note that the logic has two steps, first you determine whether the object is convertible (<code>convertible</code>; in your case, you check that it is a sequence, and has the right number of elements), then the <code>construct</code> method is called, which actually returns the instance, allocated with the pointer passed as parameter.</p>
<p>The converter must then be registered in <code>BOOST_PYTHON_MODULE</code>. Since the converter registry is global, once registered, it will be subsequently used automatically everywhere. All function argument of type <code>Bar</code> or <code>const Bar&</code> should be handled just fine (nor sure about <code>Bar&</code> from the top of my head).</p>
| 0 | 2011-09-13T14:12:13Z | [
"c++",
"python",
"boost-python"
] |
Has anyone succeeded in using Google App Engine with Python version 2.6? | 1,254,028 | <p>Since Python 2.6 is backward compatible with 2.5.2, has anyone succeeded in using it with Google App Engine (which officially supports 2.5.2)?</p>
<p>I know I should try it myself, but I am a Python and web-apps newbie, and for me installation and configuration is the hardest part of getting started with something new in this domain. (I am trying it myself in the meanwhile.)</p>
<p>Thanks</p>
| 9 | 2009-08-10T09:58:39Z | 1,254,047 | <p>I suppose logging module crashes if you try to start the dev environment. See <a href="http://code.google.com/p/googleappengine/issues/detail?id=1159#c7" rel="nofollow">the issue and a workaround</a>.</p>
<p>After doing that change my code worked in 2.6 without any problems. I suggest using 2.5.x though so there are no other incompatibilities introduced in your code which would make your app fail on the live server.</p>
| 11 | 2009-08-10T10:04:45Z | [
"python",
"google-app-engine"
] |
Has anyone succeeded in using Google App Engine with Python version 2.6? | 1,254,028 | <p>Since Python 2.6 is backward compatible with 2.5.2, has anyone succeeded in using it with Google App Engine (which officially supports 2.5.2)?</p>
<p>I know I should try it myself, but I am a Python and web-apps newbie, and for me installation and configuration is the hardest part of getting started with something new in this domain. (I am trying it myself in the meanwhile.)</p>
<p>Thanks</p>
| 9 | 2009-08-10T09:58:39Z | 1,255,893 | <p>There are a few issues with using Python 2.6 with the SDK, mostly related to the SDK's sandboxing, which is designed to imitate the sandbox limitations in production. Note, of course, that even if you get Python 2.6 running with the SDK, your code will still have to run under 2.5 in production.</p>
| 6 | 2009-08-10T16:32:18Z | [
"python",
"google-app-engine"
] |
string quoting issues in doctests | 1,254,187 | <p>When I run doctests on different Python versions (2.5 vs 2.6) and different platforms (FreeBSD vs Mac OS), strings get quoted differently:</p>
<pre><code>Failed example:
decode('{"created_by":"test","guid":123,"num":5.00}')
Expected:
{'guid': 123, 'num': Decimal("5.00"), 'created_by': 'test'}
Got:
{'guid': 123, 'num': Decimal('5.00'), 'created_by': 'test'}
</code></pre>
<p>So on one box repr(decimal.Decimal('5.00')) results in 'Decimal("5.00")' and on the other in "Decimal('5.00')". Is there any way to get around the issue without creating more complicated test logic?</p>
| 2 | 2009-08-10T10:38:03Z | 1,254,230 | <p>This is actually because the <code>decimal</code> module's source code has changed: In python 2.4 and python2.5 the <code>decimal.Decimal.__repr__</code> function contains:</p>
<pre><code>return 'Decimal("%s")' % str(self)
</code></pre>
<p>whereas in python2.6 it contains:</p>
<pre><code>return "Decimal('%s')" % str(self)
</code></pre>
<p>So in this case the best thing to do is just to print out <code>str()</code> of the result and check the type separately if necessary...</p>
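<p>A small illustration of comparing via <code>str()</code>, which is stable across these Python versions even though <code>repr()</code> is not:</p>

```python
import decimal

d = decimal.Decimal('5.00')

# repr(d) differs between Python versions, but str(d) does not,
# so doctests should print the str() form instead of relying on repr():
assert str(d) == '5.00'
```

<p>In the failing doctest, printing <code>str(result['num'])</code> instead of echoing the whole dict would sidestep the quoting difference.</p>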
| 4 | 2009-08-10T10:49:29Z | [
"python",
"testing",
"doctest"
] |
string quoting issues in doctests | 1,254,187 | <p>When I run doctests on different Python versions (2.5 vs 2.6) and different platforms (FreeBSD vs Mac OS), strings get quoted differently:</p>
<pre><code>Failed example:
decode('{"created_by":"test","guid":123,"num":5.00}')
Expected:
{'guid': 123, 'num': Decimal("5.00"), 'created_by': 'test'}
Got:
{'guid': 123, 'num': Decimal('5.00'), 'created_by': 'test'}
</code></pre>
<p>So on one box repr(decimal.Decimal('5.00')) results in 'Decimal("5.00")' and on the other in "Decimal('5.00')". Is there any way to get around the issue without creating more complicated test logic?</p>
| 2 | 2009-08-10T10:38:03Z | 1,254,452 | <p>Following the hints by <a href="http://stackoverflow.com/users/120398/david-fraser">David Fraser</a> I found <a href="http://mail.python.org/pipermail/python-dev/2008-July/081420.html" rel="nofollow">this suggestion</a> by Raymond Hettinger on the Python mailing list.</p>
<p>I now use something like this:</p>
<pre><code>import decimal
import sys
if sys.version_info[:2] <= (2, 5):
# ugly monkeypatch to make doctests work. For the reasons see
# See http://mail.python.org/pipermail/python-dev/2008-July/081420.html
# It can go away once all our boxes run python > 2.5
decimal.Decimal.__repr__ = lambda s: "Decimal('%s')" % str(s)
</code></pre>
| 0 | 2009-08-10T11:51:13Z | [
"python",
"testing",
"doctest"
] |
PHP - Print all statements that are executed in a PHP command line script? | 1,254,215 | <p>In python, one can trace all the statements that are executed by a command line script using the <a href="http://docs.python.org/library/trace.html" rel="nofollow">trace</a> module. In bash you can do the same with <code>set -x</code>. We have a PHP script that we're running from the command line, like a normal bash / python / perl / etc script. Nothing web-y is going on.</p>
<p>Is there any way to get a trace of all the lines of code that are being executed?</p>
| 1 | 2009-08-10T10:46:04Z | 1,254,247 | <p>Not in pure PHP, no -- as far as I know.</p>
<p>But you can use a debugger ; a nice way to do that is with </p>
<ul>
<li>The extension <a href="http://xdebug.org/" rel="nofollow">Xdebug</a>, which can be used as a <a href="http://xdebug.org/docs/remote" rel="nofollow">debugger</a></li>
<li>and some graphical IDE that integrates some debugging tools, like <a href="http://eclipse.org/pdt/" rel="nofollow">Eclipse PDT</a></li>
</ul>
<p>Both of those are free, btw.</p>
<p>With those, you can do step by step, set up breakpoints, watch the content of variables, view stack traces, ... And it works both for Web and CLI scripts ;-)</p>
<p>Of course, it means having Eclipse running on the machine you are executing your script... But if you are executing it on your development machine, you probably have a GUI and all that, so it should be fine...
<br><em>(I know that, for web applications, you can have Eclipse running on a different machine than the one with the PHP webserver -- don't know if it's possible in CLI, though)</em></p>
<p><br>
As a side note: maybe you can integrate Xdebug with a CLI-based debugger; see the page I linked to earlier for a list of supported tools.</p>
| 1 | 2009-08-10T10:53:13Z | [
"php",
"python",
"debugging",
"command-line"
] |
PHP - Print all statements that are executed in a PHP command line script? | 1,254,215 | <p>In python, one can trace all the statements that are executed by a command line script using the <a href="http://docs.python.org/library/trace.html" rel="nofollow">trace</a> module. In bash you can do the same with <code>set -x</code>. We have a PHP script that we're running from the command line, like a normal bash / python / perl / etc script. Nothing web-y is going on.</p>
<p>Is there any way to get a trace of all the lines of code that are being executed?</p>
| 1 | 2009-08-10T10:46:04Z | 1,254,625 | <p>I'm kinda blind here but I guess one way you could do it is to write all the relevant code inside custom functions and call <a href="http://php.net/manual/en/function.debug-backtrace.php" rel="nofollow">debug_backtrace()</a>. <a href="http://php.net/manual/en/function.debug-print-backtrace.php" rel="nofollow">debug_print_backtrace</a> may also be useful.</p>
<p>I hope it helps.</p>
| -1 | 2009-08-10T12:37:07Z | [
"php",
"python",
"debugging",
"command-line"
] |
PHP - Print all statements that are executed in a PHP command line script? | 1,254,215 | <p>In python, one can trace all the statements that are executed by a command line script using the <a href="http://docs.python.org/library/trace.html" rel="nofollow">trace</a> module. In bash you can do the same with <code>set -x</code>. We have a PHP script that we're running from the command line, like a normal bash / python / perl / etc script. Nothing web-y is going on.</p>
<p>Is there any way to get a trace of all the lines of code that are being executed?</p>
| 1 | 2009-08-10T10:46:04Z | 1,256,690 | <p>There is a PECL extension, <a href="http://pecl.php.net/package/apd" rel="nofollow">apd</a>, that will generate a trace file. </p>
| 2 | 2009-08-10T19:17:11Z | [
"php",
"python",
"debugging",
"command-line"
] |
Multipart form post to google app engine not working | 1,254,270 | <p>I am trying to post a multipart form using httplib. The URL is hosted on Google App Engine, and the POST is rejected with "Method Not Allowed", though the same post using urllib2 works. A full working example is attached.</p>
<p>My question is: what is the difference between the two, and why does one work but not the other?</p>
<ol>
<li><p>is there a problem in my mulipart form post code?</p></li>
<li><p>or the problem is with google app engine?</p></li>
<li><p>or something else ?</p></li>
</ol>
<p><hr /></p>
<pre><code>import httplib
import urllib2, urllib
# multipart form post using httplib fails, saying
# 405, 'Method Not Allowed'
url = "http://mockpublish.appspot.com/publish/api/revision_screen_create"
_, host, selector, _, _ = urllib2.urlparse.urlsplit(url)
print host, selector
h = httplib.HTTP(host)
h.putrequest('POST', selector)
BOUNDARY = '----------THE_FORM_BOUNDARY'
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
h.putheader('content-type', content_type)
h.putheader('User-Agent', 'Python-urllib/2.5,gzip(gfe)')
content = ""
L = []
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="test"')
L.append('')
L.append("xxx")
L.append('--' + BOUNDARY + '--')
L.append('')
content = '\r\n'.join(L)
h.putheader('content-length', str(len(content)))
h.endheaders()
h.send(content)
print h.getreply()
# post using urllib2 works
data = urllib.urlencode({'test':'xxx'})
request = urllib2.Request(url)
f = urllib2.urlopen(request, data)
output = f.read()
print output
</code></pre>
<p>Edit: After changing putrequest to request (on Nick Johnson's suggestion), it works</p>
<pre><code>url = "http://mockpublish.appspot.com/publish/api/revision_screen_create"
_, host, selector, _, _ = urllib2.urlparse.urlsplit(url)
h = httplib.HTTPConnection(host)
BOUNDARY = '----------THE_FORM_BOUNDARY'
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
content = ""
L = []
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="test"')
L.append('')
L.append("xxx")
L.append('--' + BOUNDARY + '--')
L.append('')
content = '\r\n'.join(L)
h.request('POST', selector, content,{'content-type':content_type})
res = h.getresponse()
print res.status, res.reason, res.read()
</code></pre>
<p>So now the question remains: what is the difference between the two approaches, and can the first be made to work?</p>
| 0 | 2009-08-10T10:59:02Z | 1,387,513 | <p><a href="http://stackoverflow.com/users/12030/nick-johnson">Nick Johnson's</a> answer </p>
<p>Have you tried sending the request with httplib using .request() instead of .putrequest() etc, supplying the headers as a dict?</p>
<p>It works!</p>
| 0 | 2009-09-07T04:09:16Z | [
"python",
"google-app-engine",
"forms",
"html-post"
] |
Avoid program exit on I/O error | 1,254,292 | <p>I have a Python script using shutil.copy2 extensively. Since I use it to copy files over the network, I frequently get I/O errors, which abort my program's execution:</p>
<pre><code>Traceback (most recent call last):
File "run_model.py", line 46, in <module>
main()
File "run_model.py", line 41, in main
tracerconfigfile=OPT.tracerconfig)
File "ModelRun.py", line 517, in run
self.copy_data()
File "ModelRun.py", line 604, in copy_ecmwf_data
shutil.copy2(remotefilename, localfilename)
File "/usr/lib64/python2.6/shutil.py", line 99, in copy2
copyfile(src, dst)
File "/usr/lib64/python2.6/shutil.py", line 54, in copyfile
copyfileobj(fsrc, fdst)
File "/usr/lib64/python2.6/shutil.py", line 27, in copyfileobj
buf = fsrc.read(length)
IOError: [Errno 5] Input/output error
</code></pre>
<p>How can I keep my program from aborting and have it retry the copy instead?</p>
<p>The code I'm using already checks that the file is actually copied completely by checking the filesize:</p>
<pre><code>def check_file(file, size=0):
if not os.path.exists(file):
return False
if (size != 0 and os.path.getsize(file) != size):
return False
return True
while (check_file(rempdg,self._ndays*130160640) is False):
shutil.copy2(locpdg, rempdg)
</code></pre>
| 3 | 2009-08-10T11:05:10Z | 1,254,309 | <p>You can use:</p>
<pre><code>try:
...
except IOError as err:
...
</code></pre>
<p>to catch the errors and handle them.</p>
<p>Have a look at <a href="http://docs.python.org/tutorial/errors.html" rel="nofollow">this</a>.</p>
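<p>If the copy itself is what fails intermittently, the try/except can be wrapped in a small retry loop. A minimal sketch (the attempt count and delay are arbitrary choices, not anything <code>shutil</code> provides):</p>

```python
import shutil
import time

def copy_with_retry(src, dst, attempts=3, delay=0.5):
    """Try shutil.copy2 up to `attempts` times before giving up."""
    for attempt in range(attempts):
        try:
            shutil.copy2(src, dst)
            return
        except IOError:
            if attempt == attempts - 1:
                raise  # out of retries: re-raise the last IOError
            time.sleep(delay)  # brief pause before trying again
```

<p>On a final failure the original <code>IOError</code> is re-raised, so callers can still inspect errno 5 and decide what to do.</p>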
| 6 | 2009-08-10T11:08:13Z | [
"python",
"shutil"
] |
Avoid program exit on I/O error | 1,254,292 | <p>I have a Python script using shutil.copy2 extensively. Since I use it to copy files over the network, I frequently get I/O errors, which abort my program's execution:</p>
<pre><code>Traceback (most recent call last):
File "run_model.py", line 46, in <module>
main()
File "run_model.py", line 41, in main
tracerconfigfile=OPT.tracerconfig)
File "ModelRun.py", line 517, in run
self.copy_data()
File "ModelRun.py", line 604, in copy_ecmwf_data
shutil.copy2(remotefilename, localfilename)
File "/usr/lib64/python2.6/shutil.py", line 99, in copy2
copyfile(src, dst)
File "/usr/lib64/python2.6/shutil.py", line 54, in copyfile
copyfileobj(fsrc, fdst)
File "/usr/lib64/python2.6/shutil.py", line 27, in copyfileobj
buf = fsrc.read(length)
IOError: [Errno 5] Input/output error
</code></pre>
<p>How can I keep my program from aborting and have it retry the copy instead?</p>
<p>The code I'm using already checks that the file is actually copied completely by checking the filesize:</p>
<pre><code>def check_file(file, size=0):
if not os.path.exists(file):
return False
if (size != 0 and os.path.getsize(file) != size):
return False
return True
while (check_file(rempdg,self._ndays*130160640) is False):
shutil.copy2(locpdg, rempdg)
</code></pre>
| 3 | 2009-08-10T11:05:10Z | 1,254,324 | <p>Which block is giving the error? Just wrap a <a href="http://docs.python.org/tutorial/errors.html">try/except</a> around it:</p>
<pre><code>def check_file(file, size=0):
try:
if not os.path.exists(file):
return False
if (size != 0 and os.path.getsize(file) != size):
return False
return True
except IOError:
return False # or True, whatever your default is
while (check_file(rempdg,self._ndays*130160640) is False):
try:
shutil.copy2(locpdg, rempdg)
except IOError:
pass # ignore the IOError and keep going
</code></pre>
| 7 | 2009-08-10T11:10:03Z | [
"python",
"shutil"
] |
Reimport a module in python while interactive | 1,254,370 | <p>I know it can be done, but I never remember how.</p>
<p>How can you reimport a module in Python?
The scenario is as follows: I import a module interactively and tinker with it, but then I hit an error. I fix the error in the .py file and then I want to reimport the fixed module without quitting Python. How can I do it?</p>
| 148 | 2009-08-10T11:24:17Z | 1,254,379 | <p>This should work:</p>
<pre><code>reload(my.module)
</code></pre>
<p>From the <a href="http://docs.python.org/library/functions.html#reload">Python docs</a></p>
<blockquote>
<p>Reload a previously imported module. The argument must be a module object, so it must have been successfully imported before. This is useful if you have edited the module source file using an external editor and want to try out the new version without leaving the Python interpreter.</p>
</blockquote>
| 168 | 2009-08-10T11:29:12Z | [
"python"
] |
Reimport a module in python while interactive | 1,254,370 | <p>I know it can be done, but I never remember how.</p>
<p>How can you reimport a module in Python?
The scenario is as follows: I import a module interactively and tinker with it, but then I hit an error. I fix the error in the .py file and then I want to reimport the fixed module without quitting Python. How can I do it?</p>
| 148 | 2009-08-10T11:24:17Z | 14,390,676 | <p>In python 3, <code>reload</code> is no longer a built in function.</p>
<p>If you are using python 3.4+ you should use <a href="https://docs.python.org/3/library/importlib.html#importlib.reload"><code>reload</code></a> from the <a href="https://docs.python.org/3/library/importlib.html"><code>importlib</code></a> library instead.</p>
<p>If you are using python 3.2 or 3.3 you should:</p>
<pre><code>import imp
imp.reload(module)
</code></pre>
<p>instead. See <a href="http://docs.python.org/3.0/library/imp.html#imp.reload">http://docs.python.org/3.0/library/imp.html#imp.reload</a></p>
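<p>A quick way to see <code>importlib.reload</code> in action, using a throwaway module written to a temporary directory (the module name here is invented for the demo):</p>

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # skip .pyc files so reload always re-reads the source

# Write a throwaway module to disk and import it
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "reload_demo_mod.py"), "w") as f:
    f.write("ANSWER = 1\n")
sys.path.insert(0, moddir)
import reload_demo_mod
print(reload_demo_mod.ANSWER)  # 1

# "Edit" the file, then reload without restarting the interpreter
with open(os.path.join(moddir, "reload_demo_mod.py"), "w") as f:
    f.write("ANSWER = 2\n")
importlib.invalidate_caches()
importlib.reload(reload_demo_mod)
print(reload_demo_mod.ANSWER)  # 2
```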
| 67 | 2013-01-18T00:35:29Z | [
"python"
] |
Reimport a module in python while interactive | 1,254,370 | <p>I know it can be done, but I never remember how.</p>
<p>How can you reimport a module in Python?
The scenario is as follows: I import a module interactively and tinker with it, but then I hit an error. I fix the error in the .py file and then I want to reimport the fixed module without quitting Python. How can I do it?</p>
| 148 | 2009-08-10T11:24:17Z | 23,901,170 | <p>Actually, in Python 3 the module <code>imp</code> is marked as DEPRECATED. Well, at least that's true for 3.4.</p>
<p>Instead the <code>reload</code> function from the <code>importlib</code> module should be used:</p>
<p><a href="https://docs.python.org/3/library/importlib.html#importlib.reload">https://docs.python.org/3/library/importlib.html#importlib.reload</a></p>
<p>But be aware that this library had some API-changes with the last two minor versions.</p>
| 28 | 2014-05-28T00:39:22Z | [
"python"
] |
Python API to fetch PGP public key from key server? | 1,254,425 | <p>Is there any Python API which can fetch a PGP public key from the public key server?</p>
| 3 | 2009-08-10T11:42:24Z | 1,254,526 | <p>You can use HTTP (urllib2 and Beautiful Soup would be my choice) if you're querying the MIT PGP keyserver.</p>
<p><a href="http://pgp.mit.edu/extracthelp.html" rel="nofollow">http://pgp.mit.edu/extracthelp.html</a></p>
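<p>If you go the raw-HTTP route, the MIT server (like most public keyservers) speaks the standard HKP interface, so fetching a key is just a GET on <code>/pks/lookup</code>. A sketch of building that URL (the fetch itself is left commented out so nothing here depends on the network; <code>options=mr</code> asks for machine-readable, ASCII-armored output):</p>

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode       # Python 2

def keyserver_lookup_url(search, host="pgp.mit.edu"):
    """Build an HKP lookup URL for a key ID or search string."""
    params = urlencode({"op": "get", "search": search, "options": "mr"})
    return "http://%s/pks/lookup?%s" % (host, params)

# The actual fetch would then be, e.g. (Python 2):
#   import urllib2
#   armored_key = urllib2.urlopen(keyserver_lookup_url("0x12345678")).read()
```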
| 3 | 2009-08-10T12:11:27Z | [
"python",
"pgp"
] |
Fastest way to convert a dict's keys & values from `unicode` to `str`? | 1,254,454 | <p>I'm receiving a dict from one "layer" of code upon which some calculations/modifications are performed before passing it onto another "layer". The original dict's keys & "string" values are <code>unicode</code>, but the layer they're being passed onto only accepts <code>str</code>.</p>
<p>This is going to be called often, so I'd like to know what would be the fastest way to convert something like:</p>
<pre><code>{ u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }
</code></pre>
<p>...to:</p>
<pre><code>{ 'spam': 'eggs', 'foo': True, 'bar': { 'baz': 97 } }
</code></pre>
<p>...bearing in mind the non-"string" values need to stay as their original type.</p>
<p>Any thoughts?</p>
| 45 | 2009-08-10T11:51:17Z | 1,254,488 | <pre><code>def to_str(key, value):
if isinstance(key, unicode):
key = str(key)
if isinstance(value, unicode):
value = str(value)
return key, value
</code></pre>
<p>Pass each key and value through it, and add recursion to your code to handle nested dictionaries.</p>
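<p>A sketch of what that recursion could look like. To keep it generic, the unicode check is factored out into a <code>convert</code> callable (in Python 2 that would be the <code>isinstance(..., unicode)</code> / <code>str()</code> logic above):</p>

```python
def convert_dict(d, convert):
    """Recursively apply `convert` to every key and value of a dict."""
    result = {}
    for key, value in d.items():
        if isinstance(value, dict):
            value = convert_dict(value, convert)  # recurse into nested dicts
        else:
            value = convert(value)
        result[convert(key)] = value
    return result
```

<p>Non-dict container values (lists, tuples) would still need their own branch if they can appear in the data.</p>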
| 3 | 2009-08-10T11:59:49Z | [
"python",
"casting",
"types"
] |
Fastest way to convert a dict's keys & values from `unicode` to `str`? | 1,254,454 | <p>I'm receiving a dict from one "layer" of code upon which some calculations/modifications are performed before passing it onto another "layer". The original dict's keys & "string" values are <code>unicode</code>, but the layer they're being passed onto only accepts <code>str</code>.</p>
<p>This is going to be called often, so I'd like to know what would be the fastest way to convert something like:</p>
<pre><code>{ u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }
</code></pre>
<p>...to:</p>
<pre><code>{ 'spam': 'eggs', 'foo': True, 'bar': { 'baz': 97 } }
</code></pre>
<p>...bearing in mind the non-"string" values need to stay as their original type.</p>
<p>Any thoughts?</p>
| 45 | 2009-08-10T11:51:17Z | 1,254,499 | <pre><code>DATA = { u'spam': u'eggs', u'foo': frozenset([u'Gah!']), u'bar': { u'baz': 97 },
u'list': [u'list', (True, u'Maybe'), set([u'and', u'a', u'set', 1])]}
def convert(data):
if isinstance(data, basestring):
return str(data)
elif isinstance(data, collections.Mapping):
return dict(map(convert, data.iteritems()))
elif isinstance(data, collections.Iterable):
return type(data)(map(convert, data))
else:
return data
print DATA
print convert(DATA)
# Prints:
# {u'list': [u'list', (True, u'Maybe'), set([u'and', u'a', u'set', 1])], u'foo': frozenset([u'Gah!']), u'bar': {u'baz': 97}, u'spam': u'eggs'}
# {'bar': {'baz': 97}, 'foo': frozenset(['Gah!']), 'list': ['list', (True, 'Maybe'), set(['and', 'a', 'set', 1])], 'spam': 'eggs'}
</code></pre>
<p>Assumptions:</p>
<ul>
<li>You've imported the collections module and can make use of the abstract base classes it provides</li>
<li>You're happy to convert using the default encoding (use <code>data.encode('utf-8')</code> rather than <code>str(data)</code> if you need an explicit encoding).</li>
</ul>
<p>If you need to support other container types, hopefully it's obvious how to follow the pattern and add cases for them.</p>
| 102 | 2009-08-10T12:03:54Z | [
"python",
"casting",
"types"
] |
Fastest way to convert a dict's keys & values from `unicode` to `str`? | 1,254,454 | <p>I'm receiving a dict from one "layer" of code upon which some calculations/modifications are performed before passing it onto another "layer". The original dict's keys & "string" values are <code>unicode</code>, but the layer they're being passed onto only accepts <code>str</code>.</p>
<p>This is going to be called often, so I'd like to know what would be the fastest way to convert something like:</p>
<pre><code>{ u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }
</code></pre>
<p>...to:</p>
<pre><code>{ 'spam': 'eggs', 'foo': True, 'bar': { 'baz': 97 } }
</code></pre>
<p>...bearing in mind the non-"string" values need to stay as their original type.</p>
<p>Any thoughts?</p>
| 45 | 2009-08-10T11:51:17Z | 6,728,427 | <p>If you wanted to do this inline and didn't need recursive descent, this might work:</p>
<pre><code>DATA = { u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }
print DATA
# "{ u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }"
STRING_DATA = dict([(str(k), str(v) if isinstance(v, unicode) else v) for k, v in DATA.items()])
print STRING_DATA
# "{ 'spam': 'eggs', 'foo': True, 'bar': { u'baz': 97 } }"
</code></pre>
| 10 | 2011-07-18T03:39:39Z | [
"python",
"casting",
"types"
] |
Fastest way to convert a dict's keys & values from `unicode` to `str`? | 1,254,454 | <p>I'm receiving a dict from one "layer" of code upon which some calculations/modifications are performed before passing it onto another "layer". The original dict's keys & "string" values are <code>unicode</code>, but the layer they're being passed onto only accepts <code>str</code>.</p>
<p>This is going to be called often, so I'd like to know what would be the fastest way to convert something like:</p>
<pre><code>{ u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }
</code></pre>
<p>...to:</p>
<pre><code>{ 'spam': 'eggs', 'foo': True, 'bar': { 'baz': 97 } }
</code></pre>
<p>...bearing in mind the non-"string" values need to stay as their original type.</p>
<p>Any thoughts?</p>
| 45 | 2009-08-10T11:51:17Z | 7,027,514 | <p>I know I'm late on this one:</p>
<pre><code>def convert_keys_to_string(dictionary):
"""Recursively converts dictionary keys to strings."""
if not isinstance(dictionary, dict):
return dictionary
return dict((str(k), convert_keys_to_string(v))
for k, v in dictionary.items())
</code></pre>
| 13 | 2011-08-11T14:18:22Z | [
"python",
"casting",
"types"
] |
Fastest way to convert a dict's keys & values from `unicode` to `str`? | 1,254,454 | <p>I'm receiving a dict from one "layer" of code upon which some calculations/modifications are performed before passing it onto another "layer". The original dict's keys & "string" values are <code>unicode</code>, but the layer they're being passed onto only accepts <code>str</code>.</p>
<p>This is going to be called often, so I'd like to know what would be the fastest way to convert something like:</p>
<pre><code>{ u'spam': u'eggs', u'foo': True, u'bar': { u'baz': 97 } }
</code></pre>
<p>...to:</p>
<pre><code>{ 'spam': 'eggs', 'foo': True, 'bar': { 'baz': 97 } }
</code></pre>
<p>...bearing in mind the non-"string" values need to stay as their original type.</p>
<p>Any thoughts?</p>
| 45 | 2009-08-10T11:51:17Z | 38,476,472 | <p>for a non-nested dict (since the title does not mention that case, it might be interesting for other people)</p>
<pre><code>{str(k): str(v) for k, v in DATA.items()}
</code></pre>
| 0 | 2016-07-20T08:45:29Z | [
"python",
"casting",
"types"
] |
Can Python's optparse display the default value of an option? | 1,254,469 | <p>Is there a way to make Python's optparse print the default value of an option or flag when showing the help with --help?</p>
| 38 | 2009-08-10T11:55:02Z | 1,254,491 | <p>Try using the <code>%default</code> string placeholder:</p>
<pre><code># This example taken from http://docs.python.org/library/optparse.html#generating-help
parser.add_option("-m", "--mode",
default="intermediate",
help="interaction mode: novice, intermediate, "
"or expert [default: %default]")
</code></pre>
| 50 | 2009-08-10T12:01:33Z | [
"python",
"optparse"
] |
Can Python's optparse display the default value of an option? | 1,254,469 | <p>Is there a way to make Python's optparse print the default value of an option or flag when showing the help with --help?</p>
| 38 | 2009-08-10T11:55:02Z | 1,254,500 | <p>And if you need programmatic access to the default values, you can get to them via the <code>defaults</code> attribute of the parser (it's a dict).</p>
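<p>For example (the option here is made up):</p>

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-m", "--mode", default="intermediate",
                  help="interaction mode [default: %default]")

# `defaults` maps each option's dest to its default value
print(parser.defaults)  # {'mode': 'intermediate'}
```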
| 7 | 2009-08-10T12:04:04Z | [
"python",
"optparse"
] |
Can Python's optparse display the default value of an option? | 1,254,469 | <p>Is there a way to make Python's optparse print the default value of an option or flag when showing the help with --help?</p>
| 38 | 2009-08-10T11:55:02Z | 11,906,371 | <p>And if you want to add default values automatically to all options that you have specified, you can do the following:</p>
<pre><code>for option in parser.option_list:
if option.default != ("NO", "DEFAULT"):
option.help += (" " if option.help else "") + "[default: %default]"
</code></pre>
| 7 | 2012-08-10T17:16:52Z | [
"python",
"optparse"
] |
Can Python's optparse display the default value of an option? | 1,254,469 | <p>Is there a way to make Python's optparse print the default value of an option or flag when showing the help with --help?</p>
| 38 | 2009-08-10T11:55:02Z | 33,969,137 | <p>The comments to your question already indicate there's another way to parse arguments called <a href="https://docs.python.org/3.5/library/argparse.html" rel="nofollow">argparse</a>. It's been introduced in Python 3.2. It actually deprecates <code>optparse</code> but is used similarly.</p>
<p><code>argparse</code> comes with different formatting classes, and for instance <code>argparse.ArgumentDefaultsHelpFormatter</code> will also print the default values without you having to manipulate the help string manually.</p>
<blockquote>
<p>ArgumentParser objects allow the help formatting to be customized by
specifying an alternate formatting class. Currently, there are four
such classes:</p>
<p>class argparse.RawDescriptionHelpFormatter</p>
<p>class argparse.RawTextHelpFormatter</p>
<p>class argparse.ArgumentDefaultsHelpFormatter</p>
<p>class argparse.MetavarTypeHelpFormatter</p>
</blockquote>
<p>An example from the python docs:</p>
<pre><code>>>> parser = argparse.ArgumentParser(
... prog='PROG',
... formatter_class=argparse.ArgumentDefaultsHelpFormatter)
>>> parser.add_argument('--foo', type=int, default=42, help='FOO!')
>>> parser.add_argument('bar', nargs='*', default=[1, 2, 3], help='BAR!')
>>> parser.print_help()
usage: PROG [-h] [--foo FOO] [bar [bar ...]]
positional arguments:
bar BAR! (default: [1, 2, 3])
optional arguments:
-h, --help show this help message and exit
--foo FOO FOO! (default: 42)
</code></pre>
<p>see <a href="https://docs.python.org/3.5/library/argparse.html#formatter-class" rel="nofollow">argparse formatting classes</a></p>
| 0 | 2015-11-28T08:07:00Z | [
"python",
"optparse"
] |
Why don't Django and CherryPy support HTTP verb-based dispatch natively? | 1,254,629 | <p>POSTing to a URL is not the same as GETting, DELETEing, or PUTting it. These actions are fundamentally different. However, Django seems to ignore them in its dispatch mechanism. Basically, one is forced to either ignore HTTP verbs completely or do this on every view:</p>
<pre><code>def my_view(request, arg1, arg2):
if request.method == 'GET':
return get_view(request, arg1, arg2)
if request.method == 'POST':
return post_view(request, arg1, arg2)
return http.HttpResponseNotAllowed(['GET', 'POST'])
</code></pre>
<p>The few solutions I have found for this in the web (<a href="http://www.djangosnippets.org/snippets/436/">this snippet</a> for verb-based dispatch, or <a href="http://code.djangoproject.com/browser/django/trunk/django/views/decorators/http.py">this decorator</a> for verb requirement) are not very elegant as they are clearly just workarounds.</p>
<p>The situation with CherryPy seems to be the same. The only frameworks I know of that get this right are web.py and Google App Engine's.</p>
<p>I see this as a serious design flaw for a web framework. Does anyone agree? Or is it a deliberate decision based on reasons/requirements I'm not aware of?</p>
| 10 | 2009-08-10T12:39:18Z | 1,254,672 | <p>I believe the decision for Django was made because usually just <code>GET</code> and <code>POST</code> are enough, and that keeps the framework simpler for its requirements. It is very convenient to just "not care" about which verb was used.</p>
<p>However, there are plenty of other frameworks that can dispatch based on verb. I like <a href="http://dev.pocoo.org/projects/werkzeug/" rel="nofollow">werkzeug</a>; it makes it easy to define your own dispatch code, so you can dispatch based on whatever you want, to whatever you want.</p>
| 2 | 2009-08-10T12:46:58Z | [
"python",
"django",
"rest",
"cherrypy",
"web-frameworks"
] |
Why don't Django and CherryPy support HTTP verb-based dispatch natively? | 1,254,629 | <p>POSTing to a URL is not the same as GETting, DELETEing, or PUTting it. These actions are fundamentally different. However, Django seems to ignore them in its dispatch mechanism. Basically, one is forced to either ignore HTTP verbs completely or do this on every view:</p>
<pre><code>def my_view(request, arg1, arg2):
if request.method == 'GET':
return get_view(request, arg1, arg2)
if request.method == 'POST':
return post_view(request, arg1, arg2)
return http.HttpResponseNotAllowed(['GET', 'POST'])
</code></pre>
<p>The few solutions I have found for this in the web (<a href="http://www.djangosnippets.org/snippets/436/">this snippet</a> for verb-based dispatch, or <a href="http://code.djangoproject.com/browser/django/trunk/django/views/decorators/http.py">this decorator</a> for verb requirement) are not very elegant as they are clearly just workarounds.</p>
<p>The situation with CherryPy seems to be the same. The only frameworks I know of that get this right are web.py and Google App Engine's.</p>
<p>I see this as a serious design flaw for a web framework. Does anyone agree? Or is it a deliberate decision based on reasons/requirements I'm not aware of?</p>
| 10 | 2009-08-10T12:39:18Z | 1,254,735 | <p>Because this is not hard to DIY. Just have a dictionary of accepted verbs to functions in each class.</p>
<pre><code>def dispatcher(someObject, request):
try:
return someObject.acceptedVerbs[request.method]()
    except KeyError:
return http.HttpResponseNotAllowed(someObject.acceptedVerbs.keys())
</code></pre>
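<p>A self-contained illustration of that idea (the request and view classes here are bare stand-ins, not Django's):</p>

```python
class FakeRequest(object):
    def __init__(self, method):
        self.method = method

class ArticleView(object):
    def get(self):
        return "rendered article"

    def post(self):
        return "article updated"

    # the per-class verb table the answer describes
    accepted_verbs = {"GET": get, "POST": post}

def dispatch(view, request):
    try:
        handler = view.accepted_verbs[request.method]
    except KeyError:
        return "405 Method Not Allowed: " + ", ".join(sorted(view.accepted_verbs))
    return handler(view)

print(dispatch(ArticleView(), FakeRequest("GET")))     # rendered article
print(dispatch(ArticleView(), FakeRequest("DELETE")))  # 405 Method Not Allowed: GET, POST
```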
| 1 | 2009-08-10T12:58:12Z | [
"python",
"django",
"rest",
"cherrypy",
"web-frameworks"
] |
Why don't Django and CherryPy support HTTP verb-based dispatch natively? | 1,254,629 | <p>POSTing to a URL is not the same as GETting, DELETEing, or PUTting it. These actions are fundamentally different. However, Django seems to ignore them in its dispatch mechanism. Basically, one is forced to either ignore HTTP verbs completely or do this on every view:</p>
<pre><code>def my_view(request, arg1, arg2):
if request.method == 'GET':
return get_view(request, arg1, arg2)
if request.method == 'POST':
return post_view(request, arg1, arg2)
return http.HttpResponseNotAllowed(['GET', 'POST'])
</code></pre>
<p>The few solutions I have found for this in the web (<a href="http://www.djangosnippets.org/snippets/436/">this snippet</a> for verb-based dispatch, or <a href="http://code.djangoproject.com/browser/django/trunk/django/views/decorators/http.py">this decorator</a> for verb requirement) are not very elegant as they are clearly just workarounds.</p>
<p>The situation with CherryPy seems to be the same. The only frameworks I know of that get this right are web.py and Google App Engine's.</p>
<p>I see this as a serious design flaw for a web framework. Does anyone agree? Or is it a deliberate decision based on reasons/requirements I'm not aware of?</p>
| 10 | 2009-08-10T12:39:18Z | 1,255,171 | <p>I can't speak for Django, but in CherryPy, you can have one function per HTTP verb with a single config entry:</p>
<pre><code>request.dispatch = cherrypy.dispatch.MethodDispatcher()
</code></pre>
<p>However, I have seen some situations where that's not desirable.</p>
<p>One example would be a hard redirect regardless of verb.</p>
<p>Another case is when the majority of your handlers only handle GET. It's especially annoying in that case to have a thousand page handlers all named 'GET'. It's prettier to express that in a decorator than in a function name:</p>
<pre><code>def allow(*methods):
methods = list(methods)
if not methods:
methods = ['GET', 'HEAD']
elif 'GET' in methods and 'HEAD' not in methods:
methods.append('HEAD')
def wrap(f):
def inner(*args, **kwargs):
cherrypy.response.headers['Allow'] = ', '.join(methods)
if cherrypy.request.method not in methods:
raise cherrypy.HTTPError(405)
            return f(*args, **kwargs)
inner.exposed = True
return inner
return wrap
class Root:
@allow()
def index(self):
return "Hello"
cowboy_greeting = "Howdy"
@allow()
def cowboy(self):
return self.cowboy_greeting
@allow('PUT')
def cowboyup(self, new_greeting=None):
self.cowboy_greeting = new_greeting
</code></pre>
<p>Another common one I see is looking up data corresponding to the resource in a database, which should happen regardless of verb:</p>
<pre><code>def default(self, id, **kwargs):
# 404 if no such beast
thing = Things.get(id=id)
if thing is None:
raise cherrypy.NotFound()
# ...and now switch on method
if cherrypy.request.method == 'GET': ...
</code></pre>
<p>CherryPy tries to not make the decision for you, yet makes it easy (a one-liner) if that's what you want.</p>
| 12 | 2009-08-10T14:22:15Z | [
"python",
"django",
"rest",
"cherrypy",
"web-frameworks"
] |
Why don't Django and CherryPy support HTTP verb-based dispatch natively? | 1,254,629 | <p>POSTing to a URL is not the same as GETting, DELETEing, or PUTting it. These actions are fundamentally different. However, Django seems to ignore them in its dispatch mechanism. Basically, one is forced to either ignore HTTP verbs completely or do this on every view:</p>
<pre><code>def my_view(request, arg1, arg2):
if request.method == 'GET':
return get_view(request, arg1, arg2)
if request.method == 'POST':
return post_view(request, arg1, arg2)
return http.HttpResponseNotAllowed(['GET', 'POST'])
</code></pre>
<p>The few solutions I have found for this in the web (<a href="http://www.djangosnippets.org/snippets/436/">this snippet</a> for verb-based dispatch, or <a href="http://code.djangoproject.com/browser/django/trunk/django/views/decorators/http.py">this decorator</a> for verb requirement) are not very elegant as they are clearly just workarounds.</p>
<p>The situation with CherryPy seems to be the same. The only frameworks I know of that get this right are web.py and Google App Engine's.</p>
<p>I see this as a serious design flaw for a web framework. Does anyone agree? Or is it a deliberate decision based on reasons/requirements I'm not aware of?</p>
| 10 | 2009-08-10T12:39:18Z | 15,728,522 | <p>Came across this from Google, and thought of updating.</p>
<h1>Django</h1>
<p>Just FYI, This is now supported in Django as class based views. You can extend the generic class <code>View</code> and add methods like <code>get()</code>, <code>post()</code>, <code>put()</code> etc. E.g. - </p>
<pre><code>from django.http import HttpResponse
from django.views.generic import View
class MyView(View):
def get(self, request, *args, **kwargs):
return HttpResponse('Hello, World!')
</code></pre>
<p>The <code>dispatch()</code> part handles this:</p>
<blockquote>
<h2><strong>dispatch(request, *args, **kwargs)</strong></h2>
<p>The view part of the view – the
method that accepts a request argument plus arguments, and returns a
HTTP response.</p>
<p><strong>The default implementation will inspect the HTTP method and attempt to
delegate to a method that matches the HTTP method; a GET will be
delegated to get(), a POST to post(), and so on.</strong></p>
<p>By default, a HEAD request will be delegated to get(). If you need to
handle HEAD requests in a different way than GET, you can override the
head() method. See Supporting other HTTP methods for an example.</p>
<p>The default implementation also sets request, args and kwargs as
instance variables, so any method on the view can know the full
details of the request that was made to invoke the view.</p>
</blockquote>
<p>Then you can use it in <code>urls.py</code>:</p>
<pre><code>from django.conf.urls import patterns, url
from myapp.views import MyView
urlpatterns = patterns('',
url(r'^mine/$', MyView.as_view(), name='my-view'),
)
</code></pre>
<p><a href="https://docs.djangoproject.com/en/dev/ref/class-based-views/base/#django.views.generic.base.View">More details</a>.</p>
<h1>CherryPy</h1>
<p>CherryPy now also supports this. They have a <a href="http://docs.cherrypy.org/dev/progguide/REST.html">full page</a> on this.</p>
| 7 | 2013-03-31T10:35:44Z | [
"python",
"django",
"rest",
"cherrypy",
"web-frameworks"
] |
Call PHP code from Python | 1,254,802 | <p>I'm trying to integrate an old PHP ad management system into a (Django) Python-based web application. The PHP and the Python code are both installed on the same hosts, PHP is executed by <code>mod_php5</code> and Python through <code>mod_wsgi</code>, usually.</p>
<p>Now I wonder what's the best way to call this PHP ad management code from within my Python code in the most efficient manner (the ad management code has to be called multiple times for each page)?</p>
<p>The solutions I came up with so far, are the following:</p>
<ol>
<li><p>Write SOAP interface in PHP for the ad management code and write a SOAP client in Python which then calls the appropriate functions.</p>
<p>The problem I see is, that will slow down the execution of the Python code considerably, since for each page served, multiple SOAP client requests are necessary in the background.</p></li>
<li><p>Call the PHP code through os.execvp() or subprocess.Popen() using PHP command line interface.</p>
<p>The problem here is that the PHP code makes use of the Apache environment ($_SERVER vars and other superglobals). I'm not sure if this can be simulated correctly.</p></li>
<li><p>Rewrite the ad management code in Python.</p>
<p>This will probably be the last resort. This ad management code just runs and runs, and there is no one remaining who wrote a piece of code for this :) I'd be quite afraid to do this ;)</p></li>
</ol>
<p>Any other ideas or hints how this can be done?</p>
<p>Thanks.</p>
| 2 | 2009-08-10T13:13:10Z | 1,254,825 | <p>How about using AJAX from the browser to load the ads?</p>
<p>For instance (using JQuery):</p>
<pre><code>$(document).ready(function() { $("#apageelement").load("/phpapp/getads.php"); })
</code></pre>
<p>This allows you to keep your app almost completely separate from the PHP app.</p>
| 4 | 2009-08-10T13:17:53Z | [
"php",
"python"
] |
Call PHP code from Python | 1,254,802 | <p>I'm trying to integrate an old PHP ad management system into a (Django) Python-based web application. The PHP and the Python code are both installed on the same hosts, PHP is executed by <code>mod_php5</code> and Python through <code>mod_wsgi</code>, usually.</p>
<p>Now I wonder what's the best way to call this PHP ad management code from within my Python code in a most efficient manner (the ad management code has to be called multiple times for each page)? </p>
<p>The solutions I came up with so far, are the following:</p>
<ol>
<li><p>Write SOAP interface in PHP for the ad management code and write a SOAP client in Python which then calls the appropriate functions.</p>
<p>The problem I see is, that will slow down the execution of the Python code considerably, since for each page served, multiple SOAP client requests are necessary in the background.</p></li>
<li><p>Call the PHP code through os.execvp() or subprocess.Popen() using PHP command line interface.</p>
<p>The problem here is that the PHP code makes use of the Apache environment ($_SERVER vars and other superglobals). I'm not sure if this can be simulated correctly.</p></li>
<li><p>Rewrite the ad management code in Python.</p>
<p>This will probably be the last resort. This ad management code just runs and runs, and there is no one remaining who wrote a piece of code for this :) I'd be quite afraid to do this ;)</p></li>
</ol>
<p>Any other ideas or hints how this can be done?</p>
<p>Thanks.</p>
| 2 | 2009-08-10T13:13:10Z | 1,255,480 | <p>Best solution is to use server side includes. Most webservers support this.</p>
<p>For example this is how it would be done in nginx:</p>
<pre><code><!--# include virtual="http://localhost:8080/phpapp/getads.php" -->
</code></pre>
<p>Your webserver would then dynamically request from your php backend, and insert it into the response that goes to the client. No javascript necessary, and entirely transparent.</p>
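<p>Note that SSI processing has to be switched on in the front-end server first. A minimal Apache sketch for comparison (standard <code>mod_include</code> directives; which files get filtered is up to you):</p>

```apache
# Allow SSI processing and run the INCLUDES output filter on .shtml
# pages (mod_include must be loaded); each such page can then pull in
# the PHP ad markup with an <!--#include virtual="..." --> directive.
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```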
<p>You could also use a borderless <code><iframe></code></p>
| 2 | 2009-08-10T15:17:51Z | [
"php",
"python"
] |
Call PHP code from Python | 1,254,802 | <p>I'm trying to integrate an old PHP ad management system into a (Django) Python-based web application. The PHP and the Python code are both installed on the same hosts, PHP is executed by <code>mod_php5</code> and Python through <code>mod_wsgi</code>, usually.</p>
<p>Now I wonder what's the best way to call this PHP ad management code from within my Python code in a most efficient manner (the ad management code has to be called multiple times for each page)? </p>
<p>The solutions I came up with so far, are the following:</p>
<ol>
<li><p>Write SOAP interface in PHP for the ad management code and write a SOAP client in Python which then calls the appropriate functions.</p>
<p>The problem I see is, that will slow down the execution of the Python code considerably, since for each page served, multiple SOAP client requests are necessary in the background.</p></li>
<li><p>Call the PHP code through os.execvp() or subprocess.Popen() using PHP command line interface.</p>
<p>The problem here is that the PHP code makes use of the Apache environment ($_SERVER vars and other superglobals). I'm not sure if this can be simulated correctly.</p></li>
<li><p>Rewrite the ad management code in Python.</p>
<p>This will probably be the last resort. This ad management code just runs and runs, and there is no one remaining who wrote a piece of code for this :) I'd be quite afraid to do this ;)</p></li>
</ol>
<p>Any other ideas or hints how this can be done?</p>
<p>Thanks.</p>
| 2 | 2009-08-10T13:13:10Z | 1,256,684 | <p>I've done this in the past by serving the PHP portions directly via Apache. You could either put them in with your media files, (/site_media/php/) or if you prefer to use something more lightweight for your media server (like lighttpd), you can set up another portion of the site that goes through apache with PHP enabled.</p>
<p>From there, you can either take the ajax route in your templates, or you can load the PHP from your views using urllib(2) or httplib(2). Better yet, wrap the urllib2 call in a templatetag, and call that in your templates.</p>
| 0 | 2009-08-10T19:15:55Z | [
"php",
"python"
] |
Non-editable text box in wxPython | 1,254,819 | <p>How to create a non-editable text box with no cursor in wxPython to dump text in? </p>
| 5 | 2009-08-10T13:16:03Z | 1,254,881 | <p><code>wx.StaticText</code></p>
<p>You could also use a regular TextCtrl with the style <code>TE_READONLY</code> but that shows a cursor and the text looks editable, but it isn't.</p>
| 7 | 2009-08-10T13:29:01Z | [
"python",
"textbox",
"wxpython",
"wx"
] |
How do I validate XML document using RELAX NG schema in Python? | 1,254,919 | <p>How do I validate XML document via compact RELAX NG schema in Python?</p>
| 7 | 2009-08-10T13:33:51Z | 1,254,992 | <p>How about using <a href="http://codespeak.net/lxml/validation.html#relaxng">lxml</a>?</p>
<p>From the docs:</p>
<pre><code>>>> f = StringIO('''\
... <element name="a" xmlns="http://relaxng.org/ns/structure/1.0">
... <zeroOrMore>
... <element name="b">
... <text />
... </element>
... </zeroOrMore>
... </element>
... ''')
>>> relaxng_doc = etree.parse(f)
>>> relaxng = etree.RelaxNG(relaxng_doc)
>>> valid = StringIO('<a><b></b></a>')
>>> doc = etree.parse(valid)
>>> relaxng.validate(doc)
True
>>> invalid = StringIO('<a><c></c></a>')
>>> doc2 = etree.parse(invalid)
>>> relaxng.validate(doc2)
False
</code></pre>
| 15 | 2009-08-10T13:46:26Z | [
"python",
"xml",
"relaxng"
] |
How can you ensure registered atexit function will run with AppHelper.runEventLoop() in PyObjC? | 1,255,025 | <p>I'm just wondering why my registered atexit function isn't being run, e.g.</p>
<pre><code>import atexit
atexit.register(somefunc)
...
AppHelper.runEventLoop()
</code></pre>
<p>Of course I know when <code>atexit</code> won't work. When I comment out <code>AppHelper.runEventLoop()</code> the <code>atexit</code> function gets called. I also browsed my <code>pyobjc</code> egg, and I saw under <code>__init__.py</code> under <code>objc</code> package the following code:</p>
<pre><code>import atexit
atexit.register(recycleAutoreleasePool)
</code></pre>
<p>I looked for any reference within the egg to no avail. I also tried surrounding a try-finally shell around <code>AppHelper.runEventLoop()</code>, and the commands in the finally block won't get called. </p>
<p>Hope someone could help me out here.</p>
<p>P.S. Assuming I don't want to use Application delegate's <code>applicationShouldTerminate:</code> method...</p>
| 1 | 2009-08-10T13:52:42Z | 1,255,910 | <p>I believe you do need delegates, because otherwise the event loop can exit the process rather abruptly (kind of like <code>os._exit</code>) and therefore not give the Python runtime a chance to run termination code such as <code>finally</code> clauses, <code>atexit</code> functions, etc etc.</p>
| 1 | 2009-08-10T16:37:20Z | [
"python",
"pyobjc",
"atexit"
] |
Python: using threads to call subprocess.Popen multiple times | 1,255,449 | <p>I have a service that is running (Twisted jsonrpc server). When I make a call to "run_procs" the service will look at a bunch of objects and inspect their timestamp property to see if they should run. If they should, they get added to a thread_pool (list) and then every item in the thread_pool gets the start() method called. </p>
<p>I have used this setup for several other applications where I wanted to run a function within my class with threading. However, when I am using a subprocess.Popen call in the function called by each thread, the calls run one-at-a-time instead of running concurrently like I would expect.</p>
<p>Here is some sample code:</p>
<pre><code>class ProcService(jsonrpc.JSONRPC):
    self.thread_pool = []
    self.running_threads = []
    self.lock = threading.Lock()

    def clean_pool(self, thread_pool, join=False):
        for th in [x for x in thread_pool if not x.isAlive()]:
            if join: th.join()
            thread_pool.remove(th)
            del th
        return thread_pool

    def run_threads(self, parallel=10):
        while len(self.running_threads)+len(self.thread_pool) > 0:
            self.clean_pool(self.running_threads, join=True)
            n = min(max(parallel - len(self.running_threads), 0), len(self.thread_pool))
            if n > 0:
                for th in self.thread_pool[0:n]: th.start()
                self.running_threads.extend(self.thread_pool[0:n])
                del self.thread_pool[0:n]
            time.sleep(.01)
        for th in self.running_threads+self.thread_pool: th.join()

    def jsonrpc_run_procs(self):
        for i, item in enumerate(self.items):
            if item.should_run():
                self.thread_pool.append(threading.Thread(target=self.run_proc, args=tuple([item])))
        self.run_threads(5)

    def run_proc(self, proc):
        self.lock.acquire()
        print "\nSubprocess started"
        p = subprocess.Popen('%s/program_to_run.py %s' %(os.getcwd(), proc.data), shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,)
        stdout_value = proc.communicate('through stdin to stdout')[0]
        self.lock.release()
</code></pre>
<p>Any help/suggestions are appreciated.</p>
<p><strong>* EDIT *</strong>
OK. So now I want to read back the output from the stdout pipe. This works some of the time, but also fails with select.error: (4, 'Interrupted system call'). I assume this is because sometimes the process has already terminated before I try to run the communicate method. The code in the run_proc method has been changed to: </p>
<pre><code>def run_proc(self, proc):
    self.lock.acquire()
    p = subprocess.Popen( #etc
    self.running_procs.append([p, proc.data.id])
    self.lock.release()
</code></pre>
<p>after I call self.run_threads(5) I call self.check_procs()</p>
<p>check_procs method iterates the list of running_procs to check for poll() is not None. How can I get output from pipe? I have tried both of the following</p>
<pre><code>calling check_procs once:

def check_procs(self):
    for proc_details in self.running_procs:
        proc = proc_details[0]
        while (proc.poll() == None):
            time.sleep(0.1)
        stdout_value = proc.communicate('through stdin to stdout')[0]
        self.running_procs.remove(proc_details)
        print proc_details[1], stdout_value
        del proc_details
</code></pre>
<p><hr /></p>
<pre><code>calling check_procs in while loop like:

while len(self.running_procs) > 0:
    self.check_procs()

def check_procs(self):
    for proc_details in self.running_procs:
        if (proc.poll() is not None):
            stdout_value = proc.communicate('through stdin to stdout')[0]
            self.running_procs.remove(proc_details)
            print proc_details[1], stdout_value
            del proc_details
</code></pre>
| -1 | 2009-08-10T15:11:10Z | 1,255,505 | <p>I think the key code is:</p>
<pre><code> self.lock.acquire()
print "\nSubprocess started"
p = subprocess.Popen( # etc
stdout_value = proc.communicate('through stdin to stdout')[0]
self.lock.release()
</code></pre>
<p>the explicit calls to acquire and release should guarantee serialization -- don't you observe serialization just as invariably if you do other things in this block instead of the subprocess use?</p>
<p><strong>Edit</strong>: all silence here, so I'll add the suggestion to remove the locking and instead put each <code>stdout_value</code> on a <code>Queue.Queue()</code> instance -- Queue is intrinsicaly threadsafe (deals with its own locking) so you can <code>get</code> (or <code>get_nowait</code>, etc etc) results from it once they're ready and have been <code>put</code> there. In general, <code>Queue</code> is the best way to arrange thread communication (and often synchronization too) in Python, any time it can be feasibly arranged to do things that way.</p>
<p>Specifically: add <code>import Queue</code> at the start; give up making, acquiring and releasing <code>self.lock</code> (just delete those three lines); add <code>self.q = Queue.Queue()</code> to the <code>__init__</code>; right after the call <code>stdout_value = proc.communicate(...</code> add one statement <code>self.q.put(stdout_value)</code>; now e.g finish the <code>jsonrpc_run_procs</code> method with</p>
<pre><code>while not self.q.empty():
    result = self.q.get()
    print 'One result is %r' % result
</code></pre>
<p>to confirm that all the results are there. (Normally the <code>empty</code> method of queues is not reliable, but in this case all threads putting to the queue are already finished, so you should be fine).</p>
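<p>For illustration, the whole Queue-based pattern can be sketched in a self-contained, runnable form (the <code>python -c</code> child command is a stand-in for the real <code>program_to_run.py</code>; note there is no lock, so the subprocesses genuinely overlap):</p>

```python
import subprocess
import sys
import threading

try:
    import queue                  # Python 3
except ImportError:
    import Queue as queue         # Python 2

def run_proc(data, q):
    # each thread owns its Popen object; no shared lock, so the
    # children really do run concurrently
    p = subprocess.Popen([sys.executable, '-c', 'print(%r)' % data],
                         stdout=subprocess.PIPE)
    q.put(p.communicate()[0].strip())

q = queue.Queue()
threads = [threading.Thread(target=run_proc, args=('item%d' % i, q))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every result is now sitting on the thread-safe queue
results = sorted(q.get() for _ in range(3))
```

<p>The main thread only ever touches the <code>Queue</code>, which does its own locking internally.</p>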
| 1 | 2009-08-10T15:21:56Z | [
"python"
] |
Python: using threads to call subprocess.Popen multiple times | 1,255,449 | <p>I have a service that is running (Twisted jsonrpc server). When I make a call to "run_procs" the service will look at a bunch of objects and inspect their timestamp property to see if they should run. If they should, they get added to a thread_pool (list) and then every item in the thread_pool gets the start() method called. </p>
<p>I have used this setup for several other applications where I wanted to run a function within my class with threading. However, when I am using a subprocess.Popen call in the function called by each thread, the calls run one-at-a-time instead of running concurrently like I would expect.</p>
<p>Here is some sample code:</p>
<pre><code>class ProcService(jsonrpc.JSONRPC):
    self.thread_pool = []
    self.running_threads = []
    self.lock = threading.Lock()

    def clean_pool(self, thread_pool, join=False):
        for th in [x for x in thread_pool if not x.isAlive()]:
            if join: th.join()
            thread_pool.remove(th)
            del th
        return thread_pool

    def run_threads(self, parallel=10):
        while len(self.running_threads)+len(self.thread_pool) > 0:
            self.clean_pool(self.running_threads, join=True)
            n = min(max(parallel - len(self.running_threads), 0), len(self.thread_pool))
            if n > 0:
                for th in self.thread_pool[0:n]: th.start()
                self.running_threads.extend(self.thread_pool[0:n])
                del self.thread_pool[0:n]
            time.sleep(.01)
        for th in self.running_threads+self.thread_pool: th.join()

    def jsonrpc_run_procs(self):
        for i, item in enumerate(self.items):
            if item.should_run():
                self.thread_pool.append(threading.Thread(target=self.run_proc, args=tuple([item])))
        self.run_threads(5)

    def run_proc(self, proc):
        self.lock.acquire()
        print "\nSubprocess started"
        p = subprocess.Popen('%s/program_to_run.py %s' %(os.getcwd(), proc.data), shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,)
        stdout_value = proc.communicate('through stdin to stdout')[0]
        self.lock.release()
</code></pre>
<p>Any help/suggestions are appreciated.</p>
<p><strong>* EDIT *</strong>
OK. So now I want to read back the output from the stdout pipe. This works some of the time, but also fails with select.error: (4, 'Interrupted system call'). I assume this is because sometimes the process has already terminated before I try to run the communicate method. The code in the run_proc method has been changed to: </p>
<pre><code>def run_proc(self, proc):
    self.lock.acquire()
    p = subprocess.Popen( #etc
    self.running_procs.append([p, proc.data.id])
    self.lock.release()
</code></pre>
<p>after I call self.run_threads(5) I call self.check_procs()</p>
<p>check_procs method iterates the list of running_procs to check for poll() is not None. How can I get output from pipe? I have tried both of the following</p>
<pre><code>calling check_procs once:

def check_procs(self):
    for proc_details in self.running_procs:
        proc = proc_details[0]
        while (proc.poll() == None):
            time.sleep(0.1)
        stdout_value = proc.communicate('through stdin to stdout')[0]
        self.running_procs.remove(proc_details)
        print proc_details[1], stdout_value
        del proc_details
</code></pre>
<p><hr /></p>
<pre><code>calling check_procs in while loop like:

while len(self.running_procs) > 0:
    self.check_procs()

def check_procs(self):
    for proc_details in self.running_procs:
        if (proc.poll() is not None):
            stdout_value = proc.communicate('through stdin to stdout')[0]
            self.running_procs.remove(proc_details)
            print proc_details[1], stdout_value
            del proc_details
</code></pre>
| -1 | 2009-08-10T15:11:10Z | 1,255,539 | <p>Your specific problem is probably caused by the line <code>stdout_value = proc.communicate('through stdin to stdout')[0]</code>. Subprocess.communicate will <a href="http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow">"Wait for process to terminate"</a>, which, when used with a lock, will run one at a time.</p>
<p>What you can do is simply add the <code>p</code> variable to a list and run and use the Subprocess API to wait for the subprocesses to finish. Periodically poll each subprocess in your main thread.</p>
<p>On second look, it looks like you may have an issue on this line as well: <code>for th in self.running_threads+self.thread_pool: th.join()</code>. Thread.join() is another method that will wait for the thread to finish.</p>
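<p>A small runnable sketch of that poll-based idea (the <code>python -c</code> child is an illustrative stand-in for the real worker script): start every child first, then poll them from the main thread until each has finished.</p>

```python
import subprocess
import sys
import time

# start all the children up front -- no blocking communicate() call
# is made while the others are still waiting to launch
procs = [subprocess.Popen([sys.executable, '-c', 'print(%d)' % i],
                          stdout=subprocess.PIPE)
         for i in range(3)]

outputs = {}
while len(outputs) < len(procs):
    for i, p in enumerate(procs):
        # poll() returns None while the child is still running
        if i not in outputs and p.poll() is not None:
            outputs[i] = p.stdout.read().strip()
    time.sleep(0.05)
```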
| 1 | 2009-08-10T15:24:34Z | [
"python"
] |
Catching uncaught exceptions through django development server | 1,255,467 | <p>I am looking for some way in django's development server that will make the server to stop at any uncaught exception automatically, as it is done with pdb mode in ipython console.</p>
<p>I know to put import pdb; pdb.set_trace() lines into the code to make application stop. But this doesn't help me, because the line where the exception is thrown is being called too many times. So I can't find out the exact conditions to define a conditional break point. </p>
<p>Is this possible? </p>
<p>Thank you...</p>
| 0 | 2009-08-10T15:15:14Z | 1,255,858 | <p>You can set <code>sys.excepthook</code> to a function that does <code>import pdb; pdb.pm()</code>, as per <a href="http://my.safaribooksonline.com/0596001673/pythoncook-CHP-14-SECT-6" rel="nofollow">this recipe</a>.</p>
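<p>A minimal version of that recipe might look like this (using <code>pdb.post_mortem(tb)</code>, which is what <code>pdb.pm()</code> amounts to inside a hook; it only helps when the process is attached to a terminal, so run the development server in the foreground):</p>

```python
import sys

def debug_hook(exc_type, exc_value, tb):
    # print the traceback, then drop into the post-mortem debugger
    import traceback
    import pdb
    traceback.print_exception(exc_type, exc_value, tb)
    pdb.post_mortem(tb)

sys.excepthook = debug_hook
```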
| 2 | 2009-08-10T16:24:34Z | [
"python",
"django",
"debugging"
] |
text file format from array | 1,255,688 | <p>I have a number of arrays, and I'd like to write them to a text file in a specific format, e.g.,</p>
<p>'present form'</p>
<pre><code>a= [1 2 3 4 5 ]
b= [ 1 2 3 4 5 6 7 8 ]
c= [ 8 9 10 12 23 43 45 56 76 78]
d= [ 1 2 3 4 5 6 7 8 45 56 76 78 12 23 43 ]
</code></pre>
<p>The 'required format' in a txt file,</p>
<pre><code> a '\t' b '\t' d '\t' c
1 '\t' 1
2 '\t' 2
3 '\t' 3
4 '\t' 4
5 '\t' 5
6
7
8
</code></pre>
<p><code>'\t'</code>- 1 tab space</p>
<p>The problem is:</p>
<p>I have the arrays in linear form ([a], [b], [c], and [d]); I have to transpose them into the 'required format' above, ordered [a], [b], [d], [c], and write the result to a txt file.</p>
| 0 | 2009-08-10T15:49:56Z | 1,255,742 | <pre><code>from __future__ import with_statement
import csv
import itertools
a= [1, 2, 3, 4, 5]
b= [1, 2, 3, 4, 5, 6, 7, 8]
c= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78]
d= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43]
with open('destination.txt', 'w') as f:
    cf = csv.writer(f, delimiter='\t')
    cf.writerow(['a', 'b', 'd', 'c'])  # header
    cf.writerows(itertools.izip_longest(a, b, d, c))
</code></pre>
<p>Results on <code>destination.txt</code> (<em><code><tab></code>s</em> are in fact real tabs on the file):</p>
<pre><code>a<tab>b<tab>d<tab>c
1<tab>1<tab>1<tab>8
2<tab>2<tab>2<tab>9
3<tab>3<tab>3<tab>10
4<tab>4<tab>4<tab>12
5<tab>5<tab>5<tab>23
<tab>6<tab>6<tab>43
<tab>7<tab>7<tab>45
<tab>8<tab>8<tab>56
<tab><tab>45<tab>76
<tab><tab>56<tab>78
<tab><tab>76<tab>
<tab><tab>78<tab>
<tab><tab>12<tab>
<tab><tab>23<tab>
<tab><tab>43<tab>
</code></pre>
<p>Here's the izip_longest function, if you have python < 2.6:</p>
<pre><code>def izip_longest(*iterables, **kwargs):
    # keyword-only arguments don't exist in Python 2, so pull
    # fillvalue out of **kwargs (as in the itertools docs recipe)
    fillvalue = kwargs.get('fillvalue')
    def sentinel(counter=([fillvalue]*(len(iterables)-1)).pop):
        yield counter()
    fillers = itertools.repeat(fillvalue)
    iters = [itertools.chain(it, sentinel(), fillers)
             for it in iterables]
    try:
        for tup in itertools.izip(*iters):
            yield tup
    except IndexError:
        pass
</code></pre>
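<p>For a quick check of the padding behaviour (written to work with either the Python 2 or Python 3 spelling of the function):</p>

```python
try:
    from itertools import izip_longest as zip_longest  # Python 2 name
except ImportError:
    from itertools import zip_longest                  # Python 3 name

# the shorter iterables are padded with fillvalue, which is exactly
# what produces the empty cells in destination.txt above
rows = list(zip_longest([1, 2], ['a'], fillvalue=''))
```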
| 6 | 2009-08-10T15:59:47Z | [
"python"
] |
text file format from array | 1,255,688 | <p>I have a number of arrays, and I'd like to write them to a text file in a specific format, e.g.,</p>
<p>'present form'</p>
<pre><code>a= [1 2 3 4 5 ]
b= [ 1 2 3 4 5 6 7 8 ]
c= [ 8 9 10 12 23 43 45 56 76 78]
d= [ 1 2 3 4 5 6 7 8 45 56 76 78 12 23 43 ]
</code></pre>
<p>The 'required format' in a txt file,</p>
<pre><code> a '\t' b '\t' d '\t' c
1 '\t' 1
2 '\t' 2
3 '\t' 3
4 '\t' 4
5 '\t' 5
6
7
8
</code></pre>
<p><code>'\t'</code>- 1 tab space</p>
<p>The problem is:</p>
<p>I have the arrays in linear form ([a], [b], [c], and [d]); I have to transpose them into the 'required format' above, ordered [a], [b], [d], [c], and write the result to a txt file.</p>
| 0 | 2009-08-10T15:49:56Z | 1,255,877 | <p>Have a look at matplotlib.mlab.rec2csv and csv2rec:</p>
<pre><code>>>> from matplotlib.mlab import rec2csv,csv2rec
# note: these are also imported automatically when you do ipython -pylab
>>> rec = csv2rec('csv file.csv')
>>> rec2csv(rec, 'copy csv file', delimiter='\t')
</code></pre>
| 1 | 2009-08-10T16:27:58Z | [
"python"
] |
text file format from array | 1,255,688 | <p>I have a number of arrays, and I'd like to write them to a text file in a specific format, e.g.,</p>
<p>'present form'</p>
<pre><code>a= [1 2 3 4 5 ]
b= [ 1 2 3 4 5 6 7 8 ]
c= [ 8 9 10 12 23 43 45 56 76 78]
d= [ 1 2 3 4 5 6 7 8 45 56 76 78 12 23 43 ]
</code></pre>
<p>The 'required format' in a txt file,</p>
<pre><code> a '\t' b '\t' d '\t' c
1 '\t' 1
2 '\t' 2
3 '\t' 3
4 '\t' 4
5 '\t' 5
6
7
8
</code></pre>
<p><code>'\t'</code>- 1 tab space</p>
<p>The problem is:</p>
<p>I have the arrays in linear form ([a], [b], [c], and [d]); I have to transpose them into the 'required format' above, ordered [a], [b], [d], [c], and write the result to a txt file.</p>
| 0 | 2009-08-10T15:49:56Z | 1,256,020 | <p>Just for fun with no imports:</p>
<pre><code>a= [1, 2, 3, 4, 5]
b= [1, 2, 3, 4, 5, 6, 7, 8]
c= [8, 9, 10, 12, 23, 43, 45, 56, 76, 78]
d= [1, 2, 3, 4, 5, 6, 7, 8, 45, 56, 76, 78, 12, 23, 43]
fh = open("out.txt", "w")
# header line
fh.write("a\tb\td\tc\n")
# rest of file: map() pads the shorter lists with None in Python 2,
# so turn None (and only None) into an empty string
for i in map(lambda *row: ["" if elem is None else elem for elem in row], *[a, b, d, c]):
    fh.write("\t".join(map(str, i)) + "\n")
fh.close()
</code></pre>
| -1 | 2009-08-10T17:03:12Z | [
"python"
] |
Finding Functions Defined in a with: Block | 1,255,914 | <p>Here's some code from <a href="http://www.mechanicalcat.net/richard/log/Python/Something%5FI%5Fm%5Fworking%5Fon.3">Richard Jones' Blog</a>:</p>
<pre><code>with gui.vertical:
    text = gui.label('hello!')
    items = gui.selection(['one', 'two', 'three'])
    with gui.button('click me!'):
        def on_click():
            text.value = items.value
            text.foreground = red
</code></pre>
<p>My question is: how the heck did he do this? How can the context manager access the scope inside the with block? Here's a basic template for trying to figure this out:</p>
<pre><code>from __future__ import with_statement

class button(object):
    def __enter__(self):
        #do some setup
        pass
    def __exit__(self, exc_type, exc_value, traceback):
        #XXX: how can we find the testing() function?
        pass

with button():
    def testing():
        pass
</code></pre>
| 9 | 2009-08-10T16:38:03Z | 1,256,018 | <p>Here's one way:</p>
<pre><code>from __future__ import with_statement
import inspect

class button(object):

    def __enter__(self):
        # keep track of all that's already defined BEFORE the `with`
        f = inspect.currentframe(1)
        self.mustignore = dict(f.f_locals)

    def __exit__(self, exc_type, exc_value, traceback):
        f = inspect.currentframe(1)
        # see what's been bound anew in the body of the `with`
        interesting = dict()
        for n in f.f_locals:
            newf = f.f_locals[n]
            if n not in self.mustignore:
                interesting[n] = newf
                continue
            anf = self.mustignore[n]
            if id(newf) != id(anf):
                interesting[n] = newf
        if interesting:
            print 'interesting new things: %s' % ', '.join(sorted(interesting))
            for n, v in interesting.items():
                if isinstance(v, type(lambda:None)):
                    print 'function %r' % n
                    print v()
        else:
            print 'nothing interesting'

def main():
    for i in (1, 2):
        def ignorebefore():
            pass
        with button():
            def testing(i=i):
                return i
        def ignoreafter():
            pass

main()
</code></pre>
<p><strong>Edit</strong>: stretched code a bit more, added some explanation...:</p>
<p>Catching caller's locals at <code>__exit__</code> is easy -- trickier is avoiding those locals that were already defined <em>before</em> the <code>with</code> block, which is why I added to main two local functions that the <code>with</code> should ignore. I'm not 100% happy with this solution, which looks a bit complicated, but I couldn't get equality testing correct with either <code>==</code> or <code>is</code>, so I resorted to this rather complicated approach.</p>
<p>I've also added a loop (to make more strongly sure the <code>def</code>s before / within / after are being properly handled) and a type-check and function-call to make sure the right incarnation of <code>testing</code> is the one that's identified (everything seems to work fine) -- of course the code as written only works if the <code>def</code> inside the <code>with</code> is for a function callable without arguments, it's not hard to get the signature with <code>inspect</code> to ward against that (but since I'm doing the call only for the purpose of checking that the right function objects are identified, I didn't bother about this last refinement;-).</p>
| 11 | 2009-08-10T17:02:58Z | [
"python",
"scope",
"with-statement",
"contextmanager"
] |
Finding Functions Defined in a with: Block | 1,255,914 | <p>Here's some code from <a href="http://www.mechanicalcat.net/richard/log/Python/Something%5FI%5Fm%5Fworking%5Fon.3">Richard Jones' Blog</a>:</p>
<pre><code>with gui.vertical:
    text = gui.label('hello!')
    items = gui.selection(['one', 'two', 'three'])
    with gui.button('click me!'):
        def on_click():
            text.value = items.value
            text.foreground = red
</code></pre>
<p>My question is: how the heck did he do this? How can the context manager access the scope inside the with block? Here's a basic template for trying to figure this out:</p>
<pre><code>from __future__ import with_statement

class button(object):
    def __enter__(self):
        #do some setup
        pass
    def __exit__(self, exc_type, exc_value, traceback):
        #XXX: how can we find the testing() function?
        pass

with button():
    def testing():
        pass
</code></pre>
| 9 | 2009-08-10T16:38:03Z | 1,256,661 | <p>To answer your question, yes, it's frame introspection.</p>
<p>But the syntax I would create to do the same thing is</p>
<pre><code>with gui.vertical:
    text = gui.label('hello!')
    items = gui.selection(['one', 'two', 'three'])

    @gui.button('click me!')
    class button:
        def on_click():
            text.value = items.value
            text.foreground = red
</code></pre>
<p>Here I would implement <code>gui.button</code> as a decorator that returns a button instance given some parameters and events (though it appears to me now that <code>button = gui.button('click me!', mybutton_onclick)</code> is fine as well).</p>
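<p>That decorator idea can be sketched with plain Python; everything here (<code>Button</code>, <code>button</code>) is hypothetical scaffolding for illustration, not a real GUI toolkit:</p>

```python
class Button(object):
    # a stand-in widget that just remembers its label and callback
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click

def button(label):
    # the factory returns a decorator that wires the decorated
    # callback to a Button instance
    def decorate(callback):
        return Button(label, callback)
    return decorate

@button('click me!')
def my_button():
    return 'clicked'
```

<p>After decoration, <code>my_button</code> is no longer a function but the wired-up button object.</p>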
<p>I would also leave <code>gui.vertical</code> as it is since it can be implemented without introspection. I'm not sure about its implementation, but it may involve setting <code>gui.direction = gui.VERTICAL</code> so that <code>gui.label()</code> and others use it in computing their coordinates.</p>
<p>Now when I look at this, I think I'd try the syntax:</p>
<pre><code>with gui.vertical:
    text = gui.label('hello!')
    items = gui.selection(['one', 'two', 'three'])

    @gui.button('click me!')
    def button():
        text.value = items.value
        foreground = red
</code></pre>
<p>(the idea being that similarly to how label is made out of text, a button is made out of text and function)</p>
| 1 | 2009-08-10T19:10:42Z | [
"python",
"scope",
"with-statement",
"contextmanager"
] |
How to read String in java that was written using python's struct.pack method | 1,255,918 | <p>I have written information to a file in Python using struct.pack,
e.g.</p>
<pre><code>out.write( struct.pack(">f", 1.1) );
out.write( struct.pack(">i", 12) );
out.write( struct.pack(">3s", "abc") );
</code></pre>
<p>Then I read it in java using <code>DataInputStream</code> and <code>readInt</code>, <code>readFloat</code> and <code>readUTF</code>.
Reading the numbers works but as soon as I call <code>readUTF()</code> I get <code>EOFException</code>.</p>
<p>I assume this is because of the differences in the format of the string being written and the way java reads it, or am I doing something wrong?</p>
<p>If they are incompatible, is there another way to read and write strings?</p>
| 2 | 2009-08-10T16:38:51Z | 1,256,054 | <p>The format expected by <code>readUTF()</code>, is documented <a href="http://java.sun.com/javase/6/docs/api/java/io/DataInput.html" rel="nofollow">here</a>. In short, it expects a 16-bit, big-endian length followed by the bytes of the string. So, I think you could modify your pack call to look something like this:</p>
<pre><code>s = "abc"
out.write( struct.pack(">H", len(s) ))
out.write( struct.pack(">%ds" % len(s), s ))
</code></pre>
<p>My Python is a little rusty, but I think that's close. It also assumes that a short (the <code>>H</code>) is 16 bits.</p>
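<p>A self-checking version of that idea, for illustration (note <code>readUTF()</code> technically expects "modified UTF-8", which is byte-for-byte identical to UTF-8 for plain ASCII strings like this one):</p>

```python
import struct

def pack_java_utf(s):
    # big-endian unsigned 16-bit byte count, then the bytes themselves,
    # matching the layout DataInputStream.readUTF() reads
    data = s.encode('utf-8')
    return struct.pack('>H', len(data)) + data

packed = pack_java_utf('abc')
(length,) = struct.unpack('>H', packed[:2])  # first two bytes: length
```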
| 4 | 2009-08-10T17:09:22Z | [
"java",
"python"
] |
How do you dynamically hide form fields in Django? | 1,255,976 | <p>I am making a profile form in Django. There are a lot of optional extra profile fields but I would only like to show two at a time. How do I hide or remove the fields I do not want to show dynamically? </p>
<p>Here is what I have so far:</p>
<pre><code>class UserProfileForm(forms.ModelForm):
    extra_fields = ('field1', 'field2', 'field3')
    extra_field_total = 2

    class Meta:
        model = UserProfile

    def __init__(self, *args, **kwargs):
        extra_field_count = 0
        for key, field in self.base_fields.iteritems():
            if key in self.extra_fields:
                if extra_field_count < self.extra_field_total:
                    extra_field_count += 1
                else:
                    # do something here to hide or remove field
        super(UserProfileForm, self).__init__(*args, **kwargs)
</code></pre>
| 13 | 2009-08-10T16:51:04Z | 1,256,386 | <p>You are coding this in the Form. Wouldn't it make more sense to do this using CSS and JavaScript in the template code? Hiding a field is as easy as setting "display='none'" and toggling it back to 'block', say, if you need to display it.</p>
<p>Maybe some context on what the requirement is would clarify this.</p>
| 0 | 2009-08-10T18:16:41Z | [
"python",
"django",
"django-forms"
] |
How do you dynamically hide form fields in Django? | 1,255,976 | <p>I am making a profile form in Django. There are a lot of optional extra profile fields but I would only like to show two at a time. How do I hide or remove the fields I do not want to show dynamically? </p>
<p>Here is what I have so far:</p>
<pre><code>class UserProfileForm(forms.ModelForm):
    extra_fields = ('field1', 'field2', 'field3')
    extra_field_total = 2

    class Meta:
        model = UserProfile

    def __init__(self, *args, **kwargs):
        extra_field_count = 0
        for key, field in self.base_fields.iteritems():
            if key in self.extra_fields:
                if extra_field_count < self.extra_field_total:
                    extra_field_count += 1
                else:
                    # do something here to hide or remove field
        super(UserProfileForm, self).__init__(*args, **kwargs)
</code></pre>
| 13 | 2009-08-10T16:51:04Z | 1,256,705 | <p>I think I found my answer.</p>
<p>First I tried:</p>
<pre><code>field.widget = field.hidden_widget
</code></pre>
<p>which didn't work.</p>
<p>The correct way happens to be:</p>
<pre><code>field.widget = field.hidden_widget()
</code></pre>
| 14 | 2009-08-10T19:18:53Z | [
"python",
"django",
"django-forms"
] |
How do you dynamically hide form fields in Django? | 1,255,976 | <p>I am making a profile form in Django. There are a lot of optional extra profile fields but I would only like to show two at a time. How do I hide or remove the fields I do not want to show dynamically? </p>
<p>Here is what I have so far:</p>
<pre><code>class UserProfileForm(forms.ModelForm):
extra_fields = ('field1', 'field2', 'field3')
extra_field_total = 2
class Meta:
model = UserProfile
def __init__(self, *args, **kwargs):
extra_field_count = 0
for key, field in self.base_fields.iteritems():
if key in self.extra_fields:
if extra_field_count < self.extra_field_total:
extra_field_count += 1
else:
# do something here to hide or remove field
super(UserProfileForm, self).__init__(*args, **kwargs)
</code></pre>
| 13 | 2009-08-10T16:51:04Z | 9,183,273 | <p>Can also use</p>
<pre><code>def __init__(self, instance, *args, **kwargs):
super(FormClass, self).__init__(instance=instance, *args, **kwargs)
if instance and instance.item:
del self.fields['field_for_item']
</code></pre>
| 3 | 2012-02-07T20:21:35Z | [
"python",
"django",
"django-forms"
] |
How do you dynamically hide form fields in Django? | 1,255,976 | <p>I am making a profile form in Django. There are a lot of optional extra profile fields but I would only like to show two at a time. How do I hide or remove the fields I do not want to show dynamically? </p>
<p>Here is what I have so far:</p>
<pre><code>class UserProfileForm(forms.ModelForm):
extra_fields = ('field1', 'field2', 'field3')
extra_field_total = 2
class Meta:
model = UserProfile
def __init__(self, *args, **kwargs):
extra_field_count = 0
for key, field in self.base_fields.iteritems():
if key in self.extra_fields:
if extra_field_count < self.extra_field_total:
extra_field_count += 1
else:
# do something here to hide or remove field
super(UserProfileForm, self).__init__(*args, **kwargs)
</code></pre>
| 13 | 2009-08-10T16:51:04Z | 9,792,130 | <pre><code>def __init__(self, *args, **kwargs):
is_video = kwargs.pop('is_video')
is_image = kwargs.pop('is_image')
super(ContestForm, self).__init__(*args, **kwargs)
if is_video:
del self.fields['video_link']
#self.exclude('video_link')
if is_image:
del self.fields['image']
</code></pre>
<p>Use <code>del</code> on <code>self.fields</code> (as above) rather than the commented-out <code>self.exclude</code>, which forms do not provide.</p>
| 3 | 2012-03-20T17:42:19Z | [
"python",
"django",
"django-forms"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,256,160 | <p>you can combine them with <a href="http://docs.python.org/library/functions.html?highlight=zip#zip" rel="nofollow"><code>zip</code> function</a>, like so:</p>
<pre><code>for sim, datapoint, (x, y, z) in zip(simnum, npts, xyz):
# do your thing
</code></pre>
<p>or you could avoid list comprehensions altogether and just iterate over the lines of the file:</p>
<pre><code>for line in open(fname):
lst = line.split('\t')
sim, datapoint = int(lst[0]), int(lst[1])
x, y, z = [float(i) for i in lst[2:]]
# do your thing
</code></pre>
<p>to parse a single line you could (and should) do the following:</p>
<pre><code>coords = [x.split('\t') for x in open(fname)]
</code></pre>
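<p>Putting those pieces together, a minimal sketch that parses the lines and builds the nested <code>data[simulation][datapoint]</code> structure the question asks for (an in-memory <code>io.StringIO</code> with made-up values stands in for the file):</p>

```python
import io

# Stand-in for open(fname): two simulations with two points each.
sample = io.StringIO(
    "0\t0\t-11.007001\t-14.222319\t2.336769\n"
    "0\t1\t-10.5\t-13.0\t2.5\n"
    "1\t0\t-9.0\t-12.0\t3.0\n"
    "1\t1\t-8.5\t-11.5\t3.5\n"
)

data = {}
for line in sample:
    lst = line.split('\t')
    sim, point = int(lst[0]), int(lst[1])
    xyz = [float(v) for v in lst[2:]]
    data.setdefault(sim, {})[point] = xyz

# data[simulation][datapoint] -> [x, y, z]
print(data[1][0])  # [-9.0, -12.0, 3.0]
```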
| 2 | 2009-08-10T17:29:17Z | [
"python",
"arrays",
"text"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,256,221 | <p>According to the zen of python, flat is better than nested. I'd just use a dict.</p>
<pre><code>import csv
f = csv.reader(open('thefile.csv'), delimiter='\t',
quoting=csv.QUOTE_NONNUMERIC)
result = {}
for simn, dpoint, c1, c2, c3 in f:
result[simn, dpoint] = c1, c2, c3
# pretty-prints the result:
from pprint import pprint
pprint(result)
</code></pre>
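<p>If the nested <code>[simnum][num_dat_point][xyz]</code> layout from the question is still wanted, the flat dict converts easily, assuming the simulation and point numbers run contiguously from 0 as the question states (sample values here are made up):</p>

```python
# Flat dict keyed by (simulation, datapoint), as built above.
result = {
    (0, 0): (1.0, 2.0, 3.0),
    (0, 1): (4.0, 5.0, 6.0),
    (1, 0): (7.0, 8.0, 9.0),
    (1, 1): (10.0, 11.0, 12.0),
}

n_sims = max(s for s, d in result) + 1
n_pts = max(d for s, d in result) + 1

# nested[sim][point] -> (x, y, z)
nested = [[result[s, d] for d in range(n_pts)] for s in range(n_sims)]
```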
| 2 | 2009-08-10T17:43:20Z | [
"python",
"arrays",
"text"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,256,248 | <p>This seems like a good opportunity to use itertools.groupby.</p>
<pre><code>import itertools
import csv
file = open("data.txt")
reader = csv.reader(file, delimiter='\t')
result = []
for simnumberStr, rows in itertools.groupby(reader, key=lambda t: t[0]):
simData = []
for row in rows:
simData.append([float(v) for v in row[2:]])
result.append(simData)
file.close()
</code></pre>
<p>This will create a 3-dimensional list named 'result'. The first index is the simulation number, and the second index is the data index within that simulation. The value is a list of floats containing the x, y, and z coordinates.</p>
<p>Note that this assumes the data is already sorted on simulation number and data number.</p>
| 2 | 2009-08-10T17:47:54Z | [
"python",
"arrays",
"text"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,256,313 | <p>essentially the difficulty is what happens if different simulations have different numbers of points.</p>
<p>You will therefore need to dimension an array to the appropriate sizes first.
t should be an array of at least <code>max(simnum) x max(npts) x 3</code>.
To eliminate confusion you should initialise with not-a-number,
this will allow you to see missing points.</p>
<p>then use something like</p>
<pre><code>for x in coords:
    t[int(x[0])][int(x[1])][0] = float(x[2])
    t[int(x[0])][int(x[1])][1] = float(x[3])
    t[int(x[0])][int(x[1])][2] = float(x[4])
</code></pre>
<p>is this what you meant?</p>
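<p>With <code>numpy</code>, the pre-dimensioned not-a-number array described above might look like this (sizes hard-coded to the question's 300 simulations of 32 points, with a single made-up parsed row for illustration):</p>

```python
import numpy as np

n_sims, n_pts = 300, 32
t = np.full((n_sims, n_pts, 3), np.nan)

# Fill from parsed rows of strings: [sim, point, x, y, z].
coords = [['0', '0', '-11.007001', '-14.222319', '2.336769']]
for row in coords:
    t[int(row[0]), int(row[1])] = [float(v) for v in row[2:]]

# Points never filled in remain NaN and are easy to detect:
missing = np.isnan(t).any(axis=2)
```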
| 1 | 2009-08-10T18:02:16Z | [
"python",
"arrays",
"text"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,256,365 | <p>First I'd point out that your first data point appears to be an index, and wonder if the data is therefore important or not, but whichever :-)</p>
<pre><code>def parse(line):
mch = re.compile('^(\d+)\s+(\d+)\s+([-\d\.]+)\s+([-\d\.]+)\s+([-\d\.]+)$')
m = mch.match(line)
if m:
l = m.groups()
(idx,data,xyz) = (int(l[0]),int(l[1]), map(float, l[2:]))
return (idx, data, xyz)
return None
finaldata = []
file = open("data.txt",'r')
for line in file:
r = parse(line)
if r is not None:
finaldata.append(r)
</code></pre>
<p>Final data should have output along the lines of:</p>
<pre><code>[(0, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),
(1, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),
(2, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),
(3, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999]),
(4, 0, [-11.007001000000001, -14.222319000000001, 2.3367689999999999])]
</code></pre>
<p>This should be pretty robust about dealing w/ the whitespace issues (tabs spaces whatnot)... </p>
<p>I also wonder how big your data files are, mine are usually large so being able to process them in chunks or groups become more important... Anyway this will work in python 2.6.</p>
| 0 | 2009-08-10T18:11:57Z | [
"python",
"arrays",
"text"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,256,420 | <p>You could be using many different kinds of containers for your purposes, but none of them has <code>array</code> as an unqualified name -- Python has a module <code>array</code> which you can import from the standard library, but the <code>array.array</code> type is too limited for your purposes (1-D only and with elementary types as contents); there's a popular third-party extension known as <code>numpy</code>, which does have a powerful <code>numpy.array</code> type, which you could use if you have downloaded and installed the extension -- but as you never even once mention <code>numpy</code> I doubt that's what you mean; the relevant builtin types are <code>list</code> and <code>dict</code>. I'll assume you want any container whatsoever -- but if you could learn to use precise terminology in the future, that will substantially help you AND anybody who's trying to help you (say list when you mean list, array only when you DO mean array, "container" when you're uncertain about what container to use, and so forth).</p>
<p>I suggest you look at the <code>csv</code> module in the standard library for a more robust way to reading your data, but that's a separate issue. Let's start from when you have the <code>coords</code> list of lists of 5 strings each, each sublist with strings representing two ints followed by three floats. Two more key aspects need to be specified...</p>
<p>One key aspect you don't tell us about: is the list sorted in some significant way? is there, in particular, some significant order you want to keep? As you don't even mention either issue, I will have to assume one way or another, and I'll assume that there isn't any guaranteed nor meaningful order; but, no repetition (each pair of simulation/datapoint numbers is not allowed to occur more than once).</p>
<p>Second key aspect: are there the same number of datapoints per simulation, in increasing order (0, 1, 2, ...), or is that not necessarily the case (and btw, are the simulation themselves numbered 0, 1, 2, ...)? Again, no clue from you on this indispensable part of the specs -- note how many assumptions you're forcing would-be helpers to make by just <em>not telling us</em> about such obviously crucial aspects. Don't let people who want to help you stumble in the dark: rather, learn to <a href="http://catb.org/~esr/faqs/smart-questions.html" rel="nofollow">ask questions the smart way</a> -- this will save untold amounts of time to yourself AND would-be helpers, <strong>and</strong> give you higher-quality and more relevant help, so, why not do it? Anyway, forced to make yet another assumption, I'll have to assume nothing at all is known about the simulation numbers nor about the numers of datapoints in each simulation.</p>
<p>With these assumptions <code>dict</code> emerges as the only sensible structure to use for the outer container: a dictionary whose key is a tuple with two items, simulation number then datapoint number within the simulation. The values may as well be tuple, too (with three floats each), since it does appear that you have exactly 3 coordinates per line.</p>
<p>With all of these assumptions...:</p>
<pre><code>def make_container(coords):
result = dict()
for s, d, x, y, z in coords:
key = int(s), int(d)
value = float(x), float(y), float(z)
result[key] = value
return result
</code></pre>
<p>It's always best, and fastest, to have all significant code within <code>def</code> statements (i.e. as functions to be called, possibly with appropriate arguments), so I'm presenting it this way. <code>make_container</code> returns a dictionary which you can address with the simulation number and datapoint number; for example,</p>
<pre><code>d = make_container(coords)
print d[0, 0]
</code></pre>
<p>will print the x, y, z for dp 0 of sim 0, assuming one exists (you would get an error if such a sim/dp combination did not exist). dicts have many useful methods, e.g. changing the print statement above to</p>
<pre><code>print d.get((0, 0))
</code></pre>
<p>(yes, you <strong>do</strong> need double parentheses here -- inner ones to make a tuple, outer ones to call <code>get</code> with that tuple as its single argument), you'd see <code>None</code>, rather than get an exception, if there was no such sim/dp combination as (0, 0).</p>
<p>If you can edit your question to make your specs more precise (perhaps including some indication of ways you plan to use the resulting container, as well as the various key aspects I've listed above), I might well be able to fine-tune this advice to match your need and circumstances much better (and so might ever other responder, regarding their own advice!), so I strongly recommend you do so -- thanks in advance for helping us help you!-)</p>
| 1 | 2009-08-10T18:25:03Z | [
"python",
"arrays",
"text"
] |
create an array from a txt file | 1,256,099 | <p>hey guys,
I'm new in python and I have a problem.
I have some measured data saved in a txt file.
the data is separated with tabs, it has this structure:</p>
<pre><code>0 0 -11.007001 -14.222319 2.336769
</code></pre>
<p>i have always 32 datapoints per simulation (0,1,2,...,31) and i have 300 simulations (0,1,2...,299), so the data is sorted at first with the number of simulation and then the number of the data point.</p>
<p>The first column is the simulation number, the second column is the data point number and the other 3 columns are the x,y,z coordinates.</p>
<p>I would like to create a 3d array, the first dimension should be the simulation number, the second the number of the datapoint and the third the three coordinates.</p>
<p>I already started a bit and here is what I have so far:</p>
<pre><code>## read file
coords = [x.split('\t') for x in
open(f,'r').read().replace('\r','')[:-1].split('\n')]
## extract the information you want
simnum = [int(x[0]) for x in coords]
npts = [int(x[1]) for x in coords]
xyz = array([map(float,x[2:]) for x in coords])
</code></pre>
<p>but I don't know how to combine these 2 lists and this one array.</p>
<p>in the end i would like to have something like this:</p>
<p>array = [simnum][num_dat_point][xyz]</p>
<p>thanks for your help.</p>
<p>I hope you understand my problem, it's my first posting in a python forum, so if I did anything wrong, I'm sorry about this.</p>
<p>thanks again</p>
| 3 | 2009-08-10T17:19:12Z | 1,257,787 | <p>Are you sure a 3d array is what you want? It seems more likely that you want a 2d array, where the simulation number is one dimension, the data point is the second, and then the value stored at that location is the coordinates.</p>
<p>This code will give you that.</p>
<pre><code>data = {}
for coord in coords:
    if coord[0] not in data:
        data[coord[0]] = {}
    data[coord[0]][coord[1]] = (coord[2], coord[3], coord[4])
</code></pre>
<p>To get the coordinates at simulation 7, data point 13, just do data[7][13]</p>
| 0 | 2009-08-10T23:26:49Z | [
"python",
"arrays",
"text"
] |
Django - Getting last object created, simultaneous filters | 1,256,190 | <p>Apologies, I am completely new to Django and Python.</p>
<p>I have 2 questions. First, how would I go about getting the last object created (or highest pk) in a list of objects? For example, I know that I could use the following to get the first object:</p>
<pre><code>list = List.objects.all()[0]
</code></pre>
<p>Is there a way to get the length of List.objects? I've tried List.objects.length but to no avail.</p>
<p>Second, is it possible to create simultaneous filters or combine lists? Here is an example:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number)
</code></pre>
<p>I want something like the above, but more like:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number or home_phone=number)
</code></pre>
<p>What is the correct syntax, if any?</p>
| 21 | 2009-08-10T17:35:15Z | 1,256,241 | <p>You can use the count() method on a query set the get the number of items.</p>
<pre><code>qs = List.objects.all()  # avoid naming it "list", which shadows the builtin
qs.count()
</code></pre>
<p>Arguments to filter are "AND"ed together. If you need to do OR filters look at Q objects.
<a href="http://docs.djangoproject.com/en/dev/topics/db/queries/#complex-lookups-with-q-objects" rel="nofollow">http://docs.djangoproject.com/en/dev/topics/db/queries/#complex-lookups-with-q-objects</a></p>
| 4 | 2009-08-10T17:46:11Z | [
"python",
"django",
"filter",
"django-views"
] |
Django - Getting last object created, simultaneous filters | 1,256,190 | <p>Apologies, I am completely new to Django and Python.</p>
<p>I have 2 questions. First, how would I go about getting the last object created (or highest pk) in a list of objects? For example, I know that I could use the following to get the first object:</p>
<pre><code>list = List.objects.all()[0]
</code></pre>
<p>Is there a way to get the length of List.objects? I've tried List.objects.length but to no avail.</p>
<p>Second, is it possible to create simultaneous filters or combine lists? Here is an example:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number)
</code></pre>
<p>I want something like the above, but more like:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number or home_phone=number)
</code></pre>
<p>What is the correct syntax, if any?</p>
| 21 | 2009-08-10T17:35:15Z | 1,256,325 | <p>I haven't tried this yet, but I'd look at the <em>latest()</em> operator on <a href="http://docs.djangoproject.com/en/dev/ref/models/querysets/#ref-models-querysets">QuerySets</a>:</p>
<blockquote>
<p>latest(field_name=None)</p>
<p>Returns the latest object in the
table, by date, using the field_name
provided as the date field.</p>
<p>This example returns the latest Entry
in the table, according to the
pub_date field:</p>
<p>Entry.objects.latest('pub_date')</p>
<p>If your model's Meta specifies
get_latest_by, you can leave off the
field_name argument to latest().
Django will use the field specified in
get_latest_by by default.</p>
<p>Like get(), latest() raises
DoesNotExist if an object doesn't
exist with the given parameters.</p>
<p>Note latest() exists purely for
convenience and readability.</p>
</blockquote>
<p>And the <a href="http://docs.djangoproject.com/en/dev/ref/models/options/">model docs on get_latest_by</a>:</p>
<blockquote>
<p>get_latest_by</p>
<p>Options.get_latest_by</p>
<p>The name of a DateField or DateTimeField in the model. This specifies the default field to use in your model Manager's latest method.</p>
<p>Example:</p>
<p>get_latest_by = "order_date"</p>
<p>See the docs for latest() for more.</p>
</blockquote>
<p>Edit: Wade has a good answer on Q() operator.</p>
| 31 | 2009-08-10T18:04:21Z | [
"python",
"django",
"filter",
"django-views"
] |
Django - Getting last object created, simultaneous filters | 1,256,190 | <p>Apologies, I am completely new to Django and Python.</p>
<p>I have 2 questions. First, how would I go about getting the last object created (or highest pk) in a list of objects? For example, I know that I could use the following to get the first object:</p>
<pre><code>list = List.objects.all()[0]
</code></pre>
<p>Is there a way to get the length of List.objects? I've tried List.objects.length but to no avail.</p>
<p>Second, is it possible to create simultaneous filters or combine lists? Here is an example:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number)
</code></pre>
<p>I want something like the above, but more like:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number or home_phone=number)
</code></pre>
<p>What is the correct syntax, if any?</p>
| 21 | 2009-08-10T17:35:15Z | 1,256,899 | <p>For the largest primary key, try this: </p>
<pre><code>List.objects.order_by('-pk')[0]
</code></pre>
<p>Note that using <code>pk</code> works regardless of the actual name of the field defined as your primary key.</p>
| 11 | 2009-08-10T19:58:19Z | [
"python",
"django",
"filter",
"django-views"
] |
Django - Getting last object created, simultaneous filters | 1,256,190 | <p>Apologies, I am completely new to Django and Python.</p>
<p>I have 2 questions. First, how would I go about getting the last object created (or highest pk) in a list of objects? For example, I know that I could use the following to get the first object:</p>
<pre><code>list = List.objects.all()[0]
</code></pre>
<p>Is there a way to get the length of List.objects? I've tried List.objects.length but to no avail.</p>
<p>Second, is it possible to create simultaneous filters or combine lists? Here is an example:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number)
</code></pre>
<p>I want something like the above, but more like:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number or home_phone=number)
</code></pre>
<p>What is the correct syntax, if any?</p>
| 21 | 2009-08-10T17:35:15Z | 1,886,857 | <p>this works!</p>
<p><code>Model.objects.latest('field')</code>, where <code>field</code> can be <code>id</code>; that returns the object with the latest id.</p>
| 13 | 2009-12-11T09:28:20Z | [
"python",
"django",
"filter",
"django-views"
] |
Django - Getting last object created, simultaneous filters | 1,256,190 | <p>Apologies, I am completely new to Django and Python.</p>
<p>I have 2 questions. First, how would I go about getting the last object created (or highest pk) in a list of objects? For example, I know that I could use the following to get the first object:</p>
<pre><code>list = List.objects.all()[0]
</code></pre>
<p>Is there a way to get the length of List.objects? I've tried List.objects.length but to no avail.</p>
<p>Second, is it possible to create simultaneous filters or combine lists? Here is an example:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number)
</code></pre>
<p>I want something like the above, but more like:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number or home_phone=number)
</code></pre>
<p>What is the correct syntax, if any?</p>
| 21 | 2009-08-10T17:35:15Z | 24,908,035 | <p>alternative for the latest object created:</p>
<pre><code>List.objects.all()[List.objects.count()-1]
</code></pre>
<p>It is necessary to catch <code>AssertionError</code> for the case when there are no items in the list (the index becomes -1, and Django querysets do not support negative indexing):</p>
<pre><code>try:
    last = List.objects.all()[List.objects.count() - 1]
except AssertionError:
    last = None
</code></pre>
| 0 | 2014-07-23T10:30:04Z | [
"python",
"django",
"filter",
"django-views"
] |
Django - Getting last object created, simultaneous filters | 1,256,190 | <p>Apologies, I am completely new to Django and Python.</p>
<p>I have 2 questions. First, how would I go about getting the last object created (or highest pk) in a list of objects? For example, I know that I could use the following to get the first object:</p>
<pre><code>list = List.objects.all()[0]
</code></pre>
<p>Is there a way to get the length of List.objects? I've tried List.objects.length but to no avail.</p>
<p>Second, is it possible to create simultaneous filters or combine lists? Here is an example:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number)
</code></pre>
<p>I want something like the above, but more like:</p>
<pre><code>def findNumber(request, number)
phone_list = Numbers.objects.filter(cell=number or home_phone=number)
</code></pre>
<p>What is the correct syntax, if any?</p>
| 21 | 2009-08-10T17:35:15Z | 33,762,411 | <p><strong>Since Django 1.6 - last</strong></p>
<p><code>last()</code></p>
<p>Works like <code>first()</code>, but returns the last object in the queryset. </p>
<p>Returns the last object matched by the queryset, or <code>None</code> if there is no matching object. If the QuerySet has no ordering defined, then the queryset is automatically ordered by the primary key.</p>
<p><code>list = List.objects.last()</code> gives you the last object created</p>
| 5 | 2015-11-17T16:41:44Z | [
"python",
"django",
"filter",
"django-views"
] |
pywikipedia bot with https and http authentication | 1,256,213 | <p>I'm having trouble getting my bot to login to a MediaWiki install on the intranet. I believe it is due to the http authentication protecting the wiki. </p>
<p>Facts:</p>
<ol>
<li>The wiki root is: <a href="https://local.example.com/mywiki/" rel="nofollow">https://local.example.com/mywiki/</a></li>
<li>When visiting the wiki with a web browser, a popup comes up asking for enterprise credentials (I assume this is basic access authentication) </li>
</ol>
<p>This is what I have in my user-config.py:</p>
<pre><code>mylang = 'en'
family = 'mywiki'
usernames['mywiki']['en'] = u'Bot'
authenticate['local.example.com'] = ('user', 'pass')
</code></pre>
<p>This is what I have in mywiki_family.py:</p>
<pre><code># -*- coding: utf-8 -*-
import family, config
# The Wikimedia family that is known as mywiki
class Family(family.Family):
def __init__(self):
family.Family.__init__(self)
self.name = 'mywiki'
self.langs = { 'en' : 'local.example.com'}
def scriptpath(self, code):
return '/mywiki'
def version(self, code):
return '1.13.5'
def isPublic(self):
return False
def hostname(self, code):
return 'local.example.com'
def protocol(self, code):
return 'https'
def path(self, code):
return '/mywiki/index.php'
</code></pre>
<p>When I execute login.py -v -v, I get this:</p>
<pre><code>urllib2.urlopen(urllib2.Request('https://local.example.com/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit', wpSkipCookieCheck=1&wpPassword=XXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=Bot, {'Content-type': 'application/x-www-form-urlencoded', 'User-agent': 'PythonWikipediaBot/1.0'})):
(Redundant traceback info here)
urllib2.HTTPError: HTTP Error 401: Unauthorized
</code></pre>
<p>(I'm not sure why it has 'local.example.com/w' instead of '/mywiki'.)</p>
<p>I thought it might be trying to authenticate to example.com instead of example.com/wiki, so I changed the authenticate line to:</p>
<pre><code>authenticate['local.example.com/mywiki'] = ('user', 'pass')
</code></pre>
<p>But then I get an HTTP 401.2 error back from IIS:</p>
<blockquote>
<p>You do not have permission to view this directory or page using the credentials that you supplied because your Web browser is sending a WWW-Authenticate header field that the Web server is not configured to accept.</p>
</blockquote>
<p>Any help on how to get this working would be appreciated.</p>
<p><strong>Update</strong> After fixing my family file, it now says:</p>
<blockquote>
<p>Getting information for site mywiki:en
('http error', 401, 'Unauthorized', )
WARNING: Could not open '<a href="https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook" rel="nofollow">https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook</a>'. Maybe the server or your connection is down. Retrying in 1 minutes...</p>
</blockquote>
<p>I looked at the HTTP headers on a plain urllib2.urlopen call and it's using WWW-Authenticate: Negotiate WWW-Authenticate: NTLM. I'm guessing urllib2, and thus pywikipedia, don't support this?</p>
<p><strong>Update</strong> Added a tasty bounty for help in getting this to work. I can authenticate using python-ntlm. How do I integrate this into pywikipedia?</p>
| 3 | 2009-08-10T17:41:25Z | 1,257,603 | <p>I am guessing the problem you have is that the server expects basic authentication and you are not handling that in your client. Michael Foord wrote a good article about handling <a href="http://www.voidspace.org.uk/python/articles/authentication.shtml" rel="nofollow">basic authentication in Python</a>.</p>
<p>You did not provide enough information for me to be sure about this, so if that does not work, please provide some additional information, like network dump of you connection attempt.</p>
| 0 | 2009-08-10T22:31:59Z | [
"python",
"https",
"urllib2",
"http-authentication",
"pywikipedia"
] |
pywikipedia bot with https and http authentication | 1,256,213 | <p>I'm having trouble getting my bot to log in to a MediaWiki install on the intranet. I believe it is due to the HTTP authentication protecting the wiki.</p>
<p>Facts:</p>
<ol>
<li>The wiki root is: <a href="https://local.example.com/mywiki/" rel="nofollow">https://local.example.com/mywiki/</a></li>
<li>When visiting the wiki with a web browser, a popup comes up asking for enterprise credentials (I assume this is basic access authentication) </li>
</ol>
<p>This is what I have in my user-config.py:</p>
<pre><code>mylang = 'en'
family = 'mywiki'
usernames['mywiki']['en'] = u'Bot'
authenticate['local.example.com'] = ('user', 'pass')
</code></pre>
<p>This is what I have in mywiki_family.py:</p>
<pre><code># -*- coding: utf-8 -*-
import family, config
# The Wikimedia family that is known as mywiki
class Family(family.Family):
def __init__(self):
family.Family.__init__(self)
self.name = 'mywiki'
self.langs = { 'en' : 'local.example.com'}
def scriptpath(self, code):
return '/mywiki'
def version(self, code):
return '1.13.5'
def isPublic(self):
return False
def hostname(self, code):
return 'local.example.com'
def protocol(self, code):
return 'https'
def path(self, code):
return '/mywiki/index.php'
</code></pre>
<p>When I execute login.py -v -v, I get this:</p>
<pre><code>urllib2.urlopen(urllib2.Request('https://local.example.com/w/index.php?title=Special:Userlogin&useskin=monobook&action=submit', wpSkipCookieCheck=1&wpPassword=XXXX&wpDomain=&wpRemember=1&wpLoginattempt=Aanmelden%20%26%20Inschrijven&wpName=Bot, {'Content-type': 'application/x-www-form-urlencoded', 'User-agent': 'PythonWikipediaBot/1.0'})):
(Redundant traceback info here)
urllib2.HTTPError: HTTP Error 401: Unauthorized
</code></pre>
<p>(I'm not sure why it has 'local.example.com/w' instead of '/mywiki'.)</p>
<p>I thought it might be trying to authenticate to example.com instead of example.com/wiki, so I changed the authenticate line to:</p>
<pre><code>authenticate['local.example.com/mywiki'] = ('user', 'pass')
</code></pre>
<p>But then I get an HTTP 401.2 error back from IIS:</p>
<blockquote>
<p>You do not have permission to view this directory or page using the credentials that you supplied because your Web browser is sending a WWW-Authenticate header field that the Web server is not configured to accept.</p>
</blockquote>
<p>Any help on how to get this working would be appreciated.</p>
<p><strong>Update</strong> After fixing my family file, it now says:</p>
<blockquote>
<p>Getting information for site mywiki:en
('http error', 401, 'Unauthorized', )
WARNING: Could not open '<a href="https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook" rel="nofollow">https://local.example.com/mywiki/index.php?title=Non-existing_page&action=edit&useskin=monobook</a>'. Maybe the server or your connection is down. Retrying in 1 minutes...</p>
</blockquote>
<p>I looked at the HTTP headers on a plain urllib2.urlopen call and it's using WWW-Authenticate: Negotiate WWW-Authenticate: NTLM. I'm guessing urllib2, and thus pywikipedia, don't support this?</p>
<p><strong>Update</strong> Added a tasty bounty for help in getting this to work. I can authenticate using python-ntlm. How do I integrate this into pywikipedia?</p>
| 3 | 2009-08-10T17:41:25Z | 1,258,883 | <p>Well the fact that <code>login.py</code> tries accessing '\w' instead of your path shows that there is a family configuration issue.</p>
<p>Your code is indented strangely: is <code>scriptpath</code> a member of the new Family class? as in:</p>
<pre><code>class Family(family.Family):
def __init__(self):
family.Family.__init__(self)
self.name = 'mywiki'
self.langs = { 'en' : 'local.example.com'}
def scriptpath(self, code):
return '/mywiki'
def version(self, code):
return '1.13.5'
def isPublic(self):
return False
def hostname(self, code):
return 'local.example.com'
def protocol(self, code):
return 'https'
</code></pre>
<p>?</p>
<p>I believe that something is wrong with your family file. A good way to check is to do in a python console:</p>
<pre><code>import wikipedia
site = wikipedia.getSite('en', 'mywiki')
print site.login_address()
</code></pre>
<p>as long as the relative address is wrong, showing '/w' instead of '/mywiki', it means that the family file is still not configured correctly, and that the bot won't work :)</p>
<p><strong>Update</strong>: how to integrate ntlm in pywikipedia?</p>
<p>I just had a look at the basic example <a href="http://code.google.com/p/python-ntlm/" rel="nofollow">here</a>. I would integrate the code before that line in <code>login.py</code>:</p>
<pre><code>response = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers))
</code></pre>
<p>You want to write something like the following:</p>
<pre><code>from ntlm import HTTPNtlmAuthHandler
user = 'DOMAIN\User'
password = "Password"
url = self.site.protocol() + '://' + self.site.hostname()
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, user, password)
# create the NTLM authentication handler
auth_NTLM = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman)
# create and install the opener
opener = urllib2.build_opener(auth_NTLM)
urllib2.install_opener(opener)
response = urllib2.urlopen(urllib2.Request(self.site.protocol() + '://' + self.site.hostname() + address, data, headers))
</code></pre>
<p>I would test this and integrate it directly into pywikipedia codebase if only I had an available ntlm setup...</p>
<p>Whatever happens, please do not vanish with your solution: we're interested, at pywikipedia, by your solution :)</p>
| 4 | 2009-08-11T07:29:46Z | [
"python",
"https",
"urllib2",
"http-authentication",
"pywikipedia"
] |
Selecting related objects in django | 1,256,387 | <p>I have the following problem:</p>
<p>My application has 2 models:</p>
<p>1)</p>
<pre><code>class ActiveList(models.Model):
user = models.ForeignKey(User, unique=True)
updatedOn = models.DateTimeField(auto_now=True)
def __unicode__(self):
return self.user.username
'''
GameClaim class, to store game requests.
'''
class GameClaim(models.Model):
me = models.ForeignKey(ActiveList, related_name='gameclaim_me')
opponent = models.ForeignKey(ActiveList, related_name='gameclaim_opponent')
</code></pre>
<p>In my view I fetched all ActiveList objects with <code>all = ActiveList.objects.all()</code> and passed them to the template.</p>
<p>In the template I am looping through every item in the ActiveList and creating an XML file which is used by my client application.</p>
<p>The question is:</p>
<p>How can I query the info about the claims that one user (e.g. test, part of the ActiveList) made against the user currently being looped over?</p>
<p>user2, for example, is obtained like this:</p>
<pre><code>{% for item in activeList %}
{% endfor %}
</code></pre>
<p>user2 is the <code>item</code> in this case.</p>
| 0 | 2009-08-10T18:17:02Z | 1,256,513 | <p>I'm not sure I entirely understand your question, but I think the information you're looking for might be here: <a href="http://docs.djangoproject.com/en/dev/topics/db/queries/" rel="nofollow">http://docs.djangoproject.com/en/dev/topics/db/queries/</a></p>
<p>Perhaps you could clarify the question if you don't find an answer there?</p>
| 0 | 2009-08-10T18:42:15Z | [
"python",
"django"
] |
Selecting related objects in django | 1,256,387 | <p>I have the following problem:</p>
<p>My application has 2 models:</p>
<p>1)</p>
<pre><code>class ActiveList(models.Model):
user = models.ForeignKey(User, unique=True)
updatedOn = models.DateTimeField(auto_now=True)
def __unicode__(self):
return self.user.username
'''
GameClaim class, to store game requests.
'''
class GameClaim(models.Model):
me = models.ForeignKey(ActiveList, related_name='gameclaim_me')
opponent = models.ForeignKey(ActiveList, related_name='gameclaim_opponent')
</code></pre>
<p>In my view I fetched all ActiveList objects with <code>all = ActiveList.objects.all()</code> and passed them to the template.</p>
<p>In the template I am looping through every item in the ActiveList and creating an XML file which is used by my client application.</p>
<p>The question is:</p>
<p>How can I query the info about the claims that one user (e.g. test, part of the ActiveList) made against the user currently being looped over?</p>
<p>user2, for example, is obtained like this:</p>
<pre><code>{% for item in activeList %}
{% endfor %}
</code></pre>
<p>user2 is the <code>item</code> in this case.</p>
| 0 | 2009-08-10T18:17:02Z | 1,256,748 | <p>What you are looking at doing belongs more properly in the view than the template. I think you want something like:</p>
<pre><code>claimer = User.objects.get(name='test')
claimed_opponents = User.objects.filter(gameclaim_opponent__me__user=claimer)
</code></pre>
<p>Then you can pass those into your template, and operate on them directly. </p>
<p>You might also look at rethinking how your tables relate to one another. I think claims should probably go directly between users, and whether a given user is active should be external to the relationship. I would think a user should be able to claim a game with an inactive user, even if they have to wait for the user to reactivate before that game can begin.</p>
| 1 | 2009-08-10T19:28:33Z | [
"python",
"django"
] |
Checking whether a command produced output | 1,256,424 | <p>I am using the following call for executing the 'aspell' command on some strings in Python:</p>
<pre><code>r,w,e = popen2.popen3("echo " +str(m[i]) + " | aspell -l")
</code></pre>
<p>I want to test the success of the function by looking at the stdout file object <code>r</code>. If there is no output, the command is successful.</p>
<p>What is the best way to test that in Python?
Thanks in advance.</p>
| 0 | 2009-08-10T18:25:57Z | 1,256,449 | <p>Best is to use the <code>subprocess</code> module of the standard Python library, see <a href="http://docs.python.org/library/subprocess.html?highlight=subprocess#module-subprocess" rel="nofollow">here</a> -- <code>popen2</code> is old and not recommended.</p>
<p>Anyway, in your code, <code>if r.read(1):</code> is a fast way to test if there's any content in <code>r</code> (if you don't care about what that content might specifically be).</p>
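To make the <code>subprocess</code>-based approach concrete, here is a hedged, self-contained sketch. Since <code>aspell</code> may not be installed, a tiny Python filter stands in for <code>aspell -l</code> (which prints only the misspelled words, so empty stdout means success); the helper name <code>produces_output</code> is invented for this example:

```python
import subprocess
import sys

def produces_output(args, text):
    """Run a command, feed it `text` on stdin, and report whether it
    wrote anything (beyond whitespace) to stdout."""
    proc = subprocess.Popen(args, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = proc.communicate(text.encode())
    return bool(out.strip())

# Portable stand-in for `aspell -l`: echoes back any word longer than
# five characters, one per line.
filter_cmd = [sys.executable, "-c",
              "import sys; print('\\n'.join(w for w in sys.stdin.read().split() if len(w) > 5))"]

print(produces_output(filter_cmd, "short words only"))       # False
print(produces_output(filter_cmd, "one verylongword here"))  # True
```

The same `produces_output` check works unchanged with `["aspell", "-l"]` as the command, if aspell is on the PATH.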
| 2 | 2009-08-10T18:30:13Z | [
"python",
"unix",
"scripting"
] |
Checking whether a command produced output | 1,256,424 | <p>I am using the following call for executing the 'aspell' command on some strings in Python:</p>
<pre><code>r,w,e = popen2.popen3("echo " +str(m[i]) + " | aspell -l")
</code></pre>
<p>I want to test the success of the function by looking at the stdout file object <code>r</code>. If there is no output, the command is successful.</p>
<p>What is the best way to test that in Python?
Thanks in advance.</p>
| 0 | 2009-08-10T18:25:57Z | 1,256,483 | <p>Why don't you use <code>aspell -a</code>?</p>
<p>You could use subprocess as indicated by Alex, but keep the pipe open. Follow <a href="http://aspell.net/man-html/Through-A-Pipe.html#Through-A-Pipe" rel="nofollow">the directions</a> for using the pipe API of aspell, and it should be pretty efficient.</p>
<p>The upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.</p>
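A rough sketch of the long-lived pipe pattern described above. Since <code>aspell</code> itself may not be available here, a small Python child process that answers one line per line of input stands in for <code>aspell -a</code>:

```python
import subprocess
import sys

# Child that answers each input line with that line reversed -- a
# stand-in for a pipe-mode tool like `aspell -a` that responds to
# every line it is fed.
child_prog = ("import sys\n"
              "while True:\n"
              "    line = sys.stdin.readline()\n"
              "    if not line:\n"
              "        break\n"
              "    print(line.strip()[::-1], flush=True)\n")

proc = subprocess.Popen([sys.executable, "-u", "-c", child_prog],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)

def ask(word):
    """Send one line down the pipe and read back one response line."""
    proc.stdin.write(word + "\n")
    proc.stdin.flush()
    return proc.stdout.readline().strip()

first = ask("hello")
second = ask("pipe")
print(first, second)  # olleh epip

proc.stdin.close()
proc.wait()
```

Keeping one process open and flushing after each write avoids respawning the tool per word, which is exactly why the pipe mode is more efficient than one <code>echo | aspell -l</code> per string.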
| 2 | 2009-08-10T18:36:52Z | [
"python",
"unix",
"scripting"
] |
Need help in refactoring my python script | 1,256,704 | <p>I have a python script which processes a file line by line; if a line
matches a regex, it calls a function to handle it.</p>
<p>My question is: is there a better way to refactor my script? The
script works, but as it is, I need to keep indenting further to the right of the
editor as I add more and more regexes for my file.</p>
<p>Thank you for any ideas.
Now my code ends up like this:</p>
<pre>
for line in fi.readlines():
result= reg1.match(line)
if result:
handleReg1(result)
else:
result = reg2.match(line)
if result:
handleReg2(result)
else:
result = reg3.match(line)
if result:
handleReg3(result)
else:
result = reg4.match(line)
if result:
handleReg4(result)
else:
result = reg5.match(line)
if result:
handleReg5(result)
</pre>
| 3 | 2009-08-10T19:18:13Z | 1,256,723 | <p>I'd switch to using a data structure mapping regexes to functions. Something like:</p>
<pre><code>map = { reg1: handleReg1, reg2: handleReg2, etc }
</code></pre>
<p>Then you just loop through them:</p>
<pre><code>for reg, handler in map.items():
result = reg.match(line)
if result:
handler(result)
break
</code></pre>
<p>If you need the matches to happen in a particular order you'll need to use a list instead of a dictionary, but the principle is the same.</p>
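To make the dispatch-table idea runnable, here is a small sketch using an ordered list of (regex, handler) pairs; the patterns and handler names are invented for illustration, and the first match wins just like the original if/else chain:

```python
import re

def handle_error(m):
    return "error: " + m.group(1)

def handle_warn(m):
    return "warn: " + m.group(1)

# Ordered (regex, handler) pairs: earlier entries take priority.
handlers = [
    (re.compile(r"ERROR (\w+)"), handle_error),
    (re.compile(r"WARN (\w+)"), handle_warn),
]

def dispatch(line):
    for regex, handler in handlers:
        m = regex.match(line)
        if m:
            return handler(m)
    return None  # no pattern matched

print(dispatch("ERROR disk"))  # error: disk
print(dispatch("WARN cpu"))    # warn: cpu
print(dispatch("all good"))    # None
```

Adding a new case is now a one-line change to the list instead of another nested else branch.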
| 12 | 2009-08-10T19:24:29Z | [
"python"
] |
Need help in refactoring my python script | 1,256,704 | <p>I have a python script which processes a file line by line; if a line
matches a regex, it calls a function to handle it.</p>
<p>My question is: is there a better way to refactor my script? The
script works, but as it is, I need to keep indenting further to the right of the
editor as I add more and more regexes for my file.</p>
<p>Thank you for any ideas.
Now my code ends up like this:</p>
<pre>
for line in fi.readlines():
result= reg1.match(line)
if result:
handleReg1(result)
else:
result = reg2.match(line)
if result:
handleReg2(result)
else:
result = reg3.match(line)
if result:
handleReg3(result)
else:
result = reg4.match(line)
if result:
handleReg4(result)
else:
result = reg5.match(line)
if result:
handleReg5(result)
</pre>
| 3 | 2009-08-10T19:18:13Z | 1,256,730 | <p>Here's a trivial one:</p>
<pre><code>handlers = { reg1 : handleReg1, ... }
for line in fi.readlines():
for h in handlers:
x = h.match(line)
if x:
handlers[h](x)
</code></pre>
<p>If there could be a line that matches several regexps this code will be different from the code you pasted: it will call several handlers. Adding <code>break</code> won't help, because the regexps will be tried in a different order, so you'll end up calling the wrong one. So if this is the case you should iterate over list:</p>
<pre><code>handlers = [ (reg1, handleReg1), (reg2, handleReg2), ... ]
for line in fi.readlines():
for reg, handler in handlers:
x = reg.match(line)
if x:
handler(x)
break
</code></pre>
| 1 | 2009-08-10T19:25:32Z | [
"python"
] |
Need help in refactoring my python script | 1,256,704 | <p>I have a python script which processes a file line by line; if a line
matches a regex, it calls a function to handle it.</p>
<p>My question is: is there a better way to refactor my script? The
script works, but as it is, I need to keep indenting further to the right of the
editor as I add more and more regexes for my file.</p>
<p>Thank you for any ideas.
Now my code ends up like this:</p>
<pre>
for line in fi.readlines():
result= reg1.match(line)
if result:
handleReg1(result)
else:
result = reg2.match(line)
if result:
handleReg2(result)
else:
result = reg3.match(line)
if result:
handleReg3(result)
else:
result = reg4.match(line)
if result:
handleReg4(result)
else:
result = reg5.match(line)
if result:
handleReg5(result)
</pre>
| 3 | 2009-08-10T19:18:13Z | 1,256,782 | <p>An alternate approach that might work for you is to combine all the regexps into one giant regexp and use m.group() to detect which matched. My intuition says this should be faster, but I haven't tested it.</p>
<pre><code>>>> reg = re.compile('(cat)|(dog)|(apple)')
>>> m = reg.search('we like dogs')
>>> print m.group()
dog
>>> print m.groups()
(None, 'dog', None)
</code></pre>
<p>This gets complicated if the regexps you're testing against are themselves complicated or use match groups.</p>
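One way to keep the combined-regex approach manageable is named groups: <code>m.lastgroup</code> gives the name of the alternative that matched, which can key a handler lookup. A hedged sketch with invented handlers:

```python
import re

reg = re.compile(r"(?P<cat>cat)|(?P<dog>dog)|(?P<apple>apple)")

# Handlers keyed by group name; the names here are just for illustration.
handlers = {
    "cat": lambda m: "feline",
    "dog": lambda m: "canine",
    "apple": lambda m: "fruit",
}

def classify(text):
    m = reg.search(text)
    if m:
        # lastgroup is the name of the capturing group that matched.
        return handlers[m.lastgroup](m)
    return None

print(classify("we like dogs"))    # canine
print(classify("an apple a day"))  # fruit
print(classify("nothing here"))    # None
```

This sidesteps counting positional groups, though it still shares the caveat above: patterns containing their own capture groups need care.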
| 0 | 2009-08-10T19:37:16Z | [
"python"
] |