XML Parsing with Python and minidom
1,596,829
<p>I'm using Python (minidom) to parse an XML file and print a hierarchical structure that looks something like this (indentation is used here to show the significant hierarchical relationship):</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported </code></pre> <p>Instead, the program iterates multiple times over the nodes and produces the following, printing duplicate nodes. (Looking at the node list at each iteration, it's obvious why it does this but I can't seem to find a way to get the node list I'm looking for.)</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported Basic Features About This Software Platforms Supported Platforms Supported </code></pre> <p>Here is the XML source file:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;DOCMAP&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;My Document&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Overview&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Basic Features&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;About This Software&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Platforms Supported&lt;/Title&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/DOCMAP&gt; </code></pre> <p>Here is the Python program:</p> <pre><code>import xml.dom.minidom from xml.dom.minidom import Node dom = xml.dom.minidom.parse("test.xml") Topic=dom.getElementsByTagName('Topic') i = 0 for node in Topic: alist=node.getElementsByTagName('Title') for a in alist: Title= a.firstChild.data print Title </code></pre> <p>I could fix the problem by not nesting 'Topic' elements, by changing the lower-level topic names to something like 'SubTopic1' and 'SubTopic2'. 
But, I want to take advantage of built-in XML hierarchical structuring without needing different element names; it seems that I should be able to nest 'Topic' elements and that there should be some way to know which level 'Topic' I'm currently looking at.</p> <p>I've tried a number of different XPath functions without much success.</p>
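One way to get the level-aware traversal the question asks for, without renaming elements, is to derive each Topic's depth from its ancestors. This is an illustrative sketch, not code from the post: the XML is inlined (a subset of the question's file) and the helper names `topic_depth` and `outline` are invented for the example.

```python
import xml.dom.minidom

# Inlined subset of the question's XML so the demo is self-contained.
XML = """<?xml version="1.0" encoding="UTF-8"?>
<DOCMAP>
  <Topic Target="ALL"><Title>My Document</Title></Topic>
  <Topic Target="ALL"><Title>Overview</Title>
    <Topic Target="ALL"><Title>Basic Features</Title></Topic>
  </Topic>
</DOCMAP>"""

def topic_depth(node):
    # Count <Topic> ancestors to learn how deeply this Topic is nested.
    depth = 0
    parent = node.parentNode
    while parent is not None and parent.nodeType == parent.ELEMENT_NODE:
        if parent.tagName == 'Topic':
            depth += 1
        parent = parent.parentNode
    return depth

def outline(dom):
    # (depth, title) pairs in document order; getElementsByTagName still
    # returns every nested Topic, but the depth disambiguates the levels.
    items = []
    for topic in dom.getElementsByTagName('Topic'):
        title = topic.getElementsByTagName('Title')[0].firstChild.data
        items.append((topic_depth(topic), title))
    return items

dom = xml.dom.minidom.parseString(XML)
for depth, title in outline(dom):
    print('   ' * depth + title)
```

Each Topic is visited once, and the indentation comes from its ancestry rather than from avoiding duplicate visits.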
12
2009-10-20T19:36:07Z
14,255,287
<p>Recursive function:</p> <pre><code>import xml.dom.minidom def traverseTree(document, depth=0): for child in document.childNodes: if child.nodeType == child.TEXT_NODE: if document.tagName == 'Title': print depth*' ', child.data if child.nodeType == xml.dom.Node.ELEMENT_NODE: traverseTree(child, depth+1) filename = 'sample.xml' dom = xml.dom.minidom.parse(filename) traverseTree(dom.documentElement) </code></pre> <p>Your XML:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;DOCMAP&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;My Document&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Overview&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Basic Features&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;About This Software&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Platforms Supported&lt;/Title&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/DOCMAP&gt; </code></pre> <p>Your desired output:</p> <pre><code> $ python parse_sample.py My Document Overview Basic Features About This Software Platforms Supported </code></pre>
1
2013-01-10T10:08:00Z
[ "python", "xml", "minidom" ]
XML Parsing with Python and minidom
1,596,829
<p>I'm using Python (minidom) to parse an XML file and print a hierarchical structure that looks something like this (indentation is used here to show the significant hierarchical relationship):</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported </code></pre> <p>Instead, the program iterates multiple times over the nodes and produces the following, printing duplicate nodes. (Looking at the node list at each iteration, it's obvious why it does this but I can't seem to find a way to get the node list I'm looking for.)</p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported Basic Features About This Software Platforms Supported Platforms Supported </code></pre> <p>Here is the XML source file:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;DOCMAP&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;My Document&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Overview&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Basic Features&lt;/Title&gt; &lt;/Topic&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;About This Software&lt;/Title&gt; &lt;Topic Target="ALL"&gt; &lt;Title&gt;Platforms Supported&lt;/Title&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/Topic&gt; &lt;/DOCMAP&gt; </code></pre> <p>Here is the Python program:</p> <pre><code>import xml.dom.minidom from xml.dom.minidom import Node dom = xml.dom.minidom.parse("test.xml") Topic=dom.getElementsByTagName('Topic') i = 0 for node in Topic: alist=node.getElementsByTagName('Title') for a in alist: Title= a.firstChild.data print Title </code></pre> <p>I could fix the problem by not nesting 'Topic' elements, by changing the lower-level topic names to something like 'SubTopic1' and 'SubTopic2'. 
But, I want to take advantage of built-in XML hierarchical structuring without needing different element names; it seems that I should be able to nest 'Topic' elements and that there should be some way to know which level 'Topic' I'm currently looking at.</p> <p>I've tried a number of different XPath functions without much success.</p>
12
2009-10-20T19:36:07Z
21,410,794
<p>I think this can help:</p> <pre><code>import xml.dom.minidom f = open("file.xml", 'r') data = f.read() f.close() i = 0 doc = xml.dom.minidom.parseString(data) for topic in doc.getElementsByTagName('Topic'): title = doc.getElementsByTagName('Title')[i].firstChild.nodeValue print title i += 1 </code></pre> <p><strong>Output:</strong></p> <pre><code>My Document Overview Basic Features About This Software Platforms Supported </code></pre>
1
2014-01-28T16:07:43Z
[ "python", "xml", "minidom" ]
Read a file from server with ssh using python
1,596,963
<p>I am trying to read a file from a server using ssh from python. I am using paramiko to connect. I can connect to the server and run a command like 'cat filename' and get the data back from the server but some files I am trying to read are around 1 GB or more in size.</p> <p>How can I read the file on the server line by line using python?</p> <p>Additional Info: What I regularly do is run a 'cat filename' command and store the result in a variable and work off that. But since the file here is quite big, I am looking for a way to read a file line by line off the server.</p> <p>EDIT: I can read a bunch of data and split it into lines but the problem is that the data received in the buffer does not always include complete lines. For example, if the buffer has 300 lines, the last line may only be half of the line on the server and the next half would be fetched in the next call to the server. I want complete lines.</p> <p>EDIT 2: What command can I use to print lines in a file in a certain range? Like print the first 100 lines, then the next 100, and so on? This way the buffer will always contain complete lines.</p>
18
2009-10-20T20:04:00Z
1,597,006
<p>What do you mean by "line by line" - there are lots of data buffers between network hosts, and none of them are line-oriented. </p> <p>So you can read a bunch of data, then split it into lines at the near end.</p> <pre><code>ssh otherhost cat somefile | python process_standard_input.py | do_process_locally </code></pre> <p>Or you can have a process read a bunch of data at the far end, break it up, and format it line by line and send it to you.</p> <pre><code>scp process_standard_input.py otherhost ssh otherhost python process_standard_input.py somefile | do_process_locally </code></pre> <p>The only difference I would care about is what way reduces the volume of data over a limited network pipe. In your situation it may, or may not matter.</p> <p>There is nothing wrong in general with using <code>cat</code> over an SSH pipe to move gigabytes of data.</p>
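The partial-line problem from the question's first EDIT is exactly what the near-end splitting described in this answer has to handle: a read can end mid-line, so unfinished data must be buffered until its newline arrives. A small generator sketch (the name `complete_lines` is illustrative, not from the answer):

```python
def complete_lines(chunks):
    """Reassemble complete lines from arbitrarily split reads,
    buffering any trailing partial line until it is finished."""
    pending = ''
    for chunk in chunks:
        pending += chunk
        while '\n' in pending:
            line, pending = pending.split('\n', 1)
            yield line
    if pending:
        yield pending  # data after the last newline, if any

# Chunks deliberately split mid-line, as socket reads might be.
print(list(complete_lines(['alpha\nbe', 'ta\ngam', 'ma\n'])))
# ['alpha', 'beta', 'gamma']
```

Fed from successive network reads, this yields only whole lines no matter where the buffer boundaries fall.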
4
2009-10-20T20:11:21Z
[ "python", "file-io" ]
Read a file from server with ssh using python
1,596,963
<p>I am trying to read a file from a server using ssh from python. I am using paramiko to connect. I can connect to the server and run a command like 'cat filename' and get the data back from the server but some files I am trying to read are around 1 GB or more in size.</p> <p>How can I read the file on the server line by line using python?</p> <p>Additional Info: What I regularly do is run a 'cat filename' command and store the result in a variable and work off that. But since the file here is quite big, I am looking for a way to read a file line by line off the server.</p> <p>EDIT: I can read a bunch of data and split it into lines but the problem is that the data received in the buffer does not always include complete lines. For example, if the buffer has 300 lines, the last line may only be half of the line on the server and the next half would be fetched in the next call to the server. I want complete lines.</p> <p>EDIT 2: What command can I use to print lines in a file in a certain range? Like print the first 100 lines, then the next 100, and so on? This way the buffer will always contain complete lines.</p>
18
2009-10-20T20:04:00Z
1,597,009
<pre><code>#!/usr/bin/env python import paramiko import select client = paramiko.SSHClient() client.load_system_host_keys() client.connect('yourhost.com') transport = client.get_transport() channel = transport.open_session() channel.exec_command("cat /path/to/your/file") while True: rl, wl, xl = select.select([channel], [], [], 1.0) if len(rl) &gt; 0: # Must be stdout data = channel.recv(1024) if not data: break print data elif channel.exit_status_ready(): break client.close() </code></pre>
3
2009-10-20T20:12:03Z
[ "python", "file-io" ]
Read a file from server with ssh using python
1,596,963
<p>I am trying to read a file from a server using ssh from python. I am using paramiko to connect. I can connect to the server and run a command like 'cat filename' and get the data back from the server but some files I am trying to read are around 1 GB or more in size.</p> <p>How can I read the file on the server line by line using python?</p> <p>Additional Info: What I regularly do is run a 'cat filename' command and store the result in a variable and work off that. But since the file here is quite big, I am looking for a way to read a file line by line off the server.</p> <p>EDIT: I can read a bunch of data and split it into lines but the problem is that the data received in the buffer does not always include complete lines. For example, if the buffer has 300 lines, the last line may only be half of the line on the server and the next half would be fetched in the next call to the server. I want complete lines.</p> <p>EDIT 2: What command can I use to print lines in a file in a certain range? Like print the first 100 lines, then the next 100, and so on? This way the buffer will always contain complete lines.</p>
18
2009-10-20T20:04:00Z
1,597,750
<p><a href="https://github.com/paramiko/paramiko">Paramiko's</a> <code>SFTPClient</code> class allows you to get a file-like object to read data from a remote file in a Pythonic way.</p> <p>Assuming you have an open <code>SSHClient</code>:</p> <pre><code>sftp_client = ssh_client.open_sftp() remote_file = sftp_client.open('remote_filename') try: for line in remote_file: # process line finally: remote_file.close() </code></pre>
32
2009-10-20T22:53:20Z
[ "python", "file-io" ]
Read a file from server with ssh using python
1,596,963
<p>I am trying to read a file from a server using ssh from python. I am using paramiko to connect. I can connect to the server and run a command like 'cat filename' and get the data back from the server but some files I am trying to read are around 1 GB or more in size.</p> <p>How can I read the file on the server line by line using python?</p> <p>Additional Info: What I regularly do is run a 'cat filename' command and store the result in a variable and work off that. But since the file here is quite big, I am looking for a way to read a file line by line off the server.</p> <p>EDIT: I can read a bunch of data and split it into lines but the problem is that the data received in the buffer does not always include complete lines. For example, if the buffer has 300 lines, the last line may only be half of the line on the server and the next half would be fetched in the next call to the server. I want complete lines.</p> <p>EDIT 2: What command can I use to print lines in a file in a certain range? Like print the first 100 lines, then the next 100, and so on? This way the buffer will always contain complete lines.</p>
18
2009-10-20T20:04:00Z
1,598,554
<p>Here's an extension to <a href="http://stackoverflow.com/questions/1596963/read-a-file-from-server-with-ssh-using-python/1597750#1597750">@Matt Good's answer</a>:</p> <pre><code>from contextlib import closing from fabric.network import connect with closing(connect(user, host, port)) as ssh, \ closing(ssh.open_sftp()) as sftp, \ closing(sftp.open('remote_filename')) as file: for line in file: process(line) </code></pre>
7
2009-10-21T03:15:19Z
[ "python", "file-io" ]
Multithreaded Downloading Through Proxies In Python
1,597,093
<p>What would be the best library for multithreaded harvesting/downloading with multiple proxy support? I've looked at Tkinter, it looks good but there are so many, does anyone have a specific recommendation? Many thanks!</p>
0
2009-10-20T20:27:51Z
1,597,104
<p><a href="http://twistedmatrix.com/trac/" rel="nofollow">Twisted</a></p>
1
2009-10-20T20:28:57Z
[ "python", "proxy", "download", "multithreading", "harvest" ]
Multithreaded Downloading Through Proxies In Python
1,597,093
<p>What would be the best library for multithreaded harvesting/downloading with multiple proxy support? I've looked at Tkinter, it looks good but there are so many, does anyone have a specific recommendation? Many thanks!</p>
0
2009-10-20T20:27:51Z
1,597,142
<p>Is this something you can't just do by passing a URL to newly spawned threads and calling urllib2.urlopen in each one, or is there a more specific requirement?</p>
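The threads-plus-urlopen approach this answer suggests can be sketched roughly as below. Everything here is an assumption for illustration: the names `make_proxy_opener` and `harvest` are invented, the sketch targets Python 3's `urllib.request` rather than the `urllib2` of the era, and the fetcher is injected so the demo runs without touching the network. A real run would pass something like `lambda u: make_proxy_opener(proxy).open(u).read()`.

```python
import queue
import threading
import urllib.request

def make_proxy_opener(proxy_url):
    # Build an opener that routes HTTP requests through the given proxy.
    return urllib.request.build_opener(
        urllib.request.ProxyHandler({'http': proxy_url}))

def harvest(urls, fetch, num_threads=4):
    # Fan a list of URLs out over worker threads; `fetch` does one download.
    work = queue.Queue()
    for url in urls:
        work.put(url)
    results, lock = {}, threading.Lock()

    def worker():
        while True:
            try:
                url = work.get_nowait()
            except queue.Empty:
                return  # queue drained, thread is done
            body = fetch(url)
            with lock:
                results[url] = body

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Demo with a stand-in fetcher so no network access is needed.
print(sorted(harvest(['http://a', 'http://b'],
                     fetch=lambda u: 'body of ' + u).items()))
# [('http://a', 'body of http://a'), ('http://b', 'body of http://b')]
```

Rotating proxies would just mean handing each worker a different opener.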
0
2009-10-20T20:36:02Z
[ "python", "proxy", "download", "multithreading", "harvest" ]
Multithreaded Downloading Through Proxies In Python
1,597,093
<p>What would be the best library for multithreaded harvesting/downloading with multiple proxy support? I've looked at Tkinter, it looks good but there are so many, does anyone have a specific recommendation? Many thanks!</p>
0
2009-10-20T20:27:51Z
1,597,402
<p>Also take a look at <a href="http://scrapy.org/" rel="nofollow">http://scrapy.org/</a>, which is a scraping framework built on top of twisted. </p>
0
2009-10-20T21:24:04Z
[ "python", "proxy", "download", "multithreading", "harvest" ]
Python graceful fail on int() call?
1,597,114
<p>I have to make a rudimentary FSM in a class, and am writing it in Python. The assignment requires we read the transitions for the machine from a text file. So for example, an FSM with 3 states, each of which has 2 possible transitions, with possible inputs 'a' and 'b', would have a text file that looks like this:</p> <pre><code>2 # first line lists all final states 0 a 1 0 b 2 1 a 0 1 b 2 2 a 0 2 b 1 </code></pre> <p>I am trying to come up with a more Pythonic way to read a line at a time and convert the states to ints, while keeping the input vals as strings. Basically this is the idea:</p> <pre><code>self.finalStates = f.readline().strip("\n").split(" ") for line in f: current_state, input_val, next_state = [int(x) for x in line.strip("\n").split(" ")] </code></pre> <p>Of course, when it tries to int("a") it throws a ValueError. I know I could use a traditional loop and just catch the ValueError, but I was hoping for a more Pythonic way of doing this.</p>
1
2009-10-20T20:30:26Z
1,597,148
<p>You should really only be trying to parse the tokens that you expect to be integers</p> <pre><code>for line in f: tokens = line.split(" ") current_state, input_val, next_state = int(tokens[0]), tokens[1], int(tokens[2]) </code></pre> <p>Arguably more-readable:</p> <pre><code>for line in f: current_state, input_val, next_state = parseline(line) def parseline(line): tokens = line.split(" ") return (int(tokens[0]), tokens[1], int(tokens[2])) </code></pre>
12
2009-10-20T20:37:19Z
[ "python", "fsm" ]
Python graceful fail on int() call?
1,597,114
<p>I have to make a rudimentary FSM in a class, and am writing it in Python. The assignment requires we read the transitions for the machine from a text file. So for example, an FSM with 3 states, each of which has 2 possible transitions, with possible inputs 'a' and 'b', would have a text file that looks like this:</p> <pre><code>2 # first line lists all final states 0 a 1 0 b 2 1 a 0 1 b 2 2 a 0 2 b 1 </code></pre> <p>I am trying to come up with a more Pythonic way to read a line at a time and convert the states to ints, while keeping the input vals as strings. Basically this is the idea:</p> <pre><code>self.finalStates = f.readline().strip("\n").split(" ") for line in f: current_state, input_val, next_state = [int(x) for x in line.strip("\n").split(" ")] </code></pre> <p>Of course, when it tries to int("a") it throws a ValueError. I know I could use a traditional loop and just catch the ValueError, but I was hoping for a more Pythonic way of doing this.</p>
1
2009-10-20T20:30:26Z
1,597,155
<p>This is something very functional, but I'm not sure if it's "pythonic"... And it may cause some people to scratch their heads. You should really have a "lazy" zip() to do it this way if you have a large number of values:</p> <pre><code>types = [int, str, int] for line in f: current_state, input_val, next_state = multi_type(types, line) def multi_type(ts,xs): return [t(x) for (t,x) in zip(ts, xs.strip().split())] </code></pre> <p>Also the arguments you use for strip and split can be omitted, because the defaults will work here.</p> <p>Edit: reformatted - I wouldn't use it as one long line in real code.</p>
5
2009-10-20T20:38:03Z
[ "python", "fsm" ]
Python graceful fail on int() call?
1,597,114
<p>I have to make a rudimentary FSM in a class, and am writing it in Python. The assignment requires we read the transitions for the machine from a text file. So for example, an FSM with 3 states, each of which has 2 possible transitions, with possible inputs 'a' and 'b', would have a text file that looks like this:</p> <pre><code>2 # first line lists all final states 0 a 1 0 b 2 1 a 0 1 b 2 2 a 0 2 b 1 </code></pre> <p>I am trying to come up with a more Pythonic way to read a line at a time and convert the states to ints, while keeping the input vals as strings. Basically this is the idea:</p> <pre><code>self.finalStates = f.readline().strip("\n").split(" ") for line in f: current_state, input_val, next_state = [int(x) for x in line.strip("\n").split(" ")] </code></pre> <p>Of course, when it tries to int("a") it throws a ValueError. I know I could use a traditional loop and just catch the ValueError, but I was hoping for a more Pythonic way of doing this.</p>
1
2009-10-20T20:30:26Z
1,597,785
<pre><code>self.finalStates = [int(state) for state in f.readline().split()] for line in f: words = line.split() current_state, input_val, next_state = int(words[0]), words[1], int(words[2]) # now do something with values </code></pre> <p>Note that you can shorten <code>line.strip("\n").split(" ")</code> down to just <code>line.split()</code>. The default behavior of <code>str.split()</code> is to split on any white space, and it will return a set of words that have no leading or trailing white space of any sort.</p> <p>If you are converting the states to <code>int</code> in the loop, I presume you want the <code>finalStates</code> to be <code>int</code> as well.</p>
0
2009-10-20T23:03:33Z
[ "python", "fsm" ]
Python graceful fail on int() call?
1,597,114
<p>I have to make a rudimentary FSM in a class, and am writing it in Python. The assignment requires we read the transitions for the machine from a text file. So for example, an FSM with 3 states, each of which has 2 possible transitions, with possible inputs 'a' and 'b', would have a text file that looks like this:</p> <pre><code>2 # first line lists all final states 0 a 1 0 b 2 1 a 0 1 b 2 2 a 0 2 b 1 </code></pre> <p>I am trying to come up with a more Pythonic way to read a line at a time and convert the states to ints, while keeping the input vals as strings. Basically this is the idea:</p> <pre><code>self.finalStates = f.readline().strip("\n").split(" ") for line in f: current_state, input_val, next_state = [int(x) for x in line.strip("\n").split(" ")] </code></pre> <p>Of course, when it tries to int("a") it throws a ValueError. I know I could use a traditional loop and just catch the ValueError, but I was hoping for a more Pythonic way of doing this.</p>
1
2009-10-20T20:30:26Z
1,598,326
<p>You got excellent answers that match your problem well. However, in other cases, there may indeed be situations where you want to convert some fields to <code>int</code> if feasible (i.e. if they're all digits) and leave them as <code>str</code> otherwise (as the title of your question suggests) <em>without</em> knowing in advance which fields are ints and which ones are not.</p> <p>The traditional Python approach is try/except...:</p> <pre><code>def maybeint(s): try: return int(s) except ValueError: return s </code></pre> <p>...which you need to wrap into a function as there's no way to do a try/except in an expression (e.g. in a list comprehension). So, you'd use it like:</p> <pre><code>several_fields = [maybeint(x) for x in line.split()] </code></pre> <p>However, it <em>is</em> possible to do this specific task inline, if you prefer:</p> <pre><code>several_fields = [(int(x) if x.isdigit() else x) for x in line.split()] </code></pre> <p>the <code>if</code>/<code>else</code> "ternary operator" looks a bit strange, but one can get used to it;-); and the <code>isdigit</code> method of a string gives True if the string is nonempty and only has digits.</p> <p>To repeat, this is <em>not</em> what you should do in your specific case, where you know the specific <code>int</code>-<code>str</code>-<code>int</code> pattern of input types; but it might be appropriate in a more general situation where you don't have such precise information in advance!</p>
1
2009-10-21T01:44:49Z
[ "python", "fsm" ]
cElementTree invalid encoding problem
1,597,604
<p>I'm encoding challenged, so this is probably simple, but I'm stuck.</p> <p>I'm trying to parse an XML file emailed to the <a href="http://en.wikipedia.org/wiki/Google_App_Engine" rel="nofollow">App Engine</a>'s new receive mail functionality. First, I just pasted the XML into the body of the message, and it parsed fine with CElementTree. Then I changed to using an attachment, and parsing it with CElementTree produces this error:</p> <blockquote> <p>SyntaxError: not well-formed (invalid token): line 3, column 10</p> </blockquote> <p>I've output the XML from both emailing in the body and as an attachment, and they look the same to me. I assume pasting it in the box is changing the encoding in a way that attaching the file is not, but I don't know how to fix it.</p> <p>The first few lines look this:</p> <pre><code>&lt;?xml version="1.0" standalone="yes"?&gt; &lt;gpx xmlns="http://www.topografix.com/GPX/1/0" version="1.0" creator="TopoFusion 2.85" xmlns:TopoFusion="http://www.TopoFusion.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.topografix.com/GPX/1/0 http://www.topografix.com/GPX/1/0/gpx.xsd http://www.TopoFusion.com http://www.TopoFusion.com/topofusion.xsd"&gt; &lt;name&gt;&lt;![CDATA[Pacific Crest Trail section K hike 4]]&gt;&lt;/name&gt;&lt;desc&gt;&lt;![CDATA[Pacific Crest Trail section K hike 4. Five Lakes to Old Highway 40 near Donner. As described in Day Hikes on the PCT California edition by George &amp; Patricia Semb. See pages 150-152 for access and exit trailheads. 
GPS data provided by the USFS]]&gt;&lt;/desc&gt;&lt;author&gt;&lt;![CDATA[MikeOnTheTrail]]&gt;&lt;/author&gt;&lt;email&gt;&lt;![CDATA[michaelonthetrail@yahoo.com]]&gt;&lt;/email&gt;&lt;url&gt;&lt;![CDATA[http://www.pcta.org]]&gt;&lt;/url&gt; &lt;urlname&gt;&lt;![CDATA[Pacific Crest Trail Association Homepage]]&gt;&lt;/urlname&gt; &lt;time&gt;2006-07-08T02:16:05Z&lt;/time&gt; </code></pre> <p>Edited to add more info:</p> <p>I have a <a href="http://en.wikipedia.org/wiki/GPS_eXchange_Format" rel="nofollow">GPX</a> file that's a few thousand lines. If I paste it into the body of the message I can parse it correctly, like so:</p> <pre><code> gpxcontent = message.bodies(content_type='text/plain') for x in gpxcontent: gpxcontent = x[1].decode() for event, elem in ET.iterparse(StringIO.StringIO(gpxcontent), events=("start", "start-ns")): </code></pre> <p>If I attach it to the mail as an attachment, using Gmail. And then extract it like so:</p> <pre><code>if isinstance(message.attachments, tuple): attachments = [message.attachments] gpxcontent = attachments[0][3].decode() for event, elem in ET.iterparse(StringIO.StringIO(gpxcontent), events=("start", "start-ns")): </code></pre> <p>I get the error above. Line 3 column 10 seems to be the start of ![CDATA on the third line.</p>
0
2009-10-20T22:07:19Z
1,598,839
<p>Ah, nevermind. There's a bug in App Engine that is calling lower() on all attachments when you decode them. This made the CDATA string invalid. </p> <p>Here's a link to the bug report: <a href="http://code.google.com/p/googleappengine/issues/detail?id=2289#c2" rel="nofollow">http://code.google.com/p/googleappengine/issues/detail?id=2289#c2</a></p>
0
2009-10-21T05:10:42Z
[ "python", "xml" ]
Replace strings in files by Python
1,597,649
<p><strong>How can you replace the match with the given replacement recursively in a given directory and its subdirectories?</strong></p> <h2>Pseudo-code</h2> <pre><code>import os import re from os.path import walk for root, dirs, files in os.walk("/home/noa/Desktop/codes"): for name in dirs: re.search("dbname=noa user=noa", "dbname=masi user=masi") // I am trying to replace here a given match in a file </code></pre>
10
2009-10-20T22:17:55Z
1,597,739
<p>Do you really need regular expressions?</p> <pre><code>import os def recursive_replace( root, pattern, replace ): for dir, subdirs, names in os.walk( root ): for name in names: path = os.path.join( dir, name ) text = open( path ).read() if pattern in text: open( path, 'w' ).write( text.replace( pattern, replace ) ) </code></pre>
9
2009-10-20T22:49:27Z
[ "python", "regex", "search", "replace", "operating-system" ]
Replace strings in files by Python
1,597,649
<p><strong>How can you replace the match with the given replacement recursively in a given directory and its subdirectories?</strong></p> <h2>Pseudo-code</h2> <pre><code>import os import re from os.path import walk for root, dirs, files in os.walk("/home/noa/Desktop/codes"): for name in dirs: re.search("dbname=noa user=noa", "dbname=masi user=masi") // I am trying to replace here a given match in a file </code></pre>
10
2009-10-20T22:17:55Z
1,597,755
<p>Put all this code into a file called <code>mass_replace</code>. Under Linux or Mac OS X, you can do <code>chmod +x mass_replace</code> and then just run this. Under Windows, you can run it with <code>python mass_replace</code> followed by the appropriate arguments.</p> <pre><code>#!/usr/bin/python import os import re import sys # list of extensions to replace DEFAULT_REPLACE_EXTENSIONS = None # example: uncomment next line to only replace *.c, *.h, and/or *.txt # DEFAULT_REPLACE_EXTENSIONS = (".c", ".h", ".txt") def try_to_replace(fname, replace_extensions=DEFAULT_REPLACE_EXTENSIONS): if replace_extensions: return fname.lower().endswith(replace_extensions) return True def file_replace(fname, pat, s_after): # first, see if the pattern is even in the file. with open(fname) as f: if not any(re.search(pat, line) for line in f): return # pattern does not occur in file so we are done. # pattern is in the file, so perform replace operation. with open(fname) as f: out_fname = fname + ".tmp" out = open(out_fname, "w") for line in f: out.write(re.sub(pat, s_after, line)) out.close() os.rename(out_fname, fname) def mass_replace(dir_name, s_before, s_after, replace_extensions=DEFAULT_REPLACE_EXTENSIONS): pat = re.compile(s_before) for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: if try_to_replace(fname, replace_extensions): fullname = os.path.join(dirpath, fname) file_replace(fullname, pat, s_after) if len(sys.argv) != 4: u = "Usage: mass_replace &lt;dir_name&gt; &lt;string_before&gt; &lt;string_after&gt;\n" sys.stderr.write(u) sys.exit(1) mass_replace(sys.argv[1], sys.argv[2], sys.argv[3]) </code></pre> <p>EDIT: I have changed the above code from the original answer. There are several changes. 
First, <code>mass_replace()</code> now calls <code>re.compile()</code> to pre-compile the search pattern; second, to check what extension the file has, we now pass in a tuple of file extensions to <code>.endswith()</code> rather than calling <code>.endswith()</code> three times; third, it now uses the <code>with</code> statement available in recent versions of Python; and finally, <code>file_replace()</code> now checks to see if the pattern is found within the file, and doesn't rewrite the file if the pattern is not found. (The old version would rewrite every file, changing the timestamps even if the output file was identical to the input file; this was inelegant.)</p> <p>EDIT: I changed this to default to replacing every file, but with one line you can edit to limit it to particular extensions. I think replacing every file is a more useful out-of-the-box default. This could be extended with a list of extensions or filenames not to touch, options to make it case insensitive, etc.</p> <p>EDIT: In a comment, @asciimo pointed out a bug. I edited this to fix the bug. <code>str.endswith()</code> is documented to accept a tuple of strings to try, but not a list. Fixed. Also, I made a couple of the functions accept an optional argument to let you pass in a tuple of extensions; it should be pretty easy to modify this to accept a command-line argument to specify which extensions.</p>
19
2009-10-20T22:54:46Z
[ "python", "regex", "search", "replace", "operating-system" ]
Replace strings in files by Python
1,597,649
<p><strong>How can you replace the match with the given replacement recursively in a given directory and its subdirectories?</strong></p> <h2>Pseudo-code</h2> <pre><code>import os import re from os.path import walk for root, dirs, files in os.walk("/home/noa/Desktop/codes"): for name in dirs: re.search("dbname=noa user=noa", "dbname=masi user=masi") // I am trying to replace here a given match in a file </code></pre>
10
2009-10-20T22:17:55Z
1,598,317
<p>Of course, if you just want to get it done without coding it up, use find and xargs:</p> <pre><code>find /home/noa/Desktop/codes -type f -print0 | \ xargs -0 sed --in-place "s/dbname=noa user=noa/dbname=masi user=masi/" </code></pre> <p>(And you could likely do this with find's -exec or something as well, but I prefer xargs.)</p>
3
2009-10-21T01:39:08Z
[ "python", "regex", "search", "replace", "operating-system" ]
Replace strings in files by Python
1,597,649
<p><strong>How can you replace the match with the given replacement recursively in a given directory and its subdirectories?</strong></p> <h2>Pseudo-code</h2> <pre><code>import os import re from os.path import walk for root, dirs, files in os.walk("/home/noa/Desktop/codes"): for name in dirs: re.search("dbname=noa user=noa", "dbname=masi user=masi") // I am trying to replace here a given match in a file </code></pre>
10
2009-10-20T22:17:55Z
6,258,249
<p>This is how I would find and replace strings in files using Python. It is a simple little function that recursively searches a directory for a string and replaces it with another. You can also limit it to files with a certain extension, as in the example below.</p> <pre><code>import os, fnmatch

def findReplace(directory, find, replace, filePattern):
    for path, dirs, files in os.walk(os.path.abspath(directory)):
        for filename in fnmatch.filter(files, filePattern):
            filepath = os.path.join(path, filename)
            with open(filepath) as f:
                s = f.read()
            s = s.replace(find, replace)
            with open(filepath, "w") as f:
                f.write(s)
</code></pre> <p>This allows you to do something like:</p> <pre><code>findReplace("some_dir", "find this", "replace with this", "*.txt")
</code></pre>
2
2011-06-06T21:30:17Z
[ "python", "regex", "search", "replace", "operating-system" ]
Replace strings in files by Python
1,597,649
<p><strong>How can you replace the match with the given replacement recursively in a given directory and its subdirectories?</strong></p> <h2>Pseudo-code</h2> <pre><code>import os import re from os.path import walk for root, dirs, files in os.walk("/home/noa/Desktop/codes"): for name in dirs: re.search("dbname=noa user=noa", "dbname=masi user=masi") // I am trying to replace here a given match in a file </code></pre>
10
2009-10-20T22:17:55Z
35,413,705
<p>This should work (assuming <code>directory</code> and <code>filePattern</code> are defined, as in the other answers). Note that each walked file is read, substituted, and written back in place, rather than appending to a single hard-coded file:</p> <pre><code>import re, os
import fnmatch

for path, dirs, files in os.walk(os.path.abspath(directory)):
    for filename in fnmatch.filter(files, filePattern):
        filepath = os.path.join(path, filename)
        with open(filepath) as readf:
            text = readf.read()
        text = re.sub(r"dbname=noa user=noa", "dbname=masi user=masi", text)
        with open(filepath, "w") as out:
            out.write(text)
</code></pre>
1
2016-02-15T15:57:07Z
[ "python", "regex", "search", "replace", "operating-system" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,597,795
<p>You could shorten the for-loop to this:</p> <pre><code>for row in reader:
    adDict.setdefault(row[0], set()).add(row[1])
</code></pre>
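In case the `setdefault` semantics aren't obvious, a small self-contained demonstration (the key/value strings here are just illustrative):

```python
adDict = {}
# setdefault returns the value already stored under the key if present;
# otherwise it stores the supplied default (an empty set) and returns that.
adDict.setdefault("ad1", set()).add("userA")
adDict.setdefault("ad1", set()).add("userB")  # second call reuses the same set
adDict.setdefault("ad2", set()).add("userA")
```

Note that `set()` is evaluated on every call even when the key already exists, which is why `defaultdict(set)` is often preferred for hot loops.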
7
2009-10-20T23:06:58Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,597,796
<p>The only changes I'd make are extracting multiple elements from the reader at once, and using string formatting for print statements.</p> <pre><code>import csv

adDict = {}
reader = csv.reader(open("some.csv"), delimiter=' ')

# Can extract multiple elements from a list in the iteration statement:
for adId, userId in reader:
    if adId in adDict:
        adDict[adId].add(userId)
    else:
        adDict[adId] = set(userId)

for key, value in adDict.items():
    # I believe this gives you more control over how things are formatted:
    print ("%s, %d" % (key, len(value)))
</code></pre>
1
2009-10-20T23:07:59Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,597,800
<p>Congratulations, your code is very nice. There are a few little tricks you could use to make it shorter/simpler.</p> <p>There is a nifty object type called defaultdict which is provided by the collections module. Instead of having to check if adDict has an adId key, you can set up a defaultdict which acts like a regular dict, except that it automatically provides you with an empty set() when there is no key. So you can change</p> <pre><code>if ( adId in adDict ):
    adDict[adId].add(userId)
else:
    adDict[adId] = set(userId)
</code></pre> <p>to simply</p> <pre><code>adDict[adId].add(userId)
</code></pre> <p>Also, instead of </p> <pre><code>for row in reader:
    adId = row[0]
    userId = row[1]
</code></pre> <p>you could shorten that to</p> <pre><code>for adId, userId in reader:
</code></pre> <p><strong>Edit:</strong> As Parker kindly points out in the comments, </p> <pre><code>for key, value in adDict.iteritems():
</code></pre> <p>is the most efficient way to iterate over a dict, if you are going to use both the key and value in the loop. In Python3, you can use</p> <pre><code>for key, value in adDict.items():
</code></pre> <p>since items() returns an iterator. </p> <pre><code>#!/usr/bin/env python
import csv
from collections import defaultdict

adDict = defaultdict(set)
reader = csv.reader(open("some.csv"), delimiter=' ')
for adId, userId in reader:
    adDict[adId].add(userId)
for key, value in adDict.iteritems():
    print (key, ',', len(value))
</code></pre>
18
2009-10-20T23:08:26Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,597,803
<p>Just a few bits and pieces:</p> <p>For extracting the row list into variables:</p> <pre><code>adId, userId = row
</code></pre> <p>The if statement does not need braces:</p> <pre><code>if adId in adDict:
</code></pre> <p>You <em>could</em> use exceptions to handle a missing key in the dict, but both ways work well, e.g.:</p> <pre><code>try:
    adDict[adId].add(userId)
except KeyError:
    adDict[adId] = set([userId])  # set([userId]), not set(userId), to get a one-item set
</code></pre>
1
2009-10-20T23:09:01Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,597,809
<p>Instead of:</p> <pre><code>for row in reader:
    adId = row[0]
    userId = row[1]
</code></pre> <p>Use automatic sequence unpacking:</p> <pre><code>for (adId, userId) in reader:
</code></pre> <p>In:</p> <pre><code>if ( adId in adDict ):
</code></pre> <p>You don't need parentheses.</p> <p>Instead of:</p> <pre><code>if ( adId in adDict ):
    adDict[adId].add(userId)
else:
    adDict[adId] = set(userId)
</code></pre> <p>Use <code>defaultdict</code>:</p> <pre><code>from collections import defaultdict

adDict = defaultdict(set)
# ...
adDict[adId].add(userId)
</code></pre> <p>Or, if you're not allowed to use other modules by your professor, use <code>setdefault()</code>:</p> <pre><code>adDict.setdefault(adId, set()).add(userId)
</code></pre> <p>When printing:</p> <pre><code>for key, value in adDict.items():
    print (key, ',', len(value))
</code></pre> <p>Using string formatting might be easier to format:</p> <pre><code>print "%s,%s" % (key, len(value))
</code></pre> <p>Or, if you're using Python 3:</p> <pre><code>print ("{0},{1}".format(key, len(value)))
</code></pre>
3
2009-10-20T23:10:51Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,597,838
<p>Since you only have a space-delimited file, I'd do:</p> <pre><code>from __future__ import with_statement
from collections import defaultdict

ads = defaultdict(set)
with open("some.csv") as f:
    # split() with no argument also strips the trailing newline
    for ad, user in (line.split() for line in f):
        ads[ad].add(user)

for ad in ads:
    print "%s, %s" % (ad, len(ads[ad]))
</code></pre>
3
2009-10-20T23:18:40Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,598,247
<p>the line of code:</p> <pre><code>adDict[adId] = set(userId) </code></pre> <p>is unlikely to do what you want -- it will treat string <code>userId</code> as a sequence of letters, so for example if <code>userId</code> was <code>aleax</code> you'd get a set with four items, just like, say, <code>set(['a', 'l', 'e', 'x'])</code>. Later, an <code>.add(userId)</code> when <code>userId</code> is <code>aleax</code> again will add a fifth item, the string <code>'aleax'</code>, because <code>.add</code> (differently from the set initializer, which takes an iterable as its argument) takes a single item as its argument.</p> <p>To make a set with a single item, use <code>set([userId])</code> instead.</p> <p>This is a reasonably frequent bug so I wanted to explain it clearly. That being said, <code>defaultdict</code> as suggested in other answers is clearly the right approach (avoid <code>setdefault</code>, that was never a good design and doesn't have good performance either, as well as being pretty murky).</p> <p>I would also avoid the kinda-overkill of <code>csv</code> in favor of a simple loop with a .split and .strip on each line...</p>
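The difference described above can be seen concretely (using the same illustrative value):

```python
userId = 'aleax'

# Passing a string to set() iterates over it one character at a time:
chars = set(userId)        # {'a', 'l', 'e', 'x'} -- four items, duplicates collapsed

# Passing a one-element list builds the single-item set that was intended:
whole = set([userId])      # {'aleax'}

# .add takes a single item, so this puts the full string in as a fifth element:
chars.add(userId)
```

This is exactly why mixing `set(userId)` at creation with `.add(userId)` later silently produces a set of letters plus whole strings.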
10
2009-10-21T01:13:15Z
[ "dictionary", "set", "python" ]
Is there a better, pythonic way to do this?
1,597,764
<p>This is my first python program - </p> <p>Requirement: Read a file consisting of {adId UserId} in each line. For each adId, print the number of unique userIds.</p> <p>Here is my code, put together from reading the python docs. Could you give me feedback on how I can write this in more python-ish way?</p> <p>CODE :</p> <pre><code>import csv adDict = {} reader = csv.reader(open("some.csv"), delimiter=' ') for row in reader: adId = row[0] userId = row[1] if ( adId in adDict ): adDict[adId].add(userId) else: adDict[adId] = set(userId) for key, value in adDict.items(): print (key, ',' , len(value)) </code></pre> <p>Thanks.</p>
10
2009-10-20T22:58:28Z
1,598,266
<p>There are some great answers in here.</p> <p>One trick I particularly like is to make my code easier to reuse in future like so </p> <pre><code>import csv

def parse_my_file(file_name):
    # some existing code goes here
    return adDict

if __name__ == "__main__":
    # this gets executed if this .py file is run directly, rather than imported
    adDict = parse_my_file("some.csv")
    for key, value in adDict.items():
        print (key, ',', len(value))
</code></pre> <p>Now you can import your csv parser from another module and get programmatic access to adDict. </p>
3
2009-10-21T01:21:36Z
[ "dictionary", "set", "python" ]
How to store dynamically generated HTML form elements from Javascript in Python?
1,597,766
<p>I have an HTML form that a user can add an arbitrary amount of input fields to through jQuery. The user is also able to remove any input field from any position. My current implementation is that each new input box has an id of "field[i]" so when the form is posted it is processed in Python as field1, field2 field3, ...field[n]</p> <pre><code>i = 0 while self.request.get("field" + str(i)): temp = self.request.get("field" + str(i)) someList.append(temp) i += 1 </code></pre> <p>(Assume the JavaScript handles removing of deleted elements and sorts the field names prior to post for simplicity)</p> <p>This approach is working for me, but is there a better way to handle this situation? I feel like this is a very brute force method.</p> <p>Platform information: Python 2.5.4; JavaScript; DHTML; jquery; Google App Engine</p> <p><strong>Edit:</strong> It appears that self.request.get_all() was the solution: <a href="http://code.google.com/appengine/docs/python/tools/webapp/requestclass.html#Request%5Fget%5Fall" rel="nofollow">GAE Doc</a></p>
2
2009-10-20T22:58:39Z
1,597,786
<p>You could serialize the data with javascript and pass it in as json. Then you would just have a dictionary to work with in python. You would need something like <a href="http://code.google.com/p/simplejson/" rel="nofollow">simplejson</a>, of course</p>
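On the server side, parsing the serialized form might look like the sketch below. The `"fields"` key and the payload shape are purely illustrative assumptions; also note the standard-library `json` module only exists from Python 2.6 on, so on Python 2.5 / App Engine you would use `simplejson`, which exposes the same `loads`/`dumps` API.

```python
import json  # on Python 2.5, substitute: import simplejson as json

# What the jQuery side might POST after serializing the dynamic inputs:
posted_body = '{"fields": ["first value", "second value", "third value"]}'

data = json.loads(posted_body)
someList = data["fields"]  # a plain Python list -- no field0/field1/... probing
```

This replaces the `while self.request.get("field" + str(i))` loop entirely: however many inputs the user added or removed, one `loads()` call recovers them all.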
1
2009-10-20T23:03:38Z
[ "javascript", "jquery", "python", "google-app-engine", "dhtml" ]
Printing basenames by Python
1,598,013
<p><strong>How can you print the basenames of files by Python in the main folder and subfolders?</strong></p> <p><strong>My attempt</strong> </p> <pre><code>#!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename &lt;dir_name&gt;\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) </code></pre> <p><hr /></p> <p><strong>1st problem solved with the off-by-one-error</strong></p> <p><hr /></p> <p>2nd problem: The code gives me the output unsuccessfully</p> <pre><code>man.aux about_8php.tex refman.pdf successful_notice.php ... </code></pre> <p>I expect to get as an output</p> <pre><code> aux tex pdf php ... </code></pre>
1
2009-10-21T00:03:11Z
1,598,036
<pre><code>if len(sys.argv) != 1: </code></pre> <p>I think you mean <code>2</code>. <code>argv[0]</code> is the name of the script; <code>argv[1]</code> is the first argument, etc.</p>
3
2009-10-21T00:09:06Z
[ "python", "path" ]
Printing basenames by Python
1,598,013
<p><strong>How can you print the basenames of files by Python in the main folder and subfolders?</strong></p> <p><strong>My attempt</strong> </p> <pre><code>#!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename &lt;dir_name&gt;\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) </code></pre> <p><hr /></p> <p><strong>1st problem solved with the off-by-one-error</strong></p> <p><hr /></p> <p>2nd problem: The code gives me the output unsuccessfully</p> <pre><code>man.aux about_8php.tex refman.pdf successful_notice.php ... </code></pre> <p>I expect to get as an output</p> <pre><code> aux tex pdf php ... </code></pre>
1
2009-10-21T00:03:11Z
1,598,038
<p>The length of <code>sys.argv</code> is 2 because you have an item at index 0 (the program name) and an item at index 1 (the first argument to the program).</p> <p>Changing your program to compare against 2 appears to give the correct results, without making any other changes.</p>
1
2009-10-21T00:09:18Z
[ "python", "path" ]
Printing basenames by Python
1,598,013
<p><strong>How can you print the basenames of files by Python in the main folder and subfolders?</strong></p> <p><strong>My attempt</strong> </p> <pre><code>#!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename &lt;dir_name&gt;\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) </code></pre> <p><hr /></p> <p><strong>1st problem solved with the off-by-one-error</strong></p> <p><hr /></p> <p>2nd problem: The code gives me the output unsuccessfully</p> <pre><code>man.aux about_8php.tex refman.pdf successful_notice.php ... </code></pre> <p>I expect to get as an output</p> <pre><code> aux tex pdf php ... </code></pre>
1
2009-10-21T00:03:11Z
1,598,039
<p><code>argv</code> typically includes the name of the program/script invoked as the first element, and thus the length when passing it a single argument is actually 2, not 1.</p>
1
2009-10-21T00:09:23Z
[ "python", "path" ]
Printing basenames by Python
1,598,013
<p><strong>How can you print the basenames of files by Python in the main folder and subfolders?</strong></p> <p><strong>My attempt</strong> </p> <pre><code>#!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename &lt;dir_name&gt;\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) </code></pre> <p><hr /></p> <p><strong>1st problem solved with the off-by-one-error</strong></p> <p><hr /></p> <p>2nd problem: The code gives me the output unsuccessfully</p> <pre><code>man.aux about_8php.tex refman.pdf successful_notice.php ... </code></pre> <p>I expect to get as an output</p> <pre><code> aux tex pdf php ... </code></pre>
1
2009-10-21T00:03:11Z
1,598,065
<p>Let me explain the debugging methodology a little bit.</p> <p>As you've encountered the situation in which <code>len(sys.argv) != 1</code>, you should ask yourself: "What is the actual value of <code>len(sys.argv)</code>? Why is it so?". The answers are:</p> <pre><code>&gt;&gt;&gt; len(sys.argv)
2
&gt;&gt;&gt; sys.argv
['/tmp/basename.py', '/path/to/home/Desktop/pgCodes/']
</code></pre> <p>I guess now the problem should become more clear.</p> <p><strong>Edit:</strong> To address your second question, the things you are interested in are called <strong>file extensions</strong> or <strong>suffixes</strong>, not basenames. Here is the complete solution:</p> <pre><code>import sys, os

def iflatten(xss):
    'Iterable(Iterable(a)) -&gt; Iterable(a)'
    return (x for xs in xss for x in xs)

def allfiles(dir):
    'str -&gt; Iterable(str)'
    return iflatten(files for path, dirs, files in os.walk(dir))

def ext(path):
    'str -&gt; str'
    (root, ext) = os.path.splitext(path)
    return ext[1:]

def main():
    assert len(sys.argv) == 2, 'usage: progname DIR'
    dir = sys.argv[1]
    exts = (ext(f) for f in allfiles(dir))
    for e in exts:
        print e

if __name__ == '__main__':
    main()
</code></pre>
8
2009-10-21T00:17:23Z
[ "python", "path" ]
Printing basenames by Python
1,598,013
<p><strong>How can you print the basenames of files by Python in the main folder and subfolders?</strong></p> <p><strong>My attempt</strong> </p> <pre><code>#!/usr/bin/python import os import sys def dir_basename (dir_name): for dirpath, dirnames, filenames in os.walk(dir_name): for fname in filenames: print os.path.basename(fname) // Problem here! if len(sys.argv) != 1: u = "Usage: dir_basename &lt;dir_name&gt;\n" sys.stderr.write(u) sys.exit(1) dir_basename ( sys.argv[1] ) </code></pre> <p><hr /></p> <p><strong>1st problem solved with the off-by-one-error</strong></p> <p><hr /></p> <p>2nd problem: The code gives me the output unsuccessfully</p> <pre><code>man.aux about_8php.tex refman.pdf successful_notice.php ... </code></pre> <p>I expect to get as an output</p> <pre><code> aux tex pdf php ... </code></pre>
1
2009-10-21T00:03:11Z
1,598,299
<p>As others have noted, the first element of sys.argv is the program:</p> <pre><code># argv.py
import sys

for index, arg in enumerate(sys.argv):
    print '%(index)s: %(arg)s' % locals()
</code></pre> <p>If I run this without parameters:</p> <pre><code>$ python argv.py
0: argv.py
</code></pre> <p>I see that the first and only item in argv is the name of the program/script. If I pass parameters:</p> <pre><code>$ python argv.py a b c
0: argv.py
1: a
2: b
3: c
</code></pre> <p>And so on.</p> <p>The other thing is that you really don't need to use os.path.basename on the items in the third element of the tuple yielded by os.walk:</p> <pre><code>import os
import sys

# Imagine some usage check here...

# Slice sys.argv to skip the first element...
for path in sys.argv[1:]:
    for root, dirs, files in os.walk(path):
        for name in files:
            # No need to use basename, since these are already base'd, so to speak...
            print name
</code></pre>
2
2009-10-21T01:34:03Z
[ "python", "path" ]
Pythonic way to only do work first time a variable is called
1,598,174
<p>my Python class has some variables that require work to calculate the first time they are called. Subsequent calls should just return the precomputed value.</p> <p>I don't want to waste time doing this work unless they are actually needed by the user. So is there a clean Pythonic way to implement this use case?</p> <p>My initial thought was to use property() to call a function the first time and then override the variable:</p> <pre><code>class myclass(object): def get_age(self): self.age = 21 # raise an AttributeError here return self.age age = property(get_age) </code></pre> <p>Thanks</p>
6
2009-10-21T00:51:30Z
1,598,204
<pre><code>class myclass(object):
    def __init__(self):
        self.__age = None

    @property
    def age(self):
        if self.__age is None:
            self.__age = 21  # This can be a long computation
        return self.__age
</code></pre> <p>Alex mentioned you can use <code>__getattr__</code>; this is how it works:</p> <pre><code>class myclass(object):
    def __getattr__(self, attr):
        if attr == "age":
            self.age = 21  # This can be a long computation
        return super(myclass, self).__getattribute__(attr)
</code></pre> <p><code>__getattr__()</code> is invoked only when the attribute doesn't exist on the object, i.e. the first time you try to access <code>age</code>. Every time after, <code>age</code> exists, so <code>__getattr__</code> doesn't get called. </p>
13
2009-10-21T01:01:54Z
[ "class", "variables", "python", "precompute" ]
Pythonic way to only do work first time a variable is called
1,598,174
<p>my Python class has some variables that require work to calculate the first time they are called. Subsequent calls should just return the precomputed value.</p> <p>I don't want to waste time doing this work unless they are actually needed by the user. So is there a clean Pythonic way to implement this use case?</p> <p>My initial thought was to use property() to call a function the first time and then override the variable:</p> <pre><code>class myclass(object): def get_age(self): self.age = 21 # raise an AttributeError here return self.age age = property(get_age) </code></pre> <p>Thanks</p>
6
2009-10-21T00:51:30Z
1,598,205
<p>Yes, you can use properties, though lazy evaluation is also often accomplished using descriptors; see e.g.:</p> <p><a href="http://blog.pythonisito.com/2008/08/lazy-descriptors.html" rel="nofollow">http://blog.pythonisito.com/2008/08/lazy-descriptors.html</a></p>
2
2009-10-21T01:02:05Z
[ "class", "variables", "python", "precompute" ]
Pythonic way to only do work first time a variable is called
1,598,174
<p>my Python class has some variables that require work to calculate the first time they are called. Subsequent calls should just return the precomputed value.</p> <p>I don't want to waste time doing this work unless they are actually needed by the user. So is there a clean Pythonic way to implement this use case?</p> <p>My initial thought was to use property() to call a function the first time and then override the variable:</p> <pre><code>class myclass(object): def get_age(self): self.age = 21 # raise an AttributeError here return self.age age = property(get_age) </code></pre> <p>Thanks</p>
6
2009-10-21T00:51:30Z
1,598,215
<p><code>property</code>, as you've seen, will not let you override it. You need to use a slightly different approach, such as:</p> <pre><code>class myclass(object):
    @property
    def age(self):
        if not hasattr(self, '_age'):
            self._age = self._big_long_computation()
        return self._age
</code></pre> <p>There are other approaches, such as <code>__getattr__</code> or a custom descriptor class, but this one is simpler!-)</p>
6
2009-10-21T01:04:03Z
[ "class", "variables", "python", "precompute" ]
Pythonic way to only do work first time a variable is called
1,598,174
<p>my Python class has some variables that require work to calculate the first time they are called. Subsequent calls should just return the precomputed value.</p> <p>I don't want to waste time doing this work unless they are actually needed by the user. So is there a clean Pythonic way to implement this use case?</p> <p>My initial thought was to use property() to call a function the first time and then override the variable:</p> <pre><code>class myclass(object): def get_age(self): self.age = 21 # raise an AttributeError here return self.age age = property(get_age) </code></pre> <p>Thanks</p>
6
2009-10-21T00:51:30Z
1,599,584
<p><a href="http://my.safaribooksonline.com/0596007973/pythoncook2-CHP-20-SECT-4" rel="nofollow">Here</a> is a recipe from the <a href="http://oreilly.com/catalog/9780596007973/" rel="nofollow">Python Cookbook</a> for this problem — a descriptor that computes on first access and caches the result:</p> <pre><code>class CachedAttribute(object):
    ''' Computes attribute value and caches it in the instance. '''
    def __init__(self, method, name=None):
        # record the unbound-method and the name
        self.method = method
        self.name = name or method.__name__
    def __get__(self, inst, cls):
        if inst is None:
            # instance attribute accessed on class, return self
            return self
        # compute, cache and return the instance's attribute value
        result = self.method(inst)
        setattr(inst, self.name, result)
        return result
</code></pre>
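A quick check of how the recipe behaves in practice. The `CachedAttribute` class is repeated here so the snippet runs on its own; `Widget` and its counter are purely illustrative names:

```python
class CachedAttribute(object):
    # the Cookbook recipe above, repeated so this snippet is self-contained
    def __init__(self, method, name=None):
        self.method = method
        self.name = name or method.__name__
    def __get__(self, inst, cls):
        if inst is None:
            return self
        result = self.method(inst)
        # the plain instance attribute set here shadows the (non-data)
        # descriptor, so later accesses never reach __get__ again
        setattr(inst, self.name, result)
        return result

class Widget(object):
    computations = 0  # counts how often the "expensive" method really runs
    def area(self):
        Widget.computations += 1
        return 6 * 7
    area = CachedAttribute(area)

w = Widget()
first = w.area   # computed, then cached in w.__dict__
second = w.area  # served straight from the instance attribute
```

The trick relies on `CachedAttribute` defining only `__get__` (a non-data descriptor): once `setattr` has placed a plain value in the instance dict, attribute lookup finds that value first and the computation never re-runs for that instance.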
4
2009-10-21T09:05:04Z
[ "class", "variables", "python", "precompute" ]
Specific use case for Django admin
1,598,248
<p>I have a couple special use cases for Django admin, and I'm curious about other peoples' opinions:</p> <ol> <li><p>I'd like to use a customized version the admin to allow users to edit certain objects on the site (customized to look more like the rest of the site). At this point users can only edit objects they own, but I'll eventually open this up to something more wiki-style where any user can edit any of the objects. In other words, I'd be designating all users as 'staff' and granting them permission to edit those objects.</p></li> <li><p>I was considering also doing this for other objects where not all users would be able to edit all objects. I'd use a custom view to make sure users only edit their own objects. The benefits are that I would have a starting point for the editing interface (as the admin creates it automatically) that I could just customize with ModelAdmin since the admin functionality is already pretty close to what I'd like.</p></li> </ol> <p>I feel like the first suggestion would be considered acceptable, while the second might not be. After checking out a few other resources (<a href="http://stackoverflow.com/questions/498199/valid-use-case-for-django-admin">http://stackoverflow.com/questions/498199/valid-use-case-for-django-admin</a> and the quote from the Django Book in that question) it seems like some Django developers feel like this is the wrong idea.</p> <p>My question is: why? Are there any good reasons not to use customized admin views to grant per-object permissions from a performance, stability, security, usability, etc. standpoint? It seems to me like it could save a lot of time for certain applications (and I may end up doing it anyway) but I wanted to understand the reasons for making such a distinction between the admin and everything else.</p>
2
2009-10-21T01:13:45Z
1,598,288
<p>You are free to do whatever you want. If you want to customize the Django admin, go for it, but you will likely not be as well supported by the mailing list and IRC once you deviate from the typical admin modifications path.</p> <p>While customizing the admin might seem like the easy solution right now, more than likely it is going to be more work than just recreating the necessary forms yourself once you really try to tweak how things work. Look into the <a href="https://docs.djangoproject.com/en/1.4/ref/generic-views/#create-update-delete-generic-views" rel="nofollow">generic create/edit/delete</a> and <a href="https://docs.djangoproject.com/en/1.4/ref/generic-views/#list-detail-generic-views" rel="nofollow">generic details/list</a> views--they will expose the basic functionality you need very quickly, and are going to be easier to extend than the admin. </p> <p>I believe the view that the "admin is not your app" comes from the fact that it is easier to use other mechanisms than hacking up the admin (plus, leaving the admin untouched makes forward compatibility much easier for the Django developers). </p>
4
2009-10-21T01:30:13Z
[ "python", "django", "django-admin", "django-views" ]
Specific use case for Django admin
1,598,248
<p>I have a couple special use cases for Django admin, and I'm curious about other peoples' opinions:</p> <ol> <li><p>I'd like to use a customized version the admin to allow users to edit certain objects on the site (customized to look more like the rest of the site). At this point users can only edit objects they own, but I'll eventually open this up to something more wiki-style where any user can edit any of the objects. In other words, I'd be designating all users as 'staff' and granting them permission to edit those objects.</p></li> <li><p>I was considering also doing this for other objects where not all users would be able to edit all objects. I'd use a custom view to make sure users only edit their own objects. The benefits are that I would have a starting point for the editing interface (as the admin creates it automatically) that I could just customize with ModelAdmin since the admin functionality is already pretty close to what I'd like.</p></li> </ol> <p>I feel like the first suggestion would be considered acceptable, while the second might not be. After checking out a few other resources (<a href="http://stackoverflow.com/questions/498199/valid-use-case-for-django-admin">http://stackoverflow.com/questions/498199/valid-use-case-for-django-admin</a> and the quote from the Django Book in that question) it seems like some Django developers feel like this is the wrong idea.</p> <p>My question is: why? Are there any good reasons not to use customized admin views to grant per-object permissions from a performance, stability, security, usability, etc. standpoint? It seems to me like it could save a lot of time for certain applications (and I may end up doing it anyway) but I wanted to understand the reasons for making such a distinction between the admin and everything else.</p>
2
2009-10-21T01:13:45Z
1,598,787
<p>I've previously made a django app do precisely this without modifying the actual admin code, rather by creating a subclass of admin.ModelAdmin with several of its methods extended with queryset filters. This will display only records that are owned by the user (in this case business is the AUTH_PROFILE_MODEL). There are various blogs on the web on how to achieve this. </p> <p>You can use this technique to filter lists, form select boxes, form fields validating saves etc.</p> <p>So far it's survived from NFA to 1.0 to 1.1, but this method is susceptible to API changes. </p> <p>In practice I've found this far quicker for generating new row-level-access admin forms for new models in the app as I have added them. You just create a new model with a user fk, subclass the AdminFilterByBusiness, or just </p> <pre><code>admin.site.register(NewModel,AdminFilterByBusiness) </code></pre> <p>if it doesn't need anything custom. It works and is very DRY.</p> <p>You do however run the risk of not being able to leverage other published django apps. So consider this technique <em>carefully</em> for the project you are building. 
</p> <p>Example Filter admin Class below inspired by <a href="http://code.djangoproject.com/wiki/NewformsHOWTO" rel="nofollow">http://code.djangoproject.com/wiki/NewformsHOWTO</a></p> <pre><code>#AdminFilterByBusiness {{{2
class AdminFilterByBusiness(admin.ModelAdmin):
    """ Used By News Items to show only objects a business user is related to """

    def has_change_permission(self, request, obj=None):
        self.request = request
        if request.user.is_superuser:
            return True
        if obj == None:
            return super(AdminFilterByBusiness, self).has_change_permission(request, obj)
        if obj.business.user == request.user:
            return True
        return False

    def has_delete_permission(self, request, obj=None):
        self.request = request
        if request.user.is_superuser:
            return True
        if obj == None:
            return super(AdminFilterByBusiness, self).has_delete_permission(request, obj)
        if obj.business.user == request.user:
            return True
        return False

    def has_add_permission(self, request):
        self.request = request
        return super(AdminFilterByBusiness, self).has_add_permission(request)

    def queryset(self, request):
        # get the default queryset, pre-filter
        qs = super(AdminFilterByBusiness, self).queryset(request)
        # if not (request.user.is_superuser):
        # filter only shows blogs mapped to currently logged-in user
        try:
            qs = qs.filter(business=request.user.business_set.all()[0])
        except:
            raise ValueError('Operator has not been created. Please Contact Admins')
        return qs

    def formfield_for_dbfield(self, db_field, **kwargs):
        """ Fix drop down lists to populate as per user request """
        # regular return for superuser
        if self.request.user.is_superuser:
            return super(AdminFilterByBusiness, self).formfield_for_dbfield(db_field, **kwargs)
        if db_field.name == "business":
            return forms.ModelChoiceField(queryset=self.request.user.business_set.all())
        # default
        return super(AdminFilterByBusiness, self).formfield_for_dbfield(db_field, **kwargs)
</code></pre>
2
2009-10-21T04:55:46Z
[ "python", "django", "django-admin", "django-views" ]
Specific use case for Django admin
1,598,248
<p>I have a couple of special use cases for Django admin, and I'm curious about other people's opinions:</p> <ol> <li><p>I'd like to use a customized version of the admin to allow users to edit certain objects on the site (customized to look more like the rest of the site). At this point users can only edit objects they own, but I'll eventually open this up to something more wiki-style where any user can edit any of the objects. In other words, I'd be designating all users as 'staff' and granting them permission to edit those objects.</p></li> <li><p>I was considering also doing this for other objects where not all users would be able to edit all objects. I'd use a custom view to make sure users only edit their own objects. The benefits are that I would have a starting point for the editing interface (as the admin creates it automatically) that I could just customize with ModelAdmin since the admin functionality is already pretty close to what I'd like.</p></li> </ol> <p>I feel like the first suggestion would be considered acceptable, while the second might not be. After checking out a few other resources (<a href="http://stackoverflow.com/questions/498199/valid-use-case-for-django-admin">http://stackoverflow.com/questions/498199/valid-use-case-for-django-admin</a> and the quote from the Django Book in that question) it seems like some Django developers feel like this is the wrong idea.</p> <p>My question is: why? Are there any good reasons not to use customized admin views to grant per-object permissions from a performance, stability, security, usability, etc. standpoint? It seems to me like it could save a lot of time for certain applications (and I may end up doing it anyway) but I wanted to understand the reasons for making such a distinction between the admin and everything else.</p>
2
2009-10-21T01:13:45Z
1,599,907
<p>We limit the Django Admin -- unmodified -- to "back-office" access by our admins and support people. Not by users or customers. Some stylesheet changes to make the colors consistent with the rest of the site, but that's it.</p> <p>For the users (our customers), we provide proper view functions to do the various transactions. Even with heavily tailored forms, there are still a few things that we need to check and control.</p> <p>Django update transactions are very simple to write, and trying to customize the admin seems more work than writing the transaction itself.</p> <p>Our transactions are not much more complex than shown in <a href="http://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view" rel="nofollow">http://docs.djangoproject.com/en/dev/topics/forms/#using-a-form-in-a-view</a>. </p> <p>Generally, our pages that have transactions almost always include workflow elements (or related content) that make them slightly more complex than the built-in admin interface. We'll have a half-dozen or so additional lines of code beyond the boilerplate. </p> <p>Our use cases aren't simple add/change/delete, so we need more functionality than the default admin app provides.</p>
1
2009-10-21T10:19:55Z
[ "python", "django", "django-admin", "django-views" ]
Adding row to numpy recarray
1,598,251
<p>Is there an easy way to add a record/row to a numpy recarray without creating a new recarray? Let's say I have a recarray that takes 1Gb in memory, I want to be able to add a row to it without having python take up 2Gb of memory temporarily.</p>
6
2009-10-21T01:14:57Z
1,598,295
<p>You can call <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.resize.html#numpy.recarray.resize"><code>yourrecarray.resize</code></a> with a shape which has one more row, then assign to that new row. Of course, <code>numpy</code> <em>might</em> still have to allocate completely new memory if it just doesn't have room to grow the array in-place, but at least you stand a chance!-)</p> <p>Since an example was requested, here comes, modified off the canonical <a href="http://www.scipy.org/Numpy%5FExample%5FList">example list</a>...:</p> <pre><code>&gt;&gt;&gt; import numpy
&gt;&gt;&gt; mydescriptor = {'names': ('gender','age','weight'), 'formats': ('S1', 'f4', 'f4')}
&gt;&gt;&gt; a = numpy.array([('M',64.0,75.0),('F',25.0,60.0)], dtype=mydescriptor)
&gt;&gt;&gt; print a
[('M', 64.0, 75.0) ('F', 25.0, 60.0)]
&gt;&gt;&gt; a.shape
(2,)
&gt;&gt;&gt; a.resize(3)
&gt;&gt;&gt; a.shape
(3,)
&gt;&gt;&gt; print a
[('M', 64.0, 75.0) ('F', 25.0, 60.0) ('', 0.0, 0.0)]
&gt;&gt;&gt; a[2] = ('X', 17.0, 61.5)
&gt;&gt;&gt; print a
[('M', 64.0, 75.0) ('F', 25.0, 60.0) ('X', 17.0, 61.5)]
</code></pre>
9
2009-10-21T01:32:54Z
[ "python", "numpy" ]
How to read continuous HTTP streaming data in Python?
1,598,331
<p>How do I read binary streams from an HTTP streaming server in Python? I did a search and someone said urllib2 can do the job but has blocking issues. Someone suggested the Twisted framework.</p> <p>My questions are:</p> <ol> <li><p>If it's just a streaming client reading data in the background, can I ignore the blocking issues caused by urllib2?</p></li> <li><p>What will happen if urllib2 doesn't keep up with the streaming server? Will data be lost?</p></li> <li><p>If the streaming server requires user authentication via GET or POST of some parameters before retrieving data, can this be done by urllib2? </p></li> </ol> <p>Thank you.</p> <p>Jack</p>
3
2009-10-21T01:46:38Z
1,598,371
<p>To defeat urllib2's intrinsic buffering, you could do:</p> <pre><code>import socket
socket._fileobject.default_bufsize = 0
</code></pre> <p>because it's actually <code>socket._fileobject</code> that buffers underneath. No data will be lost anyway, but with the default buffering (8192 bytes at a time) data may end up overly chunked for real-time streaming purposes (completely removing the buffering might hurt performance, but you could try smaller chunks).</p> <p>For Twisted, see <a href="http://python.net/crew/mwh/apidocs/twisted.web2.stream.html" rel="nofollow">twisted.web2.stream</a> and the many links therefrom.</p>
6
2009-10-21T01:59:14Z
[ "python", "streaming", "client", "stream" ]
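The chunking point in the answer above is easy to see with a plain read loop. This sketch (the helper name is made up, and an in-memory buffer stands in for the real urllib2 response object) shows how a streaming client consumes data piecewise without losing any of it:

```python
import io

def iter_chunks(resp, chunk_size=1024):
    """Yield data from a file-like HTTP response in fixed-size chunks,
    instead of buffering the whole body in memory."""
    while True:
        chunk = resp.read(chunk_size)
        if not chunk:  # empty bytes -> the stream has ended
            break
        yield chunk

# Simulate a 2500-byte streaming response with an in-memory buffer.
fake_response = io.BytesIO(b"x" * 2500)
sizes = [len(c) for c in iter_chunks(fake_response)]
print(sizes)  # -> [1024, 1024, 452]
```

With a real response object the same loop applies; only the `chunk_size` trade-off (latency vs. per-read overhead) changes.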
Encoding of arguments to subprocess.Popen
1,598,334
<p>I have a Python extension to the Nautilus file browser (AFAIK this runs exclusively on GNU/Linux/Unix/etc environments). I decided to split out an expensive computation and run it as a subprocess, pickle the result and send it back over a pipe. My question concerns the arguments <em>to</em> the script. Since the computation requires a path argument and a boolean argument I figured I could do this in two ways: send the args in a pickled tuple over a pipe, or give them on the command line. I found that the pickled tuple approach is noticeably slower than just giving arguments, so I went with the subprocess argument approach.</p> <p>However, I'm worried about localisation issues that might arise. At present, in the caller I have:</p> <pre><code>subprocess.Popen( [sys.executable, path_to_script, path.encode("utf-8"), str(recurse)], stdin = None, stdout = subprocess.PIPE) </code></pre> <p>In the script:</p> <pre><code>path = unicode(sys.argv[1], "utf-8") </code></pre> <p>My concern is that encoding the path argument as UTF-8 is a mistake, but I don't know for sure. I want to avoid a "it works on my machine" syndrome. Will this fail if a user has, say, latin1 as their default character encoding? Or does it not matter?</p>
1
2009-10-21T01:47:31Z
1,598,375
<p>It does not matter: as long as your script knows to expect a utf-8 encoding for the argument, it can decode it properly. utf-8 is the correct choice because it will let you encode ANY Unicode string -- not just those for some languages but not others, as choices such as Latin-1 would entail!</p>
4
2009-10-21T02:01:31Z
[ "python", "localization", "subprocess" ]
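To make the accepted point concrete, here is a small round-trip check (the path is purely illustrative): any Unicode path survives the UTF-8 encode/decode pair used between the caller and the script, while a narrower charset such as Latin-1 can fail outright:

```python
# The caller encodes, the script decodes - UTF-8 round-trips any Unicode path.
path = u"caf\u00e9/\u65e5\u672c\u8a9e"   # 'café/日本語', a deliberately mixed-script path
arg = path.encode("utf-8")               # what would be placed in the argv list
assert arg.decode("utf-8") == path       # the subprocess recovers it exactly

# Latin-1 simply has no code points for the CJK part of the path:
try:
    path.encode("latin-1")
except UnicodeEncodeError:
    print("latin-1 cannot represent this path")
```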
Encoding of arguments to subprocess.Popen
1,598,334
<p>I have a Python extension to the Nautilus file browser (AFAIK this runs exclusively on GNU/Linux/Unix/etc environments). I decided to split out an expensive computation and run it as a subprocess, pickle the result and send it back over a pipe. My question concerns the arguments <em>to</em> the script. Since the computation requires a path argument and a boolean argument I figured I could do this in two ways: send the args in a pickled tuple over a pipe, or give them on the command line. I found that the pickled tuple approach is noticeably slower than just giving arguments, so I went with the subprocess argument approach.</p> <p>However, I'm worried about localisation issues that might arise. At present, in the caller I have:</p> <pre><code>subprocess.Popen( [sys.executable, path_to_script, path.encode("utf-8"), str(recurse)], stdin = None, stdout = subprocess.PIPE) </code></pre> <p>In the script:</p> <pre><code>path = unicode(sys.argv[1], "utf-8") </code></pre> <p>My concern is that encoding the path argument as UTF-8 is a mistake, but I don't know for sure. I want to avoid a "it works on my machine" syndrome. Will this fail if a user has, say, latin1 as their default character encoding? Or does it not matter?</p>
1
2009-10-21T01:47:31Z
1,599,509
<p>Use <code>sys.getfilesystemencoding()</code> if file names should be readable by the user. However, this can cause problems when there are characters not supported by the system encoding. To avoid this, you can substitute missing characters with some character sequence (e.g. by registering your own error handling function with <a href="http://docs.python.org/library/codecs.html#codecs.register%5Ferror" rel="nofollow"><code>codecs.register_error()</code></a>).</p>
2
2009-10-21T08:45:15Z
[ "python", "localization", "subprocess" ]
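A minimal sketch of the `codecs.register_error()` suggestion above (the handler name `"sub_question"` and the replace-with-`?` policy are made up for illustration):

```python
import codecs

def question_mark(exc):
    """Substitute '?' for every character the target charset lacks,
    and resume encoding just past the offending run."""
    return (u"?" * (exc.end - exc.start), exc.end)

codecs.register_error("sub_question", question_mark)

name = u"price \u20ac5"  # the euro sign has no Latin-1 code point
encoded = name.encode("latin-1", "sub_question")
print(encoded)  # -> b'price ?5'
```

The handler receives a `UnicodeEncodeError` instance and must return a (replacement, resume-position) pair; the replacement itself is then encoded, so it must be representable in the target charset.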
How to use tabs-as-spaces in Python in Visual Studio 2008?
1,598,445
<p>I have some IronPython scripts that are embedded in a C# project, and it would be convenient to be able to edit them in the VS editor. VS evidently knows about Python because it provides syntax coloring for it. Unfortunately however the editor uses tab characters for indentation, whereas I want spaces. Is there a setting to change this? I don't see a heading for Python under Tools/Options/TextEditor.</p>
3
2009-10-21T02:32:57Z
1,598,460
<p>Tools -> Options -> Text Editor -> Choose Language (you might need to choose All Languages)</p> <p>Click on Tabs and set it how you want it.</p> <p>Edit: Looks like you can add your own file types in the same area and set Tab settings specifically for them.</p>
0
2009-10-21T02:37:46Z
[ "python", "visual-studio", "visual-studio-2008", "ironpython" ]
How to use tabs-as-spaces in Python in Visual Studio 2008?
1,598,445
<p>I have some IronPython scripts that are embedded in a C# project, and it would be convenient to be able to edit them in the VS editor. VS evidently knows about Python because it provides syntax coloring for it. Unfortunately however the editor uses tab characters for indentation, whereas I want spaces. Is there a setting to change this? I don't see a heading for Python under Tools/Options/TextEditor.</p>
3
2009-10-21T02:32:57Z
1,598,477
<p>Here is one way to do it, probably not the best. On the Tools -> Text Editor -> File extension part of the Options menu add a .py extension, and set a type of editor. You don't get a python editor type, but you can pick one of the ones you use less often (for me this would be VB.net), and then make sure that the tab settings for that language fit your needs. Syntax highlighting didn't seem to be affected for me.</p>
4
2009-10-21T02:44:17Z
[ "python", "visual-studio", "visual-studio-2008", "ironpython" ]
How to use tabs-as-spaces in Python in Visual Studio 2008?
1,598,445
<p>I have some IronPython scripts that are embedded in a C# project, and it would be convenient to be able to edit them in the VS editor. VS evidently knows about Python because it provides syntax coloring for it. Unfortunately however the editor uses tab characters for indentation, whereas I want spaces. Is there a setting to change this? I don't see a heading for Python under Tools/Options/TextEditor.</p>
3
2009-10-21T02:32:57Z
39,940,667
<p>Only the print command is able to read such functions properly in Python, so a tab command can be used along with the print command to get a space when one uses a tab function.</p>
0
2016-10-09T06:25:05Z
[ "python", "visual-studio", "visual-studio-2008", "ironpython" ]
Rounding decimals with new Python format function
1,598,579
<p>How do I round a decimal to a particular number of decimal places using the Python 3.0 <code>format</code> function?</p>
25
2009-10-21T03:27:10Z
1,598,583
<p>To round x to n decimal places use:</p> <pre><code>"{0:.{1}f}".format(x,n) </code></pre> <p>where 0 and 1 stand for the first and second arguments of the str.format() method, respectively.</p>
1
2009-10-21T03:27:57Z
[ "python", "string", "python-3.x" ]
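A quick demonstration of that form, sweeping the precision argument from 0 to 3 (note the rounding at `n=0` follows ordinary float formatting):

```python
# "{0:.{1}f}" - argument 0 is the value, argument 1 supplies the precision.
x = 2.71828
rounded = ["{0:.{1}f}".format(x, n) for n in range(4)]
print(rounded)  # -> ['3', '2.7', '2.72', '2.718']
```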
Rounding decimals with new Python format function
1,598,579
<p>How do I round a decimal to a particular number of decimal places using the Python 3.0 <code>format</code> function?</p>
25
2009-10-21T03:27:10Z
1,598,650
<p>Here's a typical, useful example...:</p> <pre><code>&gt;&gt;&gt; import math
&gt;&gt;&gt; n = 4
&gt;&gt;&gt; p = math.pi
&gt;&gt;&gt; '{0:.{1}f}'.format(p, n)
'3.1416'
</code></pre> <p>the nested <code>{1}</code> takes the second argument, the current value of n, and applies it as specified (here, to the "precision" part of the format -- number of digits after the decimal point), and the outer resulting <code>{0:.4f}</code> then applies. Of course, you can hardcode the <code>4</code> (or whatever number of digits) if you wish, but the key point is, you don't <strong>have</strong> to!</p> <p>Even better...:</p> <pre><code>&gt;&gt;&gt; '{number:.{digits}f}'.format(number=p, digits=n)
'3.1416'
</code></pre> <p>...instead of the murky "argument numbers" such as 0 and 1 above, you can choose to use shiny-clear argument <em>names</em>, and pass the corresponding values as <em>keyword</em> (aka "<em>named</em>") arguments to <code>format</code> -- that can be <strong>so</strong> much more readable, as you see!!!</p>
49
2009-10-21T03:56:46Z
[ "python", "string", "python-3.x" ]
Rounding decimals with new Python format function
1,598,579
<p>How do I round a decimal to a particular number of decimal places using the Python 3.0 <code>format</code> function?</p>
25
2009-10-21T03:27:10Z
1,598,663
<p>In Python 3.x a format string contains replacement fields indicated by braces thus:</p> <pre><code>".... {0: format_spec} ....".format(value) </code></pre> <p>The format spec has the general layout:</p> <pre><code>[[fill]align][sign][pad][width][,][.precision][type] </code></pre> <p>So, for example, leaving out all else but width, precision and type code, a decimal or floating point number could be formatted as:</p> <pre><code>&gt;&gt;&gt; import math
&gt;&gt;&gt; print("The value of pi is {0:10.7f} to 7 decimal places.".format(math.pi))
</code></pre> <p>This would print as:</p> <pre><code>The value of pi is 3.1415927 to 7 decimal places. </code></pre>
6
2009-10-21T04:04:41Z
[ "python", "string", "python-3.x" ]
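A few concrete instances of that spec layout, combining fill, align, sign, width, the thousands separator and precision (the `,` option requires Python 2.7/3.1 or later; the values are illustrative):

```python
v = 3.14159
print("{0:.2f}".format(v))              # '3.14'        - precision only
print("{0:10.2f}".format(v))            # '      3.14'  - width 10, numbers right-align by default
print("{0:*>+10.2f}".format(v))         # '*****+3.14'  - '*' fill, '>' align, forced '+' sign
print("{0:,.2f}".format(1234567.891))   # '1,234,567.89' - thousands separator
```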
Rounding decimals with new Python format function
1,598,579
<p>How do I round a decimal to a particular number of decimal places using the Python 3.0 <code>format</code> function?</p>
25
2009-10-21T03:27:10Z
38,629,025
<p>I just found out that it is possible to combine both the <code>{0}</code> and the <code>{digits}</code> notation. This is especially useful when you want to round all variables to a pre-specified number of decimals <em>with 1 declaration</em>:</p> <pre><code>sName = 'Nander'
fFirstFloat = 1.12345
fSecondFloat = 2.34567
fThirdFloat = 34.5678
dNumDecimals = 2

print('{0} found the following floats: {1:.{digits}f}, {2:.{digits}f}, {3:.{digits}f}'.format(
    sName, fFirstFloat, fSecondFloat, fThirdFloat, digits=dNumDecimals))
# Nander found the following floats: 1.12, 2.35, 34.57
</code></pre>
0
2016-07-28T06:51:37Z
[ "python", "string", "python-3.x" ]
Pure python solution to convert XHTML to PDF
1,598,715
<p>I am after a pure Python solution (for the GAE) to convert webpages to pdf.</p> <p>I had a look at <a href="http://www.reportlab.org/rl%5Ftoolkit.html">reportlab</a> but the documentation focuses on generating pdfs from scratch, rather than converting from HTML.</p> <p>What do you recommend? - <a href="http://www.xhtml2pdf.com/doc/pisa-en.html">pisa</a>?</p> <p>Edit: My use case is I have a HTML report that I want to make available in PDF too. I will make updates to this report structure so I don't want to maintain a separate PDF version, but (hopefully) convert automatically. <br> Also because I generate the report HTML I can ensure it is well formed XHTML to make the PDF conversion easier.</p>
16
2009-10-21T04:26:37Z
1,598,943
<p>Have you considered <a href="http://pybrary.net/pyPdf/" rel="nofollow">pyPdf</a>? I doubt it has anywhere like the functional richness you require, but, it IS a start, and is in pure Python. The <a href="http://pybrary.net/pyPdf/pythondoc-pyPdf.pdf.html#pyPdf.pdf.PdfFileWriter-class" rel="nofollow">PdfFileWriter</a> class would be the one to generate PDF output, unfortunately it requires <a href="http://pybrary.net/pyPdf/pythondoc-pyPdf.pdf.html#pyPdf.pdf.PageObject-class" rel="nofollow">PageObject</a> instances and doesn't provide real ways to put those together, except extracting them from existing PDF documents. Unfortunately all richer pdf page-generation packages I can find do appear to depend on reportlab or other non-pure-Python libraries:-(.</p>
4
2009-10-21T05:51:58Z
[ "python", "google-app-engine", "pdf" ]
Pure python solution to convert XHTML to PDF
1,598,715
<p>I am after a pure Python solution (for the GAE) to convert webpages to pdf.</p> <p>I had a look at <a href="http://www.reportlab.org/rl%5Ftoolkit.html">reportlab</a> but the documentation focuses on generating pdfs from scratch, rather than converting from HTML.</p> <p>What do you recommend? - <a href="http://www.xhtml2pdf.com/doc/pisa-en.html">pisa</a>?</p> <p>Edit: My use case is I have a HTML report that I want to make available in PDF too. I will make updates to this report structure so I don't want to maintain a separate PDF version, but (hopefully) convert automatically. <br> Also because I generate the report HTML I can ensure it is well formed XHTML to make the PDF conversion easier.</p>
16
2009-10-21T04:26:37Z
1,599,390
<p>What you're asking for is a pure Python HTML renderer, which is a big task to say the least ('real' renderers like webkit are the product of thousands of hours of work). As far as I'm aware, there aren't any.</p> <p>Instead of looking for an HTML to PDF converter, what I'd suggest is building your report in a format that's easily converted to both - for example, you could build it as a DOM (a set of linked objects), and write converters for both HTML and PDF output. This is a much more limited problem than converting HTML to PDF, and hence much easier to implement.</p>
4
2009-10-21T08:10:38Z
[ "python", "google-app-engine", "pdf" ]
Pure python solution to convert XHTML to PDF
1,598,715
<p>I am after a pure Python solution (for the GAE) to convert webpages to pdf.</p> <p>I had a look at <a href="http://www.reportlab.org/rl%5Ftoolkit.html">reportlab</a> but the documentation focuses on generating pdfs from scratch, rather than converting from HTML.</p> <p>What do you recommend? - <a href="http://www.xhtml2pdf.com/doc/pisa-en.html">pisa</a>?</p> <p>Edit: My use case is I have a HTML report that I want to make available in PDF too. I will make updates to this report structure so I don't want to maintain a separate PDF version, but (hopefully) convert automatically. <br> Also because I generate the report HTML I can ensure it is well formed XHTML to make the PDF conversion easier.</p>
16
2009-10-21T04:26:37Z
1,605,075
<p><a href="http://www.xhtml2pdf.com/">Pisa</a> claims to support what I want to do:</p> <blockquote> <p>pisa is a html2pdf converter using the ReportLab Toolkit, the HTML5lib and pyPdf. It supports HTML 5 and CSS 2.1 (and some of CSS 3). It is completely written in pure Python so it is platform independent. The main benefit of this tool that a user with Web skills like HTML and CSS is able to generate PDF templates very quickly without learning new technologies. Easy integration into Python frameworks like CherryPy, KID Templating, TurboGears, Django, Zope, Plone, Google AppEngine (GAE) etc.</p> </blockquote> <p>So I will investigate it further</p>
8
2009-10-22T04:51:45Z
[ "python", "google-app-engine", "pdf" ]
Python Console Website
1,598,733
<p>I believe that I once saw a website that is like an online Python console. Does anyone know of such a website?</p>
14
2009-10-21T04:38:56Z
1,598,755
<p>This is one I know of:</p> <blockquote> <p><a href="http://shell.appspot.com/" rel="nofollow">http://shell.appspot.com/</a></p> </blockquote> <p>There's also Lord of the REPL's:</p> <blockquote> <p><a href="http://lotrepls.appspot.com/" rel="nofollow">http://lotrepls.appspot.com/</a></p> </blockquote> <p>Python on repl.it:</p> <blockquote> <p><a href="http://repl.it/languages/Python" rel="nofollow">http://repl.it/languages/Python</a></p> </blockquote>
12
2009-10-21T04:44:44Z
[ "python" ]
Python Console Website
1,598,733
<p>I believe that I once saw a website that is like an online Python console. Does anyone know of such a website?</p>
14
2009-10-21T04:38:56Z
1,598,764
<p>While not really a "console", <a href="http://www.skulpt.org/">skulpt.org</a> runs python code client-side with no plugins or anything, which makes it a lot faster than a server-side prompt. For server side and a more traditional shell I found this: <a href="http://shell.appspot.com/">http://shell.appspot.com/</a>.</p>
11
2009-10-21T04:46:53Z
[ "python" ]
Python Console Website
1,598,733
<p>I believe that I once saw a website that is like an online Python console. Does anyone know of such a website?</p>
14
2009-10-21T04:38:56Z
1,599,340
<p>IronPython (using Silverlight or Moonlight 2): <a href="http://www.trypython.org/" rel="nofollow">http://www.trypython.org/</a></p>
4
2009-10-21T07:55:12Z
[ "python" ]
Python Console Website
1,598,733
<p>I believe that I once saw a website that is like an online Python console. Does anyone know of such a website?</p>
14
2009-10-21T04:38:56Z
1,599,804
<p>I just found this one: <a href="http://con.appspot.com/console/" rel="nofollow">http://con.appspot.com/console/</a></p>
3
2009-10-21T09:55:53Z
[ "python" ]
Python Console Website
1,598,733
<p>I believe that I once saw a website that is like an online Python console. Does anyone know of such a website?</p>
14
2009-10-21T04:38:56Z
5,280,029
<p>This one is very interesting, it has an interactive shell AND a script editing area:</p> <blockquote> <p><a href="http://py-ide-online.appspot.com/" rel="nofollow">Py I/O</a></p> </blockquote>
4
2011-03-12T00:54:31Z
[ "python" ]
Python Console Website
1,598,733
<p>I believe that I once saw a website that is like an online Python console. Does anyone know of such a website?</p>
14
2009-10-21T04:38:56Z
5,602,707
<p>An online python editor: <a href="http://doc.pyschools.com/console" rel="nofollow">http://doc.pyschools.com/console</a>.</p>
2
2011-04-09T03:37:49Z
[ "python" ]
Elegant setup of Python logging in Django
1,598,823
<p>I have yet to find a way of setting up Python logging with Django that I'm happy with. My requirements are fairly simple:</p> <ul> <li>Different log handlers for different events - that is, I want to be able to log to different files</li> <li>Easy access to loggers in my modules. The module should be able to find its logger with little effort.</li> <li>Should be easily applicable to command-line modules. Parts of the system are stand-alone command line or daemon processes. Logging should be easily usable with these modules.</li> </ul> <p>My current setup is to use a <code>logging.conf</code> file and setup logging in each module I log from. It doesn't feel right. </p> <p>Do you have a logging setup that you like? Please detail it: how do you setup the configuration (do you use <code>logging.conf</code> or set it up in code), where/when do you initiate the loggers, and how do you get access to them in your modules, etc.</p>
74
2009-10-21T05:07:01Z
1,598,894
<p>I am currently using a logging system, which I created myself. It uses CSV format for logging.</p> <p><a href="http://bitbucket.org/oduvan/django-csvlog/" rel="nofollow">django-csvlog</a></p> <p>This project still doesn't have full documentation, but I am working on it.</p>
6
2009-10-21T05:33:39Z
[ "python", "django", "logging" ]
Elegant setup of Python logging in Django
1,598,823
<p>I have yet to find a way of setting up Python logging with Django that I'm happy with. My requirements are fairly simple:</p> <ul> <li>Different log handlers for different events - that is, I want to be able to log to different files</li> <li>Easy access to loggers in my modules. The module should be able to find its logger with little effort.</li> <li>Should be easily applicable to command-line modules. Parts of the system are stand-alone command line or daemon processes. Logging should be easily usable with these modules.</li> </ul> <p>My current setup is to use a <code>logging.conf</code> file and setup logging in each module I log from. It doesn't feel right. </p> <p>Do you have a logging setup that you like? Please detail it: how do you setup the configuration (do you use <code>logging.conf</code> or set it up in code), where/when do you initiate the loggers, and how do you get access to them in your modules, etc.</p>
74
2009-10-21T05:07:01Z
1,598,979
<p>The best way I've found so far is to initialize logging setup in settings.py - nowhere else. You can either use a configuration file or do it programmatically step-by-step - it just depends on your requirements. The key thing is that I usually add the handlers I want to the root logger, using levels and sometimes logging.Filters to get the events I want to the appropriate files, console, syslogs etc. You can of course add handlers to any other loggers too, but there isn't commonly a need for this in my experience.</p> <p>In each module, I define a logger using</p> <pre><code>logger = logging.getLogger(__name__) </code></pre> <p>and use that for logging events in the module (and, if I want to differentiate further) use a logger which is a child of the logger created above.</p> <p>If my app is going to be potentially used in a site which doesn't configure logging in settings.py, I define a NullHandler somewhere as follows:</p> <pre><code>#someutils.py
class NullHandler(logging.Handler):
    def emit(self, record):
        pass

null_handler = NullHandler()
</code></pre> <p>and ensure that an instance of it is added to all loggers created in the modules in my apps which use logging. (Note: NullHandler is already in the logging package for Python 3.1, and will be in Python 2.7.) 
So:</p> <pre><code>logger = logging.getLogger(__name__)
logger.addHandler(someutils.null_handler)
</code></pre> <p>This is done to ensure that your modules play nicely in a site which doesn't configure logging in settings.py, and that you don't get any annoying "No handlers could be found for logger X.Y.Z" messages (which are warnings about potentially misconfigured logging).</p> <p>Doing it this way meets your stated requirements:</p> <ul> <li>You can set up different log handlers for different events, as you currently do.</li> <li>Easy access to loggers in your modules - use <code>getLogger(__name__)</code>.</li> <li>Easily applicable to command-line modules - they also import <code>settings.py</code>.</li> </ul> <p><strong>Update:</strong> Note that as of version 1.3, Django now incorporates <a href="https://docs.djangoproject.com/en/dev/topics/logging/">support for logging</a>.</p>
44
2009-10-21T06:03:38Z
[ "python", "django", "logging" ]
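The NullHandler pattern from the accepted answer is easy to verify in isolation. This standalone sketch defines the class inline (it ships as `logging.NullHandler` from Python 2.7/3.1 on, so defining it yourself is only needed on older versions); the logger name is illustrative:

```python
import logging

class NullHandler(logging.Handler):
    """Discard every record - a do-nothing sink for library loggers."""
    def emit(self, record):
        pass

logger = logging.getLogger("myapp.module")
logger.addHandler(NullHandler())

# With a (null) handler attached, emitting a record in an otherwise
# unconfigured process produces no "No handlers could be found" warning:
logger.warning("silent unless the site configures real handlers")
print(len(logger.handlers))  # -> 1
```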
Elegant setup of Python logging in Django
1,598,823
<p>I have yet to find a way of setting up Python logging with Django that I'm happy with. My requirements are fairly simple:</p> <ul> <li>Different log handlers for different events - that is, I want to be able to log to different files</li> <li>Easy access to loggers in my modules. The module should be able to find its logger with little effort.</li> <li>Should be easily applicable to command-line modules. Parts of the system are stand-alone command line or daemon processes. Logging should be easily usable with these modules.</li> </ul> <p>My current setup is to use a <code>logging.conf</code> file and setup logging in each module I log from. It doesn't feel right. </p> <p>Do you have a logging setup that you like? Please detail it: how do you setup the configuration (do you use <code>logging.conf</code> or set it up in code), where/when do you initiate the loggers, and how do you get access to them in your modules, etc.</p>
74
2009-10-21T05:07:01Z
1,599,881
<p>We initialize logging in the top-level <code>urls.py</code> by using a <code>logging.ini</code> file.</p> <p>The location of the <code>logging.ini</code> is provided in <code>settings.py</code>, but that's all.</p> <p>Each module then does</p> <pre><code>logger = logging.getLogger(__name__) </code></pre> <p>To distinguish testing, development and production instances, we have different logging.ini files. For the most part, we have a "console log" that goes to stderr with Errors only. We have an "application log" that uses a regular rolling log file that goes to a logs directory.</p>
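For reference, a self-contained sketch of what such a <code>logging.ini</code> setup might look like and how it is loaded — the handler/formatter names here are invented for illustration, not taken from the setup described above:

```python
import logging
import logging.config
import os
import tempfile

# A minimal logging.ini in the spirit of the answer: a "console log"
# on stderr for errors only. (A real deployment would also add the
# rotating "application log" handler.)
INI = """\
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=ERROR
formatter=simple
args=(sys.stderr,)

[formatter_simple]
format=%(levelname)s %(name)s: %(message)s
"""

# Write the ini to a temp file and load it, as urls.py would at startup.
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(INI)
    path = f.name
logging.config.fileConfig(path, disable_existing_loggers=False)
os.unlink(path)

# Each module then just does this, as described above:
logger = logging.getLogger(__name__)
logger.error("goes to stderr via the console handler")
```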
6
2009-10-21T10:12:47Z
[ "python", "django", "logging" ]
Elegant setup of Python logging in Django
1,598,823
<p>I have yet to find a way of setting up Python logging with Django that I'm happy with. My requirements are fairly simple:</p> <ul> <li>Different log handlers for different events - that is, I want to be able to log to different files</li> <li>Easy access to loggers in my modules. The module should be able to find its logger with little effort.</li> <li>Should be easily applicable to command-line modules. Parts of the system are stand-alone command line or daemon processes. Logging should be easily usable with these modules.</li> </ul> <p>My current setup is to use a <code>logging.conf</code> file and setup logging in each module I log from. It doesn't feel right. </p> <p>Do you have a logging setup that you like? Please detail it: how do you setup the configuration (do you use <code>logging.conf</code> or set it up in code), where/when do you initiate the loggers, and how do you get access to them in your modules, etc.</p>
74
2009-10-21T05:07:01Z
5,806,903
<p>I know this is a solved answer already, but as of Django &gt;= 1.3 there's a new logging setting.</p> <p>Moving from the old setup to the new one is not automatic, so I thought I'd write it down here.</p> <p>And of course check out <a href="http://docs.djangoproject.com/en/dev/topics/logging" rel="nofollow" title="Django Logging Doc">the django doc</a> for some more.</p> <p>This is the basic conf, created by default with <code>django-admin.py startproject</code> in v1.3 - mileage might change with later django versions:</p> <pre><code>LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'handlers': { 'mail_admins': { 'level': 'ERROR', 'class': 'django.utils.log.AdminEmailHandler', } }, 'loggers': { 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True, } } } </code></pre> <p>This structure is based upon the standard <a href="http://docs.python.org/library/logging.config.html#configuration-dictionary-schema" rel="nofollow" title="Python Logging Configuration Doc">Python logging dictConfig</a>, which dictates the following blocks:</p> <ul> <li><code>formatters</code> - the corresponding value will be a dict in which each key is a formatter id and each value is a dict describing how to configure the corresponding Formatter instance.</li> <li><code>filters</code> - the corresponding value will be a dict in which each key is a filter id and each value is a dict describing how to configure the corresponding Filter instance.</li> <li><p><code>handlers</code> - the corresponding value will be a dict in which each key is a handler id and each value is a dict describing how to configure the corresponding Handler instance. Each handler has the following keys:</p> <ul> <li><code>class</code> (mandatory). This is the fully qualified name of the handler class.</li> <li><code>level</code> (optional). The level of the handler.</li> <li><code>formatter</code> (optional). The id of the formatter for this handler.</li> <li><code>filters</code> (optional). 
A list of ids of the filters for this handler.</li> </ul></li> </ul> <p>I usually do at least this:</p> <ul> <li>add a .log file</li> <li>configure my apps to write to this log</li> </ul> <p>Which translates into:</p> <pre><code>LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'verbose': { 'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s' }, 'simple': { 'format': '%(levelname)s %(message)s' }, }, 'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse' } }, 'handlers': { 'null': { 'level':'DEBUG', 'class':'django.utils.log.NullHandler', }, 'console':{ 'level': 'DEBUG', 'class': 'logging.StreamHandler', 'formatter': 'simple' }, # I always add this handler to make it easy to separate log output 'log_file':{ 'level': 'DEBUG', 'class': 'logging.handlers.RotatingFileHandler', 'filename': os.path.join(VAR_ROOT, 'logs/django.log'), 'maxBytes': 16777216, # 16 megabytes (note: an int, not a string) 'formatter': 'verbose' }, 'mail_admins': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler', 'include_html': True, } }, 'loggers': { 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True, }, 'apps': { # I keep all of my apps under an 'apps' folder, but you can also add them one by one, and this depends on how your virtualenv/paths are set 'handlers': ['log_file'], 'level': 'INFO', 'propagate': True, }, }, # you can also shortcut 'loggers' and just configure logging for EVERYTHING at once 'root': { 'handlers': ['console', 'mail_admins'], 'level': 'INFO' }, } </code></pre> <p><strong>edit</strong></p> <p>See <a href="https://docs.djangoproject.com/en/dev/releases/1.4/#request-exceptions-are-now-always-logged" rel="nofollow">request exceptions are now always logged</a> and <a href="https://code.djangoproject.com/ticket/16288" rel="nofollow">Ticket #16288</a>: </p> <p>I updated the above sample conf to explicitly include the correct filter for 
mail_admins so that, by default, emails are not sent when debug is True.</p> <p>You should add a filter:</p> <pre><code>'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse' } }, </code></pre> <p>and apply it to the mail_admins handler:</p> <pre><code> 'mail_admins': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler', 'include_html': True, } </code></pre> <p>Otherwise <code>django.core.handlers.base.handle_uncaught_exception</code> doesn't pass errors to the 'django.request' logger if settings.DEBUG is True.</p> <p>If you don't do this in Django 1.5 you'll get a </p> <blockquote> <p>DeprecationWarning: You have no filters defined on the 'mail_admins' logging handler: adding implicit debug-false-only filter</p> </blockquote> <p>but things will still work correctly BOTH in django 1.4 and django 1.5.</p> <p>** end edit **</p> <p>That conf is strongly inspired by the sample conf in the django doc, but adds the log file part.</p> <p>I often also do the following:</p> <pre><code>LOG_LEVEL = 'DEBUG' if DEBUG else 'INFO' ... 'level': LOG_LEVEL ... </code></pre> <p>Then in my python code I always add a NullHandler in case no logging conf is defined whatsoever. This avoids warnings about no handler being specified. Especially useful for libs that are not necessarily called only in Django (<a href="http://docs.python.org/howto/logging.html#configuring-logging-for-a-library" rel="nofollow" title="Python logging for a library">ref</a>)</p> <pre><code>import logging # Get an instance of a logger logger = logging.getLogger(__name__) class NullHandler(logging.Handler): #exists in python 3.1 def emit(self, record): pass logger.addHandler(NullHandler()) # here you can also add some local logger should you want: to stdout with streamhandler, or to a local file... </code></pre> <p>[...]</p> <pre><code>logger.warning('etc.etc.') </code></pre> <p>Hope this helps!</p>
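Stripped of the Django-specific handlers, the same dictConfig schema can be exercised with the standard library alone — the <code>'apps'</code> logger name follows the convention used above, everything else here is illustrative:

```python
import logging
import logging.config

# A cut-down version of the LOGGING dict above, with the Django-only
# handlers (mail_admins, django.utils.log.*) removed so it runs with
# the stdlib alone.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'loggers': {
        # one parent logger covering every module under 'apps'
        'apps': {'handlers': ['console'], 'level': 'INFO', 'propagate': True},
    },
}

logging.config.dictConfig(LOGGING)

# A child of 'apps' inherits its handler via propagation.
logger = logging.getLogger('apps.payments')
logger.info('picked up by the console handler via the parent logger')
```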
102
2011-04-27T15:50:04Z
[ "python", "django", "logging" ]
Atomic increment of a counter in django
1,598,932
<p>I'm trying to atomically increment a simple counter in Django. My code looks like this:</p> <pre><code>from models import Counter from django.db import transaction @transaction.commit_on_success def increment_counter(name): counter = Counter.objects.get_or_create(name = name)[0] counter.count += 1 counter.save() </code></pre> <p>If I understand Django correctly, this should wrap the function in a transaction and make the increment atomic. But it doesn't work and there is a race condition in the counter update. How can this code be made thread-safe?</p>
33
2009-10-21T05:49:08Z
1,599,090
<p><a href="http://docs.djangoproject.com/en/dev/topics/db/queries/#query-expressions">New in Django 1.1</a></p> <pre><code>from django.db.models import F Counter.objects.get_or_create(name = name) Counter.objects.filter(name = name).update(count = F('count')+1) </code></pre> <p>or using <a href="https://docs.djangoproject.com/en/dev/ref/models/expressions/#f-expressions">an F expression</a> (note that <code>get_or_create</code> returns an <code>(object, created)</code> tuple, so you need the first element):</p> <pre><code>counter = Counter.objects.get_or_create(name = name)[0] counter.count = F('count') +1 counter.save() </code></pre>
51
2009-10-21T06:43:45Z
[ "python", "django", "transactions", "race-condition" ]
Atomic increment of a counter in django
1,598,932
<p>I'm trying to atomically increment a simple counter in Django. My code looks like this:</p> <pre><code>from models import Counter from django.db import transaction @transaction.commit_on_success def increment_counter(name): counter = Counter.objects.get_or_create(name = name)[0] counter.count += 1 counter.save() </code></pre> <p>If I understand Django correctly, this should wrap the function in a transaction and make the increment atomic. But it doesn't work and there is a race condition in the counter update. How can this code be made thread-safe?</p>
33
2009-10-21T05:49:08Z
3,572,943
<p>Or if you just want an in-process counter rather than a persistent object, you can use <code>itertools.count()</code>, which is implemented in C. Since each <code>next()</code> call executes as a single call into C code, the GIL will provide the safety needed.</p> <p>--Sai</p>
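A quick standalone check of this claim about <code>itertools.count</code> (CPython only — the atomicity of a single <code>next()</code> call relies on the GIL, and nothing here touches Django or a database):

```python
import itertools
import threading

# A shared counter: each next() call is a single call into C code, so
# under CPython's GIL no two threads can interleave inside it.
counter = itertools.count()

def worker(n):
    for _ in range(n):
        next(counter)

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# next() returns the current value and then advances, so after 40000
# calls the next value handed out is exactly 40000: no lost updates.
print(next(counter))  # 40000
```

Note this only guards the increment itself; it does nothing for persistence across processes or restarts, which is what the database-backed answers address.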
-4
2010-08-26T07:33:45Z
[ "python", "django", "transactions", "race-condition" ]
Atomic increment of a counter in django
1,598,932
<p>I'm trying to atomically increment a simple counter in Django. My code looks like this:</p> <pre><code>from models import Counter from django.db import transaction @transaction.commit_on_success def increment_counter(name): counter = Counter.objects.get_or_create(name = name)[0] counter.count += 1 counter.save() </code></pre> <p>If I understand Django correctly, this should wrap the function in a transaction and make the increment atomic. But it doesn't work and there is a race condition in the counter update. How can this code be made thread-safe?</p>
33
2009-10-21T05:49:08Z
8,901,941
<p>In Django 1.4 there is <a href="https://docs.djangoproject.com/en/dev/releases/1.4/#select-for-update-support">support for SELECT ... FOR UPDATE</a> clauses, using database locks to make sure no data is accesses concurrently by mistake. </p>
11
2012-01-17T21:26:25Z
[ "python", "django", "transactions", "race-condition" ]
Atomic increment of a counter in django
1,598,932
<p>I'm trying to atomically increment a simple counter in Django. My code looks like this:</p> <pre><code>from models import Counter from django.db import transaction @transaction.commit_on_success def increment_counter(name): counter = Counter.objects.get_or_create(name = name)[0] counter.count += 1 counter.save() </code></pre> <p>If I understand Django correctly, this should wrap the function in a transaction and make the increment atomic. But it doesn't work and there is a race condition in the counter update. How can this code be made thread-safe?</p>
33
2009-10-21T05:49:08Z
18,937,431
<p>Keeping it simple and building on @Oduvan's answer:</p> <pre><code>counter, created = Counter.objects.get_or_create(name = name, defaults={'count':1}) if not created: counter.count = F('count') +1 counter.save() </code></pre> <p>The advantage here is that if the object was created in the first statement, you don't have to do any further updates.</p>
3
2013-09-21T20:44:27Z
[ "python", "django", "transactions", "race-condition" ]
Atomic increment of a counter in django
1,598,932
<p>I'm trying to atomically increment a simple counter in Django. My code looks like this:</p> <pre><code>from models import Counter from django.db import transaction @transaction.commit_on_success def increment_counter(name): counter = Counter.objects.get_or_create(name = name)[0] counter.count += 1 counter.save() </code></pre> <p>If I understand Django correctly, this should wrap the function in a transaction and make the increment atomic. But it doesn't work and there is a race condition in the counter update. How can this code be made thread-safe?</p>
33
2009-10-21T05:49:08Z
28,228,959
<p>Django 1.7</p> <pre><code>from django.db.models import F counter, created = Counter.objects.get_or_create(name = name) counter.count = F('count') +1 counter.save() </code></pre>
4
2015-01-30T03:53:05Z
[ "python", "django", "transactions", "race-condition" ]
How can I get an accurate UTC time with Python?
1,599,060
<p>I wrote a desktop application and was using <code>datetime.datetime.utcnow()</code> for timestamping, however I've recently noticed that some people using the application get wildly different results than I do when we run the program at the same time. Is there any way to get the UTC time locally without using urllib to fetch it from a website?</p>
8
2009-10-21T06:34:31Z
1,599,094
<p>Python depends on the underlying operating system to provide an accurate time-of-day clock. If it isn't doing that, you don't have much choice other than to bypass the o/s. There's a pure-Python implementation of an NTP client <a href="http://pypi.python.org/pypi/ntplib/" rel="nofollow">here</a>. A very simple-minded approach:</p> <pre><code>&gt;&gt;&gt; import ntplib,datetime &gt;&gt;&gt; x = ntplib.NTPClient() &gt;&gt;&gt; datetime.datetime.utcfromtimestamp(x.request('europe.pool.ntp.org').tx_time) datetime.datetime(2009, 10, 21, 7, 1, 54, 716657) </code></pre> <p>However, it would not be very nice to be continually hitting on other NTP servers out there. A good net citizen would use the ntp client library to keep track of the offset between the o/s system clock and that obtained from the server and only periodically poll to adjust the time.</p>
19
2009-10-21T06:45:09Z
[ "python", "datetime", "timestamp", "utc" ]
How can I get an accurate UTC time with Python?
1,599,060
<p>I wrote a desktop application and was using <code>datetime.datetime.utcnow()</code> for timestamping, however I've recently noticed that some people using the application get wildly different results than I do when we run the program at the same time. Is there any way to get the UTC time locally without using urllib to fetch it from a website?</p>
8
2009-10-21T06:34:31Z
1,715,510
<p>Actually, ntplib computes this offset accounting for round-trip delay. It's available through the "offset" attribute of the NTP response. Therefore the result should not vary wildly.</p>
7
2009-11-11T14:29:05Z
[ "python", "datetime", "timestamp", "utc" ]
Python (Django) Shopify API Client -- For a Beginner
1,599,067
<p>I have a requirement to build a client for Shopify's API, building it in Python &amp; Django.</p> <p>I've never done it before and so I'm wondering if someone might advise on a good starting point for the kinds of patterns and techniques needed to get a job like this done.</p> <p>Here's a link to the <a href="http://api.shopify.com/" rel="nofollow">Shopify API reference</a></p> <p>Thanks.</p>
4
2009-10-21T06:36:13Z
1,600,386
<p>I think you can find some inspiration by taking a look at this:</p> <p><a href="http://bitbucket.org/jespern/django-piston/wiki/Home" rel="nofollow">http://bitbucket.org/jespern/django-piston/wiki/Home</a></p> <p>Although it is directly opposite what you want to do (Piston is for building APIs, and what you want is to use an API) it can give you some clues on common topics.</p> <p>I could mention, of course, reading obvious sources like the Shopify developers forum:</p> <p><a href="http://forums.shopify.com/categories/9" rel="nofollow">http://forums.shopify.com/categories/9</a></p> <p>But I guess you already had it in mind :)</p> <p>Cheers,</p> <p>H.</p>
0
2009-10-21T12:00:18Z
[ "python", "django", "shopify" ]
Python (Django) Shopify API Client -- For a Beginner
1,599,067
<p>I have a requirement to build a client for Shopify's API, building it in Python &amp; Django.</p> <p>I've never done it before and so I'm wondering if someone might advise on a good starting point for the kinds of patterns and techniques needed to get a job like this done.</p> <p>Here's a link to the <a href="http://api.shopify.com/" rel="nofollow">Shopify API reference</a></p> <p>Thanks.</p>
4
2009-10-21T06:36:13Z
1,603,971
<p>Your question is somewhat open-ended, but if you're new to Python or API programming, then you should get a feel for how to do network programming in Python, using either the urllib2 or httplib modules that come with more recent versions of Python. Learn how to initiate a request for a page and read the response into a file.</p> <p>Here is an overview of the httplib module in Python documentation:</p> <p><a href="http://docs.python.org/library/httplib.html" rel="nofollow">http://docs.python.org/library/httplib.html</a></p> <p>After you've managed to make page requests using the GET HTTP verb, learn about how to make POST requests and how to add headers, like Content-Type, to your request. When communicating with most APIs, you need to be able to send these.</p> <p>The next step would be to get familiar with the XML standard and how XML documents are constructed. Then, play around with different XML libraries in Python. There are several, but I've always used xml.dom.minidom module. In order to talk to an API, you'll probably need to know to create XML documents (to include in your requests) and how to parse content out of them. (to make use of the API's responses) The minidom module allows a developer to do both of these. For your reference:</p> <p><a href="http://docs.python.org/library/xml.dom.minidom.html" rel="nofollow">http://docs.python.org/library/xml.dom.minidom.html</a></p> <p>Your final solution will likely put both of these together, where you create an XML document, submit it as content to the appropriate Shopify REST API URL, and then have your application deal with the XML response the API sends back to you.</p> <p>If you're sending any sensitive data, be sure to use HTTPS over port 443, and NOT HTTP over port 80.</p>
2
2009-10-21T22:13:23Z
[ "python", "django", "shopify" ]
Python (Django) Shopify API Client -- For a Beginner
1,599,067
<p>I have a requirement to build a client for Shopify's API, building it in Python &amp; Django.</p> <p>I've never done it before and so I'm wondering if someone might advise on a good starting point for the kinds of patterns and techniques needed to get a job like this done.</p> <p>Here's a link to the <a href="http://api.shopify.com/" rel="nofollow">Shopify API reference</a></p> <p>Thanks.</p>
4
2009-10-21T06:36:13Z
6,636,469
<p>Shopify has now released a Python API client: <a href="https://github.com/Shopify/shopify_python_api" rel="nofollow">https://github.com/Shopify/shopify_python_api</a></p>
1
2011-07-09T17:57:50Z
[ "python", "django", "shopify" ]
Python (Django) Shopify API Client -- For a Beginner
1,599,067
<p>I have a requirement to build a client for Shopify's API, building it in Python &amp; Django.</p> <p>I've never done it before and so I'm wondering if someone might advise on a good starting point for the kinds of patterns and techniques needed to get a job like this done.</p> <p>Here's a link to the <a href="http://api.shopify.com/" rel="nofollow">Shopify API reference</a></p> <p>Thanks.</p>
4
2009-10-21T06:36:13Z
14,005,547
<p>I have been working on a project for the last few months using Python and Django integrating with Shopify, built on Google App Engine. </p> <p>Shopify has a valuable wiki resource, <a href="http://wiki.shopify.com/Using_the_shopify_python_api" rel="nofollow">http://wiki.shopify.com/Using_the_shopify_python_api</a>. This is what I used to get a good handle of the Shopify Python API that was mentioned, <a href="https://github.com/Shopify/shopify_python_api" rel="nofollow">https://github.com/Shopify/shopify_python_api</a>.</p> <p>It will really depend on what you are building, but these are good resources to get you started. Also, understanding the <a href="http://api.shopify.com" rel="nofollow">Shopify API</a> will help when using the Python API for Shopify.</p>
1
2012-12-22T18:47:16Z
[ "python", "django", "shopify" ]
Asynchronous Stream Processing in Python
1,599,540
<p>Let's start with a simple example. A HTTP data stream comes in the following format:</p> <pre><code>MESSAGE_LENGTH, 2 bytes MESSAGE_BODY, REPEAT... </code></pre> <p>Currently, I use urllib2 to retrieve and process streaming data as below:</p> <pre><code>length = response.read(2) while True: data = response.read(length) DO DATA PROCESSING </code></pre> <p>It works, but since all messages are in size of 50-100 bytes, the above method limits buffer size each time it reads so it may hurt performance.</p> <p>Is it possible to use seperate threads for data retrieval and processing?</p>
3
2009-10-21T08:54:21Z
1,599,681
<p>Yes, of course, and there are many different techniques to do so. You'll typically end up having a set of processes that only retrieve data, and increase the number of processes in that pool until you run out of bandwidth, more or less. Those processes store the data somewhere, and then you have other processes or threads that pick the data up and process it from wherever it's stored.</p> <p>So the answer to your question is "Yes"; your next question is going to be "How", and then the people who are really good at this stuff will want to know more. :-)</p> <p>If you are doing this at a massive scale it can get very tricky - you don't want the workers to step all over each other - and there are modules in Python that help you do all this. The right way to do it depends a lot on what scale we are talking about, whether you want to run this over multiple processors or maybe even over completely separate machines, and how much data we are talking about.</p> <p>I've only done it once, and on a not very massive scale, but I ended up having one process that got a long list of URLs to be processed, and another process that took that list and dispatched it to a set of separate processes, simply by putting files with URLs in them into separate directories that worked as "queues". The separate processes that fetched the URLs would look in their own queue directory, fetch the URL and stick the result into an "outqueue" directory, where another process would dispatch those files into another set of queue directories for the processing processes.</p> <p>That worked fine, could be run over the network with NFS if necessary (although we never tried that), and could be scaled up to loads of processes on loads of machines if needed (although we never did that either).</p> <p>There may be more clever ways.</p>
0
2009-10-21T09:22:56Z
[ "python", "streaming", "stream" ]
Asynchronous Stream Processing in Python
1,599,540
<p>Let's start with a simple example. A HTTP data stream comes in the following format:</p> <pre><code>MESSAGE_LENGTH, 2 bytes MESSAGE_BODY, REPEAT... </code></pre> <p>Currently, I use urllib2 to retrieve and process streaming data as below:</p> <pre><code>length = response.read(2) while True: data = response.read(length) DO DATA PROCESSING </code></pre> <p>It works, but since all messages are in size of 50-100 bytes, the above method limits buffer size each time it reads so it may hurt performance.</p> <p>Is it possible to use seperate threads for data retrieval and processing?</p>
3
2009-10-21T08:54:21Z
1,602,013
<p>Yes, it can be done, and it is not that hard if your format is essentially fixed.</p> <p>I used it with httplib in Python 2.2.3 and found it had some abysmal performance in the way we hacked it together (basically monkey-patching a select() based socket layer into httplib).</p> <p>The trick is to get the socket and do the buffering yourself, so you do not fight over buffering with the intermediate layers (that made for horrible performance when we had httplib buffering for chunked HTTP decoding on top of the socket layer buffering for read()).</p> <p>Then have a state machine that fetches new data from the socket when needed and pushes completed blocks into a Queue.Queue that feeds your processing threads.</p> <p>I use it to transfer files, checksum (zlib.ADLER32) them in an extra thread and write them to the filesystem in a third thread. It makes for about 40 MB/s sustained throughput on my local machine via sockets, with HTTP/chunked overhead.</p>
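The buffering-plus-state-machine idea, applied to the question's 2-byte length-prefixed format, can be sketched in a few lines. (The framing details — big-endian length, length not including the prefix itself — are assumptions for the example; adjust them to the actual protocol.)

```python
import struct

class MessageParser(object):
    """Incremental parser for a stream of [2-byte length][body] records.

    feed() bytes in as they arrive from the socket; complete message
    bodies are returned as soon as enough data has accumulated, and any
    partial trailing data is buffered for the next feed() call.
    """

    def __init__(self):
        self._buf = b""

    def feed(self, data):
        self._buf += data
        messages = []
        while True:
            if len(self._buf) < 2:
                break  # need more bytes for the length prefix
            (length,) = struct.unpack(">H", self._buf[:2])
            if len(self._buf) < 2 + length:
                break  # body not complete yet
            messages.append(self._buf[2:2 + length])
            self._buf = self._buf[2 + length:]
        return messages

# Two messages, delivered in awkward chunks as a network would:
stream = struct.pack(">H", 5) + b"hello" + struct.pack(">H", 3) + b"bye"
parser = MessageParser()
print(parser.feed(stream[:4]))   # [] -- first body still incomplete
print(parser.feed(stream[4:]))   # [b'hello', b'bye']
```

A reader thread would call `feed()` with whatever `recv()` returns and put each returned message on a `Queue` for the processing threads.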
1
2009-10-21T16:33:54Z
[ "python", "streaming", "stream" ]
Is there easy way in python to extrapolate data points to the future?
1,599,754
<p>I have a simple numpy array, for every date there is a data point. Something like this:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; x = np.array( [(date(2008,3,5), 4800 ), (date(2008,3,15), 4000 ), (date(2008,3, 20), 3500 ), (date(2008,4,5), 3000 ) ] ) </code></pre> <p>Is there easy way to extrapolate data points to the future: date(2008,5,1), date(2008, 5, 20) etc? I understand it can be done with mathematical algorithms. But here I am seeking for some low hanging fruit. Actually I like what numpy.linalg.solve does, but it does not look applicable for the extrapolation. Maybe I am absolutely wrong.</p> <p>Actually to be more specific I am building a burn-down chart (xp term): 'x=date and y=volume of work to be done', so I have got the already done sprints and I want to visualise how the future sprints will go if the current situation persists. And finally I want to predict the release date. So the nature of 'volume of work to be done' is it always goes down on burn-down charts. Also I want to get the extrapolated release date: date when the volume becomes zero.</p> <p>This is all for showing to dev team how things go. The preciseness is not so important here :) The motivation of dev team is the main factor. That means I am absolutely fine with the very approximate extrapolation technique.</p>
8
2009-10-21T09:42:52Z
1,599,772
<p>The mathematical models are the way to go in this case. For instance, if you have only three data points, you can have absolutely no indication on how the trend will unfold (could be any of two parabola.)</p> <p>Get some statistics courses and try to implement the algorithms. Try <a href="http://en.wikibooks.org/wiki/Statistics" rel="nofollow">Wikibooks</a>.</p>
3
2009-10-21T09:47:42Z
[ "python", "numpy", "interpolation", "spline", "burndowncharts" ]
Is there easy way in python to extrapolate data points to the future?
1,599,754
<p>I have a simple numpy array, for every date there is a data point. Something like this:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; x = np.array( [(date(2008,3,5), 4800 ), (date(2008,3,15), 4000 ), (date(2008,3, 20), 3500 ), (date(2008,4,5), 3000 ) ] ) </code></pre> <p>Is there easy way to extrapolate data points to the future: date(2008,5,1), date(2008, 5, 20) etc? I understand it can be done with mathematical algorithms. But here I am seeking for some low hanging fruit. Actually I like what numpy.linalg.solve does, but it does not look applicable for the extrapolation. Maybe I am absolutely wrong.</p> <p>Actually to be more specific I am building a burn-down chart (xp term): 'x=date and y=volume of work to be done', so I have got the already done sprints and I want to visualise how the future sprints will go if the current situation persists. And finally I want to predict the release date. So the nature of 'volume of work to be done' is it always goes down on burn-down charts. Also I want to get the extrapolated release date: date when the volume becomes zero.</p> <p>This is all for showing to dev team how things go. The preciseness is not so important here :) The motivation of dev team is the main factor. That means I am absolutely fine with the very approximate extrapolation technique.</p>
8
2009-10-21T09:42:52Z
1,600,008
<p>You have to specify the kind of function over which you want to extrapolate. Then you can use regression <a href="http://en.wikipedia.org/wiki/Regression%5Fanalysis" rel="nofollow">http://en.wikipedia.org/wiki/Regression%5Fanalysis</a> to find the parameters of that function, and extrapolate it into the future.</p> <p>For instance: translate dates into x values and use the first day as x=0; for your problem the values should be approximately (0,1.2), (400,1.8), (900,5.3).</p> <p>Now you decide that these points lie on a function of type a+b*x+c*x^2.</p> <p>Use the method of least squares to find a, b and c: <a href="http://en.wikipedia.org/wiki/Linear%5Fleast%5Fsquares" rel="nofollow">http://en.wikipedia.org/wiki/Linear%5Fleast%5Fsquares</a> (I will provide full source later, because I do not have time for this now)</p>
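The least-squares recipe described in this answer can be done in pure Python for the linear case (y = a + b*x), using the closed-form solution; the points below are the illustrative ones from the answer, not real data. For a burn-down chart (where b is negative) the interesting extrapolation is the x at which the fitted line reaches zero:

```python
def fit_line(points):
    """Least-squares fit of y = a + b*x over (x, y) pairs; returns (a, b)."""
    n = float(len(points))
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # standard closed-form solution of the 2x2 normal equations
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# The illustrative points from above: day number vs. value.
points = [(0.0, 1.2), (400.0, 1.8), (900.0, 5.3)]
a, b = fit_line(points)

# Extrapolate to a future day, e.g. day 1200:
print(round(a + b * 1200, 3))

# For a decreasing burn-down line (b < 0), the predicted finish date
# is where the fitted line crosses zero: x = -a / b.
```

This is the same computation `numpy.polyfit(x, y, 1)` performs; the pure-Python version just makes the mechanics explicit.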
1
2009-10-21T10:39:00Z
[ "python", "numpy", "interpolation", "spline", "burndowncharts" ]
Is there easy way in python to extrapolate data points to the future?
1,599,754
<p>I have a simple numpy array, for every date there is a data point. Something like this:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; x = np.array( [(date(2008,3,5), 4800 ), (date(2008,3,15), 4000 ), (date(2008,3, 20), 3500 ), (date(2008,4,5), 3000 ) ] ) </code></pre> <p>Is there easy way to extrapolate data points to the future: date(2008,5,1), date(2008, 5, 20) etc? I understand it can be done with mathematical algorithms. But here I am seeking for some low hanging fruit. Actually I like what numpy.linalg.solve does, but it does not look applicable for the extrapolation. Maybe I am absolutely wrong.</p> <p>Actually to be more specific I am building a burn-down chart (xp term): 'x=date and y=volume of work to be done', so I have got the already done sprints and I want to visualise how the future sprints will go if the current situation persists. And finally I want to predict the release date. So the nature of 'volume of work to be done' is it always goes down on burn-down charts. Also I want to get the extrapolated release date: date when the volume becomes zero.</p> <p>This is all for showing to dev team how things go. The preciseness is not so important here :) The motivation of dev team is the main factor. That means I am absolutely fine with the very approximate extrapolation technique.</p>
8
2009-10-21T09:42:52Z
1,600,707
<p>A simple way of doing extrapolations is to use interpolating polynomials or splines: there are many routines for this in <a href="http://docs.scipy.org/doc/scipy/reference/interpolate.html" rel="nofollow">scipy.interpolate</a>, and they are quite easy to use (just give the (x, y) points, and you get back a function [a callable, precisely]).</p> <p>Now, as has been pointed out in this thread, you cannot expect the extrapolation to always be meaningful (especially when you are far from your data points) if you don't have a model for your data. However, I encourage you to play with the polynomial or spline interpolations from scipy.interpolate to see whether the results you obtain suit you.</p>
4
2009-10-21T13:04:11Z
[ "python", "numpy", "interpolation", "spline", "burndowncharts" ]
Is there easy way in python to extrapolate data points to the future?
1,599,754
<p>I have a simple numpy array, for every date there is a data point. Something like this:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; from datetime import date &gt;&gt;&gt; x = np.array( [(date(2008,3,5), 4800 ), (date(2008,3,15), 4000 ), (date(2008,3, 20), 3500 ), (date(2008,4,5), 3000 ) ] ) </code></pre> <p>Is there easy way to extrapolate data points to the future: date(2008,5,1), date(2008, 5, 20) etc? I understand it can be done with mathematical algorithms. But here I am seeking for some low hanging fruit. Actually I like what numpy.linalg.solve does, but it does not look applicable for the extrapolation. Maybe I am absolutely wrong.</p> <p>Actually to be more specific I am building a burn-down chart (xp term): 'x=date and y=volume of work to be done', so I have got the already done sprints and I want to visualise how the future sprints will go if the current situation persists. And finally I want to predict the release date. So the nature of 'volume of work to be done' is it always goes down on burn-down charts. Also I want to get the extrapolated release date: date when the volume becomes zero.</p> <p>This is all for showing to dev team how things go. The preciseness is not so important here :) The motivation of dev team is the main factor. That means I am absolutely fine with the very approximate extrapolation technique.</p>
8
2009-10-21T09:42:52Z
1,614,148
<p>It's all too easy for extrapolation to generate garbage; try this. Many different extrapolations are of course possible; some produce obvious garbage, some non-obvious garbage, many are ill-defined.</p> <p><img src="http://i39.tinypic.com/am62wp.png" alt="alt text"></p> <pre><code>""" extrapolate y,m,d data with scipy UnivariateSpline """

import numpy as np
from scipy.interpolate import UnivariateSpline
    # pydoc scipy.interpolate.UnivariateSpline -- fitpack, unclear
from datetime import date
from pylab import *  # ipython -pylab

__version__ = "denis 23oct"


def daynumber( y,m,d ):
    """ 2005,1,1 -&gt; 0  2006,1,1 -&gt; 365 ... """
    return date( y,m,d ).toordinal() - date( 2005,1,1 ).toordinal()

days, values = np.array([
    (daynumber(2005,1,1), 1.2 ),
    (daynumber(2005,4,1), 1.8 ),
    (daynumber(2005,9,1), 5.3 ),
    (daynumber(2005,10,1), 5.3 )
    ]).T

dayswanted = np.array([ daynumber( year, month, 1 )
    for year in range( 2005, 2006+1 )
    for month in range( 1, 12+1 )])

np.set_printoptions( 1 )  # .1f
print "days:", days
print "values:", values
print "dayswanted:", dayswanted

title( "extrapolation with scipy.interpolate.UnivariateSpline" )
plot( days, values, "o" )
for k in (1,2,3):  # line parabola cubicspline
    extrapolator = UnivariateSpline( days, values, k=k )
    y = extrapolator( dayswanted )
    label = "k=%d" % k
    print label, y
    plot( dayswanted, y, label=label )  # pylab

legend( loc="lower left" )
grid(True)
savefig( "extrapolate-UnivariateSpline.png", dpi=50 )
show()
</code></pre> <p>Added: a <a href="http://projects.scipy.org/scipy/ticket/864">Scipy ticket</a> says, "The behavior of the FITPACK classes in scipy.interpolate is much more complex than the docs would lead one to believe" -- imho true of other software doc too.</p>
15
2009-10-23T15:15:12Z
[ "python", "numpy", "interpolation", "spline", "burndowncharts" ]
Daemon dies unexpectedly
1,599,798
<p>I have a python script, which I daemonise using this code</p> <pre><code>def daemonise():
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print &gt;&gt; stderr, 'this file has the errors from daemon%s' % os.getpid()
</code></pre> <p>The script is in </p> <pre><code>while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass
</code></pre> <p>loop. It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons.</p> <p>[Edit]</p> <p>Without starting a process like monit, is there a way to write a watchdog in python, which can watch my other daemons and restart when they go down? (Who watches the watchdog.)</p>
1
2009-10-21T09:53:35Z
1,599,964
<p>What I've used in my clients is <a href="http://cr.yp.to/daemontools.html" rel="nofollow">daemontools</a>. It is a proven, well tested tool to run anything daemonized.</p> <p>You just write your application without any daemonization, to run on foreground; Then create a daemontools service folder for it, and it will discover and automatically restart your application from now on, and every time the system restarts.</p> <p>It can also handle log rotation and stuff. Saves a lot of tedious, repeated work.</p>
0
2009-10-21T10:30:29Z
[ "python", "daemon" ]
Daemon dies unexpectedly
1,599,798
<p>I have a python script, which I daemonise using this code</p> <pre><code>def daemonise():
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print &gt;&gt; stderr, 'this file has the errors from daemon%s' % os.getpid()
</code></pre> <p>The script is in </p> <pre><code>while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass
</code></pre> <p>loop. It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons.</p> <p>[Edit]</p> <p>Without starting a process like monit, is there a way to write a watchdog in python, which can watch my other daemons and restart when they go down? (Who watches the watchdog.)</p>
1
2009-10-21T09:53:35Z
1,602,145
<p>Why are you silently swallowing all exceptions? Try to see what exceptions are being caught by this:</p> <pre><code>while True:
    try:
        funny_code()
        sleep(10)
    except BaseException, e:
        print e.__class__, e.message
        pass
</code></pre> <p>Something unexpected might be happening which is causing it to fail, but you'll never know if you blindly ignore all the exceptions.</p> <p>I recommend using <a href="http://supervisord.org/" rel="nofollow">supervisord</a> (written in Python, very easy to use) for daemonizing and monitoring processes. Running under supervisord you would not have to use your <code>daemonise</code> function.</p>
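A sketch of the same idea with the standard `traceback` module, which records the full stack trace rather than just the message (`funny_code` here is a hypothetical stand-in for the function from the question):

```python
import traceback

def funny_code():
    raise ValueError("boom")  # hypothetical failing body

captured = []
try:
    funny_code()
except Exception:
    # format_exc() returns the same text the interpreter would print
    # for an uncaught exception; log it instead of discarding it
    captured.append(traceback.format_exc())

print(captured[0].splitlines()[-1])  # -> ValueError: boom
```

In the daemon's loop you would write that string to a log file instead of a list, so post-mortem debugging becomes possible.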
1
2009-10-21T16:55:25Z
[ "python", "daemon" ]
Daemon dies unexpectedly
1,599,798
<p>I have a python script, which I daemonise using this code</p> <pre><code>def daemonise():
    from os import fork, setsid, umask, dup2
    from sys import stdin, stdout, stderr
    if fork(): exit(0)
    umask(0)
    setsid()
    if fork(): exit(0)
    stdout.flush()
    stderr.flush()
    si = file('/dev/null', 'r')
    so = file('daemon-%s.out' % os.getpid(), 'a+')
    se = file('daemon-%s.err' % os.getpid(), 'a+')
    dup2(si.fileno(), stdin.fileno())
    dup2(so.fileno(), stdout.fileno())
    dup2(se.fileno(), stderr.fileno())
    print 'this file has the output from daemon%s' % os.getpid()
    print &gt;&gt; stderr, 'this file has the errors from daemon%s' % os.getpid()
</code></pre> <p>The script is in </p> <pre><code>while True:
    try:
        funny_code()
        sleep(10)
    except:
        pass
</code></pre> <p>loop. It runs fine for a few hours and then dies unexpectedly. How do I go about debugging such demons, err daemons.</p> <p>[Edit]</p> <p>Without starting a process like monit, is there a way to write a watchdog in python, which can watch my other daemons and restart when they go down? (Who watches the watchdog.)</p>
1
2009-10-21T09:53:35Z
1,613,834
<p>You really should use <a href="http://pypi.python.org/pypi/python-daemon/1.5.1" rel="nofollow">python-daemon</a> for this, which is a library that implements <a href="http://www.python.org/dev/peps/pep-3143/" rel="nofollow">PEP 3143</a> for a standard daemon process library. This way you will ensure that your application does all the right things for whichever type of UNIX it is running under. No need to reinvent the wheel.</p>
3
2009-10-23T14:27:30Z
[ "python", "daemon" ]
Configuring Python's default exception handling
1,599,962
<p>For an uncaught exception, Python by default prints a stack trace, the exception itself, and terminates. Is anybody aware of a way to tailor this behaviour on the program level (other than establishing my own global, catch-all exception handler), so that the stack trace is omitted? I would like to toggle in my app whether the stack trace is printed or not.</p>
14
2009-10-21T10:29:55Z
1,599,973
<p>You are looking for sys.excepthook:</p> <p><strong>sys.excepthook(type, value, traceback)</strong> </p> <p>This function prints out a given traceback and exception to sys.stderr.</p> <p>When an exception is raised and uncaught, the interpreter calls sys.excepthook with three arguments, the exception class, exception instance, and a traceback object. In an interactive session this happens just before control is returned to the prompt; in a Python program this happens just before the program exits. The handling of such top-level exceptions can be customized by assigning another three-argument function to sys.excepthook.</p>
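As a sketch (the hook name is illustrative), here is a replacement hook that prints only the exception type and message, omitting the stack trace; assigning `sys.__excepthook__` back toggles the default behaviour on again:

```python
import sys

def terse_hook(exc_type, exc_value, tb):
    # print only "Type: message"; the traceback object is ignored
    sys.stderr.write("%s: %s\n" % (exc_type.__name__, exc_value))

sys.excepthook = terse_hook          # stack traces now omitted
# sys.excepthook = sys.__excepthook__  # restore the default behaviour
```

An app-level flag can simply switch `sys.excepthook` between the two callables at runtime.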
20
2009-10-21T10:31:41Z
[ "python" ]
Using south to refactor a Django model with inheritance
1,600,129
<p>I was wondering if the following migration is possible with Django <a href="http://south.aeracode.org/">south</a> and still retain data.</p> <h2>Before:</h2> <p>I currently have two apps, one called tv, one called movies, each with a VideoFile model (simplified here):</p> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    show = models.ForeignKey(Show, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    movie = models.ForeignKey(Movie, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <h2>After:</h2> <p>Because the two videofile objects are so similar, I want to get rid of the duplication and create a new model in a separate app called media that contains a generic VideoFile class, using inheritance to extend it:</p> <p><strong>media/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    show = models.ForeignKey(Show, blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    movie = models.ForeignKey(Movie, blank=True, null=True)
</code></pre> <p>So my question is, how can I accomplish this with django-south and still maintain existing data?</p> <p>All three of these apps are already managed by south migrations, and according to the south documentation it is bad practice to combine a schema and data migration; they recommend it be done in a few steps.</p> <p>I think it could be done using separate migrations like this (assuming media.VideoFile is already created):</p> <ol> <li>Schema migration to rename all fields in tv.VideoFile and movies.VideoFile that will move to the new media.VideoFile model, maybe to something like old_name, old_size, etc</li> <li>Schema migration to tv.VideoFile and movies.VideoFile to inherit from media.VideoFile</li> <li>Data migration to copy old_name to name, old_size to size, etc</li> <li>Schema migration to remove old_ fields</li> </ol> <p>Before I go through all that work, do you think that will work? Is there a better way?</p> <p>If you're interested, the project is hosted here: <a href="http://code.google.com/p/medianav/">http://code.google.com/p/medianav/</a></p>
32
2009-10-21T11:05:57Z
1,600,165
<p><a href="http://docs.djangoproject.com/en/dev/topics/db/models/#abstract-base-classes" rel="nofollow">Abstract Model</a></p> <pre><code>class VideoFile(models.Model): name = models.CharField(max_length=1024, blank=True) size = models.IntegerField(blank=True, null=True) ctime = models.DateTimeField(blank=True, null=True) class Meta: abstract = True </code></pre> <p>May be <a href="http://www.djangoproject.com/documentation/models/generic%5Frelations/" rel="nofollow">generic relation</a> will be useful for you too.</p>
3
2009-10-21T11:12:57Z
[ "python", "django", "migration", "django-south" ]
Using south to refactor a Django model with inheritance
1,600,129
<p>I was wondering if the following migration is possible with Django <a href="http://south.aeracode.org/">south</a> and still retain data.</p> <h2>Before:</h2> <p>I currently have two apps, one called tv, one called movies, each with a VideoFile model (simplified here):</p> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    show = models.ForeignKey(Show, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    movie = models.ForeignKey(Movie, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <h2>After:</h2> <p>Because the two videofile objects are so similar, I want to get rid of the duplication and create a new model in a separate app called media that contains a generic VideoFile class, using inheritance to extend it:</p> <p><strong>media/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    show = models.ForeignKey(Show, blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    movie = models.ForeignKey(Movie, blank=True, null=True)
</code></pre> <p>So my question is, how can I accomplish this with django-south and still maintain existing data?</p> <p>All three of these apps are already managed by south migrations, and according to the south documentation it is bad practice to combine a schema and data migration; they recommend it be done in a few steps.</p> <p>I think it could be done using separate migrations like this (assuming media.VideoFile is already created):</p> <ol> <li>Schema migration to rename all fields in tv.VideoFile and movies.VideoFile that will move to the new media.VideoFile model, maybe to something like old_name, old_size, etc</li> <li>Schema migration to tv.VideoFile and movies.VideoFile to inherit from media.VideoFile</li> <li>Data migration to copy old_name to name, old_size to size, etc</li> <li>Schema migration to remove old_ fields</li> </ol> <p>Before I go through all that work, do you think that will work? Is there a better way?</p> <p>If you're interested, the project is hosted here: <a href="http://code.google.com/p/medianav/">http://code.google.com/p/medianav/</a></p>
32
2009-10-21T11:05:57Z
1,600,832
<p>I did a similar migration and I chose to do it in multiple steps. In addition to creating the multiple migrations, I also created the backward migration to provide a fallback if things went wrong. Then, I grabbed some test data and migrated it forward and backwards until I was sure it was coming out correctly when I migrated forwards. Finally, I migrated the production site.</p>
1
2009-10-21T13:25:09Z
[ "python", "django", "migration", "django-south" ]
Using south to refactor a Django model with inheritance
1,600,129
<p>I was wondering if the following migration is possible with Django <a href="http://south.aeracode.org/">south</a> and still retain data.</p> <h2>Before:</h2> <p>I currently have two apps, one called tv, one called movies, each with a VideoFile model (simplified here):</p> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    show = models.ForeignKey(Show, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    movie = models.ForeignKey(Movie, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <h2>After:</h2> <p>Because the two videofile objects are so similar, I want to get rid of the duplication and create a new model in a separate app called media that contains a generic VideoFile class, using inheritance to extend it:</p> <p><strong>media/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    show = models.ForeignKey(Show, blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    movie = models.ForeignKey(Movie, blank=True, null=True)
</code></pre> <p>So my question is, how can I accomplish this with django-south and still maintain existing data?</p> <p>All three of these apps are already managed by south migrations, and according to the south documentation it is bad practice to combine a schema and data migration; they recommend it be done in a few steps.</p> <p>I think it could be done using separate migrations like this (assuming media.VideoFile is already created):</p> <ol> <li>Schema migration to rename all fields in tv.VideoFile and movies.VideoFile that will move to the new media.VideoFile model, maybe to something like old_name, old_size, etc</li> <li>Schema migration to tv.VideoFile and movies.VideoFile to inherit from media.VideoFile</li> <li>Data migration to copy old_name to name, old_size to size, etc</li> <li>Schema migration to remove old_ fields</li> </ol> <p>Before I go through all that work, do you think that will work? Is there a better way?</p> <p>If you're interested, the project is hosted here: <a href="http://code.google.com/p/medianav/">http://code.google.com/p/medianav/</a></p>
32
2009-10-21T11:05:57Z
1,603,570
<p><strong>Check out response below by Paul for some notes on compatibility with newer versions of Django/South.</strong></p> <hr> <p>This seemed like an interesting problem, and I'm becoming a big fan of South, so I decided to look into this a bit. I built a test project on the abstract of what you've described above, and have successfully used South to perform the migration you are asking about. Here's a couple of notes before we get to the code:</p> <ul> <li><p>The South documentation recommends doing schema migrations and data migrations separately. I've followed suit in this.</p></li> <li><p>On the backend, Django represents an inherited table by automatically creating a OneToOne field on the inheriting model</p></li> <li><p>Understanding this, our South migration needs to properly handle the OneToOne field manually; however, in experimenting with this it seems that South (or perhaps Django itself) cannot create a OneToOne field on multiple inherited tables with the same name. Because of this, I renamed each child table in the movies/tv app to be respective to its own app (ie. MovieVideoFile/ShowVideoFile).</p></li> <li><p>In playing with the actual data migration code, it seems South prefers to create the OneToOne field first, and then assign data to it. Assigning data to the OneToOne field during creation causes South to choke. (A fair compromise for all the coolness that is South.)</p></li> </ul> <p>So having said all that, I tried to keep a log of the console commands being issued. I'll interject commentary where necessary. The final code is at the bottom.</p> <h2>Command History</h2> <pre><code>django-admin.py startproject southtest
manage.py startapp movies
manage.py startapp tv
manage.py syncdb
manage.py startmigration movies --initial
manage.py startmigration tv --initial
manage.py migrate
manage.py shell    # added some fake data...
manage.py startapp media
manage.py startmigration media --initial
manage.py migrate
# edited code, wrote new models, but left old ones intact
manage.py startmigration movies unified-videofile --auto
# create a new (blank) migration to hand-write data migration
manage.py startmigration movies videofile-to-movievideofile-data
manage.py migrate
# edited code, wrote new models, but left old ones intact
manage.py startmigration tv unified-videofile --auto
# create a new (blank) migration to hand-write data migration
manage.py startmigration tv videofile-to-movievideofile-data
manage.py migrate
# removed old VideoFile model from apps
manage.py startmigration movies removed-videofile --auto
manage.py startmigration tv removed-videofile --auto
manage.py migrate
</code></pre> <p>For space sake, and since the models invariably look the same in the end, I'm only going to demonstrate with the 'movies' app.</p> <h2>movies/models.py</h2> <pre><code>from django.db import models
from media.models import VideoFile as BaseVideoFile

# This model remains until the last migration, which deletes
# it from the schema.  Note the name conflict with media.models
class VideoFile(models.Model):
    movie = models.ForeignKey(Movie, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)

class MovieVideoFile(BaseVideoFile):
    movie = models.ForeignKey(Movie, blank=True, null=True, related_name='shows')
</code></pre> <h2>movies/migrations/0002_unified-videofile.py (schema migration)</h2> <pre><code>from south.db import db
from django.db import models
from movies.models import *

class Migration:

    def forwards(self, orm):
        # Adding model 'MovieVideoFile'
        db.create_table('movies_movievideofile', (
            ('videofile_ptr', orm['movies.movievideofile:videofile_ptr']),
            ('movie', orm['movies.movievideofile:movie']),
        ))
        db.send_create_signal('movies', ['MovieVideoFile'])

    def backwards(self, orm):
        # Deleting model 'MovieVideoFile'
        db.delete_table('movies_movievideofile')
</code></pre> <h2>movies/migrations/0003_videofile-to-movievideofile-data.py (data migration)</h2> <pre><code>from south.db import db
from django.db import models
from movies.models import *

class Migration:

    def forwards(self, orm):
        for movie in orm['movies.videofile'].objects.all():
            new_movie = orm.MovieVideoFile.objects.create(movie = movie.movie,)
            # videofile_ptr must be created first before values can be assigned
            new_movie.videofile_ptr = orm['media.VideoFile'].objects.create()
            new_movie.videofile_ptr.name = movie.name
            new_movie.videofile_ptr.size = movie.size
            new_movie.videofile_ptr.ctime = movie.ctime
            new_movie.videofile_ptr.save()

    def backwards(self, orm):
        print 'No Backwards'
</code></pre> <h2>South is awesome!</h2> <p>Ok standard disclaimer: You're dealing with live data. I've given you working code here, but please use <code>--db-dry-run</code> to test your schema. Always make a backup before trying anything, and generally be careful.</p> <p><strong>COMPATIBILITY NOTICE</strong></p> <p>I'm going to keep my original message intact, but South has since changed the command <code>manage.py startmigration</code> into <code>manage.py schemamigration</code>.</p>
49
2009-10-21T20:56:57Z
[ "python", "django", "migration", "django-south" ]
Using south to refactor a Django model with inheritance
1,600,129
<p>I was wondering if the following migration is possible with Django <a href="http://south.aeracode.org/">south</a> and still retain data.</p> <h2>Before:</h2> <p>I currently have two apps, one called tv, one called movies, each with a VideoFile model (simplified here):</p> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    show = models.ForeignKey(Show, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    movie = models.ForeignKey(Movie, blank=True, null=True)
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <h2>After:</h2> <p>Because the two videofile objects are so similar, I want to get rid of the duplication and create a new model in a separate app called media that contains a generic VideoFile class, using inheritance to extend it:</p> <p><strong>media/models.py:</strong></p> <pre><code>class VideoFile(models.Model):
    name = models.CharField(max_length=1024, blank=True)
    size = models.IntegerField(blank=True, null=True)
    ctime = models.DateTimeField(blank=True, null=True)
</code></pre> <p><strong>tv/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    show = models.ForeignKey(Show, blank=True, null=True)
</code></pre> <p><strong>movies/models.py:</strong></p> <pre><code>class VideoFile(media.models.VideoFile):
    movie = models.ForeignKey(Movie, blank=True, null=True)
</code></pre> <p>So my question is, how can I accomplish this with django-south and still maintain existing data?</p> <p>All three of these apps are already managed by south migrations, and according to the south documentation it is bad practice to combine a schema and data migration; they recommend it be done in a few steps.</p> <p>I think it could be done using separate migrations like this (assuming media.VideoFile is already created):</p> <ol> <li>Schema migration to rename all fields in tv.VideoFile and movies.VideoFile that will move to the new media.VideoFile model, maybe to something like old_name, old_size, etc</li> <li>Schema migration to tv.VideoFile and movies.VideoFile to inherit from media.VideoFile</li> <li>Data migration to copy old_name to name, old_size to size, etc</li> <li>Schema migration to remove old_ fields</li> </ol> <p>Before I go through all that work, do you think that will work? Is there a better way?</p> <p>If you're interested, the project is hosted here: <a href="http://code.google.com/p/medianav/">http://code.google.com/p/medianav/</a></p>
32
2009-10-21T11:05:57Z
4,805,806
<p>I did try to walk through the solution outlined by T Stone, and while I think it's a superb starter and explains how things should be done, I ran into a few problems.</p> <p>I think mostly you <strong>don't</strong> need to create the table entry for the parent class anymore, i.e. you don't need</p> <pre><code>new_movie.videofile_ptr = orm['media.VideoFile'].objects.create()
</code></pre> <p>anymore. Django will now do this automatically for you (if you have non-null fields then the above did not work for me and gave me a database error).</p> <p>I think it is probably due to changes in django and south; here is a version that worked for me on ubuntu 10.10 with django 1.2.3 and south 0.7.1. The models are a little different, but you will get the gist:</p> <h3>Initial setup</h3> <p><strong>post1/models.py:</strong></p> <pre><code>class Author(models.Model):
    first = models.CharField(max_length=30)
    last = models.CharField(max_length=30)

class Tag(models.Model):
    name = models.CharField(max_length=30, primary_key=True)

class Post(models.Model):
    created_on = models.DateTimeField()
    author = models.ForeignKey(Author)
    tags = models.ManyToManyField(Tag)
    title = models.CharField(max_length=128, blank=True)
    content = models.TextField(blank=True)
</code></pre> <p><strong>post2/models.py:</strong></p> <pre><code>class Author(models.Model):
    first = models.CharField(max_length=30)
    middle = models.CharField(max_length=30)
    last = models.CharField(max_length=30)

class Tag(models.Model):
    name = models.CharField(max_length=30)

class Category(models.Model):
    name = models.CharField(max_length=30)

class Post(models.Model):
    created_on = models.DateTimeField()
    author = models.ForeignKey(Author)
    tags = models.ManyToManyField(Tag)
    title = models.CharField(max_length=128, blank=True)
    content = models.TextField(blank=True)
    extra_content = models.TextField(blank=True)
    category = models.ForeignKey(Category)
</code></pre> <p>There is obviously a lot of overlap, so I wanted to factor the commonalities out into a <em>general post</em> model and only keep the differences in the other model classes.</p> <p>new setup:</p> <p><strong>genpost/models.py:</strong></p> <pre><code>class Author(models.Model):
    first = models.CharField(max_length=30)
    middle = models.CharField(max_length=30, blank=True)
    last = models.CharField(max_length=30)

class Tag(models.Model):
    name = models.CharField(max_length=30, primary_key=True)

class Post(models.Model):
    created_on = models.DateTimeField()
    author = models.ForeignKey(Author)
    tags = models.ManyToManyField(Tag)
    title = models.CharField(max_length=128, blank=True)
    content = models.TextField(blank=True)
</code></pre> <p><strong>post1/models.py:</strong></p> <pre><code>import genpost.models as gp

class SimplePost(gp.Post):
    class Meta:
        proxy = True
</code></pre> <p><strong>post2/models.py:</strong></p> <pre><code>import genpost.models as gp

class Category(models.Model):
    name = models.CharField(max_length=30)

class ExtPost(gp.Post):
    extra_content = models.TextField(blank=True)
    category = models.ForeignKey(Category)
</code></pre> <p>If you want to follow along, you will first need to get these models into south:</p> <pre><code>$./manage.py schemamigration post1 --initial
$./manage.py schemamigration post2 --initial
$./manage.py migrate
</code></pre> <h3>Migrating the data</h3> <p>How to go about it? First write the new app genpost and do the initial migrations with south:</p> <pre><code>$./manage.py schemamigration genpost --initial
</code></pre> <p>(I am using <code>$</code> to represent the shell's prompt, so don't type that.)</p> <p>Next create the new classes <em>SimplePost</em> and <em>ExtPost</em> in post1/models.py and post2/models.py respectively (don't delete the rest of the classes yet). Then create schemamigrations for these two as well:</p> <pre><code>$./manage.py schemamigration post1 --auto
$./manage.py schemamigration post2 --auto
</code></pre> <p>Now we can apply all these migrations:</p> <pre><code>$./manage.py migrate
</code></pre> <p>Let's get to the heart of the matter, migrating the data from post1 and post2 to genpost:</p> <pre><code>$./manage.py datamigration genpost post1_and_post2_to_genpost --freeze post1 --freeze post2
</code></pre> <p>Then edit genpost/migrations/0002_post1_and_post2_to_genpost.py:</p> <pre><code>class Migration(DataMigration):

    def forwards(self, orm):
        #
        # Migrate common data into the new genpost models
        #
        for auth1 in orm['post1.author'].objects.all():
            new_auth = orm.Author()
            new_auth.first = auth1.first
            new_auth.last = auth1.last
            new_auth.save()

        for auth2 in orm['post2.author'].objects.all():
            new_auth = orm.Author()
            new_auth.first = auth2.first
            new_auth.middle = auth2.middle
            new_auth.last = auth2.last
            new_auth.save()

        for tag in orm['post1.tag'].objects.all():
            new_tag = orm.Tag()
            new_tag.name = tag.name
            new_tag.save()

        for tag in orm['post2.tag'].objects.all():
            new_tag = orm.Tag()
            new_tag.name = tag.name
            new_tag.save()

        for post1 in orm['post1.post'].objects.all():
            new_genpost = orm.Post()
            # Content
            new_genpost.created_on = post1.created_on
            new_genpost.title = post1.title
            new_genpost.content = post1.content
            # Foreign keys
            new_genpost.author = orm['genpost.author'].objects.filter(\
                    first=post1.author.first, last=post1.author.last)[0]
            new_genpost.save()    # Needed for M2M updates
            for tag in post1.tags.all():
                new_genpost.tags.add(\
                        orm['genpost.tag'].objects.get(name=tag.name))
            new_genpost.save()
            post1.delete()

        for post2 in orm['post2.post'].objects.all():
            new_extpost = p2.ExtPost()
            new_extpost.created_on = post2.created_on
            new_extpost.title = post2.title
            new_extpost.content = post2.content
            # Foreign keys
            new_extpost.author_id = orm['genpost.author'].objects.filter(\
                    first=post2.author.first,\
                    middle=post2.author.middle,\
                    last=post2.author.last)[0].id
            new_extpost.extra_content = post2.extra_content
            new_extpost.category_id = post2.category_id
            # M2M fields
            new_extpost.save()
            for tag in post2.tags.all():
                new_extpost.tags.add(tag.name)    # name is primary key
            new_extpost.save()
            post2.delete()

        # Get rid of author and tags in post1 and post2
        orm['post1.author'].objects.all().delete()
        orm['post1.tag'].objects.all().delete()
        orm['post2.author'].objects.all().delete()
        orm['post2.tag'].objects.all().delete()

    def backwards(self, orm):
        raise RuntimeError("No backwards.")
</code></pre> <p>Now apply these migrations:</p> <pre><code>$./manage.py migrate
</code></pre> <p>Next you can delete the now redundant parts from post1/models.py and post2/models.py, and then create schemamigrations to update the tables to the new state:</p> <pre><code>$./manage.py schemamigration post1 --auto
$./manage.py schemamigration post2 --auto
$./manage.py migrate
</code></pre> <p>And that should be it! Hopefully it all works and you have refactored your models.</p>
9
2011-01-26T14:42:52Z
[ "python", "django", "migration", "django-south" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,600,700
<p>I don't see why you'd ever want to do this, but if you really really do need to, you could try pickling the dictionary:</p> <pre><code>mydict = {"a":1, "b":{"c":10}}
import pickle
key = pickle.dumps(mydict)
d[key] = value
</code></pre>
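A sketch of how such a pickled key behaves (the variable names here are illustrative): the result of `pickle.dumps` is an ordinary byte string, so it hashes like any other key, and the original nested dictionary can be rebuilt from the key itself:

```python
import pickle

mydict = {"a": 1, "b": {"c": 10}}

d = {}
key = pickle.dumps(mydict)  # a byte string, which is hashable
d[key] = "some value"

# The original (nested) dictionary can be recovered from the key alone:
recovered = pickle.loads(key)
assert recovered == mydict
```

One caveat: two dictionaries that compare equal are not guaranteed to pickle to identical byte strings, so lookups should reuse the stored key rather than re-pickle a freshly built dictionary.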
1
2009-10-21T13:03:12Z
[ "python", "data-structures", "dictionary" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,600,717
<p>I don't know whether I understand your question correctly, but I'll give it a try:</p> <pre><code>d[repr(a)] = value
</code></pre> <p>You can iterate over the dictionary like this:</p> <pre><code>for el1 in d:
    for el2 in eval(el1):
        print el2, eval(el1)[el2]
</code></pre>
-1
2009-10-21T13:06:43Z
[ "python", "data-structures", "dictionary" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,600,806
<p>If you have a really immutable dictionary (although it isn't clear to me why you don't just use a list of pairs: e.g. <code>[('content-type', 'text/plain'), ('host', 'example.com')]</code>), then you may convert your <code>dict</code> into:</p> <ol> <li><p>A tuple of pairs. You've already done that in your question. A <code>tuple</code> is required instead of <code>list</code> because the results rely on the ordering and immutability of the elements.</p> <pre><code>&gt;&gt;&gt; tuple(sorted(a.items()))
</code></pre></li> <li><p>A frozen set. It is a more suitable approach from the mathematical point of view, as it requires <strong>only the equality relation</strong> on the elements of your immutable <code>dict</code>, while the first approach requires the ordering relation besides equality.</p> <pre><code>&gt;&gt;&gt; frozenset(a.items())
</code></pre></li> </ol>
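A quick sketch of both forms in use; note that the frozenset key is insensitive to the order in which the pairs were inserted:

```python
a = {"content-type": "text/plain", "host": "example.com"}
b = {"host": "example.com", "content-type": "text/plain"}  # same pairs

# Approach 2: frozenset of pairs; only equality of the elements is needed.
lookup = {frozenset(a.items()): "handler"}
assert lookup[frozenset(b.items())] == "handler"

# Approach 1: sorted tuple of pairs; also order-insensitive,
# but it additionally requires the keys to be orderable.
assert tuple(sorted(a.items())) == tuple(sorted(b.items()))
```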
30
2009-10-21T13:21:19Z
[ "python", "data-structures", "dictionary" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,600,851
<p>One way to do this would be to subclass the dict and provide a hash method, i.e.:</p> <pre><code>class HashableDict(dict):
    def __hash__(self):
        return hash(tuple(sorted(self.iteritems())))

&gt;&gt;&gt; d = HashableDict(a=1, b=2)
&gt;&gt;&gt; d2 = { d : "foo"}
&gt;&gt;&gt; d2[HashableDict(a=1, b=2)]
"foo"
</code></pre> <p>However, bear in mind the reasons why dicts (or any mutable types) don't do this: mutating the object after it has been added to a hashtable will change the hash, which means the dict will now have it in the wrong bucket, and so incorrect results will be returned.</p> <p>If you go this route, either be <strong>very</strong> sure that dicts will never change after they have been put in the other dictionary, or actively prevent them from doing so (e.g. check that the hash never changes after the first call to <code>__hash__</code>, and throw an exception if it does).</p>
3
2009-10-21T13:28:25Z
[ "python", "data-structures", "dictionary" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,600,859
<p>If I needed to use dictionaries as keys, I would flatten the dictionary into a tuple of tuples.</p> <p>You might find this SO question useful: <a href="http://stackoverflow.com/questions/635483/what-is-the-best-way-to-implement-nested-dictionaries-in-python">http://stackoverflow.com/questions/635483/what-is-the-best-way-to-implement-nested-dictionaries-in-python</a></p> <p>And here is an example of a flatten module that will flatten dictionaries: <a href="http://yawpycrypto.sourceforge.net/html/public/Flatten.Flatten-module.html">http://yawpycrypto.sourceforge.net/html/public/Flatten.Flatten-module.html</a></p> <p>I don't fully understand your use case and I suspect that you are trying to prematurely optimize something that doesn't need optimization. </p>
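As a sketch of the "flatten the dictionary into a tuple of tuples" idea (a hand-rolled helper, not the flatten module linked above):

```python
def freeze(obj):
    # Recursively turn dicts and lists into hashable tuples.
    # Sketch only: sets, and dicts with unsortable mixed-type keys,
    # would need additional handling.
    if isinstance(obj, dict):
        return tuple(sorted((k, freeze(v)) for k, v in obj.items()))
    if isinstance(obj, list):
        return tuple(freeze(v) for v in obj)
    return obj

nested = {"a": 1, "b": {"c": [1, 2]}}
d = {freeze(nested): "value"}

# An equal dict, built in a different order, produces the same key:
assert d[freeze({"b": {"c": [1, 2]}, "a": 1})] == "value"
```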
8
2009-10-21T13:29:38Z
[ "python", "data-structures", "dictionary" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,600,916
<p>Hmm, isn't your use case just memoizing function calls? Using a decorator, you will have easy support for arbitrary functions. And yes, they often pickle the arguments, and using circular reasoning, this works for non-standard types as long as they can be pickled.</p> <p>See e.g. <a href="http://www.finalcog.com/python-memoise-memoize-function-type" rel="nofollow">this memoization sample</a></p>
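The linked sample is no longer available, but the idea can be sketched roughly like this: pickling the arguments yields a hashable cache key, so even dict arguments are supported (the names here are illustrative):

```python
import functools
import pickle

def memoize(func):
    # Sketch of the memoization idea: pickle the arguments to obtain
    # a hashable cache key, so non-hashable arguments (dicts) work too.
    cache = {}
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = pickle.dumps((args, sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

calls = []

@memoize
def lookup(headers):
    calls.append(1)
    return headers["host"]

assert lookup({"host": "example.com"}) == "example.com"
assert lookup({"host": "example.com"}) == "example.com"
assert len(calls) == 1  # the second call was served from the cache
```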
3
2009-10-21T13:42:27Z
[ "python", "data-structures", "dictionary" ]
Using a Python Dictionary as a Key (Non-nested)
1,600,591
<p>Python doesn't allow dictionaries to be used as keys in other dictionaries. Is there a workaround for using non-nested dictionaries as keys? </p> <p>The general problem with more complicated non-hashable objects and my specific use case has been <a href="http://stackoverflow.com/questions/1611797/using-non-hashable-python-objects-as-keys-in-dictionaries/1612220#1612220">moved here</a>. My original description of my use case was incorrect.</p>
10
2009-10-21T12:47:53Z
1,601,087
<p>To turn <code>someDictionary</code> into a key, do this:</p> <pre><code>key = tuple(sorted(someDictionary.items()))
</code></pre> <p>You can easily reverse this with <code>dict(key)</code>.</p>
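A round trip, for illustration:

```python
some_dictionary = {"b": 2, "a": 1}

key = tuple(sorted(some_dictionary.items()))  # hashable, usable as a key
assert key == (("a", 1), ("b", 2))
assert dict(key) == some_dictionary           # and easily reversed
```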
2
2009-10-21T14:13:16Z
[ "python", "data-structures", "dictionary" ]
setTrace() in Python
1,600,726
<p>Is there a way to use the setTrace() function in a script that has no method definitions? i.e.</p> <pre><code>for i in range(1, 100):
    print i

def traceit(frame, event, arg):
    if event == "line":
        lineno = frame.f_lineno
        print "line", lineno
    return traceit

sys.settrace(traceit)
</code></pre> <p>so ideally I would want the trace function to be called upon every iteration / line of code executed in the loop. I've done this with scripts that have had method definitions before, but am not sure how to get it to work in this instance.</p>
5
2009-10-21T13:07:44Z
1,600,902
<p>settrace() is really only intended for implementing debuggers. If you are using it to debug this program, you may be better off using pdb.</p> <p>According to the documentation, settrace() will not do what you want.</p> <p>If you really want to do this line-by-line tracing, have a look at the compiler package, which allows you to access and modify the AST (abstract syntax tree) produced by the Python compiler. You should be able to use that to insert calls to a function which tracks the execution.</p>
2
2009-10-21T13:39:15Z
[ "python", "trace" ]
setTrace() in Python
1,600,726
<p>Is there a way to use the setTrace() function in a script that has no method definitions? i.e.</p> <pre><code>for i in range(1, 100):
    print i

def traceit(frame, event, arg):
    if event == "line":
        lineno = frame.f_lineno
        print "line", lineno
    return traceit

sys.settrace(traceit)
</code></pre> <p>so ideally I would want the trace function to be called upon every iteration / line of code executed in the loop. I've done this with scripts that have had method definitions before, but am not sure how to get it to work in this instance.</p>
5
2009-10-21T13:07:44Z
1,600,992
<p>I only use one simple syntax line to rule them all:</p> <pre><code>import pdb; pdb.set_trace() </code></pre> <p>Put it wherever you want to break execution and start debugging. Use pdb commands (n for next, l for list, etc).</p> <p>Cheers,</p> <p>H.</p>
2
2009-10-21T13:57:24Z
[ "python", "trace" ]
setTrace() in Python (redux)
1,601,217
<p>Apologies for reposting but I had to edit this question when I got to work and realized I needed to have an account to do so. So here it goes again (with a little more context).</p> <p>I'm trying to time how long a script takes to execute, and I am thinking of doing that by checking the elapsed time after every line of code is executed. I've done this before when the script has contained method definitions, but am not sure how it would work in this instance.</p> <p>So my question is: Is there a way to use the setTrace() function in a script that has no method definitions? i.e.</p> <pre><code>for i in range(1, 100):
    print i

def traceit(frame, event, arg):
    if event == "line":
        lineno = frame.f_lineno
        print "line", lineno
    return traceit

sys.settrace(traceit)
</code></pre>
1
2009-10-21T14:31:24Z
1,601,276
<p>No, as <a href="http://docs.python.org/library/sys.html?highlight=settrace#sys.settrace" rel="nofollow">the docs</a> say, "The trace function is invoked (with event set to 'call') whenever a new local scope is entered" -- if you never enter a local scope (and only execute in global scope), the trace function will never be called. Note that settrace is too invasive anyway for the purpose of timing "how long a script takes to execute" as it will alter what it's measuring too much; if what you say is actually what you want, just take the time at start of execution and register with <code>atexit</code> a function that gets the time again and prints the difference. If what you want is different, i.e., <em>profiling</em>, see <a href="http://docs.python.org/library/profile.html?highlight=cprofile#module-pstats" rel="nofollow">cProfile</a> .</p> <p>Further note that the code example you give couldn't possibly do anything useful (even though I've edited it to fix an indent error): first it loops, then it defs a function, finally it calls settrace... then immediately ends because there's no more code after that! If you want anything to happen <strong>before</strong> that loop start, and you want to have everything at module top level (bad idea, but, whatever), you have to put the "anything" lexically <em>before</em> the loop, not after...;-)</p>
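The workaround implied by the quoted rule can be sketched as follows: wrap the module-level code in a function so that a new local scope is entered, at which point the <code>'line'</code> events start firing (Python 3 syntax; the names are illustrative):

```python
import sys

lines = []

def traceit(frame, event, arg):
    if event == "line":
        lines.append(frame.f_lineno)  # record each traced line number
    return traceit

def main():              # a def gives settrace() a local scope to enter
    total = 0
    for i in range(1, 4):
        total += i
    return total

sys.settrace(traceit)
result = main()
sys.settrace(None)       # switch tracing off again

assert result == 6
assert len(lines) > 0    # 'line' events were reported inside main()
```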
2
2009-10-21T14:39:16Z
[ "python", "trace" ]
How to make a completely unshared copy of a complicated list? (Deep copy is not enough)
1,601,269
<p>Have a look at this Python code:</p> <pre><code>a = [1, 2, 3]
b = [4, 5, 6]
c = [[a, b], [b, a]]
# [[[1, 2, 3], [4, 5, 6]], [[4, 5, 6], [1, 2, 3]]]
c[0][0].append(99)
# [[[1, 2, 3, 99], [4, 5, 6]], [[4, 5, 6], [1, 2, 3, 99]]]
</code></pre> <p>Notice how modifying one element of <code>c</code> modifies it everywhere. That is, if <code>99</code> is appended to <code>c[0][0]</code>, it is also appended to <code>c[1][1]</code>. I am guessing this is because Python is cleverly <em>referring</em> to the <em>same object</em> for <code>c[0][0]</code> and <code>c[1][1]</code>. (That is, their <em>id()</em> is the same.)</p> <p><strong>Question:</strong> Is there something that can be done to <code>c</code> so that its list elements can be safely locally modified? Above is just an example, my real problem has a list much more complicated, but having a similar problem.</p> <p>(Sorry for the poorly formed question above. Python gurus please feel free to modify the question or tags to better express this query.)</p>
5
2009-10-21T14:38:27Z
1,601,298
<p>Use <code>[:]</code>:</p> <pre><code>&gt;&gt;&gt; a = [1, 2]
&gt;&gt;&gt; b = a[:]
&gt;&gt;&gt; b.append(9)
&gt;&gt;&gt; a
[1, 2]
</code></pre> <p>Alternatively, use <a href="http://docs.python.org/library/copy.html" rel="nofollow"><code>copy</code> or <code>deepcopy</code></a>:</p> <pre><code>&gt;&gt;&gt; import copy
&gt;&gt;&gt; a = [1, 2]
&gt;&gt;&gt; b = copy.copy(a)
&gt;&gt;&gt; b.append(9)
&gt;&gt;&gt; a
[1, 2]
</code></pre> <p><code>copy</code> works on objects other than lists. For lists, it has the same effect as <code>a[:]</code>. <code>deepcopy</code> attempts to recursively copy nested elements, and is thus a more "thorough" operation than <code>copy</code>.</p>
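Worth noting, given the question's title: a single <code>deepcopy</code> call preserves the original's internal sharing, because it keeps a memo of objects it has already copied. A sketch of that behaviour, plus one way to obtain a fully unshared copy for the nested example in the question:

```python
import copy

a = [1, 2, 3]
b = [4, 5, 6]
c = [[a, b], [b, a]]

# One deepcopy call preserves the sharing: the copy's [0][0] and [1][1]
# are still one and the same object (both are the copy of a).
d = copy.deepcopy(c)
assert d[0][0] is d[1][1]

# Deep-copying each row in a separate call gives each call its own memo,
# so nothing is shared across rows any more.
e = [copy.deepcopy(row) for row in c]
assert e[0][0] is not e[1][1]
e[0][0].append(99)
assert e[1][1] == [1, 2, 3]  # the other copy of a is untouched
```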
4
2009-10-21T14:41:57Z
[ "python", "list", "copy" ]